An Experimental Evaluation of Tools for Grading Concurrent Programming Exercises

Conference paper. Year: 2023

Abstract

Automatic grading based on unit tests is a key feature of massive open online courses (MOOCs) on programming, as it provides instant feedback to students and enables courses to scale up. This technique works well for sequential programs, whose outputs can be checked against a sample of inputs, but it is not adequate for detecting races and deadlocks, which precludes its use for concurrent programming, a key subject in parallel and distributed computing courses. In this paper we provide a hands-on evaluation of verification and testing tools for concurrent programs, collecting a precise set of requirements and describing to what extent each tool can or cannot be used for this purpose. Our conclusion is that automatic grading of concurrent programming exercises remains an open challenge.
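As a purely illustrative sketch (not taken from the paper; the Counter class, iteration counts, and file name are invented for this example): the Java program below contains a classic lost-update data race, yet an output-based unit test asserting the final count can pass whenever the scheduler happens to serialize the updates, which is exactly why input/output sampling is insufficient for grading concurrent code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical student submission: a shared counter with a data race.
class Counter {
    private int value = 0;
    void increment() { value++; }  // read-modify-write, not atomic
    int get() { return value; }
}

public class RacyGradingDemo {
    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        List<Thread> threads = new ArrayList<>();
        for (int t = 0; t < 2; t++) {
            Thread thread = new Thread(() -> {
                for (int i = 0; i < 1_000; i++) counter.increment();
            });
            threads.add(thread);
            thread.start();
        }
        for (Thread thread : threads) thread.join();
        // A grader that only checks outputs would assert get() == 2000.
        // With small workloads the threads often run back-to-back, so the
        // assertion usually passes even though the program has a data race:
        // a green test says nothing about the absence of the race.
        System.out.println("count = " + counter.get() + " (expected 2000)");
    }
}
```

Whether the lost updates manifest depends on scheduling, so repeating such a test is at best a probabilistic check; exposing the race reliably requires tools that systematically explore interleavings or analyze the code, the kind of verification and testing tools the paper evaluates.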
File under embargo until Thursday, 1 January 2026.

Dates and versions

hal-04731926, version 1 (11-10-2024)

Licence

Identifiers

Cite

Manuel Barros, Maria Ramos, Alexandre Gomes, Alcino Cunha, José Pereira, et al. An Experimental Evaluation of Tools for Grading Concurrent Programming Exercises. 43rd International Conference on Formal Techniques for Distributed Objects, Components, and Systems (FORTE), Jun 2023, Lisbon, Portugal. pp.3-20, ⟨10.1007/978-3-031-35355-0_1⟩. ⟨hal-04731926⟩