An Experimental Evaluation of Tools for Grading Concurrent Programming Exercises
Abstract
Automatic grading based on unit tests is a key feature of massive open online courses (MOOCs) on programming, as it provides instant feedback to students and enables courses to scale up. This technique works well for sequential programs, where outputs can be checked against a sample of inputs, but it is not adequate for detecting races and deadlocks, which precludes its use for concurrent programming, a key subject in parallel and distributed computing courses. In this paper we provide a hands-on evaluation of verification and testing tools for concurrent programs, collecting a precise set of requirements and describing to what extent these tools can or cannot be used for this purpose. Our conclusion is that automatic grading of concurrent programming exercises remains an open challenge.