Towards Automated Testing for Simple Programming Exercises

Automated feedback and grading platforms can require substantial effort when encoding new programming exercises for first-year students. Such exercises are usually simple but require defining several test cases to ensure their functional correctness. This paper describes our initial effort to leverage automated test case generation for simple programming exercises. We rely on grey-box fuzzing and random combinations of method calls to test the students' solutions, comparing their results to those produced by a reference implementation. We implemented our approach in a prototype, called SimPyTest, openly available on GitHub. We discuss its usage and possible future extensions.
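The abstract only sketches the approach; the minimal Python sketch below illustrates the underlying idea of differential testing, checking a student solution against a reference implementation on randomly generated inputs. All function names and parameters here are hypothetical, and the sketch deliberately omits the grey-box (coverage-guided) fuzzing and method-call sequencing that SimPyTest itself relies on.

```python
import random

def reference_add(a, b):
    # Hypothetical reference implementation supplied by the instructor.
    return a + b

def student_add(a, b):
    # Hypothetical student solution under test.
    return a + b

def differential_test(reference, candidate, trials=100, seed=0):
    """Run the candidate against the reference on random inputs
    and report the first observed mismatch, if any."""
    rng = random.Random(seed)
    for _ in range(trials):
        a = rng.randint(-1000, 1000)
        b = rng.randint(-1000, 1000)
        expected = reference(a, b)
        actual = candidate(a, b)
        if expected != actual:
            return f"Mismatch for ({a}, {b}): expected {expected}, got {actual}"
    return "All trials passed"

print(differential_test(reference_add, student_add))
```

Using the reference implementation as the oracle means the instructor never writes explicit expected outputs; only the input generator and the reference solution are needed per exercise.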

Digital Object Identifier (DOI)
10.1145/3548660.3561334
Author(s) not affiliated with CYBEREXCELLENCE
Pierre Ortegat
Benoît Vanderose