This repository contains the problem specifications, the automated tester and the necessary data files for the graded lab problems for the course CCPS 109 Computer Science I, as taught by Ilkka Kokkarinen for the Chang School of Continuing Education, Ryerson University, Toronto, Canada.
Write all your functions one by one into the same file
labs109.py so that the acceptance tester script
tester109.py can find them. The tester executes the automated acceptance tests only for those functions that are actually implemented in
labs109.py. The acceptance tester gives each implemented function a large number of pseudorandomly generated test cases, and computes a checksum from the answers that your function returns. This checksum is compared to the checksum computed from the instructor's private model solution. The function passes the acceptance test if and only if these two checksums are identical. (Each function should still produce its results in a reasonable time; the tester measures the running time but does not enforce any limit.)
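The checksum idea can be sketched roughly as follows. The helper name `checksum_of_results`, the choice of SHA-256, and the stand-in functions are all illustrative assumptions; the actual tester109.py may compute its checksum differently, but the principle is the same: two functions that return the same answers for the same test cases produce the same digest.

```python
import hashlib

def checksum_of_results(results):
    """Fold the string form of every returned answer into one SHA-256 digest."""
    h = hashlib.sha256()
    for r in results:
        h.update((repr(r) + '\n').encode('utf-8'))
    return h.hexdigest()

# Hypothetical stand-ins for the instructor's model solution and a
# student's submission; they differ in code but not in behaviour.
def model_double(n):
    return 2 * n

def student_double(n):
    return n + n

cases = range(1000)  # stand-in for the pseudorandomly generated test cases
expected = checksum_of_results(model_double(n) for n in cases)
actual = checksum_of_results(student_double(n) for n in cases)
assert expected == actual  # identical checksums: the function passes
```

Note that the checksum reveals nothing about the model solution itself, which is why it can safely stand in for the instructor's private answers.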
This setup allows students to work at home at their convenience, and to know at all times whether their functions are correct just by running the acceptance test script. This automation eliminates essentially all the additional labour otherwise needed from the human instructor or the TAs to run the basic bureaucracy and grading of the course, so that they can fully concentrate on helping students debug the solutions that fail the acceptance tests. Once students have given each problem a good old "college try" (including sleeping on it once to look at the problem with fresh eyes the following day) but still haven't been able to untangle that particular knot, they can contact the course personnel for help in getting loose again.
In addition to the precomputed checksums that serve as definitive acceptance tests for each function, the file
expected_answers contains the expected results for the first 300 test cases of each problem. The acceptance tester compares these in lockstep to the results produced by the student solution. At the first discrepancy, the test for that function is immediately terminated, and the test case arguments, the expected correct result and the returned incorrect result are displayed. Having a reasonably short but explicit failing test case available is an invaluable aid in debugging a function that does not pass the automated test.
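The lockstep comparison can be sketched like this. The function name `first_discrepancy` and the buggy example are hypothetical; tester109.py formats its report differently, but the stop-at-first-mismatch behaviour is the point.

```python
def first_discrepancy(student_fn, test_cases, expected_results):
    """Compare student answers to recorded expected answers in lockstep.

    Returns None if every answer matches; otherwise returns the
    (arguments, expected, returned) triple for the first failing case,
    to serve as a concrete debugging aid.
    """
    for args, expected in zip(test_cases, expected_results):
        returned = student_fn(*args)
        if returned != expected:
            return (args, expected, returned)  # stop at first mismatch
    return None

# A hypothetical buggy submission that mishandles zero:
def buggy_abs(n):
    return n if n > 0 else -n if n < 0 else 1

cases = [(-3,), (0,), (5,)]
expected = [3, 0, 5]
print(first_discrepancy(buggy_abs, cases, expected))
# → ((0,), 0, 1)
```

The short failing case pinpoints exactly which input the student function gets wrong, which is usually far more useful than a bare checksum mismatch.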
Everyone teaching or learning Python is welcome to use, adapt and distribute these problems and the associated acceptance tester for their own purposes as they see fit. The author welcomes feedback by email at
[email protected] from computer science instructors who use these problems in their courses.
The lab specification document and the automated tester software
tester109.py are released under the GNU General Public License v3, with no warranties implied by the author.
The word list words_sorted.txt is adapted from dwyl/english-words.