AUTOGRADING AND DETECTING PLAGIARISM IN STUDENT PROGRAMMING ASSIGNMENTS

ABSTRACT

In computer science, practical assignments ensure that students put the theory they learn in class into practice by writing computer programs to solve problems. Practical assignments also play a critical role in assessing students' understanding of course materials. For course facilitators, grading programming assignments is a time-consuming task, since they must run and evaluate each student's submission individually. Moreover, some students copy code from their peers and disguise it by changing its lexicon and structure, which makes it nearly impossible for course facilitators to detect plagiarism. A possible solution to these problems is a system that allows course facilitators to write tests that are applied automatically to all students' submissions and that allocates grades based on the test results. To curb plagiarism, the system should include a component that calculates a peer plagiarism index and flags submissions that may be plagiarised. This applied project is an attempt to develop, test, and evaluate such a system. While designing the system, it became apparent that running students' submissions and instructors' tests directly on the server would pose a security threat to the server. After evaluating possible workarounds, we decided to run the submissions and tests in a Docker sandbox within a virtual machine. The plagiarism index is calculated by quantifying the lexical and structural similarities between submissions. To integrate the two components, we developed an API. To test and demonstrate the workings of the system, we developed a frontend client that consumes the critical endpoints of the API. This project is a proof of concept that such a solution can be developed and successfully deployed.
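
The abstract leaves the similarity computation unspecified. As a minimal illustrative sketch, and not the project's actual implementation, the snippet below shows one way a peer plagiarism index could combine lexical and structural similarity for Python submissions. The function names, the equal weighting, and the use of Python's standard difflib and ast modules are assumptions made for illustration; structural similarity here compares sequences of AST node types, so renaming identifiers alone does not lower the score.

    import ast
    import difflib

    def lexical_similarity(src_a: str, src_b: str) -> float:
        # Ratio of matching text between the two raw source strings.
        return difflib.SequenceMatcher(None, src_a, src_b).ratio()

    def structural_similarity(src_a: str, src_b: str) -> float:
        # Compare sequences of AST node types, ignoring identifier names
        # and literal values, so simple renaming does not hide copying.
        nodes_a = [type(n).__name__ for n in ast.walk(ast.parse(src_a))]
        nodes_b = [type(n).__name__ for n in ast.walk(ast.parse(src_b))]
        return difflib.SequenceMatcher(None, nodes_a, nodes_b).ratio()

    def plagiarism_index(src_a: str, src_b: str, w_lex: float = 0.5) -> float:
        # Weighted combination of the two similarities; result lies in [0, 1].
        # The weight w_lex is an illustrative assumption, not a fixed choice.
        return (w_lex * lexical_similarity(src_a, src_b)
                + (1 - w_lex) * structural_similarity(src_a, src_b))

    if __name__ == "__main__":
        original = "def add(a, b):\n    return a + b\n"
        renamed = "def total(x, y):\n    return x + y\n"  # identifiers renamed only
        print(f"index: {plagiarism_index(original, renamed):.2f}")

In a full system along the lines the abstract describes, each submission would be compared pairwise against all others, and pairs scoring above some threshold would be flagged for the course facilitator to review.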