Results of the Plagiarism Detection Support Tool Test 2020
There is a general belief that software must be able to do easily what humans find difficult. Since finding the sources of plagiarism in a text is not an easy task, there is a widespread expectation that software can simply determine whether a text is plagiarized or not. Software cannot determine plagiarism, but it can serve as a support tool by identifying text similarity that may constitute plagiarism. But how well do the various systems work? This report presents the results of a collaborative test of 15 web-based text-matching systems that can be used when plagiarism is suspected. The test was carried out by researchers from seven countries using test material in eight different languages, evaluating the effectiveness of the systems on single-source and multi-source documents. A usability examination was also performed. The test was conducted by the TeSToP working group of the European Network for Academic Integrity.
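As a rough illustration of what "text similarity" means in this context: many matching systems compare overlapping word n-grams between a suspect text and candidate sources. The following Python sketch computes a simple Jaccard similarity over word trigrams. It is a minimal illustrative assumption, not the method used by any of the tested systems, and the example texts are invented.

```python
# A minimal sketch of one way text-matching tools flag overlap:
# Jaccard similarity over word trigrams. This is illustrative only,
# not the algorithm of any system evaluated in the test.

def ngrams(text: str, n: int = 3) -> set:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Shared n-grams relative to all n-grams in both texts (0.0 to 1.0)."""
    grams_a, grams_b = ngrams(a, n), ngrams(b, n)
    if not grams_a or not grams_b:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a | grams_b)

suspect = "The quick brown fox jumps over the lazy dog near the river."
source = "A quick brown fox jumps over the lazy dog near a river bank."
print(f"similarity: {jaccard_similarity(suspect, source):.2f}")
# A high score is a hint worth examining, not a verdict of plagiarism.
```

Even such a simple measure shows why these tools can only support, not replace, human judgment: paraphrased passages score low, while legitimately shared phrases can score high.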
The sobering results show that although some systems can indeed help identify plagiarized content, they clearly do not find all plagiarism and sometimes flag non-plagiarized material as problematic.
The complete report is available in this publication (and as a preprint). The summary results can be found on the Summary page, and the list of usability criteria is also available.
The following list links to the individual tests of the systems.