Further Information

THE HAUGHTY JACKDAW AND THE PEACOCK

Once there lived a jackdaw, full of vain pride, who stole feathers that had fallen from a peacock and adorned herself with them. Despising her own jackdaw kind, she joined the ranks of the beautiful peacocks. They tore the feathers from the impudent bird and drove her off with their beaks. The jackdaw, badly battered, now sadly sought to return to her own people. But they too pushed her away with harsh scorn, and one of those she had once despised said to her: "Had our way of life suited you before, had you accepted what nature gave you, you would neither have suffered that disgrace nor would you now, to your misfortune, be cast out."

This version of Aesop's fable comes from Wilfried Stroh's collection of translations of Jan Novák's "Aesopia", which are based on the stories of Phaedrus.

Software Test 2013

Results of the Plagiarism Detection System Test 2013

Can software automatically detect plagiarism? Many companies sell software that suggests just that. Prof. Dr. Debora Weber-Wulff, professor of media and computing at the HTW Berlin, has previously conducted six tests of plagiarism detection systems, in 2004, 2007, 2008, 2010, 2011, and 2012. For 2013, instead of attempting to test all possible systems, a selection was made that included software previously found to be at least partially useful, as well as some newcomers. In all, 28 systems were investigated, but only 15 systems were able to complete the test series, which included many new test cases designed to address specific aspects of the use of plagiarism detection systems at educational institutions. In particular, large files that simulated bachelor's and master's theses were constructed, one test case was designed to determine whether the software can access and use Google Books, and several test cases employed cheating tricks that students sometimes use to thwart such software. In addition, Hebrew was used as the non-Latin test case language in 2013.

The results are comparable with previous years: Even if some of the systems are easier to use now, they still do not produce the documentation that would be necessary in Germany for presentation to an examination board. Most troublesome is the continued presence of false negatives – the software misses plagiarism that is present – and above all false positives. When systems report significant plagiarism for common phrases, or even for a paper that is completely original, using these results without close examination may cause grave damage. In particular, the numbers reported by the systems are not consistent and should be treated only as possible indicators, not as absolute judgment values.

So-called plagiarism detection software does not detect plagiarism. In general, it can only demonstrate text parallels. The decision as to whether a text is plagiarism or not must solely rest with the educator using the software: It is only a tool, not an absolute test.
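To illustrate what "demonstrating text parallels" means in practice, here is a minimal sketch of one common approach: comparing word n-gram "shingles" between two texts. This is not the algorithm of any of the tested systems, just an assumed, simplified illustration of why such tools can only flag overlapping wording, never judge whether that overlap constitutes plagiarism.

```python
def shingles(text, n=5):
    """Return the set of word n-grams (shingles) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b, n=5):
    """Fraction of shared shingles between two texts, from 0.0 to 1.0.

    A high score only shows that the wording overlaps; a properly cited
    quotation and a plagiarized passage look identical to this measure.
    """
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

Note that a common five-word phrase shared by chance would raise this score just as a copied sentence would, which is exactly why the reported numbers can only serve as indicators for a human examiner.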

A university can and should make software available for its educators to use, but it should not use the software as a general screening tool for all texts. If at all, general screening could only reasonably be used for first-year student papers.

The complete report is available online as HTML and in a printable form as a PDF. The individual scores can be found in a separate table; the scoring form is also available. The following list links to the individual tests of the systems.

Partially useful systems

Marginally useful systems

Useless for academic purposes

Systems looked at but not tested

A short explanation of why each of these systems was not tested can be found on a separate page.

  • Academic Plagiarism
  • AntiPlag
  • Custom Writings
  • Effective Papers
  • iThenticate
  • KOPI
  • PaperRater
  • The Pensters
  • Plagiarism Checker
  • Plagium
  • PlagSpotter
  • Small SEO Tools
  • WriteCheck

Partial test in 2014 of a system that only looks at the Wikipedia as a source