Ask A Silly Question, Get A Non-Standard Answer
Guest article by Simon Very
I have taught science at the post-compulsory level for ten years. A considerable portion of the courses that I delivered was internally assessed and coursework-based. Assessment of these courses did not involve standardized testing, but it did involve standardized criteria against which learners’ work was compared; the writing of coursework briefs also needed to comply with sets of standard criteria. The need to comply with a set of standards tended to create considerable difficulties around the originality of learners’ work; plagiarism-related concerns were common.
For many years I have been aware that educators use plagiarism detection software (I used Turnitin) to discourage students’ plagiarism, but I recently discovered that learners have begun to use paraphrasing software that may or may not be specifically configured to defeat plagiarism-checking software (as described in this article). The paraphrased writing (pasting?) produced reads strangely enough to be fairly easily recognizable as not normal (although the same could be said for some learners’ authentically composed work).
It is difficult to ignore the possibility that growing use of refined paraphrasing tools designed to fool plagiarism detection software could lock educators and learners into an arms race of text analysis and obfuscation.
In teaching, I have experienced a situation similar to this hypothetical arms race (but with lower-grade weapons). My learners’ completed assignments often contained many, many ‘ready-made’ answers to questions: answers easily found with search engines (often by using the questions from assignment briefs verbatim as search terms).
My initial response was to refuse to accept these ready-made answers as valid assignment content. This response was seriously limited by the proviso that information from secondary sources was permissible if it was appropriately referenced.
Most of my learners quite soon became rather good at correctly referencing the content that they used. The learners had discovered that the external rules for the delivery of the qualifications they were studying for made no clear pronouncements about permissible proportions of original content (nor did they permit centers to make their own stricter rules; that would create standardization problems).
Presumably, the learners must have carefully read through these (very dry) regulations, or at least vaguely known of someone who had done so. This struck me as a good example of learner-led, undirected, collaborative applied problem solving by the learners. The acceptance of unlimited secondary sources in assignments led to submissions that consisted almost entirely of referenced quotes linked by a few throwaway extra lines, with different learners’ assignments differing only in those few throwaway lines.
I responded to this development by ruling that the first submitter of a secondary source had the right to use it (they had at least found it themselves, presumably, rather than having been sent it by someone else), but that subsequent submissions of the same source were considered plagiarisms of the first submission. This policy worked as intended until most learners learned that different answers to the same question were available online in abundance, and that these answers were sufficiently generic that many paraphrased versions of pretty much any given answer existed; a class of learners could therefore all find non-identical versions of answers with a small degree of effort.
This did not imply that most learners recognized the common aspects and elements of comparable, generic answers in an analytical sense; they found them only through instrumental, combinatorial elimination, such as repeated elicitation of feedback.
Again, this struck me as the kind of effective collaborative, self-directed problem solving that I would have been very happy to see applied to actually understanding the assignment content.
As this situation continued, I gradually came to understand that the instrumental ability to find general answers was sufficient only for standard questions: the sort of questions that very clearly and directly addressed learning criteria and standards from qualification and curriculum specifications.
If non-standard, idiosyncratic questions (silly questions?) were asked, questions which had the same underlying meanings as standard questions but were phrased differently or appeared in specific contexts, then search engines were poor at finding ready-made answers (responding more to the phrasing or context than to the question’s meaning). Developing this strategy, I came to realize that the limitations of Google’s search algorithms made it possible to include terminological combinations in questions that ‘spoofed’ the algorithms into returning content which nominally matched the questions but in semantically inappropriate ways.
Carefully using Google Advanced Search or (better still) thinking carefully enough about what a question meant to rephrase it in various forms could go a long way towards guiding students to ready-made answers. Generally, learners were much less successful at doing that than they had been at bypassing my earlier strategies for getting them to produce original work.
Many learners seemed to genuinely find it hard to accept that search engines could fail to deliver the information nominally sought, or that deliberative thinking was an important consideration in search engine use.
Learners’ difficulty in accepting this reality struck me as an excellent example of how
I am Simon Very, and I blog at https://metalearningsite.wordpress.com/