“Does Using ChatGPT Result in Human Cognitive Augmentation?”, 2024-01-19:
Human cognitive performance is enhanced by the use of tools. For example, a human using a calculator or a spreadsheet application can produce a far greater, and more accurate, volume of mathematical calculation per unit of time. Such tools have taken over the burden of lower-level cognitive grunt work, but the human still serves as the expert performing the higher-level thinking and reasoning.
Recently, however, unsupervised deep machine learning has produced cognitive systems able to outperform humans in several domains. When a human uses these tools in a human/cog ensemble, the cognitive ability of the human is augmented. In some cases, even non-experts can match, or even exceed, the performance of experts in a particular domain, a phenomenon called synthetic expertise. A new cognitive system, ChatGPT-3.5, burst onto the scene during the past year.
This paper investigates human cognitive augmentation due to using ChatGPT by presenting the results of two experiments comparing responses created with ChatGPT to responses created without it. [The tasks: asking the human to come up with ideas for reducing littering from shooting clays, and what age to retire at.]
We find that using ChatGPT does not always result in cognitive augmentation and does not yet replace human judgement, discernment, and evaluation in certain types of tasks. In fact, ChatGPT was observed to mislead users, resulting in negative cognitive augmentation.
…Interestingly, students using ChatGPT synthesized a number of ideas having no effect at all on the primary problem—littering of the grass by the skeet clay fragments. Examining them in detail shows these ideas were related to “educating shooters about the environmental impact” and “educating shooters about gun safety.” They can be explained by analyzing the response ChatGPT gives when the problem statement is used as the prompt.
ChatGPT is trained on articles and other content available on the Internet. Because the problem statement involves guns and shooting, ChatGPT responded with suggestions to educate shooters about gun safety: on the Internet, a document about guns and shooting is very likely to also include comments about safety. Even though the concepts of guns and safety are understandably related, the safety issue has nothing to do with solving the problem given in the problem statement—littering of the grass field. ChatGPT, however, does not perform the in-depth analysis needed to realize this; its responses are driven by word association. [No, this would be typical RLHF backfiring.]
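The paper's word-association argument (which the annotation above disputes) can be illustrated with a toy sketch. This is not ChatGPT's actual mechanism; it is a minimal co-occurrence counter over a small invented corpus, showing how association statistics alone would surface "safety" for any prompt mentioning shooting, regardless of relevance to the littering problem:

```python
# Toy illustration of association-driven drift (NOT ChatGPT's real mechanism):
# count which words co-occur with "shooting" in a small invented corpus.
from collections import Counter

corpus = [
    "gun shooting range safety rules",
    "shooting sports safety equipment guide",
    "skeet shooting clay targets broken on grass",
]

assoc = Counter()
for doc in corpus:
    words = doc.split()
    if "shooting" in words:
        assoc.update(w for w in words if w != "shooting")

# "safety" tops the counts, so a purely association-driven responder
# would raise it for any shooting-related prompt, on topic or not.
print(assoc.most_common(3))
```

A responder built only on such statistics has no way to check whether the highly associated word actually addresses the stated problem, which is the failure mode the paper describes.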
Likewise, because the problem statement mentions littering and damaging grass, ChatGPT treats associations with environmental issues as important, and therefore suggested education about the environment, since that pairing appears in millions of pages on the Internet wherever litter and harm to grass are mentioned. While one could argue that a shooter might be talked out of shooting once they understand the harm to the grass, this is unlikely to change the minds of the vast majority of shooters, so it is not a practical solution. Interestingly, in this case, using ChatGPT actually distracted students by misleading them into considering things having nothing to do with the problem. One could therefore argue that using ChatGPT actually decreased cognitive ability—resulting in negative cognitive augmentation.