A Human’s Review of ChatGPT’s Review of The Antibiotics Tale by Sonny Liew & Hsu Li Yang
A.I. seems to be taking over the world, displacing humans from traditionally creative jobs. People are using ChatGPT to write all sorts of things now, from academic essays to movie scripts to poems and jokes. Like many busy writers burdened with life’s mundane aspects, I, too, tasked the free yet mighty artificial intelligence to help me produce a long-overdue book review. Here it is:
What do you think?
My fellow editor thought it was rather impressive: it gave us the relevant content we asked for, the arguments are logical and supported with some details, the organisation is clear and the language is concise. It is indeed impressive for a chatbot, because it looks like a fairly decent book review produced by an obedient student who has objectively demonstrated their skills of knowledge synthesis. But this apparent obedience and objectivity are exactly its biggest issue when it comes to writing creatively: the content felt vaguely familiar and suspiciously unoriginal.
My educator’s sixth sense tingled, as the attempt felt like a third-hand regurgitation of what has already been published. My students have the same problem with art criticism: they paraphrase what they’ve read about an artist or artwork from my slides or some websites to answer essay questions, without truly formulating their own opinion. ChatGPT declares that it has no personal opinion because it is an A.I. Like my students, it simply gathers, rearranges and outputs to give me exactly what I asked for.
A review, or any sort of art criticism, contains value judgement that is tinged with personal, cultural and political motivations. A purely factual piece of writing is bland, as there would be no argument in the first place. Art criticism also entails more than generic claims that “the artist does a great job with beautiful and detailed illustration”: visual evidence needs to be given in support of any claim of “greatness”. At this point, I realised that perhaps I needed to give specific instructions for this A.I. student to improve its writing. Here’s the second revision:
The student, again, faithfully followed my instructions. It was able to provide visual evidence to support its claims, such as how the colour red is used to set the mood when the character is in physical danger, or how the panels become smaller and tighter during an emotionally intense scene. But honestly, how are these considered “standout” features, as it claims? Does the reader even care? Out of the many interesting things in the book, it chose to talk about a red background. Such mediocrity plagues my students’ visual analysis too: e.g. “the artist used one-point perspective to add depth and make the background look realistic; the detailed chiaroscuro makes the face look three-dimensional”. Not wrong, but meh.
Most interestingly, it claims that the visuals in the third story are “impressive”. I’m not sure how an A.I. can be “impressed”: did it actually “see” the work? And even if it “saw” patterns, did the sight trigger an emotional response of awe? Such praise, flowing effortlessly from an A.I. with no personal opinion, felt somewhat patronising.
Some are worried that A.I. might one day replace human writers, but as far as creative writing goes, ChatGPT produces predictable and formulaic content without personality or flair. It relies heavily on existing secondary data. It is, after all, a tool that bends to the will of the user, like a student who tries too hard to please a teacher who spends too much energy dictating the work. Perhaps it is not its perfection or talent in writing, but its student-like ability to arouse some degree of frustration, that makes ChatGPT a good candidate for passing the Turing test.