Praxis: A Writing Center Journal • Vol. 23, No. 1 (2025)

What is Too Much Help from GenAI?

Jeffrey Warndof
University of Arkansas - Fort Smith
jeffrey.warndof@uafs.edu

It’s safe to say that writing centers are fairly unified in both practice and ethos in 2025. It would be shocking, for example, to encounter a paper titled “Coming Full Circle: A Return to the Fix-it-Shop Model!” in an academic journal for writing center scholarship. Ideas such as collaborative learning, balancing directive and nondirective tutoring, and social constructivism have become conventional wisdom in writing center theory (Babcock), but there is nothing conventional about generative AI chatbots like ChatGPT. 

These bots “know” but are not cognizant. They are unfeeling, yet they have been shown to be biased. Whatever this new technology is, there is no escaping it (Cheatle). Use of tools like ChatGPT can be “refused” (Fernandes et al.) on the grounds that their nature is ethically and pedagogically antithetical to writing. Yet some educators or educational services—including writing centers—may feel compelled to integrate generative AI (GenAI) into their practices in the spirit of “adapt or die.” As S. Scott Graham puts it, “Prohibition has always been doomed to failure” (163). 

Ironically, in integrating AI technology, writing centers risk backsliding to the mid-century “fix-it-shop” model from which they have striven to divorce themselves. In the fix-it-shop model, a tutor essentially writes or edits a client’s paper for them. In this way, the tutor provides too much help. How much to assist a client with their writing is the ethical consideration that largely defines tutoring. What constitutes “too much” help is subjective, but theory provides heuristics so that tutors may perform their duties with relative uniformity: read the paper together and aloud; keep the paper between the tutor and the client; don’t mark on the paper; ask questions; learn from the client; use scaffolding techniques, etc. (Brooks 3-4; Fitzgerald and Ianetta 105). 

But what constitutes too much help from a chatbot? Early writing center studies on GenAI addressed key topics such as adding AI skills to tutor training and the question of human vs. AI feedback, but there has been little scholarship on how AI should be used as a tutoring tool. Perkins et al. argue that leaving AI policy to the discretion of professors results in a lack of cohesion in universities, and though ethical AI usage is “seldom defined or demonstrated,” they emphasize that “it is important that writing centers do not shy away from this complicated question.” 

A writing center may likewise lack cohesion when its AI policy is incongruent with its overall tutoring practice. Considering what constitutes “too much” help from a chatbot may provide insight. 

Upon identifying a needed revision in a client’s paper, a well-intentioned tutor will ask themselves: “I could do X for the client, but should I?” The tutor is essentially determining how much help to give. The tutor could simply say, “This paragraph needs a topic sentence,” or “Say this instead.” But this would be too much help. Instead, the tutor should use techniques that guide and scaffold the client. This is common sense for most tutors. 

But what if, despite the tutor’s guidance, the client cannot seem to draft a topic sentence, and the tutor suggests summoning help from ChatGPT? Perhaps the tutor has the client explain the situation to ChatGPT and prompt it to produce three options for a strong topic sentence. But the tutor tells the client that they cannot simply copy one of the three sentences: they must write their own, using the chosen sentence only as a model. The client does so. In this scenario, the tutor guided the client away from copying and pasting directly from ChatGPT. By determining which candidate sentence best suited their paper and then drafting their own, the student might feel more confident in future drafts—thus improving not just the paper, but also the writer. 
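
For centers that script such exercises rather than work through the chat interface, the same request can be made programmatically. The sketch below uses the OpenAI Python client; the model name, prompt wording, and placeholder variable are illustrative assumptions, not part of the scenario above.

    # Illustrative sketch only: asks a model for three candidate topic
    # sentences without rewriting the client's paragraph. Model name and
    # prompt wording are assumptions.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    paragraph = "..."  # the client's paragraph goes here

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{
            "role": "user",
            "content": (
                f"Here is a body paragraph from my essay:\n\n{paragraph}\n\n"
                "Suggest three options for a strong topic sentence. "
                "Do not rewrite the paragraph itself."
            ),
        }],
    )
    print(response.choices[0].message.content)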

Nevertheless, in this scenario, too much help is provided. Take this same scenario sans ChatGPT. The client asks the tutor, “Could you write three topic sentences for me based on this paragraph? I just need something to help get me started.” Time constraints aside, an ethical tutor would not write the sentences for the student. Too much help is too much help, whether it is natural or artificial. Thus, a common-sense heuristic like “If I, the tutor, could do X, should I?” has utility for writing centers in these times. 

With this in mind, the question arises: can chatbots ethically and productively supplement tutoring in writing centers? Two uses might be considered. General-purpose chatbots such as ChatGPT can be prompted to recommend sources on specific topics. A user’s ability to search with complete sentences and follow-up questions can yield results a traditional search engine, prompted with keywords and phrases, cannot. In this way, a chatbot can be used strictly as a research tool, and a tutor trained in prompt engineering could guide a client in this process. There are also chatbots specifically designed for source retrieval, such as Perplexity and Elicit. Using AI tools in these applications is arguably equivalent to a tutor helping a client use their institution’s academic databases. Because the help the AI provides is strictly source retrieval, it passes the heuristic: a tutor would if they could. 
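
To make the contrast with keyword search concrete, the following sketch carries a full-sentence query and a narrowing follow-up over a single conversation; the prompts and model name are illustrative assumptions. As with any chatbot-recommended sources, a tutor would still have the client verify that each suggested source actually exists before citing it.

    # Illustrative sketch only: a full-sentence research query with a
    # follow-up question in the same conversation. Prompts and model
    # name are assumptions.
    from openai import OpenAI

    client = OpenAI()

    messages = [{
        "role": "user",
        "content": ("What are some foundational peer-reviewed sources on "
                    "collaborative learning in writing center pedagogy?"),
    }]
    first = client.chat.completions.create(model="gpt-4o-mini",
                                           messages=messages)
    messages.append({"role": "assistant",
                     "content": first.choices[0].message.content})

    # The follow-up narrows the search the way a tutor might in session.
    messages.append({
        "role": "user",
        "content": "Which of those focus specifically on peer tutoring?",
    })
    second = client.chat.completions.create(model="gpt-4o-mini",
                                            messages=messages)
    print(second.choices[0].message.content)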

A chatbot can also excel as a “reverse dictionary” (Xu et al.). A client may be struggling to think of a particular word or concept based on a set of qualities; for example, they may be wondering, “When I watch a movie that I’ve seen before with someone who hasn’t, it feels like I’m seeing it for the first time. What’s a word that describes this experience?” In one such chat, ChatGPT accurately suggested “vicarious” (“When I watch”). After a dense dorm room discussion on the nature of reality, a student may ponder, “What’s the idea that everything you experience is just a product of your mind?” When that student asks ChatGPT, the AI accurately responds with “solipsism.” Inquiries that require fully worded context are often better suited for chatbots than search engines (Nunes). And although a tutor would rightfully supply the correct word or concept if they could, their “dataset” is minuscule compared to any chatbot’s. The reverse dictionary use of AI is arguably appropriate and also leverages the chatbot’s vast dataset. 
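
The same reverse-dictionary move can be wrapped in a few lines of code; the helper name, system prompt, and model below are illustrative assumptions rather than anything prescribed by the sources cited here.

    # Illustrative sketch only: a small reverse-dictionary helper. The
    # function name, system prompt, and model are assumptions.
    from openai import OpenAI

    client = OpenAI()

    def reverse_lookup(description: str) -> str:
        """Return a word or concept matching a fully worded description."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": ("Name the single word or concept the user is "
                             "describing, with a one-sentence gloss.")},
                {"role": "user", "content": description},
            ],
        )
        return response.choices[0].message.content

    print(reverse_lookup("What's the idea that everything you experience "
                         "is just a product of your mind?"))  # e.g., solipsism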

Unless prompted otherwise, a chatbot will give students answers, but it will not guide them, and this is antithetical to tutoring. Individual writing centers must take care to keep their AI policy consistent with their overall practice. The heuristic presented here can be used to ensure that both chatbots and humans are held to the same standard when it comes to offering “too much help.”

Works Cited

Babcock, Rebecca. “Writing Center Theory and Research: A Review.” The Peer Review, vol. 7, no. 2, Spring 2023, https://thepeerreview-iwca.org/issues/issue-7-2/writing-center-theory-and-researcha-review/. 

Brooks, Jeff. “Minimalist Tutoring: Making the Student Do All the Work.” The Writing Lab Newsletter, vol. 15, no. 6, Feb. 1991, https://ucwbling.chicagolandwritingcenters.org/wpcontent/uploads/2015/06/Jeff-Brooks-Minimalist-Tutoring-Making-the-Student-Do-All-the-Work.pdf. 

Cheatle, Joseph N. “TPR AI Special Issue Introduction: No Escaping GenAI: Confronting a New Writing Center Reality.” The Peer Review, vol. 9, no. 2, Mar. 2025, https://thepeerreview-iwca.org/issue9-2/introduction/. 

Fernandes, Maggie, et al. “What Is GenAI Refusal?” Refusing Generative AI in Writing Studies, 12 Nov. 2024, https://refusinggenai.wordpress.com/what-is-refusal/. 

Graham, S. Scott. “Post-Process but Not Post-Writing: Large Language Models and a Future for Composition Pedagogy.” Composition Studies, vol. 51, no. 1, 2023, p. 163, https://compstudiesjournal.com/wp-content/uploads/2023/06/graham.pdf. 

Nunes, Gabriel B. “Use AI as a Reverse Search Engine.” Kronopath, 28 Feb. 2024, https://www.kronopath.com/blog/use-ai-as-a-reverse-search-engine/.  

Perkins, Meredith, et al. “How the Lack of Cohesion in University AI Policy Poses Challenges to Writing Consultants.” Praxis, vol. 22, no. 1, 2024, http://www.praxisuwc.com/221-perkins-et-al. 

“Prompt Engineering.” OpenAI Platform, https://platform.openai.com/docs/guides/prompt-engineering. Accessed 8 Apr. 2025. 

Xu, Ningyu, et al. “On the Tip of the Tongue: Analyzing Conceptual Representation in Large Language Models with Reverse-Dictionary Probe.” arXiv, https://arxiv.org/html/2402.14404v2. Accessed 17 Mar. 2025. 

“What’s the idea that everything you experience is just a product of your mind?” prompt. ChatGPT, OpenAI, 8 Apr. 2023, https://chatgpt.com/share/67f56eb4-8970-8002-af6e-4cf04e34ff98.

“When I watch a movie that I’ve seen...” prompt. ChatGPT, 4 Dec version, OpenAI, 4 Dec. 2025, https://chatgpt.com/share/6931a357-ed6c-8002-a2cd-38d76854a33c.