Praxis: A Writing Center Journal • Vol. 16, No. 1 (2018)

Aligning With the Center: How We Elicit Tutee Perspectives in Writing Center Scholarship

Yanar Hashlamon
The Ohio State University
hashlamon.1@osu.edu

Abstract
This meta-analysis of writing center scholarship surveys the last thirty years of empirical work from The Writing Center Journal, WLN: A Journal of Writing Center Scholarship, and Praxis: A Writing Center Journal. Writing centers are traditionally predicated on treating writers as both beneficiaries of tutoring and active collaborators in its success. Our pedagogy is tutee-centered in its practice and the benefits it produces, and although we pride ourselves on acting as team players in tutoring sessions, does the same quality emerge in existing research? This paper finds writing center scholarship is rife with studies where the writer-as-beneficiary takes precedence over the often-absent writer-as-collaborator. Put another way, we often attend to writers as recipients of tutoring, but we rarely address their perspectives as active participants in testing our pedagogical assumptions. This paper demonstrates historical trends in scholarship and recent moves to center writers in rigorous, participatory roles in evidence-based inquiry. By engaging with tendencies in data collection in writing center research, this project addresses an unconsidered gap between existing principles and the role of tutees in our evolving research practices. This project offers a custom taxonomy for tutee-based studies and a thematically organized table of findings.

It’s safe to say that every writing center tutor fears an unhappy student. While directors and administrators carry the weight of institutional expectation on their shoulders, a tutor’s measure of success is often an affective one. We tell ourselves everything’s okay even if a writer leaves without that smile on their face—we give them what they need, not just what they want, after all. Phillip J. Sloan aptly describes this attitude, a kind of tutor’s hubris, as a “relationship with students, far from an equal collaboration, [that] is predicated on what we believe they need” (4). At the same time, our student-focused practices are ingrained adages that are taught, reinforced, and reflected on; how many studies open with a near-compulsory reference to Stephen North and the importance of “student centeredness”? But how many open with those students’ words instead? Their thoughts? Their experiences? Our pedagogy is writer-centered in its practice and the benefits it produces, so to what degree, if at all, have we been eliciting tutee¹ perspectives in writing center research?

In this article, I systematically examine the apparent gap between principles and practice by conducting a meta-analysis of the last thirty years of writing center scholarship concerning tutee participants. Although tutee perspectives may be an under-represented focus of study, writing center studies as a whole have spent the last two decades calling attention to the need to confirm untested orthodoxy—what Shamoon and Burns label the “writing center bible” (135). Our proverbial bible includes the commandments of non-directive tutoring, prioritizing collaboration, and producing better writers over better papers (Shamoon and Burns 139; Lunsford 9; North 438). These foundational tenets may vary in their use from center to center, but they all prioritize the role of tutees in the practice of tutoring. As North himself reflects on the “validation and growth” of writing centers, he writes that we “quite naturally rely on the writer, who is, in turn, a willing collaborator in—and, usually, beneficiary of—the entire process” (439). However, the double role tutees have in the writing center is unevenly reflected in writing center scholarship, with the tutee-as-beneficiary taking precedence over the often-absent tutee-as-collaborator. Our scholarship is rife with studies that address tutees as recipients of tutoring, but we rarely elicit their perspectives as active participants in testing pedagogical assumptions.

Foundational tenets have become the subject of increased critical attention, with calls for empirical research that could corroborate or complicate their cogency. Our field’s collective realization that our assumptions need evidence is palpable in the first sentences of any recent article on writing center research. How many begin by referencing the field-wide trend towards self-reflection with talk like, “The subject is research in Writing Center Studies . . . again” (Liggett et al. 50)? This penchant for reflection, along with the field’s diverse methods, creates a unique problem in scholarship. Tutees’ perspectives are clearly valued by writing centers’ disciplinary foundations, but the methods of engaging those same tutees in research are not so well dictated. The assertions that form the writing center’s body of knowledge, as Jeanne Simpson writes, “have been filtered through our own value systems, fears, lore, and aspirations” (1). Empirical research is a relatively recent addition to this list—and although writing center scholarship has a history of empirical study, that work has been largely naturalistic and interpretive, while inquiry into lore-based assumptions has accompanied a field-wide push for more planned, systematic modes of inquiry. The trend towards greater methodological diversity is interwoven with efforts to include tutee perspectives in contemporary work.

This study finds that writing center scholarship engages tutee perspectives within a tripartite taxonomy. First are satisfaction studies that have characterized writing center program evaluation since the early 1980s. Second are studies beginning in the mid-to-late nineties in which research shifted to include tutee perspectives in a peripheral capacity—incorporated in a given study, but not the priority of its inquiry or methods, which focus instead on other participants’ perspectives, such as those of tutors. Third are recent empirical studies that incorporate substantial tutee perspectives at the center of their work. Each study gathered here engages in participant inquiry, so my meta-analysis sub-categorizes research via its positioning of tutees as practitioners of writing center work. This tripartite taxonomy does not suggest a hierarchy of importance; nor does it imply that studies lacking tutee perspectives are in any way deficient. Rather, the meta-analytical schema shows possibility in writing center studies for incorporating perspectives of those we claim to centralize or empower in our pedagogy.

Methods

This project engages with three main academic journals in writing center talk: The Writing Center Journal (WCJ), WLN: A Journal of Writing Center Scholarship (WLN), and Praxis: A Writing Center Journal. I examined each journal from its founding issue and used a systematic approach to determine which research studies incorporated tutee perspectives. A meta-analysis was conducted on these journals using keyword searches and controlling for disqualifying variables with the following protocols:

  1. Used keywords: tutee; client; writer; student; satisfaction; assessment; empirical; evidence; experimental; quasi-experimental; study. Searched online databases for WCJ and manually searched in Praxis and WLN online archives.  

  2. Read abstracts or introductions to determine eligibility for inclusion. Only sources that referenced tutee/writers AND established evidence-based practice (claimed empiricism, or otherwise declared their methods) moved to step 3.

  3. Read and analyzed sources, examining the role of tutees in research inquiry. Took notes on tutee relationship to research (see table 1 “tutee role”), forming general descriptions that later informed the meta-analysis and taxonomization of findings.

  4. Catalogued discursive markers, methods, tutee role, research cohort, subject cohort, sample size, and artifacts (see table 1).

My corpus does include some edited collections and texts from non-writing-center-focused journals where they were relevant to tracking the history of tutee-focused research. These studies emerged from keyword database searches, as well as a manual search of the online bibliography Undergraduate Research Articles in Writing Center Studies: An Incomplete List. All sources fit the same inclusionary criteria as those from WCJ, WLN, and Praxis. Said criteria, detailed in step 2, were primarily set to omit lore-based and anecdotal sources, but they also speak to the sampling method of this study.

Past meta-analyses have inductively reviewed writing center scholarship, broadly sampling a body of research to address methodological trends in rigor and RAD classification (Lerner; Driscoll and Perdue). Following this tradition, I posit that empirical projects lack ideological consistency in how they position tutees—inquiring about tutee perspectives but eliciting them to varying degrees in scholarship. The theory that tutees occupy a variable position as knowledge-creators in writing center studies compels strict participatory and methodological criteria that are necessary constraints on the sample group. Thus, this study samples from each journal to deliver a focused corpus (n=33) that solely relates to the participant-beneficiary status of tutees in question. I observe tendencies in method and inquiry within my sample in order to inductively produce a taxonomy that scales research studies by the degree to which they centralize tutee perspective.
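
To make the screening and cataloguing protocol concrete, the following is a minimal sketch, in Python, of how a catalogued source and the step 2 screen might be represented. It is an illustration only: the StudyRecord fields and the meets_inclusion_criteria helper are hypothetical conveniences that mirror steps 2 through 4 above, not instruments actually used in this study.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class StudyRecord:
        """One catalogued source; fields mirror those logged in step 4 (see table 1)."""
        citation: str
        year: int
        discursive_marker: str      # how the research question or goal signals tutees
        methods: str                # e.g., "exit survey," "coded transcripts"
        tutee_role: str             # general description formed in step 3
        research_cohort: str        # who conducted the study
        subject_cohort: str         # who was studied
        sample_size: Optional[int]  # None when no explicit n is reported
        artifacts: str              # e.g., "session report forms"
        references_tutees: bool     # step 2: source addresses tutees/writers
        declares_methods: bool      # step 2: claims empiricism or otherwise states methods

    def meets_inclusion_criteria(record: StudyRecord) -> bool:
        """Step 2 screen: keep only sources that reference tutees AND declare their methods."""
        return record.references_tutees and record.declares_methods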

Results & Discussion

Across my corpus (see table 1), I identify a wide breadth of methods, demonstrating the diversity of evidence-based practice in writing center studies.² The table catalogues the publication year of each study as well as its main inquiries, participant and subject cohorts, how it collects data, and the perspectives it elicits. The breadth of findings in table 1 is presented in the tradition set by Rebecca Day Babcock, who publishes her own findings as a “quick ready resource” to make our research more accessible (39).

Initial examination of the findings table suggests the affordances and constraints of each research type. The satisfaction approach to inquiry does elicit tutee perspectives and produce knowledge, but without addressing tutees as reflective, collaborative participants in tutoring. This positioning is apparent in how tutee-satisfaction studies signal their inquiry. In six out of eleven studies, the discursive markers referenced tutees in yes-or-no research questions (see table 1). Although these six have methods that allow for negative feedback, the polar form of questions and the delivery of surveys immediately following a writing center session can bias results toward positive feedback. In contrast to these limits on perspective, assessment invites large subject cohorts and rigorous sampling. This layer of taxonomy has an average sample size of n=703. Compared to the averages³ of tutee-peripheral and central studies—n=40 and n=580, respectively—satisfaction research benefits from consistent methods of exit survey data collection. Assessment and satisfaction encourage strict research practices, producing knowledge about the beneficiary status of tutees, often at the expense of their critical perspectives on writing center practice.
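
As a worked illustration of how the category averages reported above can be computed while excluding studies that lack an explicit sample size (see note 3), the brief sketch below, in Python, averages only studies that report an n. The figures and the average_sample_size helper are hypothetical placeholders, not the corpus data catalogued in table 1.

    from statistics import mean
    from typing import Optional

    # Hypothetical (category, reported n) pairs; None marks a study with no
    # explicit sample size, which is excluded from its category's average.
    corpus: list[tuple[str, Optional[int]]] = [
        ("satisfaction", 1200), ("satisfaction", 206), ("satisfaction", None),
        ("tutee-peripheral", 55), ("tutee-peripheral", 25), ("tutee-peripheral", None),
        ("tutee-central", 743), ("tutee-central", 417),
    ]

    def average_sample_size(category: str) -> float:
        """Mean n for a category, ignoring studies that lack an explicit sample size."""
        sizes = [n for cat, n in corpus if cat == category and n is not None]
        return mean(sizes)

    for category in ("satisfaction", "tutee-peripheral", "tutee-central"):
        print(f"{category}: average n = {average_sample_size(category):.0f}")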

Tutee-peripheral studies recognize tutees as active participants in sessions, but not always in shaping knowledge about writing centers. A common tutee role in this layer of taxonomy is simply “to be present in writing center practice” (see table 1; Niiler; Severino et al.; DeCheck; Raymond and Quinn; White-Farnham et al.). Discursive markers grammatically cue tutees as objects in research questions and goals, with tutors or even the session itself taking the subject position of the sentence. Tutees indirectly collaborate with inquiry by their presence in taped and transcribed sessions, positioning their perspectives at the periphery of these scholarly projects.

Discursive markers in tutee-central studies signal centrality as tutees are grammatical subjects of research questions or have ownership over their writing and tutoring sessions, indicated by possessive forms (see table 1; Winder et al.). Tutee roles emphasize active, participatory, and collaborative perspectives in inquiries that favor a diverse range of subject cohorts and methods. There are seven different methods used in nine tutee-central studies, a majority of which employ both quantitative and qualitative design elements (see table 1). Where writing center talk overall is characterized by the methodological diversity illustrated in this meta-analysis, tutee-central studies include the most varied set of methods. The participant cohorts of tutees at this layer of taxonomy are equally diverse. Multilingual students, graduate writers, and writers with ADHD, just to name a few, come to the forefront of scholarship.

What the findings table does not show are the themes that run through each stage, suggesting a narrative of evolving research tendencies in writing center research. In what follows, I will explicate my findings to describe how the three research trajectories are separated by the degree to which they prioritize—or more accurately, centralize—tutee perspectives. Each section provides a comprehensive analysis of how tutee perspectives have been elicited, and for which purposes, in writing center scholarship over the last thirty years. Though they follow a chronological structure, research types have not linearly progressed from satisfaction to tutee-peripheral to tutee-central; instead, they fit a concurrent staging where the first is ongoing as the second starts, and so on for the third. At present, all three types of research are active in writing center publications. The tripartite structure of the following discussion addresses the chronological trends within each research type’s concurrent development.

Tutee-Satisfaction Research

When writing center scholars do include tutee perspectives, they historically tend to do so with a satisfaction survey. This method permeates our scholarship’s past and present (see table 1), stemming from the larger tradition of program evaluation from the early stages of research in writing centers. Satisfaction surveys generally track writers’ approval of tutoring, and under the category of “satisfaction scholarship,” tutees are framed as beneficiaries. As such, surveys ask if tutees find tutoring sessions helpful and how they could be more effective, often in mixed method or purely quantitative terms.

Satisfaction surveys provide a key insight into the affordances and constraints of tutee perspectives in assessment-based writing center scholarship. As early as the 1980s, articles in the WCJ presented results from tutee surveys to determine satisfaction along with larger implications for writing centers’ assertions. Irene Lurkis Clark’s 1985 study “Leading the Horse: The Writing Center and Required Visits” questions the factors that motivate writers who seek or are instructed to have a tutoring session. This question is framed on an institutional level, asking “whether or not students ought to be required to visit the Writing Center”; however, Clark also frames her inquiry against North’s foundational assertion of the difficulty of converting required visits to desired ones (31). The perspectives of writers—both those who have and have not visited the writing center—are elicited, but in a way that doesn’t match up to the sophisticated pedagogical discussion of Clark’s inquiry.

The study’s titular and sustained metaphor of writers as horses to be led and watered portrays writers in limited terms; they are present in the writing center, but as passive recipients of tutoring, to put it generously. This restriction is apparent in the study’s discursive marker, which signals tutees as objects in a polar question: “whether or not students ought . . . .” (see table 1). The design of Clark’s study, a Likert-scale format, is defined by questions that gauge tutees’ sources of motivation to consult with a tutor. This method draws feedback solely in regard to satisfaction, with items such as “The Writing Center is valuable to my overall writing improvement” and “The tutors in the Writing Center are not helpful” (Clark 32). This general phrasing allows tutees’ perspectives to emerge in controlled ways that do not risk the articulation of any larger critique of tutoring philosophy. Clark’s early study sets up what will become a common habit of research design regarding participant inquiry in satisfaction surveys. That is, following Clark, many studies demonstrate a tendency to privilege their inquiry over the affordances of their method. Scholarship at this level of taxonomy asks far-reaching questions of writing center effectiveness for tutees through methods that limit the types of feedback those same tutees can provide.

Clark’s multifaceted use of the satisfaction survey is indicative of a trend shared by other studies that use assessment research to examine writing center pedagogy, but produce knowledge constrained by methods of program evaluation. For example, survey questions that frame all responses through approval lock tutees into the role of beneficiary. This trend is visible in satisfaction studies through the 1990s, illustrated by WCJ articles that sustain the same divide between inquiry and method. These articles also build on Clark’s work by recognizing the need for field-wide discussion of research. Wendy Bishop’s 1990 study is purposely akin to Clark’s, to the point that Bishop quotes the same line from North in her introduction that Clark cites in her own article (Bishop 32). Bishop’s exploration of student referral and tutoring satisfaction employs a survey format lacking a Likert scale; instead, her work relies on polar and short-answer questions—some of which share phrasing with Clark’s survey for the explicit purpose of comparison (34). Bishop concedes that she is “not a master survey maker,” and as such her study is methodologically limited in its design; it is illustrative of the budding research branch of writing center epistemology in 1990 (34). Despite its marks as an early effort, Bishop’s survey instigates a larger frame of inquiry by responding to Clark, replicating her methods, and calling for other writing centers to follow suit (40). By expanding the scope of her findings from an individual institution to one that includes other writing centers and by initiating field-wide discussion about empirical research, Bishop demonstrates ways that satisfaction studies began the push towards more rigorous evidence-based practice in early writing center research.

Jean Kiedaisch and Sue Dinitz’s inquiry in “Learning More from the Students” adds pedagogical importance to the survey they conduct, reflecting that “no one had asked not only whether clients were satisfied but also what factors affected the degree of their satisfaction,” and asking questions like, “Did clients prefer tutors of the opposite sex? Were ESL students or students with learning disabilities less satisfied than others?” (90). The authors go on to theorize that “if we could answer such questions, we could not only demonstrate our effectiveness but also identify which students we work with best and areas in which our tutors need more training” (90). Though not framed as an examination into writing center pedagogy, as early as 1991 Kiedaisch and Dinitz clearly demonstrate their research’s capacity to elicit tutee perspectives that challenge lore assumptions and produce knowledge that could improve tutoring pedagogy in writing centers. By including both tutee and tutor input, their study highlights how satisfaction surveys can provide rich and meaningful results under the bridle of program assessment. 

Kiedaisch and Dinitz’s satisfaction work questions the actual circumstances of sessions and calls for tutee perspectives to test persistent lore assumptions. As with prior surveys, they also ask that other writing centers reproduce their inquiry at their own institutions. Despite sharing their survey construction, no published scholarship has taken up the call to replicate, placing the study in the unfortunate tradition of Neal Lerner’s proverbial unpromising present. Kiedaisch and Dinitz’s article is distinct, however, in two ways that reflect the growth of empirical writing center research. First, they employ a statistician in their research cohort to ensure their correlations are properly drawn, in contrast to the limitations of past designs (Kiedaisch and Dinitz 99). Second, their methods incorporate tutor input in direct correlation to tutee input (Kiedaisch and Dinitz 90). Both participants in the tutoring session are surveyed regarding the quality of the session, prioritizing tutee input in an egalitarian methodological approach more aligned with writing centers’ student-centered pedagogies. Although the questions are the same for both tutor and tutee, they remain general in their inquiry for satisfaction—a point that changes in more contemporary studies.

Balancing tutor and tutee input is a methodological quality shared in more recent satisfaction surveys such as Thompson et al.’s 2009 article “Examining Our Lore: A Survey of Students' and Tutors' Satisfaction with Writing Center Conferences.” The study provides a table comparing lore assumptions and applicable research findings, reaffirming that the push to question foundational tenets is embedded in all our research efforts—even when those efforts are limited to assessing satisfaction and don’t elicit any active, participatory, or collaborative perspectives (83). “Examining Our Lore” also draws from both tutors and tutees, though with one exception: the two parties are not given the same survey (Thompson et al. 86). General questions, like those from Kiedaisch and Dinitz, for rating the “success” of the session are constant for both tutees and tutors, whereas other questions are separated by the depth of their vocabulary: “To what extent do you intend to incorporate the results of this conference in your writing? [Student survey] / To what extent do you think that this conference will influence the student beginning or revising his or her writing? [Tutor survey]” (86). Whereas tutors are surveyed with vocabulary coded for process writing such as “beginning” and “revising,” tutees are addressed in terms of writing as a product that simply “incorporates results,” bearing no indication of how writing center tutoring fits into a revision process (86). In questioning lore mandates, Thompson et al. elicit tutee perspectives, but only in terms of satisfaction, whereas tutor perspectives are prioritized in methods that engage with their reflections. This survey closely approaches a peripheral or even central focus in eliciting tutee perspectives; however, its focus on satisfaction and its methodological hobbling of tutee participation in the language of pedagogy help define the affordances of research at this level of taxonomy. That is to say, tutees are not afforded a participatory or collaborative role in research concerned solely with their satisfaction.

The tendency to open broad channels of discussion in inquiry while using methods that limit tutee input defines the satisfaction survey as a key starting point for the presence of tutee perspectives in writing center literature. These studies are not simply means of evaluation: the lines of inquiry they invite speak to larger goals of knowledge production despite the limited perspective historically elicited by their methods of data collection. Satisfaction surveys give us certain types of knowledge, namely for program evaluation, but preclude others from forming. Even as tutee perspectives are included, they are not necessarily prioritized in revisions of pedagogy or writing center practice. Because these studies are rarely replicated, tutee perspectives in satisfaction have no real role in the epistemological debates of program assessment. As research methods diversified in the 1990s, they accompanied a greater desire to question tutoring pedagogy, and writing centers began to speak with their tutees in greater nuance.

Tutee-Peripheral Research

The development of writing center scholarship includes a body of work that incorporates tutee perspectives to a much greater degree than satisfaction studies, albeit in a secondary role to concerns of tutoring practice. Where satisfaction studies often question foundational assertions in addition to surveying approval, research that takes a peripheral focus on tutee perspectives does so with greater diversity in methods. This isn’t to say satisfaction surveys lack the field’s trademark sense of variety—especially those from the last fifteen years, such as Peter Carino’s statistical correlation study and Cushman et al.’s work with focus groups; however, a greater degree of methodological diversity comes to fruition in studies with a peripheral focus on tutee perspectives.

Illustrating the diversity of methods at this level of taxonomy is White-Farnham et al.’s 2012 article “Mapping Tutorial Interactions,” in which tutoring sessions are coded for shifting tutor-tutee interactions and visually mapped on a quadrant. The methods are directly related to pedagogical assertions regarding facilitative/directive and writer/writing-centered interactions, suggesting that the degree to which tutee perspectives are elicited may depend on the centrality of the tutee in the particular foundational assertion being tested. In the case of “Mapping Tutorial Interactions,” White-Farnham et al. ask, “What are the qualities of the interactions that result from oscillations between facilitative and directive strategies?” (White-Farnham et al. 2). This line of inquiry is predicated on the tenet of non-directive tutoring, but as the mapping process only analyzes what is said in a session and not what comes before or after for the tutee, their perspectives are incidental to the purpose of empirically gauging conversation flow.

Similar in purpose to White-Farnham et al. is Blau, Hall, and Strauss’ 1998 study “Exploring the Tutor/Client Conversation,” which employs discourse analysis more familiar to humanities disciplines. The study collected session transcriptions that identify linguistic trends, satisfaction surveys for tutees and tutors, and tutors’ long-form self-reflections (Blau et al. 21). White-Farnham et al. cite this study as the one that led them to design their own mapping method (1). As with past research into satisfaction like that of Thompson et al., surveys are worded differently for tutees than for tutors, though in Blau’s work there is the additional effort to elicit tutors’ reflections on pedagogy. Though tutee perspectives are methodologically represented in the session transcripts and the short-form surveys, priority is given to the ways that tutors experience and reflect on pedagogy in sessions. The purpose of the study is to determine “the nature of the tutor/client relationship”—a line of inquiry framed by the assertion that collaboration is a sharing of authority and perspective (Blau et al. 21). The study demonstrates a methodological tendency to include tutee perspectives insofar as they reflect satisfaction.

Some studies break from the tendency to position tutees solely as beneficiaries of knowledge production. Even when research design is fairly split in its methodological prioritization of tutee and tutor input, such inquiry doesn’t necessarily access tutee perspectives in their full potential or capacity. Laurel Raymond and Zarah Quinn’s 2012 article “What a Writer Wants: Assessing Fulfillment of Student Goals in Writing Center Tutoring Sessions” is notable as one of three undergraduate research projects analyzed in this study. Tutee-satisfaction study cohorts included either lone faculty, or faculty working with tutors to gather data, with few exceptions (see table 1). Conversely, tutee-peripheral scholarship is the only type in this study that includes undergraduate researcher cohorts.⁴ Raymond and Quinn’s work codes and analyzes session report forms to “discover how well writers’ concerns matched up with the concerns tutors addressed” in the actual sessions (Raymond and Quinn 66). The methods of inquiry include pre-session input from tutees and post-session input from tutors, so the experience of tutoring itself isn’t studied in any capacity. The absence of the negotiation of expectations that goes on between tutees and tutors in sessions reveals not only the boundaries of an undergraduate study with limited resources, but also the way that inquiry can confine tutee perspectives where they could provide a deeper understanding of research findings. Lacking post-session reflection from tutees, the study clearly prioritizes tutor input; however, unlike other tutee-peripheral studies, Raymond and Quinn demonstrate an appreciation of tutees’ role in the tutoring process as they set goals that are “honored by their tutors” (76). This added priority doesn’t result in a central study, but it does show that research questions and inquiry can fall upon a spectrum. The degree to which a study values tutees as active participants beyond their roles as beneficiaries reflects recognition of their significance in inquiry.

Just as satisfaction surveys punch above their weight, so to speak, in questioning tutoring pedagogies where their purposes are evaluation-based, peripheral studies can elicit tutee perspectives even when their goals are focused more on tutors’ experiences and identities. This quality exists in both older and newer tutee-peripheral studies and is reflected in the way diverse methods suggest how tutees could be centrally positioned in research. Carol Severino’s 1992 study “Rhetorically Analyzing Collaborations” and Severino et al.’s article “A Comparison of Online Feedback Requests by Non-Native English-Speaking and Native English-Speaking Writers,” published seventeen years later, both illustrate the varied modality of research inquiry that elicits tutee perspectives in a peripheral capacity. The older study utilizes rhetorical analysis to show different collaborative methods within sessions to reflect on and improve tutor training (63). The more recent work, with its emphasis on multilingual tutees and their desired feedback versus that of native speakers in online sessions, is a very different study, though it also relies on rhetorical coding of tutee input—session request forms, rather than transcripts (116). Both pieces bear similar methods in their rhetorical approaches, though the studies’ purposes are notably distinct despite the common goal of challenging pedagogical tenets. As such, this research challenges assertions that non-directive tutoring is best or that multilingual writers disproportionately prioritize lower-order concerns. On both subjects, tutees should be able to contribute their perspective as participants, rather than simply as the subjects of tutoring; however, neither study elicits direct perspectives on tutoring pedagogies. In the newer study, Severino calls for tutees’ voices to be heard in future research, referring to a hypothetical “brief survey that online students fill out after they receive their feedback” to ask, “Did they use feedback on bigger issues, smaller issues or both?” and to perform longitudinal case studies on revised papers (125-126). Although the older study only suggests additional research to aid tutor development, Severino shows that peripheral studies can be considered necessary steps towards prioritizing tutee perspectives and their experience of tutoring within their individual writing processes. Greater methodological diversity in research can help writing centers make this epistemological shift, and it has already done so in the tutee-central work reviewed in the following section.

The placement of these writing center studies in the tutee-peripheral group demonstrates how a study’s purpose and methods determine one another and shape the degree to which tutee perspectives are elicited. Where the methods of satisfaction studies limited the extent and complexity of tutee input, peripheral studies are marked by methods that could elicit said input; however, their purposes prioritize tutor reflection instead. The methodological diversity of writing center research means that research questions can take many shapes in producing knowledge that challenges or affirms lore-based tenets. It also means that there is no reason not to elicit tutee perspectives and to adjust our methods to allow for such inquiry when our pedagogies call for its prioritization in discourse. Studies that centralize tutee perspectives of writing center pedagogies are methodologically similar to their peripheral counterparts, though with adjustments to their lines of inquiry that cue tutee input as an explicit priority.

Tutee-Central Research

Satisfaction and tutee-peripheral research have been published since the 1980s and 1990s, respectively, and tutee-centered studies make up the newest type, only appearing in our three major journals within the last five years. These studies not only incorporate tutee perspectives to a great degree, but also prioritize tutees as active collaborators in producing knowledge about writing centers. Satisfaction surveys often ask much more daring questions than their purpose of program evaluation would imply, so it can be useful to compare such studies to tutee-central publications that ask similar questions but use methods that allow for rich participant responses, given the nature of their inquiry. Justin Hopkins and Bethany Mannon’s program evaluations appear in the same 2016 issue of Praxis, yet both writers illustrate very different types of knowledge and elicit tutee perspectives to varying degrees. Hopkins’ goal is to evaluate workshops, asking, “Did the students think and feel the workshops were worthwhile, and if so, how and why, and if not, how and why not?” (Hopkins 36). His methodological development, however, is intentionally kept narrow—never expanding in scope beyond local implications. Hopkins’ reflection that satisfaction surveys often lack complexity rings true of the common gap between inquiry and approach outlined in this meta-analysis, but his intentionally localized approach seems to overlook the greater possibilities of inquiry. He argues that program-evaluation research doesn’t need field-wide implications to be effective, yet the most constructive aspects of Hopkins’ study are arguably the opinions tutees provide on survey forms—“short stories” that provide “good, helpful feedback” (Hopkins 41). In methodologically limiting tutees to respond only in terms of the immediate workshops they attend, Hopkins’ study ironically emphasizes the value of engaging tutees as active, collaborative agents in tutee-central studies.

By situating tutees as active participants in producing knowledge about tutoring, central studies illustrate the writing center’s role in facilitating the complex realities of individual writing processes. Where Hopkins suggests the possibilities of research, Bethany Mannon delivers only a few pages later in the same issue. Her work with graduate tutees isn’t predicated on satisfaction; instead, she seeks to describe the ways that sessions help tutees in terms of their larger writing practices. She writes that her “questions are similar to those that writing centers might use for assessment”; however, Mannon’s design pushes past the limits of satisfaction inquiry to elicit tutee perspectives in greater nuance (Mannon 60). She asks about the “types of writing” tutees bring in, the “forms of feedback” they desire, and the differences between writing center tutoring and the help provided by “other resources” they use for writing (Mannon 60). Each question provides tutees with a space to articulate their expectations for the writing center and its place in their compositional processes. Hopkins’ two open questions—“what did you like best about the workshop?” and “what would you suggest changing for future workshops?”—are far more confining in comparison (Hopkins 44). Despite sharing the basic goal of improving writing centers’ interactions with tutees, Mannon’s effort is situated in the wider language of process writing and in the pedagogical conceptions of writing centers. Exploring the “role graduate writers see the writing center consultation taking in a larger feedback ecosystem,” Mannon elicits tutee perspectives as a central concern of research with minor adjustments to the methods of satisfaction surveys (62).

Tutee-central research demonstrates strong plurality in perspectives that lead researchers down new and diverse lines of inquiry—contrary to lore-based assertions that tend to generalize tutees and their desired feedback. Mannon’s focus on graduate students in particular speaks to the tendency of central studies to engage with the diverse backgrounds and interests of tutees. Further illustrating this tendency, Savannah Stark and Julie Wilson’s 2016 article “Disclosure Concerns: The Stigma of Attention Deficit Hyperactivity Disorder in Writing Centers” prioritizes ADHD writers’ experience of tutoring. Like Mannon’s, this study deploys interviews to elicit perspectives, though with an added layer as “interviews were coded for themes of definition of ADHD” (Stark and Wilson 6). In marked contrast to tutee-peripheral studies that include both tutor and tutee input with greater attention paid to the former, Stark and Wilson’s design weighs more heavily on the latter to guide their inquiry. The researchers held 10- to 20-minute interviews with tutors, whereas writer interviews ran 40 to 60 minutes (Stark and Wilson 6). The study is meant to increase understanding of ADHD, so tutor interviews respond to tutee input and the issues of stigmatization that writers face in visiting the writing center (Stark and Wilson 8). The way that tutee identity guides design and reflection in this study is indicative of consistency in this mode of research between the affordances of method and the goals of inquiry. By addressing tutee identity as central, the diversity of those who use the writing center is brought to the forefront of the field’s discussion.

Multilingual students were paid some attention in peripheral studies like Severino’s, but tutee-central work with the same groups captures the writing center’s perceived place in language acquisition with greater attunement to the perspectives of writers themselves. Appearing in Educational Studies in 2016, Roger Winder et al.’s article “Writing Centre Tutoring Sessions: Addressing Students’ Concerns” is most similar to Raymond and Quinn’s research as it too explores a “correlation between students’ and peer-tutors’ perceptions of help received/provided” (325). Winder et al. are unique in how they elicit tutee perspectives in terms of coded metalanguage. Before and after sessions, tutees express their desires and the feedback they received through language of higher- and lower-order concerns and are actively engaged by surveys that ask them not to assess, but to reflect on tutoring by sharing their perception of the help they received (Winder et al. 332). Tutors receive the same survey, and though Winder et al. admit that multilingual tutees may not be acclimated to compositional metalanguage and that this may account for some limitations in the study, they nonetheless maintain an egalitarian methodology (335). Some studies dilute their metalanguage for surveys, but Winder et al. centralize tutee input by maintaining the same codes for all participants in the tutoring session and framing their talk in terms of larger vocabularies of composition and writing center pedagogy. In addition, their sample size of 743 recalls the rigorous sampling of satisfaction studies without that category’s tendency to restrict tutee input. Pamela Bromley et al.’s award-winning 2015 study deploys the same framing mode of “perception” to discuss their key concern: intellectual engagement.

The tutee-central study by Bromley et al. is distinctive even within the wide-ranging body of writing center literature for its design and abstract inquiry. The “empirical, multi-institutional study uncovers and evaluates students’ definitions of intellectual engagement in their writing center sessions” (Bromley et al. 1). The study’s purpose is rooted in tutee perspectives, and as with Winder et al. the mode of data collection involves surveys; however, unique to this study is the way that these surveys developed. Bromley et al. used focus groups to elicit tutee perspectives “to discover how students were defining ‘intellectual engagement’ and to probe students’ perceptions of their visits” (2). Tutees are not just subjects of study, but participants in defining methods and determining scope. Knowledge creation is collaborative where tutee perspectives are prioritized, and the study’s findings further support this priority as “students who used our writing centers have a more nuanced understanding and appreciation of their own, and of their tutors’, intellectual engagement” (Bromley et al. 5). Unlike past studies, Bromley et al. do not respond to ingrained pedagogical assertions and therefore divorce their inquiry from lore-based history. In allowing tutees to define intellectual engagement and respond to it in an empirical study, Bromley et al.’s research forms new ways of knowing or understanding tutoring that couldn’t have occurred otherwise. It’s clear from this study that tutees can articulate aspects of writing center pedagogy when their perspectives are centralized. Bromley et al. show how responding to lore assertions isn’t the sole catalyst for effective knowledge production. In contrast, Rebecca Block’s 2016 study of reading aloud in tutoring returns to the first point of inquiry that began this meta-analysis and thus provides a fitting ending—our lore-wariness merits and even compels investigation and reexamination of writing center orthodoxy.

Tutee-centrality appears in the degree to which tutees are prioritized as active participants in an empirical study’s inquiry and methods. Block’s work with reading practices engages both tutee and tutor perspectives in methodological similarity to peripheral studies like that of Blau et al., with its use of coded transcripts and post-session surveys. On the level of purpose, Block also shares in the field-wide understanding that “empirical research can prompt us to re-examine the assumptions that underlie truisms” (35). All these similarities raise the question: why does Block’s inquiry result in a tutee-central study when she has no explicit aim to prioritize tutee perspectives? Block doesn’t actively engage tutees in the manner of Bromley et al.’s design; however, as Block and previously reviewed studies illustrate, egalitarian design and inquiry elicit tutee-central perspectives with or without the researcher’s intention. Coded transcripts are fairly even representations of the two parties involved in session talk, and subsequent surveys elicit perspectives of both participants without weighing one over the other (Block 41). Block’s research design facilitates tutee engagement, as the only transcripts coded for analysis were those whose surveys overlapped in the reflections of both tutor and tutee (Block 41). In the history of writing center research, Block’s study is a key example not just for its tutee-central findings, but for the way in which it illustrates that methods and inquiry don’t have to overtly draw out tutee perspectives on pedagogy; given the space to speak, tutees will do so on their own. All we have to do is listen.

Conclusion

The role of tutees in writing center research is as varied and recent as the field’s foray into new methods of empirical study and discussion. Within this meta-analysis’ taxonomy, it’s clear that tutees’ perspectives are elicited mostly in the way that they can assess writing center effectiveness or give context to tutor perspectives, as in tutee-peripheral work. Satisfaction surveys have been essential institutional tools; however, they historically measure effectiveness in metrics that limit tutee perspectives and potential critiques of writing center practice. It’s telling that the most recent satisfaction studies, those by Cheatle and by Hedengren and Lockerd, break from these tendencies: they more closely resemble their contemporaries in tutee-central research than their predecessors in satisfaction work. The swell of tutee-central studies in the last five years suggests an epistemological shift that is just beginning to hit its stride—though it has done so below the radar of writing center talk.

This moment presents an opportunity to honor and connect with those we serve on a daily basis. In pursuing this shift, the field must accept the risk of unsettling some of our most foundational practices and beliefs. The question in the end, then, isn’t just which types of research we should do and which types of knowledge we produce, but to what degree we are prepared to open ourselves up to such risk in our tumultuous economic and educational atmosphere. Roberta Kjesrud articulates a call to embrace this uncertainty. In preemptively defending our pedagogy, she writes, “[W]e miss the opportunity to describe, to explain, to explore, to predict—we miss all the complications that tell us we don't really know what works and why” (Kjesrud 44). Here I cosign her call for an exploratory paradigm, and I offer this meta-analysis to illustrate the value of the perspectives reflected by tutee-central research. I propose we ask how tutees experience and articulate tutoring pedagogy, testing every assertion writing centers hold and reinforce without their feedback. The studies I’ve taxonomized show that tutee perspectives may be closer to the center than they appear: even when they are at the periphery, tutees can drive critical insight into tutoring. Writing centers have always regarded writers as active participants in day-to-day sessions. Our research should serve its full purpose and produce knowledge that reflects all of our values as a pedagogical vehicle.

Acknowledgements 

Thank you to Drs. Susan Lawrence, Genie Giaimo, and Christa Teston for their kind mentorship, feedback, and guidance through this project.

Notes

  1. Writing center studies employ a variety of terms to refer to those we work with. “Tutee” is a passive designation, and “writer” is preferable for its active connotation; however, some studies cited in this article refer to writers and students who do not visit writing centers. For clarity’s sake, in this meta-analysis “writer” is a blanket term, and “tutee” specifies those who engage in tutoring within writing centers.

  2. See Liggett et al. for discussion of writing center scholarship’s tendency towards an immense variety of methods, a tendency this article further demonstrates.

  3. All three averages exclude studies that lack explicit sample sizes. This note speaks to a larger problem in writing center studies that has been covered by Lerner and that my meta-analysis corroborates. At least one source in each of the three layers of taxonomy lacks clearly defined research questions, subject cohorts, methods sections, or even sample sizes—the last of which would be unheard of in any other evidence-based field (Cushman et al.; White-Farnham et al.; Hug; Leary).

  4. I’d like to note here that undergraduate and graduate scholar interests tend towards more critical lines of inquiry relative to satisfaction surveys. Where the latter are almost entirely institutionally focused in design, the former are often more field-oriented. Theses and dissertations are outside the scope of this study, but they strongly illustrate the diversity of topics and subject cohorts in writing center studies. See Rebecca Ryan Block’s tutee-peripheral dissertation (which sets the foundations for her later work reviewed in this study), Joy Neaves’ tutee-central master’s thesis, Alexandra Valerio’s tutee-peripheral bachelor’s thesis, and Qianshan Chen’s tutee-central master’s thesis.

Works Cited

Babcock, Rebecca Day. “Disabilities in the Writing Center.” Praxis: A Writing Center Journal, vol. 13, no. 1, 2015, pp. 38-49.

Bishop, Wendy. “Bringing Writers to the Center: Some Survey Results, Surmises, and Suggestions.” The Writing Center Journal, vol. 10, no. 2, 1990, pp. 31-44.

Blau, Susan R., et al. “Exploring the Tutor/Client Conversation: A Linguistic Analysis.” The Writing Center Journal, vol. 19, no. 1, 1998, pp. 18–48.

Block, Rebecca. “Disruptive Design: An Empirical Study of Reading Aloud in the Writing Center.” The Writing Center Journal, vol. 35, no. 2, 2016, pp. 33–59.

Block, Rebecca Ryan. Reading Aloud in the Writing Center: a Comparative Analysis of Three Tutoring Methods. Dissertation, University of Louisville, 2010.

Bromley, Pamela, et al. “Student Perceptions of Intellectual Engagement in the Writing Center: Cognitive Challenge, Tutor Involvement, and Productive Sessions.” The Writing Lab Newsletter, vol. 39, no. 7-8, 2015, pp. 1-6.

Burns, William. “Critiquing the Center: The Role of Tutor Evaluations in an Open Admissions Writing Center.” Praxis: A Writing Center Journal, vol. 11, no. 2, 2014.

Carino, Peter, and Doug Enders. “Does Frequency of Visits to the Writing Center Increase Student Satisfaction? A Statistical Correlation Study—or Story.” The Writing Center Journal, vol. 22, no. 1, 2001, pp. 83-103.

Cheatle, Joseph, and Margaret Bullerjahn. “Undergraduate Student Perceptions and the Writing Center.” The Writing Lab Newsletter, vol. 40, no. 1-2, 2015, pp. 19–27.

Cheatle, Joseph. “Challenging Perceptions: Exploring the Relationship between ELL Students and Writing Centers.” Praxis: A Writing Center Journal, vol. 14, no. 3, 2017, pp. 21–31.

Chen, Qianshan. [陈倩珊]. Tutor-Tutee Interactions in the Writing Center: A Case Study at a College in South China. Thesis, University of Hong Kong, Pokfulam, Hong Kong SAR, 2012.

Clark, Irene Lurkis. “Leading the Horse: The Writing Center and Required Visits.” The Writing Center Journal, vol. 5/6, no. 2/1, 1985, pp. 31–35.

Corbett, Steven J. “Using Case Study Multi-Methods to Investigate Close(r) Collaboration: Course-Based Tutoring and the Directive/Nondirective Instructional Continuum.” The Writing Center Journal, vol. 31, no. 1, 2011, pp. 55–81.

Cushman, Tara, et al. “Using Focus Groups to Assess Writing Center Effectiveness.” The Writing Lab Newsletter, vol. 29, no. 7, 2005, pp. 1-5.

DeCheck, Natalie. “The Power of Common Interest for Motivating Writers: A Case Study.” The Writing Center Journal, vol. 32, no. 1, 2012, pp. 28–38.

Driscoll, Dana Lynn, and Sherry Wynn Perdue. “RAD Research as a Framework for Writing Center Inquiry: Survey and Interview Data on Writing Center Administrators’ Beliefs about Research and Research Practices.” The Writing Center Journal, vol. 34, no. 1, 2015, pp. 105–133.

Giaimo, Genie. “Focusing on the Blind Spots: RAD-Based Assessment of Students’ Perceptions of Community College Writing Centers.” Praxis: A Writing Center Journal, vol. 15, no. 1, 2017, pp. 55-64.

Hedengren, Mary, and Martin Lockerd. “Tell Me What You Really Think: Lessons from Negative Student Feedback.” The Writing Center Journal, vol. 36, no. 1, 2017, pp. 131–145. 

Hopkins, Justin B. “Are Our Workshops Working? Assessing Assessment as Research.” Praxis: A Writing Center Journal, vol. 13, no. 2, 2016, pp. 36-45.

Hug, Alyssa-Rae. “Two’s Company, Three’s a Conversation: A Study of Dialogue Among a Professor, a Peer-Writing Fellow, and Undergraduates Around Feedback and Writing.” Praxis: A Writing Center Journal, vol. 11, no. 1, 2013.

Kiedaisch, Jean, and Sue Dinitz. “Learning More from the Students.” The Writing Center Journal, vol. 12, no. 1, 1991, pp. 90–100.

Kjesrud, Roberta D. “Lessons from Data: Avoiding Lore Bias in Research Paradigms.” The Writing Center Journal, vol. 34, no. 2, 2015, pp. 33–58.

Leary, Chris. “Eavesdropping Twitter: What Students Really Think About Writing Centers.” Praxis: A Writing Center Journal, vol. 14, no. 3, 2017, pp. 63–67.

Lerner, Neal. “The Unpromising Present of Writing Center Studies: Author and Citation Patterns in ‘The Writing Center Journal’, 1980 to 2009.” The Writing Center Journal, vol. 34, no. 1, 2014, pp. 67–102.

Liggett, Sarah, et al. “Mapping Knowledge-Making in Writing Center Research: A Taxonomy of Methodologies.” The Writing Center Journal, vol. 31, no. 2, 2011, pp. 50–88.

Lunsford, Andrea. “Collaboration, Control, and the Idea of a Writing Center.” The Writing Center Journal, vol. 12, no. 1, 1991, pp. 3–10.

Mannon, Bethany. “What Do Graduate Students Want From The Writing Center? Tutoring Practices to Support Dissertation and Thesis Writers.” Praxis: A Writing Center Journal, vol. 13, no. 2, 2016, pp. 59-64.

Morrison, Julie Bauer, and Jean-Paul Nadeau. “How Was Your Session at the Writing Center? Pre- and Post-Grade Student Evaluations.” The Writing Center Journal, vol. 23, no. 2, 2003, pp. 25–42. 

North, Stephen M. “The Idea of a Writing Center.” College English, vol. 46, no. 5, 1984, pp. 433-46.

Neaves, Joy. Meaningful Assessment for Improving Writing Center Consultations. Thesis, Western Carolina University, 2011.

Niiler, Luke. “The Numbers Speak: A Pre-Test of Writing Center Outcomes Using Statistical Analysis.” The Writing Lab Newsletter, vol. 29, no. 5, 2005.

Pfrenger, Wendy, et al. “’At First it was Annoying’: Results from Requiring Writers in Developmental Courses to Visit the Writing Center.” Praxis: A Writing Center Journal, vol. 15, no. 1, 2017.

Phillips, Talinn. “Shifting Supports for Shifting Identities: Meeting the Needs of Multilingual Graduate Writers.” Praxis: A Writing Center Journal, vol. 14, no. 3, 2017.

---. “Tutor Training And Services For Multilingual Graduate Writers: A Reconsideration.” Praxis: A Writing Center Journal, vol. 10, no. 2, 2013.

Raymond, Laurel, and Zarah Quinn. “What a Writer Wants: Assessing Fulfillment of Student Goals in Writing Center Tutoring Sessions.” The Writing Center Journal, vol. 32, no. 1, 2012, pp. 64–77.

Severino, Carol, et al. “A Comparison of Online Feedback Requests by Non-Native English-Speaking and Native English-Speaking Writers.” The Writing Center Journal, vol. 29, no. 1, 2009, pp. 106–129.

Severino, Carol. “Rhetorically Analyzing Collaboration(s).” The Writing Center Journal, vol. 13, no. 1, 1992, pp. 53-64.

Shea, Kelly A. “Through the Eyes of the OWL: Assessing Faculty vs. Peer Tutoring in An Online Setting.” The Writing Lab Newsletter, vol. 35, no. 7-8, Mar./Apr. 2011, pp. 6–10.

Sloan, Phillip J. “Are We Really Student-Centered? Reconsidering The Nature of Student ‘Need.’” Praxis: A Writing Center Journal, vol. 10, no. 2, 2013.

Stark, Savannah, and Julie Wilson. “Disclosure Concerns: The Stigma of Attention Deficit Hyperactivity Disorder in Writing Centers.” Praxis: A Writing Center Journal, vol. 13, no. 2, 2016, pp. 5-13.

Undergraduate Research Articles in Writing Center Studies: An Incomplete List, http://tinyurl.com/UndergradWCResearch.

Thompson, Isabelle, et al. “Examining Our Lore: A Survey of Students’ and Tutors’ Satisfaction with Writing Center Conferences.” The Writing Center Journal, vol. 29, no. 1, 2009, pp. 78–105.

Thonus, Terese. “Triangulation in the Writing Center: Tutor, Tutee, and Instructor Perceptions of the Tutor’s Role.” The Writing Center Journal, vol. 22, no. 1, 2001, pp. 59–82. 

Valerio, Alexandra M. “Connecting Theory and Evidence: A Closer Look at Learning in the Writing Center.” Honors in the Major Theses, 211, 2017. Retrieved from stars.library.ucf.edu/honorstheses/211.

White-Farnham, Jamie, Jeremiah Dyehouse, and Bryna Siegel Finer. “Mapping Tutorial Interactions: A Report on Results and Implications.” Praxis: A Writing Center Journal, vol. 9, no. 2, 2012, pp. 1-11.

Wilder, Molly. “A Quest for Student Engagement: A Linguistic Analysis of Writing Conference Discourse.” Young Scholars in Writing, vol. 7, 2010, pp. 94–105.

Winder, Roger, et al. “Writing Centre Tutoring Sessions: Addressing Students’ Concerns.” Educational Studies, vol. 42, no. 4, 2016, pp. 323-39.

Appendix

Table 1

[Table 1: thematically organized findings for the corpus (n=33), provided as a four-part image appendix.]