Praxis: A Writing Center Journal • Vol. 15, No. 1 (2017)

INSTITUTIONAL ASSESSMENT OF A GENRE-ANALYSIS APPROACH TO WRITING CENTER CONSULTATIONS

Brett Griffiths
Macomb Community College
griffithsb09@macomb.edu

Randall Hickman
Macomb Community College
hickmanr@macomb.edu

Sebastian Zöllner
University of Michigan
szoellne@umich.edu

Administrators of writing centers at two-year colleges keenly feel the call for improved research and greater visibility of writing “outcomes.” In the past decade, federal calls to raise educational attainment have inspired various initiatives to increase rates of student completion in degree and certificate programs (e.g., Achieving the Dream and the Guided Pathways initiative). At the same time, greater public awareness brings with it additional pressure and scrutiny on educational resources to demonstrate specific and measurable impacts. Chief among these—in the spotlight for education reform for over 100 years—is college writing.1 Within the context of college attainment goals and what has been called “accountability funding” in K-12, writing center administrators at two-year colleges find themselves again revisiting assessment and research practices with the goal of demonstrating to administrators and policymakers that writing centers improve “student success.”

Identifying “student success” as an impact of writing instruction can be difficult, precisely because writing and thinking are interactive processes: the content of written academic work often reflects a moving canvas, a place where students’ ideas change while they work. Education policy initiatives often privilege statistical outcomes, such as course completion and GPA, both snapshots of performed knowledge. Meanwhile, prevailing writing pedagogies, such as those published in the Council of Writing Program Administrators et al.’s Framework for Success in Postsecondary Writing (hereafter, Framework), aim to identify and support the interactive learning process itself, including writers’ abilities to recognize the rhetorical and social expectations of specific writing situations, to reflect on their writing choices and learning processes (metacognition), to create, and to adapt in response to changing demands. Statistical assessments do not, in and of themselves, detract from ethical teaching. Statistical measures become problematic when they are used as whole-cloth measures of ambiguous targets, such as “success,” decontextualized from students’ own goals and the learning contexts in which they are immersed. Used instead to test, evaluate, and improve our knowledge about quality instruction, statistical measures can be a valuable strategy for demonstrating the quality and import of the work we do. Given the commitment of writing instruction professionals to support students in developing habits of mind (rhetorical knowledge, reflection, writing processes) that empower them to identify expectations and transfer knowledge across situations, writing instruction professionals have a responsibility to actively seek institutional collaborations that showcase our values and evaluate our approaches.

Stephen North has memorably summarized the goal of writing centers as creating “better writers, not better writing” (438). “Better writers,” from the perspective of the Framework, are more aware of their own practices and better able to adapt and to support themselves when faced with new writing situations. Yet it is difficult to design and advocate for pedagogies that improve writers outside of conversations about outcomes assessment and funding. The schism between the two perspectives that writing instruction professionals face—improving student completion or developing better habits of mind, better writing or better writers—can hamper the potential for fruitful conversations between writing pedagogy professionals and administrators. This schism can leave writing pedagogy experts poorly positioned to steer institutional conversations about writing pedagogy and supports (Toth et al.). Here, we believe we have developed a tutoring protocol and assessment that help us advocate for and implement some of the principles of current writing pedagogy while also evaluating impact in the measures preferred by our administrative colleagues.

This article aims to put these two perspectives together in what Linda Adler-Kassner has called a “principled connection”—an assessment that evolved from a collaboration between advocacy for disciplinary values in writing pedagogy and institutional ways of identifying and interpreting student success initiatives. Our goals here are pragmatic. Specifically, we set out to devise an approach to writing center tutoring that would foster the development of the habits of mind described in the Framework and to assess the impact of that protocol, where possible using the statistical measures preferred for institutional reporting. Therefore, this paper has two parts: a description of our writing consultation protocol and a description of our assessment outcomes. The assessment outcomes were developed and analyzed in collaboration with our institutional research office. We recognize this article cannot—and does not hope to—resolve ongoing divisions in assessments, pedagogies, and standards. Nevertheless, we argue that it is ethically incumbent upon all of us in writing studies to explain our methods to our institutional and administrative colleagues and to demonstrate the effectiveness of the individual and reflective processes for which we advocate.

The authors of this article began with the premise that many writing center administrators and their institutional colleagues hold distinct expectations for the purpose and process of writing center work. These distinctions become highlighted through assessment reporting, because the goals and discourses of “outcomes” index community values associated with those distinctions. As such, we began by articulating clearly to our administrative colleagues—and our writing center colleagues—our tutoring protocol and its rationales. Through this articulation we discursively link our protocol to scholarship and recommendations in writing studies, including the concepts of “uptake” and “transfer” associated with the Framework for Success in Postsecondary Writing and genre analysis.² To illustrate our model for the purpose and process of writing center work, we developed an infographic depicting what we called the “consultation hierarchy” (Figure 1), which placed genre knowledge—understanding the assignment and attaching each assignment to prior experiences writing similar genres—at the top of the hierarchy and placed grammar, usage, and mechanics (GUM) at the bottom. Our intention with this infographic was to emphasize the importance of understanding the rhetorical situation and social expectations associated with specific writing situations first, before moving into questions about content, structure, language choice, et cetera. This decision helped the new writing center director articulate to administrative colleagues and superiors the goals of writing center work, while also acknowledging the dilemma of assessment in process-driven learning. In effect, we aimed to proactively acknowledge and reject expectations for the writing center as a location for editing or proofreading and to divorce “outcomes” from measures of grammatical correctness or the grades of individual assignments. This move was rhetorically important because of the tenuous nature of writing center funding and because our emphasis was on writing knowledge that students transfer, not on the conventionality or correctness of any single text students produced. Finally, announcing our assumptions about learning and the pedagogical principles that shaped our consultation sessions helped to more closely define the relationship between writing and knowledge transfer. Consequently, semester GPA, although not a perfect or singular measure of writing center impact, does assist us in tracking how well students are able to identify genre-specific writing expectations, to better advocate for their own learning processes, and to transfer knowledge about expectations and learning to other classes and contexts.

“Temporary Specially-Funded Strategic Initiative”: The Context for Assessment

It is useful here to understand the specific environment in which this writing center was founded and funded, because our situation mirrors the national accountability context in education, even while featuring specific local characteristics. In 2015, Macomb Community College—a large, multi-campus, two-year, open-admissions college north of Detroit—opened facilities at two of its campuses to support college students in their academic and professional reading and writing activities. These facilities, now called the Macomb Reading and Writing Studios (RWS), serve a diverse student body, including “academically-prepared” students as well as students who face challenges in their writing due to their academic preparation or language and dialect backgrounds. Multilingual writers whose first language is not English (e.g., Arabic, Bengali, Chaldean) comprise 21% of the students we serve, alongside a population of multidialectal speakers of English. The Reading and Writing Studios were initiated as one part of ongoing efforts to increase rates of persistence and completion among our students. The initiative also derived from our participation as a leader college in Achieving the Dream and from the commitment to student success in our assurance argument to the Higher Learning Commission. As such, the director also sought to rhetorically tie the principles of writing consultations to the measurements affiliated with both Achieving the Dream and the Higher Learning Commission.

The director was hired to establish a new, multi-campus academic literacy center. As such, the director’s primary responsibility was to develop a “pedagogical vision” for tutoring reading and writing to monolingual and multilingual students across multiple campuses and to work with institutional research resources to develop an assessment plan for measuring impact and effectiveness. The project was strategically funded for a two-year pilot initiative with the set goal of establishing a tutoring model that supports student persistence, while also providing regular assessment updates to the Provost, members of the Student Success Council, and the Achieving the Dream work team. Put simply, permanent funding for the center depended upon demonstrating that the devised tutoring approach had a positive impact on student success.

Mindful of this context, the director developed a tutoring protocol intended to incorporate three primary principles of writing pedagogy:

  • prioritizing the teaching of genre knowledge

  • promoting students’ rights to their own language

  • assisting students’ development of self-regulated learning strategies (e.g., conceptualizing, planning, and completing projects using individualized learning approaches)

These primary principles align with and reinforce the habits of mind identified in the Framework, including flexibility, reflection, and metacognition.

Our session protocol begins by inviting students to describe their understanding of the criteria for their assignment and their experiences writing similar assignments. Tutor training emphasizes the use of non-evaluative language and a focus on the social and historical expectations associated with genre types and prior learning. Our protocol then invites students to explicitly identify that prior learning and apply it to their new writing situation. Our session protocol heavily emphasizes student agency, encouraging our tutors to use terms such as “choice,” “decision,” and “reasons” to refer to writing moves, past and present. We actively discourage language associated with hierarchical views of language that describe Standard Academic American English as the pinnacle of writing success. We eschew terms like “correct” and “incorrect” grammar, usage, and mechanics in the materials publicizing our services, in the professional development of our tutors, and in the language of our instruction.

Additionally, session protocols encourage writers and tutors to conclude each consultation with a written session summary and a revision or work plan list of “next steps,” which articulates the student’s writing process and/or knowledge. For example, “next steps” might include procedural components, such as “embed sources,” or knowledge components, such as “search the Internet for examples of writing similar to the assignment.” This “next steps” component of the protocol helps students explicitly identify what they have learned in-session and encourages them to reflect on their process and to plan the next stages of their current and/or future writing projects. Such activities prompt metacognitive thinking, in which students engage in their individual writing process while planning the stages of completion and submission of their work (self-regulation).

We have encountered two critiques of our protocol. One reviewer for this article seemed to suggest our approach was more directive than the current trend in writing center studies. A second critique, from a faculty member, suggested that the protocol appears rigid and inflexible. However, we argue that our protocol makes the roles of tutors as coaches and of writers as decision-makers more explicit. Moreover, we argue that the structured nature of our protocols provides an important rhetorical tool that makes visible the processes of metacognition and adaptation, which helps students to transfer their learning from one writing situation to another. Thus, the protocol emphasizes student agency, reflective learning, and individual processes, providing students with the language and knowledge to help them navigate conversations about writing and “success” outside the RWS. 

Taken as a whole, the articulation of our protocol and infographic aimed to bring together scholarship in writing studies, to reinforce the habits of mind identified in the Framework, and to implement learning support strategies demonstrated in developmental education scholarship, all in the service of advancing a wider institutional understanding of writing instruction methods. Future funding of our writing center depended on our ability to connect impact, as measured in institutional terms, to learning principles aligned with writing studies. To negotiate this space, we turned to writing center assessment research. We advertised our principles, educated our colleagues about the scholarship and rationales that undergirded them, and consciously referred to those principles and rationales in our outreach initiatives. Simultaneously, we co-developed an assessment protocol that linked outcomes (GPA and course completion) to the learning principles we espoused and the scholarship that shaped those principles.

Methods for Writing Center Assessment          

Writing center administrators value and advocate for quality assessment and research on the impact of tutoring, though some disagreement remains about which methods comprise effective assessments and which kinds of assessment count as “research.”3 Dana Driscoll and Sherry Wynn Perdue have found that RAD research—research that is replicable, aggregable, and data-supported—comprised only 5.5% of all articles published in The Writing Center Journal from 1980 to 2009. Their follow-up study found that writing center directors bring broad, and sometimes conflicting, expectations to the concept of writing center research, as well as questions about which methodologies are best suited for such work (e.g., quantitative, qualitative). Taken together, the two articles issue a call for increased participation by writing center administrators in RAD research—both qualitative and quantitative—and for greater attention by writing center administrators to the methods, implications, and limitations of each study.

Neal Lerner has emphasized that writing center administrators and researchers can and should conduct their own research (“Counting Beans”). In an era of “outcomes” measurements and budget cuts, he argues, we must be more than “ticket tearers”; we must identify and publish the positive impacts writing centers have on student success (“Choosing Beans” 1). Yet Lerner has also cautioned that many variables complicate quantitative assessments of writing center impacts, including but not limited to students’ existing strengths as writers beyond what can be assessed through standardized testing, such as the SAT, as well as variability in the grading of assignments and courses across instructors (“Counting Beans” 3). There remain as-yet-unanswered questions, which Lerner addresses in both articles, about how well grades in any one course can meaningfully and consistently measure writing ability or learning. We operate carefully within this context, recognizing that the pressure to demonstrate “outcomes”—often defined and measured by people outside our fields—is both fierce and urgent. In many cases like ours, student outcomes are tied to potential funding. Simultaneously, we are mindful of what compositionists have termed “Ed White’s Law”—“assess thyself or assessment will be done unto thee”—and of Les Perelman’s call to recognize assessment as a rhetorical act.4 Thus, we tread carefully here but are also eager to contribute to a greater understanding of how writing centers can have a positive impact on student success. We used mixed-method analysis of data, including student and faculty surveys as well as observations of tutoring sessions and analysis of peer referrals to the RWS, to assess the overall function of the writing center from multiple perspectives. Here, however, we focus on the quantitative assessment of institutional outcomes associated with academic performance and persistence. Summaries of the other components of the assessment can be found in Appendix 1.

Sample Description and Methodology for Quantitative Institutional Assessment

Our study sample included all students who registered with the Reading and Writing Studios scheduling system (WCOnline) in the fall of 2015. As a quantitative measure of student outcomes affected by tutoring in the RWS, we chose students’ semester GPA, which we collected from transcript data. We collected data for 556 students, each of whom attended between 0 and 28 tutoring sessions.5 Using regression models, we assessed whether attendance at RWS sessions had an impact on semester GPA.

We employed two parameterizations of the key predictor, tutoring session attendance, and two ways of analyzing GPA as the outcome, resulting in a total of four analyses. The two parameterizations were (1) the raw number of sessions attended and (2) a discrete variable based on the aggregate number of sessions students attended: 0 sessions, 1 session, 2 sessions, 3 to 5 sessions, and 6 or more sessions. This grouping had the effect of folding frequency outliers (one student attended 28 sessions) into the “six or more” group. The models using the raw number of sessions naively assume that, on average, every additional session in the RWS has the same constant impact on GPA; that is, that “more sessions” will have a continued and linear impact. One key problem with such an assumption is that students experiencing the greatest learning difficulties are often referred—if not required—to attend the Reading and Writing Studios as part of their learning accommodations and support. The second parameterization allows each session-frequency group to have its own effect on GPA, but at the cost of estimating more parameters. Grouping attendance also places students who attended many sessions by choice and those who were required to attend many sessions into a single category of outliers who might otherwise have skewed a purely linear model of impact.
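
For readers who want to see the grouping concretely, the sketch below shows one way to construct the two parameterizations in Python with pandas. It is illustrative only: the DataFrame, the column names, and the toy values are our assumptions, not the WCOnline or transcript data themselves.

```python
# A minimal sketch (not our production code) of the two parameterizations,
# assuming a pandas DataFrame with one row per registered student and a
# column "sessions_attended" holding the raw count of RWS sessions (0-28).
import pandas as pd

df = pd.DataFrame({"sessions_attended": [0, 1, 2, 4, 7, 28]})  # toy data

# Parameterization 1: the raw session count, used as a continuous predictor.
df["sessions_raw"] = df["sessions_attended"]

# Parameterization 2: discrete session-frequency groups (0, 1, 2, 3-5, 6+),
# which fold frequency outliers (e.g., a student with 28 sessions) into "6+".
bins = [-0.5, 0.5, 1.5, 2.5, 5.5, float("inf")]
labels = ["0", "1", "2", "3-5", "6+"]
df["session_group"] = pd.cut(df["sessions_attended"], bins=bins, labels=labels)

print(df)
```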

We analyzed two outcomes as “impacts”: (1) semester GPA (0.0-4.0) and (2) student “success” measured more broadly, in terms of pass/fail (semester GPA of 2.0 and above versus GPA below 2.0). We assessed the impact of the key predictor on semester GPA using two models, both employing multiple regression. The first model parameterized session attendance using the raw number of sessions; the second parameterized it using the session-frequency group variable. To control for the impact of known confounders, we included the following variables in the models: race, gender, age, college-readiness measures (Compass English and reading placement scores), and years since first enrollment at Macomb (a proxy measure of amount of college experience). Research conducted at Macomb and elsewhere has shown that demographic variables such as race and gender, college-readiness measures, and amount of college experience are (sometimes strongly) associated with GPA. We assessed the impact of session attendance on the pass/fail outcome using two models, both employing logistic regression, each using one of the two parameterizations of the key predictor variable. A visual representation of the four analyses appears in Table 1 (see Appendix 2).
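
To show how the four analyses fit together, here is a minimal sketch using the statsmodels formula interface in Python. It is a hedged illustration of the modeling logic, not our actual scripts: it presumes a DataFrame like the one in the previous sketch, extended with outcome and covariate columns whose names are our own assumptions.

```python
# A hedged sketch of the four analyses described above, assuming the DataFrame
# `df` from the previous sketch also carries these illustrative columns:
# semester_gpa, passed (1 if semester GPA >= 2.0, else 0), sessions_raw,
# session_group, and the covariates race, gender, age, compass_english,
# compass_reading, and years_since_first_enrollment.
import statsmodels.formula.api as smf

covariates = ("C(race) + C(gender) + age + compass_english + compass_reading"
              " + years_since_first_enrollment")

# (1) Multiple regression of semester GPA on the raw session count.
gpa_raw = smf.ols(f"semester_gpa ~ sessions_raw + {covariates}", data=df).fit()

# (2) Multiple regression of semester GPA on session-frequency group.
gpa_grp = smf.ols(f"semester_gpa ~ C(session_group) + {covariates}", data=df).fit()

# (3) Logistic regression of pass/fail on the raw session count.
pf_raw = smf.logit(f"passed ~ sessions_raw + {covariates}", data=df).fit()

# (4) Logistic regression of pass/fail on session-frequency group.
pf_grp = smf.logit(f"passed ~ C(session_group) + {covariates}", data=df).fit()

print(gpa_raw.summary())  # and likewise for the other three models
```

Treating the session-frequency group as a categorical term lets each group carry its own coefficient relative to the zero-session group, which is exactly the flexibility described above for the second parameterization.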

Summary of Results

We evaluated the results of our analyses in terms of both significance and effect size, using both unadjusted and adjusted effect sizes; the adjusted estimates control for race, gender, age, college-readiness measures (Compass English and reading placement scores), and years since first enrollment at Macomb (a proxy measure of amount of college experience). As indicated in Table 2 (see Appendix 2), students who attended more tutoring sessions had higher semester GPAs on average. Using our naïve linear model, which assessed the marginal linear impact of each session on GPA, we estimated that each session attended increases semester GPA by 0.043 grade points (p < 0.001), with a 95% confidence interval ranging from 0.020 to 0.066 grade points. Using the discrete parameterization of session attendance and adjusting statistically for the controls listed above, differences in GPA ranged from +0.33 grade points for students attending one tutoring session to +0.79 grade points for students attending six or more sessions, compared with students who registered with the RWS but did not attend tutoring sessions.
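
To make the scale of the per-session estimate concrete, the linear model can be restated as below. The notation is ours: x_i collects the covariates listed above (and is dropped in the unadjusted fit), and the five-session calculation simply rescales the reported coefficient and its confidence interval.

```latex
% Illustrative restatement of the linear model; sessions_i is the raw count,
% x_i the covariate vector, and gamma its coefficients.
\[
  \widehat{\mathrm{GPA}}_i \;=\; \beta_0 \;+\; 0.043\,\mathrm{sessions}_i \;+\; \boldsymbol{\gamma}^{\top}\mathbf{x}_i
\]
% Worked example: five sessions correspond to an estimated difference of
\[
  5 \times 0.043 \approx 0.22 \ \text{grade points}, \qquad
  95\%\ \text{CI: } 5 \times [0.020,\,0.066] = [0.10,\,0.33].
\]
```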

The distinction between these two ways of assessing the impact of session attendance on semester GPA can be seen in Figure 2 (see Appendix 2). The straight line shows the estimated impact of the number of tutoring sessions attended after adjusting for our covariate controls (age, race, time since enrollment, and so on), and the box graph shows the impact by session-frequency group.

As is evident in Table 3 (see Appendix 2), the results of the analysis of impact on pass/fail outcomes were consistent with the results from modeling semester GPA directly. As with the effect-size estimates from modeling semester GPA directly, the adjusted effect-size estimates were monotonically ordered: students with a greater number of sessions tended to have higher odds of “success” (semester GPA ≥ 2.0) on average. The estimated increase in the odds of “success,” compared with the odds of “success” for students with no sessions, ranged from 88% for students attending one tutoring session to a nearly seven-fold increase for students attending six or more sessions.

In the logistic model using the raw number of sessions, which naively assumes that each session has the same multiplicative effect on the odds of “success” (a semester GPA ≥ 2.0), the estimated per-session effect was a 16.7% increase in those odds (OR = 1.167, p = 0.001), with a 95% confidence interval ranging from a 6.1% to a 28.3% increase in the odds of success, underscoring what we have previously described as the complications introduced by a purely linear parameterization of attendance. These confidence intervals reflect the uncertainty in the estimates of the effect sizes. Note that for frequency groupings “1” and “2,” an odds ratio of 1 is contained in the confidence interval, indicating that we cannot exclude the possibility that one or two visits have no effect on the odds of “success.” For frequency groupings “3-5” and “6+,” the lower bound of the confidence interval is larger than 2.4, indicating that we can estimate with high confidence that the odds of “success” are increased at least 2.4-fold. The broad range of impact represented here reflects the variable effect tutoring can have on students when reasonable delimitations are not set. As an example, students with one or more special learning plans who attended the RWS weekly or biweekly may have shown disproportionately greater or—more likely—smaller impact, given the specific learning challenges they face.
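
Because the pass/fail results above are reported as odds ratios, the sketch below shows how such odds ratios and their intervals are obtained from fitted logistic-regression coefficients by exponentiation. The model names and the helper function are illustrative assumptions carried over from the earlier sketch, and the single numeric example simply restates the reported per-session odds ratio.

```python
# A hedged sketch of how the odds ratios and confidence intervals reported
# above can be recovered from a fitted logistic model (e.g., `pf_raw` or
# `pf_grp` from the earlier sketch). The helper name is ours.
import numpy as np
import pandas as pd

def odds_ratio_table(fitted_logit):
    """Exponentiate log-odds coefficients and their 95% CIs into odds ratios."""
    table = pd.concat([fitted_logit.params, fitted_logit.conf_int()], axis=1)
    table.columns = ["estimate", "2.5%", "97.5%"]
    return np.exp(table)

# A coefficient whose exponentiated CI excludes 1 (as for the "3-5" and "6+"
# groups) indicates an effect distinguishable from "no change in the odds."
print(np.exp(0.154))  # ~1.167, i.e., a 16.7% increase in the odds per session
```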

Discussion of Results

The results reported here provide local evidence of a measurable impact of a writing tutoring protocol that explicitly emphasizes genre analysis, rhetorical knowledge, and self-regulated learning and transfer. As in all work of this nature, factors like motivation and schedule availability are difficult to control. However, our estimated effect sizes allow us to ask how strong an influence might be and, with considered controls, they can help us narrow the plausible explanations for observed effects in complex environments such as education.

The effect sizes produced in our analysis provide strong evidence that greater use of RWS tutoring sessions leads to greater gains in student success. In sum, the number of RWS sessions attended matters: the greater the number of sessions, the higher the semester GPA, at least up to a reasonable point.

By using session-frequency groups, we could remove some of the interpretive static introduced by a purely linear model of session attendance. Our analyses show a declining impact of attendance after many sessions, and common sense would lead us to expect such a decline. A student who attends sessions two or more times a week is likely a student with documented special needs or, alternatively, a highly engaged, high-achieving student. In both cases, one would expect the marginal impact of each additional session after the tenth or twentieth in one semester to be minimal. Using our assessment, we persuaded institutional administrators and the Board of Trustees that the RWS tutoring in place at this institution contributes to students’ overall semester-long success, measured in terms of semester GPA, even after we controlled for known confounding variables, such as race, placement test scores, gender, and time in school.

It is noteworthy that a significant, substantial association was found between exposure to RWS tutoring sessions and semester GPA, which is, after all, an aggregate measure averaged across individual courses rather than the grades on specific texts produced or grades in writing-intensive courses. The mechanism that produces these changes in GPA is still poorly understood. One possibility is simply that reading and writing abilities have quite general and substantial applicability across courses, and that work on reading and writing in the RWS affected those outcomes. A second interpretation is that the RWS consultation protocol, which emphasizes genre analysis, metacognitive reflection, and self-regulated learning, helps students adapt and transfer their learning across their courses, which is the intention of the design. The data and design of the current research, however, are insufficient to further analyze the mechanisms that affected the GPA increase. In the future, we hope to garner support for a longitudinal study of students’ attitudes and expectations about reading and writing in their college courses and how those attitudes and expectations evolve through their encounters with the protocol. We hope such a study will teach us more about how students perceive and apply the strategies we teach in the RWS and help us understand more specifically the mechanism by which students’ “success” improved.

For our pragmatic purposes, the findings were persuasive enough to secure permanent funding for the RWS and to begin an institutional conversation about genre analysis and writing in the disciplines that had previously remained underground, where it existed at all. This precipitated additional invitations for our writing center tutors to facilitate writing workshops in courses such as International Business and Occupational Therapy and in our Early College initiative. Additionally, some faculty members in our communications and writing department sought us out for examples of capable scientific writing and analysis to include in their first-year courses. Moreover, some of our high-ranking institutional administrators became vocal advocates—not only for the use of the RWS, but also for the consultation protocols we had in place. In fact, at one institution-wide meeting, the Provost publicly articulated that the RWS mission and methods were neither to proofread nor to improve grammar directly, but to serve students through grounded learning methods.

We recognize our assessment’s limitations. First, the control group of students—those who signed up for tutoring but did not attend—is very small. Because most of our registrations occurred during class visits, there is reason to believe that these students were similar in many respects to the other students in the study. While our estimates of the impact of exposure to RWS sessions may be biased by uncontrolled confounding, we have controlled in our statistical analysis for the demographic and other variables known to be associated with academic performance that we were able to identify and categorize. Hence, we believe we have made a substantial effort to clearly identify the impact of RWS sessions on overall success in courses, as measured by semester GPA.

Motivation is one such potential confounder. Research conducted in a separate study at our college explored relationships between non-cognitive skills and semester GPA using scores from a pilot of the SuccessNavigator software. Three variables in the SuccessNavigator (SN) non-cognitive assessment instrument could be viewed as reasonable proxy measures of student motivation. That analysis showed that the impact of these proxy measures for motivation was captured by the demographic covariates included in our analysis. If the SN proxies are valid measures of student motivation, then such findings suggest that our analysis indirectly models motivation through our demographic covariates and thus that student motivation might not be a substantial confounder. Ultimately, we will only be able to identify how the variable we so often call “student motivation” affects the impact of tutoring when we develop a deeper, more complex understanding of the behaviors we observe and associate with motivation.

Concluding Remarks

We do not offer this summary of experiences and findings as a way to resolve the current tensions experienced by writing center administrators in the era of accountability funding, but rather to persuade other colleagues to find ways to capitalize on these tensions as opportunities. We call on our colleagues to explain to our institutional interlocutors what we do and why we do it. And we encourage them to draw on scholarship from within and outside our disciplines to advocate for pedagogical principles when developing assessments. This kind of institutional and disciplinary translational work is essential to sustaining the validity and value of our disciplinary expertise. In this vein, we argue that accountability funding initiatives frequently contain objectives that many writing professionals also advance. For example, we can agree across our disciplines and institutions on a commitment to supporting historically underrepresented students as they complete their educational goals. By working within our rhetorical commonplaces—student inclusion and support—we believe writing instruction professionals can further and more persuasively advocate for our intersecting professional knowledges and ideals within this contested era of education.

We began this article by recognizing the fraught positions that writing center administrators occupy at the intersection of institutionally-driven outcomes assessment, education-policy initiatives, and the student-centered, process-driven values professed by a diverse family of writing studies specialists. We have aimed to achieve two things through the description of our analysis and results. First, we have aimed to demonstrate a path by which writing centers can advocate for best practices while also committing themselves to internal and external assessment of institutional “student success.” Second, we have aimed to provide a model for enacting that assessment in a way that is multi-faceted and responsive to both institutional and individual student needs. We recognize that our model is but one response to the perceived gaps in assessment approaches. It is our hope that we have provided a useful and trustworthy model that can productively motivate new conversations between writing center initiatives and college-wide assessments at other institutions, while also helping writing centers argue for sound pedagogies. We look forward to the publication of this special issue of Praxis and to reading the other articles in it, which can foster our continued growth toward an even more progressive and reflective model, explicitly for two-year college writing center work. As we close, we want to reiterate the calls for continued research on writing center assessment and for increased communication between writing studies and institutional administrators about the processes, rationales, and implications of institutional assessment at all levels of education, so that disciplinary scholarship and higher education policy can enjoy a more generative collaboration that—in the end—will best support students’ learning.

Notes

1. See, for example, Williams; Hourigan; Nystrand; Nystrand et al.

2.  See, for example, Bawarshi; Rounsaville et al.; Reiff and Bawarshi.

3.  See, for example, Babcock and Thonus; Hallman et al.

4. Discussions of “White’s Law” can be found in Bloom et al.; Elliot and Perelman; Haswell and Wyche-Smith.

5. The students in the “no tutoring sessions” group were students who registered with the RWS but did not attend any tutoring sessions.

6.  https://form.jotform.com/70263719023148

7. The observed, unadjusted difference serves as the unadjusted effect-size estimate. The parameter estimates of the coefficients representing the exposure to RWS services variables in the GLM models are the adjusted effect-size estimates.

Works Cited

Babcock, Rebecca Day and Terese Thonus. Researching the Writing Center: Towards an Evidence-Based Practice. Peter Lang, 2012.

Bawarshi, Anis S. Genre and the Invention of the Writer: Reconsidering the Place of Invention in Composition. Utah State UP, 2003. Google Scholar, http://digitalcommons.usu.edu/usupress_pubs/141/.

Bloom, Lynn Z., et al. “Symposium: What Is College English?” College English, vol. 75, no. 4, Mar 2013, pp. 425–430.

Council of Writing Program Administrators, et al. Framework for Success in Postsecondary Writing. Jan 2011, http://wpacouncil.org/files/framework-for-success-postsecondary-writing.pdf.

Driscoll, Dana Lynn, and Sherry Wynn Perdue. “RAD Research as a Framework for Writing Center Inquiry: Survey and Interview Data on Writing Center Administrators’ Beliefs about Research and Research Practices.” The Writing Center Journal, vol. 34, no. 1, Fall/Winter 2015, pp. 105–133.

---. “Theory, Lore, and More: An Analysis of RAD Research in ‘The Writing Center Journal,’ 1980-2009.” The Writing Center Journal, vol. 32, no. 2, 2012, pp. 11–39.

Hallman, Rebecca, et al. “Re(Focusing) Qualitative Methods for Writing Center Research.” The Peer Review, vol. 1, no. 0, n.d. http://thepeerreview-iwca.org/issues/issue-0/refocusing-qualitative-methods-for-writing-center-research/

Haswell, Richard, and Susan Wyche-Smith. “Adventuring into Writing Assessment.” College Composition and Communication, vol. 45, no. 2, May 1994, pp. 220–236.

Hourigan, Maureen M. Literacy as Social Exchange: Intersections of Class, Gender, and Culture. SUNY P, 1994. Google Scholar, http://bit.ly/2Athxbn.

Lerner, Neal. “Choosing Beans Wisely.” The Writing Lab Newsletter, vol. 26, no. 1, Sep 2001, pp. 1–5.

---. “Counting Beans and Making Beans Count.” The Writing Lab Newsletter. vol. 22, no. 1, Sep 1997, pp. 1–4.

National Council of Teachers of English. NCTE Position Statement on Reading. Position Statement, Feb 1999.

Elliot, Norbert, and Les Perelman, editors. Writing Assessment in the 21st Century: Essays in Honor of Edward M. White. Hampton P, 2012.

North, Stephen M. “The Idea of a Writing Center.” College English, vol. 46, no. 5, Sep 1984, pp. 433–446.

Nystrand, Martin. “The Social and Historical Context for Writing Research.” Handbook of Writing Research, edited by Charles A. MacArthur et al., Guilford P, 2006, pp. 11–27.

Nystrand, Martin, et al. “Where Did Composition Studies Come from? An Intellectual History.” Written Communication, vol. 10, no. 3, 1993, pp. 267–333.

Pennington, Jill, and Clint Gardner. “Position Statement on Two-Year College Writing Centers.” Teaching English in the Two-Year College, vol. 33, no. 3, 2006, pp. 260–263.

Reiff, Mary Jo, and Anis Bawarshi. “Tracing Discursive Resources: How Students Use Prior Genre Knowledge to Negotiate New Writing Contexts in First-Year Composition.” Written Communication, vol. 28, no. 3, 2011, pp. 312–337.

Rounsaville, Angela, et al. “From Incomes to Outcomes: FYW Students’ Prior Genre Knowledge, Meta-Cognition, and the Question of Transfer.” WPA: Writing Program Administration, vol. 32, no. 1, Fall 2008, pp. 97–112.

Toth, Christina M., et al. “ ‘Distinct and Significant’: Professional Identities of Two-Year College English Faculty.” College Composition and Communication, vol. 65, no. 1, Sep 2013, p. 90.

Williams, Bronwyn T. “Why Johnny Can Never, Ever Read: The Perpetual Literacy Crisis and Student Identity.” Journal of Adolescent & Adult Literacy, vol. 51, no. 2, Oct 2007, pp. 178–182.

Appendix 1: Summary of Qualitative Assessment Protocol Findings

Student Assessments: We used WCOnline’s survey feature to collect students’ accounts of their experiences at the RWS, their self-described levels of confidence to perform the task at hand, and summaries of next steps in their own words, in order to assess the use of the RWS and its impact on students’ self-efficacy. In short, students reported confidence and self-efficacy (e.g., “I believe I can carry out the revision steps I’ve outlined”) at rates of 95% and above, which suggests high intrinsic reward. Analysis of students’ “next steps” is ongoing, and we hope to address this methodology in later publications.

Faculty Assessments: We used the National Council of Teachers of English (NCTE) position statements on Writing Centers at Two-Year Colleges and on Reading to form the foundation of our assessment form (Pennington and Gardner; NCTE).6 We adapted additional questions about faculty perceptions of impact based on input from our Faculty Advisory Board. We selected the criteria outlined in these position statements because we wanted faculty first to understand the purpose and methods of writing center instruction and then to evaluate our operation against those standards. We also wanted faculty to critically engage our practices and to provide productive feedback about how we could better meet those outcomes. Thus, we aimed to eschew conversations about proofreading and editing by framing the conversation in terms of “industry standards,” a term our college uses and that faculty members support.

In addition to conducting informational outreach with faculty and providing grounded outreach and support for new faculty who assign writing in their classes, we also exported our philosophies in the form of teaching support “playing cards,” which we handed out during professional development events, one-on-one conversations, and student-support-team collaborations. (A PDF of our faculty outreach playing cards can be found here: http://bit.ly/2jrC3V5) Overall, survey responses demonstrated that our outreach efforts have remained incomplete. Those who had been exposed to our outreach through faculty orientation or class visits and those who had visited the studios indicated that we met or exceeded the criteria adapted from the NCTE position statements. However, roughly half of our respondents indicated they “did not know” about several sessions and could not evaluate the procedures we have in place for training or for tutoring. Our efforts to improve outreach to faculty and to include their evaluation in our assessment are ongoing.

Appendix 2

Figure 1: Consultation Hierarchy


Figure 2: Comparison of Effect-Size Estimates for Linear and Session Frequency Groups


Table 1: Analysis Parameters


Table 2: Means and Effect-Size Estimates for Semester GPA Outcome


Table 3: Effect-Size Estimates for Pass/Fail Outcome
