Assessing Writing Center Effectiveness

This post is a continuation of my reflections from the SWCA 2017 Conference at the University of Mississippi. In this follow-up, I will discuss how two writing centers in our state of Alabama have developed creative techniques to show the ways in which writing centers affect student success.

During her talk, “Demonstrating Value-Added: When Administrators Request Evidence from Writing Centers,” Charlotte Brammer of Samford University discussed how to present the effectiveness of a writing center in both quantitative and qualitative terms. Brammer covered the usual channels of assessment – number of appointments, percentage increases in appointments from semester to semester, student survey questions that document abstract categories such as confidence and participation, and whether or not students return to the center after their first visit – as well as what she called “Peer Instruction.” Working with a new provost who came from a statistics and economics background, Brammer knew she had to be attentive to budgetary concerns, so she asked her provost how much money the college allocated for Peer Instruction. Rather than treating the writing center and peer writing consultants as a separate department or student service, the Peer Instruction framing makes the work being done in the Samford Writing Center part of the larger instructional mission of the university.

Matthew Kemp and Phillip Hughes of Auburn University at Montgomery also offered new ways for me to articulate the effectiveness of my own center. Their presentation, “Research Doesn’t Suck. We Promise!,” gave pragmatic advice for conducting quantitative research. Matthew Kemp pointed out that before beginning research that involves compiling feedback from individuals or focus groups, one should complete the Collaborative Institutional Training Initiative (CITI) program. Frequently used in the sciences, CITI certification gives any project some intellectual street cred and helps ensure the validity of the information collected. The program is online and free.

As the presentation moved along, I must admit that it did indeed prove that research, especially quantitative research, doesn’t suck! I especially appreciated Kemp’s suggestion that writing centers use a variety of question types when seeking student or faculty feedback. His own project used a combination of yes/no, scaled (1-5), and longer free-form responses. The results were compiled in an Excel spreadsheet. Kemp demonstrated how the mixture of question types and the many functions of Excel allow researchers to document correlations between different questions. For example, the PEARSON function, an Excel formula that calculates the correlation between two columns of data, can be used to see if students who feel welcomed in the center also feel that the writing consultant answered their questions during the session.
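To make that idea concrete, here is a rough sketch of what such a formula might look like. Assuming (in my own illustrative layout, not Kemp’s) that the “I felt welcomed” ratings sit in column B and the “my questions were answered” ratings sit in column C, rows 2 through 101, the formula would be:

=PEARSON(B2:B101, C2:C101)

A result close to 1 would suggest that students who feel welcomed also tend to report that their questions were answered, while a result close to 0 would suggest little relationship between the two.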

During the Q and A session, I asked for tips on organizing longer written responses from years of surveys. Kemp explained that while longer responses do not allow for a Pearson correlation, the COUNTIF function can show researchers the frequency with which a given term or phrase appears. I am excited to put these tips to use!
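As a quick sketch of how that might look in practice, and assuming the free-form responses were pasted into column D of the same spreadsheet (again, my own hypothetical layout), a formula like this would count how many responses mention the word “thesis”:

=COUNTIF(D2:D500, "*thesis*")

The asterisks are wildcards, so the formula counts any response that contains the word anywhere in the cell.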

1 Comment

  1. You brought up excellent observations in this post about how quantitative and qualitative data can be modified to produce intriguing results! I also appreciate that you asked the presenters thoughtful questions that we can apply to our writing centers!

