Open Question: How to Quantify the Efficacy of a Coach?

I recently spoke with Diane Sweeney about her Student-Centered Coaching model. It was a fun and engaging conversation, but more than that, it gave me a little more clarity on a problem that has nagged me: the way education in the U.S. demands numbers to justify its methods. If you follow this blog and/or my podcast, you will know that I am actively working as an instructional coach. I have come to learn that the model I operate in is exceptional in many ways. Because of this, I have also come to realize that collecting accurate data that would illustrate my own efficacy is a conceptual Gordian knot.

Interpreting data from education studies can often be a test of faith more than anything. A lot of studies conflate correlation with causation. To clarify the difference, here is a classic example: when ice cream sales increase, so does the murder rate. Read at face value, it seems that increased ice cream sales cause an increase in murder. But when do people like to eat ice cream? In the summer. When do people (including murderers) spend less time at home? In the summer. The two trends are connected only by a shared cause: both increase during the summer, so they are correlated without one causing the other. It may not be the most rigorous example if one delves into the specifics, but it is a serviceable point of departure.
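The ice cream example can be made concrete with a quick simulation. The sketch below (all numbers are invented for illustration; nothing here is real sales or crime data) generates a hidden confounder, "summer heat," that drives two otherwise unrelated variables. Measuring their correlation shows how a third factor can make two variables move together even though neither causes the other:

```python
import random

random.seed(42)

# Hidden confounder: temperature over 1,000 simulated days.
temperatures = [random.uniform(0, 35) for _ in range(1000)]

# Both variables rise with temperature, plus independent noise.
# Neither variable depends on the other in any way.
ice_cream_sales = [t * 2.0 + random.gauss(0, 5) for t in temperatures]
outdoor_incidents = [t * 0.5 + random.gauss(0, 3) for t in temperatures]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = correlation(ice_cream_sales, outdoor_incidents)
print(round(r, 2))  # strongly positive, despite no causal link
```

The correlation comes out strongly positive, and a naive reading would conclude that one variable drives the other, when in fact removing the shared "summer" factor would make the relationship vanish.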

My point is that the data that matters in determining teacher efficacy is student data, and that relationship is a direct one: the teacher teaches the students, so we can reasonably ask what effect the teacher's guidance had on their learning. Granted, there is still a chasm in how that data operates too, but it is a much closer fit than trying to determine the effect a coach has on student performance, because that chain of causation runs Coach → Teacher → Student.

After my conversation with Diane Sweeney, I feel like there are some possible inroads, though there is still a lot of room for error. What Sweeney suggests is that the chain of causation look like this instead: Coach/Teacher Partnership → Student. This puts those in the partnership on equal footing. I still have questions, though. Chief among them: to determine effectiveness as a coach, there should really be a control group against which one can compare data. The problem is that comparing a teacher who is coached with one who is not is unethical. Why would a school deny students the benefit of additional support for their teachers? And that does not even touch the difficulty of adequately accounting for a veteran teacher's experience level, or for class size. Data is always difficult, even when it is easy, because there are so many moving parts.

One thing Sweeney mentioned as having yielded compelling results was a study of teachers who chose to work with a coach. For a point of comparison, the researchers looked at state averages on culminating assessments, and they found a significant positive difference in the outcomes of students whose teachers were engaged in coaching or other PD resources/support.

In all, research in the humanities is always difficult when one is seeking quantitative over qualitative data. But there may be ways to structure data collection that at least tighten the focus on certain aspects of coaching performance.