By: Patrick J. O'Connor, Ph.D.
Don’t look now, but the end of the school year is about two months away. For counselors, this means awards assemblies, school picnics, and graduation ceremonies for everyone from kindergarten to high school. But thanks to school improvement programs and state mandates, the long to-do list for May and June may have a new addition—the collection of data to evaluate the effectiveness of your school counseling program.
Hard as it may be, measure we must—data is the lifeblood of most principals’ careers, so it’s time to review some basics for measuring the effectiveness of the department that improves students’ lives:
What do you want to know? This may seem like an obvious question, but obvious questions come up a lot in counseling. The key is to answer it in a way that will show key groups (administrators, parents, your teaching colleagues) everything you do—so once you think you’ve answered this question, run it past these groups to see if your answers make sense to them.
Suffice it to say, if this is the first time all year you’re thinking about this question, it may be time to do some fact gathering and hold off the evaluation until next year. Bring your stakeholders together, gather some opinions, and begin your plan for 2012; chances are the administration will give you a year to put a quality assessment together, as long as you give them a good reason for the delay—like, you were too busy actually helping students.
How will you measure what you want to know? Everyone may want to know if the college advising program is successful, but if the parents think the way to measure that is in scholarship monies earned, the principal thinks it should be measured by students going to Harvard, and you think it should be measured by how happy the students are about their college choice, you have a lot of talking to do. (By the way, never—never—measure a college counseling program by the number of students who were admitted to a college in a given year. Too many factors are involved that are out of your control; if you don’t believe me, just Google “admissions decisions 2011” and have the smelling salts at hand.) Make sure you reach consensus here, or people may insist you’re trying to hide something.
What will be a satisfactory level of response? I’m not suggesting you do this, but let’s say you want to measure the counseling program’s stress management program by giving random students a stress level test. What kind of outcome will be considered successful—an average score below More Tightly Wound Than Big Ben, or 75% of students scoring lower than on their pre-tests last fall? Remember, if you decide to go with a pre-test model of some kind and you never administered the pre-tests, let this go until next year—post-test-only data can only be used in wrong ways.
How, when, and with whom will you share the results? Once the results are in and analyzed, you can’t keep them secret for long. Decide now how the information will be shared, remembering that results can be shared in different ways with different groups that have different needs—if this sounds like counseling, you’re absolutely right.
The key here is not to rush into this. Look at what other schools are doing, make sure everyone has a common sense of purpose, and your efforts will leave everyone happier than the last day of school.