The state of career readiness and postsecondary preparation is changing across the country.
These changes mean career readiness programs must change as well—but how do you keep up with a set of constantly evolving needs?
By evaluating career readiness practices with continuous improvement in mind.
Building an effective program usually begins with a theory of change that connects an action to desired student outcomes.
Most school districts focus on ensuring programs and curricula are “evidence-based,” with documented proof that these interventions lead to student success (typically from an experiment in a controlled environment).
But implementation science acknowledges that there is another important step in this process: replicating the program in real-world situations.
Put simply: the plan might be great, but is it being executed with fidelity?
Answering that question requires data. And while K-12 education places a well-deserved emphasis on student outcomes, that same data isn’t sufficient for day-to-day decision-making.
The Carnegie Foundation for the Advancement of Teaching—the leading voice on continuous improvement—suggests considering data within three categories: accountability measures, research measures, and practical measures.
In career readiness, for instance, we study student attainment, college-going rates, and even unemployment rates. This data—a great example of "Accountability Measures"—shows whether graduating students are achieving long-term life outcomes. These metrics are also useful for identifying schools that are performing especially well at producing those outcomes.
But these measures are complex to track and require years of waiting. As a result, they fall short for short-term decision-making or for assessing whether a program's implementation is on track.
They also ignore the thousands of decisions and activities that go into executing a career readiness program: things like professional development, curriculum design, and individual student experiences.
If districts evaluate programs exclusively on these long-term outcomes, they end up with an unclear conclusion: either the program wasn’t successful and they’re not sure why, or the outcomes occurred but they lack the evidence to prove the program was the cause.
So when districts are evaluating their rollout for improvement purposes, the Carnegie Foundation proposes incorporating "Practical Measures"—data that is specifically designed to study the fidelity of implementation and identify short-term outcomes that let them know if they're on track.
Zooming in on Career & College Readiness programming specifically, the K-12 Career & College Readiness Benchmarking Coalition identified five key areas for evaluating modern CCR programs.
Below are some examples of Practical Measures for studying each of the five areas within your CCR programs.
You can download the full checklist with 22 Metrics here.
Key Questions to Investigate:
Sample metrics:
Key Question to Investigate:
Is data being used to assess and realign career readiness programming to ensure equitable postsecondary outcomes?
Sample metrics:
Key Questions to Investigate:
Sample metrics:
Key Question to Investigate:
Are teachers, parents, and community members taking advantage of opportunities to participate in CCR?
Sample metrics:
Key Question to Investigate:
Are the career & college readiness tools being used by students, teachers, and administrators?
Sample metrics:
The key to practical measures is that they’re both easy to measure and precisely aligned to the actions your team is taking. The examples above are a great starting point, but if you’re still feeling stuck, start with the K-12 Career & College Readiness Self-Assessment.
After taking the 30-question quiz, you’ll get a district report emailed to you that compares your results to national best practices. This is a great way to take that first step—by narrowing in on your biggest growth areas.
🎉 Take the quiz & get your district results within 5 minutes of completion.