Two Paths Toward Common Core Standards Assessments

A state-led effort to apply national academic standards across the United States will have two assessment models, and they're not necessarily competing with each other.

The Common Core State Standards Initiative is a state-led effort to apply national academic standards across the United States, eschewing the disparate standards of the past. Forty-five states have committed to the cause. But when states begin administering assessments in the 2014-2015 school year, they will not be using one common test.

Two assessment consortiums, SMARTER Balanced and the Partnership for Assessment of Readiness for College and Careers (PARCC), have emerged from the Common Core movement. Although it might seem counterintuitive to have different tests for the same set of standards, policymakers see a potential upside in having two distinct means of assessing Common Core. Replacing individual state exams with assessments that cross state boundaries is an untested experiment, they say, and public education could benefit from having two approaches to compare.

“Competition breeds innovation,” Joe Willhoft, executive director for SMARTER Balanced, told Governing. “If you only have one model and it doesn’t work, then you don’t know if it was a bad idea or just a bad design.”

Twenty-eight states have joined SMARTER Balanced. Twenty-four, including ten of the twelve states that won Race to the Top funds, are part of PARCC (see map below).

Alabama, Colorado, Kentucky, North Dakota, Pennsylvania and South Carolina are members of both consortiums, weighing their options before the assessments are officially introduced three school years from now. Kentucky, for example, "has opted to participate in both consortia as a means to learning the most about what each is likely to produce, and therefore make the best and most informed decision for our state when more information is available,” said Karen Kidwell, director of program standards at the Kentucky Department of Education.

PARCC and SMARTER Balanced, both of which have received federal funding to develop their testing models, are designed to be “performance-based” to fulfill Common Core’s goal of preparing students for college and a career. That means they’ll require students to demonstrate higher-order thinking through problem-solving, essay-writing and research projects, as opposed to the multiple-choice, fill-in-the-blank tests of the past. “That’s a very different architecture from the assessments that states give right now,” Massachusetts Education Commissioner Mitchell Chester, whose state was instrumental in developing PARCC, told Governing.

Both consortiums also feature periodic assessments throughout the school year. PARCC’s schedule begins with a diagnostic test at the start of the year, designed to provide teachers with a sense of their students’ knowledge and inform their instruction. A mid-year assessment updates educators on their students’ progress. SMARTER Balanced offers optional interim tests at the beginning and middle of the year.

The assessments are distinguished by some key differences. Some are procedural: PARCC requires testing in grades 9-11, while SMARTER Balanced leaves testing in grades 9 and 10 optional. SMARTER Balanced will set its cutoff scores for passing or failing after piloting the assessments in spring 2014; PARCC will set those standards after the first full year of implementation.

Other differences are more substantive, particularly in how the assessments use technology. In conversations with members of both consortiums, technology was cited as the fundamental distinction between the two.

SMARTER Balanced is defined by its computer adaptive model. The tests will adjust their line of questioning and difficulty in real time based on the responses of individual students. “With most tests, every student sees all the same items. Our test moves to where the student is,” executive director Willhoft said. “This is really important if we have an interest in whether a student is improving.”

The computer adaptive model has been around for a while, Willhoft said, but most individual states don’t have the resources to develop the necessary technological infrastructure on their own. Partnering through SMARTER Balanced presented that opportunity. Idaho, for example, had a history of experimenting with adaptive testing, which drew it to that kind of assessment, Carissa Miller, deputy superintendent at the Idaho Department of Education, told Governing.

PARCC’s tests are also computer-based, but will adhere to a fixed format for all students. Kris Ellington, deputy commissioner for accountability, research and measurement at the Florida Department of Education, explained that regardless of where a student stands entering the school year, educators want students to finish the year performing at grade level. Fixed testing models determine whether that goal was achieved, she said. “With adaptive assessments, it’s a little unclear what that means and how ambitious that can be with the skills that it tests in students,” Massachusetts' Chester concurred.

Despite some philosophical differences, though, the consortiums don’t necessarily view each other as competitors. The two groups have been in frequent contact with one another, Ellington and Willhoft said, and the assessments have evolved. PARCC initially planned to have quarterly tests that counted for full credit, Ellington said, before states “realized they’d bitten off more than they could handle.” Those tests became the lower-stakes diagnostic assessments at the beginning and middle of the year, similar to SMARTER Balanced’s optional interim tests. SMARTER Balanced, for its part, modeled its governance structure after PARCC, Willhoft said. The consortiums are also working to develop comparability metrics so that scores from PARCC and SMARTER Balanced assessments can be meaningfully compared.

Groups such as the Council of Chief State School Officers and the National Association of State Boards of Education (NASBE) have been working with states from both consortiums to implement the Common Core standards and prepare for their assessments. The regular dialogue has fostered an environment of collaboration rather than competition. That’s important, policymakers say, because the consortiums will likely have an opportunity to adjust and improve after studying their own results and each other’s following full implementation in 2014-2015. States are entering uncharted territory with assessments that cross state lines, based on national standards. No one is sure what’s going to work and what won’t.

“This is a new area for everyone. Not one size is going to fit all,” Patty Yoo, project director for Common Core at NASBE, told Governing. “This way, states are able to choose what type of assessment best fits their needs.”


[Map: state membership in the PARCC and SMARTER Balanced consortiums]

Dylan Scott is a GOVERNING staff writer.