At the risk of breaking the What Happens in Vegas Stays in Vegas rule, I’ll deal you into what I’ve learned from this year’s annual Forum happening here this week. [For more context on the SAT redesign and a comparison to shifts in the ACT, I’ll point to my previous post.] The senior members of the College Board’s research and assessment team presented an update on the Redesigned SAT (“rSAT”) and related upcoming changes. The session was hardly a jackpot of new information; about 90% of it was review for those who have stayed on top of updates. The most noteworthy items for me were:
- A PSAT8/9 will debut in the fall of 2015. This is ReadiStep rebranded. Expect a testing window that runs from September to February and an additional 2-week window in the spring.
- There will be a PSAT10 (yes, separate from the PSAT/NMSQT and normed for sophomores only) debuting in March 2016. It will also have a 2-week testing window.
This is another example of the trend toward vertically integrated systems of assessments (ASPIRE, SBAC, PARCC). The College Board sees the rSAT as both an admission exam and a guidance exam; these new PSATs connote (in name) and denote (in creation) that idea. Expect a “staircase” scaling model: the PSAT8/9 sections might run from 120-720 and the PSAT10 and PSAT/NMSQT sections from 160-760, as students climb toward the SAT’s 200-800 scale. ACT used to do this with EXPLORE-PLAN-ACT.
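If it helps to visualize the staircase, here is a quick sketch of how those nested section ranges stack up. To be clear, this is just my illustration of the ranges floated at the Forum, not an official spec:

```python
# A rough sketch of the "staircase" vertical scale described above.
# The section ranges are the tentative ones mentioned at the Forum.
SECTION_SCALES = {
    "PSAT 8/9":   (120, 720),
    "PSAT 10":    (160, 760),
    "PSAT/NMSQT": (160, 760),
    "SAT (rSAT)": (200, 800),
}

def show_staircase(scales):
    """Print each step and confirm no range sits higher than the next one up."""
    steps = list(scales.items())
    for name, (lo, hi) in steps:
        print(f"{name:<12} {lo}-{hi} per section")
    # Each step's floor and ceiling rise (or hold steady) as students climb
    # toward the SAT's 200-800 scale, which is the "staircase" idea.
    for (_, (lo1, hi1)), (_, (lo2, hi2)) in zip(steps, steps[1:]):
        assert lo1 <= lo2 and hi1 <= hi2

show_staircase(SECTION_SCALES)
```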
- Full forms (6 to be exact) for the rSAT are done. Or maybe I should say they are “done,” because when I asked for granular specifications, they hedged and said they are still finalizing those. Remember the 11th-hour addition of the 10-minute Writing section back in 2004? There are always late-stage psychometric tweaks that need to occur when assessments are created or redesigned. We still don’t know things like exact section timing and questions per section. We don’t know whether the experimental/equating items will get their own section like they do now, or whether they’ll be intermixed throughout the test.
So now, as I step onto my soapbox, let me issue a quick public service warning and a plea to my fellow test prep industry leaders: be patient. We will see new forms this spring, and they will be accurate. At least a few overzealous test prep companies have felt the need to be first for the sake of being first, and that doesn’t serve anyone’s best interest in the long run. I’ve heard of at least three commercial claims this month offering full practice tests and course books for the rSAT. Most first-semester sophomores don’t need to be taking a full practice SAT yet, let alone a fabricated one that will almost certainly not match the exact structure of the real test we’ll see in a few months. Those doing this are making us all look bad, so let’s agree to be a little more responsible with our messaging and practice what we should be preaching.
- Full prototype tests are currently being field-tested in special studies. These studies will help establish concordance. When preparing to roll out a redesigned test, there’s no time to wait for operational data, so the imperfect-but-necessary patch of a concordance based on trial data will be used to translate scores at first. There will be a concordance between the current SAT and the rSAT, and then a derived concordance that goes from the current SAT to the rSAT to the ACT. Students, counselors, and colleges will need to trust these scores to mean what they claim to mean.
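For anyone who wants a concrete picture of what a derived concordance means, here is a minimal sketch of chaining two lookup tables through an intermediate test. Every score pair below is invented purely for illustration; these are not actual concordance values:

```python
# Toy illustration of a derived (chained) concordance: given a table relating
# Test A to Test B and another relating Test B to Test C, derive an A-to-C
# table by passing each score through the intermediate test.
# All numbers are made up for illustration only.

a_to_b = {500: 520, 550: 570, 600: 620, 650: 670, 700: 720}  # hypothetical A -> B
b_to_c = {520: 20, 570: 23, 620: 26, 670: 29, 720: 32}       # hypothetical B -> C

def derive_concordance(first, second):
    """Chain two concordance tables, keeping only scores that map cleanly."""
    return {a: second[b] for a, b in first.items() if b in second}

print(derive_concordance(a_to_b, b_to_c))
# {500: 20, 550: 23, 600: 26, 650: 29, 700: 32}
```

Real concordances are typically built from smoothed equipercentile relationships on large samples, which is exactly why a version built from trial data is an imperfect patch.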
- Eventually, things should iron out. The longer-term interest is to ensure that the rSAT has enough predictive validity to remain relevant. To accelerate those findings, the College Board is leaning on a cohort of college freshmen to take the rSAT so that results can be tied to freshman-year GPA. This will be a small, interim predictive validity study that will be supplanted by a more comprehensive one using the actual class first affected by the redesign. But that study can’t begin until the fall of 2017, with a report due in June of 2019. Once more, students, counselors, and colleges will need to trust scores to mean what they claim to mean.
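And for the curious, “predictive validity” in this context usually boils down to something as simple as how strongly scores correlate with first-year grades. A minimal sketch, with fabricated numbers:

```python
# Bare-bones look at predictive validity: how strongly do test scores track
# freshman-year GPA? The data below are fabricated for illustration only.
from statistics import mean, stdev

scores = [1050, 1180, 1240, 1310, 1400, 1480]  # hypothetical rSAT totals
gpas = [2.6, 2.9, 3.0, 3.2, 3.5, 3.7]          # hypothetical first-year GPAs

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(round(pearson_r(scores, gpas), 3))  # values near 1.0 mean strong prediction
```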
- The last inevitable reality about rolling out a new test is an expected delay in score reporting. Researchers will tell you that to do the scaling well, the March 2016 scores should be held back until after the May 2016 administration so that the latter can be used to improve the accuracy of the former. The possibility of disrupting students’ March, May, and June testing plans did not go over well. After hearing the audience groan about the extended delay, the Chief of Assessment described it as the worst-case scenario and assured us that they are working on ways to shorten it. So public pressure may force them to make compromises to get scores out close to on time – or not. Either way, it’s probably not reasonable to expect those scores on the normal 21-day release schedule. The College Board faced similar trade-offs during the previous SAT overhaul.
There was only vague chatter about adding more test days, but nothing concrete in the next few years. Summer tests are not on the table. More school day testing – and perhaps a September administration – is more likely, but not anytime soon. And there is no talk of any corresponding changes to the Subject Tests. So for example, the guessing penalty will remain on those – oddly – until further notice.