Wednesday, July 1, 2009

TPM Thoughts by George Scott

Interesting take, from http://georgescottreports.com:

"Some readers have asked me to comment upon the Texas Projection Measure, a calculation that is now used by the Texas Education Agency in its formal ratings of school campuses. In reality, I addressed the substance of this issue (but not by name and in a different context) in great detail in my earlier series of responding to the researcher who had criticized some of my earlier reports.
Basically, the TPM is a calculation that permits the State to assert that a student who actually failed the test in a given year is on “pace” to pass the test in a future year based upon the student’s “progress” from prior years’ testing. It’s an offshoot of the disastrously dishonest and reprehensible academic fraud called the Texas Learning Index used during the Texas Assessment of Academic Skills (TAAS) era.
Under that statistical (TPM) calculation, a student who actually failed a test will not be counted against a campus’s failure rate for rating purposes if the TPM projects passage at a future grade (more or less simplified).
Think of it in this exact way. If you are the kind of person who actually believes that Nostradamus predicted in the 16th century that two planes would fly into the World Trade Center in New York City, then you are the exact kind of person who will fall for this nonsense.
The TPM is another perfect example of a common theme of George Scott Reports. Even if the TEA can prove that something is “statistically” true, it is still lying to you. It is still giving you information that appears to be honest and well motivated, but it is actually designed to give aid and comfort to its sick system of testing and supposed accountability.
At its heart and soul, the TPM is a statistical calculation that is based upon at least three major premises:
• That each grade level’s TAKS test is finely tuned academically because it is immune from external forces such as cheating, psychometric cheating, programmed progress through field testing, publicly released questions that permit ‘drill and kill’ instruction for parallel questions, and selective use of unexposed prior questions that provide known result-boosters.
• That the scale scores on one year’s test are precisely calibrated with scale scores on the next year’s test and the next year’s test and so on.
• That a student’s progress from one year to the next is an actual measurement of academic growth unaffected by any of the historical factors cited in the first bullet point. Growth, according to the TPM, is actual academic growth and NOT because of factors that the TEA can manipulate to appear to be growth.
The TEA has historically been one of the most dishonest institutions of government in the United States. The facts back up that harsh statement. If you choose to suddenly grant the pervasively corrupt organization credibility, then you should dedicate yourself to becoming a serious student of the writings of Nostradamus.
Too many statisticians are willing to lie for their clients. Statisticians and psychometricians for the TEA have been more willing than most to participate in public policy schemes that don’t tell parents and taxpayers the complete truth.
Here’s the bottom line of what you should know about the State’s public education accountability system. It was designed and implemented so that Texas could claim in state and federal courts that it was dedicated to closing the academic equity gap between White and at-risk minority students. (That’s interpreted practically as the difference in performance on the State’s accountability test.)
It was created so that academic wasteland districts such as Houston I.S.D. could achieve what they have now achieved thanks to the TEA: highly rated campuses adorned with recognized and exemplary ratings, with most achieving at least acceptable.
Senate Bill 7, adopted in the early 1990s by the Texas Legislature, mandated the closure of the academic equity gap. The Legislature established the Texas Assessment of Academic Skills (TAAS) test as the initial ‘enforcer’ of the state’s commitment.
The Supreme Court of Texas ratified that provision of Senate Bill 7 in January 2005. By the time the federal district court for the Western District of Texas in San Antonio ratified the TAAS testing program in January 2000, Texas could already prove substantial progress according to TAAS in closing the academic equity gap.
By the first year of TAKS testing in 2003, TAAS results showed such equity gaps generally reduced to the range of single digits. The year before actual administration began, the TEA’s assembled educators evaluated field test results from the ‘harder’ TAKS test. The TEA discovered that the equity gap had suddenly reappeared in dramatic fashion. TAKS called TAAS a bald-faced liar. I think it was the first and only time the TAKS test told Texas parents anything of importance that was true. Predictably, statisticians for the TEA helped cover up the truth.
So what did the TEA do? It established a phony-baloney passing standard set 2 standard errors of measurement below what the panel recommended as the transitional passing standard. Make absolutely no mistake about it. As the panel of TEA educators convened in their meetings, they had total access to all field test results at all grades. They knew exactly how many white, black and brown students had passed and failed each test.
What those field test results showed was that TAKS had exploded the myth that TAAS had closed the academic equity gap. If the State had established a quasi-honest passing standard that the ‘panel recommended,’ the equity gaps between white and minority students would have been exposed in all their ugliness, along with the accumulated corruption of Texas officials. That is the 100% reason that statisticians created the 2 SEM passing standard. It was designed to cover up the true scope of the equity gap. It was designed to give TEA psychometricians a renewed opportunity to predetermine academic progress once again in the manner of TAAS.
Hollywood could not make up this bs. It takes government to do this. In particular, it took the pervasively corrupt TEA to do that.
Once again, statisticians were quite willing to validate whatever the TEA needed validating to protect the lie that Texas is achieving academic equity for its at-risk minority students and academic excellence for the rest. With a statistical calculation of passing in place that no one, including parents, could understand, the first year of TAKS results looked worse than the last year of TAAS, but not as bad as the real results of the test documented. To reiterate, it gave the TEA psychometricians a better public starting place to scheme the State’s predetermined progress over the next five or more years.
In the subsequent years, student performance on TAKS has followed a track of improvement similar to that of the earlier years of TAAS. However, because TAKS is really a harder test (I didn’t say a good test, just a harder test than TAAS), campuses have not been able to get virtually everyone over the passing hump as they were able to in the TAAS era.
So whether it is 2 standard errors of measurement below what the panel recommended or the Texas Projection Measure, statisticians and psychometricians for the TEA do what all good contractors in search of a buck do - they solve a problem and sanctify the solution as statistical certitude.
What’s the bottom line? As parents in Katy soak in the placebo-filled ratings of exemplary and recognized this summer, just remember that so do the parents of mediocre schools in Houston I.S.D. and around the state, where the TEA’s ratings help mask high levels of functional illiteracy and poor preparation for college.
Remember this. Any rating system that allows the mediocrity of the vast majority of Houston I.S.D. to be acceptable, recognized, or exemplary will place virtually every school in Katy in the upper atmosphere as well.
Finally remember this. The question to be asked about the TEA’s rating system is NOT how good is high? It’s how crappy can high be?"

Friday, May 29, 2009

TPM and 2009 TAKS Accountability

On January 8, 2009, the USDE approved the use of the Texas Projection Measure (TPM) in the calculations for AYP in 2009. The TPM provides a method for measuring annual student improvement that also satisfies state legislative requirements passed during the 79th and 80th Texas legislative sessions. TEC §39.034 requires the measurement of annual improvement of student achievement. The TPM that was developed for TAKS, TAKS (Accommodated), and linguistically accommodated tests (LAT) is a multi-level regression-based prediction model. The model predicts student performance separately by subject in the next high-stakes grade (defined by Texas legislation as grades 5, 8, and 11). It uses current-year scale scores and campus-level mean scores.
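As a rough illustration of the model's form, here is a minimal Python sketch of a regression-based projection from a student's current-year scale score and the campus-level mean score. The intercept, weights, and scores below are hypothetical placeholders; the actual TPM equations are developed and published by the TEA for each subject and grade.

# Minimal sketch of a regression-based projection (hypothetical coefficients).
def project_scale_score(student_score, campus_mean_score,
                        intercept=150.0, w_student=0.65, w_campus=0.30):
    """Predict a student's scale score at the next high-stakes grade from the
    current-year scale score and the campus-level mean scale score."""
    return intercept + w_student * student_score + w_campus * campus_mean_score

# Example: a student scoring 2050 on a campus whose mean scale score is 2150.
print(project_scale_score(2050, 2150))  # 2127.5 with these placeholder weights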

Projection equations are developed the year before they are applied, so that the formulas can be published and shared across the state before they are used in state accountability or federal AYP calculations. For example, projection equations developed in 2008 will be applied in 2009 to predict student performance. A student projected to be at or above proficiency in the next high-stakes grade is determined to have met the improvement standard. Projections will be made each year for all subjects for all students who have valid scores in reading/English language arts and mathematics. The equations will be updated each year after the spring TAKS administration and will be published before their use the following year.
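To illustrate the decision rule above, the sketch below compares a projected score against a proficiency cut score for the next high-stakes grade. The cut-score value is a hypothetical placeholder, not an actual TEA standard, and the projected scores reuse the placeholder projection sketched earlier.

# Hypothetical proficiency cut score for the next high-stakes grade.
PROFICIENCY_CUT = 2100

def met_improvement_standard(projected_score, cut=PROFICIENCY_CUT):
    """A student projected at or above proficiency at the next high-stakes
    grade is determined to have met the improvement standard."""
    return projected_score >= cut

print(met_improvement_standard(2127.5))  # True: projected at or above the cut
print(met_improvement_standard(2080.0))  # False: projected below the cut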

Beginning in 2009, the Texas Projection Measure (TPM) will be used to determine state accountability ratings. The TPM will be evaluated as a means of elevating a campus or district rating in cases where neither the TAKS base indicator nor Required Improvement (RI) are sufficient to allow a campus or district to earn the next higher rating. For any TAKS measure not meeting the standard for the next higher rating, RI, TPM, or the Exceptions Provision can elevate the rating one level, and only one level. Combinations of RI, TPM, and the Exceptions Provision cannot be used together for one measure to elevate a rating more than one level. Different features can be used for different measures to successfully elevate a rating, but multiple features cannot be used for any one measure.
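As an illustration of the one-level rule, the sketch below raises a measure's rating by exactly one level when any single qualifying feature (RI, TPM, or the Exceptions Provision) applies, and never more than one level regardless of how many features apply. The rating labels follow the Texas accountability categories; the logic is a simplified illustration, not the TEA's actual procedure.

# Ordered rating levels, lowest to highest.
RATINGS = ["Academically Unacceptable", "Academically Acceptable",
           "Recognized", "Exemplary"]

def elevated_rating(base_rating, features_met):
    """RI, TPM, or the Exceptions Provision can raise the rating on a measure
    by one level, and only one level; multiple features do not stack."""
    if not features_met:                       # no qualifying feature applies
        return base_rating
    idx = RATINGS.index(base_rating)
    return RATINGS[min(idx + 1, len(RATINGS) - 1)]  # raise exactly one level

print(elevated_rating("Academically Acceptable", {"TPM"}))         # Recognized
print(elevated_rating("Academically Acceptable", {"TPM", "RI"}))   # still Recognized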

Of the population of students who did not pass the test for a given subject, the number who met the TPM is determined. This count of students who failed but are projected to pass at the next high-stakes grade level is added to the count of passers, and a new percentage is calculated. The new percentage is named “TAKS Met Standard with TPM.” If the “TAKS Met Standard with TPM” value is greater than or equal to the accountability standard for the subject, the measure meets the criteria for the next higher rating. If a student does not have a TPM for a test, that student is included in the TAKS indicator based on performance on the current-year test. A TPM will be calculated for all grades and subjects except grade 7 writing and all subjects in grade 11. A TPM will not be available for grade 8 science until 2010.
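A short sketch of the “TAKS Met Standard with TPM” calculation described above; the counts and the 75% accountability standard in the example are hypothetical.

def taks_met_standard_with_tpm(passers, failers_meeting_tpm, total_tested):
    """Add students who failed but are projected to pass at the next
    high-stakes grade to the passers, then recompute the percentage."""
    return 100.0 * (passers + failers_meeting_tpm) / total_tested

# Example: 70 of 100 tested students passed outright and 8 failers met the TPM.
rate = taks_met_standard_with_tpm(passers=70, failers_meeting_tpm=8, total_tested=100)
print(rate)        # 78.0 -> "TAKS Met Standard with TPM"
print(rate >= 75)  # True against a hypothetical 75% accountability standard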
