A few weeks ago, we picked a pair of brackets using two different psychometric models: Rasch and Classical Test Theory (CTT). Both models did well, correctly picking 74.5% and 70.5% of the matchups, respectively. In this week's article, we'll talk about a few challenges we encountered while fitting basketball data to psychometric theory.
First, exams usually don't ask the same question twice, so we needed a way to handle split records (Team A wins the first game against Team B, but Team B wins the rematch a few weeks later). In these cases, we collapsed the series into a single result and said the "winner" was the team that scored the most total points across the games, as sketched below.
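Here is a minimal sketch of that tie-breaking rule; the data layout, team names, and scores are hypothetical, not our actual data set:

```python
import pandas as pd

# Hypothetical split series: each team wins once.
games = pd.DataFrame({
    "winner":     ["Iowa", "UCLA"],
    "loser":      ["UCLA", "Iowa"],
    "winner_pts": [78, 70],
    "loser_pts":  [72, 65],
})

def resolve_split(series: pd.DataFrame) -> str:
    """Collapse a split series: the 'winner' is the team with more total points."""
    totals: dict[str, int] = {}
    for _, g in series.iterrows():
        totals[g["winner"]] = totals.get(g["winner"], 0) + g["winner_pts"]
        totals[g["loser"]]  = totals.get(g["loser"], 0) + g["loser_pts"]
    return max(totals, key=totals.get)

print(resolve_split(games))  # Iowa (143 total points vs. UCLA's 142)
```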
Second, these models are not designed for candidates who answer every item correctly; a perfect score sends the maximum likelihood ability estimate off to infinity. So, to account for South Carolina's perfect season, we assumed their ability was one step above that of the highest-ability opponent they faced during the season.
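To make that workaround concrete, here is a minimal sketch; the one-logit step size and the helper name are our assumptions for illustration:

```python
def ability_for_unbeaten(opponent_abilities: list[float], step: float = 1.0) -> float:
    """Anchor an unbeaten team's ability just above its toughest opponent.

    A perfect record sends the maximum likelihood estimate to infinity,
    so we pin the estimate manually. The size of 'step' is an assumption,
    not a fitted value.
    """
    return max(opponent_abilities) + step

# Hypothetical opponent abilities on the Rasch logit scale.
print(ability_for_unbeaten([0.4, 1.1, 1.8]))  # 2.8
```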
Third, the Rasch model predicts a 50% chance of winning (answering correctly) whenever the team (candidate) and opponent (item) are evenly matched, so some of our picks were little more than coin tosses.
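The standard Rasch item response function makes the coin-toss behavior explicit; the quick sketch below uses made-up ability and difficulty values:

```python
import math

def rasch_win_prob(team_ability: float, opponent_difficulty: float) -> float:
    """Rasch model: P(win) = 1 / (1 + exp(-(ability - difficulty)))."""
    return 1.0 / (1.0 + math.exp(-(team_ability - opponent_difficulty)))

print(rasch_win_prob(1.2, 1.2))  # 0.5 -- evenly matched, a pure coin toss
print(rasch_win_prob(2.0, 1.0))  # ~0.73 -- a one-logit edge for the team
```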
Finally, the CTT approach struggled with the large number of potential opponents (items) and the short season (exam). Because each team plays only a small sample of the possible opponents, a team can post a higher win rate while still facing an easier schedule than its rivals. While we could always add more items to an exam, we don't have the power to extend the basketball season.
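A toy comparison (all numbers invented) shows why win rate alone, the CTT-style score, can mask schedule strength:

```python
# Two teams with identical records but very different schedules.
# Under a raw CTT score (win percentage), they look interchangeable.
teams = {
    # team: (wins, games, mean opponent ability on an arbitrary scale)
    "Team X": (24, 30, 1.6),  # tough schedule
    "Team Y": (24, 30, 0.4),  # soft schedule, same record
}

for name, (wins, games, opp) in teams.items():
    print(f"{name}: win rate {wins / games:.3f}, mean opponent ability {opp:.1f}")
```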
Despite the challenges, we had a lot of fun with this year's tournament. Please let us know if you want to see more March Mathness next year.