Reassessing the Tug-and-Pull of Roster Continuity vs. Experience
It matters. But to what extent?
Torvik recently updated his 2024-25 projections by adding back a slew of 5th- and 6th-year seniors who were previously missing from the calculations, meaning we have a relatively new batch of continuity and experience rankings through which to sift. Continuity has taken on new meaning in the Portal Era, as the old method of lazily moving teams up or down based on returning production (if someone utters the phrase “returning starters,” you have my permission to shoot them on sight) is an anachronism. As I’ve noted a few times on Twitter, a handful of new coaches have entirely flipped their rosters from their 2023-24 iterations, but instead of rebuilding from the ground up — as was the custom in previous years, recruiting high schoolers and enduring the growing pains of a fledgling roster core — Pat Kelsey, Chris Holtmann, Eric Musselman, et al. have taken the opposite tack. Louisville and USC, in particular, will be fascinating case studies. Both are in the top 10 in roster experience but bottom 20 in roster continuity. The one-cycle, wholesale roster flip is a phenomenon unique to the Portal Era, which means there’s precious little precedent for evaluating these rosters.
Last season, there were only two rosters with 300-spots-or-greater KenPom gaps between continuity and experience: Butler (324 spots) and St. John’s (322). Both overperformed their preseason KenPom projections by significant margins (St. John’s was the 31st-biggest gainer by adjEM; Butler was 38th). In fact, among the 15 teams with the widest experience vs. continuity gaps, 11 outperformed their preseason rating:
I would love nothing more than to declare that the above chart irrefutably demonstrates that age matters significantly more than continuity, and that older rosters with little or no continuity are undervalued in preseason projections. Alas, I cannot tell you that, because the correlation is — at best — de minimis. Naturally, Arkansas is the proverbial fly in the ointment in the above chart, but it’s worth stepping back for a moment to emphasize just how atrocious this team was relative to expectations and to ask whether the Hogs’ flameout is instructive in identifying future calamities. Remarkably, among 362 teams, only Pacific (6-24 ATS, -8.3 avg cover margin — both worst in the country) suffered a more significant adjEM drop than Arkansas from start to finish:
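For readers who want to kick the tires on a claim like this themselves, here is a minimal sketch of the underlying correlation check. Everything below is hypothetical: `gaps` and `deltas` are invented placeholder numbers, not the actual 2023-24 KenPom data, and `pearson` is just a hand-rolled helper — this shows the shape of the test, nothing more.

```python
# Sketch of the "gap vs. overperformance" correlation check.
# All input numbers are invented placeholders, NOT real KenPom data.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical experience-minus-continuity rank gaps (in KenPom spots)
# and adjEM change vs. preseason projection for a fake 8-team sample.
gaps   = [324, 322, 290, 275, 260, 250, 240, 230]
deltas = [3.1, 3.4, -1.2, 0.8, 2.0, -6.5, 1.1, 0.4]  # one Arkansas-like outlier

r = pearson(gaps, deltas)
print(f"Pearson r = {r:.2f}")
```

With real data you would run this over all 362 teams; a single catastrophic outlier like Arkansas is enough to drag an already-weak correlation around, which is exactly why the chart above resists a tidy conclusion.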
The Arkansas Problem has several layers. Poll 20 fans and you’ll probably get 20 different answers as to why the Hogs were so atrocious last season. Most of the answers would likely cite something at least tangentially related to “chemistry” or “locker room issues.” While I suspect that explanation is true, it’s likewise true that these intangible roster problems are very difficult to quantify with available metrics. How, exactly, does someone without inside knowledge of a particular team know ahead of time that the vibes are bad? As we sit here in mid-June, mired in the dog days of the college basketball offseason, is it even reasonable to try to identify next season’s Arkansas, i.e. a team that appears “talented” on paper and is replete with older, well-established players with respectable college basketball résumés? Realistically, it’s a dart throw. Chemistry is a fickle beast. But it’s almost certain that at least one of these overhauled rosters with lofty preseason expectations (top 25-ish) will not only “bust” but, like Arkansas, will do so in catastrophic fashion.
The best path is to delineate the likeliest candidates and work backwards, analyzing each roster to determine if the pieces fit together. Of the below group, four are receiving consideration in the Groupthink Top 25s (Kentucky, USC, Louisville, Kansas State):
My working hypothesis heading into the season is that overemphasizing continuity versus experience, or vice versa, is a fool’s errand. To varying extents, roughly half of the teams in the above chart will beat preseason projections; the other half will fall short. What you rarely see, however, is any semblance of trepidation or range-of-outcomes analysis from media people in assessing these rosters. As I wrote1 last week following the Coleman Hawkins acquisition, Kansas State truly does have “top five Big 12 upside.” But what is “upside”? We see this term constantly bandied about — what does it actually mean? To me, “upside” is your 80th or 90th percentile outcome. If nearly everything that could possibly go right does go right, what is your ceiling? Frankly, you cannot tell me that KSU’s “most likely” outcome is a top 25 finish. You cannot. And yet, Kansas State will be in every (or nearly every) media top 25 poll. For various reasons (roster construction, combustible personalities, etc.), KSU might be the most reasonable facsimile for 2023-24 Arkansas. Again, emphasis on might. A total flameout (finishing, say, in the 70s or 80s in KenPom) is not the most likely outcome either, but how do you not recognize it as a reasonable possibility? And if you do recognize that the floor here is extremely low, how do you put them in your top 25?
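One way to make “80th or 90th percentile outcome” concrete is a toy Monte Carlo over season-end efficiency margin. Everything here is an assumption for illustration — the normal distribution, `MEDIAN_ADJ_EM`, and `SIGMA` are invented, not an actual KSU projection — but it shows why a team can simultaneously have top 25 upside and a scary floor.

```python
# Toy illustration of "upside as a 90th-percentile outcome."
# Distribution parameters are invented, not a real projection.
import random

random.seed(42)

# Pretend a roster's season-end adjEM is normally distributed around a
# median projection, with wide variance reflecting chemistry uncertainty.
MEDIAN_ADJ_EM = 14.0   # hypothetical "most likely" outcome
SIGMA = 6.0            # wide spread for a fully overhauled roster

sims = sorted(random.gauss(MEDIAN_ADJ_EM, SIGMA) for _ in range(10_000))

def percentile(sorted_vals, p):
    """Value at percentile p (0-100) of a pre-sorted sample."""
    idx = min(len(sorted_vals) - 1, int(p / 100 * len(sorted_vals)))
    return sorted_vals[idx]

floor   = percentile(sims, 10)   # 10th percentile: the flameout scenario
median  = percentile(sims, 50)   # the projection a poll should reflect
ceiling = percentile(sims, 90)   # 90th percentile: the "upside"

print(f"10th pct adjEM: {floor:.1f}")
print(f"50th pct adjEM: {median:.1f}")
print(f"90th pct adjEM: {ceiling:.1f}")
```

The point of the exercise: a poll slot should track the 50th percentile, not the 90th. Ranking the ceiling while ignoring the floor is exactly the media habit described above.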
USC and Louisville are worth discussing for a different reason: talent. Or lack thereof. Per Torvik’s talent rankings, USC is 14th among 18 B1G teams; Louisville is dead last in the ACC. Coaxing top 25 performances out of rosters with such pronounced talent deficits is not a reasonable expectation. Both Kelsey and Musselman opted to get asses back in seats by importing ancient rosters replete with maxed-out, high-floor players. Particularly for Louisville, this makes quite a bit of sense. After the debacles of the previous two seasons, the fans will show up to watch a top 50-60 type team. In an ideal world, hanging around the Bubble conversation all season should be sufficient to generate upward program momentum for Kelsey, but if the media tells the fans to expect a top 25-caliber team, then the fans will expect a top 25-caliber team, even if top 25 for this roster should reasonably be considered a 90th-percentile outcome.
More on this topic later in the summer.
1. https://x.com/JonFendler/status/1801666614435119267