Those who followed our coverage of the Tour de France will recall our Tour 2009 post on Contador's climb, and the debate that was ignited when a French scientist, Antoine Vayer, estimated Alberto Contador's power output at around 445W, and then projected that Contador would have needed a VO2max of about 99 ml/kg/min to achieve that performance. I must just add that, given that Contador was about a minute ahead of maybe 8 or 9 riders that day, pretty much all of them would have had to have impossibly high VO2max values, not just him.
This projection is made on the basis that a rider producing a power output X is actually consuming energy Y (depending on efficiency), and we can calculate roughly how much oxygen is required to produce that energy. It requires a few assumptions, of course, but it is an important principle, and it is what I'd like to pick up on in today's post. (Just as an aside, Frederic Portoleau, a colleague of Vayer's, has since taken the time to compare Vayer's method of estimating power output (the starting point for the VO2max projection) against the actual, measured data from Nicki Sorensen's SRM. The result? The SRM value was 357W, while Vayer's estimation was 365W, only about 2% higher. So the response of many, which was to dismiss Vayer's estimations (and hence their implications) as ludicrous, seems premature. The estimate was on the high side, and certain other assumptions need to be looked at to confirm the validity of the predicted VO2max, but the estimation is not "ridiculous", as was suggested.)
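To make the principle concrete, here is the basic conversion such an estimate rests on, sketched with assumed values rather than anything taken from Vayer's actual worksheet (a gross efficiency of roughly 23%, and an energy yield of about 20.9 kJ per litre of oxygen consumed):

$$\dot{V}O_2\ (\text{L/min}) \approx \frac{(P/\eta)\times 60}{20{,}900\ \text{J/L}}$$

where P is the mechanical power in watts and η is the gross efficiency. Divide by body mass, and then by the fraction of VO2max that can be sustained for the duration of the effort (roughly 85 to 90% for a long climb), and you have the implied VO2max. As an illustration only: 445W at 23% efficiency works out at about 5.6 L/min, and for a rider in the low 60s of kilograms riding at 90% of maximum, that implies a VO2max somewhere in the region of 95 to 100 ml/kg/min, which is essentially where the debate started.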
Performance analysis in the fight against doping – two categories
This kind of analysis forms part of what is now being proposed as another arrow in the quiver in the battle against doping. The premise is pretty simple: physiology sets the limits for performance, and for how performance changes over time. That is, every single performance is underpinned by a set of measurable physiological determinants, and so there are two categories of performance analysis that can be used to "flag" suspicious performances:
1. Detecting performances lacking physiological “credibility”
2. Historical analysis to detect the rate of performance change in individuals
To give you an absurd example of the first category – if you measure my VO2max as 65 ml/kg/min, and my oxygen consumption as 60 ml/kg/min while running at a speed of 6 minutes per mile (3:43/km), then there is absolutely zero chance that I can run competitively against world-class athletes who race at 3:00/km for a marathon. Why not? My physiology is inferior, and running at 3:00/km would push me well above my maximum exercise intensity, and I would be unable to hold that intensity for the required 2 hours.
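A rough back-of-the-envelope check makes the point, if we assume (as a simplification) that oxygen cost rises more or less in proportion to running speed. Going from 3:43/km (about 16.1 km/h) to 3:00/km (20 km/h) would push my oxygen demand to roughly

$$60\ \tfrac{\text{ml}}{\text{kg}\cdot\text{min}} \times \frac{20}{16.1} \approx 74\ \tfrac{\text{ml}}{\text{kg}\cdot\text{min}}$$

which is well beyond a VO2max of 65 ml/kg/min, never mind holding it for over two hours.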
If I was to achieve this performance, say three months later, it could be flagged as lacking credibility, and you would have to ask how it was possible. The answer is that either:
- I have discovered some other way to improve performance beyond what your physiological measurements predicted (that is, doping); or
- Three months of training have seen me improve to the point where my VO2max is now 80 ml/kg/min, and I'm using 60 ml/kg/min running at 3:10/km. If you put me in the lab again, I'd produce these numbers and you would say my performance is credible, apart from the fact that I've achieved such enormous gains in so short a time. This kind of improvement, in such a short time, would be a strong indicator of doping (note that it's not a guarantee).
Therefore, if we know what physiological "boundaries" exist, we can track performance to discover exactly what the physiological implications are when a performance is achieved. A cyclist averages 6.5 W/kg on a climb lasting 30 minutes – is this physiologically credible? If we know the cyclist's measured parameters, then we are in a position to judge this more objectively. Being in this position allows more informed decisions around drug testing, and the targeting of tests to improve the likelihood of catching cheats.
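To sketch what that judgement might look like in practice, here is a minimal example in code, using the same conversion as earlier. The constants (23% gross efficiency, 20.9 J per millilitre of oxygen, and 85% of VO2max sustainable for a 30-minute climb) are illustrative assumptions, not values from any particular rider's file:

```python
# Minimal sketch: does a sustained climbing power output "fit" a rider's
# measured VO2max? All constants are illustrative assumptions.

def implied_vo2max(power_w_per_kg, gross_efficiency=0.23,
                   joules_per_ml_o2=20.9, sustainable_fraction=0.85):
    """VO2max (ml/kg/min) implied by a sustained power-to-weight ratio."""
    metabolic_w_per_kg = power_w_per_kg / gross_efficiency          # metabolic power, W/kg
    vo2_ml_per_kg_min = metabolic_w_per_kg * 60 / joules_per_ml_o2  # oxygen cost of the effort
    return vo2_ml_per_kg_min / sustainable_fraction                 # VO2max needed to sustain it

def flag_performance(power_w_per_kg, measured_vo2max_ml_kg_min):
    """Flag a climb whose implied VO2max exceeds the rider's measured value."""
    required = implied_vo2max(power_w_per_kg)
    return required > measured_vo2max_ml_kg_min, required

# A 6.5 W/kg, 30-minute climb by a rider whose lab-measured VO2max is 80 ml/kg/min:
suspicious, required = flag_performance(6.5, measured_vo2max_ml_kg_min=80.0)
print(f"Implied VO2max: {required:.1f} ml/kg/min, flagged: {suspicious}")
```

In this illustration the climb implies a VO2max of around 95 ml/kg/min, so the performance would be flagged, not as proof of doping, but as a reason to test that rider more intelligently.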
Performance tracking – rates of change
The other part is to examine how performance changes over time. Once again, the premise is the same – physiology sets the limits for how rapidly people improve. That is not to say that everyone should improve at the same rate – please, don’t read this and shout out “discrimination” against those “outliers” who produce brilliant performances, seemingly from nowhere. That happens, yes, but if you are sensible about how you track performance over time, then you can work out ways to minimize the chance of these once-off athletes affecting your result.
For example, you might look at the best performance, AND the average of the next ten or twenty performances. By taking an average, you are trying to manage the impact of one individual on your ability to interpret the data. If the same trend exists for the top 20, then you have a much stronger reason to suggest that something else is in play.
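If you wanted to build such a series yourself, the bookkeeping is simple. A minimal sketch, assuming a hypothetical list of (year, performance) records and that lower marks are better, as they are for running times:

```python
# Sketch: per-season best mark and mean of the top-N marks, so that one
# exceptional individual cannot dominate the trend.
from collections import defaultdict
from statistics import mean

def season_summary(results, top_n=20, lower_is_better=True):
    """results: iterable of (year, mark) tuples. Returns {year: (best, mean_of_top_n)}."""
    by_year = defaultdict(list)
    for year, mark in results:
        by_year[year].append(mark)
    summary = {}
    for year, marks in sorted(by_year.items()):
        marks.sort(reverse=not lower_is_better)   # best mark first
        summary[year] = (marks[0], mean(marks[:top_n]))
    return summary
```

Plot the two numbers for each season and you have exactly the kind of graphs discussed below.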
But let's not speak in the abstract here – below are three graphs that do exactly this, and then you'll see how the principle might be applied. These three are redrawn from a paper by Prof Yorck Olaf Schumacher, one of the leading anti-doping experts in sport, a man who has worked extensively with Olympic athletes and now with anti-doping agencies. The paper, "Performance Profiling: A Role for Sport Science in the Fight Against Doping?" (PubMed ID 19417234), was published earlier this year in the International Journal of Sports Physiology and Performance. If you'd like a copy, as always, just let us know!
Women’s discus – introduction of out-of-competition testing
First of all, take a look at the best performance (red line) and the average of the top 20 performances (blue line) in the women’s discus event since 1960.
It should be immediately obvious that between 1960 and the late 1980s, the event was in a state of "lift-off". Not only was the best performance improving almost every year, but the average of the best 20 performances was going the same way. Then, in 1988, out-of-competition doping controls were introduced, and the use of steroids may have declined thereafter, explaining why the event today is on par with where it was in the early 1980s – it has gone backwards, performance-wise, and many will say it is now closer to where it should be physiologically. This graph gives you a striking illustration of how doping, and its (presumed) partial removal, affect the "limitations" to performance.
Men’s distance running
Next, look at the best time and average of the best 20 times for the men’s 5,000m and 10,000m events:
I don't think I have to point out the striking change in performance, particularly in the 5,000m event, after the commercial introduction of EPO in about 1990. I'm particularly interested in how the average of the top 20 times each year changes, because the red line, which represents the best performance, and thus only one athlete, might be misleading. But the blue line, that average, very definitely heads downwards, after a period in which it had begun to level off. For the top 20 athletes to all improve in a season is suggestive of a systemic change: possibly in training, possibly nutrition, possibly equipment (imagine what swimming's graphs will look like one day!), possibly increased exposure of athletes. Or, quite possibly, doping, and the coincidence in timing between EPO becoming commercially available and this drop-off is quite difficult to ignore.
NOT proof of doping, but a flag for intelligent testing
Is this an indication of EPO use among elite distance runners? We don’t know. It could be. But there are many other reasons that may explain why the records fell suddenly. This is the challenge with performance analysis. Please read this before sending in the hate mail and criticizing my cynicism, because I must emphasize that this kind of analysis does NOT prove doping! As Schumacher states in the paper, there are many other factors that could explain why performance suddenly improves, so one must be careful not to infer doping without acknowledging a wide range of potential contributing factors.
A limit to performance? Cycling may be an easier ask…
Therefore, this graph, or any other, does not constitute proof that athletes doped. What it does do is help us to understand performance better – is it possible that we can draw a dotted line on the graph to indicate where performance ends and doping MIGHT begin? Probably not (at least for now), but that is where this is headed. For cycling, I believe it is easier: when you look at the climbing power outputs of Tour de France champions (shown again below), and ask what the implications of riding at 6 W/kg are for the physiology, then it seems feasible to say that riding at a relative power output above about 6 W/kg for longer than 30 minutes raises doubts over physiological credibility (particularly when this is repeated day after day). This cycling case is intriguing, and warrants a post all of its own, which I will write when there is more time, perhaps after the IAAF World Champs.
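In the meantime, plugging 6 W/kg into the same conversion used earlier (23% gross efficiency, 20.9 kJ per litre of oxygen, i.e. 20.9 J per ml, and roughly 85% of VO2max sustainable for a climb of 30 minutes or more, all assumed figures) gives a flavour of the argument:

$$\frac{(6/0.23)\times 60}{20.9} \approx 75\ \tfrac{\text{ml}}{\text{kg}\cdot\text{min}}, \qquad \frac{75}{0.85} \approx 88\ \tfrac{\text{ml}}{\text{kg}\cdot\text{min}}$$

That is an oxygen cost of roughly 75 ml/kg/min, and an implied VO2max in the high 80s, which sits at the upper end of what has been measured in elite cyclists, and would have to be reproduced day after day in a three-week race.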
The practical use of this information
The other application of this historical profiling is to highlight how certain athletes can be identified on the basis of performances that stand out, and then more intelligent testing can be done to confirm or refute the notion that doping is involved. One of the problems with the above graphs is that they represent a combination of many athletes each year, whereas the appropriate use of this kind of testing requires that an individual be tracked from year to year. I will say, for example, in defence of the men's 5,000m runners, that many young African athletes arrive in Europe for the very first time, having never set foot out of their village, and run sub-13 minutes. You'd have a difficult time convincing me that these young athletes (often juniors) are doping to break 13 minutes, and therefore I would propose that they are genuinely capable of 12:50 or faster.
Also, I know many of the coaches and scientists who work with the athletes out of Nairobi, and I do believe in their integrity and their approach to doping – they are adamantly against it. So there are clean athletes among those "20 best performances each year", I have no doubt. But equally, I'm sure there are some "dirty" performances as well – only testing will prove which is which, and that is why intelligent testing is required, and why performance analysis might help us understand what we are seeing a little better.
To conclude – intelligent testing is the aim
I have little doubt that the most emotive retort to this argument, that "exceptional performances" should be targeted on the basis that they may lack physiological credibility, is that we are too cynical and don't "believe". This approach was once famously used on the Champs Elysees to criticize those who had doubted a champion's credibility; we "don't believe in dreams", would be the charge. Unfortunately, it is partly true, and the climate within most sports almost compels us to react with suspicion when great performances are noted. In the words of Bengt Kayser, "open your ears and eyes and think" when it comes to doping and doping controls!
However, those who are clean would welcome this approach, because as long as it is done sensibly, it vindicates them and then everyone is a winner. It does not mean every great athlete is a doper, or should be targeted simply because they perform exceptionally, but rather that analysing great performances gives us every opportunity to test sensibly, and that benefits everyone.
I leave you with a quote, straight out of the paper in IJSPP by Prof Schumacher:
A new approach could involve monitoring the rate of improvements in competition performance of an athlete from an early age, in combination with monitoring of blood values or steroid profiles once an appropriate level of competition is reached. Although sudden increases of performance can be induced by many reasons other than doping (improved training strategies, nutrition, growth in young athletes, etc), such observations are nevertheless worthwhile to trigger target testing of the athlete. In connection with data from blood and/or urine profiling, such “performance profiling” might improve the identification of suspicious athletes’ behaviors. In a similar context, mathematical analyses of winning patterns of gamblers are used with success to identify cheaters in casinos.
We’ll be discussing this over the next week for sure, beginning with our interview tomorrow, as promised, so join us then!
Ross