Dogtrousers
Kilometre nibbler
I don't understand the problem that people have with deriving results from performance analysis, provided that proper attention is given to propagation of uncertainty.
Any physical measured value has an associated uncertainty. This in turn governs the number of significant figures that it is justifiable to use in a result. Normally you would determine the uncertainty by taking multiple readings and calculating the standard deviation. But it can be justifiable to make an educated guess.
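In Python that's about three lines (the readings here are hypothetical, just to show the mechanics):

```python
import statistics

# Five repeated readings of the same quantity (hypothetical values).
readings = [65.2, 64.8, 65.5, 64.9, 65.1]  # kg

mean = statistics.mean(readings)    # best estimate: 65.1
sigma = statistics.stdev(readings)  # sample standard deviation: ~0.3
print(f"{mean:.1f} ± {sigma:.1f} kg")
```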
Now, when you combine measured values mathematically, these uncertainties propagate in a known mathematical way (a way that I used to understand when I was a young, intelligent man, rather than the old dullard I am now). So for a result derived from a given set of inputs with known uncertainties, you have an associated derived uncertainty.
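For the record, the standard first-order formula (assuming the inputs are independent) is:

```latex
\sigma_f = \sqrt{\sum_i \left(\frac{\partial f}{\partial x_i}\right)^2 \sigma_{x_i}^2}
```

where f is the derived result and the x_i are the measured inputs with uncertainties σ_{x_i}.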
So, let's say I want to derive the W/kg for a rider based upon a video of a climb. I determine the rider's weight (and associated uncertainty), let's say that's 65 ± 2 kg. I look up the meteorological conditions for the day (ambient pressure, wind, etc.) and associate an uncertainty with them. Maybe I find out some coefficient to do with the rolling resistance on that particular road. I end up with a basket of numbers, each with an uncertainty. I use these to come up with a result in W/kg with an associated uncertainty, which also tells me how many significant figures I can justifiably quote.
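To make that concrete, here's a minimal sketch in Python, with made-up numbers and a deliberately crude climb model (gravity only; no wind, no drag, no rolling resistance). The function wkg and every value in it are illustrative, not anyone's real data:

```python
import math

def wkg(m_rider, m_bike, h, t):
    """Power-to-weight from a gravity-only climb model (illustration only)."""
    g = 9.81  # m/s^2
    power = (m_rider + m_bike) * g * h / t  # watts needed to lift total mass
    return power / m_rider

# Measured inputs as (value, standard uncertainty) - all numbers made up.
inputs = {
    "m_rider": (65.0, 2.0),     # kg
    "m_bike":  (8.0, 0.5),      # kg
    "h":       (1200.0, 10.0),  # vertical gain, m
    "t":       (2400.0, 15.0),  # climbing time, s
}

nominal = {name: value for name, (value, _) in inputs.items()}
f0 = wkg(**nominal)

# First-order propagation: sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2,
# with each partial derivative estimated numerically.
variance = 0.0
for name, (x0, sigma) in inputs.items():
    dx = 1e-6 * max(abs(x0), 1.0)
    bumped = dict(nominal, **{name: x0 + dx})
    dfdx = (wkg(**bumped) - f0) / dx
    variance += (dfdx * sigma) ** 2

print(f"{f0:.2f} ± {math.sqrt(variance):.2f} W/kg")  # about 5.51 ± 0.07
```

Note the ± comes out small here precisely because this crude model omits the inputs (wind, aerodynamic drag, rolling resistance) whose uncertainties dominate in practice; add those to the basket and the sum inside the square root grows accordingly.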
Now let's say my answer is 7.9251 W/kg. At first glance this looks pretty damning. It looks like he must be cheating somehow. But if the associated uncertainty is ± 2 W/kg, then it's not damning at all, it's just 8 ± 2, which could be anything from 6 to 10. 6 W/kg is not abnormal, so this investigation proves nothing.
If proper attention is paid to propagation of uncertainty, the objection that "oh, they don't know the wind conditions, the effect of the oval chainwheels, blah blah ... pseudoscience..." disappears. However, you will probably find that the "smoking gun" also disappears, because we can clearly see the limits of such a calculation.