I've had a look into this and sadly, the data behind it can't be relied upon in my opinion. It wouldn't pass any sort of peer review.
I'd say the only good thing Steve has done here is provide the sources of his data. As such, we can check them out ourselves.
The first link, the accident data, looks pretty good to me. It's based on actual, absolute figures, so it's difficult to go wrong there. However, the original data is not cycle injuries per 100,000 cyclists at all; it's a per-population figure, which Steve has converted to a per-cyclist figure by conflating it with the other study - an absolute no-no in my opinion, because the other data isn't very good. The apparent rise in injuries is driven entirely by the far more questionable drop in the number of cyclists. The raw data looks more like this:
Note that I've not included fatal accidents not involving a motor vehicle, as these are not significant and no per-population figure is provided for them.
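To show the mechanics of what that conflation does (with made-up numbers purely for illustration, not the actual figures from either source): if you divide a roughly flat injuries-per-population series by a falling cyclist estimate taken from a separate survey, the "rise" in injuries per cyclist comes entirely from the denominator.

```python
# Illustrative only: made-up numbers, not the actual figures from either source.
# A roughly flat injuries-per-population series divided by a falling cyclist
# estimate produces an apparent rise in injuries per cyclist.

injuries_per_100k_population = [5.0, 5.0, 5.1, 4.9, 5.0]            # roughly flat
share_of_population_cycling  = [0.040, 0.035, 0.030, 0.027, 0.025]  # the disputed drop

for injuries, share in zip(injuries_per_100k_population, share_of_population_cycling):
    per_100k_cyclists = injuries / share
    print(f"{injuries:.1f} per 100k population -> {per_100k_cyclists:.0f} per 100k cyclists")

# The per-cyclist figure climbs from ~125 to ~200 even though the underlying
# injury numbers haven't moved - the whole 'trend' comes from the denominator,
# which is taken from the second, much flimsier dataset.
```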
The data from the second link is very flimsy indeed. I've been through all of the relevant links, and I believe he's taken his data from here:
As you can see, the data is mostly flat apart from the first two columns, which are wildly different. Note the following:
- The first two columns are single-year data, and the gaps between them are inconsistent with the rest of the series: two eight-year gaps in the data! It is disclosed elsewhere in the report (which is a general transport survey) that those two data points aren't even from the same study. They are unlikely to have been carried out by the same team, and it's extremely unlikely they were carried out in a comparable way.
- The more recent data is all rolling three-year data (e.g. two-thirds of the data in the 2004-07 column is the same as two-thirds of the data in the 2003-06 column). There is only one good reason to do this, and that is to smooth noisy data. I wouldn't be surprised if there are single-year spikes in the data that put the percentage cycling at a similar level to 89/90 or 97/98 (see the sketch at the bottom of this post).
- The methodology they have used for the survey just isn't appropriate in my opinion.
They only ask about a two-day period - the 'Travel Days' in the title of the data above. Someone could be cycling 20,000 miles a year, but if they didn't ride a bike on their two travel days, they'd essentially count as a non-cyclist in that data. It's very difficult to make surveys like this work: they are surveying a tiny percentage of the population about a pastime/hobby/mode of transport that only a tiny percentage of the population partake in anyway.
By my reckoning, they are surveying about 0.3% of the population. They are not likely to survey many cyclists for a start. This is the sort of survey where there could have been a cycling boom and you wouldn't notice it in the figures, because even if the number of cyclists doubles or triples, the sample is still overwhelmingly made up of non-cyclists, whose share barely changes in relative terms - and they are surveying mostly non-cyclists.
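Some rough numbers to put that in perspective (all assumed for illustration, not taken from the report):

```python
# Illustrative only: assumed numbers, not taken from the report.

# 1) Travel-day misclassification: someone who cycles three days a week
#    still has a decent chance of not cycling on either of their two survey days.
p_no_ride_on_a_day = 4 / 7                        # rides 3 days out of 7
p_counted_as_non_cyclist = p_no_ride_on_a_day ** 2
print(f"{p_counted_as_non_cyclist:.0%}")          # ~33% show up as 'non-cyclists'

# 2) Sample composition: even a large shift in cycling barely changes
#    the make-up of a sample dominated by non-cyclists.
population = 60_000_000
sample = int(population * 0.003)                  # ~0.3% of the population
for cycling_share in (0.02, 0.04, 0.06):          # cyclists double, then triple
    cyclists = int(sample * cycling_share)
    print(f"{cycling_share:.0%} cycling: {cyclists:,} cyclists, "
          f"{sample - cyclists:,} non-cyclists "
          f"({1 - cycling_share:.0%} of the sample)")
```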
The data is probably a better reflection of what the weather happened to be like on the travel days within each time period.
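And to illustrate the rolling-average point from earlier (again with invented single-year percentages, not the survey's actual data): a single-year spike comparable to the 89/90 or 97/98 figures can all but disappear once each value is averaged with its two neighbours.

```python
# Illustrative only: invented single-year percentages, not the survey's data.
# A rolling three-year average flattens a single-year spike almost completely.

single_year = [2.0, 2.1, 3.2, 2.0, 1.9, 2.1]   # one spike year at 3.2%

rolling_3yr = [round(sum(single_year[i:i + 3]) / 3, 2)
               for i in range(len(single_year) - 2)]

print(rolling_3yr)   # [2.43, 2.43, 2.37, 2.0] - the spike is barely visible
```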