I see Robert Peston on the news last night had a bike accident, went over the handlebars, said live on the 10 o'clock news "thank God I was wearing my helmet". Can't argue with that.
Fair enough, I won't. Mostly because the weaknesses of such arguments have been explained to you many times over, so there really can't be much point in doing it again.

Lol, it was more of a discussion point, I was barely watching it until I heard the words "bicycle accident" and it made my ears prick up.
Can. But if 372 pages of arguing with that still doesn't stop you saying silly things, I can't be a***d.
While I share your suspicion that cyclehelmets.org is reliable, there is also an issue with the reporting of negative results in the scientific literature. Put simply, a study that fails to show any positive benefit is less likely to get published, largely because the researchers are less likely to seek to publish it. In this context, that means a study that fails to show a benefit to wearing helmets is less likely to end up in the literature. How many - if any - studies that show no benefit have never seen the light of day, I have no idea. Which is the problem: we don't know the complete picture.

Publication bias is one of the things that various meta-analyses of the case-control studies have tested for. The recent Olivier and Creighton study (the one from Australia that got publicised a month or two ago) tests for it using a statistical model, which I don't understand and certainly don't trust, and funnel plots, which I do vaguely understand and still don't trust terribly much. They also try to correct by trim-and-fill, which is sort of OK if you accept the premise of the funnel plot in the first place, but it is subject to the same assumptions.
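For what it's worth, here's a toy sketch of the funnel plot idea (simulated data in Python; my own illustration, nothing to do with Olivier and Creighton's actual code). If small studies that fail to find a protective effect tend to go unpublished, the published points lose their symmetry around the true effect, which is what the plot is meant to reveal:

```python
# Toy funnel plot with simulated studies (illustrative only).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_studies = 200
true_effect = 0.0                        # assume no real effect (log odds ratio)
se = rng.uniform(0.05, 0.5, n_studies)   # small studies -> large standard errors
estimates = rng.normal(true_effect, se)  # each study's estimated effect

# Crude publication filter: studies finding a "protective" effect
# (negative log OR) get published; otherwise only the precise ones do.
published = (estimates < 0) | (se < 0.15)

fig, ax = plt.subplots()
ax.scatter(estimates[published], se[published], s=12, label="published")
ax.scatter(estimates[~published], se[~published], s=12, marker="x", label="unpublished")
ax.axvline(true_effect, linestyle="--", color="grey")
ax.invert_yaxis()                        # precise studies at the top, as is conventional
ax.set_xlabel("estimated log odds ratio")
ax.set_ylabel("standard error")
ax.legend()
plt.show()
```

Trim-and-fill then imputes the "missing" mirror-image studies needed to restore symmetry, which is exactly why it stands or falls with the funnel plot's assumptions.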
This is not the point I'm making. It is always a good idea to do some quick back-of-the-envelope calculations to check whether the numbers seem reasonable. A sanity check, if you will. The Cheshire police number implies a 33-fold reduction in fatal injury from wearing a helmet. Or, considering a hypothetical population of 1000 cyclists of whom 100 have fatal accidents, 50% helmeted and the rest not, this 97% stat, to be consistent, must mean 97 of those killed weren't wearing helmets - that's a fatality rate of 19.4% in the unhelmeted group, whilst the rate in the helmeted group is 0.6%. Which gives a 33-fold difference between groups. Of course, that difference becomes smaller as the unhelmeted cohort increases. The rate is identical for both groups when the helmet wearers are 3% of the total cyclist population - and that's far below the actual percentage of those who wear helmets in the UK. (I am assuming that the accident rate per unit distance is identical for both cohorts.)
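The arithmetic is simple enough to script. A minimal sketch of that sanity check in Python, using the hypothetical numbers above (all assumptions from the paragraph, not real data):

```python
# Back-of-the-envelope check of the "97% of killed cyclists wore no helmet" stat.
# Hypothetical inputs from the paragraph above: 1000 cyclists, 100 fatalities,
# 50% helmet wearing, equal accident rates per unit distance in both groups.
cyclists = 1000
deaths = 100
share_helmeted = 0.50

deaths_unhelmeted = 0.97 * deaths   # 97
deaths_helmeted = 0.03 * deaths     # 3

rate_unhelmeted = deaths_unhelmeted / ((1 - share_helmeted) * cyclists)
rate_helmeted = deaths_helmeted / (share_helmeted * cyclists)
print(f"unhelmeted fatality rate: {rate_unhelmeted:.1%}")   # 19.4%
print(f"helmeted fatality rate:   {rate_helmeted:.1%}")     # 0.6%
print(f"implied risk ratio: {rate_unhelmeted / rate_helmeted:.1f}")  # ~33-fold (97/3)

# Helmet-wearing share at which the 97/3 death split implies *equal* risk:
# 3/(p*N) == 97/((1-p)*N)  =>  p = 3/100
p_equal = deaths_helmeted / deaths
print(f"break-even helmet-wearing share: {p_equal:.0%}")    # 3%
```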
But even allowing for that, the TRT study reported less than a seven-fold decrease - a very large discrepancy between the two figures. That alone ought to be enough to set alarm bells ringing. Were I, during the course of my work, to generate two data sets with such a large discrepancy between them, I'd immediately suspect both sets. In fact, I'd go back and check all the underlying assumptions and the model I was using - and, if I could, check by generating a third set of data another way.
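To put a number on that discrepancy (my own arithmetic, under the same equal-exposure assumption), you can solve for the helmet-wearing share at which a 97/3 death split would imply only a seven-fold risk difference:

```python
# Risk ratio implied by a 97/3 death split with helmet-wearing share p:
#   ratio(p) = (97/((1-p)N)) / (3/(pN)) = 97p / (3(1-p))
# Setting ratio(p) = 7:  97p = 21(1-p)  =>  p = 21/118
target_ratio = 7
p = 3 * target_ratio / (97 + 3 * target_ratio)
print(f"helmet-wearing share needed for consistency: {p:.0%}")  # ~18%
```

So the 97% figure and a seven-fold risk reduction are only mutually consistent if roughly 18% of cyclists wear helmets; at any higher wearing rate the 97% figure implies a much larger risk ratio than the TRT study reported.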
Here I've used the dataset that's already the most consistent with the Cheshire stat - a set we already know to be of dubious provenance. Even so, it still fails this most basic of sanity checks - badly. That can only lead to the conclusion that the 97% figure is very much suspect.
I was discussing a scheme where, as part of a "get back to work" initiative, there is a half-day course where participants will be given donated bikes and learn to maintain them, so that one obstacle to being able to get to work is removed. So far, very admirable. They then said "we will need to provide helmets". Playing devil's advocate, I asked why. I was then given a list of the usual suspects, including "my friend's son went straight into a car door and his helmet saved his life". I did ask if that was based on her friend being a specialist neurologist, but we just got into a loop. Not saying it did or didn't, but yet again it's the perception of the average Joe in the street.
Local cyclenation group replies to the police commissioner at http://www.klwnbug.co.uk/2016/11/07/call-for-police-help-to-reduce-road-casualties/
Decathlon, who use a column of boxed helmets to make goalposts for the Sunday late shift warehouse footy game? I don't expect any shop staff on or close to minimum wage to treat the merch with kid gloves behind the scenes.

Has anyone investigated cycle crash helmet supply chains? Does damage in the supply chain explain some of the reason why real-world helmet use doesn't seem to reflect the increased physical impact protection?
Has this been identified as a problem in the past?
Manufacturers acknowledge that users can't check helmets sufficiently to verify they still work, with statements in the instructions like "There may be damage invisible to you, which may reduce the ability of the helmet to reduce the harmful effects of a blow to the head" (Specialized).
Helmet users are basically trusting the entire supply chain not to have damaged the helmet critically before they get it. Do we know how many crash helmets are DOA - Dud On Arrival?
I don't know. Has anyone verified reasons for the lack of real-world cycle helmet benefit other than risk compensation?