Firstly, a conspicuity benefit is not necessarily a safety benefit. We have other research from Nottingham suggesting there's no significant change in cyclist outcomes from improved conspicuity.
Secondly, this citation smells funny: "Most collisions between motor vehicles and bicyclists involve the bicyclist being struck from behind (Hutchinson & Lindsay, 2009; Kim, Kim, Ulfarsson, & Porrello, 2007)."
Hutchinson & Lindsay is about Australia, so it isn't relevant to their US claim. I think Kim et al. is the notorious paper that tabulates casualty data to reach odd conclusions, such as that cyclists are mostly to blame for collisions.
Beware that paper...
That's just the standard boilerplate preamble to justify the study's existence, and it's not what worries me about the paper: what worries me is the questionable methodology and the use of statistics. I should mention that I haven't been able to download the full article - I may try tomorrow at work.
Firstly, the mean age of the people they tested was 18.7. In other words, they recruited undergraduates for this study, which is nowhere near a representative population. Secondly, the subjects were primed to specifically look for cyclists. Thirdly, they were passengers. None of this reflects the realities of driving a vehicle, and it cannot be justified to draw general conclusions about cyclist visibility to drivers from this methodology.
There is also an implicit assumption that the vehicle was driven at a constant velocity during the encounter, and a rather inaccurate means was used to measure time. (Why not use a more accurate means to measure distance?) No account seems to have been taken of the possibility that the experimenter driving the car behaved differently on approaching the test site and so influenced the subject.
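To make the constant-velocity point concrete, here's a rough error-propagation sketch. Every number in it is made up (I still don't have the paper's figures to hand), but it shows how a modest timing error plus ignored speed variation balloons into a sizeable uncertainty in the inferred detection distance:

```python
# Hypothetical illustration (not from the paper): how timing slop and the
# constant-speed assumption propagate into a "detection distance" estimate.
# All numbers below are invented for the sake of the example.

def detection_distance(speed_m_s, time_to_subject_s):
    """Distance inferred as speed * time, assuming the speed is constant."""
    return speed_m_s * time_to_subject_s

nominal_speed = 13.4   # m/s (~30 mph), assumed constant by the method
time_reading = 4.0     # seconds between "seen it" and passing the cyclist
timing_error = 0.5     # plausible stopwatch/reaction slop, in seconds
speed_drift = 1.0      # m/s of real speed variation the method ignores

d = detection_distance(nominal_speed, time_reading)
d_err = nominal_speed * timing_error + speed_drift * time_reading

print(f"inferred detection distance: {d:.0f} m")
print(f"uncertainty from timing + speed drift: +/- {d_err:.0f} m "
      f"({100 * d_err / d:.0f}%)")
```

With those illustrative numbers you're already looking at roughly a 20% uncertainty on each measurement before any of the other problems kick in.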
186 subjects were used. That seems a lot, but given that there were four test cases, that's only around 46 for each, which is a small number per condition; treating those counts as Poisson suggests a significant degree of noise can be expected. Despite this, they report a surprisingly low p-value of under 0.001 - that is, the probability of seeing results at least this extreme by chance alone is calculated to be less than one in a thousand.
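As a back-of-envelope check on the sample-size point (assuming the 186 subjects were split evenly across the conditions, which I can't confirm without the full paper):

```python
# Back-of-envelope counting noise for ~46 subjects per condition (186 / 4).
# Treating each group size as a Poisson count, the relative fluctuation is
# roughly 1/sqrt(n); numbers are illustrative only.
import math

total_subjects = 186
conditions = 4
n_per_group = total_subjects / conditions

relative_noise = 1 / math.sqrt(n_per_group)
print(f"subjects per condition: ~{n_per_group:.0f}")
print(f"expected relative fluctuation: ~{relative_noise:.0%}")
```

That's roughly a 15% fluctuation per group from counting statistics alone, before you add the measurement problems above.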
P-values are perhaps the most abused number in science. I'd far prefer to see the standard deviation of the distances for each test case. My suspicion is that the distributions for some or all of the four cases overlap within a standard deviation of each other (a toy illustration of why that matters is sketched below). Were you to show that sort of thing to any physicist, they'd laugh at you and tell you to come back when you'd got some real data. If I get the time, I'll see whether what they report in the body of the paper even remotely justifies their 0.001 claim. My suspicion is that it won't.
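Here's that toy illustration of why I'd want the standard deviations rather than a p-value. The detection distances below are invented, not taken from the paper, but the point stands: two conditions whose spreads overlap heavily can still come out as "highly significant" at roughly this sample size.

```python
# Sketch: overlapping distributions can still yield an impressive p-value.
# The "detection distances" are simulated, purely to illustrate the point.
import math
import random
import statistics

random.seed(1)
n = 46  # roughly the per-condition sample size discussed above

# Hypothetical detection distances (metres): the means differ by less than
# one standard deviation, so the two distributions overlap substantially.
group_a = [random.gauss(60, 20) for _ in range(n)]
group_b = [random.gauss(75, 20) for _ in range(n)]

mean_a, sd_a = statistics.mean(group_a), statistics.stdev(group_a)
mean_b, sd_b = statistics.mean(group_b), statistics.stdev(group_b)

# Welch's t statistic by hand (no SciPy needed).
t = (mean_b - mean_a) / math.sqrt(sd_a**2 / n + sd_b**2 / n)

print(f"A: mean {mean_a:.0f} m, SD {sd_a:.0f} m")
print(f"B: mean {mean_b:.0f} m, SD {sd_b:.0f} m")
print(f"Welch t ~ {t:.1f}  (with ~90 d.o.f., |t| > ~3.4 corresponds to p < 0.001)")
```

Which is exactly why I'd rather they published the spread of each condition: a tiny p-value tells you far less about whether the effect would matter on the road than the overlap of the distributions does.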