Before and after change in injuries in bikeshare cities compared to control cities (source data: Graves et al, via Streetsblog)

It’s every researcher’s greatest fear: getting it wrong and getting found out. That’s what happened this week to researchers who published findings supposedly showing the risk of head injury increased for all cyclists when cities introduced bike share schemes (see Proportion of head injuries rises in cities with bike share programs).

The claims made by Graves J and her co-authors in their published paper, Public bicycle share programs and head injuries, might’ve gone untested if they hadn’t gone on to recommend that bike share schemes make helmets available for users.

That attracted the attention of critics who discovered the published data actually showed the opposite – cycling injuries, including head injuries, fell in cities that implemented bike share.
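The arithmetic behind that misreading is simple but easy to trip over: a proportion can rise even while the underlying count falls, whenever the denominator (total injuries) falls faster. A minimal sketch with hypothetical counts – not the study’s actual figures:

```python
# Hypothetical before/after injury counts (illustrative only, not the
# Graves et al data) showing how the *share* of head injuries can rise
# while the *number* of head injuries falls.
before = {"head": 30, "other": 70}   # head injuries are 30% of 100 total
after = {"head": 25, "other": 45}    # head injuries are ~36% of 70 total

def head_share(counts):
    """Head injuries as a fraction of all injuries."""
    return counts["head"] / (counts["head"] + counts["other"])

assert after["head"] < before["head"]          # absolute head injuries fell...
assert head_share(after) > head_share(before)  # ...but their proportion rose
```

Reading the rising proportion as rising risk is exactly the trap: here total injuries fell 30% while head injuries fell only 17%, so the head-injury share climbed even as cyclists got safer overall.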

The really interesting issue here isn’t helmets or the fact the researchers misinterpreted their own data; it’s why the level of injuries suffered by all cyclists in a city – not just those who use bike share – falls when bike share is introduced.

The researchers compared five cities that introduced bike share between 2007 and 2011 with five that didn’t. They looked at the level of injuries in the two years prior to bike share and in the first year following commencement. (1)

Unfortunately the Graves et al paper is gated; fortunately, though, Streetsblog’s Angie Schmitt reproduced the key table from it (WaPo is wrong: head injuries are down, not up, in bike share cities). (2)

It shows that when properly interpreted, the data reveals total cycling injuries fell 28% in the first year in the five bike share cities and moderate to severe head injuries fell 27%. In the five control cities, total injuries increased 2% and moderate to severe head injuries increased 6% (see exhibit).

That’s a remarkable result; even though the level of cycling presumably increased more in the cities that introduced bike share, injuries went down dramatically and, moreover, immediately. So what might account for such a startlingly strong negative correlation between cycling injuries and bike share?

Many commenters at Streetsblog have no doubt it’s due to the “safety in numbers” effect: bike share increases the number of cyclists on the streets, and their higher visibility leads to behavioural adaptation by motorists, i.e. they drive with greater care around all cyclists.

Vox reporter Joseph Stromberg agrees (The media got it wrong: bike share programs don’t increase head injuries):

One possibility is the basic fact that the number of bikers on the road most strongly predicts biking safely…When drivers get used to seeing cyclists everywhere, they’re much less likely to hit them…the most reasonable interpretation of this new data is that the (bike share) programs made biking safer by putting many more bikes on the road…

Eric Jaffe at CityLab is more cautious but also thinks the safety in numbers effect might be part of the explanation (Head injuries didn’t rise in bike share cities; they actually fell).

I agree the increased visibility of cyclists might well be a factor, but 28% is an enormous drop. I’m sceptical that a sudden change of that magnitude is the result primarily, or even to a large extent, of the safety in numbers effect.

One reason is the fall happened in the first 12 months following implementation, when the schemes were still finding their feet. Minneapolis’s Nice Ride, for example, began with only 65 stations and didn’t start expanding until the second year (by its fourth year it had 170 stations).

Another reason is two of the schemes – in Montreal and Minneapolis – shut down for the winter, thereby reducing the period of increased visibility. Nice Ride, for example, closes from the first week in November to the first week in April.

An important part of the behavioural adaptation explanation is that motorists themselves are likely to be cyclists and hence empathise with riders. That’s plausible in some European countries where cycling’s mode share for all purposes can exceed 20%, but much less so in North American cities where on-road cycling levels are around 1% or less.

It also can’t be assumed the safety in numbers effect applies automatically in all situations. Thompson et al (Reconsidering the safety in numbers effect for vulnerable road users: an application of agent based modelling) say that recent figures from London and San Francisco,

demonstrate a sharp rise in serious injuries among cyclists at rates that cannot be explained by commensurate increases in bicycle volumes alone. Consequently, an assumption that greater numbers of cyclists will reduce road injury risk under all circumstances may be overly simplistic.

Olivier et al examined cycling injuries in NSW from 2001 to 2010 and concluded that the “data suggest a proportional change in cycling is associated with a similar change in the proportion of cycling-related injury and is not supportive of the safety in numbers effect for cycling”.

Bhatia and Wier call for caution in applying the concept (Safety in numbers re-examined: can we make valid or practical inferences from available evidence?):

Given the paucity of evidence supporting a specific mechanism for the safety in numbers effect, alternative plausible explanations of the non-linear association behind it, and a potential for unintended consequences from its policy application, the authors call for caution in the use of safety in numbers in transportation policy and planning dialogue and decision-making.

CityLab’s Eric Jaffe suggests an alternative explanation: it could be that cities which introduce bike share might also tend to provide better bike infrastructure for use by all riders, resulting in fewer injuries across the board.

I think it’s possible the safety in numbers effect is part of the explanation, but the drop seems so implausibly large and sudden that I suspect there might be something in the researchers’ methodology or their data that hasn’t been adequately accounted for.

That’s possibly uncharitable, but it’s a tempting explanation given the authors’ misinterpretation of the data. There are a number of issues I’d want to look at in greater detail.

For example, I note that almost 9,000 cyclists were admitted to hospital in Australia in 2010-11 (see Which road users are most likely to end up in hospital?). Although metro New York (one of the control cities) alone has a population that approaches Australia’s, the authors count only 1,853 cycling injuries over two years in the five control cities. I’d like to see an explanation for this.
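A rough back-of-envelope comparison shows why that count looks low. The population figures below are my own approximations, not from the paper (Australia around 22 million in 2010–11; metro New York alone around 19 million):

```python
# Back-of-envelope sanity check (approximate populations are assumptions,
# not figures from Graves et al or the AIHW data).
aus_admissions_per_year = 9000        # cyclists hospitalised, Australia 2010-11
aus_population = 22_000_000           # approx. Australian population, 2010-11

control_injuries = 1853               # over two years, five control cities
control_years = 2
ny_metro_population = 19_000_000      # approx. metro New York alone

aus_rate = aus_admissions_per_year / aus_population
# Generously attribute every control-city injury to New York's population alone:
control_rate = (control_injuries / control_years) / ny_metro_population

print(f"Australia: {aus_rate * 1e5:.1f} hospitalisations per 100k per year")
print(f"Control cities (NY pop only): {control_rate * 1e5:.1f} per 100k per year")
```

Even crediting all the control-city injuries to metro New York’s population by itself, the implied rate is roughly an order of magnitude below Australia’s, which is why the 1,853 figure warrants an explanation.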

I’d want to be sure the authors separated out injuries incurred in off-road cycling – in Australia, they account for 41% of all cycling hospitalisations. I’d want to see if they’ve allowed for the fact that a large proportion of on-road cycling injuries (around half in Australia) don’t involve an interaction with a vehicle but are due to causes like falls.

I’ve more questions, but the main thing is I’m sceptical that most or even a large part of the apparent drop in across-the-board injuries can be put down to the safety in numbers effect; it seems too good to be true.

While I think the safety in numbers effect is a real phenomenon (see Cycling: is the safety in numbers effect all about the numbers?), I expect when and at what strength it’s triggered is a complex matter.


  1. The bike share cities were Montreal, Minneapolis, Washington DC, Boston, Miami Beach. The control cities were Vancouver, New York, Milwaukee, Seattle, Los Angeles.
  2. Another annoying case of researchers not making their paper easily available for a wider audience; I’d appreciate it if someone could e-mail me a copy of the paper; address in About This Blog (done, thanks)