Regular readers may remember there was some vigorous discussion at Croakey earlier this year after an SA study comparing the outcomes for homebirths and hospital births generated misleading headlines. The coverage largely reflected the slant put on the study by the Medical Journal of Australia’s press release and an accompanying editorial by the AMA president, Dr Andrew Pesce (see here, here and here for examples of the previous Croakey posts).

Well, the discussion didn’t stop there. I am writing a piece for the Crikey bulletin on related issues, some of which are also aired in the current issue of the BMJ (an extract is available here, but the full article is paywalled).

Below are three sets of comments that helped inform the BMJ and Crikey articles. Firstly, an email comment from Drs Steven Woloshin and Lisa Schwartz of the Center for Medicine and the Media at the Dartmouth Institute for Health Policy and Clinical Practice in New Hampshire. They are international leaders in efforts to improve media coverage of health and medicine.

Secondly, an e-interview with University of Minnesota journalism academic Associate Professor Gary Schwitzer, blogger and publisher of HealthNewsReview.

Finally, there is a detailed analysis of the SA study by Dr Andrew Bisits, an obstetrician and epidemiologist, which reinforces some of the concerns raised about how the findings were reported in the press release, editorial and media.

1. Drs Steven Woloshin and Lisa Schwartz write:

“We don’t think the press release did a good job communicating the study findings.

It exaggerates the danger of home birth: the title says “higher risk of perinatal deaths” when that didn’t differ between the groups; the lead par highlights “intrapartum related” deaths when overall deaths didn’t differ; and it presents only those scary odds ratios without providing the absolute risks. And it minimizes the potential benefits of home birth: rates of specialized neonatal care, hemorrhage and tears were lower among the home births, but the release qualifies that these differences were not statistically significant.

The editorial isn’t any better. Sadly, neither is the journal abstract. Where are the absolute risks for the ORs of 7 and 27?

We think press releases do more than just draw attention to studies — they also shape how journalists think and write about them.  Consequently we’ve been arguing for structured press releases (like journal abstracts) with clear rules about giving absolute risks and noting limitations.”

***

2. Email interview with Associate Professor Gary Schwitzer:

Q: Do you think the press release provides a fair or useful summary for journalists? Do you think it adequately acknowledged some of the uncertainties surrounding the study’s findings?

A: I’m in fits even with the first sentence. “A retrospective population-based study has added to previously published evidence…” What are the limitations of a retrospective population-based study? The news release doesn’t address this. What is the quality of the evidence of the previously published studies? The news release doesn’t address this either.

Deeper in the news release, it states: “Prof Keirse said that while the data showed that planned home births had a perinatal mortality rate similar to that of planned hospital births, they had a sevenfold higher risk of intrapartum death and a 27-fold higher risk of death due to intrapartum asphyxia (lack of oxygen during childbirth).” Huh?

Journalists – and the general public – are going to need a little help breaking down this statement. Sevenfold and 27-fold higher risks? 7 times what? 27 times what? If I’m reading the study correctly in a rush, the intrapartum death rate was 0.8 per 1,000 births for planned hospital births and 1.8 per 1,000 births for planned home births. Deaths attributed to intrapartum asphyxia were at the rate of 0.3 per 1,000 births for planned hospital births and 2.6 per 1,000 for planned home births. These frequency data are far more useful than the terms used in the news release.
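To see the point concretely, here is some rough back-of-envelope arithmetic using only the per-1,000 rates quoted above. Note that the resulting crude ratios will not match the release’s 7-fold and 27-fold figures, which are adjusted odds ratios from the study’s regression model – itself a reason plain frequencies are the more transparent currency.

```python
# Crude (unadjusted) risk ratios computed from the per-1,000 rates quoted
# above. Illustrative arithmetic only: the release's 7-fold and 27-fold
# figures are adjusted odds ratios from the study's regression model,
# so they need not match these crude ratios.

rates_per_1000 = {
    "intrapartum death": {"hospital": 0.8, "home": 1.8},
    "death from intrapartum asphyxia": {"hospital": 0.3, "home": 2.6},
}

for outcome, r in rates_per_1000.items():
    ratio = r["home"] / r["hospital"]
    print(f"{outcome}: {r['home']} vs {r['hospital']} per 1,000 births "
          f"(crude ratio about {ratio:.1f}-fold)")
```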

The news release is simply inaccurate in stating “The study also found that low Apgar scores were more frequent among planned home births.” As I read the study, the rate was 1.1% for planned home births and 1.4% for planned hospital births.

It’s ironic that the final quote in the news release has no ending. It says: “Those who argue that planned home birth in Australia can be safe will have to show this on the basis of…..” On the basis of what? The same kind of mixed evidence that this news release promotes?

This was, in short, an awful news release. It projects certainty where a great deal of uncertainty exists and emphasizes certain findings to the exclusion of others that may not fit a certain policy agenda.

Q: What do you think would have been an accurate journalistic representation of the study’s findings – e.g. what would have been a fair lead or a fair headline? (Most reports I’ve seen concentrated on the “sevenfold higher risk of intrapartum death and a 27-fold higher risk of death due to intrapartum asphyxia” highlighted in the release.)

A: Honestly, I think a fair headline would have been a mealy-mouthed, “This study shows a mixed bag of outcomes, perhaps raising more questions than it answers.”

Q: There has been a lot of discussion about the potential pitfalls of the media simply reporting relative risks and benefits when it comes to medicines. Should we be having similar conversations about other types of interventions? Does it matter that the reporting of this study would make it very difficult for journalists to determine the absolute risks of homebirth versus hospital birth?

A: The importance of providing absolute risk/benefit data – or natural frequencies, as Gerd Gigerenzer advocates – applies to any situation where a claim is being made about the efficacy of an intervention or an approach, not simply with medicines. Journals are inconsistent in what they demand from authors. Some allow relative and absolute data to be used within the same paper – sometimes with relative data used for benefits (because it looks more impressive) and absolute data used for harms (because it looks less harmful).

The wide-as-the-ocean confidence intervals in this study are cause for concern. Did the journal address this with the authors? Why not educate journalists and the public about what these confidence intervals mean?
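For readers who want to see why small numbers produce such oceanic intervals, here is a minimal sketch. The counts are hypothetical – chosen only to mimic the study’s situation of a handful of events in a small exposed group – not the paper’s actual data:

```python
import math

# Wald 95% confidence interval for an odds ratio from a 2x2 table.
# Hypothetical counts: a handful of events in a small exposed group
# versus a very large comparison group.
a, b = 3, 997          # events / non-events among planned home births
c, d = 300, 299700     # events / non-events among planned hospital births

log_or = math.log((a * d) / (b * c))
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
lo, hi = (math.exp(log_or - 1.96 * se),
          math.exp(log_or + 1.96 * se))

print(f"OR = {math.exp(log_or):.1f}, 95% CI {lo:.2f} to {hi:.1f}")
# Prints roughly: OR = 3.0, 95% CI 0.96 to 9.4 -- an interval so wide it
# spans everything from "no difference" to a ninefold increase in risk.
```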

Q: Do you think journal press releases should be structured to provide journalists with a balanced interpretation of a study’s findings – e.g. what did the study involve? What are the strengths and weaknesses of its design? What did the study find? How confident can we be of the reliability of these findings? What are their clinical/policy implications?

A: I think that journal press releases should follow the criteria employed by the Media Doctor or HealthNewsReview.org websites. Among other things, readers should be able to easily see if the study answered questions such as:

* How often do benefits occur – in absolute terms?
* How often do harms occur – in absolute terms?
* How strong is the evidence?
* Is this condition exaggerated (disease-mongering)?
* What’s the total cost of the intervention and how does it compare with alternative options?
* Are there alternative options and how does this approach compare, not only in cost but in benefits/harms?
* Is this really a new approach?
* What is the current availability of the approach?
* Who’s promoting this?
* Do they have a conflict of interest?

There is a complete dissonance between the damning tone of the news release and this important sentence from the paper itself: “Although our study has shown few adverse outcomes from planned home births in SA, small numbers with large confidence intervals limit interpretation of these data.” Indeed, small numbers with large confidence intervals make any meaningful comparisons difficult.

Q: At the Croakey blog, the MJA editor Martin Van Der Weyden argued that the role of a journal press release was simply to attract journalists’ attention and it was up to them to then read the full study and make up their own minds. Do you think this is a useful assessment? If not, why not?

A: This is perhaps why journals should stop issuing news releases. If it’s all about attracting attention, not helping journalists and the public they serve to evaluate the quality of the evidence, which in this case is appropriately in question, then what public good is being served by journal news releases? Journals know that many journalists need help scrutinizing the evidence, yet they use news releases to get more publicity for their journal. This, of course, serves the journal’s goals of increasing the prestige of the journal and making it more attractive to advertisers. But it doesn’t do much to improve public understanding of science, does it?

Q: Another issue is the peer review process. There is so much focus on multidisciplinary care and research etc. Is it appropriate for medical journals to ask only obstetricians (the reviewers were three obstetricians and one statistician) to review an article like this, especially given the multidisciplinary nature of maternity care?

A: Evidence is evidence, whether one is an ob-gyn or a primary care provider or a journalist. So it shouldn’t require an ob-gyn to review an ob-gyn-related manuscript. Who the peer reviewers are in any given journal on any given manuscript is an important question – one that gets very little attention, so you’re wise to bring it up. It would seem wise to include a non-obstetrician in the mix if one truly cared about an even-handed evaluation of the evidence.

Q: What lessons can be drawn from this case (if any) for journal editors and readers, journalists and their readers, and researchers?

A: Journalists – and people who read news based on medical journal articles – need to realize that what is published in journals was not etched in stone tablets and handed down from a mountaintop. There are flaws in the way journals review and accept manuscripts for publication. So flaws in the underlying studies or analysis can be missed. When this happens, a manuscript may be allowed – within this flawed system – to make claims that go beyond what the evidence shows. Then, when a clear conflict of interest arises in the selection of the editorial writer, the process is further clouded. Worse yet, when the journal writes news releases that trumpet the findings and minimize the limitations, the scenario sinks deeper into an ethical abyss. Journal editors, researchers and journalists need to realize that the credibility of the publication process and of medical research itself is jeopardized in such instances.

***

3. Analysis of the study by Dr Andrew Bisits, obstetrician and epidemiologist

“The paper in the MJA of 18/1/2010 reports the results of a study that compared outcomes for women who planned a homebirth with those who planned a hospital birth in the state of South Australia between 1991 and 2006.

The objective is clear; no hypothesis is proposed for refutation because the study is an exploratory or hypothesis-generating exercise. The study group consisted of those women who intended to have a homebirth in South Australia between 1991 and 2006; the control group consisted of those women who intended to have a hospital birth in the same period. The classification of these women seems to be clear cut. The data about both groups come from the SA perinatal database and confidential inquiries into perinatal adverse outcomes. No mention is made of the accuracy of these sources of data.

The key study factor was intention to birth at home. The key control factor was intention to birth at hospital. There is a veneer of simplicity about these factors; however, they are both complex interventions with many possible variations.

The authors analysed the data with the following methods:

• The frequency of perinatal mortality, intrapartum death and death due to intrapartum asphyxia was reported for the two groups.

• Perinatal mortality was reported as the number of babies lost (up to 28 days post partum) per thousand births.

• The study and control groups were compared for important baseline differences.

• Odds ratios for the above outcomes were calculated, with planned hospital births as the reference group.

• Odds ratios were adjusted, using logistic regression, for important baseline differences between the study and control groups.

Only those baseline differences recorded on the database were used for adjustment. No consideration was given to unmeasured baseline differences, e.g. how fearful or confident the women were, the skill of the carers, the available transfer infrastructure, or the type of cooperation between the homebirth practitioner and the local hospital. Because the numbers of outcomes in the study group were small, the data would have been best analysed with exact logistic regression methods rather than standard logistic regression methods, which rely on assuming that the results will hold in a sufficiently large sample. The study report says that this could not be done due to the limits of the software; however, there is a software package, LogXact, which can deal with such data sets. With the methods used in the study, the odds ratios are likely to be biased.
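As a simple illustration of what “exact” buys you – a sketch only, since the study’s analysis involved covariate adjustment, which is what exact logistic regression packages such as LogXact handle – Fisher’s exact test on a single 2×2 table with hypothetical counts avoids the large-sample approximation entirely:

```python
from scipy.stats import fisher_exact

# Fisher's exact test on a 2x2 table (hypothetical counts). It conditions
# on the table margins instead of relying on a large-sample approximation,
# which is the same motivation behind exact logistic regression when
# outcome events are rare. Exact logistic regression (e.g. LogXact)
# extends this idea to models that adjust for covariates.
table = [[3, 997],        # events / non-events, planned home births
         [300, 299700]]   # events / non-events, planned hospital births

odds_ratio, p_value = fisher_exact(table)
print(f"sample OR = {odds_ratio:.1f}, exact p = {p_value:.3f}")
```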

Some cursory details are mentioned about each case and the factors that led to the adverse outcome. It would have been more useful had the authors dedicated a whole table to the nine cases, with the pertinent detail. As it stands, the results are dominated by the summary statistics.

The odds ratio of 27 for death due to intrapartum asphyxia is impressive, but it has to be interpreted in light of its very wide confidence interval. This adjusted odds ratio was obtained by trying to ensure that the two groups were more comparable on the basis of risk factors that were reported in the database. If more appropriate confounders were included in the adjustment, the confidence interval would likely be wider still, and the estimate therefore less stable.

The other contentious area in the analysis is the choice of factors used to adjust the odds ratios. Simply throwing in more factors can lead to biased results, and this can easily happen with the methods reported by the authors.

One approach to analysis in these studies is to use propensity scoring. Each subject is given a propensity score – the estimated probability, given her baseline characteristics, of being in the homebirth group. Homebirth subjects are then compared with those women in the hospital birth group who have a similar propensity score. While this method has its critics, it probably leads to more accessible and presentable results than logistic regression, which for most readers is a “black box” in the statistical analysis process.
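For the curious, here is what the idea looks like in practice – a minimal sketch only, with invented covariate names, assuming scikit-learn is available; a real analysis would also check covariate balance after matching:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_on_propensity(X, treated):
    """Pair each home-birth subject with the hospital-birth subject whose
    propensity score is closest. X is a 2-D array of baseline covariates
    (e.g. maternal age, parity -- invented examples); treated is 1 for a
    planned home birth and 0 for a planned hospital birth."""
    # Propensity score: estimated probability of planning a home birth,
    # given the measured baseline characteristics.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

    home = np.flatnonzero(treated == 1)
    hosp = np.flatnonzero(treated == 0)

    # 1-nearest-neighbour matching on the propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(ps[hosp].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[home].reshape(-1, 1))
    return home, hosp[idx.ravel()]  # matched pairs of row indices
```

Outcomes are then compared within the matched pairs, which is easier to present to lay readers than a table of regression coefficients.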

Trying to understand the results of this paper is reminiscent of the Term Breech Trial, where the initial interpretation was determined solely by the summary statistics showing an impressive excess risk in those women who were randomized to a planned vaginal breech birth. This was so impressive and frightening that vaginal breech delivery was no longer an option for women with a breech baby at term. As the initial excitement (fuelled by a very supportive editorial in the Lancet) and horror settled, more critical thinking took place about the individual cases where there were adverse outcomes. Questions were asked about the appropriateness of management. It became clear that the management in the three cases of perinatal mortality was totally substandard and could not be explained away with the intention-to-treat approach. Further critical thinking took place about the ability of centres to provide adequate and competent care for women with a planned vaginal breech delivery. Five years down the track, there was a call in the AJOG from one of the participating centres for the results of the trial to be withdrawn.

The point to be made here is that valuable critical information was obtained not from the various summary statistics but from the actual details of the cases where there were adverse outcomes, and from inside knowledge about the conduct of the trial. The homebirth paper does not report enough detail about the adverse events in the home birth group to allow a critical evaluation of how important the place of birth was in contributing to the adverse outcome. Looking at other studies that report favourable outcomes with homebirth, it is clear that the following are important:

• A good working relationship with the referring hospital

• Skilled and credentialed midwives

• Guidelines for referral

• Robust transfer arrangements

• Distance from referring hospitals

• Confidence that the woman and her partner have in their system of care.

The authors chose to analyse the data using a fixed-effects assumption about the risk of homebirth, as opposed to a random-effects assumption. The fixed-effects assumption is convenient for these data because the incidence of outcomes in the control group is so low. It says that the risk of homebirth in the population at large is a single fixed value, which we aim to infer from our sample data. However, the more plausible assumption is that the risk of homebirth in the population at large is not a fixed effect but rather a range of effects, given differing conditions for homebirth.

Confidence intervals under a random-effects assumption tend to be wider. More than likely, had a random-effects assumption been used to analyse the data, the confidence interval surrounding the odds ratio of 27 would have been wider and would have crossed unity – indicating that no clear effect of homebirth was demonstrable in this population data set.
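Schematically – in our notation, not the paper’s – the fixed-effects view posits one underlying log odds ratio, while the random-effects view lets the true effect vary across homebirth settings:

\[
\text{fixed effect: } \log\widehat{OR} \sim N(\theta,\ se^2)
\qquad
\text{random effects: } \log\widehat{OR}_i \sim N(\theta_i,\ se_i^2), \quad \theta_i \sim N(\mu,\ \tau^2)
\]

Under the random-effects view the variance of each estimate is effectively $se_i^2 + \tau^2$; the extra between-setting variance $\tau^2$ is what widens the confidence intervals.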

The study is observational, and the statistical methods yield strong associations between place of birth and adverse outcome. However, there is still a big step from association to causation. The odds ratio is very impressive only because a ragbag collection of homebirths, with considerable variations in practice, yielded a particular set of numbers. The rarity of the outcome event makes it difficult to do any meaningful subgroup analysis that could identify those circumstances which will maximise safety in homebirth settings.

My conclusion for consumers and the general public is that this paper presents conclusions consistent with most other studies of homebirth: adverse outcomes associated with homebirth often occur because women with risk factors choose to deliver at home despite advice to the contrary. It is clear from the Dutch and Canadian observational studies of homebirth that the safety of homebirth can be maximised where systems of health care incorporate homebirth into their infrastructure through:

• Clear guidelines

• Skilled midwives

• Informed consumers

• Good working arrangements with local hospitals

• Robust transfer arrangements

• Continuous and open audit of outcomes

• Improving the birthing environment in public and private hospitals.”

And a final word from Croakey:

Some of those I interviewed raised concerns about Dr Pesce speaking to the media about the study’s findings last year, ahead of publication. However, the MJA editor, Dr Martin Van Der Weyden, told me that he had given Dr Pesce permission to do this. Usually the MJA, like most journals, is rather strict about discouraging researchers and journalists from publicising findings before they appear in the journal. One of the study’s authors, Marc Keirse, Professor of Obstetrics and Gynaecology at Flinders University, told me he had not known that Dr Pesce had been speaking of the results ahead of publication, but had no problems with this.
