
Sometimes, doctors can't see the forest for the statistics

By Dr. Kate Scannell, Syndicated columnist
First Published in Print: 04/28/2012

One of my patients used to demand each year that he be screened for "any kind of cancer imaginable." He wanted blood tests and radiographic scans that scoured every reach of his body. He devoured handfuls of so-called "anti-cancer" supplements and wore copper bracelets to ward off malignancies. As healthy as he was, he suffered terribly with cancerphobia.

Ironically, his anxiety was his greatest risk factor for developing cancer. Every X-ray or CT scan he underwent to help "manage" his anxiety just increased his cumulative radiation exposure and, consequently, his chance of developing a malignancy.

I explained that risk and suggested psychotherapy as a healthier strategy. He refused referral to psychiatry, and whenever I declined his annual request for head-to-toe scanning, he'd simply obtain it elsewhere. He continued to see me because I'd always examine him carefully, reassuring us both that he had no clinical evidence of cancer.

Ultimately, my patient moved, and we lost contact. But, like other patients who've taught me valuable lessons, he still comes to mind every now and then.

Indeed, I'm thinking about him this week while reading a new medical study that reveals how poorly we doctors understand cancer screening statistics. And in the end, I am left wondering whether being a better statistician might have made me a better doctor for my patient.

The study, published in last month's Annals of Internal Medicine, set out to determine whether primary-care doctors in the U.S. could accurately interpret basic survival and mortality statistics to decide whether cancer screening strategies actually worked to save lives. It also evaluated whether doctors could correctly select which of two screening tests reduced patient mortality, based on simple statistical claims.

Obviously, it's important for doctors to understand when and if a screening test is actually useful -- capable of detecting cancers reliably enough to make a timely and meaningful difference in patients' lives. It's critical for doctors to know whether they're helping, hurting or simply confusing patients.

So, how did we 400-plus doctors fare in this study? Well, as baseball legend Yogi Berra once proclaimed: "We made too many wrong mistakes."

Indeed, most of us couldn't tell when and if cancer screening strategies were validated by appropriate statistics. And the vast majority of us chose the wrong screening test, based on flawed understandings of survival statistics.
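Here is the sort of trap we fell into, with numbers I've invented purely for illustration (they're mine, not the study's): imagine a cancer that always proves fatal at age 70. Without screening, it's diagnosed at age 67, so five-year survival is 0 percent. Add a test that detects the same cancer at age 60, and five-year survival soars to 100 percent -- yet every patient still dies at 70. The survival statistic improved spectacularly while the death rate didn't budge, which is why only lower mortality -- not better survival -- can prove that a screening test saves lives.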

At first blush, these alarming findings seem ... "improbable." How can we doctors be at such great odds with statistics?

Well, if you need occasion to suffer the answer, please join me in my weekly review of medical journals. We need look no further than the very study inspiring this column, shrewdly titled "Do Physicians Understand Cancer Screening Statistics?" It merely contributes to the problem it's trying to highlight. I had to take two aspirins to read through this paragraph:
"Because the online version of the questionnaire did not allow item nonresponse, all 412 questionnaires were completed fully. To analyze within-physician outcomes (for example, recommendation of screening and judgment of the effectiveness of screening), we used the McNemar chi-square test and the Wilcoxon signed-rank test. Between-group analyses (for example, testing for order effects) were performed by using the Pearson chi-square test and the Mann—Whitney U test. All data were stored and analyzed by using SPSS 18 (SPSS, Chicago, Illinois)."

I desperately sought the meaning of this perplexing passage by showing it to my friend, Dan, who is a numbers guru and fluent in statistics. He declared that the paragraph bolstered his regular argument "for the need for a serious medical journal that is written in English by people with writing skill who know these statistical tools and who have vetted their work through knowledgeable doctors." I wholeheartedly agreed (and, still, I didn't understand the paragraph).
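(For what it's worth, here is my own rough translation -- mine, not the authors': the McNemar and Wilcoxon tests compare each doctor's answers against that same doctor's other answers, while the Pearson chi-square and Mann-Whitney tests compare one group of doctors against another. That much, at least, could have been said in English.)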

And it would be wrong to believe that such bewildering "language" is exceptional -- you'll find it in almost any randomly chosen general medical journal.

The point here is not that doctors don't know what they are doing. The point is that clear communication of medical research and scientific discoveries is critical for doctors' understanding. Some 99.93 percent of us doctors are not statisticians, and we don't convey information to our patients in terms of chi-squares and Blokesbottom-U-Givup test results.

Jimmy Breslin once said, "Baseball isn't statistics. Baseball is Joe DiMaggio rounding second." A similar analogy holds true in medicine: Doctoring isn't statistics. Doctoring is coaching and cheerleading -- and successful doctoring is watching your patient hobbling to first, heading straight to second, confidently rounding third, and sliding home -- Safe!

At least 99.98 percent of the primary-care doctors I know want to improve their health literacy and numeracy skills in service of becoming more effective communicators with their patients. They want to provide patients with valid information that allows them to make informed choices about cancer screening and other health care decisions.

But such clarity requires researchers and journal editors to become better translators of statistical language. There's a vast and often enigmatic distance between a statistical figure in print and a human figure in the clinic, one that could and should be bridged by my friend's suggestion.
==============================================
Kate Scannell is a Bay Area physician and syndicated columnist.
© Copyright 2012, Kate Scannell