Just because you can measure something doesn’t mean it is a good proxy from which to draw a conclusion.
In the case of web sites, one proxy used to measure “success” is an increasing number of page views over time. But if you redesign your site to make it easier for visitors to find what they are looking for, page views (per session) may decline. That would be a good thing, although it’s counter to the belief that more is better.
Turning to cancer, a similarly counterintuitive proxy is “survival rate.”
What do we mean when we say “survival rate”?
Because the population is not static from year to year, researchers need a way to measure true differences rather than artifacts of population change. Most use “survival rates.” This is not unlike using percentages or inflation-adjusted dollars to compare data over time.
Age-adjusted five-year relative survival rates measure how many people diagnosed with cancer are still alive five years later, adjusted to eliminate causes of death other than cancer.
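The “relative” part of that definition can be sketched in a few lines. This is a simplified illustration with hypothetical numbers, not the actuarial method registries actually use: relative survival divides the observed survival of cancer patients by the expected survival of a comparable cancer-free population, which approximates removing deaths from other causes.

```python
# Simplified illustration of a relative survival rate (hypothetical numbers).
# Real registries use life tables and age adjustment; this only shows the ratio idea.

observed = 0.70   # 70% of diagnosed patients alive five years after diagnosis
expected = 0.90   # 90% of a matched, cancer-free population alive five years later

relative_survival = observed / expected
print(round(relative_survival, 2))  # 0.78
```

The ratio exceeds the raw observed rate because some of the patients who died would have died of other causes anyway.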
Why might these rates be suspect?
Early detection always increases survival rates, especially when the cancers that are detected early are nonlethal.
If two people have exactly the same disease progression, the one who’s diagnosed earlier will be more likely to be alive in five years (emphasis added).
Lest you think that this is a recent observation, here is the NY Times in 1984:
A small but growing band of distinguished analysts is challenging proclamations by Government officials and leading cancer scientists that great advances have been made in “curing” cancer patients… it is a perverse fact that the cancers that are now the most “curable” are statistically among the most rare.
Dr. Haydn Bush, director of a regional cancer center in Ontario, wrote in a magazine published by the American Association for the Advancement of Science in September 1984 that the apparent improvement in breast cancer survival probably reflected screening programs that detected the disease at an earlier stage.
Thus, even if these women received no treatment at all and their disease followed its natural course, they would automatically be more likely to survive five years. All that has happened is that the survival clock is being started sooner.
Clearly, this conclusion applies only to slow-growing cancers, but many breast cancers fall into that category.
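The lead-time effect described above can be shown with a toy example. The numbers are hypothetical: both patients have identical disease and die at the same time; only the date the survival clock starts differs.

```python
# Toy illustration of lead-time bias (hypothetical numbers).
# Both patients' tumors become detectable in year 0 and both die in year 6.
# The only difference is when the diagnosis -- and the survival clock -- starts.

def five_year_survival(diagnosis_year, death_year):
    """True if the patient is still alive five years after diagnosis."""
    return death_year - diagnosis_year > 5

# Without screening: diagnosed at symptom onset in year 4, dies in year 6.
late = five_year_survival(diagnosis_year=4, death_year=6)

# With screening: same disease, diagnosed in year 0, dies in year 6.
early = five_year_survival(diagnosis_year=0, death_year=6)

print(late, early)  # False True -- the death date never moved
```

The screened patient “survives five years” while the unscreened patient does not, even though nothing about the disease outcome changed.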
Why is understanding this important?
In 2000, when I rode my Ducati 6,000 miles in two weeks to raise money for the Komen Foundation, the talking point was that 1-in-8 women would be diagnosed with breast cancer. That statistic is the same today. It was the same in 1988.
But in 1975-1977, the risk was 1-in-10.6. Why the increase?
Researchers hypothesized in 1993 that the increase might be due to early detection, or to reduced deaths from other causes.
Those researchers also examined mortality rates:
While the lifetime risk of developing breast cancer rose over the period 1976-1977 to 1987-1988, the lifetime risk of dying of breast cancer increased from one in 30 to one in 28, reflecting generally flat mortality trends.
Today, the lifetime risk of dying of breast cancer is 3.4%, which is 1-in-29. Flat. No change.
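The figures above are just reciprocal conversions between percent risk and the “1-in-N” phrasing. A quick sketch of the arithmetic:

```python
# Converting between percent lifetime risk and "1-in-N" phrasing.

def as_percent(n):
    """Express a 1-in-N risk as a percent."""
    return 100 / n

def one_in_n(percent):
    """Express a percent risk as 1-in-N."""
    return 100 / percent

print(round(as_percent(8), 1))     # 1-in-8 diagnosed   -> 12.5 (%)
print(round(as_percent(10.6), 1))  # 1-in-10.6          -> 9.4 (%)
print(round(one_in_n(3.4)))        # 3.4% dying         -> 29, i.e. 1-in-29
```

So the diagnosis risk rose from roughly 9.4% to 12.5%, while the risk of dying stayed at about 3.4%.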
So take those messages from organizations that need to show progress in order to keep people donating money and foundations underwriting grants … with a grain of salt.
Understand that there has been very little change in the risk of dying of breast cancer since 1975.
Breast cancer is not a monolithic disease. It is extremely heterogeneous, and lumping its many variants (type, grade, growth rate, size, susceptibility to treatment) into one big ball does little to further understanding of its risks.