Chapter 2: Outlive: The Science and Art of Longevity

Medicine 3.0

Rethinking Medicine for the Age of Chronic Disease

The time to repair the roof is when the sun is shining.

John F. Kennedy

I don’t remember what the last straw was in my growing frustration with medical training, but I do know that the beginning of the end came courtesy of a drug called gentamicin. Late in my second year of residency, I had a patient in the ICU with severe sepsis. He was basically being kept alive by this drug, which is a powerful IV antibiotic. The tricky thing about gentamicin is that it has a very narrow therapeutic window. If you give a patient too little, it won’t do anything, but if you give him too much it could destroy his kidneys and hearing. The dosing is based on the patient’s weight and the expected half-life of the drug in the body, and because I am a bit of a math geek (actually, more than a bit), one evening I came up with a mathematical model that predicted the precise time when this patient would need his next dose: 4:30 a.m.
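The dosing logic described above can be sketched with a toy model. This assumes simple first-order (exponential) elimination, and every number here is illustrative, not clinical guidance; the function name and values are invented for the example:

```python
import math

def hours_until_redose(peak_conc, trough_target, half_life_hr):
    """Time for a drug concentration to decay from its peak to the
    re-dosing trough, assuming first-order elimination:
    C(t) = peak * 2 ** (-t / half_life)."""
    return half_life_hr * math.log2(peak_conc / trough_target)

# Illustrative numbers only: a peak of 8 mg/L, a target trough of
# 1 mg/L, and a 3-hour half-life span three half-lives, so the
# next dose is due 9 hours after the peak.
print(hours_until_redose(8.0, 1.0, 3.0))  # → 9.0
```

In practice, clinicians fit the half-life to the individual patient (weight, kidney function, measured levels), which is what made the 4:30 a.m. prediction possible.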

Sure enough, when 4:30 rolled around we tested the patient and found that his blood levels of gentamicin had dropped to exactly the point where he needed another dose. I asked his nurse to give him the medication but found myself at odds with the ICU fellow, a trainee who was one level above us residents in the hospital pecking order. I wouldn’t do that, she said. Just have them give it at seven, when the next nursing shift comes on. This puzzled me, because we knew that the patient would have to go for more than two hours basically unprotected from a massive infection that could kill him. Why wait? When the fellow left, I had the nurse give the medicine anyway.

Later that morning at rounds, I presented the patient to the attending physician and explained what I had done, and why. I thought she would appreciate my attention to patient care—getting the drug dosed just right—but instead, she turned and gave me a tongue-lashing like I’d never experienced. I’d been awake for more than twenty-four hours at this point, but I wasn’t hallucinating. I was getting screamed at, even threatened with being fired, for trying to improve the way we delivered medication to a very sick patient. True, I had disregarded the suggestion (not a direct order) from the fellow, my immediate superior, and that was wrong, but the attending’s tirade stunned me. Shouldn’t we always be looking for better ways to do things?

Ultimately, I put my pride in check and apologized for my disobedience, but this was just one incident of many. As my residency progressed, my doubts about my chosen profession only mounted. Time and again, my colleagues and I found ourselves coming into conflict with a culture of resistance to change and innovation. There are some good reasons why medicine is conservative in nature, of course. But at times it seemed as if the whole edifice of modern medicine was so firmly rooted in its traditions that it was unable to change even slightly, even in ways that would potentially save the lives of people for whom we were supposed to be caring.

By my fifth year, tormented by doubts and frustration, I informed my superiors that I would be leaving that June. My colleagues and mentors thought I was insane; almost nobody leaves residency, certainly not at Hopkins with only two years to go. But there was no dissuading me. Throwing nine years of medical training out the window, or so it seemed, I took a job with McKinsey & Company, the well-known management consulting firm. My wife and I moved across the country to the posh playground of Palo Alto and San Francisco, where I had loved living while at Stanford. It was about as far away from medicine (and Baltimore) as it was possible to get, and I was glad. I felt as if I had wasted a decade of my life. But in the end, this seeming detour ended up reshaping the way I look at medicine—and more importantly, each of my patients.

The key word, it turned out, was risk.

McKinsey originally hired me into their healthcare practice, but because of my quantitative background (I had studied applied math and mechanical engineering in college, planning to pursue a PhD in aerospace engineering), they moved me over to credit risk. This was in 2006, during the runup to the global financial crisis, but before almost anyone besides the folks featured in Michael Lewis’s The Big Short understood the magnitude of what was about to happen.

Our job was to help US banks comply with a new set of rules that required them to maintain enough reserves to cover their unexpected losses. The banks had done a good job of estimating their expected losses, but nobody really knew how to deal with the unexpected losses, which by definition were much more difficult to predict. Our task was to analyze the banks’ internal data and come up with mathematical models to try to predict these unexpected losses on the basis of correlations among asset classes—which was just as tricky as it sounds, like a crapshoot on top of a crapshoot.

What started out as an exercise to help the biggest banks in the United States jump through some regulatory hoops uncovered a brewing disaster in what was considered to be one of their least risky, most stable portfolios: prime mortgages. By the late summer of 2007, we had arrived at the horrifying but inescapable conclusion that the big banks were about to lose more money on mortgages in the next two years than they had made in the previous decade.

In late 2007, after six months of round-the-clock work, we had a big meeting with the top brass of our client, a major US bank. Normally, my boss, as the senior partner on the project, would have handled the presentation. But instead he picked me. “Based on your previous career choice,” he said, “I suspect you are better prepared to deliver truly horrible news to people.”

This was not unlike delivering a terminal diagnosis. I stood up in a high-floor conference room and walked the bank’s management team through the numbers that foretold their doom. As I went through my presentation, I watched the five stages of grief described by Elisabeth Kübler-Ross in her classic book On Death and Dying—denial, anger, bargaining, depression, and acceptance—flash across the executives’ faces. I had never seen that happen before outside of a hospital room.

My detour into the world of consulting came to an end, but it opened my eyes to a huge blind spot in medicine, and that is the understanding of risk. In finance and banking, understanding risk is key to survival. Great investors do not take on risk blindly; they do so with a thorough knowledge of both risk and reward. The study of credit risk is a science, albeit an imperfect one, as I learned with the banks. While risk is obviously also important in medicine, the medical profession often approaches risk more emotionally than analytically.

The trouble began with Hippocrates. Most people are familiar with the ancient Greek’s famous dictum: “First, do no harm.” It succinctly states the physician’s primary responsibility, which is to not kill our patients or do anything that might make their condition worse instead of better. Makes sense. There are only three problems with this: (a) Hippocrates never actually said these words,[*1] (b) it’s sanctimonious bullshit, and (c) it’s unhelpful on multiple levels.

“Do no harm”? Seriously? Many of the treatments deployed by our medical forebears, from Hippocrates’s time well into the twentieth century, were if anything more likely to do harm than to heal. Did your head hurt? You’d be a candidate for trepanation, or having a hole drilled in your skull. Strange sores on your private parts? Try not to scream while the Doktor of Physik dabs some toxic mercury on your genitals. And then, of course, there was the millennia-old standby of bloodletting, which was generally the very last thing that a sick or wounded person needed.

What bothers me most about “First, do no harm,” though, is its implication that the best treatment option is always the one with the least immediate downside risk—and, very often, doing nothing at all. Every doctor worth their diploma has a story to disprove this nonsense. Here’s one of mine: During one of the last trauma calls I took as a resident, a seventeen-year-old kid came in with a single stab wound in his upper abdomen, just below his xiphoid process, the little piece of cartilage at the bottom end of his sternum. He seemed to be stable when he rolled in, but then he started acting odd, becoming very anxious. A quick ultrasound suggested he might have some fluid in his pericardium, the tough fibrous sac around the heart. This was now a full-blown emergency, because if enough fluid collected in there, it would stop his heart and kill him within a minute or two.

There was no time to take him up to the OR; he could easily die on the elevator ride. As he lost consciousness, I had to make a split-second decision to cut into his chest right then and there and slice open his pericardium to relieve the pressure on his heart. It was stressful and bloody, but it worked, and his vital signs soon stabilized. No doubt the procedure was hugely risky and caused him great short-term harm, but had I not done it, he might have died waiting for a safer and more sterile procedure in the operating room. Fast death waits for no one.

The reason I had to act so dramatically in the moment was that the risk was so asymmetric: doing nothing—avoiding “harm”—would likely have resulted in his death. Conversely, even if I was wrong in my diagnosis, the hasty chest surgery we performed was quite survivable, though obviously not how one might wish to spend a Wednesday night. After we got him out of imminent danger, it became clear that the tip of the knife had just barely punctured his pulmonary artery, a simple wound that took two stitches to fix once he was stabilized and in the OR. He went home four nights later.

Risk is not something to be avoided at all costs; rather, it’s something we need to understand, analyze, and work with. Every single thing we do, in medicine and in life, is based on some calculation of risk versus reward. Did you eat a salad from Whole Foods for lunch? There’s a small chance there could have been E. coli on the greens. Did you drive to Whole Foods to get it? Also risky. But on balance, that salad is probably good for you (or at least less bad than some other things you could eat).

Sometimes, as in the case of my seventeen-year-old stab victim, you have to take the leap. In other, less rushed situations, you might have to choose more carefully between subjecting a patient to a colonoscopy, with its slight but real risk of injury, versus not doing the examination and potentially missing a cancer diagnosis. My point is that a physician who has never done any harm, or at least confronted the risk of harm, has probably never done much of anything to help a patient either. And as in the case of my teenage stabbing victim, sometimes doing nothing is the riskiest choice of all.

I actually kind of wish Hippocrates had been around to witness that operation on the kid who was stabbed—or any procedure in a modern hospital setting, really. He would have been blown away by all of it, from the precision steel instruments to the antibiotics and anesthesia, to the bright electric lights.

While it is true that we owe a lot to the ancients—such as the twenty thousand new words that medical school injected into my vocabulary, most derived from Greek or Latin—the notion of a continuous march of progress from Hippocrates’s era to the present is a complete fiction. It seems to me that there have been two distinct eras in medical history, and that we may now be on the verge of a third.

The first era, exemplified by Hippocrates but lasting almost two thousand years after his death, is what I call Medicine 1.0. Its conclusions were based on direct observation and abetted more or less by pure guesswork, some of which was on target and some not so much. Hippocrates advocated walking for exercise, for example, and opined that “in food excellent medicine can be found; in food bad medicine can be found,” which still holds up. But much of Medicine 1.0 missed the mark entirely, such as the notion of bodily “humors,” to cite just one example of many. Hippocrates’s major contribution was the insight that diseases are caused by nature and not by actions of the gods, as had previously been believed. That alone represented a huge step in the right direction. So it’s hard to be too critical of him and his contemporaries. They did the best they could without an understanding of science or the scientific method. You can’t use a tool that has not yet been invented.

Medicine 2.0 arrived in the mid-nineteenth century with the advent of the germ theory of disease, which supplanted the idea that most illness was spread by “miasmas,” or bad air. This led to improved sanitary practices by physicians and ultimately the development of antibiotics. But it was far from a clean transition; it’s not as though one day Louis Pasteur, Joseph Lister, and Robert Koch simply published their groundbreaking studies,[*2] and the rest of the medical profession fell into line and changed the way they did everything overnight. In fact, the shift from Medicine 1.0 to Medicine 2.0 was a long, bloody slog that took centuries, meeting trench-warfare resistance from the establishment at many points along the way.

Consider the case of poor Ignaz Semmelweis, a Viennese obstetrician who was troubled by the fact that so many new mothers were dying in the hospital where he worked. He concluded that their strange “childbed fever” might somehow be linked to the autopsies that he and his colleagues performed in the mornings, before delivering babies in the afternoons—without washing their hands in between. The existence of germs had not yet been discovered, but Semmelweis nonetheless believed that the doctors were transmitting something to these women that caused their illness. His observations were most unwelcome. His colleagues ostracized him, and Semmelweis died in an insane asylum in 1865.

That very same year, Joseph Lister first successfully demonstrated the principle of antiseptic surgery, using sterile techniques to operate on a young boy in a hospital in Glasgow. It was the first application of the germ theory of disease. Semmelweis had been right all along.

The shift from Medicine 1.0 to Medicine 2.0 was prompted in part by new technologies such as the microscope, but it was more about a new way of thinking. The foundation was laid back in 1620, when Sir Francis Bacon first articulated what we now know as the scientific method. This represented a major philosophical shift, from observing and guessing to observing, and then forming a hypothesis, which as Richard Feynman pointed out is basically a fancy word for a guess.

The next step is crucial: rigorously testing that hypothesis/guess to determine whether it is correct, also known as experimenting. Instead of using treatments that they believed might work, often despite ample anecdotal evidence to the contrary, scientists and physicians could systematically test and evaluate potential cures, then choose the ones that had performed best in experiments. Yet three centuries elapsed between Bacon’s essay and the discovery of penicillin, the true game-changer of Medicine 2.0.

Medicine 2.0 was transformational. It is a defining feature of our civilization, a scientific war machine that has eradicated deadly diseases such as polio and smallpox. Its successes continued with the containment of HIV and AIDS in the 1990s and 2000s, turning what had seemed like a plague that threatened all humanity into a manageable chronic disease. I’d put the recent cure of hepatitis C right up there as well. I remember being told in medical school that hepatitis C was an unstoppable epidemic that was going to completely overwhelm the liver transplant infrastructure in the United States within twenty-five years. Today, most cases can be cured by a short course of drugs (albeit very expensive ones).

Perhaps even more amazing was the rapid development of not just one but several effective vaccines against COVID-19, not even a year after the pandemic took hold in early 2020. The virus genome was sequenced within weeks of the first deaths, allowing the speedy formulation of vaccines that specifically target its surface proteins. Progress with COVID treatments has also been remarkable, yielding multiple types of antiviral drugs within less than two years. This represents Medicine 2.0 at its absolute finest.

Yet Medicine 2.0 has proved far less successful against long-term diseases such as cancer. While books like this always trumpet the fact that lifespans have nearly doubled since the late 1800s, the lion’s share of that progress may have resulted entirely from antibiotics and improved sanitation, as Steven Johnson points out in his book Extra Life. The Northwestern University economist Robert J. Gordon analyzed mortality data going back to 1900 (see figure 1) and found that if you subtract out deaths from the eight top infectious diseases, which were largely brought under control by the advent of antibiotics in the 1930s, overall mortality rates declined relatively little over the course of the twentieth century. That means that Medicine 2.0 has made scant progress against the Horsemen.

This graph shows how little real mortality rates have improved since 1900, once you remove the top eight contagious/infectious diseases, which were largely controlled by the advent of antibiotics in the early twentieth century.

Toward Medicine 3.0

During my stint away from medicine, I realized that my colleagues and I had been trained to solve the problems of an earlier era: the acute illnesses and injuries that Medicine 2.0 had evolved to treat. Those problems had a much shorter event horizon; for our cancer patients, time itself was the enemy. And we were always coming in too late.

This actually wasn’t so obvious until I’d spent my little sabbatical immersed in the worlds of mathematics and finance, thinking every day about the nature of risk. The banks’ problem was not all that different from the situation faced by some of my patients: their seemingly minor risk factors had, over time, compounded into an unstoppable, asymmetric catastrophe. Chronic diseases work in a similar fashion, building over years and decades—and once they become entrenched, it’s hard to make them go away. Atherosclerosis, for example, begins many decades before the person has a coronary “event” that could result in their death. But that event, often a heart attack, too often marks the point where treatment begins.

This is why I believe we need a new way of thinking about chronic diseases, their treatment, and how to maintain long-term health. The goal of this new medicine—which I call Medicine 3.0—is not to patch people up and get them out the door, removing their tumors and hoping for the best, but rather to prevent the tumors from appearing and spreading in the first place. Or to avoid that first heart attack. Or to divert someone from the path to Alzheimer’s disease. Our treatments, and our prevention and detection strategies, need to change to fit the nature of these diseases, with their long, slow prologues.

It is already obvious that medicine is changing rapidly in our era. Many pundits have been predicting a glorious new era of “personalized” or “precision” medicine, where our care will be tailored to our exact needs, down to our very genes. This is, obviously, a worthy goal; it is clear that no two patients are exactly alike, even when they are presenting with what appears to be an identical upper-respiratory illness. A treatment that works for one patient may prove useless in the other, either because her immune system is reacting differently or because her infection is viral rather than bacterial. Even now, it remains extremely difficult to tell the difference, resulting in millions of useless antibiotic prescriptions.

Many thinkers in this space believe that this new era will be driven by advances in technology, and they are likely right; at the same time, however, technology has (so far) been largely a limiting factor. Let me explain. On the one hand, improved technology enables us to collect much more data on patients than ever before, and patients themselves are better able to monitor their own biomarkers. This is good. Even better, artificial intelligence and machine learning are being harnessed to try to digest this massive profusion of data and come up with more definitive assessments of our risk of, say, heart disease than the rather simple risk factor–based calculators we have now. Others point to the possibilities of nanotechnology, which could enable doctors to diagnose and treat disease by means of microscopic bioactive particles injected into the bloodstream. But the nanobots aren’t here yet, and barring a major public or private research push, it could be a while before they become reality.

The problem is that our idea of personalized or precision medicine remains some distance ahead of the technology necessary to realize its full promise. It’s a bit like the concept of the self-driving car, which has been talked about for almost as long as automobiles have been crashing into each other and killing and injuring people. Clearly, removing human error from the equation as much as possible would be a good thing. But our technology is only today catching up to a vision we’ve held for decades.

If you had wanted to create a “self-driving” car in the 1950s, your best option might have been to strap a brick to the accelerator. Yes, the vehicle would have been able to move forward on its own, but it could not slow down, stop, or turn to avoid obstacles. Obviously not ideal. But does that mean the entire concept of the self-driving car is not worth pursuing? No, it only means that at the time we did not yet have the tools we now possess to help enable vehicles to operate both autonomously and safely: computers, sensors, artificial intelligence, machine learning, and so on. This once-distant dream now seems within our reach.

It is much the same story in medicine. Two decades ago, we were still taping bricks to gas pedals, metaphorically speaking. Today, we are approaching the point where we can begin to bring some appropriate technology to bear in ways that advance our understanding of patients as unique individuals. For example, doctors have traditionally relied on two tests to gauge their patients’ metabolic health: a fasting glucose test, typically given once a year; or the HbA1c test we mentioned earlier, which gives us an estimate of their average blood glucose over the last 90 days. But those tests are of limited use because they are static and backward-looking. So instead, many of my patients have worn a device that monitors their blood glucose levels in real time, which allows me to talk to them about nutrition in a specific, nuanced, feedback-driven way that was not even possible a decade ago. This technology, known as continuous glucose monitoring (CGM), lets me observe how their individual metabolism responds to a certain eating pattern and make changes to their diet quickly. In time, we will have many more sensors like this that will allow us to tailor our therapies and interventions far more quickly and precisely. The self-driving car will do a better job of following the twists and turns of the road, staying out of the ditch.

But Medicine 3.0, in my opinion, is not really about technology; rather, it requires an evolution in our mindset, a shift in the way in which we approach medicine. I’ve broken it down into four main points.

First, Medicine 3.0 places a far greater emphasis on prevention than treatment. When did Noah build the ark? Long before it began to rain. Medicine 2.0 tries to figure out how to get dry after it starts raining. Medicine 3.0 studies meteorology and tries to determine whether we need to build a better roof, or a boat.

Second, Medicine 3.0 considers the patient as a unique individual. Medicine 2.0 treats everyone as basically the same, obeying the findings of the clinical trials that underlie evidence-based medicine. These trials take heterogeneous inputs (the people in the study or studies) and come up with homogeneous results (the average result across all those people). Evidence-based medicine then insists that we apply those average findings back to individuals. The problem is that no patient is strictly average. Medicine 3.0 takes the findings of evidence-based medicine and goes one step further, looking more deeply into the data to determine how our patient is similar or different from the “average” subject in the study, and how its findings might or might not be applicable to them. Think of it as “evidence-informed” medicine.

The third philosophical shift has to do with our attitude toward risk. In Medicine 3.0, our starting point is the honest assessment, and acceptance, of risk—including the risk of doing nothing.

There are many examples of how Medicine 2.0 gets risk wrong, but one of the most egregious has to do with hormone replacement therapy (HRT) for postmenopausal women, long entrenched as standard practice before the results of the Women’s Health Initiative Study (WHI) were published in 2002. This large clinical trial, involving thousands of older women, compared a multitude of health outcomes in women taking HRT versus those who did not take it. The study reported a 24 percent relative increase in the risk of breast cancer among a subset of women taking HRT, and headlines all over the world condemned HRT as a dangerous, cancer-causing therapy. All of a sudden, on the basis of this one study, hormone replacement treatment became virtually taboo.

This reported 24 percent risk increase sounded scary indeed. But nobody seemed to care that the absolute risk increase of breast cancer for women in the study remained minuscule. Roughly five out of every one thousand women in the HRT group developed breast cancer, versus four out of every one thousand in the control group, who received no hormones. The absolute risk increase was just 0.1 percentage point. HRT was linked to, potentially, one additional case of breast cancer in every thousand patients. Yet this tiny increase in absolute risk was deemed to outweigh any benefits, meaning menopausal women would potentially be subject to hot flashes and night sweats, as well as loss of bone density and muscle mass, and other unpleasant symptoms of menopause—not to mention a potentially increased risk of Alzheimer’s disease, as we’ll see in chapter 9.
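The gap between the scary headline number and the tiny underlying one is simple arithmetic. Using the rounded counts from the passage above (roughly five cases per thousand on HRT versus four per thousand in the control group; with this rounding, the relative increase works out to 25 percent rather than the study's reported 24 percent):

```python
def risk_summary(events_treated, n_treated, events_control, n_control):
    """Contrast relative and absolute risk for a two-arm trial.
    The event counts passed in below are illustrative round numbers,
    not the WHI's actual tallies."""
    risk_t = events_treated / n_treated          # e.g. 0.005
    risk_c = events_control / n_control          # e.g. 0.004
    relative_increase = (risk_t - risk_c) / risk_c
    absolute_increase = risk_t - risk_c
    return relative_increase, absolute_increase

rel, abs_inc = risk_summary(5, 1000, 4, 1000)
print(f"relative increase: {rel:.0%}")           # the headline number
print(f"absolute increase: {abs_inc:.1%} points")  # the real-world number
```

The same data honestly yield both a "25 percent increase" and a "0.1 percentage point increase"; only the second tells a patient how much her own risk actually changes.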

Medicine 2.0 would rather throw out this therapy entirely, on the basis of one clinical trial, than try to understand and address the nuances involved. Medicine 3.0 would take this study into account, while recognizing its inevitable limitations and built-in biases. The key question that Medicine 3.0 asks is whether this intervention, hormone replacement therapy, with its relatively small increase in average risk in a large group of women older than sixty-five, might still be net beneficial for our individual patient, with her own unique mix of symptoms and risk factors. How is she similar to or different from the population in the study? One huge difference: none of the women selected for the study were actually symptomatic, and most were many years out of menopause. So how applicable are the findings of this study to women who are in or just entering menopause (and are presumably younger)? Finally, is there some other possible explanation for the slight observed increase in risk with this specific HRT protocol?[*3]

My broader point is that at the level of the individual patient, we should be willing to ask deeper questions of risk versus reward versus cost for this therapy—and for almost anything else we might do.

The fourth and perhaps largest shift is that where Medicine 2.0 focuses largely on lifespan, and is almost entirely geared toward staving off death, Medicine 3.0 pays far more attention to maintaining healthspan, the quality of life.

Healthspan was a concept that barely even existed when I went to medical school. My professors said little to nothing about how to help our patients maintain their physical and cognitive capacity as they aged. The word exercise was almost never uttered. Sleep was totally ignored, both in class and in residency, as we routinely worked twenty-four hours at a stretch. Our instruction in nutrition was also minimal to nonexistent.

Today, Medicine 2.0 at least acknowledges the importance of healthspan, but the standard definition—the period of life free of disease or disability—is totally insufficient, in my view. We want more out of life than simply the absence of sickness or disability. We want to be thriving, in every way, throughout the latter half of our lives.

Another, related issue is that longevity itself, and healthspan in particular, doesn’t really fit into the business model of our current healthcare system. There are few insurance reimbursement codes for most of the largely preventive interventions that I believe are necessary to extend lifespan and healthspan. Health insurance companies won’t pay a doctor very much to tell a patient to change the way he eats, or to monitor his blood glucose levels in order to help prevent him from developing type 2 diabetes. Yet insurance will pay for this same patient’s (very expensive) insulin after he has been diagnosed. Similarly, there’s no billing code for putting a patient on a comprehensive exercise program designed to maintain her muscle mass and sense of balance while building her resistance to injury. But if she falls and breaks her hip, then her surgery and physical therapy will be covered. Nearly all the money flows to treatment rather than prevention—and when I say “prevention,” I mean prevention of human suffering. Continuing to ignore healthspan, as we’ve been doing, not only condemns people to a sick and miserable older age but is guaranteed to bankrupt us eventually.

When I introduce my patients to this approach, I often talk about icebergs—specifically, the ones that ended the first and final voyage of the Titanic. At 9:30 p.m. on the fatal night, the massive steamship received an urgent message from another vessel that it was headed into an icefield. The message was ignored. More than an hour later, another ship telegraphed a warning of icebergs in the ship’s path. The Titanic’s wireless operator, busy trying to communicate with Newfoundland over crowded airwaves, replied (via Morse code): “Keep out; shut up.”

There were other problems. The ship was traveling at too fast a speed for a foggy night with poor visibility. The water was unusually calm, giving the crew a false sense of security. And although there was a set of binoculars on board, they were locked away and no one had a key, meaning the ship’s lookout was relying on his naked eyes alone. Forty-five minutes after that last radio call, the lookout spotted the fatal iceberg just five hundred yards ahead. Everyone knows how that ended.

But what if the Titanic had had radar and sonar (which did not come into use until decades later)? Or better yet, GPS and satellite imaging? Rather than trying to dodge through the maze of deadly icebergs, hoping for the best, the captain could have made a slight course correction a day or two before and steered clear of the entire mess. This is exactly what ship captains do now, thanks to improved technology that has made Titanic-style sinkings largely a thing of the past, relegated to sappy, nostalgic movies with overwrought soundtracks.

The problem is that in medicine our tools do not allow us to see very far over the horizon. Our “radar,” if you will, is not powerful enough. The longest randomized clinical trials of statin drugs for primary prevention of heart disease, for example, might last five to seven years. Our longest risk prediction time frame is ten years. But cardiovascular disease can take decades to develop.

Medicine 3.0 looks at the situation through a longer lens. A forty-year-old should be concerned with her thirty- or forty-year cardiovascular risk profile, not merely her ten-year risk. We therefore need tools with a much longer reach than relatively brief clinical trials. We need long-range radar and GPS, and satellite imaging, and all the rest. Not just a snapshot.

As I tell my patients, I’d like to be the navigator of your ship. My job, as I see it, is to steer you through the icefield. I’m on iceberg duty, 24-7. How many icebergs are out there? Which ones are closest? If we steer away from those, will that bring us into the path of other hazards? Are there bigger, more dangerous icebergs lurking over the horizon, out of sight?

Which brings us to perhaps the most important difference between Medicine 2.0 and Medicine 3.0. In Medicine 2.0, you are a passenger on the ship, being carried along somewhat passively. Medicine 3.0 demands much more from you, the patient: You must be well informed, medically literate to a reasonable degree, clear-eyed about your goals, and cognizant of the true nature of risk. You must be willing to change ingrained habits, accept new challenges, and venture outside of your comfort zone if necessary. You are always participating, never passive. You confront problems, even uncomfortable or scary ones, rather than ignoring them until it’s too late. You have skin in the game, in a very literal sense. And you make important decisions.

Because in this scenario, you are no longer a passenger on the ship; you are its captain.

Notes

*1 The words “First, do no harm” do not appear in Hippocrates’s actual writings. He urged physicians to “practice two things in your dealings with disease: either help or do not harm the patient.” This was changed to “First, do no harm” by an aristocratic nineteenth-century British surgeon named Thomas Inman, whose other claim to fame was, well, nothing. Somehow it became the sacred motto of the medical profession for all of eternity.

*2 Pasteur discovered the existence of airborne pathogens and bacteria that caused food to rot; Lister developed antiseptic surgical techniques; and Koch identified the germs that caused tuberculosis and cholera.

*3 A deeper dive into the data suggests that the tiny increase in breast cancer risk was quite possibly due to the type of synthetic progesterone used in the study, and not the estrogen; the devil is always in the details.