“YOU REALLY ONLY NEED TO KNOW THREE THINGS about medicine,” Dr. Steve Lacey explained to the gathering of 207 first-year students at Southwestern Medical School in Dallas this past fall. “One is, Oxygen is good! The second is, The blood goes round and round! And the third is, You have to make pee!”
The students got a big laugh out of Lacey’s lighthearted riff during the opening lecture of first-year biochemistry. But they were much too bright to have missed the more sobering irony beneath the humor. Medical knowledge has burgeoned to such a degree that it is all but impossible for aspiring doctors to master it. The knowledge glut would seem to demand even more of the specialization that has been the trend in medicine for the past quarter of a century. But the marketplace is demanding exactly the opposite—more primary-care physicians. Today’s generalists have to know everything about medicine, from Lacey’s three basic rules to the latest scientific developments, and they have to know how to run a cost-efficient clinical practice as well.
“Your medical education will not be completed in four years,” Lacey told them later in the lecture. “The nature of education has changed for you in a very dramatic way. In the past the teachers taught, you went to class, you did what they said, you got a grade. Well, I’ve got a news flash for you: I don’t care about your grades. The real test is coming five years from now, when you’re in the sack at two o’clock in the morning, and a nurse calls you about a problem with a patient. Do you know enough to take care of him?”
Not so long ago, deciding to be a doctor involved only certainties: Years of hard study would be rewarded with high income and equally high prestige. But the continuing explosion of biomedical knowledge guarantees that much of what today’s students are learning will be outdated by the time they graduate, necessitating rigorous and constant reeducation for the rest of their careers. Meanwhile, the vagaries of the marketplace dictate that there is no longer the certitude of big money at the end of the gauntlet. And for the first time in the history of the profession, there is a growing mistrust of doctors. “I can think of lots of other things I could do to make money,” observes first-year student Leslie Chin of Dallas, a 22-year-old second-generation Chinese American who was a National Merit Scholar and an honor graduate of UT-Austin. “But I can’t be concerned about making money. And I can’t be concerned about making A’s either. I just have to focus on remembering what’s going to make me a good doctor.”
For Southwestern, an institution that in fifty years has gone from educating students in army barracks to a $400 million medical center with three Nobel prizes to its name, the task of providing students like Chin with that knowledge may be its most daunting challenge yet. At Southwestern and other leading medical schools, the dueling forces of expanding knowledge and restrictions on the use of that knowledge imposed by the managed-care revolution have compelled reflection and innovation for the first time in decades. Curriculum is being rethought, training refashioned. Students are being educated by doctors who have scant experience with the realities of the modern medical marketplace. In many ways the job of having to learn too much, too fast is one that the students will have to do on their own.
“There was a time, I suppose,” says Southwestern’s dean, Dr. William Neaves, “when all that a doctor needed to know was uttered at one time or another during his medical education. No more. My ideal finished student today would be one who is grounded in all the basics we can offer in four years, but also has learned how to manage time and information and how to keep on learning.” Add to this the growing economic pressure on physicians—their two main sources of income, insurance companies and government, are cutting back—and the decision to become a doctor becomes an adventurous, if not risky, career choice.
IN A LARGE, DANK ROOM MARKED “GROSS ANATOMY”—a double meaning that is all too apt—dozens of partially mutilated cadavers lie on stainless steel gurneys. Each grayish corpse is attended by four or five impossibly young first-year medical students, who today are taking a hands-on look at the anatomy of the heart. The greenish light and the odor of embalming fluid are oppressive. But the students seem oblivious to the environment as they carve and probe their cadavers, consult anatomy texts, exchange questions and answers with two roving professors, and attempt to finish the afternoon’s work: identification of some fifty body parts. The students have already listened to an hourlong lecture on the heart; now it’s time to put to use what the professor was talking about.
This is one of the few things about medical education that is more or less the same as it was more than thirty years ago, when internal medicine professor John Burnside studied anatomy. “Anatomy is the baseline still,” says the wiry, avuncular doctor, who is escorting me through the lab. “There has been some talk about changing this too, using computer imaging, cyberanatomy. But so much of understanding the body is three dimensional, and so much of it is texture.”
A young female student (the first-year class is almost 40 percent women) approaches to ask Burnside about a mass of tissue in her cadaver that she is having trouble identifying. Burnside goes over to inspect the cadaver, which, at midsemester, has begun to look like a well-picked-over Thanksgiving turkey. “Those are juicy lymph nodes,” he says. “Metastasized. This patient died of cancer.”
“They’re kind of like cauliflower,” the student says, continuing to poke at the tissue.
Burnside returns to our conversation. “These students keep the same cadaver for the whole semester, and they frequently develop a relationship with it,” he says. “They learn all the organs, of course. But they also learn that there’s a narrative to every disease.”
David Heise, a 35-year-old former computer consultant who is starting a second career, has, for example, learned that his cadaver was a heavy smoker and had cataracts. “You can tell the smoker by the lungs,” he explains. “Normal lung tissue should bounce back when it’s probed. She’s had some emphysema, you can tell, because when you push, it just stays dented. This teaches you to be detached in a way, but it also reminds you that these are people who lived real lives, who had families.”
Heise, one of a growing number of older students pursuing medical careers—more than 20 percent of the first-year students are over 25, and a third of those are over 30—professes to love gross anatomy lab because of its hands-on nature. For most of the year, David’s hands will be relegated to pecking away at a computer keyboard in the courses that round out the students’ first year of education—biochemistry, cellular biology, physiology, endocrinology and reproduction, and human behavior. He can look forward to much of the same during his second year, when the focus turns from the healthy functioning of the body to disease and dysfunction, and the students take pathology, microbiology, pharmacology, and psychopathology. Only in the third and fourth years, when the students’ education is conducted almost entirely in the clinics at Parkland Memorial Hospital and other institutions affiliated with the medical school, will David get his hands on patients.
For the time being, he must measure his success by his performance on the purely academic rigors of early medical education. Just the week before, he and his classmates were put through the grind of two anatomy tests. In what is known as the walk around, students proceed from cadaver to cadaver, on which various organs and other body parts have been tagged; there are about a hundred tagged organs, each of which must be correctly identified within fifty seconds. “And it’s not like it’s the guy’s liver or something,” says Heise. “They’ll put a tag on a particular nerve or part of a major artery. You have to understand the relationship of the organs real well.” In the second, multiple-choice exam, students are given a premise of pathology—say, patient X has been stabbed through the eye with an ice pick—and asked to identify the correct sequence of the layers of skin, nerve tissues, and so on, that the weapon might have damaged.
As hard as it is for many doctors to imagine medical education without basic anatomy taught in this basic way, technology may soon make the number of hours medical students spend in the anatomy lab seem a waste of precious time. Computer imaging has become so sophisticated that anatomy can be learned—and, more importantly, viewed by a consulting physician—on a virtual basis. Since more and more surgery in the future will be assisted by computer imaging, basic courses will be taught more and more through the same medium. As Southwestern Medical Center president Kern Wildenthal says, “There really won’t be as much need for outright rote recall of anatomy, because it’s going to be right there, in the computer, remembered more precisely than any doctor could.”
The Biomedical Machine
MEDICAL EDUCATION IS ON THE THRESHOLD of its greatest pedagogic change since 1910, when educator Abraham Flexner published his scathing critique of the nation’s medical schools, calling the vast majority of them quack factories. These were still the dark ages of medicine in America, when charlatans, magic elixirs, untrained midwives, and the surgical anesthetic known as sour mash still dominated health care. Doctors were counselors and confidants more than students of curative science. Since only about a dozen drugs were known to be pharmacologically effective, anyone with minimal powers of memorization and a reassuring voice could call himself a doctor—and that’s exactly what many people did.
Of the 150 or so medical schools in the nation at the time, Flexner recommended that 120 be summarily shut down. Most of them were little better than trade schools, where doctors learned a craft much as a carpenter or a mason might. Flexner’s recommendation, considered radical at the time, was to redefine the medical profession and the education necessary for it. The doctor should be a scientist who specialized in “derangement of biomedical function” and whose primary duty was to diagnose that derangement and prescribe cures. All other roles that society seemed inclined to foist on physicians—including the doctor’s time-honored function as a counselor—should be left to other professions. Aspiring physicians should learn as students, not as apprentices to other doctors; they should learn the basics of bioscience before they began treating patients; and medical school instructors should put teaching ahead of practicing medicine.
The idea was to train doctors to be more than purely reactive craftsmen who simply identified symptom X as associated with affliction Y and prescribed cure Z, basing their decisions solely on past experience, with little understanding of medical science. Education rooted in the basic sciences sought to foster in the physician an understanding of the true nature of disease and health. Doctors needed to know which treatments worked and why they worked. The medical schools that taught them had to be traditional institutions of higher learning.
Flexner’s science-first model eventually became the norm for American medical education. As is the case at Southwestern today, students are first inundated with the basic sciences in the traditional lecture format, then slowly eased into clinical applications of their work at an adjacent teaching hospital. The strict academic environment was designed both to educate doctors and to promote scientific research and discovery by faculty members.
The boom years for medical schools began after World War II. The Manhattan Project had suggested to the country that monumental tasks could be accomplished by rigorously applied science. If Big Science could discover how to split atoms, why couldn’t it find cures for major diseases? Millions of dollars, both public and private, flowed into biomedical research, largely through the National Institutes of Health. At the same time, the post-war boom had produced shortages of suddenly vital goods and services—doctors among them. State legislatures hastily appropriated funds to build new universities and medical colleges in response to a government-proclaimed doctor shortage. For the first time, medical care became widely available—and affordable—to most Americans. The new biomedical machine began to produce impressive results. Preventions for age-old pathologies like polio were discovered, radiological diagnosis and surgical technique were refined, new wonder drugs were introduced to the market, and the more Big Medicine accomplished, the more the public demanded. In medical academe these accomplishments were seen as direct results of adherence to the Flexnerian creed: Science is king.
“Where Are the Patients?”
BIOCHEMISTRY IS STRAIGHT, UNADULTERATED SCIENCE of the sort that could glaze the eyes of even the most zealous scholar. This is the discipline of amino acids and metabolism, of complex lipids, of polysaccharides and DNA, of the functioning and dysfunctioning of the human body at its most elemental level—its chemistry. Learning it is a torturous exercise.
The biochemistry lecture hall, in a basement at Southwestern Medical School, is an amphitheater shaped like a wedge of pie. It is in this moodily lit room that first-year students get their initial taste of pure medical science, and for many of them, it is an unpleasant experience.
Leslie Chin knew of the infamous first-year biochem through a relative who had studied at Southwestern four years before her. Still, she says, her reaction was, “This is med school! Where are the patients?” Leslie had long dreamed of being a doctor, and the dream had always involved putting her hands on a patient, diagnosing what was wrong, and curing it. Though she’d taken biochemistry at the University of Texas, it still seemed hopelessly abstract. In time, though, she came to understand the warning of Dr. Lacey in that opening lecture: “The people who tell you biochemistry isn’t relevant are the people who make mistakes, because they don’t know biochemistry.”
“Biochem is the great leveler,” Leslie says after several weeks of the course. “It is where all of us—from biology majors like me to English majors—are in equally deep.” Each Monday, Wednesday, and Friday at nine in the morning, all two hundred first-year students gather to listen to one or another guest lecturer hold forth on subjects like amino acid metabolism. But even this most sacred of basic-science classes has been affected by the knowledge glut.
Lewis Waber, for example, who delivered the lecture on amino acid metabolism, was quick to advise his students of precisely how he wanted them to spend their time on his material—and how he didn’t want them to spend their time. “I do not want you to try to memorize metabolic pathways, the structures of metabolites, or the names of enzymes,” he wrote on the first page of his lecture outline. “You would not remember them long enough to do you any good.… I want you to try to see the interrelationships among metabolic pathways. I want you to understand how defects in one pathway cause derangements in others.” He peppered his lecture with jazzy computer-driven visuals and rock and roll music to keep the students’ attention. The class is another indication that the days when medical education was based on superhuman feats of rote memorization and regurgitation are mostly over. As in anatomy, most, if not all, of that sort of information can be retrieved from a database with a couple of keystrokes. The task of today’s medical student is to get a firm grasp of the motifs and principles that form the trunk of the information tree. That, educators have come to realize, is all there’s time for.
The students, for their part, have devised their own methods for coping with the explosion of knowledge. When Leslie Chin goes to biochem every other morning, she does so not just as a student but as an archivist for a special student-run service that provides complete transcripts of all lectures. Each lecture is taped, transcribed, proofed, and fact-checked by a team of students and provided to other students who have paid a $115-a-year fee for the transcriptions, known in student jargon as “scribes.” “That’s mainly what I use to study,” says Leslie, “the scribes and the syllabus.” The service has the additional benefit of allowing students to skip lectures if they wish and use the time to study for other classes. In the age of the knowledge glut even the faculty sees nothing wrong with this. “Why, when I was a med student,” says Professor Burnside, “it was unheard of to miss a lecture. But it’s the times. It’s not like these students are goofing off when they skip a lecture. They’re at the computer or somewhere else, learning.”
Even with such time-management gimmicks, first-year students quickly are forced to adopt a Herculean study schedule. David Heise, for example, arises at four-thirty each weekday, fixes breakfast for himself, lunch for his two children, kisses his wife good-bye, and makes the long commute from his home in Plano to the Southwestern campus, near downtown Dallas. He attends biochem or another lecture starting at nine and finds himself either in class, lab, or a review session almost nonstop until five. Then he returns home, grabs some dinner, and studies until eleven. “There’s really no other way,” he says. “Because of my experience in business, I know a lot more about time management than some of the younger students. But it just requires time.”
The Southwestern Gospel
THE DEDICATION TO BASIC BIOMEDICAL SCIENCE at Southwestern remains so intense because science has been this once-obscure medical school’s entrée to academic celebrity. Although the Southwestern name has been around Dallas since 1900, its early history was less than auspicious, and the school didn’t show up on any radar screen outside Texas until its researchers won three Nobel prizes in the past eleven years.
The first Southwestern closed its doors in 1915 after a brief affiliation with Southern Methodist University. That hardly mattered, since Baylor Medical College was located in Dallas in those years, but Baylor’s move to Houston in 1943 threatened to leave the city without a medical school. Civic leaders and former Baylor dean E. H. Cary formed a foundation that started a new medical school in Dallas in the same year that Baylor departed. Both the foundation and the medical school bore the resurrected Southwestern name. But the new Southwestern, like its remote ancestor, could not survive long as a private, unaffiliated institution. Not until the post-war years did Cary find the right adoptive parent for the orphan school. The University of Texas system was ready to expand to meet the demands of a doctor shortage in a growing state. The system had only one medical school, at Galveston, and UT was looking for a venue suitable for another. At the time, Southwestern didn’t have a lot to show for itself, but it did have a large parcel of land that had been donated by local scion Karl Hoblitzelle and the support of the Southwestern Medical Foundation, a charitable arm of the local business establishment. That was more than other cities—principally, San Antonio and El Paso—had to offer, but it still took some clever politicking by Cary to get his little school adopted into the UT family. He first cozied up to the board of regents. Then, knowing that Dallas was—and always would be—unpopular with rural members of the Legislature, he helped push through a law authorizing a second UT medical school. He also persuaded lawmakers to hand over the decision of where to locate it to the House of Delegates of the Texas Medical Association. 
As historian John Chapman notes in his 1976 history of Southwestern, the strategy was simple: “If the Dallas delegation to the Legislature was not the strongest or most influential, its delegation to the Texas Medical Association was both powerful and well-respected.”
In 1949 Cary’s school officially became part of the UT system under the name Southwestern Medical School of the University of Texas. To join the UT family, the school’s overseers had to give up its land and other assets to the system and cede a great deal of control; among the conditions was an agreement that 90 percent of the students would be resident Texans. But the school would remain small, obscure, and without much of a distinctive character for another couple of decades. The building of its new campus on Harry Hines Boulevard was steady but painstakingly slow, as was the assembling of a faculty. Though Southwestern had been adopted by a large, wealthy state university, years would pass before it would shed its identity crisis over being an orphan.
Big Science would provide that identity. The school was gaining a firm footing just as the biomedical wave was building. Research was an arena where an upstart school lacking in pedigree could make a quick reputation for itself. Southwestern couldn’t hope to catch up with the Harvards and the Johns Hopkinses of the medical academic world according to the traditionally accepted measures—size and quality of the student population, teacher-pupil ratio, budget, and physical plant. If the school couldn’t compete for students, however, it could build a national reputation by competing for faculty. A few notable experiments, discoveries, papers, or awards by individual faculty members could instantly leapfrog the school over the competition.
By the seventies Southwestern was in position to go after the top professor-researchers. Funding, once a hand-to-mouth matter, now seemed to be rushing in from all directions. The school was finally getting large payouts from UT’s share of the Permanent University Fund. A bounty of federal research grants was available from Washington. Most important, annual donations from local benefactors began to top seven figures. In a two-year period, for example, two Texas Instruments executives, Erik Jonsson and Cecil Green, gave the school a combined $4.6 million to help fund teaching chairs and building projects. With other donations, Southwestern was able at last to finish much of the master plan it had commissioned in the sixties. An odd collection of buildings next to Parkland Memorial Hospital began to grow at astounding speed: During the decade, the Fred F. Florence Bioinformation Center, the Tom and Lula Gooch Auditorium, the Cecil H. and Ida Green Science Building, the Eugene McDermott Academic Administration Building, the Philip R. Jonsson Basic Sciences Research Center, and assorted necessities such as a large parking garage were erected. By 1976 the number of students, including postgraduates, passed one thousand; the next year, Southwestern graduated its 3,000th doctor.
Meanwhile, the faculty was beginning to attract notice in the world of medical academe. In 1978 two Southwestern doctors were published in the prestigious New England Journal of Medicine; the next year saw the first Southwestern faculty member named to the National Academy of Sciences. Such modest acclaim might seem trivial compared with the yearly accomplishments of the faculties of the nation’s top medical schools, but at Southwestern, it was an indication that the school’s recruiting policies were working.
The Southwestern recruiting gospel was the brainchild of Donald Seldin, the chairman of internal medicine. The one advantage that Southwestern had over high-prestige schools like Harvard was that it wasn’t encumbered by tradition. At the established schools, young professors had to wait their turn. At Southwestern, though, professors could make a name for themselves immediately and in a big way. Armed with increasing amounts of funding, Seldin and others set about identifying and wooing the rising stars of biomedical research.
Seldin started out by recruiting his own students. He made the brightest students his protégés, sent them off to continue their studies at the top medical institutions in the East, and at the appropriate time, talked them into returning to Southwestern as professors and researchers. Then he encouraged them to spread the word about Southwestern to other scientists they met along their academic career paths.
One of the first and, as it would turn out, most notable of Seldin’s returnees was Joseph Goldstein, who had graduated from Southwestern in 1966 with plans to become a neurosurgeon. Seldin persuaded the brilliant young scientist to pursue genetics at Massachusetts General Hospital in Boston and then coaxed him into returning to Southwestern to teach and do research in the discipline. When Goldstein came back to Dallas, he brought with him fellow geneticist Michael Brown. “It sounded to me like a Bible school,” Brown would tell the Wall Street Journal in 1994. “But Joe kept telling about this wonderful Dr. Seldin and how he had already begun building a team better than Massachusetts General.” In 1977 Goldstein was named to head a new basic-science department.
Goldstein and Brown took full advantage of the academic atmosphere Seldin had championed at Southwestern: In 1985 their ongoing work on the genetic control of cholesterol metabolism in the body was awarded the Nobel prize for medicine—the first in the school’s history. Almost overnight, Southwestern became a viable option for young researchers weary of the morass of tradition and bureaucracy at other schools. And Southwestern had the money to suit their needs: State funding had jumped to more than $60 million, allowing the school to expand its space to more than 2 million square feet. More importantly, the Nobel at last gave the school’s leaders something tangible to show to the city’s new generation of philanthropists to get them to open their checkbooks. When oil and real estate crashed in the mid-eighties and state funding dipped by 13 percent, Southwestern had to get more private money. “We knew we were dealing with very smart, careful philanthropy,” says president Wildenthal. “We had to craft very specific proposals for donors.”
The sales pitches have centered on Southwestern’s commitment to biomedical excellence. Scientists were the school’s stars—Brown and Goldstein in genetics and molecular biology; Johann Deisenhofer, whose work in photosynthesis brought Southwestern its second Nobel, in 1988; and Alfred Gilman, who won the third Nobel, in 1994, for work in cellular communications.
The pitch to canny, science-minded donors like Ross Perot was the opportunity to have their names on research projects and educational programs that were on the cutting edge of scientific inquiry. Donors could match their interests to Southwestern’s research programs. Perot, for example, had long expressed a deep interest in ensuring that the school continue competing for and winning Nobel prizes. So, after a year of discussion, the persnickety billionaire agreed to fund the school’s M.D.-Ph.D. program. Big civic names like Aston, Zale, McDermott, and Jonsson adorn the center’s array of clinics and specialized educational centers. The list of its “lifetime benefactors” (donors who have given more than $1 million cumulatively) reads like a who’s who of North Texas’ wealthy and powerful—among them Perot, Moncrief, Seay, and Simmons.
For all its achievements and high local profile, however, Southwestern continues to be routinely excluded from rankings of top medical schools such as U.S. News and World Report’s annual scoresheet, largely because its student-to-faculty ratio remains relatively high and its mandated Texas-resident student base prevents it from recruiting the best and brightest students nationwide. The mean MCAT (Medical College Admissions Test, the entrance exam for medical schools) scores of its students are the highest among the state’s public medical schools at 31.7, compared with UT’s branches in San Antonio, 29.1; Galveston, 27.5; and Houston, 27.1. But the all-important figure still lags behind, say, the mean score of Harvard Medical School’s class of 2000, which is 33.8. Southwestern’s reputation in the research community is secure: A study tracking how often the published work of faculty members in all disciplines was cited by other authors between 1981 and 1994 placed Southwestern among the eight most-cited faculties in the country, a group that included Yale, Harvard, and Stanford. But outside the research community, Southwestern remains, to some extent, the little orphan who still isn’t taken quite seriously by those who look first at pedigree.
Retreat From Flexner
THE NINE FIRST-YEAR STUDENTS WHO ARE GATHERED IN A SMALL, dimly lit conference room represent a small chink in the armor of Big Science, a recognition that biomedicine is not all that a prospective doctor needs to know. It is a departure from traditional first-year medical education—an intrusion of clinical practice into what previously was a pure academic regimen. Leslie Chin and her fellow students have been asked to fashion a full diagnosis and treatment plan for a fictional patient, Kerri, a thirteen-year-old African American female who is “feeling sick to my stomach, short of breath, and tired.” The patient has a medical history of insulin-dependent diabetes mellitus and has had ten hospitalizations over the past year for diabetic ketoacidosis (DKA), a life-threatening complication of the disease.
Even a layman could make an educated guess that Kerri has not been taking her insulin as instructed, leading to her frequent flare-ups of DKA. Remedy: Make her take her medication. Problem solved. But it’s not that elementary—which is the point of this four-year-old course called Introduction to Clinical Medicine. As it turns out, the case raises social, psychological, and ethical issues that these physicians of the future must consider along with the bioscience of her condition. To begin with, Kerri has two primary caregivers, her mother and her grandmother. Her mother feels that the solution is for the adults in Kerri’s life “to stand over her and make her take it.” The grandmother, the students are told, has a more laissez-faire attitude. The medical problem is really a social problem. The cutting edge of medical education today is not only science but also a recognition that the demands upon doctors have come full circle since Flexner issued his criticisms. In the future, as in the pre-Flexnerian past, doctors are going to have to be counselors, advocates, even sociologists to do their jobs.
The attack on the idea that science was the beginning and end of medical education began in the sixties, when there were calls for the medical establishment to come down out of its ivory towers and apply its knowledge and skills to urgent social problems ranging from crime to poverty. There were also complaints that the National Institutes of Health was spending too much money on arcane scientific investigation and not enough on practical, clinical applications. But the force of Big Science was too strong. Medical schools were responding to the explosion of knowledge by producing the specialists that patients wanted. By the late eighties the medical profession was crowded with specialists and short on generalists; the shortage became more acute with the arrival of managed care and its need for primary-care physicians.
The retreat from Flexner became evident in a 1989 article in the American Journal of Clinical Pathology by Harvard professors Robert Colvin and Miriam Wetzel. The authors argued that because of the new pressures on modern medicine, “development of skills, values, and attitudes should be emphasized at least to the same extent as factual knowledge.” The doctors went on to explain that the best way to address the twin burdens of exploding biomedical knowledge and a rapidly changing health-care-delivery marketplace was to get students out of the lecture halls and into problem-based learning seminars—like the one at Southwestern involving Kerri. There they could learn medicine by example, even during their so-called preclinical years, the first two years of medical school. The Harvard doctors concluded that only such a curriculum could foster the “self-directed learning” that doctors of the future must practice to survive.
The Introduction to Clinical Medicine course at Southwestern represents an admission on the part of medical academe that the managed-care revolution has reached into the ivory towers. (Indeed, the school has explored the idea of a course in the business of medicine—how to form a doctors’ group, bid on a managed-care contract, process Medicare paperwork, and obtain grant money.) Leslie Chin and the other students break down Kerri’s case into five areas: ethics, human behavior, preventive medicine, basic science, and clinical medicine. They will each research one area during the next week and report back with findings and recommendations. They also discuss the psychosocial aspects of the case in exhaustive detail: Should the physician’s course be to try to impose her will on the mother and grandmother or to cajole them into getting Kerri to take her insulin? Or can both caretakers be bypassed and Kerri persuaded to take her insulin more regularly? In any event, do all parties know enough about the disease and the medication to act responsibly on their own? The seminar’s two “facilitators”—doctors who are present only to nudge the deliberations in certain directions—ease the students through the various options, trying to get them to understand that it is not enough for a doctor to simply diagnose DKA caused by patient noncompliance.
A week later the students return, ready to decide on a treatment plan for the fictional patient. Leslie lays out the basic science of the disease thoroughly and succinctly—showing that biomedicine can be learned outside the lecture hall with the help of computers, textbooks, and a practical exercise. The students then decide that family counseling is necessary. Because the patient is a minor under the care of two legal guardians, the doctors must make certain that all three are completely informed of the nature of diabetes, particularly its long-range secondary pathologies such as blindness and heart, renal, and nervous-system degeneration. One student suggests that perhaps the young girl should be encouraged to attend a diabetes camp, where she would meet peers with the same affliction and learn self-care; another recommends that the two guardians and the doctor devise a scheme of rewards for the girl for complying with the medication regimen. During the course of the conversation, subjects ranging from adolescent psychology to the family dynamics of inner-city African American families are discussed.
To the biomedical purist, such lengthy treks into the nonscientific territory of medicine by first-year students might seem like heresy. But the growth of such courses seems inevitable, even at Southwestern, which remains committed to bioscience first. The seminar proves that even first-year students can fashion a respectable diagnosis and treatment from scratch. And the exercise did teach the students to ask the right questions and where to look for the answers—in short, how to teach themselves. At a time when the basics of biomedical knowledge are almost incalculably large (it will have doubled by the time Leslie Chin and David Heise finish their residencies), these students need to know not only what they know but what they don’t know.
A Recurrent Headache
IN MID-NOVEMBER LESLIE CHIN FINALLY GETS her hands on a patient—not a real one, but a campus employee serving as a “dummy client” in a second clinical-medicine exercise. The charge to the students in this drill is to take a thorough case history from the “patient” based on a vague complaint—in this instance a recurrent headache. No professor is present; only six fellow students observe. Leslie’s performance will also be critiqued by the “patient” according to a lengthy checklist the school has given her. Later in the day, a professor will brief the students on potential testing and diagnoses that the case histories of their patients should have suggested to them.
For Leslie the exercise is a welcome break from the academic grind. In one five-day stretch in October, the first-year students had a three-hour biochemistry mid-term, a four-hour anatomy “walk around,” and an anatomy exam. Students studied ten to twelve hours a day. It was a humbling experience for Leslie. “I’m used to being among the best on every test,” she said afterward. “I did fine, but I wasn’t the best. It made me realize I’m competing with the best of the best here.”
Now she is meeting the middle-aged black woman who is her “patient.” After introducing herself, Leslie begins to ask questions: What kind of pain is the headache? Sharp or throbbing? How frequently do the headaches occur? How long has she been having them? What prompted this particular visit?
The patient is good at what she does. Like a typical patient, she answers the questions tiredly, cryptically, incompletely. You can see Leslie’s mind racing to find the questions she might have forgotten to ask; among other things, the challenge here is not to let the patient leave without revealing everything germane to an intelligent diagnosis. Leslie asks about the patient’s work and family, the amount of stress they cause in her life. She inquires about her vision. Throughout, she scribbles furiously on a small pad.
Finally, she extracts from the patient that the most recent headache was accompanied by some numbness down one side of her body—a critical fact, since it might suggest a neurological disorder. Wrapping up, Leslie says as informally as possible, “That’s all I need for now. The nurse will be in presently … oh, my God, I forgot to summarize!”
Her fellow students chuckle. Leslie’s interviewing has been efficient and, for the most part, incisive and thorough. But in her nervousness, she has neglected to recapitulate the information to the patient, a crucial double-check of the symptomatology. She gathers herself and races through a summary, discovering along the way that she’d forgotten a couple of things the woman had said and misunderstood one symptom. “And you forgot to ask me about other symptoms,” the patient informs her during the critique. “I was ready to tell you I’ve been vomiting too, but you never asked.”
“This is harder than you’d think,” Leslie reflects, as the students walk toward another room and another “patient,” this one complaining of a sore throat. “But I’ll know exactly how to do it the next time. It would have been easier if we didn’t have that biochem exam Monday hanging over our heads.”