Meanwhile in Austria, at the USAB 2011 conference on eHealth, hundreds of health and technology professionals gather to discuss topics relevant to information flow, patient empowerment, and clinical decision-making.
The first of four (four!) keynote presentations was given by Vimla Patel of Columbia University, whose interests lie in the quality of eHealth data. In the talk, Cognitive Approaches to Clinical Data Management for Decision Support: Is It Old Wine In a New Bottle?, Dr. Patel argued that indeed it is new wine in a new bottle. This post chronicles my reactions to the talk. Dr. Patel, if you happen upon this post, please know that I am quite jet-lagged and had had three of the complimentary espressos, in rapid succession, shortly before your talk. I do not intend this as an apology, but as an explanation, and as a hope that you will not hate me for expressing my views so plainly. I don't buy it, and here, I explain why.
Patients are in danger!
The problem was outlined thus: Information technology impacts patient safety. There is simply not enough evidence that current information technology systems are good for patient safety -- in fact, they might be detrimental. One of the reasons is that there is no accountability for these systems. According to Dr. Patel, in many cases, systems are designed and deployed by engineers without consulting clinicians or patients, and without anyone taking proper responsibility for keeping the systems current with new ideas and trends.
Federal regulations
A way to address this? Dr. Patel suggested: Technology should be monitored by an agency or government; there should be federal regulations on software released for eHealth purposes.
Pardon me while I gather my jaw from the floor. Right off the bat, I can think of at least two reasons this will never work.
First: The design-development cycle would be too cumbersome. Can you imagine being the poor programmer who has to succumb to federal regulations, to laws and restrictions, to government-imposed checks and balances? Can you imagine trying to add a new feature, a new decision flow, or a new interaction method? I thought the Apple Developer cycle was bad; this would be murder.
Second: Regulations on software are restrictive. No, I don't have a citation for this, you overachiever. I know from my experience and from the experiences of all of my colleagues that regulation is inversely proportional to the success of a software product. While it is true that some restrictions spark creativity, what we are talking about here is a severe impediment to the development process.
Here's a bonus. Third: Every hospital, every office, every provider has different requirements. How do you federally regulate that kind of variation, with custom instances of the same product at every site? It is a nightmare.
Why Electronic Health Records (EHRs) suck
Problems with current electronic health record (EHR) systems include the following.
In EHRs, information is structured temporally, which reflects how clinicians gather the data. As a clinician, you see a patient, you take notes; you see the patient again, you take more notes. Over time, this builds a time-oriented view of the patient's health. But the problem is that clinicians do not think about patient health this way. They think in terms of symptoms and the relationships between symptoms, tests, and diagnoses.
So the question is how to store the data in a way that is fundamentally useful to clinicians, and how to retrieve and display it in the same way that they think about it. There is too much data, too much redundant data, and too many sources of related data. There is a mismatch between the cognitive processes of clinicians and the way the data is stored and represented.
The conclusion Dr. Patel drew is that usability studies and requirements gathering for these EHR systems are poor. By involving users (i.e., clinicians) in the process early and often, she argued, we can explicitly retain in software the relational structure that clinicians use in real life: a complex mental model of vital signs linked to symptoms linked to potential diagnoses. A directed graph of thoughts and decisions. Understand what people want, she said. Test iteratively with users, she said.
I thought: Don't be afraid to say participatory design.
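To make the mismatch concrete before moving on, here is a minimal sketch of the two representations. It is mine, not Dr. Patel's, and every name and value in it is made up: on one side, the time-ordered note log that an EHR stores; on the other, the directed graph of vitals, symptoms, and candidate diagnoses that a clinician actually reasons over.

```python
from collections import defaultdict
from datetime import date

# What the EHR stores: an append-only, time-ordered log of encounter notes.
encounters = [
    (date(2011, 3, 1), "BP 150/95; patient reports headaches"),
    (date(2011, 4, 12), "BP 148/92; headaches persist"),
]

# What the clinician reasons over: a directed graph linking observations
# to symptoms to candidate diagnoses.
class ClinicalGraph:
    def __init__(self):
        self.edges = defaultdict(set)   # node -> nodes it points to

    def link(self, source, target):
        self.edges[source].add(target)

    def related(self, node):
        return self.edges[node]

graph = ClinicalGraph()
graph.link("BP 150/95", "hypertension")
graph.link("headaches", "hypertension")
graph.link("hypertension", "possible dx: preeclampsia")

# Retrieval in the clinician's terms (by relationship), not the EHR's (by date):
print(graph.related("hypertension"))   # {'possible dx: preeclampsia'}
```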
User study is not enough
Dr. Patel never outright said it, but it is a question of tagging and metadata and, most certainly, provenance. In effect, the question is the same as in any large-scale file system (think peta-scale): how can you predict which data the user will want to retrieve? I refer the reader to the work of the Storage Systems Research Center, which has been tackling this problem in full force.
Sure, representing the data is important. As with any file system (let's face it; that's what we are talking about here), we can know everything about what users want, but it may be fundamentally impossible to deliver this kind of system. Big data have an inherent bottleneck at retrieval; they have an inherent bottleneck at storage and archival.
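For concreteness, here is roughly what tagging and provenance could look like at the level of a single record. It is a sketch with invented fields, not any real system's schema.

```python
# A sketch of record-level metadata with provenance, so that retrieval can be
# driven by who recorded what, when, and from where. All fields are invented.
record = {
    "value": "BP 150/95",
    "tags": ["vital-sign", "blood-pressure", "hypertension"],
    "provenance": {
        "author": "RN on duty",
        "source": "bedside monitor",
        "recorded_at": "2011-03-01T09:30:00",
        "derived_from": None,   # or a pointer to the upstream record
    },
}

def retrieve(records, tag):
    # Predicting what a clinician will want means indexing on tags like these
    # up front, rather than scanning the whole temporal log at query time.
    return [r for r in records if tag in r["tags"]]

print(retrieve([record], "blood-pressure"))
```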
Nobody likes to be wrong
In real life, clinicians draw logical conclusions in a guess-and-check fashion: given a set of symptoms gathered from charts, nurses, attendings, and other sources of information, they build a mental model of the potential problems and solutions, which can then be confirmed or refuted. In the ideal scenario, the clinicians would chart these decisions and potential diagnoses. They would chart, in this system, anything that they considered potentially important in the future.
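If clinicians really did chart their guess-and-check reasoning, each entry might look something like the sketch below. This is my own invention; the talk proposed no concrete schema, and the diagnosis and evidence here are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    # One entry in a hypothetical chart-your-reasoning log: a candidate
    # diagnosis, the evidence behind it, and what became of it.
    diagnosis: str
    supporting: list = field(default_factory=list)   # symptoms, labs, vitals
    status: str = "open"                             # open / confirmed / refuted

    def refute(self, reason: str):
        self.status = f"refuted: {reason}"

h = Hypothesis("preeclampsia", supporting=["BP 150/95", "headaches"])
h.refute("urine protein within normal range")
print(h.status)
```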
Oh god! So many problems!
First, think about the paperwork overhead. Electronic paperwork, whatever. Sure, in the ideal world with infinite time and infinite memory (as they say in computer science), doctors would save all of their thoughts.
Second, think of the liability. I am not even talking about not wanting to be wrong, which, of course, everyone feels. It is well studied in elderly patients with dementia: they will not admit to forgetting appointments or missing meetings. People won't chart wrong guesses. Being wrong is bad. For a clinician, being wrong leads to liability. Misrepresenting a symptom in a way that leads to a missed diagnosis leads to liability. How can you prove your motives were good when the patient's health was compromised?
Third, is this another way to minimize patient interaction? Look, in labor and delivery in the US, the average doctor spends something like 2 hours, 41 minutes with her patient, total, throughout her entire average 10-month pregnancy, including the 24-hour birth. With such a system, will it mean that a doctor no longer needs to spend quality time with her patient, but can instead spend that time mining data? I do not dispute that, in aggregate, data gathered over time in a particular facility can be powerful. But what happened to patient-centered care?
It's different on paper
Electronic health records have a different set of abstractions and a different information flow (and hence, a different set of mistakes one can make) than paper-based ones. For paper-based health records, the flow goes from basic concepts (such as vital signs), to intermediate constructs (what to do with the vitals: e.g., compare to normal, compare to expected, compare over time), to heuristics (visualization and diagnosis). Concrete to abstract. But most experts do not bother writing down the basic concepts because it is inefficient, much in the same way you do math in your head or play chess without writing down the possible moves. For EHRs, the flow goes from heuristics, to intermediate constructs, to basic concepts. Abstract to concrete. The overlap is at the intermediate constructs, and the question is how to move them from the head to the computer.
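The intermediate constructs are the piece that could plausibly live in software. Here is a rough sketch of one such construct; the reference range and thresholds are made up for illustration and are not clinical guidance.

```python
# One "intermediate construct" made explicit: compare a vital sign to a
# normal range, to the patient's own baseline, and to its trend over time.
# Reference values are illustrative only.
NORMAL_SYSTOLIC = (90, 120)

def assess_systolic(readings, baseline):
    # readings: list of (date, systolic) pairs, oldest first.
    latest = readings[-1][1]
    notes = []
    if not (NORMAL_SYSTOLIC[0] <= latest <= NORMAL_SYSTOLIC[1]):
        notes.append("outside normal range")
    if latest > baseline * 1.1:
        notes.append("above this patient's baseline")
    if len(readings) >= 2 and latest > readings[0][1]:
        notes.append("trending upward")
    return notes or ["unremarkable"]

print(assess_systolic([("2011-03-01", 138), ("2011-04-12", 150)], baseline=118))
```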
I imagined WebMD, the website that, given an input of real or imagined symptoms, spits out a list of things that could be killing you subtly or not-so-subtly. The output from WebMD is potentially useless. You have a stomach cramp and a headache? It could be a brain tumor and pregnancy.
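That failure mode is easy to reproduce: map symptoms straight to conditions with no intermediate constructs in between, and the output is the union of everything. The mapping below is entirely invented.

```python
# A naive symptom-to-condition lookup with no intermediate reasoning.
# The mapping is invented; the point is that the output is a useless union.
CONDITIONS = {
    "stomach cramp": {"indigestion", "appendicitis", "pregnancy"},
    "headache": {"tension headache", "dehydration", "brain tumor"},
}

def lookup(symptoms):
    possible = set()
    for s in symptoms:
        possible |= CONDITIONS.get(s, set())
    return possible

print(lookup(["stomach cramp", "headache"]))
# everything from indigestion to a brain tumor to pregnancy
```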
Disempowered
Of course, the tool Dr. Patel described would need to be understood by a doctor, or by someone else medically trained. In fact, she said, in some cases you do not want the patient to know at all. There are cases in which the patient should not have access to these private thoughts of doctors. With the exception of one situation that I do not have the training to understand, namely adolescents seeking psychiatric care (if I were said patient, I would damn well like to know what the doctor thinks!), I thought it was a huge oversight that the system would be unusable by anyone without proper training. Make it understandable, she said, for the doctor.
What about patient empowerment? What about patient information? In Germany, a doctor will sit alongside the patient to look through a clinical workflow, and they will decide together, collaboratively, on the proper treatment. Why is there not more of this worldwide? And why not just teach the patient?
Dr. Patel said the goal is to move towards patient-centered cognitive support for the clinician. I realize that this is the goal, but with this technology, I worry that we are removing real interactions between the clinician and the patient in favor of data collection. We are in a digital age where we teeter on worshipping data: in some ways, we hold data above all other things. We hold data collection, for example, above the real-life interactions, the real time that doctors and nurses used to spend with patients and now spend writing down things about their brief encounters.
Finally, and then I will stop ragging on this keynote, what about evidence-based medicine? Why was it never mentioned, alone or in conjunction with "patient-centered" care? Why are we increasing the clerical burden on care providers while diminishing the very people who are meant to do good -- removing them from the patient and treating their thoughts, education, and logic, which make them unique and valuable, as interchangeable with those of any other doctor, clinician, or robot?
Now, take this with a grain of salt, because my triple caffeine buzz is wearing off. I was pretty excited about this talk when it began: the initial idea was that medical technology, and electronic health record systems in particular, are possibly doing harm to the patients they intend to serve. But by the end, it was clear that the only take-away, for me, is that more user study is needed for electronic health records, to determine what doctors need and, in doing so, to make the doctors themselves disposable. As a patient and as a researcher, I feel disempowered.
But it is an interesting file systems problem.