On the other side of town, ONC is busy apologizing for the sorry state of what it calls “interoperability”, blaming everything from the lack of standards to people’s inability to agree on a restricted set of vocabularies for the medical profession. According to the ONC philosophy of interoperability, only “computable” data can be exchanged or analyzed in a meaningful way. In other words, all medical professionals must learn to express themselves in a standardized way that computers can understand. To that end we have ICD-9, ICD-10, SNOMED-CT, LOINC, RxNorm and all sorts of other terminologies and vocabularies aimed at restricting the English language to the limited computational abilities of available EMR software. How do you say “Mr. Smith is a pleasant 82-year-old gentleman with a sad demeanor” in SNOMED? You don’t. You dispense with the pleasantries (pun intended) and diagnose Mr. Smith with depression. The Sapir-Whorf linguistic relativity hypothesis is by no means a settled subject, but if it contains any truth and vocabulary does affect cognition, then how will restricting clinical vocabulary affect the cognitive abilities of its users over time? We don’t know, and frankly I am not interested in finding out.
The folks at IBM took a different route. Paraphrasing a saying loosely attributed to Sir Francis Bacon, “If the mountain won't come to Muhammad, then Muhammad must go to the mountain”, Dr. Watson’s creators must have decided at some point that if the doctor won’t come to the computer, then the computer must go to the doctor. Instead of framing the problem by asking how we can change human communications to better enable the current generation of computers to “understand” humans, IBM began by figuring out how to change computers to better enable them to understand current forms of human communication. Thus, Dr. Watson learned to read books and articles and all sorts of “unstructured” information, because no matter how hard the powers that be are trying to fit the square peg of human language into the round hole of computer language, and tragically vice versa, most information generated by people is in their natural language, and Dr. Watson was programmed to process natural language. So if Dr. Watson is able to “read” half a million pieces of text of various heft in a second or so, how long would it take it to read an old-fashioned paper chart, or an electronic rendition thereof? I am pretty sure that if you ask nicely, Dr. Watson would be happy to rearrange that chart for you in any way you choose, while pointing out the parts most pertinent to your current objective and highlighting discrepancies, missing information and redundancies, all in a picosecond or less. And interestingly enough, IBM’s developers thought it wise to take a generalized path to Watson’s education, instead of creating specialized Watsons, each with linguistic abilities specific to a domain. Seems more human-friendly that way…
The IBM Watson software line is not an EMR, but it can process and analyze the information in an EMR. It is really an attempt at artificial intelligence: a gigantic contextual search engine, coupled with lots of very sophisticated, self-generating algorithms that both analyze and inform the search itself. Watson doesn’t need to have the smoking-status check box clicked in order to infer that the patient is or is not a smoker, and it doesn’t need a new standard defined before it can read a patient’s family history. True, Watson is pretty new software, and folks have been tinkering with natural language processing and artificial intelligence for half a century without much success, but things are beginning to coalesce now, and technology in the next decade will look very different. Is it really wise for our government to spend so much money and invest so much effort in building and enforcing the use of tools that are becoming obsolete faster than they are created? My hat is off to the VA and DoD, which gave up on the strange and expensive idea of building their own EMR from scratch (better late than never). I think it’s high time that other governmental agencies got out of the EMR design business as well, because there are companies out there whose core competency is technology and whose innovation budgets are large enough to build the next generation of health IT. Consider this: what if Dr. Watson had a few less-educated siblings serving as medical secretaries, summarizing, abstracting and relaying information back and forth, on demand? All of a sudden the shape, form, functionality, standardization and all the “meaningful” bells and whistles in an EMR are rendered irrelevant, and using Microsoft Word to type or dictate your note is as good as using a “certified” EMR, or much better, because the context is so much clearer and so much more forthcoming.
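To make the smoking-status point concrete: Watson’s actual pipeline is proprietary and vastly more sophisticated, but the basic idea of inferring a structured fact from free text instead of a clicked checkbox can be sketched with a toy rule-based extractor. Everything below (the rules, the function name, the categories) is my own illustration, not IBM’s method:

```python
import re

# Toy illustration: infer smoking status from a free-text note,
# rather than requiring a clinician to click a checkbox.
# Real clinical NLP handles far more variation, context and negation.

NEGATIONS = re.compile(r"\b(denies|never|no history of|quit|former|non[- ]?smoker)\b", re.I)
MENTIONS = re.compile(r"\b(smok\w*|tobacco|cigarette\w*|pack[- ]?year\w*)\b", re.I)

def smoking_status(note: str) -> str:
    """Return a coarse smoking status inferred from a clinical note."""
    if not MENTIONS.search(note):
        return "unknown"        # smoking never mentioned in the note
    if NEGATIONS.search(note):
        return "non-smoker"     # mentioned, but negated or in the past
    return "smoker"             # mentioned affirmatively

notes = [
    "Mr. Smith is a pleasant 82 year old gentleman; denies tobacco use.",
    "Patient reports smoking one pack per day for 30 years.",
    "Presents with knee pain after a fall.",
]
for n in notes:
    print(smoking_status(n))   # → non-smoker, smoker, unknown
```

A dozen lines of regex obviously cannot read a chart the way Watson can, but the contrast it illustrates is the point of this whole argument: the fact was always sitting in the narrative, and the question is whether software adapts to the narrative or the clinician is forced to adapt to the software.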
Whether or not it can yet pass a Turing test, Dr. Watson is not a real doctor, and it will not become one in our lifetime. Dr. Watson has no free will, and everything it knows is dictated by the corpus of knowledge made available to it by its creators or employers. Huge ethical and legal questions will be raised by software capable of supplanting human decision making, and by software that can be centrally deployed and manipulated to that end. Even before that future arrives, it is worth noting that Dr. Watson is simultaneously employed in oncology clinics and by payers, and in my opinion Dr. Watson has one button too many: the direct button to the insurance company, which will automatically approve payment for Dr. Watson’s top recommendations, but presumably not so much for other choices. Like all technologies, Watson embodies hope for the greater good along with great new perils for ordinary people. Leaving these philosophical questions aside for a moment, the only certain thing is that Dr. Watson is starting its brilliant new career by introducing a cure for one very painful disease that is reaching pandemic proportions amongst medical professionals: clicking boxes.