Artificial intelligence (AI) has opened up new possibilities for patient care, providing doctors with increasingly sophisticated data. However, patients' access to reliable information has remained stagnant. The lack of comparative information about hospitals and healthcare providers makes it difficult for patients to make informed decisions about their own care.
In today’s world, there is more information available to help you bet on the performance of your local sports team than there is about the performance of your local hospital. This discrepancy is alarming, considering the critical nature of healthcare decisions. Whether you are dealing with cancer, in need of a knee replacement, or facing a serious heart surgery, finding reliable and up-to-date information is a challenge.
The emergence of chatbots powered by AI, such as ChatGPT and Google’s Bard, offers a solution to this issue. These AI tools provide patients with access to otherwise unavailable information. Now you can find a knee replacement surgeon in Chicago with the lowest infection rate, learn about the average survival rates for breast cancer patients in a renowned medical center in Los Angeles, or get recommendations for heart surgeons in New York.
However, the challenge lies in the word “reliable.” Generative AI has the tendency to hallucinate and sometimes fabricate data and information sources. As a result, some answers provided by AI may be accurate, while others are not, and many cannot be verified.
Paradoxically, even this state of affairs, in which valuable information arrives in frustratingly unreliable form, can be seen as a positive development. It highlights the potential of AI to empower patients with a wealth of new information, at a time when regulation and legislation on AI have become political priorities.
Both doctors and patients have become accustomed to discussing treatment advice found on the internet. However, the question, “What is the best way to heal?” is dramatically different from, “I found information about your previous work, and I’m concerned. Will you heal me or harm me?”
When patients arrive with data on a doctor's track record, such as infection or mortality rates specific to their condition, it puts doctors in a difficult position. They can admit to not knowing the exact figures, refuse to disclose the information, or engage in a discussion grounded in accurate data.
Only the last option, being open and transparent like AI but more reliable, maintains the vital element of trust between doctors and patients. Effectively, AI can serve as a means to enforce transparency, taking control of information out of the hands of those who manipulate it.
However, individuals and institutions that benefit financially from controlling information may resist this change. They may argue that the public needs to be “protected” from potentially inaccurate information about the quality of care provided by doctors and hospitals. As long-standing advocates of patient-centered innovations, we strongly believe in a different approach.
Instead of suppressing information, the focus should be on ensuring that every AI tool providing answers about clinical performance is as accurate and understandable as possible. Patients must be partners in their own healthcare, and all stakeholders should collaborate in defining the roles, rules, and relationships of information in the digital age. Despite slogans about a “consumer-driven approach” and “patient-centered care,” this transformation has not fully materialized.
For example, while hospitals are required to publish prices for 300 common procedures, this is often the extent of the information available to the public regarding "value." Data on the quality of care you receive for your money is scarce at best. Medicare's Compare website provides mortality rates for only six specific conditions and complication or infection rates for a handful of others, and it is often unclear how current those figures are. But certainly, no patient being wheeled into the operating room has ever thought, "I might not come out alive, but at least I got a great deal!"
The risk of misleading or misused information always exists, as exemplified by the controversies surrounding the U.S. News & World Report hospital rankings. However, fears of chaos or confusion should not justify delaying tactics that deprive patients of the extraordinary potential of AI. The revolution is already underway.
Government, private-sector entities, and patients all need to engage in focused collaboration to create an environment of policy and practice where this remarkable technology can enhance the lives of every individual. Radical transparency in information, however uncomfortable, must be one of the highest priorities in the age of AI healthcare information.
How can AI empower patients with information?
AI-powered tools like chatbots provide patients with access to valuable healthcare information that is otherwise difficult to find or outdated.
What are the potential risks of relying on AI-generated information?
As generative AI can sometimes fabricate data, there is a risk of receiving inaccurate or unverifiable information.
How can AI help improve the doctor-patient relationship?
By providing patients with data on a doctor’s performance specific to their condition, AI can encourage open and transparent discussions, thereby enhancing trust between doctors and patients.
What hurdles exist in implementing AI-driven healthcare information?
Institutions and individuals with vested interests in controlling information may resist the transparency brought about by AI. However, the focus should be on ensuring the accuracy and understandability of AI tools rather than suppressing information.
Why is radical transparency important in the age of AI healthcare information?
Despite the discomfort it may cause, radical transparency is crucial to ensuring the responsible and ethical use of AI in healthcare, ultimately benefiting patients and society as a whole.