Don’t fear AI in healthcare and do embrace ChatGPT as a valuable time saver
If I had to select a game changer in healthcare in the past year, it would clearly be the use of artificial intelligence (AI) and ChatGPT (Generative Pre-trained Transformer) for managing both clinical and nonclinical aspects of healthcare. This blog will discuss a brief history of ChatGPT and its potential applications for medicine.
ChatGPT is a large language model that has been trained on massive volumes of internet data. It attempts to imitate human-generated text and can perform various roles in just about every field, including healthcare and health research.
The program draws on large amounts of data, using computing techniques that predict how to string words together in a meaningful way that mirrors intuitive human conversation. This technology has captured the attention of millions who have employed it to write everything from songs and poetry to essays and Python code.
In medicine, ChatGPT is a major topic of conversation among providers and patients. This innovation has the potential to automate daily tasks like generating patient records or writing reports. While still in the early stages of development and use, it is projected to feature diagnostic and treatment use in the future.
Brief history of ChatGPT
ChatGPT, which is known as a general purpose chatbot, was developed by OpenAI and was launched as a prototype on November 30, 2022. By mid-December 2022, the program had over 1 million users. At the present time, OpenAI is allowing the public to use and experiment with the technology at no charge. ChatGPT Plus is available at a cost of $20 a month. OpenAI’s current market valuation is around $80 billion (a moving target) with about 200 million ChatGPT users.
OpenAI’s isn’t the only chatbot out there. Companies large and small are rolling out similar AI text generators.
Medical applications of ChatGPT in practices
I recommend starting the ChatGPT experience by assisting with mundane tasks like writing back-to-work certificates, medical exemptions from jury duty and letters to insurance companies to pay for patients’ expensive medications. In the beginning ChatGPT can function as a high-level personal assistant to speed up bureaucratic tasks and increase time for patient interaction.
But it could also assist in more serious medical activities such as triage algorithms to choose patients who can get access to dialysis for kidney failure, ICU beds or other resources that are often limited. The technology could be used to enroll participants in clinical trials.
It is too early to know all the ethical implications of the adoption of ChatGPT in healthcare. The more it is used, the clearer the implications will get.
Reduce administrative work with this AI text generator
Let me provide an example of using this AI text generator to automate administrative tasks. Obtaining prior authorization (PA) consumes more than 16 hours of uncompensated physician time each week and is considered a veritable nightmare for most medical practices.
ChatGPT can be used to streamline the process. What currently takes 30-60 minutes per PA can be reduced to less than two minutes. Let me provide an example used in a urology practice. I wanted to write a letter to the patient’s insurance company requesting prior authorization for an mpMRI to aid in the diagnosis of prostate cancer.
I opened ChatGPT and provided the patient’s clinical information. But I did not include the patient’s name or any other identifying information, since I wasn’t sure if the site was encrypted or if I was at risk for a HIPAA violation. I gave the AI text generator the following instructions:
Write a letter to UnitedHealthcare regarding patient John Doe, age XX, who has an elevated PSA, 8.3 ng/mL, and a free/total PSA ratio of 19% (normal is greater than 25%). I am requesting a multiparametric MRI and a prostate biopsy to complete his evaluation for prostate cancer. Please provide recent references regarding the appropriateness of this procedure. Signed Dr. Neil Baum.
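For practices that script this step rather than typing into the chat window, the identifier-free prompt can be assembled programmatically before it is ever sent to an external service. This is a minimal sketch, not a production workflow; the function name and the `<NAME>`/`<AGE>` placeholder tokens are my own illustrative assumptions:

```python
def build_pa_prompt(payer, psa, ratio, procedure):
    """Assemble a prior-authorization prompt with no patient identifiers.

    The name and age are left as placeholder tokens, so the chatbot never
    sees protected health information; staff fill them in locally afterward.
    """
    return (
        f"Write a letter to {payer} regarding patient <NAME>, age <AGE>, "
        f"who has an elevated PSA, {psa} ng/mL, and a free/total PSA ratio "
        f"of {ratio}% (normal is greater than 25%). I am requesting "
        f"{procedure} to complete his evaluation for prostate cancer. "
        "Please provide recent references regarding the appropriateness "
        "of this procedure."
    )

prompt = build_pa_prompt(
    payer="UnitedHealthcare",
    psa=8.3,
    ratio=19,
    procedure="a multiparametric MRI and a prostate biopsy",
)
```

Keeping identifiers out of the prompt by construction, rather than by remembering to omit them, is one way to reduce the HIPAA exposure described above.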
AI’s letter to UnitedHealthcare
The following letter was generated in less than 45 seconds and another 60 seconds was required for the nurse to add the patient’s name and identification number:
I am writing to request approval for a multiparametric MRI and prostate biopsy for patient <John Doe> age XX, <who holds membership number XXXX>.
Upon examination, Mr. <Doe’s> PSA levels were found to be elevated at 8.3 ng/mL, with a free/total PSA ratio of 19% (normal is greater than 25%). Given these findings, I strongly recommend a multiparametric MRI and prostate biopsy to assess the condition of Mr. Doe’s prostate gland and determine if further treatment is necessary.
I request that you approve this study for Mr. Doe’s benefit and ensure that he receives the care he needs.
I am including recent references that document the appropriateness of mpMRI in the management of men with an elevated PSA level. These are references from peer-reviewed literature on the value of mpMRI in the management of patients with elevated PSA levels. I believe it is essential to remain up-to-date on the latest advancements and recommendations in the field.
Thank you for your attention to this matter. Please do not hesitate to contact me if you have any questions or require additional information.
Dr. Neil Baum
Following are some of the references ChatGPT included.
- European Association of Urology (EAU) guidelines for prostate cancer.
- Prostate Imaging-Reporting and Data System (PI-RADS).
- European Society of Urogenital Radiology (ESUR) guidelines for prostate MRI.
This prior authorization request was faxed to the insurance company and approval was obtained in 24 hours. Had the request been made by one of the nurses over the phone, approval could have taken days or weeks. This is just one example of how effective ChatGPT can be in the medical office setting.
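The final step, in which the nurse adds the patient’s name and identification number to the chatbot’s draft, amounts to a simple local substitution that never leaves the practice’s own system. A minimal sketch, assuming placeholder tokens like `<NAME>` and `<MEMBER_ID>` in the generated draft:

```python
def fill_identifiers(letter_template, name, member_id):
    """Substitute patient identifiers into the chatbot's draft locally.

    Because this runs inside the practice's own system, protected health
    information is never transmitted to the external AI service.
    """
    return (letter_template
            .replace("<NAME>", name)
            .replace("<MEMBER_ID>", member_id))

draft = ("I am writing to request approval for a multiparametric MRI and "
         "prostate biopsy for patient <NAME>, who holds membership number "
         "<MEMBER_ID>.")
letter = fill_identifiers(draft, "John Doe", "123456")
```

The 60 seconds of nurse time described above could thus be reduced further, while keeping the privacy boundary intact.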
Ethical issues of using such chatbots
Whenever there is innovation in healthcare, such as robotics, digital technology, improved diagnostics and new therapeutics that can change healthcare for the better, there will also be ethical challenges.
Incorporating this new technology in patient care and medical research raises several ethical concerns, and using it could lead to harmful unintended consequences. These concerns relate to confidentiality, consent, quality of care, reliability and inequity.
Downside of chatbots: AI hallucinations & HIPAA violations
ChatGPT suffers from multiple limitations. OpenAI acknowledged that ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers, often called AI hallucinations, which some doubt can ever be fully corrected. ChatGPT originally had limited knowledge of events that occurred after 2021, but ongoing updates are changing that limitation.
ChatGPT runs the risk of committing privacy breaches. If identifiable patient information is entered, it may become part of the data the chatbot draws on in the future. In other words, sensitive information is “out there” and vulnerable to disclosure to third parties.
The extent to which such information can be protected is not clear. Therefore, as I described in my letter to UnitedHealthcare, I did not enter any information that might be a HIPAA violation.
Confidentiality of patient information forms the basis of trust in the doctor-patient relationship. Chatbots threaten this privacy and may expose doctors to litigation.
Another ethical concern relates to the provision of high-quality healthcare, which is traditionally based on evidence-based medicine. Using chatbots to generate evidence has the potential to accelerate research and scientific publications. However, ChatGPT does not provide the latest references, though OpenAI is working on that. Another issue is the reports that ChatGPT fabricates references, compromising the integrity of the evidence-based approach to good healthcare.
Good quality evidence is the foundation of medical treatment and medical advice. Providers and patients use various platforms to access information that guides their decision making. But chatbots may not be adequately resourced in their development to provide accurate and unbiased information.
Bottom line: Like other innovations in healthcare, ChatGPT has challenges and further testing to undergo before achieving widespread use. Soon this technology will evolve to the point that it can design experiments, write and complete manuscripts, conduct peer review, and support editorial decisions to accept or reject manuscripts. The future of ChatGPT is very exciting, and its train is leaving the station – All aboard!