Munjal Shah Advocates for AI “Super-Staffing” to Address Health Care Worker Shortages
At the recent 2023 HLTH conference, Munjal Shah, CEO of Hippocratic AI, joined a panel discussion on the use of artificial intelligence in healthcare. During the panel, titled “There’s No ‘AI’ in Team,” Shah focused on how generative AI could help alleviate widespread staffing shortages in non-diagnostic medical roles.
Shah began by outlining the severity of the problem. He cited statistics from the World Health Organization predicting a global deficit of 10 million healthcare workers by 2030. But he emphasized that overburdened systems and underserved populations are already contending with staffing crunches. Given the scale of this crisis, Shah views generative AI as an essential tool to ensure patients worldwide can access quality, comprehensive care.
What exactly does he mean by “generative AI”? Shah is referring to large language models (LLMs) that can hold conversations and reason across many documents. Unlike AI built for narrow classification tasks, generative AI aims to interact naturally through open-ended dialogue. The company Munjal Shah founded, Hippocratic AI, focuses on training LLMs to provide virtual assistance for non-diagnostic health services.
Examples include chronic care nursing, explaining insurance benefits, delivering test results, and answering patients’ pre- and post-operative questions. Traditionally, these roles cost over $100 per hour to staff with human nurses and specialists. But LLMs cost only around $1 per hour and aren’t subject to human constraints like burnout or limited hours. This potential for near-limitless scale is why Shah calls the approach “super-staffing.”
However, successfully implementing generative AI requires careful training and oversight. Hippocratic AI has hired thousands of medical professionals to test its LLMs. The goal is for human experts to judge whether the LLM’s responses mirror what a human colleague would say. Over time, reinforcement learning with continuous human feedback allows the LLM to improve. Coupling this with “overtraining” on authoritative health sources helps ensure the AI is trustworthy.
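To make that feedback loop concrete, here is a minimal sketch of what a human-review pipeline like the one described might look like. Everything in it is an illustrative assumption for exposition: the function names, rating scale, and data shapes are hypothetical, not Hippocratic AI’s actual system.

```python
# Hypothetical sketch of a clinician-in-the-loop review pipeline.
# All names (ClinicianRating, collect_feedback, etc.) are illustrative
# assumptions, not Hippocratic AI's real implementation.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ClinicianRating:
    prompt: str      # patient-style question posed to the model
    response: str    # the model's answer
    score: int       # e.g., 1 (unsafe/unlike a colleague) to 5 (indistinguishable)
    comment: str = ""

def collect_feedback(
    prompts: list[str],
    generate_response: Callable[[str], str],
    rate: Callable[[str, str], ClinicianRating],
) -> list[ClinicianRating]:
    """Have human medical reviewers score model answers to patient-style prompts."""
    return [rate(p, generate_response(p)) for p in prompts]

def to_preference_data(ratings: list[ClinicianRating], threshold: int = 4):
    """Split rated answers into positive and negative examples that a
    reinforcement-learning-from-human-feedback step could train on."""
    positives = [(r.prompt, r.response) for r in ratings if r.score >= threshold]
    negatives = [(r.prompt, r.response) for r in ratings if r.score < threshold]
    return positives, negatives

if __name__ == "__main__":
    # Stub model and reviewer, just to show the loop end to end.
    demo = collect_feedback(
        prompts=["When should I take my blood pressure medication?"],
        generate_response=lambda p: "Take it at the same time each day.",
        rate=lambda p, r: ClinicianRating(p, r, score=4),
    )
    print(to_preference_data(demo))
```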
Ultimately, generative AI is intended to augment human capabilities, not replace them outright. Shah noted during the panel that safe AI can’t be built in a lab alone: companies must partner with health systems and establish governance to ensure responsible usage. But used judiciously under human supervision, AI could fill staffing gaps and dramatically expand access to care.
Other panelists echoed this balanced perspective. Jessica Beegle of Lifepoint noted that AI isn’t a panacea for healthcare’s problems, but targeted applications, such as automating back-end paperwork, could free up providers’ time for patients. Wardah Inam of Overjet emphasized human-centered design and testing with real users to create AI that people actually want to use. And Shan Jegatheeswaran of Johnson & Johnson said AI should enhance clinicians’ abilities, not diminish human relationships.
The panel’s premise was that a “centaur” approach, blending AI with human expertise, will shape most healthcare decisions moving forward. Given the enthusiasm at HLTH for generative AI’s potential, companies like Hippocratic AI are poised to make this vision a reality. But they’ll need to proceed cautiously and collaborate with healthcare stakeholders every step of the way. As Shah remarked, true success will come when “the technology for patient-facing conversations” meets human empathy and compassion.