Four Key Directives From The Biden Administration
President Biden issued an executive order setting new standards for AI safety and security, safeguarding Americans’ privacy, championing equity and civil rights, advocating for consumers and workers, and encouraging innovation and competition.
Public reaction to AI is mixed when it comes to trust and output bias. A Pew Research Center survey shows that many Americans feel uneasy about AI’s role in their healthcare: 60% of U.S. adults say they would be uncomfortable if their healthcare provider used artificial intelligence for tasks like diagnosing diseases and recommending treatments, while only 39% say they would feel at ease.
Below are four directives impacting healthcare.
HHS AI Task Force – Within a year of its creation, the task force will develop a strategic plan with appropriate guidance, including policies and frameworks that may incorporate regulatory action as necessary. The focus will be on responsibly deploying and using AI and AI-enabled technologies in the health and human services sector, spanning research and discovery, drug and device safety, healthcare delivery and financing, and public health.
AI Equity – The Executive Order calls for equity principles in AI-enabled technologies used in the health and human services sector. This involves using detailed, disaggregated data on affected populations and representative population datasets when developing new models, as well as actively monitoring algorithm performance to identify and mitigate discrimination and bias in existing systems.
AI Security – The directive mandates integrating safety, privacy, and security standards throughout the software development lifecycle, with a specific aim to protect personally identifiable information.
AI Oversight – The executive order directs the responsible development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing, encompassing quality measurement, performance improvement, program integrity, benefits administration, and patient experience. It stipulates that these activities include appropriate human oversight of AI-generated output.
What’s Next
Companies will comply with the requirement to share AI safety test results, but they will argue that doing so makes it harder to protect their intellectual property. Many will likely devise strategies to claim compliance on paper rather than meet the regulation’s detailed objectives. While the challenges AI presents are widely recognized, the industry’s struggle lies in finding workable solutions.
Suki AI CEO Punit Soni said, “The executive order is much needed and is a reasonable step as it demonstrates that AI is front and center for the administration. It organizes a few principles that should be front and center for AI companies. However, there are some flaws. The technology of today will be the table stakes of tomorrow. Regulations such as basing notifying infrastructure on the size of AI models is not useful, and carries the risk of creating onerous requirements, especially for smaller companies. What should be considered is an AI version of HIPAA, where there is detailed guidance on data usage, infrastructure, training data, and how to tackle bias, model leakage, and other factors in a way that promotes safety, national security, and equality while preventing regulatory capture by large companies. It’s imperative to have industry representation in these committees and task forces that are representative of the vibrant startup AI ecosystem, not just those who can afford lobbying power.”
Healthcare CIO James Wellman of Nathan Littauer Hospital and Nursing Home is concerned about the Executive Order’s potential impact on the pricing of AI products. Wellman said, “The EO requires HHS to create a regulatory unit that can monitor emerging AI tools, review them prior to public release and to track their performance and outcomes once they’re being used. I understand this is intended to stop potentially harmful practices developed by AI, but I do have concerns on the impact of the market to react quickly and also create a price gap that places the technology out of reach of hospitals in smaller markets due to the overhead. The lure of AI to offset reduced resources in small healthcare markets can be very appealing and allows us to offer even better care for our patients, families and residents. But can we afford it?”
Big tech is focused on harnessing AI’s potential to sharpen clinical judgment, cut down administrative tasks, and offer lifesaving predictive features. Only time will tell whether this executive order achieves results quickly. As AI technology advances rapidly, the pressing question remains: will the regulations keep pace in our ever-evolving technological landscape?