It's unclear who is responsible when an artificial intelligence system fails, according to a new paper in the New England Journal of Medicine by Stanford researchers.
As a result, doctors and hospitals are taking a risky gamble when implementing this technology.
“Courts will need to evolve some of their existing regulatory principles to create remedies for plaintiffs against software developers,” said health law scholar Michelle Mello, one of the paper's authors.
Even so: Mello worries that without such evolution in the courts, liability could fall too heavily on clinicians who rely on AI copilots when the software isn't safe, a concern shared by the American Medical Association.
The opaque inner workings of AI systems can change as they incorporate new information, making it much harder under current legal standards to prove that AI contributed to a particular injury.
And because AI systems can draw on billions of variables to reach an answer, it can be difficult for plaintiffs to identify product defects specific enough to win in court.
In reviewing the limited case law, the authors note that courts — at least so far — appear to be “reluctant to create new rules specifically regarding AI.”
Advice for providers: The authors say doctors and hospitals should consider archiving their AI systems' outputs in case they need to submit them as evidence.
“They should also insist on favorable terms governing liability, insurance, and risk management in AI licensing agreements,” the authors write.
Here, we explore the ideas and innovators shaping healthcare.
A California county is the first in the nation to declare loneliness a public health emergency, NBC News reported. San Mateo County officials' ideas for improving social connection include investing in more walkable neighborhoods and partnering with social media platforms on community meetups.
Share your thoughts, news, tips and feedback with Carmen Paun ([email protected]), Daniel Payne ([email protected]), Ruth Reader ([email protected]) or Erin Schumaker ([email protected]).
Send tips securely through SecureDrop, Signal, Telegram or WhatsApp.
It turns out the agreement European countries reached in December to regulate artificial intelligence may not mean as much for health care as initially thought.
Why? Last week, before EU member states unanimously approved the Artificial Intelligence Act, the EU's executive body, the European Commission, promised Germany that the regulation would not apply to the use of AI in medical devices.
That's according to a spokesperson for Germany's digital and transport minister, Volker Wissing, the leading AI Act skeptic in Germany's coalition government, who spoke with POLITICO's Gian Volpicelli.
Why it's important: Before the concession, Germany and France had threatened to oppose the AI law's approval. The law bans some applications of the technology, imposes strict limits on use cases deemed high-risk, and creates transparency and stress-testing obligations to rein in cutting-edge software models.
Paris and Berlin worried that the law, the details of which will be decided by the commission, would stifle up-and-coming AI champions such as France's Mistral and Germany's Aleph Alpha.
Both Mistral and Aleph Alpha aim to compete with large U.S. companies such as OpenAI and Google and collaborate with healthcare technology companies to develop products.
At the time of the deal in December, European device makers worried that the new law would subject AI-enabled medical products to two sets of rules and vowed to campaign against it.
The American Hospital Association has laid out its priorities for the rest of this year, and most of them have something in common.
With major government funding decisions due later this month, hospitals are lobbying to keep site-neutral payment proposals from going forward.
Those proposals would equalize payment for care regardless of where it's delivered, whether in a hospital or in a doctor's office, where it's often cheaper. Hospitals argue they take more safety precautions and care for sicker patients.
“We are finding a sufficient level of support,” Stacey Hughes, AHA's executive vice president of government relations and public policy, told Daniel.
AHA also wants policymakers to:
— Develop and support health workers by reauthorizing training programs
— Maintain healthy Medicare and Medicaid funding, including additional Medicare payments to hospitals that treat large numbers of low-income patients.
— Create hospital designations that result in higher Medicare rates for some facilities in urban settings.
— Modify or withdraw the Centers for Medicare and Medicaid Services' proposal to set minimum staffing standards for long-term care facilities for the first time.
— Establish a national data privacy law governing how data is shared and used by AI
— Make coverage of certain pandemic-era telehealth services permanent.