This page highlights international guidelines and resources on artificial intelligence in health care and health professions education.
Artificial Intelligence in Health Professions Education: Proceedings of a Workshop
by the National Academies of Sciences, Engineering, and Medicine
The National Academies Global Forum on Innovation in Health Professional Education hosted a multi-day workshop series in March and April 2023 to explore the potential of artificial intelligence (AI) in health professions education. Speakers at the workshops provided background on AI; discussed the social, cultural, policy, legal, and regulatory considerations of integrating AI into health care and training; considered the skills health professionals will need as educators and providers to use AI effectively in practice; and explored the needs for educating the next generation of health workers. Speakers also considered the bias, burden, and health equity concerns that introducing AI into clinical education would bring. This Proceedings of a Workshop summarizes the discussions held during the workshop series.
Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-modal Models
by the World Health Organization
Artificial Intelligence (AI) refers to the capability of algorithms integrated into systems and tools to learn from data so that they can perform automated tasks without explicit programming of every step by a human. Generative AI is a category of AI techniques in which algorithms are trained on data sets and can then be used to generate new content, such as text, images or video. This guidance addresses one type of generative AI, large multi-modal models (LMMs), which can accept one or more types of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm. It has been predicted that LMMs will have wide use and application in health care, scientific research, public health and drug development. LMMs are also known as “general-purpose foundation models”, although it is not yet proven whether LMMs can accomplish a wide range of tasks and purposes.
Regulatory Considerations on Artificial Intelligence for Health
by the World Health Organization
This publication is a general, high-level and non-exclusive overview of key regulatory considerations in topic areas developed by the WG-RC to support the overarching FG-AI4H framework. Recognizing that a single publication cannot address the specifics of the various AI systems that can be used for therapeutic development or health-care applications in general, the WG-RC’s overview highlights some of the key regulatory principles and concepts, such as risk–benefit assessments and considerations for the evaluation and monitoring of the performance of AI systems. Throughout the process of developing this publication, the WG-RC took into consideration different stakeholder perspectives, as well as different global and regional settings. The WG-RC’s overview is not intended as guidance, a regulatory framework or policy. Rather, it is meant as a listing of key regulatory considerations and a resource for all relevant stakeholders, including developers who are exploring and using AI technologies and developing AI systems, regulators who might be in the process of identifying approaches to manage and facilitate AI systems, manufacturers who design and develop AI systems that are embedded in medical devices, and health practitioners who deploy and use such medical devices and AI systems.