WHO urges caution in the use of artificial intelligence in public health

The World Health Organization (WHO) has expressed concern about the indiscriminate use of artificial intelligence (AI) tools, such as large language models (LLMs), in the healthcare field.

In a release published this Tuesday (16), the WHO warned about the possible risks that the hasty use of these technologies can bring, including medical errors, harm to patients and reduced confidence in the use of AI in this sector.


The rapid growth and experimental use of AI tools such as ChatGPT and Bard aim to improve access to health information and increase diagnostic capacity, especially in resource-limited environments. However, the WHO highlights the importance of approaching the implementation of these technologies responsibly and carefully.

Artificial Intelligence model can accurately predict pancreatic cancer risk, says study

The WHO highlights the need to ensure patient safety and protection as technology companies develop and commercialize LLMs. It urges policymakers to implement rigorous measures to evaluate the benefits of these tools before they are widely adopted in routine healthcare and medicine.

Furthermore, the WHO highlights the importance of applying ethical principles and appropriate governance when designing, developing and implementing AI systems for healthcare. This includes protecting patient autonomy, promoting human well-being, and ensuring inclusion and equity in access to health services.


With the appropriate use and responsible adoption of AI tools in healthcare, significant advances in the diagnosis and treatment of diseases are expected to be achieved. However, the WHO emphasizes that a balance needs to be struck between the potential of these technologies and the need to ensure adequate safety and care for patients.
