WHO formulates six principles for ethics in AI health sector

The principles include ensuring equity and protecting human autonomy

The World Health Organization has issued a guidance report setting out six key principles for the ethical use of artificial intelligence in health. It is the first consensus report on ethics in the AI health sector, developed over two years by twenty experts.

The report calls attention to the promise of AI in health, including its capability to assist doctors in treating patients in understaffed communities. It also stresses that governments and regulators should carefully inspect where and how AI is being used in the healthcare sector, especially in middle-income countries. The WHO hopes that these six principles will lay the foundation for how governments, developers, and regulators approach AI technology.


The six principles the experts set out for AI ethics are:

Protecting autonomy: Humans should remain in control of healthcare systems and medical decisions; privacy and confidentiality must be protected; and patients should give valid informed consent, backed by appropriate legal frameworks for data protection.

Promoting human safety, well-being, and the public interest: The designers of AI technologies should satisfy the requirements for safety, accuracy, and efficacy for well-defined uses or indications. Measures for quality control in practice and for quality improvement in the use of AI should be available.

Ensuring transparency, explainability, and intelligibility: Transparency requires that sufficient information be published before the design or deployment of an AI technology. Such information must be easily accessible and must facilitate meaningful public consultation and debate on how the technology is designed and how it should be used.

Fostering accountability: Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that they are used under appropriate circumstances by trained people. Effective mechanisms should be available for questioning algorithm-based decisions and for providing redress to individuals and groups adversely affected by them.

Ensuring equity: This means making sure that tools are available in multiple languages and trained on diverse datasets; scrutiny of widely used health algorithms has found bias built into some of them. Access should be provided irrespective of age, sex, gender, race, income, ethnicity, or any other characteristic protected under human rights codes.

Promoting AI that is responsive and sustainable: Designers, developers, and users should continuously assess AI applications during actual use to determine whether the tools are responding adequately. AI systems should also be designed to minimize environmental consequences and to increase energy efficiency. Governments and companies should anticipate workplace disruptions, including by training healthcare workers to adopt AI tools and by planning for job losses that may result from their use.

There are many ways to use AI in the healthcare industry, and many new applications are being developed, such as AI that screens medical images, scans patients' health records, and predicts outcomes. Such applications can also help patients in understaffed areas monitor their health and track changes in their conditions.

Such technologies can be very useful in fighting pandemics by supporting healthcare institutions and governments, and many of them turned to AI health tools after Covid-19 hit. Yet for all their advantages, several of these tools exhibited the very problems the WHO report warns about.

In Singapore, for example, the government acknowledged that contact-tracing application data had also been used in criminal investigations, an example of "function creep": data being repurposed beyond the original goal.

Most of the AI programs that attempted to detect Covid-19 from patients' chest scans were built on poor data and were not successful. Hospitals in the United States also deployed an algorithm designed to predict which Covid-19 patients might need intensive care before the program was fully tested. Of such unproven technologies, the report says, "An emergency does not justify the deployment of unproven technologies".

The report also notes that most of these AI health tools are developed by private technology companies such as Google and Tencent, often through public-private partnerships. These companies have the resources and data to build such tools, but may lack the incentive to adopt the proposed ethical framework for their own products, since their focus is on profit rather than the public good. The report reads, "While these companies may offer innovative approaches, there is concern that they might eventually exercise too much power concerning governments, providers and patients".

Because AI technology in the healthcare domain is still new, many regulators and governments are still figuring out how to evaluate and manage AI tools. The WHO report urges a thoughtful and measured approach that can help avoid potential harm.

“The appeal of technological solutions and the promise of technology can lead to overestimation of the benefits and dismissal of the challenges and problems that new technologies such as AI may introduce”, the report added. 

