16 November 2023

The United Nations Educational, Scientific and Cultural Organization (UNESCO) must leverage philosophy’s transformative power to promote critical thinking and ethics in the international community’s efforts to address contemporary issues. The organization was created in 1945 on the premise that humanity’s moral and intellectual solidarity must serve as a foundation for peace. Philosophy has had a prominent role across human civilizations, leaving its mark on social and political structures, languages and cultural exchanges. As a result, this discipline encompasses the diverse representations human societies have expressed, and it continues to feed the cycle of mutual influence between philosophical thought and our collective sense of ethics and morality.

The international community is committed to promoting more inclusive societies, and UNESCO can draw on philosophy and the humanities to achieve this common goal. How do we want our societies to cope with climate change? What do we want to get out of the digital revolution? These global trends force us to ask philosophical, moral and ethical questions that will need to be answered if we want to reach our sustainable and inclusive development objectives.

To address these evolving challenges, the UNESCO Social and Human Sciences Sector offers a framework to integrate philosophical insights into social transformations. The Management of Social Transformations (MOST) programme facilitates the direct circulation of ideas between academia and policymakers to drive positive social change, which is essential in confronting current global trends. It allows for ethical reflection on critical topics such as technological advancements and the disruptions they entail.

The current technological revolutions brought on by Big Data, biotechnologies and artificial intelligence (AI) provide an interesting case study. The ethical concerns they raise illustrate the pivotal role philosophy will play in understanding human interactions in the digital era. We cannot ensure that these technologies will benefit our societies without using ethics to establish a globally representative “humanity-first approach”. If left unregulated, or regulated in ways not everyone agrees on, technologies such as AI can lead to abuses, including racial and gender profiling or social scoring, further widening the gap between the reality of our situation and the inclusive development goals our societies have committed to. The recent development by UNESCO of the first-ever global standard on AI ethics, the Recommendation on the Ethics of Artificial Intelligence, sets out a framework and clear provisions on how to manage the ethical implications of technological development.


UNESCO World Philosophy Day (16 November 2023) centred on the ethical dimensions of AI in mental health, using this topic as a case study to provide concrete ideas on how societies can come up with a shared moral framework for these technologies to benefit us all.

The current global mental health landscape is concerning. The difficulty with mental health is not only detection or treatment. Grasping the full impact of mental health disorders on life expectancy is complex, which makes raising awareness even more difficult. According to the World Health Organization’s 2022 World Mental Health Report: Transforming Mental Health for All, only 4.6 per cent of health research focuses on mental health, while people with mental health conditions are known to experience disproportionately higher mortality rates compared with the general population. In any case, mental health disorders carry a significant cumulative mortality burden.

Understanding as many implications of mental health conditions as possible will improve the prospects for health care, and advancements in technology and neuroscience offer new opportunities for diagnosis, prevention and treatment.

On the other hand, these advances should not overshadow the potential risks they carry. The data processing capacities of AI imply monitoring individuals. Even if done in the interest of their health, this raises ethical questions. These technological advancements require diligent approaches. If AI algorithms are not meticulously trained and tested, there are risks of perpetuating biases already present in the training data. These biases could skew mental health diagnosis and treatment, which is particularly concerning for marginalized populations who lack proper health-care safety nets. Without careful oversight and continuous monitoring, the promising potential of the use of AI in mental health care could inadvertently exacerbate inequalities rather than alleviate them.

The epistemological and ethical concerns arising from technological shifts must not be overlooked, especially when these technologies are applied in the realm of physical and mental health. Understanding mental disorders, assessing the validity of knowledge derived from AI, and upholding ethical responsibilities in patient care all remain crucial. These advancements must mitigate vulnerabilities without introducing new forms of exposure that will exacerbate our societies’ dysfunctions.

As we navigate this landscape, ethical consciousness is imperative. The increase in our technological capabilities should always be met with a proportional increase in our ethical responsibility. Engaging in critical thinking and nurturing a new humanism is essential to addressing the ethical, intellectual and political challenges of our time. Philosophy is not an optional discipline, but a global force that will help shape a more humane future for all.

Philosopher's Walk, a pedestrian path that follows a cherry-tree-lined canal in Kyoto, Japan. Kimon Berlin via Wikimedia Commons

A closer look at AI and mental health

Today's technological advancements and our improved understanding of brain functions open up new possibilities for supporting individuals affected by mental health issues. Whether the aim is to treat, accompany or support, new tools are emerging to assist diagnosis and care.

However, whether new devices are used in mental health or elsewhere, they also serve as instruments of action, carrying new responsibilities that would be imprudent to ignore. For instance, the development of an artificial conversational agent in psychiatry raises questions about the future of psychiatrists. Will they eventually be replaced? Some say that the patient's relationship with an artifact cannot be equivalent to the one developed with a human psychotherapist.

Although universal access to technology remains a key global challenge, those with access to the Internet have pointed out certain advantages related to AI-powered platforms such as Google AI, Cleverbot, and ChatGPT (natural language processing and conversational AI), and Woebot, Youper and Wysa (natural language processing platforms used to provide mental health support). These platforms are readily available to everyone online, without the need for significant technical expertise or resources. However, their apparent objectivity is an illusion: they remain products of the human mind and, in this sense, are not entirely free from biases and stereotypes.

Indeed, there are deep concerns regarding the representativeness of databases and algorithmic biases, which remain an enormous problem to solve. The risk of misuse or of spreading misinformation through AI-generated content is another challenge that needs to be addressed to maintain the integrity of online information and creative output.

We can understand how technological advances in the mental health domain may modify our practices, and for this reason we are forced to exercise ethical caution. At least two points of vigilance deserve to be highlighted.

The first issue concerns epistemology. Our understanding of the functioning of mental disorders remains incomplete. For instance, biomarkers designed to aid in the identification of diseases, their mechanisms and progression, and the effects of treatments are still unsatisfactory. Hence, there is a pressing need for us to improve our knowledge in these domains, as our ability to assist individuals hinges on such knowledge. In this context, the algorithmic processing of extensive medical databases is valuable. Not only does it enable the analysis of a larger volume of information, but it also accelerates the examination process. Another epistemological consideration refers to the nature and validity of knowledge generated through AI. If AI systems rely exclusively on neurological data, the understanding of mental disorders may become overly focused on biology, potentially overlooking their social dimension. Research has unveiled significant correlations, such as those between poverty and mental health issues. The question of the sources of the data used is therefore a pressing one, as is the question of how this knowledge will be used by health-care professionals. It is known that there can be a gap between the quantitative analysis performed by an AI tool and the qualitative understanding provided by a human expert.

View of UNESCO Headquarters in Paris, 2009. Matthias Ripp

The second issue concerns care. The assistance provided by AI, neurotechnology and certain digital devices in the field of mental health will have an impact on ways of providing care. The first consideration is medical diagnosis. Indeed, one cannot help a person without accurately identifying the nature of their mental condition. A more refined diagnosis allows for a better specification of treatments and care, and the envisioning of new forms of assistance. An improved diagnosis is also one that is made at an earlier stage, enabling quicker therapeutic interventions and the preservation of the individual's quality of life to the greatest extent possible.

Care extends from the diagnosis to the treatment of the patient, or at least their accompaniment. Mental disorders present a heterogeneous clinical picture; the same depressive syndrome may manifest differently in different individuals, for example. Each person's experience of their condition is personal, further complicating daily care challenges. Support can be improved through certain digital tools or AI-equipped devices such as smartphones, which can enable real-time emotion recognition and the monitoring of individual activity rhythms, sleep patterns, movements and many other aspects of daily life. The advantage lies in live feedback and interactive monitoring, which allow for swift responses in times of need by alerting the patients themselves. This approach serves to protect and enhance their quality of life from a physical, social and mental perspective.

All of these aspects revolve around a major principle of ethics and bioethics: the principle of vulnerability. However, in a patient’s daily life, these technological contributions that help compensate for certain vulnerabilities should not, from a moral standpoint, introduce new forms of vulnerability. Will therapeutic dialogue be replaced by an implantable artificial brain device to regulate emotions? Will the patient permanently lose the exclusivity of access to their thoughts due to neurotechnologies that allow for the scientific exploration of the brain? Could therapeutic and benevolent monitoring of the patient risk becoming anticipatory surveillance of behaviours, and therefore lead to presumptions of intent?

The primary ethical concern underlying our practical considerations revolves around the role of humans in a technologically driven world where algorithms manage mental health. We must act on three levels: first, the level of lifelong education for all people working in related fields, to provide them with the capacity for critical thinking; second, we must consider the beneficiaries and use critical methodologies to evaluate AI; third, we must recognize that the expansion of our capabilities through technological advancements inherently amplifies our responsibility. Aspiring for improvement does not equate to being a technophile, just as hesitance does not signify technophobia. On the contrary, it reflects an ethical awareness rooted in a fundamental concern for humanity. The challenge lies in contemplating today's technology-driven mental health landscape without unfairly criticizing that of tomorrow.


The UN Chronicle is not an official record. It is privileged to host senior United Nations officials as well as distinguished contributors from outside the United Nations system whose views are not necessarily those of the United Nations. Similarly, the boundaries and names shown, and the designations used, in maps or articles do not necessarily imply endorsement or acceptance by the United Nations.