INTRODUCTION
Technological progress continually provides new solutions, although their practical implementation has often been delayed by the insufficient computing power of the available hardware. Artificial neural networks, “artificial intelligence” and machine learning dominate an increasing number of areas of human activity, automating many tasks and thus affecting the safety and comfort of life. Autonomous cars are produced that analyse images from cameras and decide whether the road is safe or whether the driver should be notified of impending danger [1]. For years, new aeroplanes have been required to carry systems that continuously analyse risk factors, mainly in the form of advanced statistical calculations, in order to warn the pilot in advance of impending danger or unusual behaviour [2]. New mobile phones actively learn from the user’s behaviour, which allows them to propose a specific activity at a given time. Medicine also uses artificial intelligence, including deep learning models trained on mammography images that reduce the rate of misdiagnosis in breast cancer detection [3]. All of the above examples are just a fraction of the possibilities of applying machine learning in devices used by people today. The aim of this article is to review the use of machine learning models in psychotherapy.
THE USE OF NEW TECHNOLOGIES IN PREVENTION AND PSYCHOTHERAPY
The dynamic development of these technologies raises the question of their possible application in clinical practice, in particular that of psychiatrists and clinical psychologists. Pilot studies have shown that machine learning approaches could be a useful tool in advancing psychotherapeutic processes, allowing numerous simultaneous variables to be studied without assuming linearity [4]. According to research, machine learning can allow inferences at the individual level; it is therefore a promising approach for prediction in the psychotherapeutic context, improving treatment efficacy [5] and, for example, determining which patients will benefit from cognitive-behavioural therapy (CBT) [6]. One of the most interesting examples of the use of technologically advanced solutions is the SPARX game (Smart, Positive, Active, Realistic, X-factor thoughts), whose function is to support the cognitive-behavioural therapy of adolescents suffering from depression [7]. It was developed by the University of Auckland in New Zealand in cooperation with Metia Interactive, and its aim is to teach young people suffering from depression how to cope with difficult situations. SPARX is based on the role-playing game (RPG) model, in which each player creates and develops their own character. Users can learn to face real problems by engaging with in-game characters. Research conducted in New Zealand shows that the SPARX game is an effective source of information for teenagers, is comparable to classic therapy, and may prove much cheaper. It is also an alternative for people who are reluctant to engage with conventional therapy [8].
Currently, the literature distinguishes at least a few applications of new technologies of varying degrees of advancement, with machine learning at the forefront [9]. The researchers involved seem most interested in personalizing notifications from therapeutic applications, modifying therapeutic programs in a manner adapted to the patient’s problems, and conducting “intelligent” conversations with patients.
Artificial neural networks are information structures modelled on the work of the nervous system [10]. However, the concepts of neural networks and artificial intelligence should not be equated, because the tasks solved by neural networks are purely computational. The theory of artificial intelligence, on the other hand, assumes that every aspect of learning, or any other property of the thought process, can in principle be described so precisely that it is possible to create a machine that simulates it and improves itself [11]. Neural networks, however, are increasingly becoming the basis for the development of “real” artificial intelligence, whose operation is meant to reproduce the intelligence attributed to people, above all the ability to infer independently and to extract facts from single events. Currently used machine learning methods, including neural networks, even if they are referred to, with some exaggeration, as “artificial intelligence”, undoubtedly bring us closer to entrusting more and more tasks to machines [12, 13].
Neural networks can be used to learn the classification of various phenomena, where the nature of the information is not, as we will see below, a limitation. Machine learning algorithms offer a suite of methods able to deal with the complexity of data analysis and can extend the toolbox of psychotherapy researchers [14]. Training data may consist of text, photos, videos, sounds, or numerical data such as geographic location, and even the level of cloud cover or temperature. Network architectures vary to accommodate specific types of data, but the neural networks themselves can be combined into larger assemblies, which enables different types of material to be used at the same time [15].
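As a rough illustration of how such an assembly might be constructed, the sketch below combines an image branch and a numeric branch (for example, time of day and heart rate) into a single classifier. It is a minimal example written in PyTorch; the architecture, layer sizes and feature names are assumptions made purely for illustration and do not describe any particular published model.

```python
# Illustrative sketch: combining two data types in one network ("assembly").
# Architecture and sizes are invented for this example.
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    def __init__(self, n_numeric_features: int, n_classes: int):
        super().__init__()
        # Image branch: a small convolutional encoder
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 16 features
        )
        # Numeric branch: e.g. time of day, heart rate, step count
        self.numeric_branch = nn.Sequential(
            nn.Linear(n_numeric_features, 16), nn.ReLU(),
        )
        # Joint classifier over the concatenated features
        self.head = nn.Linear(16 + 16, n_classes)

    def forward(self, image: torch.Tensor, numeric: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.image_branch(image), self.numeric_branch(numeric)], dim=1)
        return self.head(features)

model = MultiModalNet(n_numeric_features=4, n_classes=2)
scores = model(torch.randn(8, 3, 64, 64), torch.randn(8, 4))  # a batch of 8 samples
```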
PROPOSING THERAPEUTIC INTERVENTIONS AT SPECIFIED TIMES
The operation of neural networks consists, in simple terms, of analysing input data, performing calculations and presenting output data [16]. To illustrate this with the example of image analysis, it is possible to train an artificial neural network to recognize images of self-mutilation by presenting it with a collection of several hundred images of this type. The network learns on its own to recognize specific patterns (in this case, most often sharp objects, a darker background, blood) and then, when presented with a photograph it has not analysed before, assigns it to the appropriate class: photos showing self-mutilation or photos that are “safe” in this respect [17].
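A minimal sketch of such a classifier is shown below. It assumes a hypothetical folder of labelled training images (a layout such as data/train/self_harm and data/train/safe, invented for this example) and uses transfer learning from a pretrained network; it illustrates only the training loop and is in no way a clinical tool.

```python
# Hedged sketch of a binary image classifier. The folder layout
# (data/train/self_harm, data/train/safe) is hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: reuse a pretrained backbone and replace the final layer
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: "self-harm" vs "safe"

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# At inference time a new photograph is assigned to whichever class
# receives the higher score.
```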
It is also possible to learn to recognize users’ behaviour from the way they use their mobile phones. This behaviour is not limited to the application a person is using; it can also include time, location, or biological activity. Mobile phone users increasingly wear smartwatches, which are essentially miniature mobile phones that record vital signs and can continually learn about users’ habits [18]. These include: the daily number of steps, physical activity, heart rate, stress levels, the so-called body battery (the body’s “energy” reserve), the level of fluids consumed or even a plan for, and control of, a medical treatment [19]. From such data, information that can potentially be used by therapeutic applications can be obtained [20]. Table 1 presents examples of situation descriptions, the nature of the collected data and the corresponding therapeutic intervention.
Table 1
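Purely as an illustration of how data of the kind listed above might feed a predictive model, the sketch below trains a simple classifier on invented daily smartwatch-style features to flag “difficult days” on which a supportive notification might be sent. The features, the labelling rule and the data are all hypothetical; a real system would require clinically validated labels.

```python
# Hypothetical sketch: flagging "difficult days" from daily smartwatch-style
# features. Feature names, the labelling rule and the data are invented;
# this is not a validated clinical model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_days = 500

# Illustrative daily features: step count, mean heart rate, hours of sleep,
# minutes of late-night phone use, number of places visited.
X = np.column_stack([
    rng.normal(7000, 2500, n_days),
    rng.normal(72, 8, n_days),
    rng.normal(7, 1.5, n_days),
    rng.exponential(20, n_days),
    rng.poisson(3, n_days),
])
# Illustrative label: 1 marks a clinician-defined "difficult day"
y = ((X[:, 0] < 4000) & (X[:, 2] < 6) | (X[:, 3] > 60)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```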
The advantage of the interventions described in Table 1 is that they are tailored to a specific time at which the patient appears to be going through a more difficult period. Based on the profile of information gathered by the application, it would be possible to respond adequately to the threats and provide appropriate help. However, there are at least several risks associated with the proposed functionalities.
First, the complexity of human behaviour may be beyond the ability of an application to fully describe and classify. In other words, the set of data made “visible” to a given application may not be sufficient to obtain an overall picture of the situation. To take a clinical example: if a patient decided not to wear a smartwatch on a particular day and searched for information on self-harm methods on a computer rather than on a mobile phone, the data required to send the corresponding notification would be incomplete. This means that a system intended to capture all relevant data would need to be integrated with every device that could register the relevant variables, and these devices would need to be connected, charged and in use.
Secondly, behaviours that are “visible” to a mobile phone and could indicate, for example, a depressed mood do not necessarily have universal meanings. The most visible and dangerous behaviour appears to be an active search for information about suicide, which is relatively easy for an application to interpret [21]. However, if a person’s depressed mood is manifested by playing computer games more often (as a substitute strategy for coping with negative emotions) or by walking alone, its identification will be much more difficult. The system may register more frequent physical activity and a willingness to leave the house and mistakenly attribute them to well-being. A solution to this problem could be to precede the application’s use with a detailed clinical interview, which would allow the patient’s behaviours and strategies for coping with negative emotions to be specified precisely.
MODIFICATION OF COMPUTERISED THERAPEUTIC PROGRAMS
For several years, computer programs have been successfully used to help patients face, in a virtual environment, the anxiety they experience [22]. Therapeutic programs of this type are most often based on the principles of behavioural therapy for phobias, which assume that repeated exposure to an anxiety-provoking stimulus leads the anxiety to subside over time. It is also important that contact takes place in a controlled manner, most often with the use of relaxation methods, in an environment that is safe for the patient [23]. For this reason, therapists increasingly use specialized computer programs and virtual or augmented reality devices1.
Standard programs of this type do not use machine learning solutions. As a rule, they display a given image and allow the patient to interact with the anxiety-provoking object, but they do not actively react to the patient’s behaviour. The use of machine learning could increase the effectiveness of such programs and adapt them to patients’ real needs. Table 2 presents the way of working with the patient in the “traditional” approach and the possibilities of using machine learning, including neural networks, to support therapeutic work2.
Table 2
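One possible direction, sketched below purely as a toy example, is to adapt exposure intensity online to a measured anxiety proxy such as heart rate. A small regression model learns the patient’s response, and the program selects the highest intensity still predicted to remain below a comfort threshold. The model choice, threshold and simulated measurements are all assumptions for illustration and do not correspond to any validated protocol.

```python
# Toy sketch (assumptions throughout): exposure intensity in a virtual
# environment is adjusted online from a simulated anxiety proxy. An online
# regressor learns the patient's anxiety response and the program picks the
# highest intensity still predicted to stay below a threshold.
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.1, random_state=0)
levels = np.linspace(0.1, 1.0, 10).reshape(-1, 1)  # candidate exposure intensities
threshold = 0.6                                    # acceptable anxiety proxy (normalised)

def simulated_anxiety(level: float) -> float:
    """Stand-in for a real physiological measurement."""
    return 0.8 * level + np.random.normal(0, 0.05)

current = 0.1
for step in range(100):
    anxiety = simulated_anxiety(current)
    model.partial_fit([[current]], [anxiety])      # update the response model
    predictions = model.predict(levels)
    tolerable = levels[predictions < threshold]
    current = float(tolerable.max()) if len(tolerable) else float(levels.min())
print("selected exposure intensity:", round(current, 2))
```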
CONDUCTING “INTELLIGENT” CONVERSATIONS WITH PATIENTS, WITH THE USE OF BOTS
Machine learning is successfully used to conduct conversations that resemble person-to-person interactions, although the conversation is in fact conducted with a computer [24, 25]. This technology is most often based on the use of neural networks to analyse the text entered and then select the most appropriate answer from a given pool. Thus, the text entered by the user is free-form, while the answers are predetermined by the program’s creators. Conversations are conducted with bots, i.e. with a dedicated type of computer program. Such solutions are commonly used in Internet services, allowing, among other things, technical support to be provided without involving the employees of a given service [26]. Technological solutions of this type can be applied in a medical or therapeutic context in several ways [27].
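A minimal sketch of this retrieval-style approach is given below: the user’s free-form text is matched against a small pool of predetermined answers using TF-IDF similarity, a deliberately simple stand-in for the neural text analysis mentioned above. The prompts and answers are invented for illustration.

```python
# Minimal retrieval-style chatbot sketch: free user text is matched against a
# small pool of predetermined answers. Prompts and answers are invented; a
# production system would use a trained intent classifier or neural encoder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = [
    ("I can't sleep at night", "Sleep difficulties are common; here are some sleep-hygiene tips ..."),
    ("I feel anxious before exams", "Anxiety before exams can be eased with breathing exercises ..."),
    ("How do I book an appointment?", "You can book an appointment through the contact form ..."),
]
prompts = [prompt for prompt, _ in faq]

vectorizer = TfidfVectorizer()
prompt_vectors = vectorizer.fit_transform(prompts)

def reply(user_text: str) -> str:
    """Return the predetermined answer whose prompt is most similar to the input."""
    similarity = cosine_similarity(vectorizer.transform([user_text]), prompt_vectors)
    return faq[similarity.argmax()][1]

print(reply("lately I feel very anxious about my exam"))
```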
Among such applications are those that, based on the entered information, allow a working diagnosis to be made. The task of the application user is to conduct a casual conversation with the bot. Based on the type of symptoms and the way the user formulates statements, a preliminary diagnosis is made. Especially during the COVID-19 pandemic, these systems became a common form of first contact with an attending physician, allowing patients to avoid a face-to-face meeting and the risk of contracting the virus. A chatbot is able, among other things, to qualify the patient for an appropriate coronavirus test or to refer them to a doctor of a specific specialization. These tools are also finding more and more applications in psychology and psychiatry. On the basis of answers to a series of questions and a virtual interview, a preliminary working diagnosis is made, and the result allows the clinicians most suitable for specific problems to be identified. For example, for a person struggling with sexual problems where the context suggests a psychological aetiology, the application would suggest a sexologist.
A significant limitation of such applications is the difficulty in determining markers of the aetiology of the reported disorder. While a working diagnosis itself is relatively easy to arrive at, in particular when it comes to classification into several types of disorder (mood, anxiety, personality), an attempt to infer their aetiology on the basis of textual analysis alone is still impossible. Moreover, at the present stage of the technology’s development, it is difficult to speak of differentiating between individual mental disorder entities; nor, in fact, is this the role assigned to programs of this type.
Conversations with patients can also serve as an element supporting the therapeutic process, complementing psychoeducation in the treatment of disorders [28]. Currently, patients are often provided with materials whose purpose is to help them understand the essence of their difficulties and thus support the therapeutic process. Therapists can additionally use English-language mobile applications that give users access to materials dedicated to a specific therapy, to bots, or to elements that monitor the user’s health.
The advantage of this type of approach appears to be the high level of substantive content and the ability to personalize feedback based on the patient’s symptoms and functioning. A significant limitation, however, is the therapist’s lack of control over the content presented in a given application; if the problem experienced by the patient is complex or unusual, suitable content may simply not be available in the application.
The results of research conducted in this area are optimistic, although these technologies remain largely experimental with respect to clinical psychology and psychotherapy. So far, observations on the acceptance and implementation of chatbots in mental health support are promising, but they cannot be fully transferred to the field of psychotherapeutic practice [29].
The large technological advances made in a relatively short time give grounds to believe that it will soon be possible to develop algorithms enabling, among other things, the rapid detection of incoherent speech that may indicate schizophrenia [30] or mood disorders [31]. Services addressed to a specific group of recipients also deserve attention, for example bots detecting stress in adolescents [32].
RISKS ASSOCIATED WITH THE USE OF MACHINE LEARNING METHODS TO SUPPORT THE THERAPEUTIC PROCESS
The previous sections outlined the main directions for the development of machine learning applications in supporting the therapeutic process. This section presents the problems related to introducing these solutions into clinical practice, which apply to essentially all of the solutions cited.
Particular attention should be paid to ensuring the full privacy of the data processed by the applications used [33]. Because such applications collect health data, including mental health data, as well as other data such as location and vital signs, application producers must be responsible for their proper use. However, it is difficult to exclude the possibility of data leakage, which could have negative consequences for users.
Great emphasis should also be placed on teaching psychologists and psychology students, as part of formal education, how AI/ML-enabled tools might affect psychotherapy [34]. Selling user data to certain third parties3, such as those that sell particular medications or dietary supplements, would be ethically questionable. In addition, it could lead users to believe that, on the basis of the information they provided, the authors of the application (who very often collaborate with clinicians) recommend a specific type of substance, when in fact it is displayed only for marketing reasons.
It is difficult to determine the extent to which the content of programs based on machine learning or artificial intelligence is the result of cooperation with clinicians, although the largest applications appear to place considerable emphasis on this aspect. On the other hand, there are no legal regulations (or systems of recommendations from relevant scientific societies) that would, in the future, restrict support of the therapeutic process of a given disorder to proven applications and exclude those created solely for the financial gain of authors who conducted no substantive consultations. This could lead to the use of inappropriate applications that resemble those recommended by the clinician in appearance or name.
Work on artificial neural networks has revealed one more important phenomenon, that of so-called catastrophic forgetting. In learning to solve subsequent tasks, these networks may lose the ability to perform those they mastered earlier [35]. Work on solving this problem is ongoing, but neural network developers must remember that the scheme for memorizing new information should be designed so that new information does not “overwrite” what has already been learned in the same structure.
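The phenomenon is easy to reproduce in a toy setting, as in the sketch below: a small network trained first on digits 0-4 and then only on digits 5-9 largely loses its accuracy on the first task. The dataset and network are illustrative only and have no connection to therapeutic applications.

```python
# Small demonstration of catastrophic forgetting (illustrative only):
# a network trained first on digits 0-4 and then on digits 5-9 largely
# loses its accuracy on the first task.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
task_a = y < 5          # "old" task: digits 0-4
task_b = ~task_a        # "new" task: digits 5-9

net = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
classes = np.unique(y)

# Phase 1: several passes over task A only
for _ in range(30):
    net.partial_fit(X[task_a], y[task_a], classes=classes)
print("task A accuracy after phase 1:", net.score(X[task_a], y[task_a]))

# Phase 2: several passes over task B only
for _ in range(30):
    net.partial_fit(X[task_b], y[task_b])
print("task A accuracy after phase 2:", net.score(X[task_a], y[task_a]))
print("task B accuracy after phase 2:", net.score(X[task_b], y[task_b]))
```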
At this point, it is not easy to assess the extent to which the introduction of machine learning methods will contribute to actual support of therapeutic processes. While the solutions introduced and tested so far appear very promising, adequate scientific research is currently lacking. A base of independent scientific studies is required, comparing interventions supported by a given application or technological solution with “classic” psychoeducation.
CONCLUSIONS
Machine learning methods are finding new applications in supporting therapeutic processes. They allow for timely responses, making the notifications sent to users more relevant to their current needs. They make it possible to modify computer programs used for therapeutic purposes in a manner adapted to the experiences of a given patient and to react in real time to the changing conditions of therapy. They also make it possible to conduct conversations about reported symptoms by means of messages sent not to a human but to a bot that analyses the text in real time. All of the above solutions, before they are recommended to support the therapeutic process, should be the subject of independent scientific research that would allow their actual usefulness to be evaluated. At the moment, the most important task for clinical practice seems to be raising the awareness of psychiatrists and clinical psychologists of solutions of this type, because some patients may already be using them on their own. In such cases, it is useful to know the basics of how a given application operates and the potential risks associated with its use.