What is the FAITH project and what are its goals?
The goal of FAITH (a Federated Artificial Intelligence solution for moniToring mental Health status after cancer treatment) is to provide an application that remotely analyses depression markers, such as changes in activity, outlook, sleep and appetite. When a negative trend is detected, an alert can be sent to the patient’s healthcare providers or other caregivers who can then offer support.
How will your solution work and how will it benefit the patients?
The project uses the latest, secure AI and machine learning techniques within the interactive app located on patients’ phones. Key to the project is federated machine learning, which enables the patient’s personal data to stay within the AI model on each phone, guaranteeing privacy.
Please tell us more about the AI model you use to develop your solution. How does it work, and what are the biggest technical challenges in its development?
As the model collects data on a person’s phone, it retrains itself to improve and personalize the model for each individual. But we also want to learn from all that data to gain insights that benefit the broader population, so when a model updates, that update, rather than the person’s data, is sent back to the cloud. All the updates are processed, a new, improved model is sent back out to everyone, and that cycle repeats.
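The cycle described above is the core of federated averaging. The following is a minimal sketch of that cycle; the toy linear model, the learning rate, and the simple unweighted averaging are illustrative assumptions, not the FAITH implementation:

```python
# Minimal sketch of a federated-averaging cycle (illustrative, not FAITH's code).
from typing import List, Tuple

def local_update(global_weights: List[float],
                 local_data: List[Tuple[float, float]]) -> List[float]:
    """One round of on-device training; returns a weight *delta*, never raw data."""
    lr = 0.1
    weights = list(global_weights)
    for x, y in local_data:                # toy one-feature linear model
        pred = weights[0] * x + weights[1]
        err = pred - y
        weights[0] -= lr * err * x
        weights[1] -= lr * err
    return [w - g for w, g in zip(weights, global_weights)]

def federated_round(global_weights: List[float],
                    deltas: List[List[float]]) -> List[float]:
    """Server side: average the deltas and produce the improved global model."""
    n = len(deltas)
    avg = [sum(d[i] for d in deltas) / n for i in range(len(global_weights))]
    return [g + a for g, a in zip(global_weights, avg)]

# Two simulated phones train locally; only weight deltas leave each device.
phones = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
model = [0.0, 0.0]
for _ in range(20):                        # repeated cycles, as in the text
    deltas = [local_update(model, data) for data in phones]
    model = federated_round(model, deltas)
```

The point of the sketch is the privacy property: `local_update` only ever returns a delta, so the raw `(x, y)` pairs never leave the simulated device.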
What are the basic requirements for the use of AI in mental health – and what are the barriers?
I believe one of the most fundamental requirements is trust. Any project in health, particularly one leveraging a person’s data and introducing new technologies, needs to earn the trust of its different users, e.g. patients, doctors and nurses.
A worry for many, however, is that these machines are 'black boxes', i.e. closed systems that receive an input, produce an output, and offer no clue why. Engineers may be able to deliver ever more accurate models, forecasting pandemic spread, classifying symptoms of mental illness and so on, but if they cannot explain these models to the relevant decision-makers, e.g. doctors, public health officials and politicians, then how can the models be trusted?
Were something to go wrong, being unable to explain why could be the death knell for an otherwise transformative technology. We believe factoring in transparency and explainability from the start will strengthen FAITH for long-term adoption. There are various types of transparency in the context of human interpretability of algorithmic systems. Of those, we are striving for global interpretability (a general understanding of how an overall system works) and local interpretability (an explanation of a particular prediction or decision).
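Local interpretability, as defined above, means explaining a single prediction. One common approach is perturbation-based attribution: ablate each input feature in turn and measure how the output changes. The sketch below illustrates the idea; the risk model, weights, and marker names are hypothetical, not FAITH's model:

```python
# Sketch of *local* interpretability via one-at-a-time feature perturbation.
# Model and feature names are illustrative assumptions, not FAITH's system.
from typing import Callable, Dict, List

def explain_prediction(model: Callable[[List[float]], float],
                       x: List[float],
                       feature_names: List[str]) -> Dict[str, float]:
    """Attribute one prediction to features by ablating each in turn."""
    base = model(x)
    attributions = {}
    for i, name in enumerate(feature_names):
        perturbed = list(x)
        perturbed[i] = 0.0                 # ablate this feature
        attributions[name] = base - model(perturbed)
    return attributions

# Toy "depression risk" score over hypothetical markers.
weights = {"sleep_change": 0.5, "activity_drop": 0.3, "appetite_change": 0.2}
def risk_model(x: List[float]) -> float:
    return sum(w * v for w, v in zip(weights.values(), x))

explanation = explain_prediction(risk_model, [0.8, 0.1, 0.4], list(weights))
# Each attribution is that feature's contribution to this single prediction,
# e.g. a clinician could see that sleep change dominated the score.
```

For this toy linear model the attributions recover each feature's weighted contribution exactly; for real, non-linear models the same ablation idea gives an approximation.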
What kind of data do you use to construct your AI model and how do you collect it?
At this stage of the project we do not yet know what data will ultimately feed our model. The trial we will run is designed to capture all relevant data that we believe could inform such a model, but only at the conclusion of this trial can we say which data is important. The trial will use the FAITH mobile application and Point of Care solution (Alpha Version), which collects information from mobiles, wearable devices and surveys from breast and lung cancer survivors. The data captured will include sociodemographic, clinical and psychosocial variables, as well as depressive markers (nutrition, sleep, activity and voice).
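To make the categories above concrete, the following sketches the kind of record the trial app might assemble per participant. All field names and types here are assumptions for illustration, not the FAITH data model:

```python
# Illustrative record structure for trial data collection.
# Field names and types are hypothetical, not the FAITH schema.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class DepressiveMarkers:
    nutrition_score: Optional[float] = None    # e.g. appetite/diet survey result
    sleep_hours: Optional[float] = None        # from a wearable device
    activity_steps: Optional[int] = None       # from phone or wearable
    voice_features: List[float] = field(default_factory=list)  # extracted features, not raw audio

@dataclass
class PatientRecord:
    patient_id: str                            # pseudonymous identifier
    cancer_type: str                           # "breast" or "lung" in this trial
    sociodemographic: Dict[str, str] = field(default_factory=dict)
    clinical: Dict[str, str] = field(default_factory=dict)
    psychosocial: Dict[str, str] = field(default_factory=dict)
    markers: DepressiveMarkers = field(default_factory=DepressiveMarkers)

record = PatientRecord(
    patient_id="p-001", cancer_type="breast",
    markers=DepressiveMarkers(sleep_hours=6.5, activity_steps=4200),
)
```

Making every marker optional reflects the point in the interview: the trial captures everything that might be relevant, and only afterwards does the project decide which fields actually feed the model.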
The project started in 2020. What have you achieved so far and what are the next steps in research and development?
Since the project kicked off we have carried out extensive requirements gathering. The design experts then worked on translating these initial requirements and the study protocol requirements into effective user interfaces for trial participants (maximizing ease of data collection and engagement) and hospitals (facilitating trial management). In parallel, the technical architectures for the trials and the final product are being defined.
What was originally envisaged as an MVP of a single FAITH offering quickly evolved into an understanding that this project in fact has two distinct offerings: an alpha version required to support the trial, and a beta version reflecting the complete FAITH vision, i.e. inclusive of the federated learning analytics. We then designed and developed the FAITH conceptual data model and data structures, and carried out extensive research into the privacy and protection framework, leveraging Distributed Ledger Technology (DLT), to ensure that all data entering and accessed within the framework will have full protection and auditability. A working prototype of the federated learning component was developed using TensorFlow.
FAITH joined forces with four other projects. Can you tell us more about this new cluster, how and why it came to be?
A partner of the FAITH consortium, TFC Research and Innovation Limited, is coordinating the ‘Cancer Survivorship – AI for Wellbeing’ cluster. This brings together the FAITH team and four other EU-funded teams focused on healthcare and well-being. Working together, they are helping each other to collect, share, and understand early feedback from end-users on the solutions they are developing.
Tom Flynn (TFC Research and Innovation Limited), who leads the cluster, says: 'We have a motto – We don’t work in silos! The Cluster was formed to share ideas and understanding amongst the participating projects. They were brought together by a common interest in the issues of mental health, well-being, depression and patient support. Collectively, the Cluster shares knowledge and understanding through the adoption of a highly user-centric approach. Thus, the primary beneficiary will be the patient.'