Of all industries, the Life Sciences face some of the most controversial implications of using artificial intelligence. From neuroimaging to drug development, AI could improve modern healthcare, or drastically alter it.
As with many new technologies, opposition to AI in the Life Sciences tends to be dramatic, and so does the push for its adoption. Using AI to advance public healthcare and the other Life Sciences demands rigorous ethical consideration.
Discovering new medicines is one of the most important ways AI is used in healthcare. Drug repurposing allows computers to identify an established drug’s effectiveness for new health conditions without the need for a full clinical trial each time.
Predictive modelling can also analyse a new drug's efficacy, vastly reducing the time spent on first-stage trials. That means effective treatments reach the market faster.
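As a toy illustration of the idea behind drug repurposing, candidate drugs can be ranked against a new condition by how much their known protein-target profiles overlap with the proteins implicated in the disease. The drug names and target sets below are invented for the sketch, not real pharmacology:

```python
# Hypothetical sketch: ranking established drugs for repurposing by
# target-profile overlap (Jaccard similarity). All names are illustrative.

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of protein targets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Known target profiles for approved drugs (made up for illustration).
drug_targets = {
    "drug_A": {"EGFR", "HER2"},
    "drug_B": {"TNF", "IL6"},
    "drug_C": {"EGFR", "VEGFR", "IL6"},
}

# Proteins implicated in the new condition (made up for illustration).
disease_targets = {"EGFR", "VEGFR"}

# Rank repurposing candidates by how well their targets match the disease.
ranked = sorted(
    drug_targets.items(),
    key=lambda kv: jaccard(kv[1], disease_targets),
    reverse=True,
)

for name, targets in ranked:
    print(name, round(jaccard(targets, disease_targets), 2))
```

Real repurposing pipelines use far richer signals (chemical structure, expression data, clinical records), but the core step is the same: score existing drugs against a new indication instead of starting from scratch.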
AI makes clinical trials more cost-effective by targeting diverse, population-reflective participant pools in seconds. The right algorithm can also predict a trial's likelihood of success before it's conducted, minimising investment in fruitless, expensive studies.
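The "population-reflective participant pool" idea can be sketched as simple stratified sampling: pick a cohort whose demographic mix matches target population proportions. The candidate pool, group labels, and proportions below are invented for illustration:

```python
# Hypothetical sketch: stratified sampling of a trial cohort so its
# demographic mix mirrors target population proportions. Data is invented.
import random

random.seed(0)

# Candidate volunteers tagged with a demographic stratum (illustrative).
candidates = (
    [{"id": i, "group": "A"} for i in range(60)]
    + [{"id": i, "group": "B"} for i in range(60, 90)]
    + [{"id": i, "group": "C"} for i in range(90, 100)]
)

# Target population proportions the trial should reflect (illustrative).
population_mix = {"A": 0.5, "B": 0.3, "C": 0.2}
cohort_size = 20

# Fill each stratum's quota by sampling from the matching candidates.
cohort = []
for group, share in population_mix.items():
    pool = [c for c in candidates if c["group"] == group]
    quota = round(cohort_size * share)
    cohort.extend(random.sample(pool, min(quota, len(pool))))

print({g: sum(1 for c in cohort if c["group"] == g) for g in population_mix})
```

Production systems layer eligibility criteria, consent, and health records on top, but the stratified quota is what keeps the cohort representative rather than merely convenient.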
Trials supported by AI analyse more information, faster. Personal biomarkers can be identified in individual participants; wearable devices can monitor in real time; vast bodies of literature can be reviewed systematically. All in a fraction of the usual time.
AI can quickly identify biomarkers based on genomic, proteomic, and metabolomic data. Those biomarkers can be used to predict disease progression and treatment response. They can even be used to guide prognosis.
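A minimal sketch of biomarker discovery: score each measured feature by how strongly it separates patients who responded to treatment from those who didn't. The gene names and values below are invented, and real pipelines use far richer statistics on genomic, proteomic, and metabolomic data:

```python
# Hypothetical sketch: flagging candidate biomarkers by how cleanly each
# feature separates responders from non-responders. Data is invented.
from statistics import mean, pstdev

# Each patient: two omics measurements plus their treatment response.
patients = [
    {"gene_x": 9.1, "gene_y": 2.0, "responded": True},
    {"gene_x": 8.7, "gene_y": 2.4, "responded": True},
    {"gene_x": 8.9, "gene_y": 1.8, "responded": True},
    {"gene_x": 3.2, "gene_y": 2.1, "responded": False},
    {"gene_x": 2.8, "gene_y": 2.3, "responded": False},
    {"gene_x": 3.5, "gene_y": 1.9, "responded": False},
]

def separation(feature: str) -> float:
    """|difference of group means| / overall spread; higher = better marker."""
    yes = [p[feature] for p in patients if p["responded"]]
    no = [p[feature] for p in patients if not p["responded"]]
    spread = pstdev(yes + no) or 1.0
    return abs(mean(yes) - mean(no)) / spread

scores = {f: separation(f) for f in ("gene_x", "gene_y")}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

Here `gene_x` clearly separates the two groups while `gene_y` doesn't, so it would be flagged as the candidate biomarker for predicting treatment response.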
It can also scan medical imagery en masse and identify patterns in patients with certain diseases. In short, AI helps doctors and other healthcare professionals diagnose patients quickly and accurately.
A key ethical implication around consent is the use of AI for purposes other than those originally intended. Once a project is complete, a successfully trained model is often useful in other areas.
Aktana’s Senior Vice President of Sales, Alan Kalton, says that “AI is no longer a luxury”. More and more sales and marketing teams in the Life Sciences industry are tapping into the deep customer data available to AI models.
Perhaps these consumers consented to their data being included in training the AI, perhaps they didn’t. If they did, would that mean they’ve automatically consented to that data being used to sell back to them? Probably not.
Implicit biases are problematic in any AI algorithm. Because algorithms 'learn' by finding and amplifying patterns in their training data, any biases in that data, or in their human trainers, are amplified too. Algorithms also learn from historical data, so prejudices that were once considered acceptable can be carried forward into modern technologies, and from there into society.
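The amplification effect can be shown with a deliberately naive model. Suppose historical approval decisions carried a mild skew against one group (60/40, not an absolute rule); a model that simply learns each group's majority outcome turns that skew into a 100/0 rule. The groups and figures below are invented:

```python
# Hypothetical sketch: a naive model amplifying a mild historical bias
# into an absolute one. All data is invented for illustration.
from collections import Counter

# Historical decisions: (group, approved). Group "B" was approved slightly
# less often, a 60/40 skew rather than an outright ban.
history = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 40 + [("B", False)] * 60
)

# "Training": learn the most common historical outcome per group.
majority = {}
for group in ("A", "B"):
    outcomes = Counter(label for g, label in history if g == group)
    majority[group] = outcomes.most_common(1)[0][0]

def predict(group: str) -> bool:
    """Predict approval purely from group membership."""
    return majority[group]

# The mild skew is now deterministic: every "B" applicant is refused.
print(predict("A"), predict("B"))
```

Real models are more sophisticated than a per-group majority vote, but the mechanism is the same: patterns in historical data, including biased ones, become the rules the system applies at scale.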
Life Sciences AI models must take particular care to avoid bias, because the social and legal implications of poor ethical practice are vast.
But AI can be used to make Life Sciences more diverse. From clinical trial participants to treatment applications, the expanded data pool offers more opportunity to reach marginalised communities.
A core issue with using expensive technology to solve world problems is that it requires funding. Those with the means to provide that funding tend to influence the projects and solutions they commission.
The problems being solved by AI may not be priorities for most of the world. Just as the billionaire space race is considered tasteless at a time when many can't afford food, investors' personal interests may mean less critical diseases are researched ahead of more urgent ones.
There’s also a risk of public perceptions and research findings being skewed for the sake of profits or other business priorities. When individual companies are funding research, transparency may not be possible.
Healthcare advancement is almost unanimously considered an ethical use case for AI, but others aren't so clear-cut. Parallel Life Sciences projects often share the same data or algorithms, and some of those projects may be less ethical than others.
‘Dark’ industries like bioterrorism and recreational drug development benefit from the advancement of other Life Sciences. As AI accelerates our advancement in healthcare, we can assume it does the same for those too.
Ethical issues arise from these dual-use algorithms. Who owns the patent and intellectual property, for example? Who is entitled to resulting products – such as life-saving drugs? And at what cost?
As the argument for using AI to advance healthcare and other Life Sciences is so strong, it’s in everyone’s interest to find a way to do so ethically.
To find out more about how the Life Sciences industry is using AI, download Evolve’s magazine: AI in Life Sciences. Included is a ChatGPT Cheat Sheet to help you make the most of the world’s favourite AI chatbot.