Artificial intelligence (AI) is being applied in virtually every industry that manages large amounts of data, and in an increasingly interconnected digital society, that means nearly every industry there is.
Healthcare is certainly one of them: over the last decade, there have been repeated attempts to bring the benefits of machine learning to healthcare, drawing on the health and genetic data of large parts of the population. However, policy often lags behind innovation, especially where digital technology is concerned, so there are real concerns about whether AI is handling healthcare data ethically.
Lack of regulation
The relationship between AI and healthcare data is not new, but until recently it received little attention. Unfortunately, the result is an almost complete lack of regulation over how companies collect, store and handle healthcare data.
In the U.S., the Health Insurance Portability and Accountability Act of 1996 (HIPAA) sets out privacy regulations for healthcare. Although the act was expanded in 2013, it still falls short in many areas of digital security, and its reach only extends so far.
For example, HIPAA is limited in which companies are required to follow its rules. Social media companies have accumulated a huge amount of personal data on their users, healthcare data included, and yet they are under no obligation to handle this data in the transparent, appropriate way the act sets out.
Even when Facebook developed a “suicide detection algorithm” in 2017, an overt use of accumulated health data to prevent suicide among its users, it could (and did) collect this data without users’ consent. Worse still, the company would not comment on how the data was used or stored, nor on whether it was compromised in the subsequent data breach of September 2019.
HIPAA also does not extend to healthcare-adjacent companies, such as genetic testing services like 23andMe. Shirley Tellamere, a tech writer at Lastminutewriting and Writinity, points out: “Although the services analyse DNA and provide users with information on their genetics, health and potential traits, they are not categorised as healthcare agencies and are therefore not subject to HIPAA. That means they can be just as opaque as Facebook when it comes to what they use your data for.”
AI and bias
Beyond the fact that healthcare data is largely unregulated when it comes to AI applications, there is a second issue: how that data is used to train AI systems in the first place.
Bias is one of the greatest concerns in training AI systems. Sonja Wehsely, head of strategy, business development and government affairs at Siemens Healthineers, has said: “To promise there is never a bias is impossible.” This much is evident from U.S. clinics using an algorithm that privileged white patients over black patients, echoing the wider, well-documented problem of facial recognition software being biased towards white men.
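To make the problem concrete, here is a minimal sketch, in Python, of one common way auditors surface this kind of bias: comparing a model’s false negative rate (how often it misses patients who genuinely need care) across demographic groups. Every name and number below is hypothetical; none of it comes from the clinical system described above.

```python
# Minimal bias-audit sketch: compare false negative rates (FNR) for a
# binary "needs care" classifier across two demographic groups.
# All labels and predictions here are hypothetical placeholders.

def false_negative_rate(y_true, y_pred):
    """Fraction of genuinely positive cases the model missed."""
    misses = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return misses / positives if positives else 0.0

# Hypothetical ground truth and model predictions, split by group.
groups = {
    "group_a": {"y_true": [1, 1, 1, 0, 1, 0], "y_pred": [1, 1, 1, 0, 1, 0]},
    "group_b": {"y_true": [1, 1, 1, 0, 1, 0], "y_pred": [1, 0, 0, 0, 1, 0]},
}

for name, data in groups.items():
    fnr = false_negative_rate(data["y_true"], data["y_pred"])
    print(f"{name}: false negative rate = {fnr:.2f}")

# Here group_a has an FNR of 0.00 and group_b 0.50: the model misses half
# of group_b's genuine cases. Gaps like this are what audits flag as bias.
```

A real audit would run the deployed model over held-out patient records rather than toy lists, but the disparity it looks for is exactly this kind of gap.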
Perhaps most worrying are the implications of this biased software. Yet another issue with unregulated companies handling healthcare data is that they can sell their databases to external companies without warning their users. 23andMe has already done this with GlaxoSmithKline to the tune of $300 million, and it is unlikely to stop there. “Insurance experts are trying to warn us about the implications of selling genetic data,” says Ruby Ruchton, an AI blogger at Draftbeyond and Researchpapersuk. “With the amount of personalised data on potential health conditions, insurance companies could buy and use data from the likes of 23andMe to select clients and set premiums.” If these applications carry the same racist and sexist biases as other AI systems, the insurance industry could become even more socially imbalanced than it already is.
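As a deliberately simplified sketch of that warning, the snippet below shows how a premium formula driven by a genetic risk score can price one group systematically higher even though group membership never appears in the formula. The scores, base rate and multiplier are all invented for illustration; this is not any real insurer’s model.

```python
# Toy illustration: if a genetic "risk score" skews higher for one group
# (for example, because the data that produced it was itself biased), a
# premium formula using that score prices the group higher, even though
# group membership is never an input. All figures are hypothetical.

BASE_PREMIUM = 100.0    # hypothetical flat rate
RISK_MULTIPLIER = 50.0  # hypothetical loading per unit of risk score

def premium(risk_score):
    return BASE_PREMIUM + RISK_MULTIPLIER * risk_score

# Hypothetical applicants: (group label, genetic risk score).
applicants = [
    ("group_a", 0.20), ("group_a", 0.30), ("group_a", 0.25),
    ("group_b", 0.60), ("group_b", 0.70), ("group_b", 0.65),
]

quotes = {}
for group, score in applicants:
    quotes.setdefault(group, []).append(premium(score))

for group, prices in quotes.items():
    avg = sum(prices) / len(prices)
    print(f"{group}: average premium = {avg:.2f}")
# Prints roughly 112.50 for group_a and 132.50 for group_b: the bias in
# the score flows straight through into what each group pays.
```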
Conclusion
This article does not mean to suggest there are no good applications of AI where healthcare data is concerned. As Facebook demonstrated with its suicide prevention campaign, there are certainly ways it could be implemented to benefit users and even save lives. But the questions of how we manage this data, and how that management is regulated in the long term, remain open, and they need answers soon.