The term ‘artificial intelligence’ (AI) was coined in the 1950s. Since then, this disruptive idea has come a long way from concept to reality.
From enhancing photos for your social media page to detecting diseases from medical images, AI is omnipresent in our day-to-day life. In business sectors, AI-driven analytics are used to detect fraud and predict stock movements. AI in digital media improves click-through rates and helps businesses gain market insights. AI-powered chatbots are replacing human staff, assisting customers with the information they need.
All of this is possible thanks to major advancements in computing power and ‘machine learning’ (ML) capabilities, the backbone of AI.
AI in Healthcare
In healthcare, AI is opening doors that conventional technology could not.
- An Australian startup is building the world’s largest metagenomics database. They’re using an AI/ML-powered cloud platform to gain a deeper understanding of microbiomes, which may help predict disease states, including inflammatory bowel disease (IBD) and cancer, with a high degree of accuracy.
- In radiology, radiomics (a technique that uses algorithms to extract large amounts of quantitative data from radiographic medical images) is surpassing the naked eye in detecting disease states; a simplified sketch of this kind of feature extraction follows this list.
- It is also forecast that AI-based technology could reduce the burden of medical information and transcription work on physicians by 17-20%.
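As a rough illustration of what radiomic ‘feature extraction’ means in practice, the short Python sketch below computes a handful of first-order intensity features from a synthetic image and region of interest. It is a hypothetical, heavily simplified stand-in for real radiomics pipelines (such as PyRadiomics), which derive hundreds of shape, intensity and texture features from segmented scans:

```python
# Hypothetical, simplified sketch of first-order radiomic feature extraction.
# Real pipelines compute hundreds of shape, intensity and texture features from
# segmented CT/MRI regions; the image and mask here are synthetic.
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for a grayscale radiographic image and a tumour segmentation mask
image = rng.normal(loc=100.0, scale=20.0, size=(128, 128))
mask = np.zeros(image.shape, dtype=bool)
mask[40:80, 50:90] = True               # region of interest (ROI)

roi = image[mask]                       # intensities inside the ROI only

# A few first-order (intensity-histogram) features over the ROI
hist, _ = np.histogram(roi, bins=32)
probs = hist[hist > 0] / hist.sum()

features = {
    "mean_intensity": roi.mean(),
    "std_intensity": roi.std(),
    "energy": float(np.sum(roi ** 2)),
    "entropy": float(-np.sum(probs * np.log2(probs))),
}

for name, value in features.items():
    print(f"{name}: {value:.2f}")
```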
Challenges for Health Authorities
With such technologies emerging at speed, AI is causing some headaches for health authorities globally.
At a fundamental level, AI raises ethical, legal, and socio-economic dilemmas around data sensitivity, privacy, and cybersecurity, to name a few.
For the medical technology sector, AI introduces complexity that traditional medical device regulation was not designed for. One example is the adaptiveness of an AI software algorithm: the software may produce outputs that differ from those assessed at the time of approval. Such unpredictability makes it difficult for regulators to evaluate the ‘basis’ behind AI’s decision-making (prognosis, treatment recommendations, etc.).
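To see why this is hard to assess against a fixed approval snapshot, the minimal Python sketch below (my own illustration, not drawn from any actual device; it uses scikit-learn’s SGDClassifier on invented data) shows a continuously learning model whose output for the same patient record can change once it updates on new post-market data:

```python
# Hypothetical sketch of an "adaptive" (continuously learning) algorithm: the same
# patient input can yield a different output after the deployed model updates on
# new data, i.e. behaviour that differs from the snapshot assessed at approval.
# Data, model choice and features are invented purely for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Simulated training data available at the time of regulatory approval
X_approval = rng.normal(size=(200, 5))
y_approval = (X_approval[:, 0] > 0).astype(int)

model = SGDClassifier(random_state=0)
model.partial_fit(X_approval, y_approval, classes=[0, 1])

patient = rng.normal(size=(1, 5))       # one fixed patient record
print("Output at approval:", model.predict(patient))

# Post-market: new real-world data arrives and the model keeps learning from it
X_new = rng.normal(loc=1.0, size=(200, 5))
y_new = (X_new[:, 1] > 0).astype(int)   # the learned relationship shifts
model.partial_fit(X_new, y_new)

# The output for the very same patient may now differ from the approved behaviour
print("Output after adaptation:", model.predict(patient))
```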
Artificial Intelligence Regulation is, without a doubt, complex and uncharted territory, and opinions are divided.
However, regulators and lawmakers agree on one thing: AI needs to be regulated.
Artificial Intelligence Regulation
Right now, China is leading the world by laying down a national plan for AI. In 2019, China’s National Medical Products Administration (NMPA) released a “Technical Guideline on AI-Aided Software”, and approvals are being granted through an ‘innovative’ pathway for AI-based diagnostic software in radiology.
The US FDA followed suit with action plans mapping out a regulatory framework for AI/ML-based Software as a Medical Device (SaMD). Guidelines on Good Machine Learning Practice (GMLP) are also in development.
Other developed countries are trailing behind, with the EU first tackling the ethical and legal issues, and Australia’s TGA planning to issue a guideline separate from its existing SaMD guidance.
In principle, countries with mature regulatory frameworks are largely aligned in adopting a total product lifecycle approach for AI by proposing that manufacturers:
- Establish a culture of quality and organisational excellence (i.e. GMLP)
- Plan for algorithm changes (i.e. premarket assurance plan)
- Take a planned approach for modifications after initial product review
- Be transparent and monitor products’ real-world performance
Effective regulatory frameworks will support the introduction of this powerful technology, for the benefit of patients globally.