For technical terms in this article, please refer to the NHS AI Dictionary.
NHS England has published comprehensive guidance to help NHS staff understand, develop and adopt AI.
It is predicted in the NHS AI Lab roadmap that general practice will be one of the most affected workforce groups in the NHS.
AI has been used to help diagnose COVID-19 from chest imaging and to support secondary care dermatology referrals, e.g. Skin Analytics. Other examples from the NHS AI award winners help with retinal screening and stewardship of antimicrobials. Symptom checkers such as NHS 111 online are also trialling AI to help with triage.
To achieve its potential, AI must be developed in a regulated way, and as a collaboration between clinicians, software engineers, data scientists and product designers. The early challenges include gathering enough good-quality data to build models, understanding the information governance surrounding this and developing proof of concept of AI tools. As these initial challenges are overcome other factors will grow in importance such as workflow integration, demonstrating evidence of real-world clinical effectiveness and providing ongoing safety.
You can read more about the information governance issues associated with AI for NHS staff on the NHS England web pages, and in a Government guide on good practice for digital and data-driven health technologies.
Tips when implementing AI
Understanding healthcare workers’ confidence in AI (2022) is an NHS report. It explores how to prepare the UK’s healthcare workforce to master digital technologies for patient benefit. The report is essential reading for those using AI in the NHS who want to understand the barriers to adopting AI among healthcare providers.
Staff may be reluctant to adopt AI technologies if they feel threatened, if they are worried about the risks, or if they do not see enough evidence of effectiveness. They need to be brought on board so that those who are worried feel empowered to shape how the technology can be used to support them.
To this end, NHS and GP organisations are working to regulate and design standards that support developers, so that technology meeting minimum standards can be deployed with greater confidence. It will be important for GPs and other primary care leaders to be actively involved in this process to shape how technology is used.
Ongoing research into the impact of algorithms on decisions is needed so that clinicians can be appropriately educated.
Evaluation and validation
As with other digital healthcare technologies, implementation must only be carried out when there has been robust clinical validation. More about evaluation and validation can be found in Understanding healthcare workers’ confidence in AI (2022).
Key considerations when planning AI in general practice
Patient engagement
The patient must be at the centre when assessing and implementing any new technologies. Care must be taken to ensure algorithms don’t exacerbate inequalities or introduce new discrimination. An example of this is an algorithm developed to detect melanoma that was trained on publicly available images showing predominantly white skin. As a result, it was more accurate at detecting melanoma in white skin than in black skin.
An equality and health impact assessment (EHIA) should be used to mitigate the risks of discrimination and of exacerbating health inequalities.
Model cards
Information on the way AI algorithms are created and tested needs to be shared with healthcare teams. Used for a different purpose or on a different population, AI may produce misleading and potentially harmful results. There are various methods for evaluating algorithms and displaying key facts to users: for example, a ‘model card’ or ‘model facts’ label has been proposed, showing key information and explaining to users an algorithm’s capabilities and limitations, including the characteristics of the training data set.
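To make the idea concrete, the sketch below shows the kind of fields a ‘model facts’ label might capture and how gaps could be surfaced to a clinical user. The field names and the example values are illustrative assumptions, not any published standard.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Illustrative 'model facts' label; field names are hypothetical, not a standard."""
    name: str
    intended_use: str                      # clinical task and care setting the model was built for
    training_population: str               # who the training data describes
    excluded_groups: list[str]             # populations the model was NOT validated on
    performance_summary: dict[str, float]  # e.g. sensitivity/specificity on the validation set
    known_limitations: list[str]

    def warnings(self) -> list[str]:
        """Collect the gaps a clinician should review before relying on the tool."""
        notes = []
        if self.excluded_groups:
            notes.append("Not validated for: " + ", ".join(self.excluded_groups))
        notes.extend(self.known_limitations)
        return notes

# Hypothetical card echoing the melanoma example above
card = ModelCard(
    name="Example melanoma classifier",
    intended_use="Triage of dermoscopic images in primary care",
    training_population="Publicly available images, predominantly white skin",
    excluded_groups=["darker skin tones"],
    performance_summary={"sensitivity": 0.91, "specificity": 0.84},
    known_limitations=["Accuracy not established outside dermoscopic imaging"],
)
print(card.warnings()[0])  # → Not validated for: darker skin tones
```

The point of such a structure is that limitations travel with the model: a team deploying the tool on a different population sees the mismatch before use, rather than discovering it in practice.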
Risk management
Caution should be employed with new technology. What are the risks with the real-life application of this software and how can they be minimised?
Post-market surveillance is essential, as with any new medication or medical device. Medical device incidents or near misses should be reported to the manufacturer and the MHRA via the Yellow Card reporting system.
Considering wider impacts
There will always be unanticipated effects on clinical workload, care pathways and payment mechanisms. For example, if symptom checkers are too risk averse, workload may increase. Similarly, indeterminate results thrown up by algorithms may increase the need for additional diagnostic investigations. To mitigate these effects on the health system, the wider impact of new technology needs to be considered.
For more details see:
Related GPG guidance
- Health equalities and inclusion
- Interoperability
- Population health management
- Genomics
- Medical devices and digital tools
- Calculating Quality Reporting Service (CQRS)
Other helpful resources
Learning/workspaces
- Ada Lovelace Institute. An independent research institute with a mission to ensure data and AI work for people and society
- AnalystX workspace
- Digital, Artificial Intelligence and Robotics Technologies in Education (DART-Ed). A programme delivered by NHS England that explores the educational needs of the health and care workforce to enable use of AI and Robotic technologies to improve healthcare
Reports
- Academy of Medical Royal Colleges (AMRC), Artificial Intelligence in Healthcare
- Royal College of General Practitioners (RCGP) British Journal of General Practice 2019, Artificial Intelligence and Primary Care
- The Reform Trust, Thinking on its own: AI in the NHS
- Explaining decisions made with AI by the Information Commissioner’s Office (ICO) and The Alan Turing Institute – practical advice to help explain the processes, services and decisions delivered or assisted by AI, to the individuals affected by them
- NHS England, the 2019 Topol review, preparing the healthcare workforce to deliver the digital future
Other
- NHS England, The vision for AI in healthcare
- Government Digital Service (GDS), A guide to using AI in the public sector
- NHS England case study, ‘C the signs’ app for referrals
- US Food and Drug Administration (FDA) and Department of Health, Digital and data-driven health and care technology, 2018 updated 2021, Guiding Principles to AI