Artificial intelligence (AI) and machine learning

Version 1.2, 28 March 2025

This guidance is part of the Working in a digitally transformed NHS section of the Good practice guidelines for GP electronic patient records.

For technical terms in this article please refer to the NHS AI Dictionary.

NHS England has published comprehensive guidance to help NHS staff understand, develop and adopt AI.

It is predicted in the NHS AI Lab roadmap that general practice will be one of the most affected workforce groups in the NHS.

AI has been used to help diagnose COVID-19 from chest imaging and to support secondary care dermatology referrals, for example Skin Analytics.  Other examples from the NHS AI award winners help with retinal screening and antimicrobial stewardship.  Symptom checkers such as NHS 111 online are also trialling AI to help with triage.

To achieve its potential, AI must be developed in a regulated way, as a collaboration between clinicians, software engineers, data scientists and product designers.  The early challenges include gathering enough good-quality data to build models, understanding the information governance surrounding this and developing proof of concept of AI tools. As these initial challenges are overcome, other factors will grow in importance, such as workflow integration, demonstrating evidence of real-world clinical effectiveness and ensuring ongoing safety.

You can read more about the information governance issues associated with AI for NHS staff on the NHS England web pages, and in a Government guide on good practice for digital and data-driven health technologies.

Tips when implementing AI

Understanding healthcare workers’ confidence in AI (2022) is an NHS report. It explores how to prepare the UK’s healthcare workforce to master digital technologies for patient benefit.  The report is essential reading for those using AI in the NHS who want to understand the barriers to adopting AI amongst healthcare providers.

Staff may be reluctant to adopt AI technologies if they feel threatened, if they are worried about the risks, or if they do not see enough evidence of effectiveness.  They need to be brought on board, so that those who are worried feel empowered to shape how the technology can be used to support them.

To this end, NHS and GP organisations are working to regulate and design standards that support developers: technology that meets minimum standards can be deployed with greater confidence.  It will be important for GPs and other primary care leaders to be actively involved in this process to shape how technology is used.

Ongoing research into the impact of algorithms on decisions is needed so that clinicians can be appropriately educated.

Evaluation and validation

As with other digital healthcare technologies, implementation must only be carried out when there has been robust clinical validation.  More about evaluation and validation can be found in Understanding healthcare workers’ confidence in AI (2022).

Key considerations when planning AI in general practice

Patient engagement

The patient must be at the centre when assessing and implementing any new technologies.  Care must be taken to ensure algorithms don’t exacerbate inequalities or introduce new discrimination.  One example is an algorithm developed to detect melanoma that was trained on publicly available images, which predominantly showed white skin.  As a result, it was more accurate at detecting melanoma in white skin than in black skin.

An equality and health impact assessment (EHIA) should be used to mitigate the risks of discrimination and of exacerbating health inequalities.

Model cards

Information on the way AI algorithms are created and tested needs to be shared with healthcare teams.  Used for a different purpose or on a different population, AI may produce misleading and potentially harmful results.  There are various methods for evaluating algorithms and displaying key facts to users; for example, a ‘model card’ or ‘model facts’ label has been proposed, showing key information and explaining to users an algorithm’s capabilities and limitations, including the characteristics of the training data set.
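As an illustration only, a model card can be thought of as a small structured record. The sketch below is hypothetical: the field names and example values are not from any published standard, but show the kind of information such a label might carry.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative 'model facts' label for an AI tool (hypothetical fields)."""
    name: str
    intended_use: str          # the purpose and population the tool was built for
    training_data: str         # characteristics of the training data set
    performance_summary: str   # headline results from clinical validation
    limitations: list = field(default_factory=list)

# Example entry, echoing the melanoma case discussed above
card = ModelCard(
    name="Example melanoma detector",
    intended_use="Decision support for dermatology referrals in adults",
    training_data="Publicly available images, predominantly of white skin",
    performance_summary="Accuracy varies by skin tone; see validation study",
    limitations=[
        "Lower accuracy on darker skin tones",
        "Not validated outside the training population",
    ],
)
print(card.name)
```

A card like this gives clinicians a quick way to check whether their own patient population matches the one the algorithm was trained and validated on.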

Risk management

Caution should be employed with new technology.  What are the risks with the real-life application of this software and how can they be minimised? 

Post-market surveillance is essential, as with any new medication or medical device.  Medical device incidents or near misses should be reported to the manufacturer and to the MHRA via the Yellow Card reporting system.

Considering wider impacts

There will always be unanticipated effects on clinical workload, care pathways and payment mechanisms.  For example, if symptom checkers are too risk averse, workload may increase.  Similarly, indeterminate results thrown up by algorithms may increase the need for additional diagnostic investigations.  To mitigate these effects on the health system, the wider impact of new technology needs to be considered.

For more details see:

Other helpful resources

Learning/workspaces

Reports

Other