Artificial intelligence (AI) and machine learning

Version 1.1, 14 June 2023

This guidance is part of the Working in a digitally transformed NHS section of the Good practice guidelines for GP electronic patient records.

For technical terms in this article please refer to the NHS AI Dictionary.

In the near future…

A patient is identified by an algorithm as being at high risk of hospital admission.  They could easily have been lost to follow-up after failing to attend regular reviews.

They are offered support at home to manage their chronic lung disease, including a smartwatch that measures observations (oxygen levels, heart rate and ECG) and can detect movement patterns, including an unsteady gait.  It can be configured to send automatic alerts to the GP, hospital team or community respiratory nurse in response to worsening clinical indicators, and can call emergency services in the event of a fall.

The wearable monitor detects an increased heart rate and unsteady gait and prompts the patient to book a GP consultation.  The patient chooses to have the consultation via video link due to the weather.  During the consultation, the GP is presented with onscreen observations, and a decision support algorithm processes the information and ranks the many possible differential diagnoses in real time.

Speech-recognition software transcribes the conversation automatically.  At the end of the consultation, follow-up investigations, such as additional blood tests or an X-ray, are suggested based on the most up-to-date and relevant clinical guidelines.

Treatment options are suggested by the algorithm based on the patient, but the ultimate treatment decision is made by the GP and the patient through shared decision-making.  The patient chooses to continue to be treated at home and is added to a virtual ward with automated calls and scheduled visits from the community medical team.

The 24-hour monitoring has provided peace of mind to the patient and their relatives. The healthcare team has benefited from 24-hour home monitoring and automation of clinical pathways. Together, this has allowed the patient to remain at home for longer than would otherwise have been possible, maintaining their independence.

Inspired by a King’s Fund blog post.

Background

Artificial intelligence (AI) is at a critical point of adoption by the NHS.  In other industries AI is already used widely, e.g. in facial recognition software on consumer devices, virtual assistants, and the algorithms behind search engines and social media platforms.  It promises many benefits to healthcare, such as helping with complex decision-making and analysing the huge amounts of data being generated by digital health technologies (DHTs).

It is predicted in the NHS AI Lab roadmap that general practice will be one of the most affected workforce groups in the NHS.

AI has been used to help diagnose COVID-19 from chest imaging and to support secondary care dermatology referrals, e.g. Skin Analytics.  Other examples from the NHS AI award winners help with retinal screening and antimicrobial stewardship.  Symptom checkers such as NHS 111 online are also trialling AI to help with triage.

To achieve its potential, AI must be developed in a regulated way, as a collaboration between clinicians, software engineers, data scientists and product designers.  The early challenges include gathering enough good-quality data to build models, understanding the surrounding information governance, and developing proof-of-concept AI tools. As these initial challenges are overcome, other factors will grow in importance, such as workflow integration, demonstrating evidence of real-world clinical effectiveness, and ensuring ongoing safety.

Algorithms in general practice

Algorithms are not new to general practice, having been used widely for many years, e.g. risk scores (QRISK, FRAX), prescription switching, and searches and reporting such as for the Quality and Outcomes Framework (QOF). These algorithms are generally well validated before being used in clinical practice, and clinicians balance the risk of error against the reward of having convenient tools to use day to day and inform care.

Machine learning involves creating more complex algorithms that learn rules from data, rather than being written by experts. This typically relies on two key components:

  • the development of an advanced algorithm
  • training it with a large amount of data to increase its precision in predicting outcomes

There are many ways in which this can happen, such as analysing data to find patterns (unsupervised machine learning), finding the best way of predicting a specific outcome (supervised machine learning, sketched below), or finding the best way of achieving a goal (reinforcement learning).
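To make these terms concrete, below is a minimal sketch of supervised machine learning in Python, assuming scikit-learn is available. The features, data and ‘admission’ outcome are entirely invented for illustration and are not a real clinical model.

```python
# A minimal sketch of supervised machine learning: the model learns a rule
# for predicting a labelled outcome from examples, rather than the rule
# being hand-written by experts. All data and features here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Invented features: age (years), smoking status (0/1), FEV1 (% predicted)
X = np.column_stack([
    rng.normal(65, 10, 1000),   # age
    rng.integers(0, 2, 1000),   # smoker
    rng.normal(70, 15, 1000),   # FEV1 % predicted
])
# Invented outcome: admission within 12 months, made riskier by age,
# smoking and low FEV1 - purely illustrative, not a real clinical rule.
risk = 0.03 * (X[:, 0] - 65) + 0.8 * X[:, 1] - 0.04 * (X[:, 2] - 70)
y = (rng.random(1000) < 1 / (1 + np.exp(-risk))).astype(int)

# Train on one portion of the data, test on a held-out portion
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The same pattern without outcome labels, using a pattern-finding method instead, would be unsupervised learning.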

Potential benefits of AI in general practice

The NHS Long Term Plan sees AI as a key element in digital transformation ‘to help clinicians in applying best practice, eliminate unwarranted variation across the whole pathway of care, and support patients in managing their health and condition’.

Some of the many ways in which AI is anticipated to touch on general practice are:

  • Automation/service efficiency | Voice recognition software could transcribe consultations, freeing up more staff time to deliver care. Natural language processing could also automate some patient documentation workflows, identifying actions and helping to suggest and automate responses.
  • Diagnostic/decision support | Decision support can apply guidelines to the consultation data and suggest a diagnosis or management plan to the clinician. An example of this is C The Signs, which uses AI to improve the diagnosis of cancer.
  • Precision (P4) medicine | Predictive, Preventive, Personalised and Participatory Medicine incorporates multiple data sources such as the patient record, biometric data and genomic data. This can be used to calculate patient risk more accurately, and can use pharmacogenomics to predict an individual patient’s response to medication.  This promises a move from the traditional ‘one-size-fits-all’ form of medicine to a personalised, data-driven approach to disease prevention and treatment.
  • Image analysis | AI has been used to take the place of a second reader in radiology and histology. This can increase image processing capacity and may help identify lesions that would otherwise have been overlooked.  This field is rapidly developing, and we may see new diagnostic tools to help interpret images taken in primary care, such as photographs of skin.
  • Continuous monitoring | Algorithms can be always on, which makes them well placed to provide continuous monitoring of patients and early recognition of deterioration.  Examples of how this could work include the virtual wards used during the COVID-19 pandemic.
  • Consumer technology | Results from smartphones and wearable devices may prompt a patient to book a consultation. For example, many smartwatches can detect atrial fibrillation and assess sleep quality. The latest smartphones can be used to detect respiration and heart rates.  Smart speakers can similarly detect coughs and snoring.
  • Population/public health | AI may be used to spot patterns in practice population data not previously identified.  Interventions could then reduce the risk level of individuals by giving them targeted advice directly via an app or letter, with lifestyle suggestions such as stopping smoking or reducing alcohol intake.

Tips when implementing AI

Understanding healthcare workers’ confidence in AI (2022) is an excellent report developed by Health Education England (HEE) and the NHS AI Lab.  It explores how to prepare the UK’s healthcare workforce to master digital technologies for patient benefit.  The report is essential reading for those using AI in the NHS, to understand the barriers to adopting AI amongst healthcare providers.

Staff may be reluctant to adopt AI technologies if they feel threatened, if they are worried about the risks, or if they do not see enough evidence of effectiveness.  They need to be brought onboard so that those who are worried feel empowered to shape how the technology can be used to support them.

To this end, NHS and GP organisations are working to regulate and design standards that support developers: technology that has met minimum standards can be deployed with greater confidence.  It will be important for GPs and other primary care leaders to be actively involved in this process to shape how the technology is used.

Ongoing research into the impact of algorithms on decisions is needed so that clinicians can be appropriately educated.

Evaluation and validation

As with other digital healthcare technologies, implementation must only be carried out after robust clinical validation.  The following is a summary taken from the ‘Understanding healthcare workers’ confidence’ report.

Evaluation of AI’s efficacy is a continual process as it passes through stages of development and deployment:

  1. Internal validation | Testing by the developer. This usually uses a validation data set, often split from the same source as the training data set (a minimal sketch of this split follows this list). It generally uses retrospective data sets (data that has been collected in the past).
  2. External validation | Testing with data from a different source to the training data. It tests in clinically relevant situations to ensure the AI is effective.  This may be performed by the AI developer, or independently by a third party.
  3. Local validation | This may be done as part of deploying AI at a local setting, to ensure it performs well with local data, patient populations and clinical scenarios.
  4. Prospective clinical studies | Testing in a real-world clinical setting using data collected in real time to determine if the AI is effective and improves patient outcomes.
  5. Ongoing monitoring | This is essential to identify safety risks or performance issues that may not have been apparent at earlier stages, and to monitor performance, which may deteriorate over time due to changes in the population (‘population drift’) or changes in medical practice. Deterioration can also happen when moving from a training set to a live instance.
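The first two stages can be sketched in code. Below is a minimal, hypothetical Python example using scikit-learn: an internal validation split from the development data, and an ‘external’ cohort whose distribution is deliberately shifted to mimic a different site or population.

```python
# Internal validation: a holdout split from the same source as the training
# data. External validation: data from a different site or population, where
# performance often drops if the model is insufficiently generalisable.
# All cohorts here are synthetic and for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def make_cohort(n, shift=0.0):
    """Synthetic cohort; `shift` mimics a different patient population."""
    X = rng.normal(shift, 1, (n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)
    return X, y

X, y = make_cohort(2000)                    # development-site data
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
X_ext, y_ext = make_cohort(500, shift=0.8)  # different-site data

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Internal AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
print("External AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```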

Regulation

The Multi-Agency Advice Service

The MAAS (Multi-Agency Advice Service) is a collaboration between the National Institute for Health and Care Excellence (NICE), the Medicines and Healthcare products Regulatory Agency (MHRA), the Health Research Authority (HRA), and the Care Quality Commission (CQC).  It has been established to help ensure developers and procurers of AI have the information to confirm products are meeting regulatory requirements.

Medical device regulation

If an AI product is classed as a medical device, it will be regulated by the MHRA. Medical devices must be registered with the MHRA and are subject to the Medical Devices Regulations 2002 (UK MDR 2002).  There is another article in the Good Practice Guidelines about medical devices and digital tools.

The MHRA does not regulate AI products that are not classed as medical devices, such as products used to automate administrative processes.  These products must still conform with other regulations when used in healthcare, including the General Data Protection Regulation (GDPR) and the NHS Digital Technology Assessment Criteria (DTAC) framework.

Caution

Healthcare workers should be cautious and investigate carefully what the regulatory approval of an AI product means.  It can be easy to equate regulatory approval with proof that a product has been clinically validated and is safe and effective, but this may not always be the case.

Evidence standards for AI may not involve external validation.  Current MHRA guidelines for UK Conformity Assessed (UKCA) approval only require internal validation.  External validation may be implied from the UKCA clinical evaluation requirements, but this does not yet need to be done independently.  Prospective clinical studies are also not currently a requirement for regulatory approval.

Clinically valid algorithms depend on high-quality information from which to learn.  It is worth being cautious, as many AI models perform well at the internal validation stage but significantly worse at the external validation stage, due to flaws in the internal validation or because the model does not work well on data that differs from its original source (it is insufficiently ‘generalisable’).

As a rule, if you want to implement or test an AI tool, check with MAAS.  If you want to recommend or prescribe an AI tool, check the NICE evidence standards framework (ESF). 

Evidence standards framework

The NICE Evidence standards framework (ESF) for digital health technologies is likely to become an important tool for evaluating AI technology.  The standards include those needed to demonstrate value to the UK health and care system, including evidence of effectiveness and evidence of value for money.

Privacy

There have been high profile cases where healthcare providers have not communicated clearly with patients about how their data was to be used.  The lesson learned from this is that patient engagement and participation is crucial to the successful adoption of AI technologies in health.  A suitable legal basis must be found before any use is made of confidential patient data, and any appropriate opt-outs, if invoked, must be respected.

If the processing is for direct care, consent is implied.

Personal data should be de-identified before processing or sharing either by being anonymised (removal of all identifying data) or pseudonymised (removal of identifiers in a way that means it can no longer be easily attributed to a specific person).
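As a minimal sketch of one common pseudonymisation technique, the Python example below replaces an identifier with a keyed hash (HMAC), so the pseudonym is stable but cannot easily be reversed without the secret key. The field names, key and NHS number are invented; a real deployment must follow local information governance policy.

```python
# Pseudonymisation via a keyed hash: the identifier is replaced by a
# pseudonym that cannot easily be attributed to the patient without the
# secret key. Field names and the key are invented for illustration.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-held-key"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for an identifier using HMAC-SHA256."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"nhs_number": "9434765919", "fev1_percent": 62}  # invented record
deidentified = {
    "patient_pseudonym": pseudonymise(record["nhs_number"]),
    "fev1_percent": record["fev1_percent"],
}
print(deidentified)
```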

GDPR states that data should only be stored for as long as required, and processed only in the ways agreed for the purpose.  Data protection impact assessments should be completed, along with appropriate data processing and data sharing agreements where applicable.

Key considerations when planning AI in general practice

Patient engagement

The patient must be at the centre when assessing and implementing any new technologies.  Care must be taken to ensure algorithms don’t exacerbate inequalities or introduce new discrimination.  An example of this is an algorithm developed to detect melanoma that was trained on publicly available images, most of which showed white skin.  As a result, it was more accurate at detecting melanoma in white skin than in black skin.

Equality and health impact assessments (EHIA) should be used to mitigate the risks of discrimination and of exacerbating health inequalities.

Model cards

Information on the way AI algorithms are created and tested needs to be shared with healthcare teams.  Used for a different purpose or on a different population, AI may produce misleading and potentially harmful results.  There are various methods for evaluating algorithms and displaying key facts to users: for example, a ‘model card’ or ‘model facts’ label has been proposed, showing key information and explaining to users an algorithm’s capabilities and limitations, including the characteristics of the training data set.
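As a sketch of the idea, a model card can be as simple as a structured record that travels with the algorithm. The fields and values below are invented examples, not a standard format.

```python
# A 'model card' as a simple data structure, so key facts about an
# algorithm travel with it. Fields and values are invented examples.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_population: str
    known_limitations: list[str] = field(default_factory=list)
    internal_auc: float | None = None
    external_auc: float | None = None

card = ModelCard(
    name="Example admission-risk model (hypothetical)",
    intended_use="Flag adults with COPD at high risk of admission for review",
    training_population="Adults 40+, two urban English practices, 2015-2020",
    known_limitations=[
        "Not validated in rural populations",
        "Performance unknown for patients under 40",
    ],
    internal_auc=0.82,
    external_auc=0.74,
)
print(card)
```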

Risk management

Caution should be employed with new technology.  What are the risks with the real-life application of this software and how can they be minimised? 

Post-market surveillance is essential, as with any new medication or medical device.   Medical device incidents or near misses should be reported to the manufacturer and the MHRA via the Yellow Card reporting system.

Considering wider impacts

There will always be unanticipated effects on clinical workload, care pathways and payment mechanisms.  For example, if symptom checkers are too risk averse, workload may increase.  Similarly, indeterminate results thrown up by algorithms may increase the need for additional diagnostic investigations.  To mitigate these effects on the health system, the wider impact of new technology needs to be considered.


Wider context around AI

AI is not an off-the-shelf solution to healthcare’s problems and needs to be carefully evaluated before and as it is deployed.   AI solutions are likely to depend on well-functioning health systems rather than replacing the need for them.  Similarly, automation of processes within primary care and integration of new digital tools with existing systems will be essential for AI to bring the promised efficiency gains needed for users to adopt them.

There are some outstanding questions with AI that are yet to be answered, such as accountability.  If harm happens because of care delivered jointly by clinicians and algorithms, who is legally responsible: the clinician, the developer, the vendor, or the healthcare provider?  The NHS AI Lab is exploring these issues along with NHS Resolution.

Explaining AI can be very difficult, even for those who have designed it.  Many AI systems work as ‘black boxes’, generating results without explaining how they were reached. This can make them difficult to assess for bias, error, reliability, or faults.  It can also be difficult to assess reproducibility: an algorithm that continues to learn as it is used may give a different result from one day to the next, even if all the inputs are the same.  The MHRA hopes to address this and ensure AI products are sufficiently transparent to be reliable, trustworthy, and testable.

Bias

There are many types of bias that can exist with AI.  Bias can be introduced to AI through the prejudices of the people developing the algorithm, or carelessness in the way training data is collected or processed.

One form of bias affecting AI is automation bias, which is when users favour suggestions from automated decision-making systems and ignore contradictory information, even when that information is correct.

Even with well-designed AI there is a risk of automation bias.  This is already seen as a cause of accidents involving pilots using autopilot and drivers of self-driving cars.  The risk may even increase as algorithms get better, if too much confidence is placed in the automated decision, and if the workforce’s concentration on the task, or skill, decreases over time.

Finally, temporal bias describes how an algorithm will eventually become obsolete due to changes in the population or to future events that were not factored into the model.  An example of this is diagnostic models that are presented with new diseases such as COVID-19 or monkeypox.  Periodic evaluation of AI algorithms over time will help to ensure they remain relevant.
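To make periodic evaluation concrete, below is a minimal, hypothetical sketch that recomputes a performance metric for each time window and flags deterioration. The threshold, windows and data are invented for illustration.

```python
# Periodic performance monitoring to catch temporal drift: recompute a
# metric per time window and alert when it falls below an agreed floor.
# The threshold and the labelled data are invented for illustration.
from sklearn.metrics import roc_auc_score

AUC_FLOOR = 0.70  # hypothetical minimum acceptable performance

def check_windows(labelled_scores_by_quarter):
    for quarter, (y_true, y_score) in labelled_scores_by_quarter.items():
        auc = roc_auc_score(y_true, y_score)
        status = "OK" if auc >= AUC_FLOOR else "ALERT: review model"
        print(f"{quarter}: AUC={auc:.2f} {status}")

check_windows({
    "2023-Q1": ([0, 0, 1, 1, 1, 0], [0.1, 0.3, 0.8, 0.7, 0.9, 0.2]),
    "2023-Q2": ([0, 1, 1, 0, 1, 0], [0.4, 0.5, 0.6, 0.6, 0.4, 0.3]),
})
```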

Summary

Primary care clinicians should be reassured that a safe operating environment is being worked on for artificial intelligence.  They should also be encouraged to get involved in its development and deployment to ensure primary care benefits from this burgeoning technology.

As with all new technology there will be compromises with AI, e.g. balancing the needs of individual privacy against society’s need for good research data, or the need to allow patients to benefit from a new technology against the need to protect them from the new risks it presents.

Extensive collaboration between patients, clinicians, regulators, software engineers, data scientists, product designers and entrepreneurs will be needed to ensure AI improves healthcare in the way intended.

Many new roles will be needed in the data-driven health service that AI will bring, and the workforce needs to be trained for this.

Human subtleties and patient preferences mean that clinicians will always be needed to provide empathy and psychological support, and to agree a shared management plan with the patient.

The 2019 Topol Review, Preparing the healthcare workforce to deliver the digital future, concluded that machines ‘will not replace healthcare professionals but will enhance them (‘augment them’), giving them more time to care for patients’.

AI in practice

Machine learning can be used to identify, and risk stratify, a practice’s patients with, or at risk of, diabetes, as in the example below.

Machine learning can use the objective data that a practice or primary care network (PCN) holds to segment and risk stratify all patients, not just those whose blood tests are up to date.  This means clinicians can learn more about all their patients, not just their regular visitors.

A GP surgery wanted to re-design its diabetes service to identify which patients were at risk of adverse outcomes.  Clinicians worked together with data scientists to identify markers for adverse outcomes.  This approach used clinically agreed risk markers, rather than an ‘out of the box’ risk model.  This made the output more transparent and helped describe why people were at high risk – which is not typically exposed in risk registers.

They pooled data sets from general practice and hospital patient records to create a risk model. The model was reviewed to ensure it had identified the right patients and risk stratified them in a way that matched the clinical judgement of the GPs.

In a population of 17,000 patients, the model helped to identify around 40 patients who were not on the diabetes register but who had the condition.
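A minimal sketch of this ‘clinically agreed risk markers’ approach is shown below: each marker is an explicit, named rule, so the output can state why a patient was flagged. The markers, thresholds and field names are invented for illustration and are not the practice’s actual model.

```python
# Risk stratification from clinically agreed markers: each marker is an
# explicit, named rule, so the output explains *why* a patient is high
# risk. Markers, thresholds and field names are invented examples.

RISK_MARKERS = [
    ("HbA1c > 75 mmol/mol", lambda p: (p.get("hba1c") or 0) > 75),
    ("No HbA1c recorded", lambda p: p.get("hba1c") is None),
    ("eGFR < 45", lambda p: (p.get("egfr") or 100) < 45),
    ("2+ missed appointments in last year", lambda p: p.get("dnas", 0) >= 2),
]

def stratify(patient: dict) -> tuple[int, list[str]]:
    """Return a simple risk score and the human-readable reasons behind it."""
    reasons = [name for name, rule in RISK_MARKERS if rule(patient)]
    return len(reasons), reasons

score, reasons = stratify({"hba1c": 80, "egfr": 40, "dnas": 3})
print(f"Risk score {score}: {', '.join(reasons)}")
```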

Other helpful resources

Learning/workspaces

  • Ada Lovelace Institute. An independent research institute with a mission to ensure data and AI work for people and society
  • AnalystX workspace
  • The NHS AI Lab: guidance, case studies and reports on how AI has been developed and implemented in the NHS and in care, covering the challenges, lessons learned and best practice
  • Digital, Artificial Intelligence and Robotics Technologies in Education (DART-Ed). A programme delivered by Health Education England (HEE) that explores the educational needs of the health and care workforce to enable use of AI and Robotic technologies to improve healthcare
  • FutureNHS, Analytics Learning Exchange (Alx) – to help those who work in health and care to become better skilled in the use of data, evidence, and analytical products (please note registration is required to access this site)


Books

  • Hannah Fry, Hello World: How to Be Human in the Age of the Machine, Transworld Publishers Ltd, 2019, ISBN: 9781784163068
  • Eric Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, Basic Books, 2019, ISBN: 9781541644632

Videos

  • Faculty of Clinical Informatics, Artificial Intelligence Special Interest Group, webinar sessions available:
    • An introduction to machine learning and healthcare AI
    • What is ‘computable biomedical knowledge’ and why is it important?
