Improving systems and processes

Services can make both simple and more complex changes that encourage and enable shared decision making.  This might be as simple as displaying posters and leaflets that encourage people to ask questions and play an active role in decisions about their treatment and care, or as complex as adding EMIS codes to record shared decision making and using these to provide feedback to individual clinicians on their practice.

Making changes is always daunting, but at its simplest it’s about:

  • Identifying what can be improved so that the service works better to support shared decision making
  • Thinking through how you will be able to tell not just that things have changed, but that they have improved
  • Working out what changes you can make to improve those things
  • Making the changes
  • Checking that the changes you have made have led to the improvements you hoped for

These stages follow the NHS Model for Improvement and there is already a rich body of literature on how to do quality improvement within an NHS context.

We have set out:

The key stages in improving systems and processes

Identifying what can be improved so that the service works better to support shared decision making

This is about asking ‘What are we trying to achieve?’.  Without explicit, agreed aims there is a danger that different perceptions exist amongst the team about what the programme (or project) is trying to achieve.  Clarity of aims also helps in choosing how to measure progress - a key to providing teams with meaningful feedback to keep them motivated.

Involving patients at this stage can help you to identify where improvements could be made.

Aims can be set at a number of levels.  For example, the overall aim of the Health Foundation’s MAGIC programme was 'to implement shared decision making in all clinical teams taking part in the MAGIC programme', but interventions at project level have aims such as 'to increase the number of consultations adopting a shared decision making approach'.

Or more locally at team level: 'The project aim is to improve the quality, relevance and timeliness of information given to patients post diagnosis. The primary drivers are reviewing what patients are currently getting, searching for the latest information, asking patients what their needs are and how they can be met and engaging all members of the clinical team in producing and agreeing a standard quality assured information pack.'

Driver diagrams can be very useful in helping teams establish aims and see how their aims and work fit with what is happening elsewhere in the programme.

Mapping out aims at different levels and across projects not only gives people clarity about where their contribution fits, it also helps them make sense of the data being generated to measure progress, identify key stakeholders and people with similar roles to their own, and generally promotes cohesion of purpose.

An example might be:

The overarching aim is that patients are active participants in making decisions about their treatment and care. 

The primary drivers to achieving this are that:

  • Patients receive active encouragement to expect and to participate in making decisions about treatment options
  • Patients receive ‘permission’ to become more involved in discussions and decisions about their treatment and care
  • Patients receive clear and strong messages from their clinical teams inviting them to become involved
  • Clinicians have the awareness, knowledge and skills to fully involve patients

Secondary drivers to ensuring patients receive active encouragement are to:

  • Involve patients as much as possible in internal presentations and walkabouts, helping to evaluate measurement methods and to prepare publicity materials.
  • Use patient and carer groups to spread the message about Ask 3 Questions.

Thinking through how you will be able to tell not just that things have changed, but that they have improved

There is a difference between measuring whether a change has taken place – for example, whether an option grid was used - and whether it actually constitutes an improvement.  Those involved in the MAGIC programme found it was often easier to measure whether the change had taken place than it was to determine whether the change was actually an improvement. 

Involving patients can help build a picture of what an improvement would look like.

Working out what changes you can make to improve those things

Teams freed from the need to produce 'research results' have greater freedom in their choice of interventions.  This has important consequences for team working and innovation.

Experience from the sites suggests that:

  • 'The ability to instil ownership and promote a "created here" approach are key for clinical team engagement and buy-in. We have witnessed a build in momentum, where the teams are now identifying their own aims and undertaking work to test these, in part supported by the use of QI and PDSA approaches, although this is in its early stages.'

Making the changes

It is often helpful to test changes on a small scale and refine them using a Plan-Do-Study-Act (PDSA) cycle.  This might mean, for example, testing an option grid for one condition with one patient, then reviewing and amending it, before increasing the scale to all patients with that condition seen by one general practitioner over one week, and so on.

Checking that the changes you have made have led to the improvements you hoped for

Involving patients can help you to understand what impact the changes you make are having.  Remember to find out what 'measures' can help you understand what is happening from a patient perspective and involve patients in planning these. 

When the shared decision making questionnaire was under further development in Newcastle, the team approached patients in waiting areas to ask their opinion of the survey – whether it was easy to understand and whether it asked the right questions in the right way.  Feedback from patients and their relatives and carers helped the team to make the survey more user-friendly for patients and improved the usefulness of the responses subsequently gathered.

Findings from the data can show that:

  • There were problems introducing the intervention.
  • The intervention didn’t work at all.
  • The intervention produced a result you weren’t expecting.
  • The measure used didn’t measure what you wanted to measure.

Or any combination of the four.

Whatever the outcome of the test, there are lessons to be learnt.

Read about the learning from the MAGIC programme.
