What to measure: some approaches and tools
In essence, there are three types of measure that you can use to evaluate shared decision making. We briefly describe each below, with some examples of how they have been used in shared decision making and some example tools:
Outcome measures reflect the impact of the improvement work on the patient or clinician. In the case of shared decision making, a key impact is the quality of the decision making.
A good quality shared decision is one where the patient:
- is well informed – knows the key features of the options (risks, benefits, consequences)
- understands what is important to them (their values) in choosing options
- is ready and prepared to make a decision
- makes a decision that is consistent with those values and that is followed through.
Decision Quality Measures are designed to measure the quality of the decision making and can serve a number of purposes. For example, they can:
- show improvements in people’s knowledge, readiness to decide and increased confidence in choice of treatment after consultations/access to decision support compared to baseline. In the breast teams in Cardiff and Newcastle working on the Health Foundation’s MAGIC programme, the clinical nurse specialists were keen to demonstrate the importance of their role and the impact of consultations and decision support on the patient
- be used to help teams to tailor subsequent consultations to an individual patient's needs, and to better target their resources, by highlighting those who remained uncertain, had misunderstandings or where their values seemed out of kilter with their (emerging) treatment choice
- show that patients are receiving the right treatment for them and that rates of intervention are therefore at the right level. For example, Cardiff breast surgeons were keen to use the Decision Quality Measure data to support the slightly higher than average mastectomy rate, by demonstrating that their patients are well informed about available options, ready to decide and confident in their choice of treatment
- validate the informed consent process, by providing evidence that the patient/parent knows about the possible risks of the procedure
- potentially directly inform practice, but they require development for each clinical decision and thus are resource intensive.
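To illustrate the first purpose above, a decision quality measure typically includes a set of knowledge items scored against an answer key, so that scores can be compared before and after a consultation or use of decision support. A minimal sketch in Python – the item names, answers and scoring scheme are hypothetical, not those of any actual instrument:

```python
def knowledge_score(answers, answer_key):
    """Percentage of knowledge items answered correctly - one common
    component of a decision quality measure. Item names are illustrative."""
    correct = sum(1 for item, answer in answers.items()
                  if answer_key.get(item) == answer)
    return 100.0 * correct / len(answer_key)

# Hypothetical three-item knowledge key and one patient's responses
answer_key = {"survival_equal": "yes", "recurrence_risk": "same",
              "recovery_time": "shorter"}
baseline = {"survival_equal": "no", "recurrence_risk": "same",
            "recovery_time": "shorter"}
post_consultation = {"survival_equal": "yes", "recurrence_risk": "same",
                     "recovery_time": "shorter"}

print(knowledge_score(baseline, answer_key))          # score before
print(knowledge_score(post_consultation, answer_key)) # score after
```

Comparing the two scores across a cohort is one simple way to demonstrate the kind of baseline-versus-post improvement described above.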
Shared decision making questionnaire:
The shared decision making questionnaire was developed by sites working on the Health Foundation’s MAGIC programme to help them understand how involved patients felt in decision making about their own treatment and care. It draws on already widely used and/or validated questions (for example in the NHS Patient Survey). This feedback could then be provided to clinicians, enabling them to monitor their own practice in relation to shared decision making and address any associated training/practical issues.
There is a tendency for a ‘ceiling effect’ when using this survey. One solution to this is to plot adverse responses (disagree or strongly disagree), mapped over time. Nonetheless, use of the questionnaire was a valuable reminder to clinicians to think about shared decision-making in their consultations.
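The suggested workaround for the ceiling effect – plotting adverse responses over time – amounts to tallying the proportion of 'disagree'/'strongly disagree' answers per period. A minimal sketch in Python, using made-up survey data (the month/answer record format is an assumption):

```python
from collections import defaultdict

# Responses that count as 'adverse' on the agreement scale
ADVERSE = {"disagree", "strongly disagree"}

def adverse_rate_by_month(responses):
    """Return {month: proportion of adverse responses}, so that rare
    negative answers stay visible despite the ceiling effect at the
    positive end of the scale."""
    totals = defaultdict(int)
    adverse = defaultdict(int)
    for month, answer in responses:
        totals[month] += 1
        if answer.lower() in ADVERSE:
            adverse[month] += 1
    return {m: adverse[m] / totals[m] for m in totals}

# Illustrative data only
survey = [
    ("2012-01", "strongly agree"), ("2012-01", "agree"),
    ("2012-01", "disagree"),
    ("2012-02", "agree"), ("2012-02", "strongly agree"),
]
print(adverse_rate_by_month(survey))
```

The resulting monthly proportions can then be plotted as a run chart to watch for change over time.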
Ask 3 questions survey (Cardiff):
Both patients and clinicians answered five simple questions, which enabled clinicians in Cardiff to compare their perceptions with those of the patients about whether they were meeting patient needs in respect of information about options, risks and likelihood. The patient data were collected using a postcard-sized questionnaire given to patients by reception staff, and handed back to reception staff after the consultation. Clinicians completed the card after each patient consultation. At the end of a 'session' – usually about 10 patients – clinicians took the patient cards from reception and compared their 'scores' with those given by the patients. This provided very quick feedback and led clinicians to reflect on a case by case basis on whether they were actually 'doing shared decision making'. Although the data were fed into a PDSA cycle and saved for analysis, the true value was that clinicians 'learned' what constituted a shared decision in the patients' eyes. These data for personal learning were of sufficient value for many clinicians to carry out periodic checks themselves to see if their efforts were seen by patients. They were much less interested in presentations of aggregate data collected over time by the MAGIC team.
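The end-of-session card comparison could be sketched as follows. The card format – dicts of question name to a 1–5 score – and the question names are assumptions for illustration, not the actual card layout:

```python
def session_gaps(cards, threshold=1):
    """Given (patient_card, clinician_card) pairs for one clinic session,
    count, per question, the consultations where the clinician's
    self-score exceeds the patient's score by more than `threshold` -
    the gaps that prompt case-by-case reflection."""
    gaps = {}
    for patient, clinician in cards:
        for question, patient_score in patient.items():
            if clinician[question] - patient_score > threshold:
                gaps[question] = gaps.get(question, 0) + 1
    return gaps

# Two illustrative consultations from one session
session = [
    ({"options": 4, "risks": 2}, {"options": 4, "risks": 5}),
    ({"options": 3, "risks": 3}, {"options": 5, "risks": 3}),
]
print(session_gaps(session))
```

A non-empty result flags questions where the clinician's view of the consultation differed markedly from the patient's – the trigger for reflection described above.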
Ask 3 questions survey (Newcastle):
The Newcastle MAGIC team tested a single question Likert scale survey in which patients were asked to score the consultation in terms of shared decision making in a single episode (1=very poor quality shared decision making, 10=very good quality shared decision making). The aim was to look at a single measure that might be easy to capture and show wider variation.
Three versions were tested:
- Please tell us what you think about the quality of shared decision making you experienced in your consultation today. Download the Ask 3 Questions survey: Quality of shared decision making consultation
- Please tell us to what extent you were asked about what is important to you in making this decision today. Download the Ask 3 Questions survey: Important to you
- Please tell us how much you agree with the statement 'I was involved as much as I wanted to be in making a decision about my treatment and care today'. Download the Ask 3 Questions survey: Involvement
Team Feedback Tool:
This was developed as a structured questionnaire to understand an individual clinician's perspective of their own and of their team's practice. The team feedback tool aimed to establish a baseline of the clinical teams' perception of shared decision making activity within their clinical areas, at an individual and a team level.
The questionnaires were distributed at the beginning of the project and repeated one year later. Whilst initial scores were already generally quite high, overall most teams demonstrated a positive shift in terms of their understanding and use of shared decision making, both at an individual level and in terms of their perception of team activity.
A tool of this type can be used to support quantitative measures of perceptions of the clinicians and other team members, but also to inform reflective discussion within the team about progress.
Process measures help you to reflect on whether your systems and processes are working to deliver the outcome you want.
Teams working on the MAGIC programme used two different tools to measure processes:
Brief Reflective Feedback Tool:
The Brief Reflective Feedback Tool provides structure to enable an individual team to reflect on their activity, and can help a number of teams to discuss their experiences and share good practice in shared decision making. It can also be used to move beyond reflecting on the process of implementing shared decision making, to thinking through how it can be embedded.
Most Significant Change (MSC) technique:
The most significant change technique can help to identify clinicians' perceptions of the key barriers and facilitators to change, and their views on the elements of the programme that have been most successful in accomplishing change.
Clinicians take just a few minutes to answer four short questions, which explore:
- their involvement in shared decision making (in this case, the MAGIC programme);
- their view of the most significant change that had occurred as a result of MAGIC;
- why they viewed the change as significant;
- which aspects of the programme had most facilitated the change.
Balancing measures help you to reflect on what may be happening elsewhere in the system as a result of the change. For example, measurement of clinicians' use of the Option Grid in paediatric tonsillectomy initially showed that the consultations were taking longer than usual. However, repeated measurement of consultation times showed that, as clinicians became more familiar with the tool, the length of consultation returned to normal.
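A balancing measure like the consultation-time example can be tracked as the difference between each period's mean consultation length and a pre-change baseline. A minimal sketch with made-up durations (the figures are illustrative, not MAGIC data):

```python
from statistics import mean

def consultation_trend(baseline_minutes, weekly_minutes):
    """Difference (in minutes) between each week's mean consultation
    length and the pre-change baseline mean - a simple balancing
    measure for watching whether an initial increase settles back."""
    base = mean(baseline_minutes)
    return [round(mean(week) - base, 1) for week in weekly_minutes]

# Illustrative durations: baseline before the Option Grid, then
# three weeks after its introduction
baseline = [10, 11, 9, 10]
weeks = [[14, 15, 13], [12, 13, 12], [10, 11, 10]]
print(consultation_trend(baseline, weeks))
```

A sequence drifting back towards zero mirrors the pattern reported above: longer consultations at first, returning to normal as clinicians grew familiar with the tool.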