Trachy Talk
Our brand new podcast series from the NTSP will launch in January 2026! The latest news, research and insights from the National Tracheostomy Safety Project (NTSP). Monthly literature updates, specials and interviews from the expert team based in Manchester, UK.
The NTSP is committed to providing education, information and resources to improve patient safety and the patient experience for those with tracheostomies and laryngectomies. All of our resources are housed on our website www.tracheostomy.org.uk, accessed by over 30,000 visitors each month from around the world.
Our goal is to improve the safety and quality of care for patients with tracheostomies and laryngectomies through education. We work closely with patients, families and healthcare professionals to develop new resources to improve care. We’ve collaborated with key stakeholders in tracheostomy care since 2009, and developed freely accessible resources, supported by online learning developed with the UK Department of Health. We’ve worked with the Global Tracheostomy Collaborative since 2012 to improve care for patients and their families everywhere.
We are funded by grants, donations and in partnership with medical device companies through unrestricted awards. We are not tied to any particular brand or manufacturer. All of our work is undertaken by volunteer healthcare staff, patients and their families. You can access our training videos and resources for Basic Care, Emergency Care and Vocalisation & Swallowing. Download and print bedhead signs and emergency algorithms from our resources.
Most of our content is supported by videos. You can support our work by watching or clicking any of the advertising links that appear via the NTSP YouTube Channel.
NTSP Specials (Season 2): Professor Paul Wilson on Quality Improvements
Paul Wilson is Professor at The University of Manchester, Co-Director of the NIHR Greater Manchester Rapid Service Evaluation (REVAL), and Co-Editor in Chief of Implementation Science, the leading international journal for implementation research. Paul spoke about QI at the 6th International Tracheostomy Symposium, held in Manchester, UK, in October 2021. This presentation is an extract from that meeting.
This is the only podcast to bring you literature reviews, hot topic discussions and interviews with healthcare staff, patients and families.
This podcast series is supported by unrestricted education funding from the Atos Learning Institute. The funding supports the professional production of the podcasts and videos, and the medical device companies that support us do not have any creative influence over the content that we record.
You can support our work by watching or clicking any of the advertising links that appear via the NTSP YouTube Channel. You can also donate directly to the NTSP through the NTSP website, or by clicking the Buzzsprout podcast hosting "support" links.
This episode is the third that we recorded for the October 2021 International Tracheostomy Symposium. I'd like to introduce my friend and colleague Barbara Bonvento, who's part of our presentation team for the symposium. Barbara is going to introduce Dr. Paul Wilson, an implementation scientist working at the University of Manchester. Over to you, Barbara.
SPEAKER_01: Next we've got Paul Wilson, who again has a lot of credentials. He's a senior lecturer at the Centre for Primary Care and Health Services Research at the University of Manchester, and implementation science research lead at the NIHR Applied Research Collaboration Greater Manchester. He's also a co-editor in chief of the journal Implementation Science.
SPEAKER_02: Over to you, Paul. My name is Paul Wilson, and I'm going to talk about making improvements happen in practice. So, to deal with the terminology first: so-called implementation science. Is it a science? Well, no. The term implementation science is synonymous with the journal of the same name, which I'm involved in editing. Implementation Science, the journal, started in the early 2000s, and since then there has been an explosion in what would be deemed implementation research. But we shouldn't forget that prior to 2005, people were still looking at these types of questions; it just wasn't labelled in the same way. So implementation science is quite useful as an umbrella term, but it's really an umbrella term describing an interdisciplinary field of applied health services research looking at a particular set of questions. And just to illustrate this, I'm going to put some historical context to it. This takes us back to the 1950s, and a seminal medical sociology study looking at the diffusion of innovations in a healthcare setting. Previously, there had been much interest in how ideas and technology spread within other sectors, particularly agriculture in the States, and sociology had a long track record, going back to the 1930s, of looking at the spread of new technologies within that sector. But in the 1950s, medical sociologists got interested in how innovations in healthcare, largely drug technologies, spread in social systems, i.e., amongst family physicians in this instance. So this is looking at the spread of a new antibiotic of that time, tetracycline, amongst family physicians in Illinois. And what the original analysis found was an effect of social influences on uptake.
So those doctors who were well connected and embedded in social networks were more likely to take up this new drug compared to colleagues who were more isolated, single practitioners and that sort of thing. The work also pointed to things like persuasive communications and educational outreach, in this instance academic detailing by pharmaceutical reps, and the effects these have on changing professional practice. This is really the earliest instance of implementation science in action. Now, if we jump forward to the 80s and 90s, the next big impetus in the field comes with the production of clinical practice guidelines. In the 80s and 90s, it became possible to synthesize large amounts of research-based knowledge and codify it into evidence-based clinical guidelines. A huge industry took off on both sides of the Atlantic, in the US and in Europe, developing guidelines that were evidence-based, largely built on systematic reviews. Not long after, people began to realize: well, we've produced all these guideline recommendations about how people should practice, and adherence is an issue. How do we get people to actually adhere to guidelines, take them up and use them, and enhance the quality of care that's delivered? So alongside guideline development came a cadre of researchers, again on both sides of the Atlantic, in the USA and Canada, the Netherlands and the UK, all interested in strategies to get clinicians, particularly in primary care initially, to take up and adhere to best practice as set out in clinical guidelines. So an impetus to the field developed at this stage, and that then went on to become what we now know as implementation science. We're very interested in getting things that we know work, that do benefit and improve healthcare, taken up and used routinely in practice settings. We're interested in the methods to do that, to make that happen.
Increasingly in the field, there's a focus on de-implementation. So just as we're interested in making sure people are doing things that we know work and benefit patients, we're also interested in removing, reducing, replacing or restricting the use of interventions that we know don't work or are of low or no clinical benefit. This is an increasing area of interest in the field, and some of the strategies that we use to promote uptake are not necessarily transferable to this space, so it's very new and quite an exciting space to be working in at the moment. On the left-hand side of this diagram, you've got discovery research, moving into pre-clinical trials, into early first-in-human trials, into what we know as phase three clinical trials, where you would seek to determine the effectiveness of an intervention, and moving on through the regulatory approvals, marketing authorizations and health technology assessment that you would expect health technologies to go through these days, particularly in high-income countries. And then, once we have something that's established, how do we get it taken up and used in practice? Implementation science has historically focused very much on this right-hand side of the translational pathway. So our focus is on evidence-based interventions, things that we know work or things that we know don't work. The research focus can essentially be categorized into four big chunks. The first is exploring health systems and the behaviours and practices within them that can act as barriers and enablers to successful implementation of any given technology, anything that we want to improve, things that we want to increase use of, things that we want to stop people doing. We're interested in the context in which that's to happen. The next big chunk is the design and evaluation of rigorous strategies that seek to address these barriers and enablers.
So we analyze the organizational context in which change is to occur, and we use that information to design strategies that will mitigate some of the potential barriers, but also enhance the facilitating factors, to bring about change in that given context. The next big chunk is understanding what happens in that space. Invariably, implementation activities and improvement activities, if we're using the terminology interchangeably, are complex interventions. We design strategies, and part of evaluating their impact and their effectiveness in terms of bringing about a change in practice is understanding what happens in that space: what actually gets implemented, how it gets sustained, where, when, why, and how. These are very important questions in implementation science, and increasingly this is a real growth area, alongside the de-implementation work that I mentioned previously. A big contribution that the implementation science field has made is the use of theory to advance knowledge and understanding, to design and evaluate studies, and to guide the implementation of changes in practice settings. This has probably been the single biggest thing within implementation science that has permeated across other disciplines, in a way that we couldn't have imagined when we first started out, when people weren't using theory at all. So I'm just going to focus on the second of those four areas, which is the implementation strategy. I've said a little bit about design in terms of seeking to build on what we know about how a system works and the likely enablers and barriers within it that may help or hinder an implementation effort. Strategies are designed to mitigate those and enhance the uptake of innovations and the delivery of high-quality healthcare. There are hundreds of trials in this space.
Just taking the very top one, audit and feedback: there are nearly 200 trials looking at audit and feedback strategies to enhance the uptake of care processes, practices or procedures. Increasingly, the field is focused on optimising these strategies, enhancing their use so that when you deliver an audit and feedback intervention, you're maximising the opportunities for change to occur because you're optimising it in the best possible way. What went before was that people would describe things as audit and feedback but wouldn't really understand the behaviours or processes that they were seeking to target, or the performance aspects that they were trying to change. Now there's a real focus on delivering information in a way that brings about those changes. And in a lot of improvement efforts and change efforts across health systems, in both primary and secondary care, in family physician settings but also in hospital settings, we don't see a lot of application of best practice as we understand it, using this research-based knowledge that's available. ISLAGIATT is something that we see more commonly. ISLAGIATT was a phrase coined by the founding editor of Implementation Science, and it stands for "it seemed like a good idea at the time". So we're quite good at picking things off the shelf and doing them, thinking we're doing them right or thinking we're doing them as they should be delivered, but that's not necessarily the case. And we often lack specificity: we talk in terms of delivering an audit and feedback intervention, but are we really doing it? And is what we're doing to try to change and enhance performance actually linked to a causal change mechanism? Very often that's not the case. And just to illustrate this, I'm going to quickly step through a widely used strategy: quality improvement collaboratives.
Some of you will have already participated in them. Teams come together, sometimes within organizations and across organizations, to improve performance on a given topic, an agreed area where improvement can occur. And we work together using rapid cycles of change, working with experts to share best practices and deliver improvements over time. So it's a widely used approach, particularly in hospital settings. But when we look at the literature, there's some evidence of effectiveness, but that success appears highly contextual. And the question we need to ask is: why is that? The reason for doing a QIC is often poorly conceived, and little thought is actually given to what the behaviours and processes are that we need to target to bring about improvements in care, and whether this approach, the quality improvement collaborative, can actually do that: can it address those underlying determinants in a way that brings about and enhances care? That's not always the case. We just think: let's do a quality improvement collaborative, let's get together, let's talk about it, let's do some change methodologies and then see what happens. That lack of design and planning that we associate with research studies doesn't always translate into practice settings. We also find, particularly with improvement collaboratives, that they're not always enacted as planned. What we write down in terms of what we want to do is not actually how it plays out in practice. Adaptations happen, unintended consequences happen, and some of the things that we were intending to do might get diluted or distorted, hindering the ability to bring about improvements in practice. They also can take much longer than planned. People often have an idea that change is something that can happen very quickly.
Now, that can be the case, and we've seen it with the pandemic in terms of care systems' ability to change and reconfigure themselves to address a huge problem in the system; that can happen in a rapid way. But routinely, most things take time, and we often underestimate the time it takes to bring about a change. The other thing we find is that we get quite a lot of drift within improvement collaboratives. We might start off with a large number of teams, but we often end up analysing only the teams that stay the course, introducing a degree of selection bias in the results that we see. And these evaluations are often done in a very simple before-and-after way, not using the more rigorous methods that you would expect to see with research studies. So, how can we improve? There's lots of research and lots of systematic reviews on the effectiveness of quality improvement collaboratives, but increasingly we're now beginning to see work looking at how and why they work, in what circumstances they work, and what the active ingredients and key enabling components are that we need to consider and address if we're going to run with them ourselves. I've picked out this review by Karen Zamboni from last year, which developed a program theory for quality improvement collaboratives. If I was doing one, this is one of the first papers I would read. I would go away and understand how we think they're supposed to work, and then think about some of the active ingredients that I need to ensure are built into any improvement collaborative that I'm going to be running. Some of this is understanding the context; some of this is having a clear idea of what we're trying to do in terms of the pathway to those outcomes. So, how do we need to bring about change in practice? What mechanisms are going to deliver the outcomes that we are hoping to see? And then designing a strategy that takes all these factors into account.
So, my lessons for practice, drawing on implementation science, would be these. Learn what's already been done; this is the key message for me, actually. We also need to clearly specify what the change that needs to happen is: who needs to do what differently, where, how, with whom, and when. And again, there are tools and resources in the literature that could be used and harnessed to help us with that problem specification element. As Tracy eloquently presented, use theory to guide the design and evaluation of appropriate change strategies. Theory is a lens through which we can plan, follow a process of change, and evaluate that process of change, based on a set of assumptions that are grounded in not just theoretical but also empirical data. The obvious one: be realistic about the time frames and the resources that you require to bring about change, particularly if it's not seen as a priority within your organization. Those are the most difficult and most challenging changes to bring about. It might be the right thing to do, but if nobody else in the organization or the system thinks it's a priority or something that has to be addressed right now, then change is very difficult to bring about. And most importantly, I've said learn from what has gone before, but we also need to leave an audit trail so others can understand what we've done. You need to describe the process that you go through in your change effort, describing what you did and what happened, because change isn't always as we set it out at the start; what we plan to do is not necessarily what happens. Being able to document those changes and adaptations over time is really useful for other people within healthcare to understand how you brought about change in a given context. There's a global interest in methods to enhance the transfer of research findings. We know this is a global problem.
Translational gaps are something that you read about in all the policy literature. Implementation science offers an interdisciplinary field of research that looks at this very question. And there is now a large evidence base that could be used and could be harnessed to inform practical implementation efforts on the ground. And I'll leave it there.
SPEAKER_00: Thank you. Thanks to Paul and the team for that fascinating talk. As ever, the views and opinions we discuss on the podcast are our own and don't necessarily represent those of our various employers. You can follow us on our social media channels and find more episodes via our podcast web pages, via YouTube, or wherever you get your podcasts. Thanks for listening, and see you next time.