A Cinderella Story of Competency for Videofluoroscopic Swallowing Studies


Title: Measuring Competency Development in Objective Evaluation of Videofluoroscopic Swallowing Studies
Authors: Nordin, Miles, & Allen
Journal: Dysphagia
Year of Publication: 2017
Design Type: Exploratory
Purpose: “The overall aim of this study was to evaluate competency development and feasibility of SLP use of a systematic (or quantitative) VFSS measurement process in order to improve the objectivity of study interpretation and better inform care.”

“A second aim was to assess participating clinicians’ self-reported feelings of competence and pressure when learning and using this measurement system.”
Population: 10 SLPs of various experience levels recruited via a university social media page
Inclusion criteria: none specific; first-come, first-served basis
Exclusion criteria: none specific; participants were required to attend 1 in-person training session


 

I don’t know about you, but a lot of my dysphagia learning happened on the go, mixed with lots and lots of reading, watching, reviewing, and questioning (not to mention the facepalms after learning some of my practices were wrong, ones that still make me shiver with horror😱). While I’ve found different mentors here and there throughout the years, I always wished I’d had a fairy godmother SLP to turn all my curious, incompetent, #newbie regrets into one confident, competent, quick clinician with all the fancy skills and high-class knowledge.

Even though I’m still waiting on my magic wand I ordered from grad school to arrive🙄, there seems to be hope for future mentors/mentees for this collaboration!

Whether you’re a #newbie or #vintage, go dust off those raggedy scrubs and grab your fave eager-to-learn SLP pals to read on so we can avoid listening to the negative step-sister attitudes in order to transform into the clinician of our patients’ dreams!!😍👸


 

The timing couldn’t have been any better for me to stumble upon this article as the quest for competency feels neverending for VFSS, and I know many others are and have been in the same boat, so it felt pretty cool to be able to easily relate to the authors’ research questions:

  1. Can SLPs master a selected number of objective VFSS measures within an 8-week period?
  2. Do speed, accuracy, and interpretation skills improve over time?
  3. Does previous clinical experience in VFSS influence competency development?
  4. How do SLPs’ perceived competence and perceived pressure in completing the measures change over time?

While I think our field has come a loong way over the last few decades (I mean, dysphagia was only really recognized starting in the mid-late ’80s!!), from MBSImP and the PAS to ‘thickened-liquids/chin-tucks are the cure’, we still have a ways to go towards improving what we know and what we do to be the very best experts in this area.

The authors are quick to acknowledge the past efforts of the standardized ratings mentioned above as well as Kendall & colleagues’ recently published standardized approach for interpreting and rating critical swallowing events. These measures, described in a detailed and systematic way, have actually shown high reliability and agreement across multiple studies, so it makes sense why most of them were chosen for this Cinderella research study:

Timing measures (in seconds)

  • Total pharyngeal transit time (TPT)
  • Airway closure duration (ACD)
  • PES opening duration (PESdur/POD)

Displacement measures (aka how much something moved; areas in cm², distances in cm, PCR is a unitless ratio; see the sketch after this list)

  • Maximum pharyngeal area
  • Maximum pharyngeal constriction
  • Pharyngeal constriction ratio (PCR)
  • PES max opening (PESmax)
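
For the code-curious, here’s roughly how these boil down to arithmetic once the frames are annotated: the timing measures are just frame intervals divided by the frame rate, and PCR is conventionally the pharyngeal area at maximum constriction divided by the area with the bolus held. A hypothetical Python sketch (every number below is invented, not the authors’ data):

```python
# A rough sketch (not the authors' code) of how the objective measures might
# be derived from frame-by-frame annotations. The 30 fps frame rate matches
# the study's videos; every annotation value below is made up.

FPS = 30  # study videos ran at 30 frames per second

def frames_to_seconds(start_frame: int, end_frame: int, fps: int = FPS) -> float:
    """Convert a frame interval to a duration in seconds."""
    return (end_frame - start_frame) / fps

# Timing measures (seconds)
tpt = frames_to_seconds(112, 145)  # total pharyngeal transit time (TPT)
acd = frames_to_seconds(120, 138)  # airway closure duration (ACD)
pod = frames_to_seconds(125, 140)  # PES opening duration (POD)

# Displacement measures: PCR is conventionally the pharyngeal area at maximum
# constriction divided by the area with the bolus held (smaller = stronger
# constriction); areas are in cm² after calibration to the radio-opaque ring.
max_pharyngeal_area = 9.8        # cm², bolus hold
area_at_max_constriction = 0.6   # cm²
pcr = area_at_max_constriction / max_pharyngeal_area

pes_max = 1.1                    # cm, widest PES opening (PESmax)

print(f"TPT={tpt:.2f}s  ACD={acd:.2f}s  POD={pod:.2f}s  PCR={pcr:.2f}  PESmax={pes_max}cm")
```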

And just why would we want to objectively look at these measures🤔??

“Increased objectivity and agreement in VFSS interpretation has potential for clinical benefits as accuracy in rehabilitation management should also improve” p.2 

Even more importantly, is this something that even makes a difference🤨?! The authors again are on the same page, which is why they didn’t limit the clinicians to certain levels of experience but really captured our huge variability, in order to see if anyone is trainable!

But how do we know when we’re “competent?”

Well, because this study was a first of its kind at the time, they went with a simple 80% criterion for accuracy and for inter- and intra-rater agreement across all the above measures, and used the definition:

“Competency was defined as the SLPs’ ability to effectively apply new knowledge and skills in learning objective VFSS measures then translating them into reporting.” p.2

How do we know if this is something we can actually do in real life?

“Feasibility was defined as time required to measure and report an arbitrary judgment as to whether this would fit into a clinical workload.” p.2 
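
To make those two definitions concrete, here’s a tiny hypothetical sketch. The 80% criterion and the 30-minute ceiling come from the paper; the ±10% tolerance for calling any single measure “accurate” is purely my illustrative assumption, since the paper sets its own benchmarks:

```python
# A hypothetical sketch of the competency + feasibility checks described above.
# The 80% criterion and 30-minute ceiling come from the paper; the 10%
# per-measure tolerance is my assumption, not the authors' published criterion.

def is_accurate(clinician_value: float, expert_value: float, tol: float = 0.10) -> bool:
    """Treat a single measure as accurate if within tol (proportion) of the expert value."""
    return abs(clinician_value - expert_value) <= tol * abs(expert_value)

def competent(clinician: dict, expert: dict, criterion: float = 0.80) -> bool:
    """Competency: at least 80% of measures match the expert 'gold standard'."""
    hits = sum(is_accurate(clinician[m], expert[m]) for m in expert)
    return hits / len(expert) >= criterion

def feasible(minutes_to_complete: float, ceiling: float = 30.0) -> bool:
    """Feasibility: the authors deemed <=30 min per study clinically workable."""
    return minutes_to_complete <= ceiling

# Invented example values for the 5 measures:
expert = {"TPT": 1.10, "ACD": 0.60, "POD": 0.50, "PCR": 0.08, "PESmax": 1.00}
clinician = {"TPT": 1.05, "ACD": 0.66, "POD": 0.52, "PCR": 0.09, "PESmax": 0.95}
print(competent(clinician, expert), feasible(26))  # True True
```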

And because the clinician participants are still humans, and humans come with actual complicated emotions and thoughts, the authors very interestingly also included an outcome measure of how the clinicians felt about their own perceived competency and any pressure they felt💗 (..maybe a measure of the duration of crying in your car will come later😂).


 

So who were these lucky clinicians anyhow??!

The pretty exciting part: I feel like almost everyone can at least see themselves in one of these categories:

  • 6 novice SLPs (no experience in conducting VFSS)
    • 5 = new grads with no clinical experience; 1 = 3 years of clinical experience (no VFSS exposure)
  • 4 experienced SLPs (experience leading VFSS)
    • Experience ranged from 2-10 years
  • NO SLP participants had any experience with objective measures for VFSS

After a 4-hour, hands-on, in-person training session covering 5 of the above swallowing measures, with the opportunity to review each study frame by frame for the displacement calculations, the participants completed the rest of the training at their leisure from home! Weekly email communication for feedback and questions was provided, along with the key textbook.

Obviously, I’m sure I’m not the only one kicking myself for not being aware of this study to volunteer and participate, but I can only hope there will be a follow-up sometime in the future.🤞🤞


So after that initial training session and setup, the participants were sent 3 de-identified, random videos every Monday across an 8-week period to review and interpret on their own time. All videos:

  • were soundless (to reduce any bias)
  • ran at 30 frames per second and ranged from 39 s to 2 min 25 s in length
  • had timing displayed on the recording (100ths of a second)
  • had a radio-opaque ring taped to the patient’s chin to allow calibration for the displacement measures (new fad maybe🙃?? see the sketch after this list)
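
That ring is actually clever: because its physical size is known, measuring it on screen gives a pixels-to-centimeters scale factor, and every displacement measure follows from that. A minimal sketch of the idea (the ring diameter and pixel values here are invented, not from the paper):

```python
# A minimal sketch of how a radio-opaque ring enables calibration: its true
# diameter is known, so measuring it in pixels yields a scale factor.
# The 1.9 cm diameter and all pixel values are illustrative assumptions.

ring_true_diameter_cm = 1.9   # known physical size (assumed value)
ring_measured_px = 47.0       # ring diameter as measured on screen

cm_per_px = ring_true_diameter_cm / ring_measured_px

# Any on-screen distance can now be converted to real units, e.g. PES opening:
pes_opening_px = 26.0
pes_opening_cm = pes_opening_px * cm_per_px

# Areas scale by the square of the factor:
pharyngeal_area_px2 = 3200.0
pharyngeal_area_cm2 = pharyngeal_area_px2 * cm_per_px ** 2

print(f"scale={cm_per_px:.4f} cm/px, PESmax~{pes_opening_cm:.2f} cm, area~{pharyngeal_area_cm2:.2f} cm²")
```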

Sound simple enough? Ya got that right (if not, whether it’s Varibar or frame rates, keep fighting whatever your good fight is!!).

Even though the SLP participants were instructed to watch each video in its entirety, the actual timing/displacement measures were completed only for the 20 mL fluid swallow in each video. I was still left assuming that the “fluids” were thin liquids only, and I definitely questioned whether 3 VFSSs a week are really comparable to a real-life outpatient caseload???🤔🤔

Otherwise, the authors gave us a huge BONUS by actually including the exact standardized interpretation sheet they used, which covers the PAS, the 5 objective measures, and a diagnostic impression/recommendation section. This way WE (us, clinicians😉) can even start thinking about how to practice and implement this in our OWN facility/department!!!🤩😃🙌


So using their training + this standardized sheet, the clinicians emailed the researchers their findings/recommendations at the end of every week, along with the total time it took to complete each video. The clinicians also sent their perceptions (how they thought they did, any pressure they felt) using a simple Likert scale for rated statements like “I did not feel at all nervous about doing the VFSS objective measures.” Yes, I know some might be pursing those lips at the thought of homework, but when ya wanna learn something to be better, ya gotta do whatchya gotta do, right!🤓

 

How do we know if clinicians were accurate in their ratings and findings?

“Three [separate] expert clinicians completed measures on all twenty videos independently prior to the research. Inter-rater agreement between expert clinicians was ICC .92 (confidence interval: .87-1.0).” p.3 

Because the level of agreement between all the experts was so high, their ratings were used as a ‘gold standard’ against which to compare the clinicians in the study, along with the experts’ total time to complete one VFSS (average 20 minutes, range 18-24 minutes). The authors later advise that because this was a first attempt at comparing all of this, there are really no “gold standard parameters that can be used to define competency.”

“The gold standard of 80% accuracy was based on previous benchmarking work and 30-min completion time was solely based on what the researchers deemed to be clinically feasible within a hospital SLPs’ workload.” p.7
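
If you’re curious how an agreement statistic like that expert ICC of .92 can actually be computed, here’s a sketch using the pingouin Python library on invented long-format toy data (the paper doesn’t say which software was used, so this is just one way to do it):

```python
# Sketch: computing an intraclass correlation (ICC) for inter-rater agreement.
# The long-format toy data below (3 expert raters x 3 videos) is invented.

import pandas as pd
import pingouin as pg

# One row per (video, rater) pair; 'score' could be any of the 5 measures.
df = pd.DataFrame({
    "video": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "rater": ["A", "B", "C"] * 3,
    "score": [1.10, 1.12, 1.08, 0.95, 0.97, 0.99, 1.30, 1.28, 1.33],
})

icc = pg.intraclass_corr(data=df, targets="video", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])  # pick the ICC type that matches your design
```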

As for the clinicians’ diagnostic impressions and recommendations: while the clinicians provided a comprehensive assessment, the researchers focused only on specific comments related to identifying “pharyngeal constriction and pharyngoesophageal segment impairments for the 2 objective measures (PCR, PESmax)” and “exercises specifically known to focus on the pharyngeal constriction (Masako, effortful swallow) and pharyngoesophageal segment opening (Shaker head lift, Mendelsohn), respectively.”


 

Did clinicians learn and improve?

Well first, a quick WOO-HOO shoutout that all 10 clinician participants completed the entire study with no dropout, which, even though we can count the whole sample size on our hands, is still quite a feat for any study!👏👏

The researchers broke it down into a few different measures:

“Speed, measures, interpretations and perceived competence and pressure ratings were compared across weeks and experience levels” …[aka] linear regression p.3

(☝️☝️ if you want a quick review check out Dr. Brodsky’s super easy and simple explanation of what that actually is here!)
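
And for a feel of what “compared across weeks and experience levels” might look like as a linear regression, here’s a hypothetical statsmodels sketch; the column names and numbers are invented, not the authors’ dataset:

```python
# Hypothetical sketch of a linear regression comparing completion time across
# weeks and experience levels (statsmodels formula API; data is invented).

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "week":       [1, 1, 4, 4, 8, 8] * 2,
    "experience": ["novice"] * 6 + ["experienced"] * 6,
    "minutes":    [52, 48, 38, 35, 26, 24, 45, 43, 30, 28, 25, 23],
})

# minutes ~ week * experience tests whether completion time falls over the
# weeks and whether that slope differs between novice and experienced SLPs.
model = smf.ols("minutes ~ week * C(experience)", data=df).fit()
print(model.summary())
```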

Speed


When comparing all the participants’ average time to complete one VFSS, there was a decrease from 50 minutes in the first week to 25 minutes by week 8! While neither group (novice vs. experienced) ended up faster than the other, the experienced SLPs did reduce their completion time earlier on compared to the novice group.

This, my friends, is huge🤯 because time is nothing if not precious these days in any setting. So while having more experience with VFSS may get you to your goal earlier, even if you’ve never set foot in a radiology suite, you can still reach that same finish line with the proper training for these measures!😉🥇

“Comprehensive knowledge in anatomic structures is essential for interpreting VFSS. Experience is well established as advantageous for learning with experience enhancing ones’ ability to reflect on action while practicing. Faster recall is also achieved with increased experience, and it is reasonable to expect experienced SLPs to make quicker decisions regarding interpretation of findings and recommendations compared with new graduates.” p.6

First of all, I think that quote right there could be framed to make newer/learning SLPs feel better (I know it did for me!)😊

The authors are also quick to point out that all the clinicians:

  1.  still reviewed the entire VFSS, but only the specific objective measures were analyzed for pharyngeal timing/displacement
  2.  were given minimal history about each patient in the videos

I don’t know about you, but this could have a huge impact on my recommendations for someone who is fully ambulatory years post head/neck radiation therapy versus someone who was just discharged and remains more critical with poor oral hygiene, so I would assume that with this additional information the process would only improve!

 

Measure of Agreement


In other words, how close were clinicians’ and experts’ ratings??

While inter-rater agreement (aka how well different clinicians’ ratings of the same video matched) got better across the weeks, PAS scores really became the main star by achieving almost perfect agreement (.81-1.00) every single week regardless.

Additionally, pharyngeal constriction ratio (PCR), PES max opening (PESmax), and PES opening duration (POD) all improved across the 8 weeks, except airway closure duration (ACD), which actually decreased to ‘fair’ agreement. The authors were quick to offer a rationale for this one outlier: poor screen resolution can make the subtle grey-scale changes hard to detect, and standardizing these technicalities, plus controlling ambient light while previewing and practicing, could improve rating accuracy (and, I suppose, agreement too).

Regardless, there was no difference between the experienced and novice SLPs after the training, so all you #newbies, have faith that with the proper training you can be just as reliable as the seasoned SLPs for these measures!
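
By the way, those “fair” and “almost perfect” labels look like the classic Landis & Koch (1977) benchmarks for agreement coefficients; whether the authors used exactly that scale is my assumption, but here’s a tiny helper mapping a coefficient to its label:

```python
# The 'fair' and 'almost perfect' labels above look like the classic
# Landis & Koch (1977) benchmarks; whether the authors used exactly this
# scale is my assumption.

def agreement_label(coef: float) -> str:
    """Map an agreement coefficient to a Landis & Koch style label."""
    if coef < 0.00:
        return "poor"
    if coef <= 0.20:
        return "slight"
    if coef <= 0.40:
        return "fair"
    if coef <= 0.60:
        return "moderate"
    if coef <= 0.80:
        return "substantial"
    return "almost perfect"

print(agreement_label(0.92))  # the experts' ICC -> 'almost perfect'
```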

 

Accuracy of Measures


On average, all measures reached the ‘gold standard’ goal of 80% accuracy by the end of the 8 weeks for all SLPs, regardless of experience (no difference between experienced and novice). Every measure reached 80% accuracy at some point, and all but one (PCR) were met by Week 5. Something I found really interesting and puzzling at the same time: after hitting that 80% goal, some measures (PCR, PESmax) stayed at or above that level of accuracy through the end of the 8 weeks, while others (POD, ACD, TPT) were much more variable and only met the 80% goal once or a few times. The measures that fluctuated were all timing measures, which personally makes sense to me because those can be so darn hard to pinpoint!🧐🤔

One can only imagine how accurate we would be at many other things if we were given one-on-one attention and training (1 superstar novice SLP even hit 80% accuracy on a measure by Week 1!🤩), but I guess that’s what the whole CEU marathon is for! (They also saw that as accuracy improved, the time to complete a video decreased😮, which makes sense and is definitely something we’re always working towards!)


 

“These data support the findings by Logemann and colleagues that SLPs can achieve specific clinical skills with training irrespective of experience” p.6

 

 

 

Perceived Competence and Pressure


Speaking in strictly statistically significant terms: clinicians felt more competent and less pressure as the weeks went by! Other interesting results that also make total sense when ya think about it (see the sketch after this list)🙃:

  • ↑ Accuracy ⇒ ↑ perceived competence
  • ↑ Accuracy ⇒ ↓ perceived pressure
  • ↓ Time to complete ⇒ ↑ perceived competence
  • ↓ Time to complete ⇒ ↓ perceived pressure
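
Here’s a hedged sketch of how you could eyeball those four associations as simple correlations (the paper itself modeled them with regression; the weekly numbers below are invented):

```python
# Sketch: checking the four associations above as rank correlations.
# All weekly values are invented for illustration.

from scipy.stats import spearmanr

accuracy             = [55, 62, 70, 78, 82, 85, 88, 90]  # % by week
minutes_to_complete  = [50, 46, 40, 36, 32, 29, 27, 25]
perceived_competence = [2, 2, 3, 3, 4, 4, 5, 5]          # Likert 1-5
perceived_pressure   = [5, 5, 4, 4, 3, 3, 2, 2]

for name, series in [("competence", perceived_competence), ("pressure", perceived_pressure)]:
    rho_acc, _ = spearmanr(accuracy, series)
    rho_time, _ = spearmanr(minutes_to_complete, series)
    print(f"accuracy vs {name}: rho={rho_acc:.2f}; time vs {name}: rho={rho_time:.2f}")
```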

While the experienced SLPs had higher perceived competence during the first few weeks, by week 6 there wasn’t any difference between the 2 groups. This makes sense because, once you start feeling like a rockstar, you feel like that perfect unicorn SLP!😎🤩 (I wish!)

Overall it’s pretty great how, again, given the necessary training for certain measures, #newbies can raise their confidence in their competence to the level of their seasoned colleagues. Obviously, there’s way more to this, like ever-present organizational cultures or departmental politics, as well as other pressures we often don’t have a say in. The authors are in touch with this reality too, noting that the literature recognizes the initial pressure of being “externally assessed, fear of revealing low competence or differing from peers.” At least we can feel like the belle of the ball in one area!

“As trainees receive positive feedback from a mentor, they perform better. In fact, there is evidence that stress may be a catalyst for greater performance. Again, the value of off-site (email or telehealth based) mentoring is promising.” p.7

 

Diagnosis and Rehabilitation Recommendations


This was the section that had my head bobbing🤨, banging😬, and straight-up exploding🤯. This quote alone is enough to continue the ongoing discussion within our field:

“By week 8, mean percentage agreement for diagnosis and management recommendations ranged from 66.67 to 100%.” p.4

WHAT?!😳😲

In case it might’ve been missed: while on average the clinicians agreed on an actual (swallowing) diagnosis and accompanying recommendations some of the time, if you take a look at that range, it can be a bit scary. And when you think about Vose et al.’s illuminating article on SLPs’ decision-making in dysphagia, which showed there are a lot of things that don’t make a difference for accurate interpretation (like experience😉), it doesn’t exactly help me sleep better at night…

After really digging into the actual data table, there was some obvious variability that still left me puzzled. For example, when it came to a diagnosis of PES-related issues, it was the novices who maintained the 80% goal across the weeks compared to the experienced SLPs (who actually decreased to 66.67% by week 8), while both groups maintained the 80% goal for the pharynx.

Other data revealed that the average % agreement for recommendations on PES issues was far more variable for both newbie and seasoned SLPs, with novice SLPs reaching the 80% goal initially and then worsening as the weeks went on, while the seasoned SLPs improved by week 8 (pharyngeal recommendations showed less variability and better achievement of the 80% goal across the weeks).

I had (and still have) to question why🤷‍♀️? Is it perhaps that the PES is just miserably misunderstood by all clinicians alike, regardless of experience? Or the fact that this specific swallowing event is more complex and difficult to pinpoint just what is going on (hence, high-resolution manometry🤔??). I really can’t wait for a more detailed dive into these specific data!!

Alas, the authors admit that they weren’t able to code specific recommendations (e.g., normal, modified, nil-by-mouth) or run any actual stats on the recommendations because of the large variability (seriously, check out that Vose article above though). Instead, they share that the responses were very rarely decisive, and again suggest this could have been due to the lack of patient history provided.🤨


 

So obviously, there are some good things here and some at-first-thought-to-be-good things here. The many limitations include, obviously, the small sample size, which could have led to “wide confidence intervals”:

“Caution must be applied, especially as numbers of experienced SLPs decreased over the weeks as participants reached gold standard. This can lead to normality violation and negatively affect ICC results. A fixed sample size could improve the power of future studies.” p.7

Additionally, with “only 3 videos measured per week, one difficult case could skew the data considerably,” and despite the researchers’ attempts to mitigate this, unless they hand the SLP clinicians the best-quality equipment/software necessary, there’s no controlling my cheap ‘ole computer😅.

“Furthermore, other measures of swallowing would likely be utilized in clinical practice including bolus flow and timing measures and these may have altered treatment recommendations in this study, had they been included.” p.7

Overall, I think it’s more confirmation that GOOD training, even a short hands-on session followed by frequent remote follow-ups, can have a positive impact on all SLPs regardless of experience🥰. SLPs can be taught something new and get better, faster, and more confident as they grow more competent, which ultimately benefits all of our patients.

“With the use of objective, reproducible measures and strong agreement in diagnostic interpretation and recommendations, more specific standardized care can be recommended. This should, in turn, lead to better ability to share data in a standardized fashion and hopefully better patient outcomes.” p.8

 

How can YOU use this article?!?

Maybe you’re an experienced SLP, wanting to mentor and improve the clinical skills of your department in order to best help the patients?

Or maybe you’re a #newbie SLP, or just yearning to further grow your clinical skills, all the while wishing an SLP fairy godmother could turn your pumpkin basics into a beautiful carriage of competency?

 

Either way, we can all get together, work together, and do this together to demand the best of the best from each and every one of us, for our patients everywhere❤️



Takeaways:

  • “Within six weeks, new graduate SLPs had achieved the same speed of completion measuring five measures of swallowing as experienced SLPs even though experienced clinicians increased their speed of analysis more quickly.”
  • “This research may act as the baseline to identify the exact competency standard for SLPs who are learning objective VFSS measures in the future.”
  • “SLPs can learn and incorporate objective VFSS measures within a feasible time frame. Speed to completion, inter-rater agreement and accuracy improved over an eight-week time period irrespective of prior VFSS experience levels.”
  • “these data demonstrate good agreement among SLPs on VFSS interpretation when utilising objective VFSS measures”
  • “This study can inform SLP departments when developing training programmes to facilitate the implementation of objective VFSS measures in future”


Article Referenced:

Nordin, N., Miles, A., & Allen, J. (2017). Measuring competency development in objective evaluation of videofluoroscopic swallowing studies. Dysphagia, 32(3), 427-436. doi: 10.1007/s00455-016-9776-9

 

Shoutout to all the AMAZING SLP mentors out there!!!🤩😍

