Critical Care, Improving Outcomes, Mythbusting

Dissections & digitometers

hello. Hello. HELLO!!!!!

Hi there. Just wanted to let you know that I finally rediscovered a use for the digitometer.

No, seriously.

This was a historical matched case-control study from 2002-2014 of 111 aortic dissection patients at 2 Canadian tertiary care centers and one regional cardiac referral center (and 111 matched controls), looking specifically at bilateral blood pressure differential and pulse deficits as markers for acute aortic dissection in the ED.

Not surprisingly, combining the two (blood pressure differential and pulse deficit) increased sensitivity (from 21% to 77%) but greatly decreased specificity (99% to 56%) – which is not quite ideal when dealing with needle-in-the-haystack diagnoses.

Now, I do not know whether it was because pulses were documented retrospectively (“oh no, they have a dissection on imaging, I better check pulses!”) or pulled in as a default part of a macro (“pulses equal and brisk bilaterally”) – and I would not be at all surprised if that were the case – but regardless, 21% of dissections having a pulse deficit vs 0.9% of non-dissections is pretty darn good.

Like +LR of 23.4 good. And in line with other studies reporting 24% sensitivity and 92% specificity.
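If you want to check that math, here is the back-of-the-envelope version (my own arithmetic from the figures quoted above, not a table from the paper):

```python
# Likelihood ratios from the numbers quoted above (my own rough arithmetic).
sens = 0.21          # 21% of dissections had a pulse deficit
spec = 1 - 0.009     # 0.9% of non-dissections did -> ~99.1% specificity

lr_positive = sens / (1 - spec)   # how much a pulse deficit raises suspicion
lr_negative = (1 - sens) / spec   # how little its absence lowers it

print(f"+LR ~ {lr_positive:.1f}")   # ~23.3
print(f"-LR ~ {lr_negative:.2f}")   # ~0.80 -- a normal pulse exam rules out very little
```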

Don't believe that a blood pressure differential is a nonspecific sign? Ask your triage nurses how many times they recheck a hypertensive patient's blood pressure on the other arm – a practice borne out by prior studies showing that over 50% of ED patients have a >10 mmHg differential, and 19% have a >20 mmHg differential.

So while the pulse deficit finding may be exaggerated by retrospective, biased ED charting, the absence of a pulse should still be nudging providers to consider advanced imaging sooner rather than later.

Standard
Mythbusting

Shocker! Parents may prefer PO to IV treatment.

FOR MENINGITIS!

One dogma that particularly sticks in my craw is that IV antibiotics are somehow mythically better than their PO counterparts. It is one that clinicians continue to perpetuate despite quite a bit of evidence to the contrary, sometimes under the guise that “it's what the patients want.”

Well, that is a notion this paper from Brown University in Rhode Island (home of Del’s Lemonade and the Awful Awful) sought to dispel.

For 3 months, the authors surveyed 102 consecutive parents of children being seen in a pediatric ED who were not undergoing evaluation for potential Lyme disease. The reason for choosing Lyme disease as the hypothetical case is that the great state of Rhode Island is an area endemic for Lyme disease, and children with Lyme meningitis are often treated with intravenous ceftriaxone even though oral doxycycline may be effective.

The parents were surveyed after watching a 9-minute video describing a hypothetical Lyme meningitis treatment trial (PO doxycycline vs IV ceftriaxone), and 84 of 102 parents (82%) would consent to their child participating (!). Even more impressive, 37% of parents would accept 2 additional days of symptoms with oral meds even if intravenous treatment hastened symptom resolution (44% would accept one additional day of symptoms). When told the two were likely equivalent, 47% would prefer doxycycline, 24% thought the routes would be equivalent, and 29% still preferred IV treatment – with no differences in likelihood of choice based on age, ethnicity, education, or knowledge of Lyme disease. There was a weak correlation between perceived efficacy and treatment preference, and a moderate correlation between perceived safety and treatment preference.

Basically, explain that about a third of patients have issues with IV antibiotics or PICCs (more diarrhea, allergic reactions, more DVTs, etc – not to mention bigger bills from VNS services, inpatient stays, and so on), and you've got a ~70% chance that the parents would be OK with outpatient treatment.
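For what it's worth, my guess – the paper does not spell this arithmetic out – is that the ~70% comes from the 47% who preferred doxycycline plus the 24% who expected the two routes to be equivalent:

```python
# Where the ~70% likely comes from (my inference, not stated by the authors).
prefer_po = 0.47          # preferred doxycycline when told of likely equivalence
think_equivalent = 0.24   # expected PO and IV to be equivalent
print(f"~{prefer_po + think_equivalent:.0%} plausibly fine with outpatient PO therapy")  # ~71%
```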

Understandably, as the authors put it, “the hypothetical nature of this study may overestimate the proportion of parents who would consent in an actual trial” – but it is great food for thought, and reason enough to at least have the conversation with parents on your next shift when you, as the clinician, are on the fence between stay and go.

Standard
Cardiology, Improving Throughput, Mythbusting

Yes, No, and Maybe – Magnesium for Afib RVR

I really, really wanted this study to work. Few things are more disheartening on an overnight observation shift than needing to place someone on a diltiazem drip after failing to rate control them. Ergo, I have been known to give a few grams of magnesium to try to decrease the likelihood of that happening. On the surface, then, this study seems promising: it looks at standard of care for atrial fibrillation with rapid ventricular response (dealer’s choice: metoprolol, diltiazem, or digoxin) given in combination with one of 3 treatments – placebo, 4.5 g of magnesium, or 9 g of magnesium – with about 150 patients in each arm. The study took 5 years (!) across 3 academic centers in Tunisia, whose EDs serve 90,000-110,000 patients per year. Patients needed a heart rate over 120 bpm and a systolic BP >90 mmHg, without renal impairment, wide-complex tachycardia, decompensated CHF, acute myocardial infarction, or an impaired level of consciousness. All seems fair.

The results, at face value, seem great if you’re a magnesium believer: rate control at 24 hours of 83.3% for placebo, 97.9% for 4.5 g MgSO4, and 94.1% for 9 g of MgSO4. This is a great example of why you should read a paper completely before you start a fight about giving magnesium.

First, all groups used digoxin around 50% of the time for rate control.  This clearly does not mimic US practice. Nor does giving 4.5-9g of magnesium over 30 minutes.  Then the authors sneak this one in:

In a secondary analysis including only patients receiving beta blockers and calcium channel blockers, the obtained results were not significantly different compared to those found in the overall group.

This is sandwiched between mentions of adverse drug reactions (4% flushing in the 4.5 g arm vs 12% in the 9 g arm vs <1% in the placebo arm; otherwise there were no significant differences between the 3 arms) and the discussion of 24-hour rate control. I am not 100% certain what they meant by this statement – were they referring to ADRs? Were they implying that there was no difference between metoprolol- and diltiazem-treated patients and placebo at 24 hours? With only about 50% of patients per arm (~75 patients per arm) treated with these agents, it would be hard to show a meaningful improvement. Not to mention that the actual data for this secondary analysis are nowhere to be found in the paper. Nor have the authors responded to my email asking for them.

Then, of course, there are the prior trials with fewer than 60 patients per arm comparing diltiazem to metoprolol and showing >90% efficacy with diltiazem.

And, of course, there is the next question: are we doing any good? Since rate control has not always been shown to be in the patient's best interest – a 6-fold higher rate of adverse events – and none of the ED AFib RVR magnesium studies look further out than 24 hours, perhaps we should recommend magnesium cautiously, if at all, or even suggest waiting until long-term outcomes are further elucidated. Since this study took 5 years to complete, I do not see the desired study happening anytime soon.

Standard
GI, Improving Outcomes, Mythbusting

The Better Poop Route.

This is a study comparing PO capsules vs colonoscopy-delivered fecal microbiota transplant for C. diff, with the primary outcome being the proportion of patients without a recurrence 12 weeks after stool transplantation.


Survey says: it doesn't matter how you do it, but both routes have their ups and downs.


While PO meds were significantly less costly, the PO route was also significantly less pleasant (44% vs 66% rated it “not at all unpleasant”) – understandably so, as patients took FORTY capsules, under direct observation no less, made from a single 80-100 g donation. Colonoscopy implantation was performed by placing 360 cc of “fecal slurry in the cecum” and was significantly more expensive ($874 vs $308). Colonoscopy patients had a 12.5% rate of minor adverse events vs 5.4% in the PO group (mostly nausea/vomiting or abdominal discomfort).

Further fun facts: participants most frequently characterized fecal transplant as an “innovative treatment” (63% of patients), a “natural remedy” (41%), and “unpleasant, gross, or disgusting” (30%) – which maps surprisingly well onto how providers feel about the same treatment.

It's nice to see the lower-cost, less invasive treatment work equally well (both clocked in at about 95% rates of non-recurrence at 12 weeks), but geez, 40 capsules of poop seems a bit more like a lost dare than a medical necessity.

Standard
Mythbusting

HEART score predicts adverse events for no risk chest pain patients – for a whole 66 patients.

There has been some buzz about one particular HEART score paper on SoMe recently, and while touting the HEART score is all the rage, the excitement over a whopping 66 patients seems downright silly.

This study screened about 5000 patients and excluded roughly 4700 patients – without obvious good reason – as “inclusion criteria were satisfied if the provider ordered an ECG and troponin for the evaluation of ACS. Consistent with prior studies, patients were excluded for the following reasons: new ST-segment elevation >1 mm, hypotension, life expectancy <1 year, a non-cardiac medical, surgical, or psychiatric illness determined by the provider to require admission, prior enrollment, non-English speaking, and incapacity or unwillingness to consent.” Seems like most would be included then, right? Except that was not the case at all. I just do not see how >90% of patients could be excluded based on these criteria in a study done in the United States.

So we are led to believe that this tertiary care center, which has >100,000 annual ED visits, could somehow only scrounge up 282 chest pain patients to recruit into this study. Patients were randomized 1:1 to the HEART “pathway” (HEART score plus serial troponins – more on this later) versus usual care per ACC/AHA guidelines – admit/obs, serial troponins, and provocative testing in consultation with cardiology. The authors assessed adverse events (MI, revascularization, cardiac death), objective testing (stress or angiography), and future hospitalizations/ED visits within the year.

While yes, all sixty-six patients with a HEART score of 3 or less and 2 negative troponins (yep, a whole 66 no/low-risk patients at a 100k+ visit ED) did have zero adverse events at 1 year, the question really is: who cares? That is not enough patients to get excited about, and these are essentially no-risk chest pain patients who should not have a high rate of adverse events in the first place – the very group we should not be subjecting to additional testing anyway!
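For a sense of just how little certainty 66 event-free patients buys you (my own arithmetic, not the paper's), the old “rule of three” says the upper 95% confidence bound for zero events in n patients is roughly 3/n:

```python
# Rule of three: with 0 events observed in n patients, the upper bound of the
# 95% CI for the true event rate is ~3/n (my arithmetic, not from the paper).
n = 66
upper_95 = 3 / n
print(f"0/{n} events -> true 1-year event rate could still be as high as ~{upper_95:.1%}")
# ~4.5% -- hardly the ironclad "zero risk" the buzz implies
```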

As for all the others, there was no difference in adverse events between HEART and usual care – about 10% of patients had adverse events in each arm. As such, I don't read this as the HEART pathway being awesome at reducing long-term issues, but rather as more evidence of the futility of advanced testing aimed at reducing adverse events; the HEART score is merely equivalent to our current tools for future risk stratification. While that is good, I doubt it will result in fewer downstream tests in “higher risk” patients, if only because old habits die hard, Americans like to do more, and the ACC/AHA recommend doing so. Choose wisely, perhaps?

Laughably, the authors open their paper with this line: “Care patterns for the evaluation of Emergency Department (ED) patients with possible acute coronary syndrome (ACS) in the United States are heterogeneous, inefficient, and costly.” And yet, where did we get this “HEART pathway” from? The HEART score did not have two troponins. To boot, neither did the largest study published looking at the HEART score. So what does a second troponin get you? Well, if you believe this study of over 45,000 patients, in which *all* comers had 2 troponins and 2 EKGs [patients were only excluded if abnormal vital signs were present – defined as hypotension (systolic blood pressure <100 mm Hg), tachycardia (pulse >100 beats/min), tachypnea (respiratory rate >20 breaths/min), or hypoxemia (oxygen saturation level <95%) – or if they had electrocardiographic ischemia, a left bundle branch block, or a paced rhythm], they identified a primary end point event in only 4 of 7266 patients, 2 of which were noncardiac and 2 of which were possibly iatrogenic.

So, I again ask: where did the two-troponin issue come from? Well, here it is – in one single study, not doing a second troponin would have missed 5 adverse events out of 899 patients. Because the total event count was 12, the authors state that their sensitivity went from 58% to 100%. While this sounds nice, it can also be reframed as catching 7 of 899 (0.78%) with one troponin versus 12 of 899 (1.33%) with two – an overall improvement of… 0.55% with a second troponin. And this has not really been replicated to demonstrate the value of a second troponin. So, either use the HEART score as it was meant, or use two troponins and gestalt – both probably get you into trouble and add waste to the system once you reach the point of minimal gains. You're testing 200-plus patients to find one more bumped troponin that may or may not get a meaningful intervention.
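Laid out as arithmetic (my framing of the numbers quoted above, not the authors'):

```python
# What the second troponin actually buys, using the single study's numbers above
# (my framing, not the authors').
n_patients = 899
total_events = 12                                             # all adverse events in the cohort
caught_by_first_trop = 7                                      # ~58% sensitivity with one troponin
caught_only_by_second = total_events - caught_by_first_trop   # the 5 extra catches

absolute_yield = caught_only_by_second / n_patients           # ~0.56%, i.e. the ~0.55% above
patients_tested_per_extra_catch = n_patients / caught_only_by_second

print(f"Absolute yield of troponin #2: {absolute_yield:.2%}")
print(f"~1 extra event found per {patients_tested_per_extra_catch:.0f} patients retested")  # ~180
```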

So I ask hospital administrators and EM & cardiology colleagues: is a decrease of 0.55% worth a second troponin – worth the extra 4+ hours (3 hours until the second troponin, plus time for the draw to occur and the test to be processed and resulted), plus all of the associated resources and space not allotted to other patients during those 4+ hours, given to thousands of patients annually? I imagine that if that time and bed space were allocated to others, the 0.55% increase in adverse events would be more than made up for across the rest of the department by earlier recognition and treatment of just about every other disease entity. Sounds like, at a minimum, a time to introduce shared decision making (“hey, we catch one in two hundred with a second troponin, want to stay another 4 hours? And no, you can not go outside and smoke. How about a nicotine patch and a tuna sandwich?”).

And lastly, no, you cannot say the HEART score results in a low risk of 1-year adverse events based on a single-center study that failed to enroll more than 66 such patients.

Standard
Critical Care, GI, Improving Outcomes, Mythbusting

The fewer transfusions, the better – platelet style.

Over 4 years, the Mayo Clinic reviewed over 40,000 ICU patients, seeking to determine whether prophylactic platelets for critically ill thrombocytopenic patients matter.

Of these 40,000 ICU patients, they excluded anyone who received RBCs within 24 hours prior to the platelet count that triggered transfusion, “to minimize the risk of including those with active bleeding.” The primary outcome was essentially whether transfusing platelets led to more RBC transfusions, with all-cause mortality and ICU- and hospital-free days (among others) as secondary outcomes. Seems backwards, but whatever.

So what falls out of this data?

For all comers, in a propensity-matched cohort, one transfusion begets another: 46.3% of those given platelets also received an RBC transfusion vs 10.4% of those who were not. Interestingly, those transfused platelets had fewer ICU-free days (22.7 vs 20.8) and more hospital-free days (15.8 vs 13). When you look specifically at the ~5000 patients with platelet counts <50k and compare those who were and were not transfused platelets, there was no significant change in ICU mortality, 30-day mortality, ICU-free days, or hospital-free days. While this comparison was underpowered to reach statistical significance, it is not hard to imagine a larger study suggesting a similar benefit to not transfusing these patients – particularly since this study saw fewer hospital-free days and fewer ICU-free days with transfusion (10.2 vs 7.8 days and 19.9 vs 18.3 days, respectively), favoring a more restrictive transfusion strategy (but again, not meeting statistical significance, perhaps because so few patients had platelet counts <50k).

This is not the first study I have seen suggesting that empiric transfusion, or outright cancellation of procedures, based on platelet counts between 50k and 150k is essentially bunk, and that prophylactic platelet therapy is of little benefit, if not outright harmful. There is even a flicker of a signal that prophylactic transfusion above a 20k threshold may not be beneficial – but that has yet to be definitively shown.

I could not agree more with the authors' closing words: “Finally, it must be acknowledged that while clinical trajectories did not improve for the cohort as a whole after platelet transfusion, it is possible that certain subpopulations may indeed benefit from the intervention, though these subgroups have yet to be identified.”

Standard
Mythbusting

Delayed endocarditis diagnosis? The patient can have as many diseases as they please.

This is an exploratory look at patients diagnosed with endocarditis at admission versus those with a delayed diagnosis. Granted, this is not a US study, and it spans a 9-year period at a single center, but it does provide an interesting look at how we manage these patients….

They took patients with an admitting diagnosis of endocarditis who went on to have it as a final diagnosis as well (54 patients), and compared them to patients with a non-endocarditis initial diagnosis who eventually had a final diagnosis of endocarditis (64 patients).

Even in the two slam-dunk groups – IVDA and those with valve replacements – the diagnosis was delayed: 38% of the time for those with a history of IVDA, and there were significant delays by valve type as well, with native valves delayed 63% of the time vs prosthetics 24% of the time…. Are we really that bad at diagnosing this? Let's peel back this onion.

Cases were placed into 3 categories: (1) complications of endocarditis not immediately recognized as endocarditis (70% of cases); (2) infectious disease unrelated to endocarditis (14% of cases) – e.g., hepatitis; and (3) inconsistent non-infectious disease (16% of cases).

Of those in the “complications” category, only 10% were unlikely to be dosed with antibiotics – they were admitted for “stroke” or CHF/pulmonary edema. This is clearly understandable. Do we need STAT echos for the pulmonary edema patient? Or for the stroke? Ordering a STAT echo right away on all of these patients – or routinely ordering them on all CHF/stroke patients – may prove more costly and harmful than it is worth.

The authors make the argument that there is significant mortality associated with an initially missed diagnosis in their cohort – 75% vs 25% (!!!). I'm not certain these represent a complete whiff on the part of the treating clinicians. Rather, I would argue that these patients had their complications recognized and treated appropriately (i.e., the pneumonias and UTIs got antibiotics initially), and that they were likely sicker to begin with, which is why they had all the additional complications and higher mortality.

Perhaps a heightened awareness of valve replacement patients, and of the disease process in general, may help, but sadly, when you are looking for a needle in a haystack, expecting a 100% sensitive and specific diagnostic algorithm is unreasonable. We can certainly do better, but how much better without causing harm to the rest of the department remains debatable.

Standard
Mythbusting, Radiology

POCUS gel – hot or not?

“Wow, that's cold!”

It’s a common statement after applying the probe to a patient, and wouldn't you love to know the answer to the age-old question, “Does gel temperature matter for patient satisfaction?” We have some data suggesting improved patient satisfaction scores with POCUS, but could those scores be better if the gel were warmed? Or would all the improvement in satisfaction scores be lost with cool gel?

This group performed bedside ultrasound using heated gel (102°F) or room-temperature gel (82.3°F – quite the warm department!). The investigators even went so far as to try to blind those performing the study with a heat-resistant glove (!), and they validated gel temperature through weekly quality assurance measurements throughout the study period. All patients were told the study investigated various measures to improve patient satisfaction with POCUS, but not that the specific focus was gel temperature. After POCUS was performed, patients completed the following survey: “How satisfied are you with the experience of having a bedside ultrasound today?” (on a 100-mm visual analogue scale (VAS)); “Are you satisfied with the care you received today in the emergency department?” (yes or no); and “How professional was the provider who performed your bedside ultrasound?” (a Likert scale spanning 1-5).

All in all, 59 patients underwent randomization to POCUS with room-temperature gel and 61 underwent randomization to heated gel.

In the end, heated gel made no difference for any of the three questions posed to patients. While those of us utilizing warm gel were probably just making space in the blanket warmer, it's nice to know that this intervention does not make much of a difference, and that a “special” gel-specific warmer in your department is probably unnecessary.

Standard
Mythbusting

Do Prehospital Antibiotics Matter?

In short, probably not, but still not completely disproven.

This randomised, controlled, open-label trial looked at giving 2 grams of IV ceftriaxone to patients who met SIRS criteria (save for WBC – testing unavailable to EMS) with a suspected infectious illness. Patients were randomly assigned (1:1) to the intervention group or the usual care group using block randomisation with blocks of 4. The study took place across ten large regional ambulance services serving 34 secondary and tertiary care hospitals in the Netherlands over a 2-year period. They screened 3228 patients, of whom 2698 were eligible (exclusions included pregnancy, beta-lactam or ceftriaxone allergy, and suspected prosthetic joint infection, among others); 1150 went to the usual care arm (IV fluids, supplemental oxygen prn) and 1548 to the intervention group (2 g ceftriaxone plus usual care). Thirteen patients in each arm were excluded from the final analysis due to withdrawn consent or loss to follow-up. The primary outcome was all-cause mortality at 28 days.

So, while they screened over 3,000 patients over 2 years (a massive undertaking!), unfortunately only 37 (3.3%) patients in usual care and 66 (4.3%) patients in the early antibiotics group had septic shock. Perhaps you could make an argument that the intervention group was slightly sicker, with 22% vs 17% having 2 or more qSOFA criteria. Despite a median time to antibiotics of 70 minutes in the ED (thus, probably close to 90+ minutes faster in the intervention cohort), and with 14% getting antibiotics >3 hours from presentation and 14% getting none at all (suspected viral syndrome), there was 8% mortality in both arms at 28 days and 12% in both arms at 90 days. No difference.

When you look at mortality for septic shock, it was 27% (10/37) in the usual care cohort vs 28.8% (19/66) in the prehospital antibiotic cohort. Again, not statistically significant. While prehospital antibiotics might make a difference in a larger cohort, it is probably going to be very hard to ever do that study – this was a 2-year study screening over 3,000 patients, and they were barely able to accumulate over 100 septic shock patients.
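For the curious, a quick sanity check on that claim (my own calculation from the raw counts quoted above, not an analysis reported in the paper):

```python
# Fisher's exact test on the septic-shock mortality counts quoted above
# (my own quick check; not reported this way in the paper).
from scipy.stats import fisher_exact

#                  [died, survived]
usual_care       = [10, 37 - 10]   # 27.0% mortality
prehospital_abx  = [19, 66 - 19]   # 28.8% mortality

odds_ratio, p_value = fisher_exact([usual_care, prehospital_abx])
print(f"OR {odds_ratio:.2f}, p = {p_value:.2f}")   # p is nowhere near 0.05
```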

An American might argue, “they only gave ceftriaxone, you need a real drug like pip-tazo and vancomycin!” Slow down. The authors acknowledge that ceftriaxone may not have been the ideal choice – it was “a big gun” that they could all agree on – and most patients were rapidly narrowed, most commonly to amoxicillin–clavulanic acid, with ciprofloxacin and ceftriaxone the second and third most common antibiotics given. They did not have culture reports back at the time of publication, but the low mortality, and the fact that 9% of each cohort were not given antibiotics from the ED due to suspected viral illness, makes me suspect they do not have nearly the resistance problem (or concerns) that the Americans do, likely due to appropriate stewardship. Likewise, while one may be concerned about missed diagnoses due to premature closure, the miss rate was 1.4% in the intervention group vs 1.7% in the usual care group – also not statistically significant.

In the end, the authors provide a sensible view of the current state of prehospital antibiotics: “Studies showing that early antibiotic treatment is beneficial for reducing mortality found this positive association mainly in patients with more severe illness and a (time to antibiotic) of more than 5–6 hours… However, we currently do not advise antibiotic administration in the ambulance to patients with suspected sepsis.”

While it is certainly plausible that prehospital antibiotics may benefit those in septic shock, it is a near certainty that, at least in the USA, sepsis hysteria would further ensue, and the inertia of giving everyone a dose of broad-spectrum antibiotics would likely take over – not to mention our continued fixation with iatrogenic salt-water drowning. The cost to the system – including to other patients in the department – of responding to these prehospital alerts for those not in shock is likely to be a hidden cost infrequently published or discussed by administrations.

Standard
Critical Care, Mythbusting

Even Pharma is getting in on Vanco-PipTazo AKI

This was an entertaining 9-page meta-analysis espousing the therapeutic harm of vancomycin plus pip-tazo in the form of acute kidney injury. With a conflict of interest page that reads like a pharmaceutical mutual fund (The Medicines Company, Cubist, Pfizer, Merck, Forest/Allergan, Melinta), it's no wonder that they infer increased mortality due to AKI, yet conveniently and COMPLETELY ignore that the same papers they reference show no mortality difference – if anything, a trend towards a mortality benefit for vanco-pip-tazo. Likewise, with dialysis rates <2%, the induced kidney injury is less likely to cause harm than a suboptimal drug that won't kill your bug.

They also fail to mention cefepime neurotoxicity.

There are other ways to go about this. Like, say, reviewing the damn cultures.

But in the end, since The Medicines Company and Melinta have new broad-spectrum antibiotics on the market or on the way, it probably behooves them to run a slight smear campaign against current treatment regimens. So forgive me for considering the possibility that the authors' intentions may not be pure.

Standard