Here’s another paper on machine learning algorithms to improve infectious disease diagnostics. Today’s topic? Identifying patients admitted to the hospital with infections. The authors used supervised machine learning to develop a clinical decision support system to aid in the diagnosis of infection in newly admitted patients using routine admission bloodwork. Their system used a machine learning technique called Support Vector Machines (SVM) for data classification. Everything I know about SVMs I learned from reading this paper, so forgive any mischaracterizations. That said: SVMs work by stratifying a dataset by the variable of interest (in this case, the presence versus absence of infection in a newly admitted patient), finding the boundary across the other variables that maximizes the distance between the two strata (the “hyperplane”), then retaining only the data points on the margins of the hyperplane (in this case, the infected patients who most resemble uninfected patients, and vice versa) and using these “support vectors” to classify new data introduced into the algorithm.
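For the similarly SVM-curious, the idea is easy to see in a few lines of scikit-learn. This is an illustrative sketch, not the authors' actual model; the two "lab values" and their distributions are made up for the example.

```python
# Illustrative SVM classification sketch (not the authors' actual model),
# using synthetic two-feature "admission labs" data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical toy data: columns are [CRP (mg/L), WBC (1000 cells/uL)]
uninfected = rng.normal(loc=[5, 8], scale=[3, 2], size=(200, 2))
infected = rng.normal(loc=[96, 12], scale=[30, 3], size=(200, 2))
X = np.vstack([uninfected, infected])
y = np.array([0] * 200 + [1] * 200)  # 0 = no infection, 1 = infection

# Fitting finds the hyperplane that maximizes the margin between the classes
clf = SVC(kernel="linear")
clf.fit(X, y)

# Only the margin cases (the "support vectors") define the decision boundary
print(f"{len(clf.support_)} of {len(X)} patients retained as support vectors")
print(clf.predict([[120, 14], [4, 7]]))  # high CRP/WBC vs. normal labs
```

Note that most of the training data ends up discarded: classification of a new patient depends only on the handful of borderline cases nearest the hyperplane.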
In this case, the authors used clinical data from their hospital laboratory over a 6-year period. The variables made available to the algorithm were chosen by (1) physicians’ assessment of which data were helpful in the management of infections, (2) two infectious disease specialists’ corroboration of the recommendations made in #1, and (3) literature review to confirm the relevance of the variables recommended by #1 and #2. A total of six data points were included: CRP, WBC count, creatinine, ALT, total bilirubin, and alkaline phosphatase. Patients were defined as having infections if they had a positive microbiologic culture within 48hr of these other labs being drawn. The training set included 160,203 patients with culture results as well as results for all six lab values. As a validation set, the authors prospectively enrolled another ~100 patients and monitored them for diagnosis of a community-acquired bacterial infection within the first 72hr of admission, defined as a positive culture along with clinical evidence of infection.
A total of 104 newly admitted patients were included in the validation cohort (54% female, median age 65 years). Blood testing was performed by the treating physicians without interference by the researchers (note: the algorithm had previously been shown to provide consistent results with at least 4/6 parameters available, and all patients had at least this many data points). Thirty-six patients were diagnosed with a true infection within 72hr of admission. Those with infection were older (71 vs 61 years; p<0.01) and had higher CRPs (96 vs 5 mg/L; p<0.01), WBCs (12k vs 8k cells/µL), and alkaline phosphatases (101 vs 86 IU/L; p=0.01). Thirty-three of 36 (92%) patients with infection and 4/68 (6%) of the patients without infection received empiric antimicrobial therapy. The machine learning algorithm had a diagnostic ROC AUC of 0.84 with two optimal cutoff points: one with sensitivity of 89% and specificity of 63%, and another with sensitivity of 44% and specificity of 93%. Both cutoff values would have classified all three infected patients who did not receive empiric antibiotics as infected, while they would have classified one and three of the uninfected patients who received antibiotics as uninfected, respectively. Hence, this algorithm had the potential to increase the diagnostic accuracy of physician gestalt (which in this study was already 92% sensitive and 94% specific – not bad!) to essentially 100%.
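As a quick arithmetic check, the physician-gestalt operating characteristics fall straight out of the cohort counts (treating "started empiric antibiotics" as the physicians' positive call):

```python
# Back-of-envelope check on physician gestalt in the validation cohort:
# 36 infected and 68 uninfected patients; 33 infected and 4 uninfected
# patients received empiric antibiotics (i.e., were "called" infected).
infected, uninfected = 36, 68
called_positive_true = 33   # infected patients started on empiric antibiotics
called_positive_false = 4   # uninfected patients started on empiric antibiotics

sensitivity = called_positive_true / infected                    # 33/36
specificity = (uninfected - called_positive_false) / uninfected  # 64/68
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
# matches the reported 92% sensitivity and 94% specificity
```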
Perhaps an algorithm like this could be an inexpensive alternative to procalcitonin as an anxiolytic for antibiotic trigger-happy clinicians. Since everyone who steps into an ED these days gets a complete metabolic profile and blood count with differential anyway, why not use that information if it predicts infection so well, rather than sending hundreds of thousands of PCTs at $25 a pop? 30590545
What risk of S. aureus endocarditis is low enough to forgo performing a transesophageal echocardiogram (TEE)? Previous studies have shown that a subset of S. aureus bacteremia (hospital-onset, <72 hours’ duration, and in a patient with no intravascular prosthetic material) is unlikely to represent infective endocarditis, with pretest probabilities <5%. However, there is still no consensus about the diagnostic implications of this finding: some experts recommend no echocardiography at all in this population, others suggest a transthoracic echo (TTE) is adequate so long as good views are obtained, and yet others suggest that TEE still ought to be performed in all cases. In this paper, the authors try to resolve the question of how to proceed clinically using previously published data on the harms associated with TEE and misdiagnosis of endocarditis (and math).
The authors start by defining two thresholds of probability: the testing threshold and the test-treatment threshold. In plain language, the testing threshold is the probability of disease at which your patient starts to benefit from testing rather than neither testing nor treating, and the test-treatment threshold is the probability of disease at which your patient starts to be better off being treated empirically rather than undergoing testing. Put another way, these are the disease probability bounds within which ordering a test is more likely to benefit than harm your patient.
The authors tried to define the lower of these two bounds (the testing threshold) with regard to the outcome of 6-month mortality. They considered the sensitivity and specificity of TEE for occult endocarditis as well as three other parameters: probability of excess mortality from extended antibiotic treatment (within which they included mortality due to central line infections and severe adverse drug reactions), probability of mortality from undergoing TEE, and degree of mortality reduction from treatment of occult endocarditis. They extracted data for these variables from randomized controlled trials on this topic, or from observational studies where no RCT data were available.
Table 1 shows the key data. The authors report that the “best-case scenario” data suggest a 0.5% excess mortality rate from extending antibiotic therapy, excess mortality of 0.1% for undergoing TEE (though notably, this number varied from 0.01% to 1% between papers), and a mortality reduction of 15% for appropriate treatment of endocarditis. Using the 0.1% value for harm associated with TEE, the authors calculate the testing threshold for TEE to be 1.1%, suggesting that current risk scoring systems, whose low-risk patients still have an estimated 1.8% probability of endocarditis, are inadequate to identify patients who would not benefit from TEE. However, the authors are quick to point out that the testing threshold varies wildly based on the risk of undergoing TEE, and that this influences the results to the degree that TEE might never be indicated or always indicated depending on the value used.
How do I interpret this study? First, the calculated testing threshold for endocarditis in S. aureus bacteremia, while lower than the probability of endocarditis in “low-risk” groups by the scoring systems mentioned above, wasn’t *that much* lower. So, I think if you have a means to further risk-stratify that low-risk subset of patients (perhaps only considering the patients who had no vegetations seen on a study that obtained good echocardiographic windows?) then you’d probably push those patients’ probability of endocarditis below the testing threshold and could reasonably defer a TEE.
Also, my apologies to the Australian authors of this study for referring to “TEE” rather than “TOE,” but in the US our cardiologists only know how to do transesophageal echocardiograms, not transoesophageal echocardiograms, so here we are. 29581048
How do stool testing guidelines for children with acute gastroenteritis fare in identifying patients with bacterial enteropathogens? So we have these stool cultures and gastrointestinal pathogen PCR panels, and they can give us etiologic diagnoses when children show up to the clinic or ER with infectious diarrhea. However, most acute gastroenteritis (in all populations) is due to norovirus and other enteric viruses, all of which are self-limiting and none of which have specific treatments. Since we don’t live in Le Guin’s Earthsea, knowing the true name of your patient’s diarrhea in these cases isn’t so useful; what we’d really like are clinical criteria that define who has the types of gastroenteritis that we might act upon (from either the patient care or public health perspectives), so that we can just test those people.
As part of a larger study, the authors enrolled 2,447 children from two Canadian emergency centers who presented with three or more episodes of diarrhea or vomiting within 24 hours. They collected relevant clinical data and then tested each child for nine bacterial enteropathogens (Aeromonas, Campylobacter, Escherichia coli O157, other Shiga toxin-producing E. coli, enterotoxigenic E. coli, Salmonella, Shigella, Vibrio, and Yersinia). Finally, for this secondary analysis, the authors compared these data to six guidelines on when to test children with diarrhea for enteropathogens to determine the sensitivity and specificity of each set of recommendations. The authors assessed the following guidelines: the CDC’s 2003 stool testing guidelines, the 2009 British National Institute for Health and Care Excellence guidelines, the 2014 European Society for Pediatric Gastroenterology, Hepatology, and Nutrition guidelines, the Infectious Diseases Society of America’s 2017 guidelines, and two expert opinion recommendations published in CID and CMAJ.
Six percent of children in the cohort had one or more of the nine bacterial enteropathogens, and the positivity rate was highest (31%) among children with bloody diarrhea. The sensitivity of the guidelines ranged from 26% (CID criteria) to 67% (CMAJ criteria), and the specificity ranged from 64% (CMAJ criteria) to 97% (CID criteria). Performance of a stool culture by the treating ED physician (i.e. the stool testing had been specifically ordered, rather than merely done as part of the study protocol) had a sensitivity of 31% and a specificity of 95%. No individual set of guidelines provided an acceptable combination of sensitivity and specificity; the most sensitive guidelines would have increased the number of stool tests sent by 200-500% and still missed a third of cases, whereas the most specific guidelines would have missed 75% of cases. The authors conclude that new guidelines for stool testing in children (i.e. ones that actually improve on physician gestalt) are urgently needed. 30517612
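Part of the problem is the low prior: at a 6% prevalence, even the most sensitive criteria run headlong into Bayes’ theorem. A rough illustration using the CMAJ numbers from above:

```python
# Rough positive-predictive-value illustration at the cohort's 6% prevalence,
# using the most sensitive criteria above (CMAJ: 67% sensitive, 64% specific).
prevalence = 0.06
sens, spec = 0.67, 0.64

true_pos = prevalence * sens            # children correctly flagged for testing
false_pos = (1 - prevalence) * (1 - spec)  # children flagged without a bacterial cause
ppv = true_pos / (true_pos + false_pos)

print(f"fraction of cohort flagged for testing = {true_pos + false_pos:.0%}")
print(f"PPV = {ppv:.0%}")  # only ~1 in 10 criteria-positive children has a bacterial pathogen
```

So the loosest criteria would send a stool test on well over a third of all comers while still missing a third of true bacterial cases, which is the authors’ complaint in a nutshell.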