Publications

2017

Kim DH, Huybrechts KF, Patorno E, et al. Adverse Events Associated with Antipsychotic Use in Hospitalized Older Adults After Cardiac Surgery. Journal of the American Geriatrics Society. 2017;65(6):1229-1237. doi:10.1111/jgs.14768

OBJECTIVES: To evaluate in-hospital adverse events associated with typical and atypical antipsychotic medications (APMs) after cardiac surgery.

DESIGN: Retrospective cohort study.

SETTING: Nationwide inpatient database, 2003 to 2014.

PARTICIPANTS: Individuals (mean age 70) newly treated with oral atypical (n = 2,580) or typical (n = 1,126) APMs after coronary artery bypass grafting or valve surgery (N = 3,706).

MEASUREMENTS: In-hospital mortality, arrhythmia, pneumonia, use of brain imaging (surrogate for oversedation and neurological events), and length of stay after drug initiation.

RESULTS: In the propensity score-matched cohort, median treatment duration was 3 days (interquartile range (IQR) 1-6 days) for atypical APMs and 2 days (IQR 1-3 days) for typical APMs. There were no large differences in in-hospital mortality (atypical 5.4%, typical 5.3%; risk difference (RD) = 0.1%, 95% confidence interval (CI) = -2.1 to 2.3%), arrhythmia (2.0% vs 2.2%; RD = 0.0%; 95% CI = -1.4 to 1.4%), pneumonia (16.1% vs 14.5%; RD = 1.6%, 95% CI = -1.9 to 5.0%), and length of stay (9.9 days vs 9.3 days; mean difference = 0.5 days, 95% CI = -1.2 to 2.2). Use of brain imaging was more common after initiating atypical APMs (17.3%) than after typical APMs (12.4%; RD = 4.9%, 95% CI = 1.4-8.4).

CONCLUSION: In hospitalized individuals who underwent cardiac surgery, short-term use of typical APMs was associated with risks of adverse events similar to those with atypical APMs. Moreover, greater use of brain imaging associated with atypical APMs suggests that these drugs may cause oversedation or adverse neurological events. Because of the low event rates, the analysis could not exclude modest differences in adverse events between atypical and typical APMs.

Schonberg MA, Li V, Marcantonio ER, Davis RB, McCarthy EP. Predicting Mortality up to 14 Years Among Community-Dwelling Adults Aged 65 and Older. Journal of the American Geriatrics Society. 2017;65(6):1310-1315. doi:10.1111/jgs.14805

OBJECTIVES: Extended validation of an index predicting mortality among community-dwelling US older adults.

DESIGN/SETTING: Examination of the performance of a previously developed index in predicting 10- and 14-year mortality among respondents to the 1997-2000 National Health Interview Surveys (NHIS) using the original development and validation cohorts. Follow-up mortality data are now available through 2011.

PARTICIPANTS: 16,063 respondents from the original development cohort and 8,027 respondents from the original validation cohort. All participants were community dwelling and ≥65 years old.

MEASUREMENTS: We calculated risk scores for each respondent based on the presence or absence of 11 factors (function, illnesses, behaviors, demographics) that make up the index. Using the Kaplan-Meier method, we computed 10- and 14-year mortality estimates for the development and validation cohorts to examine model calibration. We examined model discrimination using the c-index.

RESULTS: Participants in the development and validation cohorts were similar. Participants with risk scores of 0-4 had a 23% risk of 14-year mortality, whereas those with risk scores of 13 or higher had an 89% risk of 14-year mortality. The c-index of the model in both cohorts was 0.73 for predicting 10-year mortality and 0.72 for predicting 14-year mortality. Overall, 18.4% of adults aged 65-74 and 60.2% of adults aged ≥75 had a >50% risk of mortality in 10 years.

CONCLUSIONS: Our index demonstrated excellent calibration and discrimination in predicting 10- and 14-year mortality among community-dwelling US adults ≥65 years. Information on long-term prognosis is needed to help clinicians and older adults make more informed person-centered medical decisions and to help older adults plan for the future.

Devore EE, Fong TG, Marcantonio ER, et al. Prediction of Long-term Cognitive Decline Following Postoperative Delirium in Older Adults. The journals of gerontology. Series A, Biological sciences and medical sciences. 2017;72(12):1697-1702. doi:10.1093/gerona/glx030

BACKGROUND: Increasing evidence suggests that postoperative delirium may result in long-term cognitive decline among older adults. Risk factors for such cognitive decline are unknown.

METHODS: We studied 126 older participants without delirium or dementia upon entering the Successful AGing After Elective Surgery (SAGES) study, who developed postoperative delirium and completed repeated cognitive assessments (up to 36 months of follow-up). Pre-surgical factors were assessed preoperatively and divided into nine groupings of related factors ("domains"). Delirium was evaluated at baseline and daily during hospitalization using the Confusion Assessment Method diagnostic algorithm, and cognitive function was assessed using a neuropsychological battery and the Informant Questionnaire for Cognitive Decline in the Elderly (IQCODE) at baseline and 6-month intervals over 3 years. Linear regression was used to examine associations between potential risk factors and rate of long-term cognitive decline over time. A domain-specific and then overall selection method based on adjusted R2 values was used to identify explanatory factors for the outcome.

RESULTS: The General Cognitive Performance (GCP) score (combining all neuropsychological test scores), IQCODE score, and living alone were significantly associated with long-term cognitive decline. GCP score explained the most variation in rate of cognitive decline (13%), and six additional factors (IQCODE score, cognitive independent activities of daily living impairment, living alone, cerebrovascular disease, Charlson comorbidity index score, and exhaustion level) in combination explained 32% of variation in this outcome.

CONCLUSIONS: Global cognitive performance was most strongly associated with long-term cognitive decline following delirium. Pre-surgical factors may substantially predict this outcome.

Kim DH, Lee J, Kim CA, et al. Evaluation of algorithms to identify delirium in administrative claims and drug utilization database. Pharmacoepidemiology and drug safety. 2017;26(8):945-953. doi:10.1002/pds.4226

PURPOSE: To evaluate the performance of delirium-identification algorithms in administrative claims and drug utilization data.

METHODS: We used data from a prospective study of 184 older adults who underwent aortic valve replacement at a single academic medical center to evaluate the following delirium-identification algorithms: (1) International Classification of Diseases (ICD) diagnosis codes for delirium; (2) antipsychotics use; (3) either ICD diagnosis codes or antipsychotics use; and (4) both ICD diagnosis codes and antipsychotics use. These algorithms were evaluated against a validated bedside assessment, the Confusion Assessment Method, and a validated delirium severity scale, the CAM-S.

RESULTS: Delirium occurred in 66 patients (36%), of whom 14 (21%) had hyperactive or mixed features and 15 (23%) had severe delirium. ICD diagnosis codes for delirium were present in 15 patients (8%). Antipsychotics were used in 13 patients (7%). ICD diagnosis codes alone and antipsychotics use alone had comparable sensitivity (18% vs. 18%) and specificity (98% vs. 99%). Defining delirium using either ICD diagnosis codes or antipsychotics use, sensitivity improved to 30% with little change in specificity (97%). This algorithm showed higher sensitivity for hyperactive or mixed delirium (64%) and severe delirium (73%). Requiring both ICD diagnosis codes and antipsychotics use resulted in perfect specificity but low sensitivity (6%).

CONCLUSION: Delirium-identification algorithms in claims data have low sensitivity and high specificity. Defining delirium using ICD diagnosis codes or antipsychotics use performs better than considering either type of information alone. This information should inform the design and interpretation of claims-based comparative effectiveness and safety research.

Wee CC, Jones DB, Apovian C, et al. Weight Loss After Bariatric Surgery: Do Clinical and Behavioral Factors Explain Racial Differences? Obesity surgery. 2017;27(11):2873-2884. doi:10.1007/s11695-017-2701-y

BACKGROUND: Prior studies have suggested less weight loss among African American compared to Caucasian patients; however, few studies have been able to simultaneously account for baseline differences in other demographic, clinical, or behavioral factors.

METHODS: We interviewed patients at two weight loss surgery (WLS) centers and conducted chart reviews before and after WLS. We compared weight loss post-WLS by race/ethnicity and examined baseline demographic, clinical (BMI, comorbidities, quality of life), and behavioral (eating behavior, physical activity level, alcohol intake) factors that might explain observed racial differences in weight loss at 1 and 2 years after WLS.

RESULTS: Of 537 participants who underwent either Roux-en-Y gastric bypass (54%) or gastric banding (46%), 85% completed 1-year follow-up and 73% completed 2-year follow-up. Patients lost a mean of 33.00% of initial weight at year 1 and 32.43% at year 2 after bypass and 16.07% and 17.56%, respectively, after banding. After adjustment for other demographic characteristics and type of surgery, African Americans lost an absolute 5.93 ± 1.49% less weight than Caucasian patients after bypass (p < 0.001) and 4.72 ± 1.96% less weight after banding. Of the other demographic, clinical, and behavioral factors considered, having diabetes and perceived difficulty making dietary changes at baseline were associated with less weight loss among gastric bypass patients, whereas having a diagnosis of anxiety disorder was associated with less weight loss among gastric banding patients. The association between race and weight loss did not substantially attenuate with additional adjustment for these clinical and behavioral factors, however.

CONCLUSION: African American patients lost significantly less weight than Caucasian patients. Racial differences could not be explained by baseline demographic, clinical, or behavioral characteristics we examined.

Vasunilashorn SM, Dillon ST, Inouye SK, et al. High C-Reactive Protein Predicts Delirium Incidence, Duration, and Feature Severity After Major Noncardiac Surgery. Journal of the American Geriatrics Society. 2017;65(8):e109-e116. doi:10.1111/jgs.14913

OBJECTIVES: To examine associations between the inflammatory marker C-reactive protein (CRP) measured preoperatively and on postoperative day 2 (POD2) and delirium incidence, duration, and feature severity.

DESIGN: Prospective cohort study.

SETTING: Two academic medical centers.

PARTICIPANTS: Adults aged 70 and older undergoing major noncardiac surgery (N = 560).

MEASUREMENTS: Plasma CRP was measured using enzyme-linked immunosorbent assay. Delirium was assessed from Confusion Assessment Method (CAM) interviews and chart review. Delirium duration was measured according to number of hospital days with delirium. Delirium feature severity was defined as the sum of CAM-Severity (CAM-S) scores on all postoperative hospital days. Generalized linear models were used to examine independent associations between CRP (preoperatively and POD2 separately) and delirium incidence, duration, and feature severity; prolonged hospital length of stay (LOS, >5 days); and discharge disposition.

RESULTS: Postoperative delirium occurred in 24% of participants, 12% had 2 or more delirium days, and the mean ± standard deviation sum CAM-S was 9.3 ± 11.4. After adjusting for age, sex, surgery type, anesthesia route, medical comorbidities, and postoperative infectious complications, participants with preoperative CRP of 3 mg/L or greater had a risk of delirium that was 1.5 times as great (95% confidence interval (CI) = 1.1-2.1) as that of those with CRP less than 3 mg/L, 0.4 more delirium days (P < .001), more-severe delirium (3.6 CAM-S points higher, P < .001), and a risk of prolonged LOS that was 1.4 times as great (95% CI = 1.1-1.8). Using POD2 CRP, participants in the highest quartile (≥235.73 mg/L) were 1.5 times as likely to develop delirium (95% CI = 1.0-2.4) as those in the lowest quartile (≤127.53 mg/L), had 0.2 more delirium days (P < .05), and had more severe delirium (4.5 CAM-S points higher, P < .001).

CONCLUSION: High preoperative and POD2 CRP were independently associated with delirium incidence, duration, and feature severity. CRP may be useful to identify individuals who are at risk of developing delirium.

Ngo LH, Inouye SK, Jones RN, et al. Methodologic considerations in the design and analysis of nested case-control studies: association between cytokines and postoperative delirium. BMC medical research methodology. 2017;17(1):88. doi:10.1186/s12874-017-0359-8

BACKGROUND: The nested case-control study (NCC) design within a prospective cohort study is used when outcome data are available for all subjects, but the exposure of interest has not been collected, and is difficult or prohibitively expensive to obtain for all subjects. A NCC analysis with good matching procedures yields estimates that are as efficient and unbiased as estimates from the full cohort study. We present methodological considerations in a matched NCC design and analysis, which include the choice of match algorithms, analysis methods to evaluate the association of exposures of interest with outcomes, and consideration of overmatching.

METHODS: Matched NCC design within a longitudinal observational prospective cohort study in the setting of two academic hospitals. Study participants are patients aged over 70 years who underwent scheduled major non-cardiac surgery. The primary outcome was postoperative delirium, ascertained from in-hospital interviews and medical record review. The main exposure was IL-6 concentration (pg/ml) from blood sampled at three time points before delirium occurred. We used a nonparametric signed-rank test to test for the median of the paired differences. We used conditional logistic regression to model the association of IL-6 with delirium incidence. Simulation was used to generate a sample of cohort data on which unconditional multivariable logistic regression was used, and the results were compared to those of the conditional logistic regression. Partial R-square was used to assess the level of overmatching.

RESULTS: We found that the optimal match algorithm yielded more matched pairs than the greedy algorithm. The choice of analytic strategy (whether to consider measured cytokine levels as the predictor or the outcome) yielded inferences that have different clinical interpretations but similar levels of statistical significance. Estimation results from the NCC design using conditional logistic regression and from the simulated cohort design using unconditional logistic regression were similar. We found minimal evidence of overmatching.

CONCLUSIONS: Using a matched NCC approach introduces methodological challenges into the study design and data analysis. Nonetheless, with careful selection of the match algorithm, match factors, and analysis methods, this design is cost effective and, for our study, yields estimates that are similar to those from a prospective cohort study design.

Graham KL, Dike O, Doctoroff L, et al. Preventability of early vs. late readmissions in an academic medical center. PloS one. 2017;12(6):e0178718. doi:10.1371/journal.pone.0178718

BACKGROUND: It is unclear if the 30-day unplanned hospital readmission rate is a plausible accountability metric.

OBJECTIVE: Compare preventability of hospital readmissions between an early period [0-7 days post-discharge] and a late period [8-30 days post-discharge]. Compare causes of readmission and frequency of markers of clinical instability 24h prior to discharge between early and late readmissions.

DESIGN, SETTING, PATIENTS: 120 patient readmissions in an academic medical center between 1/1/2009 and 12/31/2010.

MEASURES: Sum-score based on a standard algorithm that assesses preventability of each readmission based on blinded hospitalist review; average causation score for seven types of adverse events; rates of markers of clinical instability within 24h prior to discharge.

RESULTS: Readmissions were significantly more preventable in the early compared to the late period [median preventability sum score 8.5 vs. 8.0, p = 0.03]. There were significantly more management errors as causative events for the readmission in the early compared to the late period [mean causation score (scale 1-6, 6 = most causal) 2.0 vs. 1.5, p = 0.04], and these errors were significantly more preventable in the early compared to the late period [mean preventability score 1.9 vs. 1.5, p = 0.03]. Patients readmitted in the early period were significantly more likely to have mental status changes documented 24h prior to hospital discharge than patients readmitted in the late period [12% vs. 0%, p = 0.01].

CONCLUSIONS: Readmissions occurring in the early period were significantly more preventable. Early readmissions were associated with more management errors, and mental status changes 24h prior to discharge. Seven-day readmissions may be a better accountability measure.