CONFERENCE ABSTRACTS AND REPORTS
Year : 2018  |  Volume : 4  |  Issue : 2  |  Page : 189-208

The 2018 St. Luke's University Health Network Annual Research Symposium: Event highlights and scientific abstracts


Date of Web Publication: 30-Aug-2018


Source of Support: None, Conflict of Interest: None


DOI: 10.4103/IJAM.IJAM_27_18


How to cite this article:
The 2018 St. Luke's University Health Network Annual Research Symposium: Event highlights and scientific abstracts. Int J Acad Med 2018;4:189-208

How to cite this URL:
The 2018 St. Luke's University Health Network Annual Research Symposium: Event highlights and scientific abstracts. Int J Acad Med [serial online] 2018 [cited 2018 Dec 11];4:189-208. Available from: http://www.ijam-web.org/text.asp?2018/4/2/189/240132

Guest Editor

Jill C. Stoltzfus

Department of Research and Innovation, The Research Institute, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Background Information and Event Highlights: The Annual St. Luke's University Health Network (SLUHN) Research Symposium was created in 1992 to showcase research and quality improvement projects by residents and fellows. The Network Research Institute Director is responsible for planning and organizing the event, with collaboration and consultation provided by Graduate Medical Education leadership, residency and fellowship faculty, and the Director of Media Production Services. Residents and fellows submit applications for oral and/or poster presentations along with an accompanying abstract describing each project. Three to four physician judges are selected to evaluate the presentations for the first- and second-place cash prizes awarded in both oral and poster presentation categories.

The 2018 Research Symposium winners are as follows:

  1. Oral presentations:
    • First place: Michael Ting, MD (Minimally Invasive Gynecology Surgery Fellowship), “Does Cranberry Extract Supplementation Change the Incidence of Urinary Tract Infections Following Pelvic Reconstructive Surgery?”
    • Second place: Ajith Malige, MD (Orthopedic Surgery Residency), “The Clinical Utility of Maceration Dressings in the Treatment of Hand Infections: An Evaluation of Treatment Outcomes.”
  2. Poster presentations:
    • First place: Shane McGowan, MD (Orthopedic Surgery Residency), “A Double Blind Randomized Controlled Equivalence Study Comparing Intra-Articular Corticosteroid to Intra-Articular Ketorolac Injections for Osteoarthritis of the Knee”
    • Second place (tie): Adam Kobialka, DO, and Peter Murphy, DO (Sports Medicine Fellowship), “Sonographic Assessment of Optic Nerve for Evaluation of Sports Associated Concussion”
    • Second place: Hemlata Singh, MD, and Emelia Perez, MD (Geriatric Fellowship), “Impact of Having a Multidisciplinary Team in Nursing Homes to Improve Psychotropic Medication Prescribing Practices.”


As in the previous 2 years, the 2018 Research Symposium for Residents and Fellows included a keynote speaker. This year's distinguished keynote speaker was Vicente H. Gracias, MD, Senior Vice Chancellor for Clinical Affairs, Rutgers Biomedical and Health Sciences, New Brunswick, New Jersey. Dr. Gracias' keynote presentation addressed the “virtuous cycle” of building an institutional research infrastructure, including conceptualizing research as an active verb versus a static state of being; identifying currently successful institutions to serve as benchmarks for one's own research initiatives; and developing a concrete timeline for obtaining funding, with specific areas of focus. Dr. Gracias strongly encouraged audience members to determine where St. Luke's wants to excel as an organization and to then focus its research talents and energies in those areas, with the goal of becoming a center of excellence in strategically important subjects and fields.

Activities extended into the early afternoon session, with additional presentations by featured speakers from SLUHN nursing and physician faculty. Dr. Dan Ackerman gave an insightful presentation focusing on quality improvement projects in neurosciences, including helpful information on designing quality improvement initiatives with the intent of subsequently converting them into valuable and clinically relevant research projects.

Dr. Bonnie Coyle and Dr. Rajika Reed followed with an interim report on a network-wide community health project involving colorectal cancer screening. This complex undertaking involves multiple participants from various departments and network locations. Early results are very promising, showing evidence of improved rates of screening – a key outcome of the project.

Dr. Stephen Kareha presented important new research developments in the area of physical therapy, focusing on comprehensive and multidisciplinary approaches to important clinical issues such as chronic pain and functional outcomes. Dr. Jamshid Shirani then followed with an update on the latest developments in cardiology research at St. Luke's University Hospital, including novel projects in the area of advanced cardiac sonography. The final presentation of the day was given by Candida Ramos, RN, BSN, CCRN, and included late-breaking updates in key areas of clinical quality improvement across the network's critical care units.

In the subsequent sections, abstracts from projects presented at the 2018 Annual SLUHN Research Symposium will be listed, beginning with oral presentations and ending with scientific posters. Note that some of the projects could not be published due to pending submissions to national and international scientific meetings.

The following core competencies are addressed in this article: Practice-based learning and improvement, medical knowledge, patient care, systems-based practice.

Oral Presentation Abstracts


  Abstract Number 1


Colonic Diverticular Disease in Polycystic Kidney Disease: Is There Really an Association?

Rodrigo Chavez, Marcela Perez-Acosta, Jill Stoltzfus, Cara Ruggeri, Vikas Yellapu, Ayaz Matin, Kimberly Chaput, Noel Martins, Berhanu Geme

Internal Medicine Residency, Gastroenterology Fellowship, and Post-Doctoral Fellowship, St. Luke's University Health Network, Bethlehem, PA, USA

Background: Autosomal dominant polycystic kidney disease (ADPKD) is caused by mutations in polycystin and is the most common inherited renal cystic disease. Diverticulosis is reported to be more common in patients with ADPKD. Alterations in polycystin function are believed to enhance the smooth-muscle dysfunction associated with diverticulosis. However, most of these associations were initially found in small studies in the 1970s and 1980s. Other studies have found no association or are inconclusive. Our study sought to further clarify the association between ADPKD and diverticulosis.

Methodology and Statistical Approach: After Institutional Review Board exemption was granted, we retrospectively reviewed the National Inpatient Sample database from 2003 to 2011. Abstracted data were obtained using the International Classification of Diseases, Ninth Revision codes and included diverticulosis, diverticulitis, kidney transplant (KT), ADPKD, constipation, smoking, and use of steroids. We conducted Chi-square tests and calculated unadjusted odds ratios (OR) using 2 × 2 tables, followed by multivariate logistic regression modeling to adjust for potential confounders.
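
As a concrete illustration of this approach, below is a minimal Python sketch (not the authors' actual code) of a Chi-square test and an unadjusted OR with a Wald 95% CI computed from a 2 × 2 table; the counts are hypothetical placeholders, not study data.

  import numpy as np
  from scipy.stats import chi2_contingency

  # Hypothetical 2 x 2 table: rows = ADPKD yes/no, columns = diverticulosis yes/no
  table = np.array([[120, 2600],
                    [2300, 97000]])

  chi2, p, dof, expected = chi2_contingency(table)

  # Unadjusted odds ratio with a Wald 95% confidence interval
  a, b = table[0]
  c, d = table[1]
  odds_ratio = (a * d) / (b * c)
  se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
  ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
  print(f"chi2 = {chi2:.1f}, p = {p:.3g}, OR = {odds_ratio:.2f}, "
        f"95% CI: {ci[0]:.2f}-{ci[1]:.2f}")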

Results: In our database, the prevalence of diverticulosis was 2.3%, and 35% of patients with diverticulosis developed diverticulitis, for a prevalence of 0.8% in the general population. In patients with ADPKD, the prevalence of diverticulosis increased to 4.4% when compared to the general population (unadjusted OR = 1.9, 95% confidence interval [CI]: 1.95–2.02, P < 0.0001), while the prevalence of diverticulitis increased to 1.4% (unadjusted OR = 1.78, 95% CI: 1.72–1.83, P < 0.0001). [Table 1] presents these findings. Since many patients with ADPKD require KT, we also evaluated diverticular disease in patients with KT. Both diverticulosis and diverticulitis were more prevalent in patients with ADPKD (4.1% for diverticulosis, unadjusted OR = 2.47, 95% CI: 2.26–2.71, P < 0.0001; and 1.2% for diverticulitis, unadjusted OR = 2.80, 95% CI: 2.38–3.30, P < 0.0001). [Table 2] presents these findings. In multivariate logistic regression analysis, ADPKD had the second highest adjusted OR for diverticulosis (1.9, 95% CI: 1.88–1.95), with constipation having the highest adjusted OR of 2.4 (95% CI: 2.40–2.42), although the Hosmer and Lemeshow test indicated poor model fit.





Discussion and Conclusion: This is the largest study demonstrating an increased prevalence of diverticulosis in patients with ADPKD. Since the presence of diverticulosis is the sine qua non condition for diverticulitis, it is important to be aware of the high prevalence of diverticulosis in patients with ADPKD. In particular, diverticulitis progressing to perforation in patients with KT can have devastating consequences. Limitations of this study include its retrospective nature and the relatively poor model fit in the multivariate logistic regression analysis.


  Abstract Number 2


Identification of Seniors at Risk Score in Geriatric Trauma: A Pilot Study

Stephen Dingley, Christine Ramirez, Holly Weber, Rebecca Wilde-Onia, Ann-Marie Szoke, Adam Benton, Danielle Bennett, Alaa-Eldin Mira, Alyssa Green, Stanislaw Stawicki

General Surgery Residency, Geriatric Medicine Fellowship, and Post-Doctoral Research Fellowship, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: The “identification of seniors at risk” (ISAR) score is a clinical screening tool developed for use in the emergency department. Since its introduction, ISAR has been shown to correlate with elevated 6-month risk of death, functional decline, need for discharge to a nursing facility, and readmissions. The purpose of the current study was to explore the applicability of ISAR to elderly (age ≥65 years) trauma patients. We hypothesized that increasing ISAR scores would correlate with 30-day mortality, discharge destination, and patients' functional outcomes at our Level 1 trauma center (L1TC).

Methodology and Statistical Approach: In this retrospective study, we analyzed clinical data for all geriatric patients who presented to our L1TC and underwent complete ISAR screening between August 2013 and December 2017. In addition to the ISAR score and 30-day mortality (our primary outcome), abstracted variables included patient demographics (age and gender), mechanism of injury, injury severity score (ISS), Glasgow Coma Scale (GCS), hospital/intensive care/step-down lengths of stay, all-cause morbidity, functional independence measures (FIMs) on discharge, and discharge to facility. ISAR was stratified into 4 tiers: 0 = “lowest,” 1–2 = “mild,” 3–4 = “moderate,” and 5+ = “highest.” Estimates of ISAR's effects on outcome variables were adjusted for age, gender, GCS, and ISS using analysis of covariance. Data were reported as frequencies, means with standard deviation, or medians with interquartile ranges (IQRs), as appropriate. Statistical significance was set at α = 0.05.
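
For illustration, here is a minimal Python sketch of an analysis of covariance of this kind using statsmodels on synthetic data; the column names (e.g., isar_tier, fim_discharge) are illustrative assumptions, not the registry's actual fields.

  import numpy as np
  import pandas as pd
  import statsmodels.api as sm
  import statsmodels.formula.api as smf

  # Synthetic stand-in for the trauma registry extract
  rng = np.random.default_rng(0)
  n = 200
  df = pd.DataFrame({
      "isar_tier": rng.choice(["lowest", "mild", "moderate", "highest"], n),
      "age": rng.normal(80, 7, n),
      "gender": rng.choice(["F", "M"], n),
      "gcs": rng.integers(3, 16, n),
      "iss": rng.integers(1, 30, n),
  })
  df["fim_discharge"] = 90 - 5 * (df["isar_tier"] == "highest") + rng.normal(0, 10, n)

  # ANCOVA: ISAR tier effect on discharge FIM, adjusted for age, gender, GCS, and ISS
  model = smf.ols("fim_discharge ~ C(isar_tier) + age + C(gender) + gcs + iss",
                  data=df).fit()
  print(sm.stats.anova_lm(model, typ=2))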

Results: A total of 1030 patients met our inclusion criteria. Median patient age was 80.1 (IQR: 74–92) years, with 58% being female, 99% having a blunt mechanism of injury, and a median ISS of 9 (IQR: 4–10). Overall, increasing ISAR scores were associated with greater 30-day mortality, substantially increased all-cause morbidity, longer hospital and Intensive Care Unit (ICU) stays, lower FIM scores on discharge, and a decreased percentage of patients discharged to home (all, P < 0.002, [Table 1]).



Discussion and Conclusion: This is the first large-sample descriptive study examining the relationship between ISAR score and outcomes in geriatric trauma patients. We found that increasing ISAR was associated with greater mortality and morbidity, longer hospital/ICU stays, lower functional outcome at discharge, and a decreasing proportion of patients discharged directly to home. Further investigation of ISAR's utility in the setting of geriatric trauma is warranted.


  Abstract Number 3


Mortality Predictors in Trauma: A Single-Institution Comparison Study Using a Large Sample of Injured Patients

Ashley Jordan, William Terzian, Thomas Wojda, Marissa Cohen, Joshua Luster, Jacqueline Seoane, Philip Salen, Holly Stankewicz, Elizabeth McCarthy, Stanislaw Stawicki

General Surgery Residency, Emergency Medicine Residency, and Post-Doctoral Research Fellowship, St. Luke's University Health Network, Bethlehem, PA, and St. Luke's Warren Family Medicine Residency, Phillipsburg, NJ, USA

Introduction/Background: Mortality prediction in trauma is challenging. Unexpected deaths continue despite improved understanding of pathophysiology and management of trauma-related shock. Several laboratory variables have been evaluated for their ability to quantitate mortality risk in injured patients. Popular indicators of physiologic stress include serum bicarbonate, anion gap (AG), base deficit (BD), and lactate. The aim of this study was to compare the utility and predictive value of each of these variables in a large subset of trauma patients.

Methodology and Statistical Approach: After Institutional Review Board approval, we queried patient records from our Level 1 trauma center registry. Variables included patient sex, age, injury severity score (ISS), Glasgow Coma Scale, mortality, and initial laboratory assessments. Our primary outcome was 30-day mortality, and we analyzed the impact of stratified AG (≤3, 6, 9, etc.), BD (≥16, 12, 8, etc.), serum bicarbonate (≤10, 14, 18, etc.), and lactate (≤1, 2, 3, etc.) on 30-day mortality. In addition, we assessed the ability of these variables to predict mortality using receiver operating characteristic curves and employing DeLong methodology. Data were reported as mean ± standard deviation or median with interquartile range (IQR). Area under the curve (AUC) values were reported as area ± standard error (SE). Statistical significance was set at α = 0.01.
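
As a sketch of the AUC portion of this analysis (a formal DeLong comparison of correlated AUCs requires a dedicated implementation and is not shown here), the following Python snippet computes AUCs for two synthetic predictors with scikit-learn; the numbers are illustrative, not study data.

  import numpy as np
  from sklearn.metrics import roc_auc_score

  rng = np.random.default_rng(1)
  n = 1000
  died = rng.random(n) < 0.05                     # synthetic 30-day mortality

  # Hypothetical initial labs, shifted upward in non-survivors
  lactate = rng.normal(2.0, 1.0, n) + 1.5 * died
  base_deficit = rng.normal(1.0, 4.0, n) + 3.0 * died

  print("lactate AUC:", round(roc_auc_score(died, lactate), 2))
  print("base deficit AUC:", round(roc_auc_score(died, base_deficit), 2))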

Results: Our study included 2811 patients, of whom 70% were male with a median age of 44 years (IQR: 26–58 years). Median ISS was 9 (IQR: 4–16), with an overall mortality rate of 5%. Descriptive characteristics of laboratory values are as follows: mean serum lactate = 2.83 ± 2.51 (n = 371), mean BD = 1.27 ± 5.01 (n = 1167), mean serum bicarbonate = 24.80 ± 5.29 (n = 2165), and mean AG = 11.20 ± 6.80 (n = 2128). Mortality increased significantly with escalating physiologic stress; serum lactate was the best predictor of mortality (AUC = 0.75 ± 0.04 SE), followed by BD (0.72 ± 0.03), serum bicarbonate (0.68 ± 0.03), and AG (0.66 ± 0.03). A summary of study results, listed by resuscitation end-point category, is shown in [Figure 1].



Discussion and Conclusion: All of the variables examined demonstrated predictive value for trauma-related mortality; however, initial serum lactate and BD were superior to serum bicarbonate and AG. Our data indicate that initial serum lactates ≥3 are associated with doubling of mortality, while lactates ≥7 more than quadruple baseline mortality. For BD, mortality increased from <5% for BD <4 to >40% for BD >16. Trauma practitioners should consider serum lactate and BD as the primary assessment options for mortality risk estimation.


  Abstract Number 4


Primary Care Movement System Screen: A Multisite, Observational Study

Christine Kettle, Stephen Kareha, Jenna Cornell, Neeraj Khiyani

Orthopedic Physical Therapy Residency, St. Luke's University Health Network, Easton, PA, USA

Introduction/Background: Movement of the human body is essential for allowing individuals to interact with their environment. The prevalence of movement system disorders is currently unknown, and there are no screening methods to appropriately detect these disorders. An easy-to-use and reliable screening tool would facilitate earlier initiation of treatment, which could prevent the transition of acute movement system disorders to chronic conditions and subsequently improve patient care and reduce unnecessary medical expenses.

The primary aim of our study was to evaluate whether a screening tool used in the primary care setting could accurately identify patients with movement system disorders. The secondary aim was to understand why people do not discuss these problems with their primary care physician.

Methodology and Statistical Approach: We developed a screening tool to assess whether patients with movement system disorders could be accurately identified in the primary care setting, using data from a previously conducted pilot study that identified the prevalence of movement system disorders in the primary care setting. Based on these data, we required a minimum sample size of 998 to achieve power (1–β) of 80% at α = 0.05. Patients in selected primary care offices were asked to fill out a brief survey, and responses were then analyzed to identify the prevalence of movement system disorders. We entered and stored data in REDCap, a secure data capture system. We conducted multivariate logistic regression to explore the relationship of comorbidities with movement system disorders. We also applied a qualitative approach utilizing thematic analysis to determine why patients do not discuss these issues with their primary care provider.
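
For reference, an a priori sample size calculation of this general form can be sketched in Python with statsmodels; the two proportions below are assumptions for illustration only, not the pilot-study estimates the authors used.

  from statsmodels.stats.power import NormalIndPower
  from statsmodels.stats.proportion import proportion_effectsize

  # Assumed proportions for illustration only
  effect = proportion_effectsize(0.70, 0.76)

  n_per_group = NormalIndPower().solve_power(effect_size=effect,
                                             alpha=0.05, power=0.80)
  print(f"required n per group: {n_per_group:.0f}")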

Results: A total of 385 participants were included in our study (mean age ± standard deviation = 49.8 ± 16.5 years), with 75.9% of patients having a movement system disorder. Sensitivity of our screening question was 0.72. From a subsample of 104 participants with complete data for comorbidities, multivariate logistic regression revealed a trend toward significant prediction of movement system disorders for gastroesophageal reflux disease (P = 0.08). Finally, based on our thematic analysis of 102 participants who did not discuss this issue with their primary care physician, the three most prevalent themes were reduced access to health care, perceived lack of importance of the problem, and the fact that it was a new condition.

Discussion and Conclusion: Our screening question's sensitivity for movement systems disorders was acceptable. If physicians are able to identify predictors and factors associated with movement system disorders, they may be able to screen for this outcome more effectively. In addition, earlier identification of movement system disorders may facilitate earlier treatment and therefore prevent costs associated with resulting chronic disorders.


  Abstract Number 5


Does Cranberry Extract Supplementation Change the Incidence of Urinary Tract Infections Following Pelvic Reconstructive Surgery?

Michael Ting, Andrew Brown, Vincent Lucente

Minimally Invasive Gynecology Surgery Fellowship, St. Luke's University Health Network, Allentown, PA, USA

Introduction/Background: Urinary tract infections (UTIs) are common complications after gynecologic surgery. Among patients undergoing pelvic floor and anti-incontinence surgery, 10%–64% develop a postoperative UTI. Cranberry extract has been shown in previous studies to decrease the risk of UTIs after general gynecologic surgery. We sought to determine if supplementation with cranberry extract in the immediate postoperative period reduced the risk of UTI in patients undergoing pelvic reconstructive and anti-incontinence surgery.

Methodology and Statistical Approach: This was a retrospective cohort study approved by our Institutional Review Board. The study population included all patients who underwent anti-incontinence surgery and pelvic reconstructive surgery with or without a concomitant mid-urethral sling during two 6-month intervals. Two patient groups were compared; the first group received standard postoperative instructions, while the second group was asked to take 6 weeks of postoperative cranberry supplementation in addition to receiving standard of care. All outpatient and hospital charts were retrospectively reviewed for demographics, perioperative data, and urine cultures up to 6 weeks postoperatively. A UTI was defined as a positive urine culture or urinary symptoms treated with antibiotics after evaluation by a physician. We compared the two cohorts on an intention-to-treat basis using Chi-square tests.

Results: A total of 560 patients were evaluated; 287 patients received postoperative cranberry extract supplementation and 273 received only postoperative standard of care. The two cohorts were similar regarding body mass index, menopausal status, race, history of recurrent UTIs, procedures performed, surgical complications, and postoperative urinary retention requiring the placement of an indwelling catheter. There were statistically significant but minor differences between the supplementation and nonsupplementation groups in age (63.2 vs. 65.2 years, respectively, P = 0.05) and self-reported sexual activity (49.1% vs. 40.3%, respectively, P = 0.04). However, the incidence of postoperative UTI was not significantly different between groups (8.4% vs. 7.7%, respectively, P = 0.77).

Discussion and Conclusion: Postoperative supplementation with cranberry extract did not reduce the incidence of UTIs after anti-incontinence and pelvic reconstructive surgery. This finding is contrary to more recent literature suggesting a benefit of cranberry supplementation after general gynecologic surgery. We attribute our low postoperative UTI rate to the utilization of a chlorhexidine vaginal prep before surgery and an aggressive bladder retraining protocol to prevent unnecessary catheterizations. Although cranberry extract may provide some protection from the development of UTIs, a larger sample size may be needed to detect a potential benefit in our patient population.


  Poster Presentation Abstracts: Abstract Number 1


The Impact of Continuity on Breastfeeding Rates

Brianne Allerton, Diane Jacobetz, Andrew Goodbred, Brittany Kuperavage, Yamini Kathari

Family Medicine Residency, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: There are numerous factors that influence a mother's decision to breastfeed her newborn, including her physician's opinion, information provided prenatally about breastfeeding and formula feeding, family support, and employment considerations. Physicians can influence a mother's decision to breastfeed or formula feed by starting the conversation about the benefits of breastfeeding early in the pregnancy. This relationship is well developed between family medicine physicians and their patients and may result in higher rates of breastfeeding among their patients than among women who receive prenatal care from obstetricians with subsequent follow-up of the infant by a different physician. Our study sought to clarify differences in breastfeeding rates based on these considerations.

Methodology and Statistical Approach: This retrospective chart review included mothers who received their prenatal care at the St. Luke's Family Medicine Center or in the Obstetrics Department of the St. Luke's University Health Network. Data were collected using REDCap and analyzed with separate Chi-square tests at each of the following time points: 2 weeks, 6 weeks, 6 months, and 1 year.

Results: Breastfeeding rates in patients seen by family physicians were 71.4% at 2 weeks, 63.6% at 6 weeks, 53.5% at 6 months, and 56.4% at 1 year. Breastfeeding rates in patients seen by obstetricians were 29.6% at 2 weeks, 14.8% at 6 weeks, 14.6% at 6 months, and 13.2% at 1 year. The above differences between patients seen by family physicians versus obstetricians were statistically significant (P < 0.0001).

Discussion and Conclusion: Our study findings will be helpful in developing better methods of encouraging mothers to breastfeed. Since our study was conducted before implementation of the St. Luke's Baby and Me initiative, it would be valuable to assess rates of breastfeeding after this program's implementation. St. Luke's Baby and Me promotes breastfeeding, rooming-in and skin-to-skin contact through education, teaching, and support for mothers, newborns, and families. While similar to the Baby-Friendly Hospital designation, it includes modifications that allow families to make informed decisions that are respected and supported by their physicians and health-care team.


  Abstract Number 2


Maternity Complications in Women with Hypertrophic Cardiomyopathy

Rasha Aurshiya, Amitoj Singh, Srilakshmi Vallabhaneni, Afsha Aurshina, Jamshid Shirani

Cardiology Fellowship, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: There is an increased risk of death and adverse cardiovascular outcomes in women with hypertrophic cardiomyopathy (HCM) during childbirth. However, data are scarce and limited to a small number of patients reported from large tertiary care centers. We aimed to examine the maternal cardiovascular and obstetric outcomes of childbirth in women with HCM in the United States.

Methodology and Statistical Approach: Our retrospective study population consisted of 422 mothers with HCM (age 29 ± 6 years, 53% Caucasian) admitted to a hospital for childbirth from 2003 to 2011 according to the Nationwide Inpatient Sample database. We analyzed these data descriptively.

Results: Basic demographics of the study sample are shown in [Table 1]. In 58% of mothers, the mode of delivery was a cesarean section (CS), and mean length of stay was 5 ± 9 days. No maternal mortality was reported, and serious cardiovascular complications were uncommon, including cardiac arrest (n = 5 [1.1%]), cardiogenic shock (n = 5 [1.1%]), and ventricular tachycardia (n = 20 [4.6%]). Cardiopulmonary resuscitation, mechanical circulatory support (other than balloon counterpulsation), and temporary venous pacemakers were each required in 5 women (1.1%), while 14 (3.4%) needed mechanical ventilation. Acute respiratory distress syndrome, deep vein thrombosis, and acute renal failure were each reported in 5 women (1.1%). Obstetric complications included placental abruption (n = 28 [7%]), preterm labor (n = 89 [21%]), premature rupture of membranes (n = 20 [5%]), preeclampsia/eclampsia (n = 19 [4.5%]), and gestational hypertension (n = 10 [2.3%]). Postpartum hemorrhage occurred in 3.3% of patients, and maternal blood transfusion was needed in 5.8% of patients. Labor was obstructed in 2.3% of patients, and 32% of vaginal deliveries required instrument assistance. Although there was no fetal mortality, fetal distress, abnormal fetal heart rate, and fetal growth retardation occurred in 15%, 14%, and 3.6%, respectively. Overall, 23% and 40% of patients suffered at least one adverse cardiovascular or obstetric complication, respectively. [Table 2] and [Table 3] provide detailed information regarding complications, procedures, and other key study outcomes.







Discussion and Conclusion: The predominant mode of delivery for pregnant HCM patients in the United States has been CS. Although maternal and fetal mortality are remarkably low in these patients, obstetric complications occur in ~40% of patients. In contrast, the rate of maternal cardiovascular complications is relatively low. Large multicenter trials or registries must be initiated to further clarify these findings.


  Abstract Number 3


Syncope Management and Cost Analysis

Patrick Callaghan, Matthew Carey, Hesham Tayel, Hussam Tayel, Vikas Yellapu, Nora Ko, Cara Ruggeri, Justin Psaila

Internal Medicine Residency and Post-Doctoral Research Fellowship, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: Syncope accounts for nearly 1%–3% of Emergency Department (ED) visits and nearly 6% of all inpatient admissions. Lifetime prevalence of syncope in the general population is close to 20%. The presenting symptoms are frequently vague, leading to extensive testing with low diagnostic yield and significant costs. Costs incurred by syncope admissions have been estimated to be as high as $2.4 billion per year. This quality improvement project aimed to review the cost of evaluation for patients admitted to St. Luke's Bethlehem University Hospital with a diagnosis of syncope. Ultimately, these data will be used to improve resource utilization in diagnosing and treating syncope.

Methodology and Statistical Approach: This was a retrospective cohort study of data from 2015 to 2017. Using electronic medical records from EPIC, we identified patients who were admitted to the observation department from the ED with syncope as the primary diagnosis. Using billing and procedure codes, we identified the diagnostic tests performed for each admission. Using the San Francisco Syncope Rule, patients were stratified into high- and low-risk categories. Patients were considered high risk (score ≥1) if they had any of the following: shortness of breath, congestive heart failure, electrocardiogram abnormalities at admission, or systolic blood pressure <90 mmHg at admission. Otherwise, patients were considered low risk (score = 0); this rule has a negative predictive value of 99%, meaning low-risk patients do not require further diagnostic assessment. We used Chi-square analysis to identify associations between patient groups (low risk vs. high risk) and diagnostic test results, and Student's t-test to identify differences in mean costs incurred during admission.
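
A minimal Python sketch of the stratification step as described above (using only the four criteria listed in this abstract; the published San Francisco Syncope Rule also scores hematocrit <30%, which is omitted here to mirror the text):

  from dataclasses import dataclass

  @dataclass
  class SyncopeVisit:
      shortness_of_breath: bool
      congestive_heart_failure: bool
      abnormal_ecg: bool
      systolic_bp_mmHg: float

  def risk_category(v: SyncopeVisit) -> str:
      # High risk if any criterion is present (score >= 1), otherwise low risk
      score = sum([
          v.shortness_of_breath,
          v.congestive_heart_failure,
          v.abnormal_ecg,
          v.systolic_bp_mmHg < 90,
      ])
      return "high" if score >= 1 else "low"

  print(risk_category(SyncopeVisit(False, False, True, 124.0)))  # -> high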

Results: We reviewed 197 cases of syncope (98 low risk and 99 high risk). We compared the number of echocardiograms, head computed tomography scans, telemetry monitoring, carotid ultrasounds, and magnetic resonance imaging studies received by patients in each group. Chi-square testing revealed no significant differences between low-risk and high-risk groups [Table 1].



Mean diagnostic costs were $2588 per patient in the high-risk category and $2160.14 in the low-risk group, and this difference was not statistically significant (P = 0.30).

Discussion and Conclusion: Our study revealed substantial unnecessary costs incurred by low-risk syncope patients, despite the lack of a statistically significant difference compared to high-risk patients. Educational programs and implementation of appropriate diagnostic order sets will help curb these costs and streamline syncope management.


  Abstract Number 4


Face to Name

Devyn Graham, Heather Krasa, Cara Ruggeri, Ben Veres

Internal Medicine Residency, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: With the growing size of health-care provider teams in academic teaching hospitals, patients often report difficulties identifying what role different providers play in their care, leading to confusion and frustration, as well as possibly decreased patient satisfaction. Studies have shown that identification tools such as labeled photograph cards at the bedside can assist patients in correctly identifying their physicians and can lead to higher patient satisfaction in certain cases. The aim of our quality improvement project was to increase patients' ability to identify their primary physicians, enhance patient satisfaction, and strengthen overall physician and patient communication on the inpatient internal medicine teaching service.

Methodology and Statistical Approach: We collected baseline data from July to November 2017, through a survey (English and Spanish) asking patients to rate the quality of communication by the primary team of physicians during their hospital stay. A team member administered the survey on the day of discharge for all patients who met inclusion criteria. Three of our survey questions were taken from the Hospital Consumer Assessment of Healthcare Providers and Systems survey. We introduced our intervention from January to April 2018. We gave laminated photograph cards to enrolled patients at the beginning of their stay. Team members' photographs and names as well as a daily message area were utilized each day by participating providers. Messages included the goals or upcoming diagnostic tests for that day, as well as a list of consulting specialist services. These messages were given in both English and Spanish, depending on the patient's native language. On daily rounds, the team leader circled the photograph of each team member, including the attending physician, resident, and intern who would be caring for the patient that day. The team leader also updated the daily goals. Identical discharge day surveys were given to patients, with the addition of one question.

Results: Anecdotally, a number of patients, their families, and nursing staff gave enthusiastically positive feedback about the face cards. [Table 1] and [Figure 1] and [Figure 2] present pre- and post-intervention survey results.







Discussion and Conclusion: Overall, the face cards improved patients' understanding of their care, improved their recognition of their primary team of physicians, and appeared to improve satisfaction in several areas. The face cards allowed patients to more easily put a face to the name of their physicians by giving them access to the cards throughout the day. Our intervention also allowed nursing staff to more rapidly identify the resident caring for the patient, enabling them to page that resident directly with questions or concerns. Despite these positive findings, our study was limited by the fact that only 13 patients received postintervention surveys. Challenges to obtaining a larger patient group included lost face cards, residents forgetting to update the cards on a daily basis, and postintervention surveys not being completed before discharge. In addition, it was difficult to integrate face card implementation into residents' daily practice. However, if face cards become a regular part of the workflow, they may prove very useful in strengthening physician–patient communication and ultimately impacting patient satisfaction. Therefore, the use of face cards will be taught to incoming interns at the beginning of their residency, with the goal of this practice becoming seamlessly integrated into their daily responsibilities.


  Abstract Number 5


Bone Markers in Charcot Neuroarthropathy

Brandy Grahn, Brent Bernstein

Podiatric Medicine and Surgery Residency, St. Luke's University Health Network, Allentown, PA, USA

Introduction/Background: Effective medical therapy for Charcot neuroarthropathy (CNA) has been based mostly on different biochemical markers that are used to gauge osteoclastic activity and thereby monitor the progression of CNA. Among these markers are deoxypyridinoline (DPD) crosslinks and bone-specific alkaline phosphatase (BSAP). Bone turnover markers have also been used to monitor the effectiveness of bisphosphonate therapy in the treatment of CNA. Bisphosphonates have been shown to successfully reduce the levels of these markers, which has led to the investigation of bisphosphonates as a potential medical therapy for CNA. The assumption here is that levels of these bone markers correspond to the severity of a patient's disease. If this assumption holds true, it would suggest that by decreasing the levels of the bone markers through bisphosphonate therapy, CNA may be taken out of the acute phase. Therefore, the purpose of our study was to compare the levels of relevant bone markers in the acute and quiescent stages to determine if they accurately reflect the severity of CNA. We hypothesized that, as CNA progresses into chronic or quiescent stages as measured by pedal temperature, the levels of bone markers would decrease accordingly.

Methodology and Statistical Approach: We retrospectively reviewed 41 patients diagnosed with CNA in our clinic. Disease severity was determined through temperature differences between affected and unaffected limbs, which were then compared to the levels of bone turnover markers (BSAP and the DPD-to-creatinine [DPD/Crt] ratio) using Pearson product-moment correlation coefficients.
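
A minimal Python sketch of this correlation analysis, with illustrative values rather than patient data:

  from scipy.stats import pearsonr

  temp_diff_c = [4.1, 2.3, 0.8, 3.5, 1.2, 2.9]   # affected minus unaffected limb, deg C
  dpd_crt = [6.2, 5.1, 4.8, 5.9, 5.0, 5.6]       # DPD/Crt ratio (hypothetical)

  r, p = pearsonr(temp_diff_c, dpd_crt)
  print(f"r = {r:.2f}, p = {p:.2f}")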

Results: The correlation between temperature and DPD/Crt was positive (r = 0.16, P = 0.30), while the temperature-BSAP correlation was negative (r = −0.17, P = 0.30). However, neither association was statistically significant.

Discussion and Conclusion: The lack of statistical significance in the relationship of bone turnover marker levels to pedal temperature and disease severity calls into question their reliability in the diagnosis of CNA, as well as their feasibility in monitoring the treatment effectiveness of medications such as bisphosphonates.


  Abstract Number 6


Sonographic Assessment of Optic Nerve for Evaluation of Sports-Associated Concussion

Adam Kobialka, Peter Murphy, Maheep Vikram, Celestine Nnaeto

Sports Medicine Fellowship, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: It is believed that concussion leads to increased intracranial pressure. Measurement of ocular ultrasound parameters that respond to changes in intracranial pressure (including the optic disc and optic nerve sheath diameter) may provide a safe, inexpensive, and noninvasive means to help diagnose concussion, as well as help monitor for resolution of symptoms. At present, there is no objective, noninvasive, point-of-care assessment measure that is both reliable and inexpensive for diagnosing sports-associated concussion/mild traumatic brain injury. We therefore measured the optic nerve sheath diameter and the optic disc to indirectly assess for increased intracranial pressure within the 1st week of clinically diagnosed sports-related concussion, and we repeated these measurements following clinically diagnosed resolution of the concussion. The primary objective was to investigate the relationship between ultrasound measurements of the optic disc/optic nerve sheath and clinically diagnosed sports-related concussion. The secondary objective was to investigate the relationship between these same measurements and clinically diagnosed resolution of sports-related concussion.

Methodology and Statistical Approach: This was a prospective cohort study of patients between 17 and 50 years of age with a clinically diagnosed sports-related concussion/mild traumatic brain injury. Before performing ocular ultrasound on study patients, all research staff had to complete at least 10 ultrasounds to demonstrate competency, as assessed by an active American Institute of Ultrasound in Medicine member. During the study, we measured both the optic disc and optic nerve sheath diameter to indirectly assess for increased intracranial pressure, and we repeated these measurements following clinically diagnosed resolution of the concussion. We compared our results to accepted age-based normative values for unaffected individuals. Patients continued to receive standard of care for concussion treatment throughout the study. We reported descriptive outcomes and Spearman's rank correlation coefficients.
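
A minimal Python sketch of a Spearman rank correlation of the kind reported, using illustrative numbers (e.g., initial sheath diameter vs. days to symptom resolution; not study data):

  from scipy.stats import spearmanr

  onsd_initial_mm = [5.4, 5.9, 6.1, 6.4, 6.6, 7.0]   # hypothetical initial diameters
  days_to_resolution = [10, 18, 21, 30, 35, 45]      # hypothetical recovery times

  rho, p = spearmanr(onsd_initial_mm, days_to_resolution)
  print(f"rho = {rho:.2f}, p = {p:.3f}")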

Results: Nine patients were enrolled at the beginning of our study, with 6 included at study cessation. The initial ultrasound of the optic nerve was performed at a mean of 3 days postinjury. Of the 9 participants who had ultrasound performed postinjury, mean optic nerve sheath diameter for both the left and right eyes was 6.30 mm, compared to the accepted mean diameter of 5.00 mm. Mean time for patients to have clinically diagnosed resolution of concussion symptoms such that they were able to return to sports activity was 26.5 days. At this time, repeat ultrasound was performed on the six remaining patients. Mean optic nerve sheath diameter decreased to 5.70 mm for both the right and left eyes, demonstrating an overall decrease in optic nerve sheath diameter with resolution of concussion symptoms. Main study results are presented in [Table 1].



Discussion and Conclusion: This study revealed that optic nerve sheath diameter increases with concussion injury and subsequently decreases back to a normal range following concussion resolution. This study further suggests that sonographic evaluation of optic nerve sheath diameter may be utilized by clinicians to aid in the diagnosis of concussion, as well as help guide return-to-play decisions. Our study was limited by the small number of participants. In particular, the majority of sports-related concussions diagnosed in the primary care sports medicine setting at St. Luke's University Health Network are in high school athletes under the age of 18; therefore, future studies should include this population. It would also be beneficial to obtain comparison measurements from healthy individuals. Furthermore, although we used the accepted mean of 5.00 mm for normal optic nerve sheath diameter, it would be beneficial to establish an accurate mean for our study population as well as for the ultrasound operator. We hope that our study can help guide future research that includes larger sample sizes and both adult and pediatric patients, along with comparison groups. This information would enable more accurate assessment of the effectiveness of sonographic evaluation of optic nerve sheath diameter in diagnosing concussion and determining appropriate return-to-play times for concussed athletes.


  Abstract Number 7


Comparing Shoulder and Cervical Spine Surgical Intervention in Shoulder Pain

Ajith Malige, Paul Morton, Gbolabo Sokunbi

Orthopedic Surgery Residency, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: Etiology of neck and shoulder pain may be multifactorial. When surgical intervention is indicated, determining whether to start with spine or shoulder surgery is an important clinical decision that is based on the severity of pathologies, comorbidities, and patient preference. The literature includes very few studies exploring the incidence or outcomes of different surgical treatment paths. Therefore, we sought to examine this issue more closely.

Methodology and Statistical Approach: We retrospectively reviewed 154 charts at a single institution between 2009 and 2017 from patients who had both cervical spine and shoulder pathology and underwent operative intervention of one or both pathologies. We recorded demographic information, diagnoses, operative details, and subjective reports of operative success in relieving shoulder symptoms. We analyzed our data with independent samples t-tests or Mann–Whitney rank-sum tests, as appropriate.
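
For illustration, the group comparisons described here can be sketched in Python with SciPy; the arrays below are synthetic stand-ins for outcome data such as NRS pain score decreases, not study data.

  import numpy as np
  from scipy.stats import ttest_ind, mannwhitneyu

  rng = np.random.default_rng(2)
  nrs_drop_spine = rng.normal(3.0, 2.0, 15)       # hypothetical cervical spine group
  nrs_drop_shoulder = rng.normal(3.4, 2.0, 91)    # hypothetical shoulder group

  t, p_t = ttest_ind(nrs_drop_spine, nrs_drop_shoulder)
  u, p_u = mannwhitneyu(nrs_drop_spine, nrs_drop_shoulder)
  print(f"t-test p = {p_t:.2f}, Mann-Whitney p = {p_u:.2f}")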

Results: Of the 154 patient charts reviewed, most patients were male (n = 90, 58.4%) and between ages 40 and 59 years (n = 95, 61.7%). Ninety-one patients (59.1%) underwent shoulder surgery, 15 (9.7%) underwent cervical spine surgery, and 48 (31.2%) underwent both operations. Overall, 71 patients (46.1%) noted complete cessation of original shoulder symptoms postoperatively. The following outcomes were similar when comparing only cervical spine to shoulder intervention: patient-reported success (P = 0.85), numerical rating scale (NRS) pain score decreases (P = 0.45), all functional outcomes except for final external rotation range of motion (P = 0.02), and postoperative opioid use (P = 0.30). When comparing patients who underwent cervical followed by shoulder intervention to shoulder followed by cervical intervention, the following outcomes were similar: patient-reported success (P = 1.00), NRS pain score decreases (P = 0.37), all 6 functional outcomes, and postoperative opioid use (P = 0.08). In contrast, for patients who underwent both operations versus only one type, patient-reported success was significantly different (P = 0.0004), but not NRS decreases (P = 0.18), functional outcomes, or postoperative opioid use (P = 0.43).

Discussion and Conclusion: We observed similar success rates when comparing patients who underwent only shoulder surgeries versus only cervical spine surgeries, as well as patients receiving shoulder followed by cervical spine surgery versus cervical spine followed by shoulder surgery. Performing both types of surgeries also yielded higher success rates compared to only one type of surgery.


  Abstract Number 8


A Double-Blind Randomized Controlled Equivalence Study Comparing Intra-Articular Corticosteroid to Intra-Articular Ketorolac Injections for Osteoarthritis of the Knee

Shane McGowan, Paul Morton, William Rodriguez, Vikas Yellapu, Tim Visser, Gregory Carolan

Orthopedic Surgery Residency and Post-Doctoral Research Fellowship, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: The United States Bone and Joint Initiative asserts that 26 million individuals in the US are affected by osteoarthritis (OA). The prevalence of OA has doubled in the past 20 years; its financial burden on hospitals exceeds $16 billion per year, and it is responsible for $160 billion in lost wages, making OA the second most expensive medical condition. OA is a significant problem for patients and hospitals; therefore, identifying effective and efficient treatments is paramount. Our study evaluated two treatment options for OA in a segment of our patient population.

Methodology and Statistical Approach: This was a randomized controlled trial comparing the efficacy of intra-articular ketorolac (Toradol) with intra-articular betamethasone injections in reducing self-reported pain. We randomly assigned patients to two arms: the comparison group received 6 mg of betamethasone, and the treatment group received 60 mg of ketorolac. The patient, injector, and researchers were blinded to treatment assignment. Before injection, enrolled patients completed the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), with follow-up WOMAC responses obtained postinjection at 1 month, 3 months, and 6 months. We also recorded age, gender, body mass index, laterality, severity, and fluid aspirations. Our primary end point was the change in WOMAC pain scores at each time point after the initial injection. We analyzed our data using paired t-tests and Chi-square tests.
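
A minimal Python sketch of the primary within-group analysis (paired t-test on pre- vs. postinjection WOMAC scores), using synthetic numbers rather than trial data:

  import numpy as np
  from scipy.stats import ttest_rel

  rng = np.random.default_rng(3)
  womac_baseline = rng.normal(54, 10, 30)                # hypothetical scores
  womac_1month = womac_baseline - rng.normal(15, 8, 30)  # assumed improvement

  t, p = ttest_rel(womac_baseline, womac_1month)
  print(f"t = {t:.2f}, p = {p:.3g}")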

Results: Over a 3-year period, 409/543 patients (75.3%) met inclusion criteria [Table 1]. A total of 111 knees were injected. There were statistically significant decreases in mean WOMAC scores for both the ketorolac and betamethasone groups at 1 month (54.0–38.9 [P = 0.001] vs. 53.0–37.8 [P = 0.004], respectively) and at 3 months (54.4–37.8 [P = 0.001] vs. 50.9–29.3 [P = 0.001], respectively). At 6 months, the ketorolac group had significantly decreased mean WOMAC scores (48.5–16.0 [P = 0.004]), but the betamethasone group did not (54.4–32.9 [P = 0.11]). When comparing changes in WOMAC scores between groups across all time periods, there was no statistically significant difference [Figure 1] and [Figure 2].







Discussion and Conclusion: Our study demonstrated a significant decrease in WOMAC scores among patients who received ketorolac intra-articular injections, which was comparable to the decrease among patients in the intra-articular betamethasone group. Since ketorolac is a more accessible and cost-effective drug, it is worth considering its use in managing OA of the knee.


  Abstract Number 9


Quality Improvement of Asthma Symptom Control Documentation

Margaret Mintus, Eginia Franco, Satinderpal Kaur, Piotr Zembzruski, Abby Rhoads, Elspeth Black

Family Medicine Residency, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: Asthma affects 25.7 million people in the United States. Recent studies have revealed poor control of asthma in the larger population, resulting in 1.8 million emergency room visits, 439,000 hospitalizations, and 3615 deaths, leading to >$60 billion in health-care costs each year. To help address this burden, it is essential to improve the documentation of patient symptoms. Our quality improvement project sought to improve asthma symptom control documentation by at least 20% to exceed American Academy of Family Physicians (AAFP) benchmarks, which include nighttime symptoms (42%), limitation of activity (31%), and asthma control (27%).

Methodology and Statistical Approach: Utilizing the AAFP Asthma Metric Module, we collected baseline data on 211 asthma patients. We observed that our documentation of nighttime symptoms, limitation of activity due to symptoms, and overall control were in need of improvement (28%, 18.5%, and 35.5%, respectively). During the 3-month implementation period (August to November 2017), patients were identified on their billing sheet as having asthma at the time of their appointment check-in. These patients were given an Asthma Control Test (ACT), which was used by our clinic physicians to document specific symptoms and overall control of patients' condition. At the end of our implementation time period, we repeated our evaluation of documentation to monitor progress.

Results: Based on ACT results, we detected an improvement in documentation in all three areas of interest, as follows:

  • Nighttime symptoms: 28% → 53%
  • Limitation of activity due to symptoms: 18.5% → 39.5%
  • Overall asthma control: 35.5% → 73%.


Discussion and Conclusion: By identifying patients with asthma at the beginning of their appointments, having them complete a self-evaluation of their symptom control, and working collaboratively with ancillary clinic staff, we were able to improve our patient care across all three areas of interest. We hope to continue identifying patients with poor asthma control who are at risk for exacerbation and hospitalization, thereby lowering the morbidity of this disease within our clinic population.


  Abstract Number 10


Provider Prescription Patterns for Acute Sinusitis: A Call for Antibiotic Stewardship in the Outpatient Setting

Priya Patel, Thomas Wojda, Pradeep Patel, Derek Tang, Eugene Decker

Family Medicine Residency – Warren Campus and Post-Doctoral Research Fellowship, St. Luke's University Health Network, Phillipsburg, NJ, USA

Introduction/Background: Excessive antibiotic prescription in ambulatory practice contributes to the growth of antibiotic-resistant bacteria. Acute sinusitis, the fifth most common diagnosis for which an antibiotic is prescribed, is frequently viral in origin and typically resolves without antibiotics. The objective of our study was to provide a descriptive analysis of our clinic's antibiotic (ABX)-prescribing patterns for acute sinusitis.

Methodology and Statistical Approach: This was a retrospective analysis of patients >18 years of age who were diagnosed with acute sinusitis between January 1, 2016, and August 31, 2017. We obtained our data from the Allscripts electronic medical records system and used the REDCap Electronic Data Capture system to centralize data collection. We documented patient age, gender, chronic obstructive pulmonary disease (COPD)/asthma history, smoking status, month of diagnosis, ABX prescribed, visit type (attending or preceptor), symptoms (facial pressure/pain/fullness, fever >100.5°F, maxillary toothache pain, purulent rhinorrhea, and symptoms >10 days), follow-up, and clinical outcome (worsening symptoms, not improving, or new symptoms vs. improved symptoms). We analyzed our data descriptively.

Results: Patient characteristics and outcomes included the following: median age = 44 years; 101 males, 295 females; 67 patients with COPD/asthma; 6 immunocompromised patients; 371 attending visits, 25 preceptor visits; 190 ABX initially prescribed [Figure 1]; 34 patients with follow-up visits (22 before 4 weeks and 12 after 4 weeks); and 126 patients diagnosed in winter, 116 in spring, 82 in summer, and 72 in fall [Figure 2]. A total of 20 patients had second visits with attendings, and 14 had second visits with preceptors. A total of 27 patients had worsening symptoms and 7 improved, with ABX prescriptions as follows: amoxicillin (AMX) = 3; sulfamethoxazole/trimethoprim (TMP-SMX) = 3; azithromycin (AZM) = 3; cephalexin (CFX) = 1; amoxicillin-clavulanic acid (AMC) = 3; and doxycycline (DX) = 1. [Table 1] presents characteristics of patients who were prescribed an ABX on a second visit. Only one patient reported gastrointestinal side effects after azithromycin was initially prescribed.







Discussion and Conclusion: Although our data on acute sinusitis cannot determine causality based on factors such as season, smoking status, and worsening or no improvement of symptoms, it appears that facial pressure or fullness may be associated with ABX prescription. We hope to use this information as baseline/historical control data in implementing an antibiotic stewardship program at our outpatient clinic to better determine if improved provider education enhances judicious prescription of antibiotics for this common illness.


  Abstract Number 11


Acute Bronchitis: Prescription Patterns of Health-Care Providers in an Outpatient Clinic

Sidra Sindhu, Thomas Wojda, Naffie Ceesay, Chris Michel, Jessica Smith, Eugene Decker

Family Medicine Residency – Warren Campus and Post-Doctoral Research Fellowship, St. Luke's University Health Network, Phillipsburg, NJ, USA

Introduction/Background: Although acute bronchitis (AB) is a predominantly viral illness, antibiotics continue to be prescribed frequently in outpatient settings, despite having limited to no benefit. The objective of our study was to describe our clinic's antibiotic (ABX) prescription patterns for AB, with the overall aim of enhancing provider awareness of the role of antibiotics for this disease.

Methodology and Statistical Approach: This was a retrospective analysis of adult patients >18 years who were diagnosed with AB between January and August of 2017. We obtained our data from the Allscripts electronic medical records system and used the REDCap Electronic Data Capture system to collect data. We obtained data on patient age; gender; preexisting chronic obstructive pulmonary disease (COPD) or asthma; smoking status and month of diagnosis; ABX given; type of visit (attending or resident/preceptor); symptoms (acute illness of <21 days, cough, respiratory tract symptoms, no other explanation, and no documentation); follow-up; and clinical outcome (worsening symptoms, not improving, or new symptoms vs. improved symptoms). Due to lack of a viable comparison group, we only reported descriptive outcomes.

Results: We analyzed 319 cases (mean age ± standard deviation = 52 ± 16 years): 230 females and 89 males, 136 patients with COPD/asthma, 254 attending visits and 65 preceptor visits, and ABX given 126 times versus 193 not given [Figure 1]. The prescribed ABX included azithromycin (AZM) = 89/126 (70%), amoxicillin (AMX) = 14/126 (12%), fluoroquinolones (LVF) = 11/126 (9%), amoxicillin-clavulanic acid (AMC) = 5/126 (4%), doxycycline (DX) = 6/126 (5%), and sulfamethoxazole/trimethoprim (TMP-SMX) = 1/126 (0.8%). Follow-up outcomes included 97 patients (47 not improving, worsening, or with new symptoms; 50 improved), 35 preceptor visits, and 91 attending visits. At follow-up, the prescribed ABX included AZM = 14/18 (77%), AMC = 2/18 (11%), AMX = 1/18 (6%), and LVF = 1/18 (6%); the mean age ± standard deviation of this population was 57.5 ± 20 years. [Table 1] presents characteristics of patients given ABX.





Discussion and Conclusion: In our study population, ABX was prescribed at 40% of the initial visits, with AZM being most frequently prescribed at both initial and follow-up visits (70% and 77%, respectively). Multiple factors may influence the decision to prescribe antibiotics, such as preexisting lung diseases, smoking status, and symptomology variation. Understanding these factors may assist in the development of an antibiotic stewardship program aimed at enhancing provider awareness of optimal treatment guidelines for AB in outpatient settings.


  Abstract Number 12


Impact of Having a Multidisciplinary Team in Nursing Homes to Improve Psychotropic Medication Prescribing Practices

Hemlata Singh, Emelia Perez, Alaa-Eldin Mira, William Kuehner, Lou Czechowski

Geriatric Medicine Fellowship, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: Nationally, there is a well-recognized need to improve medication prescribing for elderly patients in order to minimize adverse effects and prevent negative outcomes associated with certain medication classes. At the forefront are psychotropic medications, which can increase morbidity and mortality in the elderly population; examples include antipsychotic and hypnotic medications, antidepressants, and anxiolytics. Based on previous studies in nursing homes, multidisciplinary team involvement has improved prescribing practices and reduced the number of patients on psychotropic medications. One known strategy is for a multidisciplinary team, typically comprising a physician, facility psychiatrist, pharmacist, nursing supervisor, floor nurse, and nursing aides, to review patients' charts on a monthly basis. Teams should then follow Omnibus Budget Reconciliation Act (OBRA) guidelines to gradually reduce the doses of potentially inappropriate medications; created in 1987, OBRA implemented federal standards of care for nursing homes. Our quality improvement project highlights data obtained from Cedarbrook Nursing Home following implementation of a multidisciplinary team during a 1-year period. We benchmarked our efforts against the Centers for Medicare and Medicaid Services' (CMS) goal of reducing psychotropic medication usage among long-term nursing home residents by 15% by the end of 2019.

Methodology and Statistical Approach: We conducted monthly chart reviews of nursing home patients >65 years of age beginning in January of 2017. The multidisciplinary team reviewed prescribing practices and medication lists to identify residents currently taking psychotropic medications. The team pharmacist tracked changes using pharmacy data, and the team then recommended gradual dose reductions, with actual implementation based on patient tolerance. During monthly meetings, the team discussed the appropriateness of medications as well as the monitoring of patient behaviors. We collected data for 12 months, compared these findings to both state and national reports of psychotropic use, and presented our data descriptively.

Results: We reviewed 193 nursing home residents in 2017 and 194 in 2018. Thirty-nine residents were prescribed psychotropic medications as of July 2017, compared with 31 residents as of February 2018. Over the course of 1 year, 19 residents discontinued their psychotropic medications, with 4 requiring re-initiation, 1 being discharged, and 1 dying. Temporal trends in medication utilization are presented in [Figure 1], and temporal changes in prescribing patterns during the current study are shown in [Table 1].
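
For context against the CMS 15% reduction goal cited above, the relative reduction implied by these counts can be computed directly; below is a minimal Python illustration using the figures reported here.

# Residents on psychotropic medications at the two time points reported above.
baseline_on_psychotropics = 39   # July 2017
followup_on_psychotropics = 31   # February 2018

relative_reduction = (baseline_on_psychotropics - followup_on_psychotropics) / baseline_on_psychotropics * 100
print(f"Relative reduction: {relative_reduction:.1f}%")  # ~20.5%, exceeding the 15% CMS target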





Discussion and Conclusion: Our findings suggest that consistent collaboration among caregivers, prescribers, and pharmacists can improve prescribing practices of psychotropic medications as defined by clinical guidelines. Specifically, implementing a multidisciplinary team helped reduce the use of psychotropics as well as the risk of polypharmacy and other adverse outcomes associated with these medications. Our data are consistent with previous studies of this topic.


  Abstract Number 13


Body Mass Index in Trauma Patients: Is There a Relationship between Obesity and Injury Pattern?

WT Hillman Terzian, Alyssa Green, Franz Yanagawa, Ashley Jordan, Thomas Wojda, Elizabeth McCarthy, Jacqueline Seoane, Colleen Taylor, Brian Hoey, William Hoff, Stanislaw Stawicki

General Surgery Residency and Post-Doctoral Research Fellowship, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: The obesity epidemic poses a threat to public health. This trend affects all aspects of the US health-care system including trauma centers. At present, the relationship between body mass index (BMI) and injury patterns is poorly understood. Our study sought to determine whether significant associations exist between BMI and injury patterns using a large administrative trauma patient database.

Methodology and Statistical Approach: We conducted a retrospective review of our Level I trauma center database for the period between June 2015 and December 2017. We collected patient information on demographics, Injury Severity Score (ISS), and injury patterns (proximal/distal extremities, torso, spine, head, and neck). We also evaluated vital signs, mortality, and lengths of stay (hospital and Intensive Care Unit [ICU]). BMI values were divided into four groups: low (LO: <20.1), intermediate (INT: 20.1–26.3), high (HI: 26.4–35.8), and morbid (MOR: >35.8). Statistical comparisons were performed using analysis of covariance with adjustment for patient age, gender, and ISS. Statistical significance was set at α = 0.05.
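
As a rough sketch of this analytic setup (Python/statsmodels; the data file and column names bmi, age, gender, iss, and los_days are hypothetical, and the actual registry fields will differ), the code below bins BMI at the cut points listed above and fits an ANCOVA-style linear model adjusted for the stated covariates.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical flat extract of the trauma registry; real field names will differ.
df = pd.read_csv("trauma_registry.csv")  # columns: bmi, age, gender, iss, los_days, ...

# Bin BMI at the cut points used in this study:
# LO <20.1, INT 20.1-26.3, HI 26.4-35.8, MOR >35.8.
df["bmi_group"] = pd.cut(
    df["bmi"],
    bins=[0, 20.1, 26.3, 35.8, float("inf")],
    labels=["LO", "INT", "HI", "MOR"],
)

# ANCOVA-style model: an outcome (here, hospital length of stay)
# compared across BMI groups, adjusted for age, gender, and ISS.
model = smf.ols("los_days ~ C(bmi_group) + age + C(gender) + iss", data=df).fit()
print(model.summary())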

Results: Out of 7950 patients, 7579 (95.3%) had both height and weight recorded. Mean patient age was 56.8 years, 56.1% were male, mean ISS was 8.64, and mean BMI was 27.7. There was no significant association between BMI and mortality. Patients with higher BMIs had elevated systolic blood pressures (LO 143 vs. MOR 151 mmHg). Increasing BMI was associated with more torso injuries, both qualitatively and quantitatively (AIS chest: LO 1.48 vs. MOR 1.97; proximal: LO 12% vs. MOR 16%; distal: LO 10% vs. MOR 21%). Leg and spine injuries also increased with BMI. There were no significant differences between groups in terms of head and neck trauma, with corresponding AIS reductions as BMI increased (LO 2.13 vs. MOR 1.96). Patients with higher BMIs had longer hospital stays (LO 4.1 vs. MOR 5.1 days) and ICU stays (LO 0.9 vs. MOR 1.5 days). Detailed results for primary and secondary study outcomes are provided in [Table 1].



Discussion and Conclusion: We found that torso, lower extremity, and spine injuries were significantly more common with increasing BMI; however, this was not the case with upper extremity and head/neck trauma. Although there was no significant increase in mortality with higher BMIs, these patients had longer hospital and ICU stays.


  Abstract Number 14


Clinical Significance of Paradoxical Hypotension during Dobutamine Stress Testing

Srilakshmi Vallabhaneni, Matthew Carey, Rasha Aurshiya, Jamshid Shirani

Cardiology Fellowship, St. Luke's University Health Network, Bethlehem, PA, USA

Introduction/Background: Dobutamine stress echocardiography (DSE) is a safe and effective alternative to exercise stress testing in patients with suspected coronary artery disease (CAD). Hypotensive response to exercise stress has been associated with severe obstructive CAD and poor prognosis. The aim of this systematic review and meta-analysis was to explore the occurrence, risk factors, pathogenesis, and clinical significance of paradoxical hypotension (PH) during DSE.

Methodology and Statistical Approach: We conducted a systematic search of the English-language literature in PubMed, Ovid MEDLINE, and the Cochrane Library from inception through January 30, 2018. Our keywords were "dobutamine stress" combined with "hypotension," "vasodepression," and "paradoxical vasodepression" (PVD). All English-language case reports, case series, retrospective studies, case-control studies, and cohort studies were included if they reported outcomes of PVD and/or hypotension during dobutamine stress testing; studies in languages other than English were excluded. We examined all full-text articles to ensure that they met our study criteria. Our final database included ten retrospective studies, three prospective studies, and two case reports describing PH during DSE (n = 6134). We used a fixed-effects model to conduct our meta-analysis and reported the I² statistic as a measure of study heterogeneity.
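
For readers unfamiliar with the mechanics of this approach, below is a minimal sketch of inverse-variance fixed-effects pooling with an I² heterogeneity estimate (Python/NumPy/SciPy); the per-study log-odds ratios and standard errors are made-up illustrative values, not data from this meta-analysis.

import numpy as np
from scipy import stats

# Hypothetical per-study log-odds ratios and standard errors (illustrative only).
log_or = np.array([0.45, 0.62, 0.30, 0.55])
se = np.array([0.20, 0.15, 0.25, 0.18])

# Inverse-variance (fixed-effects) pooling of the log-odds ratios.
w = 1.0 / se**2
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

# 95% confidence interval reported on the odds-ratio scale.
lo, hi = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)
print(f"Pooled OR = {np.exp(pooled):.2f}, 95% CI {lo:.2f}-{hi:.2f}")

# Cochran's Q and the I^2 statistic as measures of between-study heterogeneity.
q = np.sum(w * (log_or - pooled) ** 2)
dof = len(log_or) - 1
i2 = max(0.0, (q - dof) / q) * 100
print(f"Q = {q:.2f} (p = {stats.chi2.sf(q, dof):.3f}), I^2 = {i2:.1f}%")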

Results: PH was variably defined across studies as a drop in systolic blood pressure (SBP) of ≥10–20 mmHg, a decrease in peak SBP to below baseline, or a >15% drop in SBP from baseline. The incidence of PH was 18% (n = 1100, mean age = 64 years, 63% men). PH was associated with older age (64 vs. 60 years, odds ratio [OR] = 3.45, 95% confidence interval [CI]: 2.73–4.34, P < 0.0001) and a history of hypertension (38% vs. 29%, OR = 1.66, 95% CI: 1.39–1.97, P < 0.0001). Gender, prior CAD, left ventricular ejection fraction <40%, and prior beta-blocker use were not significantly associated with PH. Inducible myocardial ischemia was present in 48% of patients with PH compared to 45% in the control arm; however, coronary angiography was completed in only 11% of patients with PH and 8% of patients with a normal blood pressure response. Among patients who underwent coronary angiography, PH did not predict angiographic evidence of obstructive CAD. [Figure 1] and [Figure 2] show forest plots for key study outcomes.





Discussion and Conclusion: PH during DSE is seen in older patients with a history of hypertension and is highly associated with the presence of inducible ischemia. Additional studies are needed to further clarify the association of PH with CAD.




 
