CONFERENCE ABSTRACTS AND REPORTS
Year: 2019 | Volume: 5 | Issue: 2 | Page: 135-149
The 2019 St. Luke's University Health Network Annual Research Symposium: Event Highlights and Scientific Abstracts
Date of Submission: 03-Aug-2019
Date of Decision: 05-Aug-2019
Date of Acceptance: 08-Aug-2019
Date of Web Publication: 29-Aug-2019
Source of Support: None. Conflict of Interest: None.
How to cite this article: The 2019 St. Luke's University Health Network annual research symposium: Event highlights and scientific abstracts. Int J Acad Med 2019;5:135-49.
Jill C. Stoltzfus, PhD
Department of Data Management and Outcomes Assessment, St. Luke’s University Health Network, Bethlehem, Pennsylvania, USA
Address for correspondence: Dr. Jill C. Stoltzfus, Department of Data Management and Outcomes Assessment, EW4, 801 Ostrum Street, Bethlehem, Pennsylvania 18015, USA. E-mail: email@example.com
Background Information and Event Highlights: The Annual St. Luke’s University Health Network Research Symposium was created in 1992 to showcase research and quality improvement projects by residents and fellows. The Senior Network Director of Graduate Medical Education (GME) Data Management and Outcomes Assessment is responsible for planning and organizing the event, with collaboration and consultation from GME leadership and residency and fellowship faculty. Residents and fellows submit an application for oral and/or poster presentation along with an accompanying abstract describing their project. Three to four physician judges evaluate the presentations for the first- and second-place cash prizes awarded in both the oral and poster presentation categories. In 2018 and 2019, medical students were also invited to submit poster presentations (with up to three accepted), although these posters were not included in the scientific competition.
The 2019 Research Symposium winners are as follows:
- Oral presentations:
- First place – Kayla Bardzel, PharmD (Pharmacy Residency), “Compliance to Procalcitonin Algorithms and Patient Outcomes in Lower Respiratory Tract Infections and Sepsis: Impact of Education without Continuous Antimicrobial Stewardship Feedback”
- Second place – Thomas Wojda, MD, MBA (Family Medicine Residency – Warren Campus), “Prospective, Randomized Study of Short-Term Weight Loss Results Applying a Gamification-Based Strategy.”
- Poster presentations:
- First place – Jessica Ton, MD (Minimally Invasive Gynecology Fellowship), “Trigone-Only Injections of Botulinum Toxin are as Effective as Trigone-Sparing Injections but Have Less Deleterious Effects on Bladder Emptying: A Retrospective Single-Institution Review”
- Second place (tie) – James Ritter, DPT (Orthopedic Physical Therapy Residency), “Does Orthopedic Residency Training Result in Improved Patient Outcomes? A Retrospective Case–Control Study”
- Second place – Anna Yang, MD (Emergency Medicine Residency – Bethlehem Campus), “Opioids Prescribed by Emergency Physicians from an Academic Center Versus a Community Hospital.”
As in the previous 3 years, the 2019 Research Symposium for Residents and Fellows included a keynote speaker. This year’s distinguished keynote speaker was Laurence Bauer, MSW, MEd, Chief Executive Officer of the Family Medicine Education Consortium, Inc. Mr. Bauer’s keynote presentation addressed the future of research in healthcare settings, with emphasis on emerging “hot topics,” including genetic discoveries, the importance of the microbiome, and the impact of nutritional science. Mr. Bauer also discussed how the changing healthcare landscape (e.g., the combination of value-based payment and consumer-driven healthcare; the influence of social determinants of health, transaction costs, and patient demands) will affect the relationship between hospitals and the communities they serve. In particular, primary care is playing an increasingly vital role, with renewed interest in the biopsychosocial model of health and a shift from specialty domination to recognition of the importance of generalist expertise. Mr. Bauer concluded by highlighting how “big data” will play an even more central role in the changing healthcare landscape.
The following core competencies are addressed in this article: Practice-based learning and improvement, Medical knowledge, Patient care and procedural skills, and Systems-based practice.
Oral Presentation Abstracts
Abstract Number 1: The Effects of Extracorporeal Shockwave Therapy as an Adjunct to Standard Care in Patients with Chronic Overuse Injuries
Elizabeth Ballard, Stephen Kareha, Matt Johnson, Greg Colvin, James Scifers
Orthopedic Physical Therapy Residency, St. Luke’s University Health Network, Bethlehem, PA, USA
Introduction/Background: Extracorporeal shockwave therapy (ESWT) is a noninvasive treatment method that delivers acoustic pressure waves, through an applicator head, to the site of pain or pathology. These acoustic pressure waves stimulate metabolism, enhance circulation, and accelerate tissue healing. Evidence suggests that this modality is best utilized to treat chronic tendinopathies that are unresponsive to conservative care for greater than 6 months. The objective of this study was to determine if ESWT improved treatment outcomes in patients suffering from chronic lower extremity overuse injuries.
Methodology and Statistical Approach: This retrospective cohort study compared patients’ outcomes after receiving ESWT as an adjunct to joint and soft tissue mobilization and progressive resistance training with their outcomes during a preceding treatment period without ESWT. Data were collected in an outpatient sports medicine physical therapy office. Patients with chronic overuse lower extremity pathologies that were resistant to joint and soft tissue mobilization and progressive resistance training participated in the study. Pre- and post-intervention assessments of pain and function were completed using a numeric pain rating scale and the lower extremity computerized adaptive test (LE CAT; Focus on Therapeutic Outcomes, Inc.). A global change scale was also administered as a postintervention measurement for each patient. Changes in pain and LE CAT scores were analyzed using Wilcoxon signed-rank tests.
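As an illustration of the paired, nonparametric analysis described above, a minimal sketch using SciPy's Wilcoxon signed-rank test might look like the following; the score vectors are invented placeholders, not the study's data.

```python
# Minimal sketch of a Wilcoxon signed-rank test on paired scores.
# The pain scores below are invented placeholders, NOT the study data.
from scipy.stats import wilcoxon

pain_before = [6, 7, 5, 8, 6, 7, 4, 6, 5, 7, 8]  # pain ratings before adding ESWT
pain_after  = [2, 3, 1, 4, 2, 3, 1, 2, 2, 3, 4]  # pain ratings after adding ESWT

# Two-sided test of whether the paired differences are centered at zero
stat, p = wilcoxon(pain_before, pain_after)
print(f"Wilcoxon statistic = {stat}, P = {p:.3f}")
```

The same call would be repeated for the LE CAT scores; a P value below the chosen alpha (e.g., 0.05) indicates a significant within-patient change.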
Results: Eleven patients (four males, seven females; mean age = 38 ± 15.3 years) were included in the study. Participants in the experimental phase received an average of 7.18 ± 3.54 ESWT treatments. Results revealed a statistically significant difference in pain before (median 0, range: −4–5) and after (median 5, range: 0–9) the addition of ESWT treatment (P = 0.022). LE CAT scores also showed statistically significant improvement before (median 6, range: −5–27) and after (median 18, range: 7–38) the addition of ESWT treatment (P = 0.029). In addition, change per treatment intervention differed significantly with versus without ESWT for both pain (P = 0.005) and function (P = 0.008), indicating greater per-treatment improvement once ESWT was added. Patients rated their median overall improvement as 90% (range: 80%–100%) following the addition of ESWT to their treatment program.
Discussion and Conclusion: The findings of this study suggest that ESWT may be a useful addition to joint and soft tissue mobilization and progressive resistance training for decreasing pain and improving function in patients with chronic overuse lower extremity injuries. It should be noted that this research study had a limited number of participants; therefore, further studies are needed to generalize these results to a larger population, with comparison of clinical outcomes across multiple treatment groups and protocols.
Abstract Number 2: Injury Patterns and Outcomes Associated with Fractures of the Native Distal Femur in Adults
David Roy, David Ramski, Ajith Malige, Matthew Beck, Patrick Brogle
Orthopedic Surgery Residency, St. Luke’s University Health Network, Easton, PA, USA
Introduction/Background: Previous studies on the epidemiology of distal femur fractures have shown a high degree of demographic and temporal variability in their sample populations. There were also notable disparities in inclusion and exclusion criteria, especially regarding patient age and preexisting implants within the affected extremity. These studies’ findings have been inconsistent; therefore, the distal femur fracture injury pattern remains poorly defined. Our study sought to build on these previous findings to define an injury pattern unique to fractures of the distal femur.
Methodology and Statistical Approach: From January 1990 to March 2016, we identified 171 patients presenting at our Level 1 trauma center, of which 91 injuries met inclusion criteria for final analysis. Patients were excluded based on preexisting ipsilateral orthopedic implants, open physes, or missing information. For each patient, we recorded demographic characteristics, additional injuries, the effects of mechanism of injury on injury severity, and postoperative outcomes after distal femur fracture. For comparison of demographics and categorical variables, we conducted Chi-square and Fisher’s exact tests as appropriate. For comparison of continuous variables, we conducted independent-samples t-tests.
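As an illustration of this statistical plan, the sketch below applies a chi-square test, Fisher's exact test, and an independent-samples t-test with SciPy; the 2×2 table and operative times are hypothetical values chosen for the example, not study data.

```python
# Illustrative sketch of the group comparisons described above (not study data):
# chi-square / Fisher's exact for categorical variables, independent-samples
# t-test for continuous variables, comparing high- vs. low-energy mechanisms.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, ttest_ind

# Hypothetical 2x2 table: open fracture (yes/no) by mechanism (high/low energy)
table = np.array([[15, 45],   # high energy: open, closed
                  [ 3, 28]])  # low energy:  open, closed
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Fisher's exact test is preferred when expected cell counts are small
odds_ratio, p_fisher = fisher_exact(table)

# Hypothetical operating room times (minutes) by mechanism
or_time_high = [150, 180, 165, 200, 170, 190]
or_time_low  = [110, 120, 115, 130, 125, 118]
t_stat, p_t = ttest_ind(or_time_high, or_time_low)
print(p_chi2, p_fisher, p_t)
```

In practice, the choice between chi-square and Fisher's exact test is usually driven by the expected cell counts returned by `chi2_contingency`.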
Results: Additional orthopedic injuries (most commonly an ipsilateral patella or tibia fracture, P = 0.02) were more likely to occur in patients who sustained high-energy injuries (86%, P = 0.0001). High-energy injuries resulted in more severe distal femur fracture types and a significantly greater rate of open fractures (19.8% of all fractures, P = 0.0001). High-energy injuries were also associated with longer operating room times during fixation (P < 0.001), greater estimated blood loss during surgery (P = 0.03), and longer hospital length of stay (P = 0.04). Finally, high-energy injuries were linked to lower union rates (P = 0.036) and an increased rate of additional surgeries (P = 0.011).
Discussion and Conclusion: Patients who sustain a distal femur fracture are at greater risk for additional fractures (particularly ipsilateral tibia and patella fractures), open injuries, and nonorthopedic traumatic injuries. High-energy injuries are also associated with a more complicated clinical course and a lower rate of union compared to low-energy injuries.
Abstract Number 3: Prospective, Randomized Study of Short-Term Weight Loss Results Applying a Gamification-Based Strategy
Thomas Wojda, Parampreet Kaur, Sagar Mehta, Patrick Bower, Matthew Fenty, Mark Kender, Kate Boardman, Maureen Miletics, Jill Stoltzfus, Stanislaw Stawicki
Family Medicine Residency (Warren Campus), St. Luke’s University Health Network, Allentown, PA, USA
Introduction/Background: In response to the obesity epidemic, various strategies have been proposed. While surgical approaches remain the most effective long-term management option, the effectiveness and sustainability of short-term, nonsurgical weight loss are controversial. Gamification (e.g., point systems and constructive competition) of weight loss activities may help achieve more sustainable results. We hypothesized that the use of a smartphone-based gamification platform (SBGP) would facilitate sustained nonsurgical weight loss at 3 months. In addition, we examined whether the intensity of SBGP participation correlates with outcomes and whether it has parallel effects on hemoglobin A1c (HbA1c) levels.
Methodology and Statistical Approach: We conducted an institutional review board-approved, prospective, randomized study from January 2017 to February 2018 that included 100 bariatric surgery candidates randomized to either SBGP (n = 50) or no SBGP (NSBGP, n = 50). Following enrollment, SBGP patients installed a mobile app (Picture It! Ayogo, Vancouver, Canada) and received usage instructions. Patients were followed for 3 months with weight checks, patient engagement questionnaires, and healthcare encounters. Mobile app use was also tracked, including the number of interactions and real-time feedback. Our primary outcome was weight loss at 3 months, and our secondary outcome was HbA1c, both compared between the SBGP and NSBGP groups using nonparametric statistics. In addition, the intensity of app use was contrasted with weight loss for the SBGP group. Participation was measured on a low–intermediate–high scale (a composite of in-app encouragements, likes, answers, and “daily quest” inputs).
Results: After four patients were lost to follow-up, 49 SBGP and 47 NSBGP patients completed the study. There were no significant demographic differences between the two groups (mean age 38.4 ± 10.4 years, median weight 273 lbs, 81% female, 28% diabetic, and 44% hypertensive). We noted no significant difference in average weight loss at 3 months between SBGP patients (3.94 lbs) and NSBGP patients (1.45 lbs). However, actively engaged patients lost more weight (8.33 lbs) compared to less engaged patients (2.51 lbs) in the SBGP group. Of note, absolute measured weight loss was greater among men [Figure 1]a. We did not note statistically significant between-group differences in HbA1c [Figure 1]b.
Figure 1: (a) Weight difference of users according to gender. (b) Hemoglobin A1c differences at 3 months
Discussion and Conclusion: This study suggests that when using gamification as an adjunct in nonsurgical approaches to weight loss, active patient engagement and male gender may be the strongest determinants of success. Our findings will be important in guiding strategies to optimize weight loss through customization and personalization of SBGP approaches to maximize patient engagement and clinical results.
Abstract Number 4: The Impact of a Standardized Checklist on the Quality and Duration of Emergency Department Physician Sign Out
Anna Yang, Brendan Healy, Holly Stankewicz, Jill Stoltzfus, Philip Salen
Emergency Medicine Residency (Bethlehem Campus), St. Luke’s University Health Network, Allentown, PA, USA
Introduction/Background: Transitions of patient care during shift change introduce the potential for miscommunication and can impact patient safety and emergency department throughput. Our study sought to determine if utilization of a sign out (SO) checklist (CL) resulted in improved quality of transfer of patient care and whether a CL impacted the duration of SO.
Methodology and Statistical Approach: After receiving institutional review board (IRB) approval, we conducted a prospective study assessing emergency medicine residents’ transfer of patient care. For the initial 2 months, residents engaged in unstructured SO. For the next 2 months, residents utilized a standardized CL to aid in SO. Attending physicians recorded SO duration, number of SO patients, quality of SO using visual analog scores (VASs), and discussion of care issues, including diagnosis, patient care tasks, disposition, admitting team, code status, and the necessity for additional information about the patient from the attending physicians. We reported frequencies of topics residents were expected to verbalize, as well as median VAS across all patient SO events. We analyzed our data using Wilcoxon signed-rank tests and independent-samples t-tests.
Results: Assessment of physician SO was performed for 77 days (38 days of status quo and 39 days utilizing a CL). There were 548 assessments in the non-CL (NCL) cohort and 697 in the CL cohort. Across all subjects, increasing numbers of SO patients correlated with increased duration of SO (Pearson r = 0.74, P < 0.0001). There was a significant difference in mean SO duration based on CL status (CL = 10.3 ± 5.1, NCL = 13.6 ± 5.3; P = 0.01); however, the difference in mean number of SO patients per minute was not significant (CL = 0.86 ± 0.31, NCL = 0.86 ± 0.23). Median VAS assessment of SO improved to 8 for CL (range: 2.5–10) versus 7.5 (range: 0.05–0.95) for NCL (P < 0.0001). Important aspects of SO improved with implementation of the CL, including tasks (NCL = 578/686, 84.3%; CL = 482/493, 97.8%; P < 0.0001); disposition (NCL = 683/703, 97.2%; CL = 518/521, 99.4%; P = 0.004); admitting team (NCL = 392/584, 67.1%; CL = 321/421, 76.2%; P = 0.03); and necessity of attending clarification (NCL = 100/427, 23.4%; CL = 39/345, 11.3%; P < 0.0001).
Discussion and Conclusion: Our study demonstrated that use of a CL process improves SO quality without prolonging SO duration.
Poster Presentation Abstracts
Abstract Number 1: Prevalence of Depression in Athletes Using the Baron Depression Screen
Brianne Allerton, Sachdeep Takhar, Maheep Vikram, William McCafferty III
Family Medicine Residency (Bethlehem Campus), St. Luke’s University Health Network, Bethlehem, PA, USA
Introduction/Background: The field of behavioral health continues to expand as we better understand the importance of treating behavioral health disorders. The US Preventive Services Task Force recommends screening for depression in individuals as early as 12 years of age and continuing into adulthood. Athletes represent a unique group that often experiences different stressors and pressures compared to the general population. As a result, there was a need to develop a depression screening tool geared toward athletes. The ten-item Baron Depression Screen in Athletes, developed by David Baron, DO, asks specific questions about how athletes view their particular sport, as well as how their emotional state is tied to their chosen activity. Our study sought to determine how this screening tool performed in college athletes.
Methodology and Statistical Approach: We administered the Baron Depression Screen to a group of Moravian College athletes and collected data on their gender, sport, and year in college. A score of five or greater was considered a positive screen for depression. Individuals were given information about behavioral health support services to use at their discretion. We collected and recorded our data in REDCap (Vanderbilt University, Nashville, TN), working in conjunction with the Athletic Training Department at Moravian College and Research Departments at St. Luke’s University Health Network.
Results: We obtained a total of 69 surveys (39 female [56.5%] and 30 male [43.5%]). College years included 17 freshmen (24.6%), 11 sophomores (15.9%), 16 juniors (23.2%), and 25 seniors (36.2%). Sports representation included 29 basketball (42%); 10 volleyball (14.5%); 9 baseball (13%); 8 each of softball and track and field (11.6%); 2 cheerleading (2.9%); and 1 each of field hockey, football, and lacrosse (1.4%). Positive screen results occurred in 15 individuals (21.7%). Of the 15 positive screens, 11 were female (7 basketball, 3 softball, and 1 volleyball) and 4 were male (2 baseball, 1 track and field, and 1 basketball).
Discussion and Conclusion: Recognizing and treating depression is an important component of public health. Although other screening tools are widely used and accepted in the general population, it is important to recognize the special subset of stressors that exist in athletes. As our study demonstrated, the Baron Depression Screen is useful in identifying depression in a group of college athletes. We acknowledge that our study may be limited by its small sample size, the fact that not all sports were represented, and the lack of uniform distribution among the sports that were included. Ongoing research is needed to form a better understanding of screening for depression in college athletes as a unique population group, which will lead to more effective treatment.
Abstract Number 2: Glycemic Control Following Hospital Discharge for Patients Newly Initiated on Insulin during Admission at Differing Frequencies of Injection
Katie Bressler, Nicholas Patricia, Daniel Longyhore
Pharmacy Residency, St. Luke’s University Health Network, Bethlehem, PA, USA
Introduction/Background: Insulin has become a mainstay of treatment for patients with advanced type 2 diabetes mellitus. Intensive insulin therapy provides rapid glycemic control to reduce the risk of complications. For patients started on insulin at hospital discharge, an evidence gap exists regarding the number of insulin injections per day and the ability to achieve glucose control. We sought to determine the efficacy of insulin therapy, stratified by the number of injections per day at initiation, in insulin-naïve patients with type 2 diabetes mellitus.
Methodology and Statistical Approach: For our retrospective chart review, we queried the St. Luke’s University Health Network electronic health record system for patients who were initiated on insulin therapy during inpatient admissions between January 1, 2016, and September 1, 2017, with a primary or secondary diagnosis of type 2 diabetes mellitus. Patients were included if they had at least two follow-up visits with the St. Luke’s Physician Group and at least one follow-up hemoglobin A1c (HbA1c) posthospital discharge.
All outcomes were evaluated up to 12 months postdischarge and stratified by the number of insulin injections per day (1, 2, or 4). The primary outcome was change in HbA1c. The secondary outcomes included the number of patients who reached prespecified HbA1c values, change in the number of noninsulin hypoglycemics, change in the number of insulin injections per day, number of episodes of severe hypoglycemia or hyperglycemia, number of patients who discontinued insulin treatment, and change in patient weight. We summarized our data using descriptive statistics.
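A stratified descriptive summary of this kind can be sketched with pandas; the HbA1c changes below are invented placeholders, not patient data.

```python
# Hypothetical sketch of a descriptive, stratified summary as described above:
# change in HbA1c grouped by number of insulin injections per day.
# All values are invented placeholders, NOT patient data.
import pandas as pd

df = pd.DataFrame({
    "injections_per_day": [1, 1, 2, 2, 4, 4, 4],
    "hba1c_change":       [-5.0, -5.4, -5.2, -5.6, -4.9, -5.3, -5.5],
})

# Mean change and group size per injection-frequency stratum
summary = df.groupby("injections_per_day")["hba1c_change"].agg(["mean", "count"])
print(summary)
```

The same pattern extends to the secondary outcomes by adding further columns to the `agg` call.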
Results: Of the 24 patients initiated on insulin at discharge, 7 received 1 injection/day, 4 received 2 injections/day, and 13 received 4 injections/day. At up to 12 months postdischarge, patients initiated on 1, 2, and 4 injections/day displayed an absolute HbA1c reduction of 5.2%, 5.4%, and 5.2%, respectively. The proportion of patients who achieved an HbA1c value of ≤7% was similar across groups (70% for 1 injection/day, 75% for 2 injections/day, and 76.9% for 4 injections/day). Across all groups, patients maintained on insulin throughout follow-up were continued on the same number of injections per day for the entire study duration. Patients initiated on more injections per day had a greater reduction in total daily units (a 2-unit reduction for 1 injection/day, a 10-unit reduction for 2 injections/day, and a 28-unit reduction for 4 injections/day). The 4 injections/day group was the only one that contained patients who were able to discontinue insulin therapy after reaching their patient-specific HbA1c goals. The 4 injections/day group was also the only one that did not include patients who maintained an HbA1c >9% at up to 12 months' follow-up.
Discussion and Conclusion: Patients prescribed a greater number of insulin injections per day at hospital discharge achieved a similar absolute reduction in HbA1c, with fewer patients maintaining an HbA1c >9% at 12 months' follow-up.
Abstract Number 3: Can Cranberry Extract Prevent Urinary Tract Infection? A Meta-analysis of Randomized Controlled Trials
Jana Havranova, Steven Cardio, Matthew Krinock, Max Widawski, Rachel Sluder, Harsh Goel
Internal Medicine Residency (Bethlehem Campus), St. Luke’s University Health Network, Allentown, PA, USA
Introduction/Background: Urinary tract infections (UTIs) are among the most common bacterial infections, responsible for an estimated 7 million annual office visits, 1 million emergency room visits, and >400,000 hospital admissions, the latter accounting for over $2.5 billion in healthcare costs. Occurring predominantly in females, UTIs recur in up to 30% of patients, mandating repeated antimicrobial treatments. Given the cost, increasing antimicrobial resistance, and adverse effects associated with such treatment, nonantimicrobial prophylaxis in patients with recurrent UTIs is a topic of intense research. Several studies suggest that daily use of cranberry extract may lower the incidence of recurrent UTIs, although these studies had mostly small samples and were heterogeneous in terms of population studied, treatment type and duration, and outcome definition, all of which preclude conclusive recommendations from healthcare providers. Our meta-analysis investigated the randomized controlled trial (RCT) evidence regarding the role of cranberry extract in preventing UTIs.
Methodology and Statistical Approach: We performed a PubMed/MEDLINE search using the search terms “cranberry” AND “urinary tract infection,” “UTI,” “dysuria,” “pyuria,” “bacteriuria,” or “cystitis.” We included placebo-controlled RCTs conducted in adults (≥18 years old) that compared cranberry extract in tablet/capsule form to placebo and reported outcomes as number of UTIs. RCTs using cranberry juice were excluded. We used RevMan version 5.3 (The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen, 2014) to conduct our meta-analysis, quantify heterogeneity, and perform subgroup analyses.
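Although the authors used RevMan, the core computation behind a pooled OR is inverse-variance weighting of per-trial log odds ratios. The sketch below illustrates a fixed-effect version of that calculation with invented event counts, not the trials analyzed here.

```python
# Hypothetical sketch of inverse-variance pooling of log odds ratios,
# the computation underlying forest plots like those produced by RevMan.
# All event counts below are invented for illustration, NOT trial data.
import math

# (events_treat, n_treat, events_ctrl, n_ctrl) for three hypothetical RCTs
trials = [(10, 100, 20, 100),
          (8, 80, 15, 80),
          (12, 120, 18, 120)]

log_ors, weights = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c                 # non-events in each arm
    log_or = math.log((a * d) / (b * c))  # log odds ratio for this trial
    var = 1/a + 1/b + 1/c + 1/d           # approximate variance of log OR
    log_ors.append(log_or)
    weights.append(1 / var)               # inverse-variance weight

pooled_log_or = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
pooled_or = math.exp(pooled_log_or)
print(f"Pooled OR = {pooled_or:.2f}")
```

A random-effects model (as typically reported when heterogeneity is substantial) adds a between-study variance term to each weight, but the weighting logic is otherwise the same.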
Results: Initial searches yielded 723 articles, of which 63 remained after excluding duplicates, review articles, and preclinical or animal studies. An additional 48 studies were excluded because they included patients <18 years of age, administered cranberry juice, or failed to report outcomes that would allow calculation of odds ratios (ORs). A total of 15 RCTs yielding 22 discrete comparison groups were included in the final analysis.
[Figure 1] presents the pooled OR, showing significantly lower odds (32%) of UTI with cranberry extract compared to placebo. Notably, there was substantial heterogeneity between studies. Given the differing definitions of outcomes, we performed subgroup analyses by outcome definition. [Figure 2], [Figure 3], and [Figure 4] show pooled ORs by outcome (i.e., UTI symptoms, pyuria and/or bacteriuria, and symptomatic UTI confirmed by urine cultures, respectively). All subgroups revealed a significant benefit with cranberry extract. Notably, the pyuria/bacteriuria group showed no between-study heterogeneity, whereas the symptomatic UTI and symptoms-plus-positive-cultures groups showed an increase in heterogeneity. Hence, differing outcome definitions seem to contribute significantly to heterogeneity among studies. Other potential causes of heterogeneity included duration of treatment, bioactive proanthocyanidin (PAC) content of the treatment used, and population characteristics.
Figure 1: Pooled odds ratios, showing significantly lower odds (32%) of UTI with cranberry extract compared to placebo. Notably, there was substantial heterogeneity between studies
Figure 2: Pooled odds ratios for studies reporting on UTI symptoms, showing significant benefit with cranberry extract
Figure 3: Pooled odds ratios for studies reporting on pyuria and/or bacteriuria, revealing a significant benefit with cranberry extract
Figure 4: Pooled odds ratios for studies reporting on symptomatic UTI confirmed by urine cultures, again showing a significant benefit with cranberry extract
Discussion and Conclusion: Cranberry has been used as a folk remedy to prevent UTIs for almost a century. Initially attributed to acidification of urine, the benefit of cranberries has more recently been ascribed to interference with bacterial adhesion to the urothelium. A-type PACs found in cranberries seem to be the most likely candidates mediating this effect, although hundreds of other compounds contained in cranberries remain to be explored. A large number of RCTs have shown inconsistent benefit of cranberry in preventing UTI, likely stemming from such factors as sample size, population characteristics, duration of treatment, outcome definition, and formulation of cranberry used. Our meta-analysis, which was restricted to adult RCTs that used cranberry extract in capsule/tablet form, showed a significant benefit of cranberry extract in preventing UTIs, regardless of how UTI was defined. Moreover, we demonstrated that much of the heterogeneity in RCTs stems from outcome definition, with the effect of cranberry extract on pyuria/bacteriuria being the most consistent; in addition, very few studies reported the PAC content of their cranberry preparation. These factors should help guide future large-scale clinical trials.
Abstract Number 4: Geriatric Assessment in Primary Care
Satinderpal Kaur, Maria Ghetu
Geriatric Medicine Fellowship, St. Luke’s University Health Network, Bethlehem, PA, USA
Introduction/Background: Early identification of problems and focused intervention in older adults, through rolling geriatric assessment over several visits, lead to a better quality of life. Comprehensive geriatric assessment, or the systematic evaluation of older patients by a team of health professionals, consists of data gathering, team discussion, and development of a plan, with monitoring and revision as needed.
Methodology and Statistical Approach: Our project consisted of the following: (1) evaluation of methods of geriatric patient assessment and subsequent documentation in the electronic medical record (EMR) by physicians using preproject assessment forms; (2) introduction of an EMR template containing geriatric assessment components (i.e., social history, functional assessment, incontinence, cognition, nutrition, mental health, immunization, and advance care planning); (3) educating physicians about how to use the template during their patients’ family medicine office visits; and (4) assessment of physician responses using a postproject survey.
Results: A total of 23 physicians completed the surveys, with 78% reporting improvement in patient care due to conducting comprehensive geriatric assessment, 70% reporting satisfaction with documentation, and 78% reporting confidence in performing geriatric assessment. A detailed summary of pre- and post-project physician responses is provided in [Table 1] and [Table 2].
Discussion and Conclusion: Given the increasing number of older patients, family medicine physicians should be proficient in performing comprehensive geriatric assessments. Although Medicare annual visits include certain components of assessment, comprehensive geriatric assessment can lead to early recognition of problems that impair quality of life by identifying areas for focused intervention. Furthermore, performing geriatric assessment over several visits can effectively identify more subtle or hidden problems.
By using precompleted forms with patients and families, as well as training office staff to complete validated assessment tools, physicians can maximize efficiency by focusing on identified problems. Finally, using the geriatric assessment template in the EMR will improve documentation and physicians’ daily workflow efficiency.
Abstract Number 5: Impact of Ambient Background Noise on Sign out in the Emergency Department
Connie Lorenzo, Holly Stankewicz, Brendan Healy, Jill Stoltzfus, Philip Salen
Emergency Medicine Residency (Bethlehem Campus), St. Luke’s University Health Network, Phillipsburg, NJ, USA
Introduction/Background: Elevated emergency department (ED) noise levels can impact physician communication during physician sign out (SO). Therefore, our study sought to assess the impact of time, background music, background discussion (BD), and SO on ambient noise in an ED physician charting area.
Methodology and Statistical Approach: This prospective observational study monitored ambient noise levels in a university hospital ED physician charting area at various times during the day (7:00 am–3:00 pm), evening (3:00 pm–11:00 pm), and night shifts (11:00 pm–7:00 am), utilizing a cellular phone sound level monitor app (Decibel X, SkyPaw Co., Hanoi, Vietnam). A research assistant and two physicians collected a convenience sample of noise measurements in decibels (dB) in the physician charting area over a 36-day period. SO noise measurements were made within one meter of the SO physicians; non-SO noise measurements were made centrally in the charting area. SO data collected also included SO duration and number of SO patients. We used analysis of variance, Student’s t-test, and Spearman’s rank coefficients (rho) to analyze our data.
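As an illustration of these analyses, the sketch below computes Spearman's rho and an independent-samples t-test with SciPy; all decibel values and durations are invented, not study measurements.

```python
# Illustrative sketch of the correlation and group-comparison analyses
# described above. All values are invented placeholders, NOT study data.
from scipy.stats import spearmanr, ttest_ind

# Spearman's rho: ambient noise level vs. sign-out duration
noise_db   = [55, 58, 60, 62, 64, 66, 68, 70]   # ambient noise (dB)
so_minutes = [ 6,  7,  8,  9, 11, 12, 14, 15]   # sign-out duration (min)
rho, p_rho = spearmanr(noise_db, so_minutes)
print(f"rho = {rho:.2f}, P = {p_rho:.4f}")

# t-test: mean noise with vs. without background music
with_music    = [63, 66, 68, 65, 70, 67]
without_music = [52, 55, 54, 57, 56, 53]
t_stat, p_t = ttest_ind(with_music, without_music)
```

A rho near +1 or −1 indicates a strong monotonic association; the sign conveys the direction of the relationship between noise and duration.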
Results: The 358 ambient noise measurements demonstrated that day shifts (66.6 dB ± 3.9, n = 104) and evening shifts (65.3 dB ± 4.7, n = 175) generated more noise than night shifts (57.8 dB ± 8, n = 79; P = 0.05). Background music (BM) in the charting area resulted in higher mean ambient noise levels during physician SO (BM = 65.6 dB ± 7.4, n = 14 vs. no BM = 55.4 dB ± 7.8, n = 65; P < 0.001). BD in the physicians’ charting area resulted in higher mean ambient noise during physician SO (BD = 67 dB ± 6.5, n = 23 vs. 53.2 dB ± 5.8, n = 55; P < 0.001). Higher decibel volumes correlated with longer SO duration (Spearman’s rho = −0.41; P < 0.001) and higher numbers of SO patients (Spearman’s rho = −0.56; P < 0.001). SO also impacted mean ambient noise (SO = 60 dB ± 8.7, n = 97 vs. non-SO = 65.5 dB ± 4.4, n = 261; P < 0.0001).
Discussion and Conclusion: Our study revealed that time of day, BM, BD, and physician SO impacted ambient noise levels in a physician charting area. Higher ambient noise levels correlated with longer physician SO and increased numbers of SO patients.
Abstract Number 6: Review of Physician Referrals to Orthopedic Spine versus Neurosurgery in an Academic Community Setting
Ajith Malige, Roger Yuh, Vikas Yellapu, Vince Lands, Gbolabo Sokunbi
Orthopedic Surgery Residency, St. Luke’s University Health Network, Phillipsburg, NJ, USA
Introduction/Background: The choice of surgeon for spinal surgery is often dictated by primary care physician (PCP) referral and patient choice. Previous studies have reported on what patients value when choosing their surgeon, but there are no studies exploring PCP referral patterns to orthopedic versus neurosurgical spine surgeons. We aimed to identify trends in referral for spinal pathology to orthopedic surgery versus neurosurgery based on pathology, location, and intervention.
Methodology and Statistical Approach: In total, 450 internal medicine, family medicine, emergency medicine, neurology, and pain management physicians who practice at one of three locations (suburban community hospital, urban academic university hospital, and urban private practice) were asked to participate. Consenting physicians completed our 24-question survey, which addressed their beliefs regarding various pathologies, locations, and surgical interventions.
Results: Of the 450 physicians contacted, 108 completed our survey (24%). Fifty-seven physicians (52.8%) felt that neurosurgeons provide better long-term comprehensive spinal care. Overall, 66.7% of physicians would refer to neurosurgery for cervical spine radiculopathy, 52.8% would refer to neurosurgery for thoracic spine radiculopathy, and 56.5% would refer to orthopedics for lumbar spine radiculopathy. Most physicians would refer all spine fractures (compression, thoracic, lumbar, and sacrum) to orthopedics for treatment except cervical spine fractures (56.5% to neurosurgeons). For spinal tumors, most would refer to neurosurgery for extradural tumors (91.7%) and intradural tumors (96.3%). Most would refer to orthopedic surgeons for chronic pain. Finally, physicians would refer to orthopedics for spine fusion (61.1%) and discectomy (58.3%) and to neurosurgery for minimally invasive surgery (59.3%).
Discussion and Conclusion: Although orthopedic surgeons and neurosurgeons are both intensively trained to treat a similar breadth of spinal pathologies, physicians vary in their referral patterns based on type of pathology, location, and intended surgery. It is essential that spine surgeons provide education to PCPs and other physician colleagues to ensure less biased referral patterns.
Abstract Number 7: Does Orthopedic Residency Training Result in Improved Patient Outcomes? A Retrospective Case–control Study
James Ritter, Stephen Kareha, Rett Holmes
Orthopedic Physical Therapy Residency, St. Luke’s University Health Network, Bethlehem, PA, USA
Introduction/Background: As physical therapy has progressed to include a doctoral degree, so too has the demand for clinicians with superior clinical expertise. Due to an increased emphasis on quality and safety, multiple sources have investigated clinical experience, continuing education, knowledge, and critical reasoning as components of clinical expertise. Researchers have demonstrated that critical reasoning and knowledge are the most important factors and have extrapolated that mentored clinical practice may be the best approach.
Given that medical residency training leads to superior patient care, the question arises: how valuable is residency training to physical therapy patients? Such programs include 1800 total program hours, with 300 educational hours, 1500 patient care clinic hours, and 150 hours of 1:1 mentoring. Our study examined outcomes for physical therapy residents using a standardized patient measurement tool.
Methodology and Statistical Approach: Our study utilized data from Focus on Therapeutic Outcomes (FOTO), a self-reported instrument that records a patient’s starting functional and pain levels, as well as a final score on completion of an episode of care. Scoring ranges from 0 to 100, where 0 represents a patient’s inability to perform any type of activity with the affected body part, while a score of 100 indicates no limitations in any type of activity.
We selected four physical therapy residents who had 6 months of FOTO data before residency training. Four case–controls were matched for the same amount of clinical experience and educational background. We compared 6 months of data following residency training.
FOTO also compares each physical therapist to a national database encompassing all clinicians and creates a percentile ranking based on effectiveness. Effectiveness is defined as a combination of how much a patient improves on their FOTO score as well as the time required for such improvement to occur. The residents and their case–control matches had a starting effectiveness within the 20–40th percentile ranking.
We used Mann–Whitney rank sum tests to analyze our data.
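The Mann–Whitney rank sum test used here is a nonparametric comparison of two independent groups. Its U statistic can be sketched in a few lines of Python; the values in the assertions are hypothetical, not study data:

```python
def mann_whitney_u(x, y):
    # U counts, over all (x, y) pairs, how often an x value exceeds a
    # y value (ties count as 0.5). The conventional test statistic is
    # the smaller of the two directional U values.
    u_x = sum(1.0 if a > b else 0.5 if a == b else 0.0
              for a in x for b in y)
    u_y = len(x) * len(y) - u_x  # U values sum to len(x) * len(y)
    return min(u_x, u_y)
```

When the two samples do not overlap at all, U is 0; the more the samples interleave, the closer U gets to half of len(x) * len(y).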
Results: There was a nonsignificant difference between residents and case–control matches in starting effectiveness (P = 0.13), with case–control matches having a higher starting value. There was a 22-point difference between residents’ and case–control matches’ 6-month FOTO data following the first year and a half of clinical practice, with residents demonstrating higher average effectiveness. A summary of study findings is provided in [Table 1].
Discussion and Conclusion: Our study suggests that educational and experiential backgrounds may not be good indicators of starting effectiveness. We observed a trend that residency-trained clinicians had greater rank point improvement in effectiveness, suggesting superior outcomes for our patients. The FOTO’s minimal clinically important difference has been shown to be 13 points, meaning our study’s average 22-point increase exceeds this value.
Abstract Number 8: Does Nonalcoholic Fatty Liver Disease Increase the Risk of Cholangiocarcinoma? A Single-Center Retrospective Review
Ana Martinez Tapia, Janak Bahirwani, Christopher Folterman, Kimberly J. Chaput
Internal Medicine Residency (Bethlehem Campus), St. Luke’s University Health Network, Bethlehem, PA, USA
Introduction/Background: Cholangiocarcinoma is a highly aggressive cancer of the bile duct epithelium, usually diagnosed at an advanced, incurable stage. The prognosis for this devastating cancer remains dismal despite available medical treatment options. Limited research has been done to provide new insights into the evolving risk factors and pathogenesis of cholangiocarcinoma. Some studies have shown an association between nonalcoholic fatty liver disease (NAFLD) and cholangiocarcinoma, mainly intrahepatic cholangiocarcinoma (ICC). The major risk factors for NAFLD include central obesity, impaired fasting glucose, dyslipidemia, and metabolic syndrome. In this study, we sought to determine a possible association between NAFLD and cholangiocarcinoma.
Methodology and Statistical Approach: We retrospectively reviewed the electronic medical records of adults >18 years old with diagnoses of NAFLD, ICC, and extrahepatic cholangiocarcinoma (ECC) based on International Classification of Diseases (ICD) code, imaging, or pathological report between October 2008 and March 2019. Exclusion criteria were alcoholic liver disease, primary sclerosing cholangitis, hepatitis B and C, HIV infection, and inflammatory bowel disease (Crohn’s disease and ulcerative colitis). We conducted separate Fisher’s exact tests to examine possible contributors such as fatty liver disease, metabolic syndrome and subsequent risks, impaired fasting glucose, body mass index, dyslipidemia, arterial hypertension, and coronary artery disease.
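Fisher’s exact test, as used here, evaluates a 2x2 contingency table (e.g., presence of a risk factor by tumor location) by summing hypergeometric table probabilities. A minimal pure-Python sketch of the two-sided test follows; the counts in the assertions are hypothetical, not the study’s data:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    # Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].
    # With all margins fixed, cell "a" follows a hypergeometric distribution.
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Probability of the table whose top-left cell equals x.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as observed
    # (small relative tolerance guards against floating-point ties).
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))
```

The balanced table [[2, 2], [2, 2]] gives p = 1 (every table is at least as extreme), while a strongly associated table such as [[10, 0], [0, 10]] gives a very small p.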
Results: A total of 59 patients with a diagnosis of cholangiocarcinoma were analyzed, subdivided into ICC (n = 48) and ECC (n = 10). Mean age of patients with ICC and ECC was 74 and 71 years, respectively. Gender distribution was 54% female and 45.8% male in the ICC group and 60% female and 40% male in the ECC group. In the ICC group, 18% had fatty liver disease compared to 0% in the ECC group (P = 0.3). In the ICC group, 37.5% had metabolic syndrome versus 30% of patients with ECC, but this difference was not statistically significant (P = 0.7). Each individual risk factor for metabolic syndrome was further analyzed, including hypertension, hyperlipidemia, obesity, impaired fasting glucose, and coronary artery disease. Among these, 81% of patients with ICC had hypertension compared to 50% of ECC patients, a significant difference (P = 0.05). Hyperlipidemia was present in 66% of ICC patients compared to 40% of ECC patients, which was not significantly different (P = 0.1).
Discussion and Conclusion: Our study found no significant association between NAFLD and cholangiocarcinoma, with no significant association for gender, age, or metabolic syndrome and its individual risk factors except for hypertension (which, to our knowledge, has not been found in previous studies). We acknowledge that our sample size was small, with greater representation of ICC versus ECC patients. In addition, diagnosis of NAFLD was obtained mainly by imaging and histopathology results rather than using ICD codes, which may have resulted in underestimation of NAFLD in our sample compared to the rate of 28% in the general population. Further research is needed to assess possible risk factor associations with cholangiocarcinoma.
Abstract Number 9: Evaluation of Applicant Interview Scores as a Predictor of Performance in Residency
Anna Yang, Anthony Moon, Krista Morley, Connie Lorenzo, Scott Melanson, Rebecca Jeanmonod
Emergency Medicine Residency (Bethlehem Campus), St. Luke’s University Health Network, Bethlehem, PA, USA
Introduction/Background: When assessing applicants, residency programs consider a variety of factors, such as academic strength, recommendation letters, prior experience, and (often most importantly) the interview. However, there is a lack of consensus on how best to score applicant interviews. Our emergency medicine (EM) residency program has been using a 10 cm visual analog scale (VAS) to globally score applicant interviews. Prior studies at our program demonstrated that the VAS score correlates more strongly with the applicant’s final rank on the match list than any other factor. However, the relationship between interview performance and actual performance during residency training remains unclear. Our study sought to determine whether applicant interview scoring by VAS is consistent with overall residency performance.
Methodology and Statistical Approach: This was an institutional review board-exempt before-and-after study of the last six classes from a single EM residency program. The initial mean VAS score of each resident was obtained from assessments by three interviewing faculty members, conducted at the time the resident applied to the program. These scores were compared to the mean VAS score of each resident as assessed by nine current core faculty members who were blinded to each resident’s initial VAS score. The VAS scoring tool was identical for both groups and consisted of a 10 cm line with designations for “bottom of rank list” and “top of rank list” at the 0 and 10 cm positions, respectively. Faculty members were instructed to score each resident by placing a hash mark on the VAS corresponding to where they would place the resident on a rank list. We analyzed our data using Mann–Whitney rank sum and Wilcoxon signed-rank tests.
Results: VAS scores were compared for 51 residents. Initial VAS scores in mm were normally distributed and ranged from 38.5 to 94.3 (mean = 69.2). Follow-up VAS scores in mm ranged from 13.2 to 90.1 (mean = 69.7). There was no significant difference between the two sets of scores, either overall (P = 0.42) or on repeated measures testing (P = 0.44). Twelve of 51 residents (23.5%) had a significant change in their VAS scores (>20 mm), with eight scores improving (range: 20.92–41.26 mm) and four scores decreasing (range: −26.89 to −65.83 mm).
Discussion and Conclusion: Global applicant interview scoring by VAS is a reasonably good predictor of residency performance at our EM residency program, with only 8% of residents performing below faculty expectations.
Medical Student Poster Presentation Abstracts
Abstract Number 1: Opioid Storage and Disposal Patterns of Patients Following Emergency Department Discharge
Marlee Milkis, Derek Tang, Holly Stankewicz, Valerie Hoerster, Stephanie Litzenberger
Medical School of Temple University/St. Luke’s University Health Network, Bethlehem, PA, USA
Introduction/Background: Opioid overdose is a leading cause of premature death in the United States, and the public health burden imposed by prescription opioid misuse has reached epidemic proportions. The problem is further compounded by leftover pills that may be accidentally ingested or diverted for illicit use. Despite data suggesting that most nonmedical use of prescription opioids stems from sharing among friends and family, there has been minimal research into storage and disposal practices for leftover pills within the medical community, particularly in high-throughput settings. Our study aimed to further elucidate such practices among patients who were prescribed opioid medications following emergency department (ED) discharge.
Methodology and Statistical Approach: This study employed a cross-sectional survey that randomly sampled patients from electronic medical records (EMRs). Eligible participants included patients aged 18–89 years who had received an opioid prescription from an ED within a large teaching hospital network 3 weeks before survey administration. Exclusion criteria were a previous or current EMR diagnosis of opioid dependence/withdrawal, cancer, postoperative pain, dementia, intellectual disability, pregnancy, hospital admission, or a record of a prior opioid prescription within 30 days of the ED presentation date. Patients answered questions about their practices and beliefs related to sharing, storing, and disposing of opioid medications, as well as the sources of information they received on these topics. Our statistical analyses incorporated survey weights to account for sampling design and nonresponse, and we report descriptive outcomes.
Results: Among the 499 sampled patients, 195 (39.1%) responded, and 97 (19.4%) completed the survey. A majority (59 patients, 72.2%) had leftover prescription pills; 42 patients (71.2%) reported storing their pills; 13 (22%) disposed of them; and 4 (6.8%) shared, gave them away, or chose “other” for what was done with remaining pills. Among patients who stored their medication, less than one fourth (23.8%) reported that the storage location was locked. Most patients disposed of their medications in a sink or toilet (53.8%). Medications were returned to an authorized collection site by 30.8% of patients.
Discussion and Conclusion: Most of our study’s ED patients did not practice safe storage and disposal of unused medications. Overprescription of opioid medications, coupled with the need for patient education on proper storage and disposal of these medications, remains a challenge. Limitations of our survey analysis (e.g., sampling design, nonresponse, and self-reporting) may mean our results underestimate the true extent of unsafe storage and disposal practices.
Abstract Number 2: Risk Factors for Postpartum Medroxyprogesterone-Persistent Bleeding
Dhanalakshmi Thiyagarajan, Calliope O’Shea, James Anasti
Medical School of Temple University/St. Luke’s University Health Network, Bethlehem, PA, USA
Introduction/Background: Contraception is an important component of postpartum care. Patients are frequently counseled on receiving medroxyprogesterone acetate immediately postpartum (MPA PP) as a convenient option: MPA PP can be given before discharge, is safe, works immediately, and is injected only four times per year, so new mothers do not have the burden of remembering daily contraception. However, irregular bleeding is a common problem with MPA PP and contributes to discontinuation of this contraceptive method. Factors that increase the risk of irregular bleeding remain elusive; therefore, our study sought to compare patients who complained of MPA PP bleeding with those who did not.
Methodology and Statistical Approach: Over a 30-month period (January 2016–July 2018), we retrospectively reviewed charts of patients in a suburban hospital network who received MPA PP and pursued follow-up between 3 and 12 months after receiving their first dose. We collected data about various patient characteristics, as well as the incidence of persistent bleeding complaints, type of bleeding pattern, MPA PP continuation or termination, and reason for termination. We conducted between-group comparisons using independent-samples t-tests or Chi-square tests as appropriate, with a Bonferroni correction applied for the multiple comparisons.
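The Chi-square test with Bonferroni correction described here can be illustrated for a 2x2 table in pure Python. The counts and number of tests below are hypothetical, not the study’s data; with 1 degree of freedom, the Chi-square statistic is a squared standard normal, so its p-value is available via the error function:

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    # Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]].
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 df, chi-square is a squared standard normal, so the
    # p-value is P(|Z| > sqrt(stat)) = erfc(sqrt(stat / 2)).
    p = erfc(sqrt(stat / 2))
    return stat, p

def bonferroni_significant(p, alpha=0.05, n_tests=1):
    # Bonferroni correction: compare each p-value against alpha / n_tests,
    # which controls the familywise error rate across multiple tests.
    return p < alpha / n_tests
```

With, say, seven comparisons, an uncorrected p of 0.04 would no longer count as significant, because the per-test threshold drops to 0.05 / 7 ≈ 0.0071.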
Results: Of the 435 patients who received MPA PP, 222 (51%) discontinued its use for nonbleeding-related reasons. Of the remaining 213, 105 (49%) complained of bothersome irregular bleeding, and 49 (47%) stopped MPA PP as a result. The average time to a bleeding complaint was 5.4 ± 2.1 months. Only cesarean delivery and breastfeeding were associated with MPA PP bleeding (P < 0.0001 for both).
Discussion and Conclusion: Based on our study, breastfeeding is a risk factor for bleeding side effects, and cesarean delivery is a protective factor against bleeding side effects caused by MPA PP. The pathophysiology for these risk and protective factors for MPA PP bleeding is still uncertain. Although MPA PP is a convenient contraceptive method following delivery, its unpredictable bleeding patterns are a deterrent for continuation. Our results may help guide physicians as they offer individualized contraceptive counseling to future patients to improve patient satisfaction.
Abstract Number 3: Efficacy of Wellness Resources on Lessening Medical Student Stress
Manasa Srivilli, Alison Von Deylen, Emily Du, Brittany Caruso, Jillian Stone, Akshay Shanker
Medical School of Temple University/St. Luke’s University Health Network, Bethlehem, PA, USA
Introduction/Background: The demands of a career in medicine are extremely high. Both personal and professional distress have been associated with poor outcomes for medical students, including burnout, loss of empathy, thoughts of dropping out, and even suicide. Addressing medical student wellness is critical to the development of well trained healthcare providers, since wellness is associated with positive outcomes for students, both personally and professionally, as well as for healthcare institutions, including medical schools and hospitals.
Previous findings have revealed that unprofessional behaviors such as cheating and dishonesty are more common among students with poor mental health than among peers with better mental health. For medical schools, a health risk assessment and accompanying wellness program can improve the productivity, success, and career satisfaction of students and physicians. Offering wellness opportunities that encompass multiple methods and approaches ensures a comprehensive program that is better able to meet the unique needs of individual students. Therefore, it is important to first characterize the risk factors for poor mental health in medical students in order to subsequently tailor wellness services that address these factors. Our study sought to determine associations between risk factors and mental health among medical students at the regional campus of an Association of American Medical Colleges-affiliated medical school.
Methodology and Statistical Approach: We conducted a cross-sectional study examining the experiences and perceptions of medical students across three class years in relation to their self-reported mental health. We created a survey using SurveyMonkey© that collected data about students’ educational stressors, wellness behaviors, and mental health resources. Mental health was measured using the Mental Health Continuum-Short Form (MHC-SF). Responses were split into three groups of “flourishing,” “moderate,” or “languishing” mental health. We then conducted separate Chi-square tests in SPSS version 25 (IBM Corp., Armonk, NY, USA) to determine the association between MHC-SF score categories and gender, class year, and use of at least one wellness resource. We also examined factors contributing to mental health status among medical students by calculating the proportion of students who reported various stressors, activities, and use of resources.
Results: Our surveyed sample was classified into the “flourishing” or “moderate” groups. There was no significant association between mental health category and gender, class year, or use of wellness resources. We also found that preclinical workload, board examinations, and academic performance were the three factors contributing most strongly to student stress. Close to 50% of students reported not using wellness resources. Among the students who did access wellness resources, the two main resources used were St. Luke’s fitness centers and Student Government Association organizations. Close to 75% of students reported a perceived ability to cope well with the stressors of medical school, and over 50% reported no need for mental health services. Overall, a majority of students reported satisfaction with their decision to attend medical school.
Discussion and Conclusion: Based on our results, preclinical workload, board examinations, and academic performance were the three major contributors to medical student stress. However, the majority of students reported being able to cope with the stress of medical school, with “flourishing” or “moderate” mental health. In addition, we found that many students did not use wellness resources and felt no need to do so. These results provide insight into the types of wellness programs and resources that may be useful in combating medical student stress and burnout.
Although our preliminary study did not find significant differences between groups, our findings can guide future research examining the major factors, both positive and negative, that are contributing to medical student wellness. Future research should also investigate specific strategies that students are using to combat their medical school related stress. We plan to conduct a follow up study at a later date and hope to further clarify common stressors as well as possible solutions to alleviate these stressors.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Ethical conduct of research
All research projects presented during the St. Luke’s University Health Network Annual Research Symposium were verified to have either appropriate Institutional Review Board approvals or exemptions. For case reports, proof of appropriate patient consent documentation was required. In all instances, appropriate EQUATOR guidelines (see https://www.equator-network.org/reporting-guidelines/) for scientific reporting were followed.
[Figure 1], [Figure 2], [Figure 3], [Figure 4], [Figure 5]
[Table 1], [Table 2], [Table 3]