CONFERENCE ABSTRACTS AND REPORTS
Year : 2015  |  Volume : 1  |  Issue : 1  |  Page : 32-40

Scientific abstracts from the 2015 St. Luke's University Health Network Annual Research Symposium


Date of Web Publication: 29-Dec-2015


Source of Support: None, Conflict of Interest: None



How to cite this article:
. Scientific abstracts from the 2015 St. Luke's University Health Network Annual Research Symposium. Int J Acad Med 2015;1:32-40

How to cite this URL:
. Scientific abstracts from the 2015 St. Luke's University Health Network Annual Research Symposium. Int J Acad Med [serial online] 2015 [cited 2021 Apr 16];1:32-40. Available from: https://www.ijam-web.org/text.asp?2015/1/1/32/172707

Guest Editor

Jill C. Stoltzfus

Director, The Research Institute, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA.

Background Information: The annual St. Luke's University Health Network Research Symposium was created in 1992 to showcase residents' and fellows' research and quality improvement projects. The Research Institute Director is responsible for planning and organizing the event, with collaboration and consultation provided by the Chief Academic Officer, Graduate Medical Education leadership, residency and fellowship faculty, and the Director of Media Production Services. Residents and fellows submit an application for an oral and/or poster presentation along with an accompanying abstract describing their project. Three physician judges not affiliated with any residency or fellowship program are selected to evaluate the presentations, with first- and second-place cash prizes awarded in both the oral and poster presentation categories.

The following core competencies are addressed in this article: Practice-based learning and improvement, Medical knowledge, Patient care, Systems-based practice.


Oral Presentation Abstract



Oral Presentation Abstract Number 1



Validation of the Visual Analog Score for Visualization in Shoulder Arthroscopy With Comparison to a New Grading System


Daniel Avery, Vince Lands, Brett Gibson, Gregory Carolan, Jill Stoltzfus

Department of Orthopaedic Surgery, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: We recently published a study supporting the use of epinephrine in arthroscopic irrigation fluid to enhance visualization in shoulder arthroscopy. Visualization was rated with a visual analog scale (VAS), which was correlated with irrigation fluid hematocrit in a previous study; however, using a VAS for visualization in shoulder arthroscopy has never been formally validated. Therefore, our study sought to formally validate its use and to compare it with a new grading scale that attempted to add more objective criteria.

Methodology and Statistical Approach: Video clips representing different levels of visualization in shoulder arthroscopy were created and randomly evaluated by six different raters who were all fellowship trained sports medicine orthopedic surgeons. Two separate evaluations took place rating visualization according to a VAS and a new grading system. Data were then analyzed to ascertain the interobserver reliability and intraobserver variability. Separate intraclass correlation coefficients (ICCs) with a two-way random effects model were calculated to assess interobserver reliability (average measures) and intraobserver reliability (single measures) in the VAS and grading scales (IBM SPSS Statistics for Windows, Version 23.0. Armonk, NY: IBM Corp, USA) with the objective to determine consistency of responses rather than absolute agreement. We calculated ICCs in lieu of weighted kappa coefficients, which are commonly applied to ordinal data, because weighted kappa coefficients have some notable limitations and are therefore not universally endorsed. Using PASS Software (NCSS, LLC, Kaysville, Utah, USA), a sample size of 6 raters with 40 observations per subject achieves 100% power to detect an ICC of at least 0.50 under the alternative hypothesis (null hypothesis ICC = 0.00) at alpha = 0.05.
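The consistency-type intraclass correlations described above can be computed directly from the two-way ANOVA mean squares. A minimal sketch follows, using the McGraw and Wong ICC(C,1) ("single measures") and ICC(C,k) ("average measures") formulas; the rater matrix here is hypothetical illustration, not study data.

```python
import numpy as np

def icc_consistency(x):
    """Two-way consistency ICCs (McGraw & Wong ICC(C,1) and ICC(C,k))
    from an n-subjects x k-raters score matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    # Two-way ANOVA mean squares
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)                 # subjects
    mse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum() \
          / ((n - 1) * (k - 1))                                          # residual
    single = (msr - mse) / (msr + (k - 1) * mse)   # ICC(C,1), single measures
    average = (msr - mse) / msr                    # ICC(C,k), average measures
    return single, average
```

Because the consistency form ignores systematic rater offsets, a set of raters who differ only by a constant shift still yields an ICC near 1.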

Results: Using the VAS for visualization, the interobserver reliability showed a strong degree of consistency (ICC = 0.96, 95% confidence interval [CI] = 0.93–0.98). Likewise, the intraobserver variability exhibited a strong degree of consistency for five of the six raters with the ICCs ranging from 0.87 to 0.92. Using the shoulder arthroscopy grading scale, the interobserver reliability showed a strong degree of consistency (ICC = 0.97, 95% CI = 0.94–0.98). The intraobserver variability also demonstrated moderate to strong consistency for five of the six raters with ICCs ranging from 0.61 to 0.90.

Discussion and Conclusion: Visualization in shoulder arthroscopy has lacked a validated measure. The results of our study show a high degree of consistency for interobserver reliability and intraobserver variability using both the VAS and the visualization in shoulder arthroscopy grading system. Both forms appear to be reliable methods for qualifying visualization.


Oral Presentation Abstract Number 2


Stabcric: Surgical Technique Against Bougie Cricothyrotomy

Jeremy Kadish, Edwin Layng, Matthew Berrios, John Pester

Department of Emergency Medicine, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: Performing a surgical airway is a last-resort heroic measure when a physician cannot ventilate or intubate a patient. The current standard open surgical technique involves many steps and requires the use of multiple tools including retractors. Each of these steps introduces an opportunity for error and increased time to securing the airway. This observational study examined whether the bougie-assisted cricothyrotomy is easier to learn and faster to perform than the classically taught open surgical method.

Methodology and Statistical Approach: This was a single-center, randomized, crossover study using pig trachea models. Participants included 12 medical students (MS3 and MS4) at St. Luke's University Hospital. Volunteers were randomized to one of the two techniques, and then trained with an instructional video prior to the procedure. After a 4-week washout period, the same participants were brought back to perform the other cricothyrotomy technique. The primary outcome measure was time to correct endotracheal tube placement. Secondary outcomes included time spent learning each technique and number of attempts at tube placement. Data were analyzed using SPSS version 22 (IBM Corporation, Armonk NY, USA). Wilcoxon signed rank tests were conducted to compare time to endotracheal tube placement as well as time spent learning each technique.
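The paired, non-parametric comparison described above can be sketched with SciPy's Wilcoxon signed-rank test; the per-participant times below are hypothetical placeholders, not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical paired times (seconds): each participant performed both techniques
open_surgical = [310, 295, 420, 250, 330, 390, 280, 312, 405, 260, 350, 300]
deltas        = [ 90,  95, 100, 105, 110, 115, 120, 125, 130, 135, 140, 145]
bougie = [o - d for o, d in zip(open_surgical, deltas)]

# Paired non-parametric test, matching the crossover design
stat, p = wilcoxon(open_surgical, bougie)
```

With only 12 participants and no ties in the paired differences, SciPy uses the exact null distribution rather than a normal approximation.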

Results: Median time to placement for open surgical versus bougie-assisted was 310.5 s (interquartile range [IQR], 235.75–418.5 s) versus 195.5 s (IQR 162.75–284.5 s), respectively (P = 0.034). Median time to learn for open surgical versus bougie-assisted technique was 339 s (IQR 287.25–436.75 s) versus 249.5 s (IQR 166.75–300.75 s), respectively (P = 0.005).

Discussion and Conclusion: This study demonstrated that the bougie-assisted method is quicker to learn and perform when compared to traditional open surgical cricothyrotomy. The number of attempts to achieve proper placement was not significantly different between techniques. Emergency cricothyrotomy is a time-dependent procedure often performed on a critically ill patient, in whom an airway has not been secured by other methods. While there are many studies comparing the surgical technique to others, this study offers a unique addition to existing literature. We selected an inexperienced group regarding both techniques, eliminating procedure-learning bias. In addition, this study used a crossover analysis having the same participants perform each technique, drawing a realistic comparison of both procedures. The limitations are the single-center nature and a small sample size. However, our results suggest that the bougie-assisted technique is preferred when performing an emergent cricothyrotomy. A larger, sufficiently powered study is required to verify these conclusions.


Oral Presentation Abstract Number 3


Short Versus Long Intramedullary Nails for Treatment of Unstable Intertrochanteric Femur Fractures

Vamsi Kancherla, Paul Morton, Chinenye Nwachuku, William Delong

Department of Orthopaedic Surgery, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: Unstable intertrochanteric (IT) hip fractures treated with short intramedullary nails (IMNs) may involve reduced estimated blood loss (EBL), operative time (op-time), and transfusion rates, but may result in revision surgery due to a periprosthetic fracture. Long IMNs may reduce the need for revision surgery at the expense of increased EBL and op-time. Our study sought to compare short and long IMNs.

Methodology and Statistical Approach: A prospective, randomized, comparative pilot study was initiated at one institution from 2012 to 2014 that treated unstable IT fractures (orthopaedic trauma association 31 A2.2 or greater) with either a short IMN (Group 1) or a long IMN (Group 2). Demographics, length of stay (LOS), EBL, op-time, transfusion rate, and complications/revisions were recorded. Clinical outcomes were also assessed by visual analog scale (VAS) scores and an SF12 health survey. A Student's t-test for parametric data and a Fisher's exact test for nonparametric (categorical) data were used to determine significance (P< 0.05).
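The analysis plan above pairs a t-test for continuous outcomes with Fisher's exact test for categorical ones; a minimal sketch using SciPy follows. The per-patient values and 2x2 counts are hypothetical, chosen only to show the calls.

```python
from scipy.stats import ttest_ind, fisher_exact

# Hypothetical per-patient EBL (mL): short nails (group 1) vs long nails (group 2)
ebl_short = [100, 120, 90, 150, 160, 130, 140, 110, 170, 130]
ebl_long  = [200, 240, 180, 260, 210, 250, 190, 230, 270, 220]
t_stat, p_ebl = ttest_ind(ebl_short, ebl_long)   # two-sample t-test

# Transfusion as a 2x2 table: rows = group, columns = [transfused, not transfused]
odds_ratio, p_tx = fisher_exact([[10, 11], [9, 10]])
```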

Results: Group 1 (5 males, 16 females) and Group 2 (1 male, 18 females) had a mean age of 86.3 and 84.1 years, respectively [Table 1]. LOS, EBL, transfusion rate, and op-time for Group 1 were 5.9 days, 133 mL, 48%, and 42 min, respectively, while Group 2's values were 5.3 days, 225 mL, 47%, and 57 min, respectively [Table 2]. EBL (P = 0.01) and op-time (P = 0.03) were statistically lower in Group 1. Both groups had 1 perioperative death due to hypoxia (stroke in Group 1, PE in Group 2). Group 1 had 3 late complications: 1 periprosthetic femur fracture requiring revision surgery with a long IMN and 2 lag screw cutouts requiring conversion to total hip arthroplasty. SF12 physical component summary (PCS) and mental component summary (MCS) scores were 40/54.4 (PCS/MCS) and 40/60.7 (PCS/MCS) for Groups 1 and 2, respectively [Table 3]. VAS scores were <1 for both groups.
Table 1: Demographic data
Table 2: Primary outcomes
Table 3: Secondary outcomes


Discussion and Conclusion: Unstable IT hip fractures treated with short IMNs may offer less surgical morbidity but more late complications when compared to long intramedullary nail fixation.


Oral Presentation Abstract Number 4


Leukocytosis Following Splenic Injury: A Comparison of Splenectomy, Embolization, and Observation

Brian Wernick, Ulunna MacBean, Ronnie Mubang, Suzanne Liu, Stanislaw P. Stawicki

Department of Surgery, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: The spleen is one of the most commonly injured solid abdominal organs. Despite this fact, controversies persist about both the natural history and management of splenic injury. There is continued debate regarding the persistence of leukocytosis following both splenectomy and splenic embolization. We sought to compare the composite behavior of white blood cell (WBC) count in patients who underwent clinical observation (O), embolization (E), or splenectomy (S). We hypothesized that when compared to observation and splenectomy, embolization would result in persistent levels of intermediate WBC elevation.

Methodology and Statistical Approach: Following Institutional Review Board approval at participating institutions, a retrospective study of WBC behavior was conducted between March of 2000 and December of 2014. Of all splenic injuries, a semi-random convenience sample was selected for inclusion, resulting in a representative sample of each subgroup (O, E, and S). Basic demographics and injury severity data were abstracted. Composite graphs of WBC from the time of trauma admission to the latest available WBC draw were constructed. In addition to raw WBC data, a seven-period moving average was plotted for each group up to 1080 h (45 days) postsplenic injury. We also conducted analysis of variance to compare subgroups.
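The seven-period smoothing described above is a simple trailing moving average; a minimal sketch follows. The WBC series is hypothetical, ordered by draw time for one group.

```python
import numpy as np

def moving_average(values, window=7):
    """Trailing moving average over the given window (mirroring the
    seven-period smoothing described in the methods)."""
    v = np.asarray(values, dtype=float)
    kernel = np.ones(window) / window
    # 'valid' mode returns only windows fully inside the series
    return np.convolve(v, kernel, mode="valid")

# Hypothetical WBC draws (x10^9 cells/L) ordered by time
wbc = [17.0, 16.5, 18.2, 15.9, 16.8, 17.4, 16.1, 15.0, 14.2, 13.8]
smoothed = moving_average(wbc, window=7)
```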

Results: A total of 841 data points from 75 patients (20 S, 22 E, and 33 O) were studied. Median age was 39.5 years with median injury severity score of 19, admission Glasgow coma scale of 15, and 81% males. As shown in [Figure 1], during the first 30 postinjury days, the three groups separated into three distinct WBC “layers,” with the O group demonstrating the lowest composite WBC levels (11.0 ± 5.03), followed by intermediate values in the E group (13.1 ± 4.97), and persistent elevations in the S group (17.4 ± 6.84) (P< 0.001). During the subsequent 15 postinjury days, the WBC in O and E groups coalesced and normalized (7.67 ± 2.38 vs. 7.06 ± 2.13, P > 0.05), while leukocytosis continued in the S group (14.1 ± 6.33, P< 0.002).
Figure 1: Temporal patterns of white blood cell count behavior for patients who underwent splenectomy (black line), embolization (red line), and observation (green line). Data shown from the time of injury until 1080 h


Discussion and Conclusion: Up to 30 postinjury days, both S and E groups were associated with significant WBC elevations compared to the O group. Elevations noted in the E group were of intermediate magnitude when compared to O and S groups. Beyond 30 days following the injury, the O and E groups coalesced, and their composite WBC levels were normalized. This latter finding may represent indirect evidence that splenic embolization does not result in permanent loss of splenic function.


Poster Presentation Abstract



Poster Presentation Abstract Number 1


Hospital Admissions for Lower Extremity Infections: Comorbidities, Procedures, and Length of Stay

Eric Bronfenbrenner, Elliot Busch, Brent Bernstein

Department of Podiatry, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: Hospital length of stay (LOS) has been examined for links to patient satisfaction, hospital costs, and comorbidities. Patients admitted to hospitals for lower extremity infections (LEIs) experience great variability in LOS. Although numerous studies have looked at the influence of diabetes mellitus (DM), peripheral vascular disease (PVD), and other factors on LOS, a literature review found no studies examining LOS for LEI. The objective of this study was to describe factors that may affect LOS for patients with LEI in a multicenter university hospital network.

Methodology and Statistical Approach: We conducted a retrospective chart review. During a one-year period, 252 patients were admitted to the hospital network by foot and ankle surgeons for LEI. After discharge, LOS was recorded in days. Patients were then retrospectively grouped by (1) age in decades, (2) preexisting DM, PVD, both diseases, or neither disease, (3) whether patients required amputation, debridement, or incision and drainage, or any combination of these procedures during the stay, (4) undergoing angiogram or lower extremity (LE) bypass during the stay, and (5) admission day of the week. We reported descriptive statistics.
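The descriptive grouping above amounts to computing mean LOS per patient group; a small stdlib-only sketch with hypothetical records (group labels and LOS values are placeholders, not study data):

```python
from collections import defaultdict

# Hypothetical (comorbidity group, LOS in days) records
records = [
    ("DM only", 6), ("DM only", 5), ("PVD only", 7),
    ("DM+PVD", 9), ("DM+PVD", 8), ("neither", 4),
]

totals = defaultdict(lambda: [0, 0])   # group -> [sum of LOS, patient count]
for group, los in records:
    totals[group][0] += los
    totals[group][1] += 1

mean_los = {g: s / n for g, (s, n) in totals.items()}
```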

Results: The mean LOS for LEI was 6.3 days. Patients with both DM and PVD had a mean LOS of 8.8 days compared to approximately 6 days if they had either PVD or DM. Patients requiring any combination of podiatric surgery had a mean stay of 9.6 days compared to approximately 6 days if they only required one procedure. Patients undergoing an angiogram stayed 8.8 days while patients undergoing LE bypass stayed 13.1 days. Among age groups, patients aged 60–69 years had the longest mean LOS (7.3) while patients <40 years of age had the shortest mean LOS (4.0). Patients admitted toward the end of the week had a slightly longer LOS.

Discussion and Conclusion: This retrospective chart review gives insight into some of the factors that can influence a patient's hospital stay for LEI. LOS in relation to age revealed a bell-shaped curve of distribution. It is unclear if age alone led to this result, or if it was simply a reflection of increased comorbidities in older decades of life. Patients diagnosed with both DM and PVD had longer LOS by approximately 2 days, regardless of interventions performed while in house. Performance of any single podiatric procedure did not appear to alter LOS. However, for multiple procedures, there was a marked increase in LOS. Vascular interventions were also associated with a larger increase in LOS, which is expected given that each procedure requires at least a day. Logically, patients requiring only intravenous antibiotics without any further intervention experienced the shortest LOS. Admissions occurring on a Thursday, Friday, or Saturday increased expected LOS by about 1 day. This may be due to limited interventions over the weekend or a reflection of continuity of care. This chart review illustrates the more common features of a patient with LEI requiring hospitalization. We hope to collect data over multiple years to conduct inferential statistical analyses and hopefully help streamline medical plans by giving both patients and physicians a better sense of the expected hospital course.


Poster Presentation Abstract Number 2


Influenza Annual Vaccine: Early Versus Late Administration

Avijeet Dut, Colleen Fitzgerald, Helaine Levine

Family Medicine Residency-Warren Hospital, St. Luke's University Health Network, Phillipsburg, New Jersey, USA

Introduction/Background: The best approach for reducing influenza-related morbidity and mortality is to prevent influenza by actively immunizing all individuals >6 months of age. Immunocompetent adults are predicted to develop immunity to vaccine strains 2 weeks after vaccine administration, while children take longer to develop immunity. Based on published local, regional, and national influenza activity data and the development of immunity following vaccination, practitioners in New Jersey should strive to get patients immunized before mid-November and should continue to offer vaccines through April. Centers for Disease Control and Prevention data show that as of early November of the last two flu seasons, more than half of Americans had not yet received a flu vaccination and lacked the protection it offers.

Methodology: This retrospective cohort study explored the timeliness of influenza vaccine administration at a residency-based family medicine outpatient practice, Coventry Family Practice (CFP). Bill dates for patients over 2 years of age during the 2013–2014 and 2014–2015 influenza seasons were obtained by searching for International Classification of Diseases-9 influenza billing codes and analyzed for month of administration.
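Classifying a bill date as early-season versus late-season reduces to a date comparison against the November 1 cut-off used in the results. A stdlib sketch, where the July boundary for assigning a date to a flu season is an assumption of this example:

```python
from datetime import date

def is_early_season(bill_date):
    """Early-season = on or before October 31 of the season's start year
    (a proxy for the 'before November 1' cut-off used in the results).
    Dates from January to June are assigned to the prior season's start year."""
    season_start_year = bill_date.year if bill_date.month >= 7 else bill_date.year - 1
    return bill_date <= date(season_start_year, 10, 31)

dates = [date(2013, 9, 15), date(2013, 10, 20), date(2013, 12, 5), date(2014, 1, 10)]
early = sum(is_early_season(d) for d in dates)
```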

Results: For all age groups, the percentage of visits where flu vaccine was administered varied little between 2013–2014 and 2014–2015 seasons. In all child age groups, less than half of the total number of flu vaccines was given before November 1st in both years. In the 19–64 years age group, slightly more than half of the total numbers of flu vaccines were given during the early vaccination season in both years. In the >65 year age group, 43% of all flu vaccines were given during the early vaccination season in 2013–2014, but this rose to 61% in 2014–2015. In all child and adult age groups in both years, large numbers of influenza immunizations were given after flu season had started, but few were given after the peak of flu activity when flu risk remains high.

Discussion: Based on this exploratory study, CFP should expand its investigations into timeliness of influenza administration in high-risk groups. Strategies will be developed and implemented that enable early vaccine season access to flu vaccine for more patients of all ages and that increase flu vaccine availability for those who come later in the season.


Poster Presentation Abstract Number 3


The Dilemma: Current Systemic Inflammatory Response Syndrome Criteria Applied to Obstetric Patients

Melissa Chu Lam, Ingrid Paredes, James Anasti

Department of Obstetrics and Gynecology, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: Maternal sepsis is an infrequent but important complication of pregnancy, childbirth, and the puerperium. It is estimated that puerperal sepsis causes at least 75,000 maternal deaths every year, mostly in low-income countries. Studies from high-income countries report maternal morbidity incidence due to sepsis of 0.1–0.6/1000 deliveries. A study published in October 2013 reported that severe sepsis and sepsis-related deaths are actually rising in the USA, and delay in diagnosis and treatment increases maternal sepsis-related deaths. The normal physiologic changes in pregnancy likely contribute to the delayed recognition of sepsis. The American College of Chest Physicians/Society of Critical Care Medicine define sepsis as a systemic inflammatory response syndrome (SIRS) secondary to infection (culture-proven or visible), with 2 of the following 4 criteria required to be present: temperature >38°C (100.4°F) or <36°C (96.8°F); respiratory rate (RR) >20 breaths/min or PCO2 <32 mm Hg; heart rate (HR) >90 beats/min; or white blood cell count (WBC) >12 × 10⁹ cells/L or <4 × 10⁹ cells/L, or bands >10%. Recently, the modified early warning system (MEWS) was developed as a tool to identify patients who are at risk for catastrophic decompensation. However, because of demographic and physiologic differences between obstetric patients and the general population in which SIRS and MEWS were developed, it is not certain whether these scoring systems are applicable to obstetric patients. In this study, we aimed to describe how many postpartum patients in our institution with no suspicion of infection would meet SIRS criteria.
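The four SIRS criteria listed above translate directly into a counting function; a minimal sketch (argument names and units are this example's assumptions: temperature in Celsius, WBC in ×10⁹ cells/L, bands in percent):

```python
def sirs_criteria_met(temp_c, rr, pco2, hr, wbc, bands_pct):
    """Count how many of the four SIRS criteria are satisfied."""
    met = 0
    if temp_c > 38 or temp_c < 36:            # temperature criterion
        met += 1
    if rr > 20 or pco2 < 32:                  # respiratory criterion
        met += 1
    if hr > 90:                               # heart rate criterion
        met += 1
    if wbc > 12 or wbc < 4 or bands_pct > 10: # white blood cell criterion
        met += 1
    return met

def meets_sirs(**vitals):
    """SIRS requires at least 2 of the 4 criteria."""
    return sirs_criteria_met(**vitals) >= 2
```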

Methodology and Statistical Approach: Using a retrospective chart review, we collected temperature, HR, RR, and WBC from 200 postpartum patients and calculated the percentage that would meet SIRS criteria as previously defined (both individual percentages for each SIRS criterion category and the overall percent of the patients who met the SIRS criteria).

Results: Our postpartum patients had a mean temperature of 98.2°F, HR of 78.5, and WBC of 14.3; the median RR was 20 (due to its skewed distribution). Individual SIRS category frequencies revealed 0/200 patients with temperature >38°C (100.4°F), 1/200 (0.5%) with temperature <36°C (96.8°F), 29/200 (14.5%) with HR >90 beats/min, 1/200 (0.5%) with RR >20 breaths/min, 136/200 (68%) with WBC >12 × 10⁹ cells/L, and 0/200 with WBC <4 × 10⁹ cells/L. A total of 26/200 patients (13%) had ≥2 SIRS criteria.

Discussion and Conclusion: Our study revealed poor specificity of SIRS criteria in our obstetrical population. The physiologic changes of pregnancy in the postpartum period often result in a higher HR and WBC count compared to nonpregnant healthy adults, which overlaps with SIRS criteria. Additional tools are needed to predict sepsis in pregnant women to reduce sepsis-related mortality in this patient population.


Poster Presentation Abstract Number 4


Traditional Autopsy Versus Computed Tomography-Imaging Autopsy: A Case of “Synergistic Disagreement”

Maggie Lin, Stanislaw Stawicki, Noran Barry, Ike Akusoba, Heidi Hon, Marissa Cohen, Pratik Shukla, James Cipolla, Brian Hoey

Departments of Surgery and Radiology, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: Decreasing traditional autopsy (TA) rates have produced a crisis in key areas of medicine, including traumatology. Other affected areas include medical education and quality improvement. To help remedy this negative trend, a number of imaging autopsy (IA) initiatives were conceived, including the CATopsy project at our institution. While the concept is promising, little is known about the correlation between TA and IA findings. We examined the congruence between TA and IA in a group of trauma fatalities, hypothesizing that there would be a moderate amount of agreement between them.

Methodology and Statistical Approach: Following Institutional Review Board approval, we conducted a prospective, observational study of TA versus IA at our Level I Trauma Center between June of 2001 and May of 2010. All decedents in the current study underwent a postmortem, whole-body, noncontrast computed tomography (CT), “a pan-scan” that was interpreted by an independent, board-certified radiologist who specializes in CT imaging. A fully independent TA was also performed by a board-certified pathologist. Autopsy findings were grouped into previously defined categories. Comparisons of TA and IA included graphical representation of findings and computation of the degree of agreement (kappa coefficient). Chi-square testing was also used to identify which modality detected potentially fatal findings more frequently in each defined category.
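The degree-of-agreement statistic named above is Cohen's kappa, which can be computed from a square agreement table of category counts. A minimal sketch (the tables in the test are illustrative, not study data); note that kappa below 0, as in the results, means agreement worse than chance:

```python
def cohens_kappa(table):
    """Unweighted Cohen's kappa for a square agreement table
    (rows = categories per modality A, columns = per modality B)."""
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of counts on the diagonal
    observed = sum(table[i][i] for i in range(len(table))) / n
    # Expected agreement under independence of the two modalities
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)
    return (observed - expected) / (1 - expected)
```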

Results: Twenty-five trauma decedents (19 blunt; 9 female), with a median age of 32.5 years, had a total of 435 unique findings on either IA or TA. A total of 34 categories of findings were analyzed. Overall, agreement between IA and TA was worse than what chance would predict (kappa = –0.58). The greatest agreement was seen in injuries involving the axial skeleton and intracranial/craniofacial trauma. Most disagreements occurred in soft tissue, ectopic air, and incidental findings. Potentially fatal findings were found on both TA and IA in 48/435 (11%) instances, on TA only in 79/435 (18%), and on IA only in 73/435 (17%). Category-level disproportions were evident, with one modality identifying potentially fatal findings in a defined category more frequently than the other. TA identified more potentially fatal findings related to solid organ injury and heart/great vessels, while IA revealed more incidental/procedure-related and ectopic air/fluid findings.

Discussion and Conclusion: Our study does not support substitutive use of noncontrast CT-based IA. However, our findings suggest that the two types of postmortem evaluation may be complementary (and thus synergistic) when performed together. Further research is required in this important area of investigation, focusing on applications of IA in medical education and quality improvement in the absence of TA.


Poster Presentation Abstract Number 5


Comparison of Transradial and Transfemoral Approach for Coronary Artery Bypass Graft Angiography and Intervention: Systematic Review and Meta-Analysis

Yugandhar Manda, Jamshid Shirani

Department of Cardiology, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: The transradial (TR) approach for coronary artery angiography and intervention is gaining popularity and is associated with reduced morbidity and mortality in comparison to the transfemoral (TF) approach. However, the safety and effectiveness of the TR approach for bypass graft angiography and intervention are not well studied. Therefore, we conducted a meta-analysis to investigate this issue.

Methodology and Statistical Approach: We performed a systematic review of the literature and identified one randomized controlled trial and six observational studies (n = 1370) that specifically addressed this issue. We then performed a meta-analysis using Review Manager (RevMan version 5.3, The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark) to compare the characteristics and outcomes of each approach, including vascular access site complications, major adverse cardiovascular events (MACEs), access site crossover rates, fluoroscopy time, procedure time, and contrast volume use. We analyzed the data using both fixed and random-effects models, which yielded similar results. Between-study heterogeneity as measured by the I2 statistic was 0%.
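The fixed-effect pooling mentioned above can be sketched as inverse-variance weighting of per-study log odds ratios; RevMan's defaults may differ in detail (e.g., Mantel-Haenszel weighting), so this is one common variant, not a reproduction of the study's computation:

```python
import math

def pooled_odds_ratio(tables):
    """Fixed-effect (inverse-variance) pooled OR from a list of 2x2 tables,
    each [[events_A, non_events_A], [events_B, non_events_B]]."""
    num = den = 0.0
    for (a, b), (c, d) in tables:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # variance of the log OR
        weight = 1 / var                      # inverse-variance weight
        num += weight * log_or
        den += weight
    return math.exp(num / den)
```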

Results: Baseline patient characteristics were similar in both groups. Compared with the TF approach, the TR approach was associated with decreased risk of vascular access site complications ([Figure 1], 1.4% vs. 3.8%; odds ratio [OR]: 0.34, 95% confidence interval [CI]: 0.16–0.75; P = 0.008) and a tendency toward lower MACE ([Figure 1], 2.39% vs. 4.7%; OR 0.57, 95% CI: 0.25–1.3; P = 0.18). There was no significant difference in rates of major bleeding (0.17% vs. 0.57%; OR 0.65, P = 0.58) or in-hospital death (0.29% vs. 0.7%, OR 0.58, P = 0.54). The access site crossover rate was higher with the TR approach (5.16% vs. 0.4%; OR: 6.31, P = 0.0003). The TR approach was associated with fluoroscopy time, procedure time, and contrast volume usage that were comparable to the TF approach (P = 0.22, P = 0.13, and P = 0.84, respectively).

Discussion and Conclusion: The TR approach for bypass graft angiography and intervention reduces vascular access site complications and has comparable fluoroscopy time and contrast volume usage.


Poster Presentation Abstract Number 6


Validity of the Tip Apex Distance as a Predictor of Failure in Cephalomedullary Nails: A Single-Center Study

Paul Morton, Anshul Agarwala, Anup Gangavalli, Nick Caggiano, John Black, William Delong

Department of Orthopaedic Surgery, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: Orthopedic surgeons are always seeking the most effective strategies to fix hip fractures. Since the 1960s, fixed-angle sliding hip screws have been a gold standard. Baumgaertner's landmark article demonstrated that the tip apex distance (TAD), measured on X-ray following fixation with fixed-angle sliding hip screws, predicts the likelihood of lag screw cutout, which leads to significant disability. The goal of this study was to evaluate the use of the TAD measurement in hip fractures fixed with cephalomedullary nails.

Methodology and Statistical Approach: This was a retrospective review of SLUHN orthopedic surgery patients. All patients over age 18 treated with cephalomedullary nails with at least 3 months of follow-up were included. Patients were excluded for inadequate films, incomplete data, a different implant, prior hip fracture, or preexisting deformity. Patients were evaluated for demographics, treatment characteristics, and radiologic findings. Basic statistical analysis was conducted using Chi-square, Kruskal–Wallis, and Mann–Whitney rank-sum tests as appropriate to describe the patient population and treatment outcomes. Statistical significance was defined as P< 0.05.
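The Chi-square comparison of cutout rates by TAD threshold can be sketched with SciPy. The 2x2 counts below are a hypothetical reconstruction implied by the reported rates (13.5% of 52 and 2.7% of 183), not the authors' raw data:

```python
from scipy.stats import chi2_contingency

# Rows: TAD > 25 mm, TAD < 25 mm; columns: [cutout, no cutout]
table = [[7, 45], [5, 178]]
chi2, p, dof, expected = chi2_contingency(table)   # Yates correction by default for 2x2
```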

Results: A total of 677 femur fractures were initially retrieved, of which 235 met inclusion criteria. There were 183 implants with a TAD <25 mm and 52 patients with a TAD >25 mm. There were 12 implants with cutout, 5 patients who required revisions for nonunion, and 3 patients with breakage of implants. The mean TAD was 23.51 ± 10.10 mm for implants that cut out, 21.30 ± 9.37 mm for breakage, 13.35 ± 4.77 mm for nonunion, and 20.10 ± 7.77 mm for implants that went on to successful union. Implants with a TAD >25 mm were significantly more likely to experience cutout (13.5%) compared to those with a TAD <25 mm (2.7%) (P = 0.002).

Discussion and Conclusion: Within our cohort of cephalomedullary nails, the TAD value >25 mm clearly demonstrated a higher likelihood of lag-screw cutout. This study adds to the body of evidence demonstrating that surgeons placing a cephalomedullary nail should aim for a lag screw TAD <25 mm intraoperatively.


Poster Presentation Abstract Number 7


Failed Endometrial Ablation: Who Is at Risk?

Angel Gonzalez Rios, Melissa Chu Lam, Cori Shollenberger, Jessica Wagner, Jill Stoltzfus, James Anasti

Department of Obstetrics and Gynecology, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: About 20% of women who undergo endometrial ablation subsequently need a hysterectomy. Thus, the identification of factors that increase the risk of endometrial ablation failures would be valuable in counseling patients. Through limited chart reviews, a few factors have been identified. Our study sought to identify additional risk factors.

Methodology and Statistical Approach: A retrospective chart review was conducted on patients who underwent hysterectomy for failed endometrial ablation. In addition, a review of randomly selected cases of successful endometrial ablation for the same period was performed. Body mass index (BMI, calculated as weight (kg)/[height (m)]²), postendometrial ablation weight gain, gravidity, parity, cesarean delivery, comorbidities, smoking, gynecologic surgery, hysteroscopy or dilation and curettage (D and C) at the time of endometrial ablation, uterine sound length, and preoperative endometrial biopsy result were compared in separate univariate analyses. To correct for multiple testing, all comparisons were considered significantly different if P < 0.002.
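The corrected significance threshold reads like a Bonferroni-style adjustment. The abstract does not state the exact number of comparisons, so the count below is an assumption chosen to reproduce the stated cutoff (0.05 / 25 = 0.002); a minimal sketch:

```python
# Bonferroni-style threshold: divide the family-wise alpha by the
# number of univariate comparisons being performed.
alpha = 0.05
n_comparisons = 25  # assumption; not stated in the abstract
threshold = alpha / n_comparisons
print(threshold)
```

A comparison is then declared significant only when its individual P value falls below `threshold`, which controls the family-wise error rate at `alpha`.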

Results: During a 10-year period, endometrial ablation was performed in 785 patients, with 202 undergoing subsequent hysterectomy; 271 of the remaining 583 patients were used as a control group. All of the following represented statistically significant risk factors based on differences between the two patient groups: Smoking, multiparity, history of cesarean delivery, previous gynecologic surgery, larger uterine sound length, and lack of hysteroscopy or D and C at the time of endometrial ablation. The following factors were not significantly different between groups: BMI, postendometrial ablation weight gain, and preoperative endometrial biopsy pathology. Of note, 62% of the patients had adenomyosis on uterine pathology.

Discussion and Conclusion: From our detailed chart review, we identified several previously unrecognized factors that increase the risk of endometrial ablation failure. Further research into these factors may help in the counseling of patients regarding treatment options for their abnormal uterine bleeding.


  Poster Presentation Abstract Number 8


Red Blood Cell Distribution Width Variation: A Marker for Preterm Labor?

Angel Gonzalez Rios, Melissa Chu Lam, Matthew Zuber, Jessica Wagner, Jill Stoltzfus, James Anasti

Department of Obstetrics and Gynecology, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: The red blood cell distribution width is a measure of the variation in red blood cell volume. Recently, elevated levels have been associated with several disease processes, such as heart disease and septic shock outcomes. In some studies, this increase in red blood cell distribution width has been attributed to occult inflammation. In normal pregnancies, red blood cell distribution width increases during the last 4 weeks before delivery. We, therefore, hypothesized that as a result of inflammation, preterm patients would have a higher red blood cell distribution width compared to term women.

Methodology and Statistical Approach: In a retrospective case–control study, we compared 150 randomly selected patients who spontaneously delivered before 37 weeks of gestation with 150 patients who delivered at term during the last 10 years at our hospital. Red blood cell distribution width, hemoglobin, hematocrit, and white blood cell count were compared at the day of admission and day of delivery. We excluded individuals with known infection, inflammatory diseases, and hematologic disorders.

Results: Patient age was similar in each group (preterm = 25.65 years, term = 26.17 years). The average gestational age was 33.16 weeks in the preterm group and 39.16 weeks in the term group. Red blood cell distribution width did not differ between the groups (preterm = 13.76, term = 14.26). Subanalysis of preterm patients with preterm premature rupture of membranes or gestational age <35 weeks likewise showed no difference compared with term women.

Discussion and Conclusion: To our knowledge, this is the first study to look at red blood cell distribution width in preterm patients. Based on our study, red blood cell distribution width does not appear to be a significant marker for preterm delivery.


  Poster Presentation Abstract Number 9


Injuries Associated With Supracondylar Femur Fractures

David Roy, Paul Morton, Kirk Jeffers, Patrick Brogle

Department of Orthopaedic Surgery, St. Luke's University Health Network, Bethlehem, Pennsylvania, USA

Introduction/Background: Multiple epidemiologic studies of supracondylar femur fractures have been performed, but none describes the incidence of both orthopedic and nonorthopedic associated injuries in the polytrauma patient. Multiple orthopedic injuries add significant complexity to surgical timing, fixation, implant choice, and mobility, and nonorthopedic injuries further compound this complexity. The purpose of this study was to determine the incidence of injuries and initial operative outcomes associated with supracondylar femur fractures.

Methodology and Statistical Approach: This was a retrospective review of SLUHN inpatient and outpatient records from 2006 to 2014. Subjects were evaluated regarding demographics, treatment characteristics, radiologic findings, and coexisting injuries. Exclusion criteria were as follows: Fracture periprosthetic to a previous knee arthroplasty, nonoperative treatment, open physis, and loss to follow-up. Outcomes were defined as the final available imaging for the subject or the final available imaging prior to further surgical intervention. Data are reported in frequencies and ranges.

Results: Of the 136 distal femur fractures found, 68 were excluded. Included subjects (n = 68) were 16 males and 52 females with an average age at the time of injury of 68 years (range 17–102) and an average BMI of 29 (range 15–48). Forty-four subjects sustained injuries by falling from standing (65%), 13 were in motor vehicle collisions (20%), 2 fell from a height greater than standing (3%), 5 sustained injuries through other mechanisms (7%), and 3 were unknown (4%). Concurrent orthopedic injuries included 11 to the ipsilateral lower extremity (LE) (16%), 4 to the ipsilateral upper extremity (6%), 3 to the contralateral LE (4%), 3 to the contralateral upper extremity (4%), 1 pelvis (1%), and 6 spine injuries (9%). Concurrent nonorthopedic injuries included 6 chest (9%), 2 head (3%), and 3 abdominal (4%). Fifteen surgeons performed the operative fixation of the 68 distal femur fractures, with two-thirds of the surgeries split between two surgeons. Fifty-seven fractures were fixed with a condylar plate (84%), 10 with a retrograde nail (15%), and 1 with an antegrade nail (1%). Average follow-up time for initial outcome was 10 months (range 1–65); 29/68 fractures remained stable in anatomic alignment (43%), 31 remained stable in nonanatomic alignment (46%), and 8 collapsed into varying anatomical positions (12%). Of these 68 operatively fixed fractures, 14 showed arthritic changes that were not present on initial imaging (21%).

Discussion and Conclusion: Guidelines for the assessment, planning, and treatment of patients with supracondylar femur fractures remain ill-defined, with scarce supporting evidence. While better implant design has improved the success rate of fracture repair for certain outcome variables in recent decades, longer-term outcomes are uncertain. This study provides an epidemiological picture of the demographics, fracture types, associated injuries, and operative outcomes for supracondylar femur fractures in adults, thereby contributing to a baseline understanding of this injury and its treatment.


    Figures

  [Figure 1], [Figure 2]
 
 
    Tables

  [Table 1], [Table 2], [Table 3]



 
