Posted By Rachel Linz,
Friday, January 15, 2016
Updated: Tuesday, January 12, 2016
The Reproductive Health Program in Oregon is a little different from its counterparts in most other states. Not only do we administer a Title X grant (Title X of the Public Health Service Act, signed into law by President Richard Nixon in 1970, is the only federal funding dedicated solely to family planning services), but we also administer a Section 1115 family planning demonstration waiver through the Centers for Medicare and Medicaid Services (CMS), despite sitting within our state’s Public Health Division rather than our state’s Medicaid office. Our waiver, called Oregon ContraceptiveCare, or CCare, covers family planning and contraceptive management services for individuals who are U.S. citizens or lawful permanent residents, have household incomes up to 250% of the federal poverty level, and are not enrolled in the state’s Medicaid program. The RH Program’s provider network includes all local public health departments in the state as well as Planned Parenthood health centers, university health centers, community health centers and School-Based Health Centers, totaling 150 clinics statewide. Through this network, we serve over 80,000 clients annually.
One area of focus for the Oregon RH Program has been to increase access to long-acting reversible contraceptives, or LARC methods. These methods, which include contraceptive implants and intrauterine devices, are effective for 3 to 10 years depending on type and have failure rates similar to those of sterilization (see Figure 1). In fact, LARC methods are about 20 times more effective at preventing pregnancy than birth control pills! We provide technical assistance and training for clinicians and billing staff on insertion and removal of LARC devices; billing, reimbursement and maintenance of device stock on site; and best practices in client counseling techniques to increase client success, regardless of which method a client chooses.
As all readers of CSTE Features no doubt know, the United States transitioned to the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10) on October 1, 2015. The Oregon RH Program is fortunate that our clinical data collection is narrowly focused: we only require diagnosis codes for visits under CCare, not for Title X (which covers a broader scope of services than CCare). Because of CMS requirements, CCare visits must include a primary diagnosis code indicating that contraceptive management was the primary purpose of the visit (V25 codes under ICD-9, Z30 codes under ICD-10). To assist our provider network in managing the transition, we created a crosswalk that lists the ICD-9 codes for each contraceptive method alongside the appropriate ICD-10 code, as well as the Healthcare Common Procedure Coding System (HCPCS) supply codes associated with each method type (see Figure 2).
The biggest challenge with coding for the Oregon RH Program under ICD-9 has continued under ICD-10: several contraceptive methods do not have their own unique codes. We’ve all heard about new ICD-10 codes created to document very specific types of injuries in specific locations, but what has not been in the news is that the most effective LARC method, the hormonal implant (<0.5% failure rate), actually lost the unique codes it had under ICD-9! Of the 18 different contraceptive methods available in the U.S., only four have their own specific diagnosis codes: intrauterine devices, oral contraceptives, injectable contraceptives, and natural family planning. Female and male sterilization methods share the same diagnosis codes.
Our solution, which aligns with recommendations from national family planning and coding experts, is the following: for hormonal methods that do not have their own specific codes (the contraceptive implant, patch and ring), we use the codes for “unspecified” contraceptives (Z30.019 for initial encounters, Z30.40 for follow-up or surveillance encounters). For less effective methods that do not have their own specific codes (cervical cap, diaphragm, sponge, female and male condoms, and spermicide), we recommend the codes for “other” contraceptives (Z30.018 and Z30.49). This way, although we cannot determine the specific contraceptive method from diagnosis codes alone, we can determine the approximate level of effectiveness. The bottom line is that other information, such as HCPCS codes and National Drug Code (NDC) numbers, is required to determine exactly which contraceptive methods are dispensed. Additional ICD-10 codes may become available in the future, but for now, tracking ongoing use of certain long-acting methods remains a challenge.
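For staff updating billing or surveillance systems, the fallback rule above can be captured in a small lookup. This is a minimal illustrative sketch, not a production billing routine: the function name and method labels are our own, and methods that kept their own ICD-10 codes (such as IUDs) are deliberately out of scope.

```python
# Fallback Z30 codes for methods without their own ICD-10 diagnosis codes.
# Method labels are illustrative; real systems also rely on HCPCS/NDC data.
HORMONAL_NO_OWN_CODE = {"implant", "patch", "ring"}
LESS_EFFECTIVE_NO_OWN_CODE = {"cervical cap", "diaphragm", "sponge",
                              "female condom", "male condom", "spermicide"}

def fallback_icd10(method, encounter):
    """Return the recommended "unspecified" or "other" Z30 code for a
    contraceptive method lacking its own ICD-10 diagnosis code.
    encounter is "initial" or "surveillance"."""
    if method in HORMONAL_NO_OWN_CODE:
        return "Z30.019" if encounter == "initial" else "Z30.40"
    if method in LESS_EFFECTIVE_NO_OWN_CODE:
        return "Z30.018" if encounter == "initial" else "Z30.49"
    raise ValueError(f"{method}: use its method-specific code")

print(fallback_icd10("implant", "initial"))         # Z30.019
print(fallback_icd10("diaphragm", "surveillance"))  # Z30.49
```

Note that only the effectiveness tier, not the specific method, survives in the diagnosis code, which is exactly the data limitation described above.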
Figure 1. Contraceptive method effectiveness. Most effective methods include the contraceptive implant, intrauterine devices, and sterilization methods. Moderately effective methods include injectables, pills, patches, rings and diaphragms.
Figure 2. A portion of the Oregon RH Program’s ICD-9/ICD-10 crosswalk. Under ICD-10, the hormonal implant lost its unique diagnosis codes while injectable contraceptives gained unique codes. Other methods such as the diaphragm have never had their own unique codes.
Rachel Linz, MPH is an Informatics Training in Place Program (I-TIPP) fellow and senior research analyst with the Reproductive Health Program at the Oregon Health Authority. To learn more about ICD-9 and ICD-10, join subcommittees in the Surveillance/Informatics Steering Committee.
Posted By Emily J. Holubowich ,
Monday, January 4, 2016
Updated: Monday, January 4, 2016
Emily Holubowich, Senior Vice President at CRD Associates, is CSTE’s Washington representative and leads our advocacy efforts in the nation’s capital.
After a couple of fiscal “close calls” this fall—shutdown threats, last-minute budget negotiations, and a couple of stopgap spending measures to keep the government running—Congress ultimately passed and the President quickly signed the Consolidated Appropriations Act, 2016 before heading home for the holidays. This trillion-dollar spending measure provided appropriations for all “discretionary” government functions, including those administered by the Department of Health and Human Services.
In the end, public health fared well, all things considered. The Centers for Disease Control and Prevention (CDC) received nearly $7.2 billion in the “omnibus” spending bill for fiscal year (FY) 2016. That’s a $277.7 million (four percent) increase over FY 2015 levels. This funding includes nearly $6.3 billion in discretionary budget authority, as well as more than $892 million in mandatory Prevention and Public Health Fund (PPHF) dollars and $15 million from the Public Health and Social Services Emergency Fund.
The overall increase in funding should translate into good news for state and territorial epidemiologists. The National Center for Emerging and Zoonotic Infectious Diseases (NCEZID) received nearly $580 million, including $52 million from the PPHF. This funding level represents a $175 million (43 percent) increase over FY 2015. Within NCEZID, the antibiotic resistance (AR) initiative received $160 million in new funding, less than the $264 million the President requested for CDC. Based on the President’s budget request submitted to Congress in early 2015, we would expect much of the NCEZID funding, including AR, to support core infectious disease surveillance capacity at state and local health departments through Epidemiology and Laboratory Capacity (ELC) grants. This funding would be in addition to $40 million from the mandatory PPHF provided to ELC grants for the fifth consecutive year in the omnibus. The final spending measure requires CDC to submit to Congress a detailed spend plan for AR within 60 days of the legislation’s enactment, so more specific information about ELC funding and the bill’s impact on states and territories will be available soon.
Among our other NCEZID appropriations priorities, food safety received increased funding ($52 million) and advanced molecular detection was flat funded ($30 million).
The Public Health Workforce program, through which the CDC/CSTE Applied Epidemiology Fellowship receives funding, is also provided flat funding of $52.2 million and no supplemental PPHF funding. The appropriations bills do not specify how much funding would be dedicated to the Applied Epidemiology Fellowship program per se, but with flat funding of the program we might expect flat funding for our fellows. The President had requested a $15.2-million increase in budget authority for Public Health Workforce, as well as $36.2 million in PPHF. Three years ago, Congress eliminated $15 million in PPHF dollars for Public Health Workforce in the wake of sequestration.
Some other notable items related to public health:
The National Center for Injury Prevention and Control received $70 million to combat the opioid epidemic, a $50 million increase over FY 2015 levels.
The National Center for Environmental Health budget was increased by $2.9 million over FY 2015, and most of the cuts proposed in earlier spending legislation were restored, with the exception of the $2.8-million “Built Environment and Health Initiative,” which was eliminated.
Funding for CDC’s tobacco programs, cut by $100 million in proposed spending bills, was mostly restored in the final measure, sustaining only a $6.5 million cut compared to FY 2015.
With the enactment of FY 2016 spending legislation, legislators will begin work in earnest on FY 2017 spending legislation when they return to Washington in January. CSTE will once again partner with the Association of Public Health Laboratories and other colleagues in the public health community to advocate for our key priorities—strong support for disease monitoring and for training the next generation of epidemiologists. In addition, our executive leaders are travelling to Washington in early February to discuss our funding requests with key decision-makers in Congress and the administration. Until then, we anxiously await the release of the President’s final budget request of the administration, expected to be released the first week in February, to see what the White House has in store for disease surveillance.
For more information about funding levels for your specific priorities, please click here for a copy of the omnibus spending legislation, and click here for a copy of the accompanying report that provides more detailed instructions about public health funding levels and intended purposes.
Posted By D. Rebecca Prevots,
Tuesday, December 22, 2015
Updated: Friday, December 18, 2015
Editor’s note: CSTE Executive Board Secretary-Treasurer and Hawaii State Epidemiologist, Sarah Park, recently met up with D. Rebecca Prevots, Ph.D., Chief of the Epidemiology Unit of the Laboratory of Clinical Infectious Diseases, Division of Intramural Research at the National Institute of Allergy and Infectious Diseases (NIAID) at NIH. Sarah recommended that Rebecca write this blog for CSTE describing the epidemiology research and support at NIH and how she might help CSTE members with applied epi questions not addressed by other federal partners.
I will give an overview of epidemiology and epidemiologic capacity at NIH based on my experience here. Having spent 12 years at CDC and now 12 years at NIH, I have some perspective on how epi fits into these different cultures (the more academic and the more applied public health). NIH is a very big place, with 27 different institutes and centers comprising the National Institutes of Health (note the “s”). Most institutes have extramural and intramural groups, with the extramural groups funding research outside of NIH and the intramural groups conducting research on the NIH campus. Approximately 90% of the NIH budget goes to extramural research, usually in the form of grants to academic institutions, and the remainder is for intramural research.
In addition to the intramural/extramural distinction, each institute or center varies in its mission, structure, and function, and therefore it is difficult to provide one picture of epidemiology across NIH. Most large institutes (NCI, NIAID, NIEHS, NHLBI, and NICHD) have intra- and extramural epidemiology groups. Descriptions of the various epidemiology groups can be found on the NIH webpage under their respective institutes. The epidemiology groups in each institute have expertise related to their specific areas of study. The main mission of the NIH is biomedical research in support of human health, which the epidemiology units complement in a variety of ways that include:
design and analysis of NIH clinical studies,
involvement in design and analysis of field studies, and
analysis of large datasets to look at population patterns
The nature of the ongoing research varies widely, but intramural epi researchers at those institutes typically do original research, usually in populations outside of NIH. The overarching mission of intramural epi groups is to add value to the mission by focusing on rare diseases or high-risk research that otherwise wouldn’t get funded extramurally.
I can speak to what I know best: in my epi group at NIAID, we seek to lead and support research within the Division of Intramural Research. This includes research on rare lung diseases caused by nontuberculous mycobacteria and fungi, design and analysis of clinical research data, analysis of population-based data (e.g., datasets from the Centers for Medicare and Medicaid Services and the Agency for Healthcare Research and Quality), and using approaches such as spatial analysis to better understand risk factors for some conditions. We also contribute epidemiologic expertise to international field studies conducted by NIH, such as the intramural-conducted field studies of malaria in Mali.
In summary, there is epi expertise at NIH, and that expertise varies widely across groups. Certainly if there is a topic of interest (such as NTM), there are often experts here who can help. And certainly I would be interested in fostering ties with CSTE!
Posted By Virginia Dick,
Friday, December 18, 2015
Updated: Friday, December 18, 2015
Evaluation is a crucial activity for many state and local public health agencies. In addition to the evaluation requirements in many federal funding programs, more and more states are recognizing the need for, and value of, thorough process and outcome evaluations of local, state, and regional efforts. All funders, governmental and non-governmental, are placing increasing emphasis on demonstrating the impact of a program, policy or system change.
It is only through comprehensive evaluation that jurisdictions can gain a better understanding of the impact of programs or policy changes, as well as the key components involved in implementing those programs and policies. Many evaluators, myself included, argue that there is a critical need for process evaluation as well as outcome evaluation. A solid process evaluation provides the foundation for understanding why a program, system change, or policy had the anticipated impact (or not), which components were most critical, and how replication can occur in other settings.
Unfortunately, evaluation responsibilities often fall to epidemiologists or data managers who have little direct evaluation training and experience. In ideal circumstances, epidemiologists and evaluators work hand in hand during evaluation to provide the most thorough review of the process as well as aligning related data elements and issues. However, due to staffing and funding challenges, many jurisdictions are not able to have both of these positions engaged on the same effort at the same time, or the staff are spread across many efforts. Building evaluation capacity within the current epidemiology workforce is one way to help build the overall evaluation capacity within jurisdictions.
In 2015, CSTE conducted a four-part webinar series to discuss and examine public health evaluation. The series was designed to provide a high-level overview discussion of evaluation and how evaluation broadly should be approached. While the series was developed in conjunction with the Chronic Disease Subcommittee, the presentations were done in a manner to allow generalizability across all areas of public health. All of the webinars are available in the CSTE webinar library and can be viewed at any time. Below is a brief description of each of the webinars as well as the link to the webinar itself if you would like to learn more about any of the topics.
Some “think about it” questions that were generated included: What is the first step in determining your evaluation design? What are some of the primary reasons for conducting evaluations? What is the biggest challenge you face in determining evaluation design, engaging stakeholders, and determining what tools to use?
Lesson 2: Approaches to Evaluation / Evaluation Types (March 19, 2015) - Webinar Slides
This webinar discussed the strengths, weaknesses, and primary focuses of four primary approaches to evaluation: utilization-focused evaluation, developmental evaluation, theory-driven evaluation, and the Kirkpatrick model. In addition, time was spent discussing the differences between summative and formative evaluations and between outcome and process evaluations. While discussing the differences, it is also important to consider the interconnectedness between the different types.
“Think about it” questions to consider from this week included: examining which evaluation approach you are most comfortable with, identifying what types of evaluation are typically conducted within your agency, and whether your approach is consistent with the types of evaluation that you have typically been involved with in your agency.
Lesson 3: Outcome/Process Evaluations (April 16, 2015) - Webinar Slides
This webinar delved more deeply into the distinction between and relationship among formative/summative and outcome/process evaluation. Items considered and discussed included the different types of questions appropriate for each type of evaluation, how different evaluation theories or approaches examine each type, and the most appropriate data, analysis, and reporting mechanisms for each. Significant time was spent in this session discussing the importance of mixed-methods designs in robust evaluations.
“Think about it” questions that were generated from this session included considering the most appropriate evaluation questions for each evaluation type, examining how summative evaluations can be used to inform programmatic practice, and thinking about the pitfalls of relying on only summative evaluation reports.
Lesson 4: Data Visualization and Reporting (June 14, 2015) - Webinar Slides
The final webinar in this series discussed data visualization and reporting. In particular, how to determine the best reporting mechanisms for various stakeholder audiences, different methods for visualizing data, and the critical importance of different reporting tools were all discussed. This session tied together several underlying themes from the other sessions, including consideration of all stakeholders and ensuring the usability and relevance of the evaluation by all stakeholders.
Some of the “think about it” questions that were generated at this session included how to handle challenging responses from stakeholders to evaluation findings, how to identify and engage with key stakeholders to discuss the data, and how stakeholders can support evaluation efforts by providing more detailed understanding of key findings.
Dr. Virginia Dick is director of research and evaluation at CSTE. If you are interested in discussing evaluation in more detail, please feel free to contact her at firstname.lastname@example.org. For other interesting webinars on data and applied public health epidemiology issues, visit the webinar library.
Posted By Brittni Frederiksen,
Friday, December 11, 2015
Updated: Friday, December 11, 2015
The buzzword in the Maternal and Child Health and Reproductive Health communities is LARC – long-acting reversible contraceptives (e.g. intrauterine devices (IUDs) and implants). LARCs are game-changing, highly effective contraceptive devices that can decrease unintended pregnancies and increase birth spacing. They are also cost-effective and require little effort on the part of the user, making them an appealing contraceptive method to both teens and adult women.
As a CDC/CSTE Applied Epidemiology fellow in Maternal and Child Health (MCH) in the Bureau of Family Health at the Iowa Department of Public Health (IDPH), I have had the opportunity to work on a number of projects related to LARCs. Iowa’s involvement in LARC-related initiatives started when Iowa and Colorado were funded by a private donor to lead Initiatives to Reduce Unintended Pregnancies by promoting LARC use and removing barriers to uptake. The Iowa Initiative grant allowed clinics to expand hours and locations, train clinical nurse practitioners and physicians on the benefits of LARCs and how to use them, and most importantly, purchase LARCs so clinics could offer them at low or no cost to their patients. LARCs are expensive upfront, and prior to the Iowa Initiative their cost made it difficult for clinics to offer them to patients. During the Iowa Initiative period, use of LARCs as a primary method of contraception increased substantially, while the percent of unintended pregnancies declined by 11% and pregnancies terminated by abortion declined by 25%. Even though the Iowa Initiative ended in 2012, LARCs have remained a popular contraceptive choice for women in Iowa.
Over the past year, Iowa has participated in a multi-state LARC Learning Community led by the Association of State and Territorial Health Officials (ASTHO). This initiative is designed to assist states in implementing immediate post-partum LARC insertion: in other words, providing a woman with a LARC before her hospital discharge, post-delivery. Immediate post-partum insertion of LARCs allows women to prevent unintended pregnancies and effectively space pregnancies, which in turn can decrease poor health outcomes for mothers and babies. One barrier to immediate post-partum LARC insertion has been the bundling of the insertion into the delivery payment; bundled reimbursement prevented providers from getting reimbursed for the LARC device and insertion. In February 2014, Iowa Medicaid Enterprise released Informational Letter No. 1349 to unbundle LARCs from the payment for the inpatient admission associated with the delivery, a significant step in promoting immediate post-partum LARC insertion. We were fortunate to work with two Harvard students who visited IDPH for a week in January. Together we created an evaluation plan to assess the effectiveness of the LARC unbundling, as well as a proposed outreach and training program to educate providers, billing staff, and Medicaid recipients about insertion and billing of LARCs in the immediate post-partum period.
To ensure women have access to quality family planning services and to encourage use of the more effective contraceptive methods, the Office of Population Affairs has proposed two performance measures, each the percentage of women aged 15-44 years at risk of unintended pregnancy who adopt or continue use of:
The most effective (i.e., male or female sterilization, implants, IUDs) or moderately effective (i.e., injectables, oral pills, patch, ring, or diaphragm) FDA-approved methods of contraception
An FDA-approved LARC.
The first measure is an intermediate outcome measure; it is desirable to have a high percentage of women using the most effective or moderately effective contraceptive methods. The second measure is an access measure, focused on making sure that women have access to LARC methods. Versions of these two measures restricted to women who have had a live birth are also in development, to measure postpartum contraceptive use among women ages 15-44.
Iowa has had the unique opportunity to collaborate with the Office of Population Affairs and the Centers for Disease Control and Prevention to pilot the two performance measures using Title X data and Medicaid paid-claims data. As a CDC/CSTE Applied Epidemiology fellow, I was able to apply my SAS programming skills to develop SAS code that calculates the performance measures from Medicaid claims data and that other states can also use to calculate the measures for their Medicaid populations. The Center for Medicaid and CHIP Services (CMCS) recently awarded funding to 13 states, including Iowa, and one U.S. territory to support state efforts to collect and report data to CMCS on the new developmental quality measures, assessing progress toward the Maternal and Infant Health Initiative goal of increasing the use of effective methods of contraception among all women in Medicaid and CHIP. This has been an incredible opportunity to apply my epidemiological skills to a national initiative that will ensure women have access to the contraceptive method of their choice while reducing unintended pregnancies and improving birth outcomes.
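The published measure specification involves detailed claims logic (code sets, enrollment windows, exclusions), and my actual work was done in SAS. Purely to illustrate the shape of the calculation, here is a simplified Python sketch; the record fields are hypothetical and the effectiveness tiers come from the measure definitions above.

```python
# Effectiveness tiers as listed in the proposed OPA measures.
MOST_EFFECTIVE = {"sterilization", "implant", "IUD"}
MODERATELY_EFFECTIVE = {"injectable", "pill", "patch", "ring", "diaphragm"}

def measure_rate(women):
    """Percent of women aged 15-44 at risk of unintended pregnancy who use
    a most or moderately effective contraceptive method. Each record is a
    dict with hypothetical keys: "age", "at_risk" (bool), and "method"."""
    denominator = [w for w in women if 15 <= w["age"] <= 44 and w["at_risk"]]
    if not denominator:
        return 0.0
    numerator = [w for w in denominator
                 if w["method"] in MOST_EFFECTIVE | MODERATELY_EFFECTIVE]
    return 100.0 * len(numerator) / len(denominator)
```

In a real claims-based calculation, both the at-risk flag and the method in use would themselves be derived from diagnosis, procedure, and pharmacy codes rather than supplied directly.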
Brittni Frederiksen is a CDC/CSTE Applied Epidemiology fellow at the Bureau of Family Health in the Iowa Department of Public Health. For more information on related issues, join the CSTE Maternal and Child Health Subcommittee to follow updates on current activities.
Posted By Christine Dobson,
Friday, December 4, 2015
Updated: Friday, December 4, 2015
The event that would largely shape my introduction to the applied epidemiology field came as I was en route to the CSTE Applied Epidemiology Fellowship Orientation Week. Early in the morning on August 24, 2014, a 6.0-magnitude earthquake struck the San Francisco Bay Area where I live and work. I learned of the earthquake from the East Coast while traveling to orientation in Atlanta. Had I been home, the earthquake would have jolted me awake at 2 in the morning and rocked me for about 20 seconds. The earthquake, which was epicentered approximately 30 miles northeast of San Francisco, caused five residential fires, downed power lines, and resulted in damage to buildings, homes, roadways, and water mains. Suspecting that the California Department of Public Health (CDPH), my host agency, might play a role in its aftermath, I contacted colleagues offering to help with any response effort upon my return to California.
That response effort was a community rapid needs assessment, called a Community Assessment for Public Health Emergency Response (CASPER). The Emergency Preparedness Team (EPT) within the Division of Environmental and Occupational Disease Control at CDPH had been asked by the Public Health Division in Napa County, one of the hardest-hit counties, for assistance assessing earthquake-related physical injuries, chronic disease exacerbation, mental health issues, and general emergency preparedness among households in Napa County. A team composed of epidemiologists, health educators, and physicians from the county and state health departments and the CDC was formed to plan the CASPER. Using guidance from CDC’s CASPER toolkit, the team worked together to design a 48-question questionnaire for household-based interviews, to select the geographic areas for household sampling, to recruit volunteers to canvass the selected households, and to coordinate myriad logistics for a three-day field sampling event scheduled three weeks after the earthquake. Over those three days, 15 two-person teams of volunteers approached 488 households and successfully completed an interview with a member of 201 of those households.
Although we assessed many different health impacts related to the earthquake, impacts to mental health were a major focus of the CASPER. Just over a quarter of households had a member who experienced a traumatic psychological exposure, which is known to be associated with posttraumatic stress disorder, in the immediate aftermath of the earthquake. Feelings of psychological distress, such as feeling anxious, fearful, or distracted, were experienced in 75% of households. Among all households in which a traumatic or distressing psychological experience occurred, professional help or other emotional support was sought in 41% of households. In response to the data generated by the CASPER, Napa County was able to more effectively reallocate its mental health resources in the immediate aftermath of the earthquake. In addition, the county conducted public training sessions and education campaigns to support those with mental health risks, conducted community social events on the earthquake anniversary date to promote community resilience and earthquake preparedness, and provided psychological first aid training to outreach workers.
The CASPER findings were summarized in a report drafted by the EPT and submitted to Napa County Public Health. We also communicated our findings to a wider public health audience at a number of different meetings through both poster and oral presentations: a webinar sponsored by the Safe States Alliance; a poster presentation at the 10th Annual Preparedness Summit (Atlanta, GA); an invited talk at the 2015 CSTE Disaster Epidemiology Subcommittee Workgroup Meeting (Atlanta, GA); and presentations at the 2015 EIS Conference (Atlanta, GA) and the 2015 CSTE Annual Conference (Boston, MA). In addition, our findings specific to mental health needs as a result of the earthquake were presented in an MMWR report.
In addition to the actionable data that were generated, the CASPER, the first large-scale project to which I contributed as a CDC/CSTE fellow, also provided a comprehensive and intensive introduction to the work and workings of local and state health departments. I was part of the collaboration and discussions that occurred among local, state, and federal health officials. I was involved in the development of survey materials, in the training of volunteers in sampling and interviewing methods, in the analysis of our collected data, in the drafting of the report, and in presenting our findings across many forums. From start to finish, the CASPER provided opportunities both for learning and for developing my epidemiologic skills in the applied public health space.
Moreover, the CASPER afforded me the opportunity to meet community members affected by the earthquake. Many wanted to share their stories of the shaking and disorienting chaos that gave way to weeks and months of rebuilding their homes and regaining their physical and emotional health. Yet when asked at the conclusion of the interview what their household was most in need of now, the residents I spoke to most often replied, “nothing.” They were simply grateful that their plight had not been more severe. And so my experience on the front lines of public health taught me another important lesson: the resilience of those impacted by disaster.
Christine Dobson, ScD is a CDC/CSTE Applied Epidemiology fellow in the Environmental Health Investigations Branch and Occupational Health Branch at the California Department of Public Health. AEF fellow applications are currently open for prospective fellows until January 13, 2016. If you work at a health agency and would like to host a fellow, HSIP host-site applications must be completed by December 16.
Posted By John Satre,
Friday, November 20, 2015
Updated: Friday, November 20, 2015
The Public Health ELR Network: A working model that increases interstate communication and reduces connections required between laboratories and public health
The concept of a nationwide electronic laboratory reporting (ELR) network among state public health agencies organically sprang out of a field that had already been planted with seeds requiring time to develop and grow. Over the course of the last decade, these seeds have developed and are now producing fruit across the nation. It is appropriate to briefly outline some of the relevant components that make it possible to realize a nationwide, functioning public health ELR network.
Planting the Seeds
The following developments have been necessary for a public health ELR network to blossom:
Health Level 7 (HL7) Standardization – The HL7 Version 2.5.1 Implementation Guide: ELR to Public Health, Release 1 (US Realm) provided a target for partners involved in public health disease surveillance to independently develop interoperable messaging.
NEDSS and ELC – the National Electronic Disease Surveillance System (NEDSS) initiative provided the overarching vision and the Epidemiology and Laboratory Capacity (ELC) cooperative agreements provided the support to adopt or develop new surveillance systems capable of handling data in a much more sophisticated manner.
LOINC, SNOMED, and HL7 – these coding systems provide a common language for data sharing partners to communicate key concepts.
HITECH – The Health Information Technology for Economic and Clinical Health (HITECH) Act provided education, valuable work products, and technical assistance to public health jurisdictions related to Meaningful Use and ELR.
Meaningful Use Public Health Objectives – These incentives provide impetus for hospitals to approach public health with the necessary resources to plan, develop, and implement electronic connections in each state.
Many routine public health activities take only minutes at a time, but repeated day after day and week after week, they add up to a substantial amount of staff time. One such activity is referring a laboratory result received by one public health jurisdiction to a different state public health jurisdiction for investigation. It is not uncommon for a jurisdiction to receive a reportable laboratory result that should have been reported elsewhere. There are also times when one jurisdiction – upon receipt of a laboratory report – sends a request for assistance to another state (for example, to collect treatment information from a healthcare provider in that state). These interactions generally occur by mail, fax, or phone, and the timeliness of the referral or response tends to suffer amid busy schedules. Iowa’s electronic laboratory reporting team – in collaboration with multiple disease surveillance teams in Iowa and Nebraska – has developed program-specific rules and technology to evaluate electronic lab reports upon receipt for possible referral to another jurisdiction; the process is called ELR Redirect. Every laboratory result received by ELR passes through a component that identifies the reportable condition so it can be associated with program rules. The appropriate state is then determined from a combination of up to three addresses: the patient state, the ordering physician state, and the ordering facility state. Based on the combination of condition and addresses, the laboratory report is either redirected to another state public health jurisdiction or passed on to the original recipient’s surveillance system. In some cases it is appropriate for the original recipient to keep the lab result and also redirect a copy to another state.
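The routing decision described above can be sketched as a rule table keyed on condition and addresses. The conditions, rule categories, and function below are hypothetical illustrations, not Iowa's actual program rules or software:

```python
# Illustrative sketch of condition-and-address routing for an incoming ELR
# message. All rules and condition names here are hypothetical.

def route_elr(condition, patient_state, physician_state, facility_state,
              home_state="IA"):
    """Decide where a lab report should go.

    Returns (keep_locally, referral_states): whether the original recipient
    keeps the report, and which other states (if any) receive it.
    """
    addresses = {patient_state, physician_state, facility_state}
    out_of_state = {s for s in addresses if s and s != home_state}

    # Program-specific rules: some conditions are redirected outright when the
    # patient resides elsewhere; others are kept AND copied to another state
    # as an automated request for assistance.
    redirect_conditions = {"pertussis", "salmonellosis"}   # hypothetical
    assist_conditions = {"syphilis"}                       # hypothetical

    if condition in redirect_conditions and patient_state != home_state:
        return (False, {patient_state})     # redirect entirely
    if condition in assist_conditions and out_of_state:
        return (True, out_of_state)         # keep, and request assistance
    return (True, set())                    # pass to local surveillance system

# Example: a pertussis report for a Nebraska resident received in Iowa
print(route_elr("pertussis", "NE", "IA", "IA"))  # → (False, {'NE'})
```

In a production pipeline this decision step would sit between message parsing and delivery to the surveillance system, so redirected reports never require manual re-entry.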
This innovative design accomplishes several things: it saves time for the original recipient and the secondary recipient, provides the lab result to the appropriate public health jurisdiction in near real-time, and allows the surveillance system to consume a standard message automatically thereby eliminating manual data entry of this record into the final destination system(s).
Taking the Next Step
Over the past 10 years, standards have been implemented alongside innovative new designs, and the next step is a nationwide electronic surveillance network. If a state public health agency can receive a laboratory report electronically and pass it electronically to another state public health agency, then the potential exists for every state public health agency to redirect a laboratory result to any other. Costly electronic connections between laboratories and public health agencies that have yet to be established to achieve ELR may now be rendered unnecessary. The possibility of a disease surveillance community that is 100 percent electronic is within sight.
Iowa is currently working with United Clinical Labs (UCL), which serves healthcare providers and their patients in Iowa, Illinois, and Wisconsin. Multiple projects are underway to establish electronic connections between UCL, located in Dubuque, IA, and the public health agencies in Iowa, Illinois, and Wisconsin through the UCL-to-Iowa ELR connection.
1Click here for image licensing information for the video.
John Satre is an Informatics–Training in Place (I-TIPP) fellow at the Iowa Department of Public Health. For more information or to request the customizable software component, e-mail John Satre. If you would like to become an I-TIPP fellow, apply before April 1, 2016. For more information about ELR, join one of CSTE’s three Surveillance/Informatics subcommittees.
Posted By Katrina Hansen,
Friday, November 13, 2015
Updated: Friday, November 13, 2015
We, the New Hampshire Healthcare-Associated Infections (HAI) Program, have published nine annual reports on the occurrence of HAI in hospitals and ambulatory surgery centers. These reports provide a summary of HAI data routinely reported in accordance with state law and can be used to identify and monitor improvement over time. The original intent of the NH HAI law was to increase transparency for healthcare consumers and assist with making healthcare decisions. However, experience shows that these reports are primarily used by healthcare facilities. With no standardized template available, our reports have evolved over time as we created our own and adapted various practices from other states to accommodate healthcare facilities and consumers. The NH HAI Program has struggled, and continues to struggle, with balancing the needs of both technical and consumer audiences while preserving the important nuance and detail needed to understand the data.
The picture above is a snapshot of the NH HAI 2014 Hospital Report. NH uses a ‘red, green, and yellow light’ scheme to indicate whether measures are higher than, lower than, or similar to national data, respectively. Example 1: The central line-associated bloodstream infection (CLABSI) standardized infection ratio (SIR) was 0.61, or 39% fewer (green) infections than predicted based on national data. Example 2: The catheter-associated urinary tract infection (CAUTI) SIR was 1.26, or 26% more infections than predicted based on national data. However, this difference was not statistically significant, which means the overall number of CAUTI was SIMILAR (yellow).
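The traffic-light logic in these examples can be sketched as a simple classification of the SIR against its confidence interval. The observed/predicted counts and confidence limits below are hypothetical values chosen only to reproduce the SIRs quoted above, not NH's actual data:

```python
def sir_color(observed, predicted, ci_low, ci_high):
    """Classify a standardized infection ratio (SIR) against national data.

    green  = significantly fewer infections than predicted (95% CI entirely < 1)
    red    = significantly more infections than predicted (95% CI entirely > 1)
    yellow = not statistically different from predicted (95% CI spans 1)
    """
    sir = observed / predicted
    if ci_high < 1:
        return sir, "green"
    if ci_low > 1:
        return sir, "red"
    return sir, "yellow"

# Hypothetical counts yielding the SIRs quoted above:
print(sir_color(22, 36.1, 0.38, 0.92))  # CLABSI-like: SIR ≈ 0.61, green
print(sir_color(29, 23.0, 0.84, 1.81))  # CAUTI-like: SIR ≈ 1.26, yellow (CI spans 1)
```

The key design point is that color is assigned by the significance test, not by the SIR alone, which is why an SIR of 1.26 can still display as yellow.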
With the leadership of Lindsey Weiner (epidemiologist at the Division of Healthcare Quality Promotion, CDC) and Andrea Alvarez (HAI/influenza program coordinator at the Virginia Department of Health), a multidisciplinary group was formed to collaborate and establish best practices in the analysis and display of HAI data, which resulted in the HAI Data Analysis and Presentation Standardization (DAPS) toolkit. I was privileged to be a part of this workgroup, learn from other HAI colleagues, and participate in discussions to develop a tool that can be used by other programs. Throughout my involvement, common themes arose, including:
We are all healthcare consumers and have competing priorities in our day-to-day lives. We need to remember that short and simple messages are best when trying to reach a general audience.
HAI metrics [e.g., the standardized infection ratio (SIR)] and data limitations are not easy to explain to a consumer audience, let alone some technical audiences.
Symbols and colors can quickly and effectively convey a message. The challenge is picking the right symbols and colors applicable to all settings.
HAI data are continuously evolving; state and federal mandates change; and we need to be flexible in adapting to these changing needs.
While on this workgroup, I learned a lot from the discussions regarding challenges other jurisdictions encounter when publishing these important data. Coming from a rural and less populated state, we struggle with having robust enough data to present information for certain measures and facility types. I know we are not the only state working within this context and it was helpful learning from others with similar experiences. I kept this in mind when providing feedback on the toolkit and the need to provide an option to states that have comparable restrictions. Similarly, I used our own experience in NH to help contribute to the healthcare personnel vaccination component of the template. Several approaches are addressed and made available within the toolkit in order to provide flexibility to states.
Going forward, I plan to incorporate as much of the toolkit as possible into future reports. I anticipate that we will reach a wider audience and ultimately increase HAI knowledge by making these data more meaningful and accessible to all. Participating in the CSTE workgroup was a great opportunity not only to improve our own work in New Hampshire, but also to help shape the way healthcare-associated infections programs across jurisdictions move forward together in a complementary and consistent way.
Katrina Hansen, MPH is healthcare-associated infections program manager at the New Hampshire HAI Program. Click the link above to learn more about the HAI DAPS Toolkit and join the HAI Subcommittee for the opportunity to engage in initiatives such as this.
Posted By Heather Dubendris,
Friday, November 6, 2015
Updated: Tuesday, October 27, 2015
The past year as an AEF fellow has afforded me a plethora of opportunities. From site visits and outbreak investigations to data validation, and from coordinating the implementation of a surveillance system to taking calls, I quickly learned that working in a state health department keeps you on your toes, as no two days are ever the same.
I started my fellowship in the midst of the Ebola crisis. My first day at the office I was met in the lobby and quickly swept upstairs to an incident management team meeting. From there, I began working with the guidance team, developing guidance materials (even filming a contact tracing trainer video), and then later in a data management role monitoring an average of 40 travelers returning from Ebola-affected countries on any given day.
Presenting at the CSTE Annual Conference in Boston, MA –June 2015
As a healthcare-associated infections (HAI) fellow, I get to work closely with highly knowledgeable public health experts from both public and private sectors on emerging public health priorities, such as antimicrobial resistance. This year, I developed a protocol and designed a surveillance system to better understand the burden of carbapenem-resistant Enterobacteriaceae (CRE) in North Carolina. This project requires collaborating with seven major healthcare facilities in our state and the state laboratory of public health. By collecting epidemiologic information from cases and conducting resistance type testing on isolates, surveillance will provide information on the incidence of CRE in North Carolina, identify common mechanisms of carbapenem resistance and identify common healthcare exposures related to CRE. Preliminary results show that of 55 isolates tested, 35 (62%) are positive for Klebsiella pneumoniae carbapenemase (KPC). KPC is a common mechanism of resistance first identified in North Carolina in 2001. Most patients, 51 (84%), have taken antibiotics in the 90 days prior to their positive result, and hospitals have reported an average of 3.7 (95% CI 2.9-4.5) surgeries or devices (such as a central line) among these patients. Final results will be available once surveillance concludes next spring.
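The summary statistic reported above (a mean of 3.7 surgeries or devices, 95% CI 2.9–4.5) is a standard mean-with-confidence-interval calculation. A minimal sketch using the normal approximation, with made-up counts rather than the actual surveillance data:

```python
import math

def mean_ci(values, z=1.96):
    """Sample mean with an approximate 95% confidence interval
    (normal approximation: mean ± z * standard error)."""
    n = len(values)
    mean = sum(values) / n
    # Sample standard deviation (n - 1 in the denominator)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    se = sd / math.sqrt(n)
    return mean, mean - z * se, mean + z * se

# Hypothetical per-patient counts of surgeries/devices:
counts = [2, 4, 4, 4, 5, 5, 7, 9]
m, lo, hi = mean_ci(counts)
print(f"mean {m:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

For small samples, a t-based interval (or an exact method) would be more appropriate than the z-based approximation shown here.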
CDC's National Healthcare Safety Network (NHSN) data are used at the state and federal level to report HAI events. North Carolina law requires the reporting of five HAIs. The government and hospitals rely on NHSN data to assess improvements over time and make comparisons between states. Therefore, it is essential that the data are valid. I am currently conducting an external validation of two reportable HAIs: central line-associated bloodstream infections (CLABSI) and LabID Clostridium difficile (C. diff) events. We selected twenty-eight hospitals throughout North Carolina for validation. I used the first few months of my fellowship to learn from other states’ validation experiences and adapt items from the CDC validation toolkit to create a North Carolina-specific validation protocol, forms, medical record abstraction tools, and database. Next, I conducted a pilot validation at a hospital in March. After making some minor changes to the validation tools, I began traveling to the selected hospitals and working with infection prevention staff to extract data from medical records. The quality assurance provided by validation is essential, as these data are used for setting public health priorities to protect patient health. Traveling for site visits has been a great way to explore the different regions of North Carolina!
NC Division of Public Health HAI Prevention Program with Evelyn and Tom McKnight
(HONOReform) and Dr. Joe Perz at the NC APIC Conference–September 2015
While I am primarily an HAI fellow, I sit in the Communicable Disease Branch, so I learn about a variety of infectious diseases and associated events. I serve as epi on call and conduct outbreak investigations. When on call, I provide guidance and recommendations to local health departments, providers, and the public about a variety of reportable diseases and conditions. Last February I worked with our Epidemic Intelligence Service (EIS) officer to investigate an outbreak of late-onset group B Streptococcus in a neonatal intensive-care unit, and I have also assisted with investigations of hepatitis, Legionella, and a fungal brain mass.
In my current position I get to travel to a variety of trainings and meetings, including trainings in infection control, SAS programming courses and national conferences focusing on infectious diseases. Being part of a cohort of fellows provides a network throughout the country to reach out to for support and insight when a new perspective is needed. The professional relationships developed during this fellowship and the experience gained at the health department have set me up to succeed in a future career as a public health epidemiologist. If you are a recent public health graduate and are interested in a career at a public health agency, I strongly recommend the CDC/CSTE Applied Epidemiology Fellowship to you.
Heather Dubendris, MSPH is a CDC/CSTE Applied Epidemiology fellow at the Communicable Disease Branch of the North Carolina Department of Health and Human Services. Learn more about healthcare-associated infections by joining the CSTE HAI Subcommittee. Do you want to be a fellow like Heather? The application is now open for the CDC/CSTE Applied Epidemiology Fellowship.
Posted By Luke Baertlein,
Friday, October 30, 2015
Updated: Friday, October 30, 2015
In the Montana Asthma Control Program, we faced a problem: we wanted to report on the geographic distribution of asthma burden, but we had neither spatial data nor data we could report at county-level aggregation. To get around this, we used an indirect spatial estimation method to explore and report on the spatial distribution of asthma morbidity. Indirect spatial estimation can be conducted when aggregate data are available for small areas, providing a means to approximate the spatial distribution of a metric when spatial data are unavailable. We analyzed emergency department (ED) visits for asthma and estimated the spatial variation in population-based rates using kernel density estimation (KDE) applied to ZIP-code area aggregated data.
Data for this project came from emergency department datasets for 2010 through 2013, accessed through the Montana Hospital Discharge Data System (MHDDS) and provided by the Montana Hospital Association. At the time of the analysis, our data use agreement did not allow reporting data aggregated below the region level, including at the county level. This was later changed, allowing us to compare our spatial estimates to county-aggregated estimates. MHDDS datasets are based on the Uniform Billing forms collected from participating hospitals and represent over 90 percent of hospitalizations and ED visits in the state. Asthma ED visits were defined as any visit with a primary diagnosis coded as ICD-9-CM 493. The ZIP code of residence was used to approximate location of residence.
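The case definition above amounts to a simple filter on the primary diagnosis code. A sketch, with hypothetical record fields (the actual MHDDS schema may differ):

```python
# Hypothetical discharge records; field names are assumptions for illustration.
records = [
    {"zip": "59601", "dx_primary": "493.92"},  # asthma, with acute exacerbation
    {"zip": "59715", "dx_primary": "486"},     # pneumonia, organism unspecified
    {"zip": "59601", "dx_primary": "493.00"},  # extrinsic asthma
]

def is_asthma_ed_visit(record):
    """Primary diagnosis falls in ICD-9-CM category 493 (asthma)."""
    return record["dx_primary"].startswith("493")

asthma_visits = [r for r in records if is_asthma_ed_visit(r)]
print(len(asthma_visits))  # → 2
```

Counting these filtered visits by ZIP code of residence yields the numerators for the ZIP-code-area rates used in the spatial estimation below.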
The creation of the spatial map followed four key steps:
Create a polygon map of rates: The ZIP-code area counts of asthma ED visits and population were used to estimate and map ZIP-code area rates of asthma ED visits per 100,000 persons per year.
Convert from polygon to raster format: A cell grid was overlaid on the map of ZIP-code area rates and each cell was assigned the rate of the ZIP-code area containing it.
Apply KDE to raster data: KDE with a 50km bandwidth was applied to the cell grid, producing a rate for each point equal to the average rate within a 50km radius of the point. A bandwidth that would cross ZIP-code boundaries from most points was used so that the rates were smoothed across ZIP-code areas.
Test for regions with rates different from the statewide average: Significance testing was applied using the Getis-Ord GI* statistic, again with a 50km bandwidth, to test the difference of each point and its surrounding points from the statewide rate.
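The smoothing in steps 2 and 3 — averaging each grid cell's rate over all cells within the bandwidth — can be sketched with a uniform kernel over a toy grid. The real analysis used a 50km bandwidth over cells carrying ZIP-code-area rates and GIS tooling, so treat this brute-force version as an illustration of the idea only:

```python
import numpy as np

def smooth_rates(grid, radius_cells):
    """Replace each cell's rate with the mean rate of all cells within
    `radius_cells` (a uniform-kernel analogue of the KDE smoothing step)."""
    n_rows, n_cols = grid.shape
    out = np.empty_like(grid, dtype=float)
    for i in range(n_rows):
        for j in range(n_cols):
            # Boolean mask of cells within the circular bandwidth of (i, j)
            ii, jj = np.ogrid[:n_rows, :n_cols]
            mask = (ii - i) ** 2 + (jj - j) ** 2 <= radius_cells ** 2
            out[i, j] = grid[mask].mean()
    return out

# Toy grid: a single high-rate cell surrounded by zero-rate cells
grid = np.array([[0.0, 0.0, 0.0],
                 [0.0, 9.0, 0.0],
                 [0.0, 0.0, 0.0]])
smoothed = smooth_rates(grid, radius_cells=1)
print(smoothed[1, 1])  # center: mean of the 5 cells within radius 1 → 9/5 = 1.8
```

The toy example also illustrates the limitation noted below: the isolated high-rate cell is pulled down toward its low-rate neighbors, just as a small high-rate city surrounded by low-rate areas would be underestimated.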
The map of the spatial estimates of asthma ED visit rates is shown in Figure 1. For comparison, the same data aggregated at the county level is shown in Figure 2.
Figure 1. Spatial estimates of the relative rate of asthma emergency department
visits, Montana, 2010-2013, Montana Hospital Discharge Data System
Figure 2. County estimates of the relative rate of asthma emergency department
visits, Montana, 2010-2013, Montana Hospital Discharge Data System
We found that regions with rates higher than the statewide average tended to overlap with American Indian reservations. Of the six regions with rates higher than the state, five overlapped with reservations, and only one of the six reservations did not have a rate detectably higher than average. While a racial disparity in asthma prevalence in Montana has been found in the statewide BRFSS, race is not recorded in the Montana Hospital Discharge Data System. By examining the spatial distribution, we were able to point to a potential racial disparity in ED visit rates. However, the pattern is not as apparent in the county-aggregated map. This could be further examined by including more detailed geographic race distribution data, such as census data, in the analysis.
As with all methods, this one is not without limitations. There is a general bias in this method against detecting small areas with high rates surrounded by areas with low rates: these tend to be estimated at lower than their actual rates due to the lower rates of their surrounding points. For example, geographically small cities with high rates, located in regions with low rates outside the city, would likely be underestimated. There is also potential bias from the use of ZIP-code area aggregation, since the spatial approximation assumes that the rate is constant within each ZIP-code area. In addition, the spatial estimates are influenced by the choice of bandwidth, which is somewhat arbitrary. For this map, a 50km bandwidth was chosen to ensure adequate smoothing over ZIP-code boundaries at a statewide level; however, a smaller bandwidth may have been more appropriate for areas with smaller ZIP-code areas, such as major cities. Finally, this method does not take the precision of the ZIP-code area rate estimates into account. Given these limitations, inferences about the true spatial distribution of asthma ED visit rates based on this method should be made with caution.
While its limitations may restrict the use of indirect spatial estimation for scientific inference, the method may be useful for public health planning and communication when other options are not available, especially when used in conjunction with political-area estimates, such as the county asthma ED rate estimates in this case. A map of a spatial distribution, rather than a distribution by political boundaries such as counties, can be a tool to communicate the geographic distribution of a disease in a way that promotes consideration of environmental factors (in a broad sense, including the social and economic environment as well as the physical) while de-emphasizing the role of local political areas (e.g., counties).