American Association of State Highway and Transportation Officials

Special Committee on Research and Innovation

 

FY2023 NCHRP PROBLEM STATEMENT TEMPLATE

 

Problem Number:  2023-G-07

 

Problem Title

Method for Evaluation of a Mixed-Project Safety Program

 

Background Information and Need For Research

The general purpose of the Highway Safety Improvement Program (HSIP) is to accomplish a substantial reduction in traffic fatalities and serious injuries on all public roads (including non-State-owned public roads and roads on tribal land) through the implementation of infrastructure highway safety improvements (1). The HSIP requires follow-up on the strategic highway safety plans to identify the targeted safety emphasis areas and key actions to reduce severe injury crashes. The HSIP (Section 148 of Title 23, United States Code (23 U.S.C. § 148)) is one of the core federal-aid programs in the federal surface transportation act, the Fixing America's Surface Transportation (FAST) Act (2). The HSIP is supported and funded at the federal level and is administered at the state level.

 

The HSIP process includes three primary stages:

1. Planning stage: focuses on studying, analyzing, and selecting projects that potentially require safety improvements.

2. Implementation stage: includes installing the optimal countermeasures that are expected to improve safety performance.

3. Evaluation stage: focuses on project effectiveness and monitoring the HSIP process from start to finish.

 

The evaluation stage provides an opportunity to continuously improve planning, implementation, and project documentation processes and decisions. Implementing a reliable evaluation method is key to supporting the HSIP project process by increasing the effectiveness of future projects and maximizing the return on investment for safety funding. State transportation agencies continue to enhance and expand their HSIP evaluation practices (1). Currently, most state transportation agencies track basic project information, evaluate individual projects in some manner, and disseminate results to stakeholders (3). While agencies are making progress in enhancing HSIP evaluation practices, a variety of approaches are in use across agencies. Specific guidance on tracking and evaluating the effectiveness of projects, countermeasures, and programs can benefit agencies and help them enhance their own processes.

 

The evaluation stage of the HSIP, as well as the concept of evaluation at many agencies, focuses on the post-implementation safety performance of a single project. While an individual project may have multiple countermeasures implemented, the majority of the evaluation techniques currently in use do not address the evaluation of a program of diverse safety projects. The American Association of State Highway and Transportation Officials' (AASHTO) Highway Safety Manual (HSM) outlines a multipart method for the evaluation of multiple projects using an Empirical Bayes (EB) methodology; however, this process is directed toward the analysis of similar site types implementing the same countermeasure (or the same combination of countermeasures). The methodology is best suited for development of crash modification factors (CMFs) rather than the evaluation of a diverse portfolio of safety projects, which is typical of state HSIPs.
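
A minimal illustration of the EB computation referenced above may help clarify why the method is tied to site type: both the safety performance function (SPF) prediction and the overdispersion parameter are specific to a predictive method site type. The Python sketch below is a simplified outline under that framing, not the full HSM procedure; the function name, variable names, and example numbers are assumptions introduced here for illustration.

def eb_expected_crashes(observed_before, spf_predicted_before, overdispersion_k):
    """Combine observed and SPF-predicted before-period crashes into an EB estimate."""
    # Weight on the SPF prediction: closer to 1 when the prediction is precise
    # (small overdispersion), closer to 0 when site history should dominate.
    weight = 1.0 / (1.0 + overdispersion_k * spf_predicted_before)
    return weight * spf_predicted_before + (1.0 - weight) * observed_before

# Example (illustrative numbers): 12 observed crashes, an SPF prediction of 8.0
# crashes, and an overdispersion parameter of 0.2 for the site type.
expected_before = eb_expected_crashes(12, 8.0, 0.2)

# The before-period EB estimate is scaled to the after period by the ratio of
# SPF predictions (capturing changes in traffic volume and study duration) and
# then compared with the observed after-period crashes.
spf_predicted_after = 9.0  # illustrative after-period SPF prediction
expected_after = expected_before * (spf_predicted_after / 8.0)
print(expected_before, expected_after)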

 

While simpler statistical methods, such as the naïve before-after study or the before-after study with traffic volume correction, do not share the methodological limitations of the HSM's EB methodology, their results can be dominated by a few large projects or by dominant site types when individual projects are aggregated to estimate program-level effectiveness. Given the push to involve local agencies and off-state-system roads in the HSIP, it is more important than ever to be able to evaluate a full portfolio of HSIP projects on an even basis. Doing so will help state agencies improve funding allocations and project-type selection across all roadway and site types and drive down fatal and serious injury crashes across complete roadway networks. Furthermore, simpler statistical evaluation methods applied at the program level cannot account for the biases, such as regression to the mean, that EB methods address.
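
To illustrate the aggregation concern, the sketch below contrasts a pooled, volume-corrected before-after index with a simple average of project-level indices. The project values are invented, and the calculation is a generic naïve before-after form with traffic volume correction, not any agency's published procedure.

# (crashes_before, crashes_after, aadt_before, aadt_after) -- made-up values
projects = [
    (200, 150, 30000, 33000),  # one large urban project
    (6, 5, 2000, 2100),        # small rural project
    (4, 2, 1500, 1500),        # small rural project
]

def volume_corrected_index(before, after, aadt_before, aadt_after):
    """After/before crash ratio, scaled by the change in traffic volume."""
    return (after / before) / (aadt_after / aadt_before)

# Pooling raw counts: the large project contributes most of the crashes, so the
# program-level index is essentially that project's result.
pooled = volume_corrected_index(
    sum(p[0] for p in projects), sum(p[1] for p in projects),
    sum(p[2] for p in projects), sum(p[3] for p in projects))

# Averaging project-level indices instead treats every project equally.
per_project = [volume_corrected_index(*p) for p in projects]
print(pooled, sum(per_project) / len(per_project))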

 

Literature Search Summary

Chapter 6 of the Highway Safety Improvement Program Manual (HSIP Manual) focuses on project-level evaluation, detailing the benefits and drawbacks of various project-level evaluation methods. The HSIP Manual specifically notes that the evaluation procedures are intended for individual projects, groups of similar projects/treatments, or development of CMFs (1).

 

The Highway Safety Improvement Program Evaluation Guide (2017) covers program evaluation using crash-based and activity-based measures of performance. The methods discussed in the guide serve as high-level indicators of program effectiveness, offering several approaches that use aggregate program-level data (such as total crash counts and total HSIP funds implemented) to capture a big-picture evaluation of state programs. The guide does not offer a quantitative approach to program-level effectiveness that builds on EB-based before/after analyses (4).

 

Part B, Chapter 9 of the Highway Safety Manual provides guidance on how to calculate the safety effectiveness of a treatment using predictive analyses with the EB before-after safety evaluation method. The method offers a starting point for evaluation that builds on the HSM Part C predictive method and EB-based before/after project evaluations. However, this method is recommended for similar projects or treatments and is best used for CMF development (5).
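
For reference, a minimal sketch of the safety effectiveness (odds ratio) step described in that chapter is shown below, assuming the EB-expected after-period crashes and their variance have already been computed for the sites being evaluated; the function name and example numbers are illustrative, not taken from the HSM.

def safety_effectiveness(observed_after, expected_after, var_expected_after):
    """Percent crash reduction from observed vs. EB-expected after-period crashes."""
    naive_odds_ratio = observed_after / expected_after
    # Bias correction: the naive ratio is adjusted because the denominator is
    # itself an estimate with nonzero variance.
    odds_ratio = naive_odds_ratio / (1.0 + var_expected_after / expected_after ** 2)
    return 100.0 * (1.0 - odds_ratio)

# Example: 70 crashes observed after treatment versus an EB expectation of 100
# crashes (variance 25) had the treatment not been implemented.
print(safety_effectiveness(70, 100.0, 25.0))  # roughly a 30 percent reduction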

 

The Texas Transportation Institute, on behalf of the Texas Department of Transportation, conducted an analysis of more than 387 segment projects and 70 intersection projects in Texas. The research applied the HSM Part B, Chapter 9 method to the segment and intersection projects as complete groups, developing a safety effectiveness index for each. The research indicates that the methodology can be applied to an entire program of projects, but there is no discussion of how different predictive method site types were accounted for when using the EB before/after methodology. Furthermore, the researchers evaluated the groups of projects for development of CMFs (essentially program-level effectiveness) using only the naïve before/after method and the naïve before/after method with traffic volume correction (6).

 

References

(1) Herbel, S., Laing, L., & McGovern, C. (2010). Highway Safety Improvement Program (HSIP) Manual (Publication No. FHWA-SA-09-029). Washington, DC: US Department of Transportation, Federal Highway Administration.

(2) Fixing America's Surface Transportation Act. (2015). Washington, DC: U.S. Government Publishing Office.

(3) Ohio Department of Transportation. (2020). HSIP Evaluation Study – Final Report. Columbus, OH.

(4) Gross, F. (2017). Highway Safety Improvement Program (HSIP) Evaluation Guide (Publication No. FHWA-SA-17-039). Washington, DC: US Department of Transportation, Federal Highway Administration.

(5) Highway Safety Manual. (2010). Washington, DC: American Association of State Highway and Transportation Officials.

(6) Tsapakis, I., Sharma, S., Dadashova, B., Geedipally, S., Sanchez, A., Le, M., . . . Dixon, K. (2019). Evaluation of Highway Safety Improvement Projects and Countermeasures: Technical Report (Publication No. FHWA/TX-19/0-6961-R1). Texas Department of Transportation. Retrieved February 2020 from https://static.tti.tamu.edu/tti.tamu.edu/documents/0-6961-R1.pdf

 

Research Objective

To develop a methodology capable of producing quantitative estimates of how a broad group of projects has affected crash frequencies or severities. The broad group of projects will represent the breadth of treatments and project site types that may comprise a state's yearly body of HSIP projects (insofar as the sites are represented by Predictive Method site types either included in, or planned for inclusion in, the Highway Safety Manual). At a minimum, the methodology will:

•  allow for an aggregation of EB analyses across diverse site types, accounting for the different relative contributions of the various site types to the safety effectiveness calculation (odds ratio or equivalent performance metric) for the entire program of projects (see the illustrative sketch after this list),

•  account for outliers, or otherwise weight projects, to prevent high-leverage sites from obscuring the true program-level effectiveness results,

•  include robust examples to support implementation of the methodology, and

•  include recommendations for potential incorporation of the methodology into the HSM and the HSIP program evaluation process.
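
Purely as an illustration of the kind of aggregation the methodology might formalize, and not a recommendation of the final approach, the sketch below combines project-level EB odds ratios on the log scale using inverse-variance weights with a variance floor, so that no single high-leverage project receives unbounded weight; all site types, values, and names are hypothetical.

import math

# (site_type, odds_ratio, variance_of_odds_ratio) -- hypothetical example values
project_results = [
    ("rural two-lane segment", 0.85, 0.010),
    ("urban signalized intersection", 0.70, 0.004),
    ("urban signalized intersection", 0.95, 0.002),
    ("freeway segment", 0.60, 0.050),
]

log_odds, weights = [], []
for _, odds_ratio, variance in project_results:
    log_odds.append(math.log(odds_ratio))
    # Variance floor keeps one very precise (high-leverage) project from
    # receiving unbounded weight in the program-level average.
    weights.append(1.0 / max(variance, 0.005))

program_log_odds = sum(w * x for w, x in zip(weights, log_odds)) / sum(weights)
print("Program-level odds ratio:", round(math.exp(program_log_odds), 3))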

 

Urgency And Potential Benefits

Implementation of the AASHTO HSM Predictive Method has had a dramatic effect on project development and implementation processes across the country. While these improvements have undoubtedly improved traffic safety, an analogous statistical method is needed for the critical evaluation component at the program level to help direct funding efforts and focus improvements on the most effective project types. The proposed methodology addresses an immediate need for improvement of the HSIP evaluation process. The research resulting from this problem statement has the potential to be adopted for yearly evaluation by state departments of transportation as part of their HSIP, allowing states to measure the impact of the HSIP on reducing fatalities and serious injuries and make continuous process improvements, resulting in a national shift in the priority and implementation of HSIP funding. The evaluation results could also be aggregated at the national level to demonstrate the value of the HSIP. Additionally, the evaluation results could guide a state's project portfolio to ensure progress toward the federal safety performance measures.

 

The AASHTO Committee on Safety ranked this its #5 priority.

 

Implementation Considerations

Deliverables of the research project should include an implementation approach that involves outreach to users of the proposed methodology. Guidance on applying the methodology should be provided to enable agency staff to assess the level of effort, skills, and knowledge needed. Implementation efforts could include a pilot use of the research results, with plans for disseminating any reports, case studies, noteworthy practices, and lessons learned to additional potential users.

 

Recommended Research Funding and Research Period

Estimate for Funding: $400,000

Research Period: 24 Months

 

Problem Statement Author(s): For each author, provide their name, affiliation, email address and phone.

Derek A. Troyer, P.E.

Highway Safety Engineer

Program Management – Highway Safety

Ohio Department of Transportation

1980 W. Broad Street, Mailstop 3260, Columbus, Ohio 43223

614.387.5164

Derek.Troyer@dot.ohio.gov

 

Cynthia Yerkey, P.E.

Senior Project Manager/ Highway Safety Specialist

Jacobs Engineering Group

1001 Lakeside Ave., Suite 1420, Cleveland, Ohio 44114

216.777.1010

Cindy.yerkey@jacobs.com

 

Karen Scurry, P.E.

FHWA Office of Safety

1200 New Jersey Avenue, SE, Washington, D.C. 20590

202.897.7168

Karen.Scurry@dot.gov

 

Potential Panel Members: For each panel member, provide their name, affiliation, email address and phone.

To be determined.

 

Person Submitting The Problem Statement: Name, affiliation, email address and phone.

Adnan Qazi, P.E.

Arkansas Department of Transportation

AASHTO Committee on Safety, Research Subcommittee Chair

501-569-2642

Adnan.Qazi@ardot.gov