
NCHRP 08-170 [Active]

Closing the Loop: Post-Implementation Evaluation of Transportation Projects

  Project Data
Funds: $600,000
Staff Responsibility: Jennifer L. Weeks
Research Agency: High Street Consulting
Principal Investigator: Kevin Ford
Effective Date: 2/20/2024
Completion Date: 2/20/2026

BACKGROUND

State departments of transportation (DOTs) and metropolitan planning organizations (MPOs) have long worked toward implementing performance-based planning and programming (PBPP). These practices became more robust after federal requirements for performance reporting were first introduced in the Moving Ahead for Progress in the 21st Century Act (MAP-21) in 2012.

While many state DOTs and MPOs are fairly advanced in their application of performance-based planning, agencies seek to understand how to evaluate fully completed and operational transportation projects, i.e., how to conduct post-implementation evaluation. Evaluating the performance of projects after they are implemented against the projects' strategic goals and objectives provides a feedback loop that informs future project selection, funding, development, and implementation. Post-implementation evaluation results are also useful for communicating to the public and decision-makers about projects.

Research is needed to provide informative, practical direction to agencies on how to design and apply post-implementation project evaluation.

OBJECTIVE

The objective of this research is to develop a guide and toolkit for the post-implementation evaluation of projects and programs. The guide should include frameworks for process and analysis, flowcharts, and decision tools that will facilitate the post-implementation evaluation. 

The guide and toolkit should be easily implemented, using self-assessments, checklists, and other tools applicable to projects of various sizes, scopes, modes, and purposes. The final deliverables should highlight examples of effective practices through case studies or other illustrative applications of the guide and tools.  The deliverables must accommodate agencies with limited experience as well as agencies that are more advanced in their evaluation practices.

The guide and toolkit will at least encompass the following elements:

  • One or more process frameworks that lay out specific steps for conducting evaluations in a variety of settings;
  • Timelines for conducting evaluations, recognizing that some performance goals may take years to realize after a project is operational;
  • Direction on how to select the appropriate process and/or timeframe for a given project;
  • Instruction on how to identify appropriate post-implementation performance measures that respond to agencywide and project-specific goals and objectives;
  • Instruction on how to assess progress toward goals using these measures, particularly over time as a project matures in operation;
  • Methods or tools for isolating the performance of a project within a given context, such as within a given transportation and/or land use environment;
  • Methods or tools for evaluating a project in conjunction with other external factors that may influence the performance of a project in operation, such as unanticipated economic or social disruptions, or as a component of a set of transportation or land use investments or policies;
  • Methods or tools to identify, collect, and apply quantitative and qualitative data for post-implementation project evaluation; and
  • Recommendations on when and how to communicate the results of evaluations to different project stakeholders, including members of the public.

TASKS

TASK 1: PROJECT INITIATION

Kickoff meeting: At the start of the contract period, the Principal Investigator will lead a virtual kickoff meeting to introduce key members of the research team, set initial expectations, review the work plan, and clarify any project-wide scope issues.

Amplified Research Plan: The research team will prepare and deliver an amplified research plan (ARP) within 15 days of contract commencement for review and acceptance by NCHRP. The ARP will be revised and finalized following Panel review and comment.

Deliverables: (1) Virtual kickoff meeting; (2) Draft Amplified Research Plan; (3) Final Amplified Research Plan.

TASK 2: LITERATURE REVIEW 

The research team shall conduct a comprehensive review of published literature and agency documentation to identify methodologies that have been tried in the past, those currently in use, and the lessons and insights that can guide future efforts.

The research team will work with the NCHRP panel to identify known theory and practice sources that the team should target for more information. The team will prepare a spreadsheet template (log) for collecting and organizing notes. The research log will serve as a compendium of the identified publications and agencies of interest.
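
For illustration, a minimal sketch of such a log template in Python follows; the column names are assumptions for illustration, not prescribed by the work plan.

    # Minimal sketch of a research log template; column names are illustrative.
    import pandas as pd

    log_columns = [
        "source_id",          # identifier for the publication or agency document
        "source_type",        # peer-reviewed research vs. practitioner study
        "agency",             # agency of interest, if applicable
        "evaluation_method",  # post-implementation methodology described
        "key_findings",       # results and lessons learned
        "gaps_noted",         # limitations or open questions flagged in review
    ]
    pd.DataFrame(columns=log_columns).to_excel("research_log_template.xlsx", index=False)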

 Task 2.1 Compile Studies, Reports, and Literature for Post-Implementation Frameworks

The literature search will address both theory and practice. The former will cover peer-reviewed research of post-implementation frameworks, and the latter practitioner studies of real-world data and applications.

Task 2.2 Review Relevant Post-Implementation Literature 

The research team will build out the research log by documenting findings pertaining to key research questions developed in coordination with the Panel. Such questions will cover i) how post-implementation evaluations are conducted, ii) commonalities of applications, iii) strengths and weaknesses, and iv) implementation barriers.

Should gaps in knowledge persist after the in-depth review, the research team will broaden the search to related literature that may help fill these gaps and any others identified. The research team will also confirm with the panel whether the review findings reflect their expectations for the topics and frameworks to be examined in this research.

Deliverables: The research team shall submit: (1) Research log containing agency practice compendium; (2) Literature review report.

 TASK 3: ENGAGEMENT SURVEYS

Building on the information generated in Task 2, the research team will engage in meaningful discussions with transportation practitioners through a series of surveys and calls.

The purpose of these efforts will be to: i) better understand who is conducting post-implementation evaluation, for what purpose, and with which methods; ii) document lessons learned from these experiences; iii) identify the needs of those unable to conduct such evaluations or wishing to go further; iv) obtain reactions to best-practice findings from the literature review and their potential applicability to respondents' organizations; and v) identify possible case studies with participants willing to share data.

 Task 3.1 Practitioner Survey

The research team will develop and distribute a practitioner survey and conduct follow-on focus groups to obtain details missed in the literature review regarding the state of practice and what agencies need to successfully conduct ex post project evaluation.

 All draft surveys and interview content shall be reviewed and approved by NCHRP prior to their use. The following methodology will be used.

 Build a Contact List

Form a contact list of practitioners whose roles would plausibly support a post-implementation evaluation. Personalized invitations to complete the survey will be extended to agencies of interest from the Task 2 practice review.

 Draft Survey Content and Disseminate

This survey will be designed to be concise. Content will include a mix of multiple-choice and open-ended questions, along with background context about the respondent's experience with post-implementation evaluation and role within their agency. A draft will be shared with the panel for input prior to dissemination.

 Analyze Feedback

Thorough analysis of the responses will help identify common qualities in post-implementation practices and challenges, as well as confirm sufficient representation across geographies, agency types, and sizes. The research team will prepare a heat map of generalized respondent regions, word clouds of open-ended responses, and frequency charts of multiple-choice responses. A presentation documenting the findings will be prepared and shared with the panel.
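
For illustration, a minimal sketch of one such analysis output in Python (the file and column names are assumptions):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical survey export, one row per respondent.
    responses = pd.read_csv("survey_responses.csv")

    # Frequency chart for an assumed multiple-choice column.
    counts = responses["evaluation_frequency"].value_counts()
    counts.plot(kind="bar", ylabel="Respondents",
                title="Reported frequency of post-implementation evaluation")
    plt.tight_layout()
    plt.savefig("evaluation_frequency.png")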

 Task 3.2 Follow-up Phone Interviews

Based on what the research team uncovers in the survey, a subset of respondents with notable experience or insights will be identified for further discussion. The research team will look for practitioners involved in successful post-implementation evaluations who can share effective practices, as well as those who ran into challenges when attempting them or who can speak to the barriers that have prevented their agency from pursuing or using them. Follow-up discussions will take place virtually with individuals or in small groups with similar experiences.

 Panel members will be extended an opportunity to participate in these calls.

Deliverables: (1) Survey Respondent List and Response Status; (2) Draft and Final Engagement Survey; (3) Follow-up Call Agendas and Notes

 TASK 4: GAP ANALYSIS

Using what was uncovered in the previous tasks, the research team will piece together the detailed reasons why post-implementation analyses have not become mainstream. The research team will structure the findings into key gap areas, such as data limitations, insufficient evaluation tools, lack of an overarching framework, overreliance on subjective opinion, and/or organizational challenges to integrating evaluations into practice, among others identified in Tasks 2 and 3.

 The research team will clearly outline the current state and desired state of practice in each gap area. To resolve differences between the current and desired states, a set of actions to bridge the gap will be proposed along with an assessment of approximate level-of-effort for an agency to implement each action.

 Deliverables: (1) Technical Memo detailing the gap analysis methods and results

 TASK 5: PHASE I INTERIM REPORT

The research team will synthesize the findings of Tasks 2 through 4 into Interim Report 1 (IR1), a single written product that encapsulates the literature review, practitioner engagement surveys and calls, and results of the gap analysis. The research team will translate this understanding into a detailed work plan for executing Phase II and a list of all deliverables needed to address the research objective.

 NCHRP will convene an in-person interim panel meeting to discuss the proposed Phase II research plan and ideas for optimal implementation. Within one month of receiving Panel comments, the research team will submit revised versions of the interim report and work plan.  

 Panel acceptance of the IR1 and Phase II workplan, and NCHRP approval, will be required prior to advancing into Phase II.

Deliverables: (1) Draft Interim Report 1; (2) Panel Meeting Presentation; (3) Panel Meeting Summary; (4) Final Interim Report 1

 PHASE II: DEVELOPMENT AND TESTING

 TASK 6: PRELIMINARY FRAMEWORKS

A core set of conceptual and analytical frameworks, informed by Phase I research, will represent the baseline of post-implementation evaluation across a variety of situations. These will be the universal, nuts-and-bolts, step-by-step processes that practitioners will build from to tailor analyses to their unique circumstances. These frameworks will be the scaffolding for the Guidebook’s more nuanced instructions and strategies.

 The frameworks will provide a holistic set of organizational and technical steps that can be applied for agencies with varying levels of readiness for implementing post-evaluation analysis. The approach will lay out analytical options that can be applied as agencies mature their business processes, with clearly articulated benefits for why they should continually enhance their capabilities.

 Task 6.1 Organizational Framework

This task is to outline the processes and essential practices within an organization necessary for the technical analyses to run smoothly and make an impact. The organizational framework will cover at least the following challenges and elements:

  • Gain Leadership Support
  • Build Trust & Buy-In
  • Define Roles & Responsibilities
  • Establish Clear Inter-Office Communication
  • Connect Measures to Goals
  • Navigate Data
  • Monitor Performance
  • Set Assessment Points

Task 6.2 Technical Framework

To help agencies know where to start along their post-implementation journey, the research team will develop a self-assessment questionnaire that helps practitioners gauge their technical maturity and identify a corresponding analytical track to proceed along.
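
A minimal sketch of how such a questionnaire might be scored follows; the questions, scoring scale, and thresholds are hypothetical and would be defined during this task.

    # Map questionnaire answers (each scored 0-2) to an analytical track.
    # Questions and thresholds are hypothetical, not defined in the work plan.
    def maturity_track(answers: dict) -> str:
        score = sum(answers.values())
        if score <= 4:
            return "nascent"
        if score <= 8:
            return "emerging"
        return "advanced"

    example = {
        "historical_before_after_data": 0,   # 0 = none, 1 = partial, 2 = comprehensive
        "longitudinal_metrics": 1,
        "project_inventory_linkage": 1,
        "analytics_staff_capacity": 2,
        "performance_monitoring_program": 2,
    }
    print(maturity_track(example))  # -> "emerging"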

 Data will be collected and analyzed for three different levels of technical maturity.

For more nascent agencies, defining an analytical option that can be readily implemented and that demonstrates the value of post-implementation evaluation will help make the case for collecting and storing historical before-and-after data. A nascent approach within the eventual framework may walk practitioners through a programmatic analysis that compares historical programmatic spending with systemwide performance.

 For agencies with emerging capabilities, it is envisioned that sufficient cross-sectional data may exist, but longitudinal information is not yet available. This may particularly be the case for some metrics of interest. In such cases, techniques could include comparing similar site locations for which different improvements have been completed – essentially, a ‘with’ and ‘without’ comparison.

For advanced agencies, it is assumed that a comprehensive set of historical project, performance, and inventory data is available. For such agencies, the framework could include an in-depth project-level approach designed to build a training (calibration) set of past projects and their corresponding metric impacts.
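
To make the three tracks concrete, minimal sketches of each analysis follow; all file names, column names, and model choices are illustrative assumptions rather than prescriptions of the framework.

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Nascent track: relate historical programmatic spending to systemwide performance.
    annual = pd.read_csv("annual_program_history.csv")   # year, spend_constant_dollars, system_condition
    print(annual["spend_constant_dollars"].corr(annual["system_condition"]))

    # Emerging track: cross-sectional 'with' vs. 'without' comparison of similar sites.
    sites = pd.read_csv("comparison_sites.csv")          # site_id, improved (0/1), outcome_metric
    means = sites.groupby("improved")["outcome_metric"].mean()
    print("with-minus-without difference:", means[1] - means[0])

    # Advanced track: build a training (calibration) set of past projects and
    # their metric impacts, then screen a candidate project's likely impact.
    past = pd.read_csv("past_projects.csv")              # cost, scope_code, before_metric, after_metric
    past["impact"] = past["after_metric"] - past["before_metric"]
    X = past[["cost", "scope_code", "before_metric"]]
    model = LinearRegression().fit(X, past["impact"])
    print(model.predict(X.head(1)))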

 Task 6.3 Integrating the Framework into Business Processes

Regardless of technical maturity, it is envisioned that the framework will support a variety of business cases and processes:

  • Resource allocation, by estimating the typical construction cost (in constant dollars) per unit of performance improvement relative to a starting condition;
  • Project initiation, by applying machine learning algorithms to a training set of historical before-and-after data to recommend better work scopes and/or screen the likely efficacy of proposed solutions;
  • Project prioritization, by developing an impact-based score determined by comparing the similarity of proposed projects to those completed in the past under similar conditions (see the sketch following this list);
  • Project selection, by evaluating the efficacy of past project combinations;
  • Target setting, by looking back at what different project set compositions produced relative to the set of needs addressed; and
  • Recalibrating predictive models after observing the performance impact of construction.
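
As one illustration of the prioritization case above, a minimal sketch of a similarity-based impact score follows; the features, file names, and choice of a nearest-neighbor method are assumptions.

    import pandas as pd
    from sklearn.neighbors import NearestNeighbors
    from sklearn.preprocessing import StandardScaler

    past = pd.read_csv("past_projects.csv")          # features plus observed 'impact'
    proposed = pd.read_csv("proposed_projects.csv")  # same features, no impact yet
    features = ["cost", "traffic_volume", "before_condition"]

    # Score each proposed project by the mean observed impact of its
    # five most similar historical projects (features standardized first).
    scaler = StandardScaler().fit(past[features])
    nn = NearestNeighbors(n_neighbors=5).fit(scaler.transform(past[features]))
    _, idx = nn.kneighbors(scaler.transform(proposed[features]))
    proposed["impact_score"] = [past["impact"].iloc[i].mean() for i in idx]
    print(proposed.sort_values("impact_score", ascending=False).head())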

Post-implementation evaluation data can help justify decisions and enable agencies to achieve better performance outcomes. Information on how to use the framework and what methods, tools, and resources are required to implement it will be included in the team's reporting to the panel and documented in a separate technical memorandum to accompany the Organizational and Technical Framework technical memo.

Deliverables: (1) Organizational and Technical Framework; (2) Business Process Integration Framework

 TASK 7: TOOL DEVELOPMENT

An interactive tool will be developed that allows practitioners to jump right in and experiment with the necessary data and analytical components of the Task 6 framework without needing to read the entire guide.

The research team will develop an interactive wireframe to help establish an initial set of requirements for a spreadsheet application using widely available software such as Excel, then proceed to design a highly visual tool that reveals insights from pre-configured analyses of users' own data, entered through a clear template.

 Task 7.1. Envisioning Tool Features through an Interactive Wireframe

Prior to embarking on the development of a lightweight prototype, the research team will share an interactive wireframe – via a web URL – with the NCHRP panel members to understand layout and style preferences and get feedback on desired tool functionality. Panel comments will be solicited on: i) functional requirements: user interface layout, controls, and design; ii) data input requirements for future users to enter before-and-after data in an annotated template tailored to each maturity level defined in Task 6; and iii) reporting requirements for supported business case applications.
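
For illustration, a minimal sketch of generating such an annotated input template follows; the sheet layout and column names are assumptions pending the requirements settled in this task.

    from openpyxl import Workbook
    from openpyxl.comments import Comment

    # Build a one-sheet Excel template with annotated header cells.
    wb = Workbook()
    ws = wb.active
    ws.title = "BeforeAfterData"
    ws.append(["project_id", "measure", "before_value", "after_value", "observation_year"])
    ws["C1"].comment = Comment("Value observed prior to construction.", "template")
    ws["D1"].comment = Comment("Value observed once the project is operational.", "template")
    wb.save("input_template.xlsx")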

 Task 7.2. Building out the Post-Implementation Tool

Once an initial set of tool requirements is decided upon, the research team will pivot to building a post-implementation evaluation tool with the following characteristics:

  • Visually appealing and accessible – The tool will use graphics and provide full functionality to those with different degrees of visual ability.
  • Configurable inputs for future use – The tool will be designed to be 'plug and go,' with reasonable default parameters that users can overwrite and grow with as they become more comfortable with the tool and underlying framework.
  • Tailored to user needs – The tool will utilize iterative feature development and user testing (including at panel meetings).
  • Stands the test of time – The tool will rely on widely available software so that the value of the research products remains apparent for years to come.

 Deliverables: (1) Tool Wireframe; (2) Requirements Log; (3) Draft Post-Implementation Evaluation Tool

 Note: Although existing software programs may be used in this research project, no new programs are to be developed to support the tool.

 TASK 8: CASE STUDY EXAMPLES

The team will execute at least three case studies that gather detailed feedback on real-world application and help identify possible revisions to the framework and tool.

The case studies will begin with pilot testing of the proposed frameworks and tool with select practitioners, followed by a narrative on each participating agency's background related to post-implementation evaluation and its experience applying the draft products.

 Task 8.1. Test Frameworks and Tool with Practitioners

As a follow-up to the Task 3 practitioner calls, the research team will reengage those who shared before-and-after data. For these agencies, the research team will extend an offer to review the draft framework presentation virtually and demonstrate the draft tool with their data. Initial feedback from the call will be documented, followed by an opportunity to further test the tool. The research team proposes to share the tool along with a feedback survey for gathering comments from the participating agency, including the perceived level of viability for different business use cases within the agency.

After receiving the additional responses, the team will prepare a prioritized list of potential modifications to both the guide and tool to share with the NCHRP panel before incorporating them in the final guide and tool.

 Task 8.2. Documenting Testing Results in Case Studies

The research team will translate user feedback into a series of case studies compiled into a technical memorandum for the panel's review. Each case study will include an overview of the state of practice at the reporting agency; describe the before-and-after data provided; detail challenges experienced in collecting the data and applying it to this project; outline the steps and strategies used to complete the analysis; capture insights from the participant's application of the frameworks and tool; and identify the business use case(s) the agency feels are the best fit for post-implementation evaluation. Each draft writeup will be shared with the corresponding agency for review and for confirmation of willingness to publish it as part of the final research products developed in Phase III.

 Deliverables: (1) Practitioner Feedback Log; (2) Tool instances for each case study; (3) Documented Case Studies

 TASK 9: PHASE II REPORT

Interim Report 2 (IR2) will serve as the official submission of the team's draft deliverables from Phase II, along with any relevant context needed by the Panel to assess, interpret, and use them. It will serve as a major assessment point to determine whether the products will meet project goals or whether changes or additional research are needed to best accomplish their purpose.

The research team will compile into a report the drafted framework – inclusive of an evaluation playbook for implementation – the draft post-implementation evaluation tool with corresponding case study documentation, and an annotated outline of the anticipated guidebook, to share with the NCHRP Panel.

 NCHRP will convene a virtual panel meeting at the close of Phase II to discuss findings and recommend any remaining modifications to the documents in anticipation of their submission as final deliverables. NCHRP approval will be required to advance to Phase III.

Deliverables: (1) Draft Interim Report 2; (2) Panel Meeting Summary; (3) Final Interim Report 2

 PHASE III: FINAL PRODUCTS

TASK 10: DELIVERABLE PRODUCTION

The team will integrate comments received from the panel on the package of draft deliverables into each of the final deliverables. The research team shall prepare a detailed Response to Comments Memorandum, including documentation of changes made.

Anticipated deliverables of this project include the following: (1) a guide and toolkit for supporting ex-post project evaluation; (2) case study examples to support application of the developed guide and toolkit; (3) an implementation plan for supporting application of the products of this research in practice; (4) a Conduct of Research report that summarizes the research process and outcomes; and (5) communication materials targeting stakeholders, leadership, etc.   

STATUS: Project is in Phase III.    

 
