Sue Haselhorst, ERS, and Joe Dolengo, National Grid, presented at ACEEE Summer Study, 2016
ABSTRACT
It has become a cliché that implementers want evaluation results sooner and with more actionable recommendations; yet impact results often arrive two to four years after the measures are installed and paid for, by which time the evaluators' recommendations are stale.
National Grid’s (NGrid’s) New York Commercial and Industrial (C&I) evaluation study manager has launched a bold and innovative approach to evaluations that is focused on quick turnaround measurement and verification (M&V) and simultaneous process-oriented feedback directed at improving program implementation. This thinking is inspired by the New York Reforming the Energy Vision (REV) initiative, which calls for evaluations that are “designed and implemented to yield timely information that [feeds into] the annual iterations of utility programs.” This approach incorporates these features:
- A rolling sample to select sites for M&V during the implementation period, permitting results to be reported months, rather than years, after the measures are installed (see the illustrative sketch after this list).
- Leveraging the detailed M&V engineering data collection process to give program implementers granular feedback on the application process, technology performance, and on-site operation of the measures.
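As a minimal sketch of the rolling-sample idea, and not code from the study, the fragment below draws a random M&V sample from each batch of recently completed projects instead of waiting for the program year to close. The function name, the monthly batching, and the simple random draw are hypothetical simplifications; an actual design would likely stratify sites by claimed savings.

```python
import random

def select_rolling_sample(completed_projects, sample_fraction=0.1, seed=None):
    """Draw a random M&V sample from projects completed in the current
    period, so that field work can begin within weeks of installation."""
    rng = random.Random(seed)
    n = max(1, round(sample_fraction * len(completed_projects)))
    return rng.sample(completed_projects, n)

# Hypothetical monthly batches of paid/installed projects.
monthly_completions = {
    "2016-01": ["P-101", "P-102", "P-103", "P-104"],
    "2016-02": ["P-105", "P-106"],
}
for month, projects in monthly_completions.items():
    print(month, select_rolling_sample(projects, sample_fraction=0.5, seed=1))
```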
This paper reports on the implementation of this approach, its reception by the implementation staff, the aspects of the program that have worked well, and where adjustments have been or still need to be made.
Introduction
For years, the realization rate (the ratio of evaluated savings to tracking-system claimed savings) was the major deliverable of an impact evaluation. In the late 1990s, regulators began requiring verification of the claimed savings to ensure the reliability, cost-effectiveness, and/or appropriateness of the shareholder incentives. The realization rate, with separate factors for energy and demand, encapsulated this gross savings verification in a single number. The paradigm worked well during a period of stable measures, programs, and goals, and delayed evaluation results were acceptable, although recognized as not ideal. Table 1 presents the average time lapse between the mid-point of the evaluated program year (PY) and the publication date for impact evaluation studies published between 2012 and 2015, inclusive.
Table 1. Average time lapse between installation and impact evaluation
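For concreteness, the realization rate described above is the simple ratio below, computed separately for energy and demand; the worked figures in the comment are an invented example, not results from this study.

```latex
\[
\mathrm{RR}_{\text{energy}} = \frac{\text{evaluated gross kWh savings}}{\text{claimed (tracking) kWh savings}},
\qquad
\mathrm{RR}_{\text{demand}} = \frac{\text{evaluated gross kW savings}}{\text{claimed (tracking) kW savings}}
\]
% Invented example: a program claiming 1{,}000 MWh that is evaluated
% at 850 MWh has RR_energy = 850 / 1000 = 0.85.
```

Values below 1.0 indicate that the tracking system over-credited savings; values above 1.0, the reverse.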
In recent years, program goals and budgets have increased dramatically, with even more money expected to flow from the investment community. Codes and standards are changing rapidly and cutting deeply into the available energy efficiency potential. This confluence of factors has driven more rapid program design changes and, in turn, a need for quicker evaluations. From a program implementer’s point of view, implementation recommendations delivered two or three years after a program’s year-end are likely to be stale, since the program’s design, quality control procedures, and measure mix have most likely changed in that timeframe.