
Presentation Transcript


  1. FERC 745 Baseline Analysis. Presented to the NEPOOL Markets Committee Meeting, June 2, 2011.

  2. Background • On March 15, 2011, FERC issued Order No. 745 Demand Response Compensation in Organized Wholesale Energy Markets. • In Order No. 745, the Commission found that: • “when a demand response resource participating in an organized wholesale energy market is capable of balancing supply and demand in the energy market and is cost-effective, as determined by the net benefits test …, that demand response resource should receive the same compensation, the LMP, as a generation resource when dispatched.” (Order at P82)

  3. Background (Continued) • CRA showed that the net benefits test will result in Threshold Prices in the $30 to $60/MWh range, which could result in prolonged periods of continuous price-responsive DR events. • "Déjà vu all over again" (Yogi Berra): ISO-NE had to deal with this same issue in 2008 with the DALRP. • The problem is that continuous DR events cause the accuracy of the baseline to degrade over time because there is little or no recent data to refresh the baseline.

  4. Background (Continued) • The Commission also directed the ISO: “… to develop appropriate revisions and modifications, if necessary, to ensure that their baselines remain accurate and that they can verify that demand response resources have performed.” (Order at P94) • This presentation summarizes analysis conducted on the impact of frequent clearing of demand response in energy markets on baseline accuracy, and investigates methods to improve baseline accuracy.

  5. Analysis Methodology

  6. Analysis Objective • Determine how baseline accuracy is impacted by consecutive event days. • Load reduction offers at the Demand Reduction Threshold Price (DRTP) will clear on many consecutive days. • Determine how baseline accuracy is impacted by the choice of baseline adjustment method. • Determine how baseline accuracy is impacted by the application of a Baseline Integrity Price (BIP).

  7. Analysis Approach • Objective: Quantify how closely variations in the ISO-NE baseline methodology estimate an asset’s actual load. • Data analyzed • Actual 2010 hourly loads from a sample of customers (assets) that DID NOT participate in Demand Response Program price events. • Used Actual Day-Ahead LMP and RT LMP data. • Estimated hourly baselines for each asset using ISO-NE’s “90/10” methodology. • Three baseline variations and three bidding scenarios were simulated.
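
  The "90/10" update rule is not spelled out in these slides. As a rough illustration only, the Python sketch below assumes the commonly described exponential form, in which each hourly baseline value is refreshed as 90% of its prior value plus 10% of that day's metered load on eligible non-event weekdays; function and variable names are illustrative, not ISO-NE's.

      # Minimal sketch of a "90/10"-style rolling baseline (assumed form).
      def update_baseline(baseline, metered_load, refreshed):
          """baseline, metered_load: lists of 24 hourly kW values.
          refreshed: True only on eligible non-event weekdays."""
          if not refreshed:
              return baseline  # no usable data; baseline carries forward unchanged
          return [0.9 * b + 0.1 * m for b, m in zip(baseline, metered_load)]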

  8. Baseline and Bidding Strategies • Baseline Variations • ISO-NE Baseline with Asymmetric Adjustment (Current Method) • ISO-NE Baseline with no Adjustment • ISO-NE Baseline with Symmetric Adjustment • Bidding Variations • Bids at DR Threshold Price (DRTP) • Bids slightly greater than BIP • Bids at $1,000/MWh (Never Clears)

  9. Measuring Baseline Error • Methodology: • In each cleared hour, for each asset, calculate the difference between the asset’s adjusted baseline and its actual load. • Analysis included only assets that DID NOT participate in Demand Response Program price events. • For each asset and hour, baseline error is calculated as the difference between the computed baseline and actual metered load. • Summaries of baseline errors over customers and hours measure overall baseline method accuracy.
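
  As a concrete illustration of this calculation (names are illustrative, not ISO-NE's), the sketch below keeps only cleared hours and returns the signed difference between the adjusted baseline and the metered load for one asset:

      # Per-asset, per-hour baseline error over cleared hours only (sketch).
      def hourly_errors(adjusted_baseline, actual_load, cleared):
          """Equal-length hourly lists; cleared[h] is True if hour h cleared."""
          return [b - a  # positive => baseline over-estimates actual load
                  for b, a, c in zip(adjusted_baseline, actual_load, cleared)
                  if c]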

  10. Measuring Baseline Error (cont.) • In baseline variations using a daily baseline adjustment, the adjustment was made using load during the two-hour period starting 2.5 hours prior to the simulated event. • The start time for each simulated event was variable and was determined by the first hour in which the LMP exceeded the bid price of the bidding strategy. • Only non-holiday weekdays were included in the analysis.
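
  The slides name the adjustment variations (slide 8) but do not give their formulas, so the sketch below rests on two hedged assumptions: the additive adjustment is the average gap between metered load and baseline over the pre-event window, and "asymmetric" applies that adjustment only when it raises the baseline while "symmetric" applies it in either direction. With hourly data, the two-hour window starting 2.5 hours before event hour H is approximated as hours H-3 and H-2.

      # Hedged sketch of the daily baseline adjustment variations.
      def adjusted_baseline(baseline, load, event_start_hour, method="symmetric"):
          """baseline, load: 24 hourly values; event_start_hour: 0-23."""
          window = [event_start_hour - 3, event_start_hour - 2]  # assumed 2-hour window
          gap = sum(load[h] - baseline[h] for h in window) / len(window)
          if method == "none":
              adj = 0.0
          elif method == "asymmetric":
              adj = max(gap, 0.0)  # assumed: adjustment applied upward only
          else:  # "symmetric"
              adj = gap            # adjustment applied up or down
          return [b + adj for b in baseline]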

  11. Data Sources • ISO-NE provided hourly data from over 600 commercial and industrial customers (assets) that did not participate in 2010 Demand Response Program price events. • Data included 2010 load and LMP data across the same time period as that used by Charles River Associates (CRA) for the NBT Threshold and BIP Analysis.

  12. Baseline Error • Error = (Baseline) – (Actual Load) • Positive error means baseline is over-estimating actual load. • Calculated load reduction amounts would be overstated. • Negative error means baseline is under-estimating actual load. • Calculated load reduction amounts would be understated.

  13. Median Relative Error • Measures bias or systematic error. • Error = (Baseline) – (Actual Load). • Daily, asset-level relative error = (mean asset error) / (mean actual load). • Allows comparison across assets of varying sizes. • Median relative error = median of the daily relative errors over all the assets analyzed, summarized by month or by day.
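
  A short sketch of these two statistics (illustrative names), building on the hourly errors computed earlier:

      from statistics import median

      # Daily, asset-level relative error = (mean error) / (mean actual load).
      def asset_relative_error(errors, actual_loads):
          """Values for one asset over one day's cleared hours."""
          return (sum(errors) / len(errors)) / (sum(actual_loads) / len(actual_loads))

      # Median relative error across all assets for a given day (or pooled by month).
      def median_relative_error(per_asset_relative_errors):
          return median(per_asset_relative_errors)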

  14. Asset-Specific Hourly Error: Error = (Baseline) – (Actual Load)

  15. Asset-Specific Relative Error: Relative Error = (Average Error) / (Average Load). Monthly Relative Error uses averages over all cleared hours in a month.

  16. Median Daily Relative Error • Calculated asset-level relative errors for each month. • Identified the Median Relative Error for each day. • There are an equal number of assets with relative errors above and below the median. • All assets are equally weighted, because error is calculated in percentage terms. • The median is less affected by outliers than the mean.

  17. Investigating Baseline Accuracy • The following series of slides investigates the impact of the following scenarios on baseline accuracy: • Never Clearing (baseline is refreshed daily) • This is the benchmark against which the other scenarios and methods will be compared. • Clearing Every Day (baseline is not refreshed)

  18. Never Clearing BenchmarkRel. Error starting January 1, 2010 to April 30, 2010 Relative error does not exhibit bias. Daily data points are clustered evenly around zero. Avg. rel. err = -0.1%

  19. Never Clearing BenchmarkRel. Error starting June 1, 2010 to September 30, 2010 Summer relative error is more variable but daily data points are still clustered evenly around zero – No bias. Avg. rel. err = -0.1%

  20. Never Clearing BenchmarkRel. Error for August 1, 2010 to November 30, 2010 Relative error variability decreases into winter – Slight positive bias due to natural lag in 90/10 baseline. Avg. rel. err = 3.0%

  21. Never Clearing BenchmarkRel. Error starting September 1, 2010 to December 31, 2010 Relative error variability decreases into winter but still slight positive bias. Avg. rel. err = 2.9%

  22. Never Clearing BenchmarkRel. Error starting October 1, 2010 to December 31, 2010 Relative error variability decreases into winter but still slight positive bias. Avg. rel. err = 2.6%

  23. Always Clearing (Worst Case): Rel. Error, January 1, 2010 to April 30, 2010. Relative error variability is still low due to winter, but with clearing every day there is a small negative bias. Avg. rel. err = -2.1%

  24. Always Clearing (Worst Case): Rel. Error, June 1, 2010 to September 30, 2010. Relative error variability increases during summer; the non-updated, late-spring baseline causes greater negative bias. Avg. rel. err = -7.9%

  25. Always Clearing (Worst Case): Rel. Error, August 1, 2010 to November 30, 2010. The summer baseline carried into winter causes positive bias. Avg. rel. err = 12.3%

  26. Always Clearing (Worst Case): Rel. Error, September 1, 2010 to December 31, 2010. A September start exhibits the most bias. Avg. rel. err = 17.0%

  27. Always Clearing (Worst Case): Rel. Error, October 1, 2010 to December 31, 2010. Starting in October with a baseline updated through September, bias through December starts to decrease. Avg. rel. err = 14.0%

  28. Conclusions from Initial Baseline Analysis • When the baseline is refreshed daily, bias in the baseline ranges from -0.1% to 3% on average depending on the starting date. • Bias caused by inherent lag in the 90/10 baseline methodology. • When the baseline is never refreshed, the bias in the baseline ranges from -7.9% to 17% on average depending on starting date. • Bias caused by baseline calculated from data in one season being carried into the next season. • The highest average bias of 17% occurs when the start date is in the beginning of September.

  29. Simulation of Baseline Scenarios • Bids are offered at the DR Threshold Price. • A bid clears if the DA or RT price > Threshold. • Start date: September 1, 2010. • Evaluate three baseline scenarios using the ISO-NE 90/10 baseline with: • No Adjustment • Asymmetric Adjustment (Current ISO-NE DALRP baseline) • Symmetric Adjustment (Proposed ISO-NE PRD baseline)
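
  The sketch below ties the pieces together for one asset-day, reusing update_baseline from the sketch under slide 7. The choice to scan all 24 hours and to refresh the baseline only on non-cleared days is an assumption drawn from the description above, not a statement of the ISO-NE simulation code.

      # Sketch of one simulated day: the bid clears if either the DA or RT LMP
      # exceeds it in any hour, the event starts at the first such hour, and
      # the baseline is refreshed from metered load only on non-cleared days.
      def simulate_day(baseline, load, da_lmp, rt_lmp, bid):
          event_start = next((h for h in range(24)
                              if da_lmp[h] > bid or rt_lmp[h] > bid), None)
          cleared = event_start is not None
          new_baseline = update_baseline(baseline, load, refreshed=not cleared)
          return cleared, event_start, new_baseline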

  30. DR Threshold Price Simulation: Rel. Error for Sept. to Dec. Period • Current DALRP Method with Asymmetric Adjustment increases the bias (red triangle). • Unadjusted Baseline shows the same bias as the Always Clearing plot on slide 26 (blue square). • Proposed PRD Method with Symmetric Adjustment lowers the bias (green dot).

  31. Results of DR Threshold Price Simulation • Unadjusted and Asymmetric Adjustments produce average bias of 17% and 18.1%, respectively. • Applying the Symmetric Adjustment reduces the average bias to 10.4%. • Baseline bias is also a function of the number of consecutive cleared days.

  32. Impact of Baseline Method and Consecutive Events on Baseline Bias • Plot shows the effect of consecutive events on average bias for different adjustments • Baseline Integrity Price (BIP) was evaluated as a mechanism to further reduce bias by refreshing the data used in calculating the baseline.

  33. ISO-NE and CRA to Discuss the BIP Concept

  34. Analysis of Baseline Scenarios Using Baseline Integrity Price (BIP) • The following slides evaluate the three baseline scenarios where bids are offered at the Demand Reduction Threshold Price (DRTP) and the BIP is used to refresh the baseline computation. • ISO-NE Baseline with Asymmetric Adjustment (Current Method) • ISO-NE Baseline with no Adjustment • ISO-NE Baseline with Symmetric Adjustment
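
  The exact dispatch rule behind the BIP is left to the ISO-NE/CRA discussion noted on slide 33, so the sketch below is only one plausible reading and should be treated entirely as an assumption: offers stay at the DRTP, but the asset is dispatched only when the LMP also exceeds the BIP, and on all other days the metered load is allowed to refresh the baseline (again reusing update_baseline from the earlier sketch).

      # Hedged, illustrative reading of the BIP mechanism (assumed rule only):
      # dispatch occurs only when the LMP exceeds both the bid and the BIP;
      # otherwise the day's metered load refreshes the baseline.
      def simulate_day_with_bip(baseline, load, da_lmp, rt_lmp, bid, bip):
          peak_price = max(max(da_lmp), max(rt_lmp))
          dispatched = peak_price > bid and peak_price > bip
          new_baseline = update_baseline(baseline, load, refreshed=not dispatched)
          return dispatched, new_baseline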

  35. Baseline Integrity Price (BIP) Simulation: Rel. Error for Sept. to Dec. Period • The combination of BIP & Symmetric Adjustment produces the lowest bias. • The BIP approach lowers bias for all baseline variations. • “NB price with BIP” = Offers are equal to the Net Benefits Test Threshold Price and the Baseline Integrity Price is used.

  36. Results of BIP Simulation: Average Bias (Sept – Dec Period) • Using the BIP Method with a Symmetric Adjustment reduced average bias to 4.7%, compared to 10.4% bias with only a Symmetric Adjustment and no BIP.

  37. Comparison of DRTP with & without BIP: Rel. Error for Sept. to Dec. Period • With no BIP, the Symmetric Adjustment drops average bias to 10.4%. • With no BIP, the Unadjusted and Asymmetric Adjusted baselines have 17-18.1% average bias (red & blue lines). • The combination of BIP & Symmetric Adjustment drops average bias to 4.7%. • “NB price No BIP” = Offers are equal to the Net Benefits Test Threshold Price and the Baseline Integrity Price is not used. • “NB price with BIP” = Offers are equal to the Net Benefits Test Threshold Price and the Baseline Integrity Price is used.

  38. Lessons Learned • None of the baseline methods analyzed can remain accurate without being refreshed using recent meter data. • The ISO-NE Baseline with Asymmetric Adjustment (Current Method) has the most bias of the three variations tested with or without BIP. • The ISO-NE Baseline with Symmetric Adjustment had the least bias of the three variations tested. • The BIP Method reduced bias for all three variations tested.

  39. Conclusions on Baseline Options • The ISO-NE baseline with Symmetric Adjustment, when used with the BIP method, will produce the most accurate (i.e., least biased) baseline estimate.

  40. Thank you for your attention.
