Improved Gridded Wind Speed Forecasts with Block MOS 


State of the art of wind speed MOS and gridded MOS

MOS methods for wind speed forecasts have been applied and studied extensively in the field of wind power generation, because of the important economic impact of wind speed forecast errors for wind power companies and grid regulators. These methods are usually applied to the locations of wind farms only. Based on the reviews of Lei et al. (2009) and Jung and Broadwater (2014), two main approaches are used in the wind power field: time series analysis methods and data mining techniques.
Among the time series methods, autoregressive (AR), moving average (MA) and autoregressive moving average (ARMA) models, among others, are used to forecast wind speed or wind power at very short range (less than 6 h ahead). These models build the forecast as a linear combination of past forecasts and/or forecast errors. For instance, Liu et al. (2010) decompose the wind speed time series into wavelet components, model each wavelet component with a time series method, then sum the forecast components.
For longer lead times, the main approach is to use support vector machines (SVM) or neural networks (NN). Colak et al. (2012) review data mining techniques for wind speed and wind power forecasting. Some applications build a hybrid model, with time series methods modelling the linear evolution of wind speed and an SVM or NN correction accounting for the nonlinear evolution. Bhaskar and Singh (2012) decompose the wind speed time series into wavelets, then regress on each wavelet component with an adaptive wavelet neural network. Guo et al. (2012) decompose the wind speed time series into a small number of mode functions, use an NN to forecast each sub-series (with a variable selection step), then sum the forecast components. Douak et al. (2013) select weather predictors with an active learning technique, then use kernel ridge regression to build the MOS. Shi et al. (2012) and Cadenas and Rivera (2010) use ARIMA to model the linear evolution of the wind speed time series, and an NN or SVM to model the nonlinear part. Liu et al. (2012) use ARIMA to choose the structure of an NN MOS or to initialize a Kalman filter that then builds their MOS. Haque et al. (2012) build several versions of NN and add as inputs the average observed values on “similar days”, which improves performance over the same NN without them. Qin et al. (2011) choose dynamically between an NN method and a persistence method according to the wind conditions. In the meteorological community, Sfanos and Hirschberg (2000) from the National Oceanic and Atmospheric Administration (NOAA) use multiple linear regression. Kusiak et al. (2009) compare the performance of different data mining techniques (SVM, NN, regression trees and random forests) for forecasting wind speed and wind power up to a forecast horizon of 84 h.
Some authors build several MOS forecasts and combine them to further improve performance. Bouzgou and Benoudjit (2011) combine four MOS forecasts (linear regression, two neural networks and SVM) with three combination strategies (a simple average, a weighted average and an NN). Li et al. (2011) use a Bayesian framework to combine several neural network MOS for short-term forecasts. Zhou et al. (2011) compare several SVM-based MOS with different kernels, and conclude that the choice of kernel matters little but that forecast performance is sensitive to the value of the fitting parameters. Cheng and Steenburgh (2007) compare traditional MOS with bias removal by a Kalman filter or a 7-day running mean. They conclude that traditional MOS works best when the weather changes, that the other two approaches perform best during quiescent cool regimes, and that all approaches are equivalent during quiescent warm regimes. They do not propose combining the forecasts to try and get the best of each.
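To make the combination idea concrete, here is a minimal, self-contained sketch of a weighted average of several point MOS forecasts, with weights inversely proportional to each member's past mean squared error. This is not any of the cited authors' actual methods; the data, member roles and inverse-MSE weighting scheme are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three MOS forecasts of 10-m wind speed (m/s) and the
# verifying observations over a training window of 200 hourly cases.
obs = rng.gamma(shape=2.0, scale=2.0, size=200)
forecasts = np.stack([
    obs + rng.normal(0.3, 1.0, size=200),   # e.g. a linear-regression MOS
    obs + rng.normal(-0.2, 1.3, size=200),  # e.g. a neural-network MOS
    obs + rng.normal(0.0, 1.6, size=200),   # e.g. an SVM-based MOS
])

# Inverse-MSE weights: members with smaller past errors get larger weights.
mse = ((forecasts - obs) ** 2).mean(axis=1)
weights = (1.0 / mse) / (1.0 / mse).sum()

# The combined forecast for a new case is the weighted average of the members.
new_members = np.array([4.1, 3.6, 5.0])
combined = float(weights @ new_members)
print(weights, combined)
```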
As for gridded MOS, there is no consensus in the literature on whether one should grid MOS previously built at measurement locations, or grid measurements before building MOS on a grid. To the best of our knowledge, no systematic comparison of the two approaches has been carried out. NOAA uses gridded MOS operationally, by gridding MOS built at station locations. Glahn et al. (2009) and Gilbert et al. (2009) detail how their method iteratively corrects grid-point forecasts after comparing them to MOS built at nearby station locations. Based on the same methodology, Im et al. (2010) detail the NOAA analysis that grids hourly measurements of surface parameters, in order to verify the gridded MOS and produce very short-term forecasts. Mass et al. (2008) grid MOS by estimating the bias at station locations, associating each grid point with a station of similar elevation and/or land-use characteristics, and estimating the bias correction at the grid point. Solari et al. (2012) use simulations on a very fine 270 m grid over port areas to build a linear relationship between wind speed measurements and simulated values over the area. The statistical relationship built at measurement locations is used to correct wind speed forecasts over the whole grid. Burlando et al. (2010) adopt a similar approach to forecast wind speed along a railway line. Charba and Samplatsky (2009, 2011) build their MOS on a grid by using an analysis of rainfall accumulation as the observation. Thorey et al. (2015) use previously gridded radiation measurements as the observation.
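As an illustration of the bias-gridding idea attributed above to Mass et al. (2008), the following sketch associates each grid point with the nearby station of most similar elevation and borrows that station's bias correction. The station set, search radius and bias values are synthetic assumptions; this is a simplified stand-in, not a reproduction of the published method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: biases estimated at 50 stations (forecast minus observation),
# and a few grid points that each borrow the correction of the most "similar"
# station, here the closest elevation within a horizontal search radius.
station_xy = rng.uniform(0, 100, size=(50, 2))   # km
station_z = rng.uniform(0, 1500, size=50)        # m
station_bias = rng.normal(0.5, 0.3, size=50)     # m/s

grid_xy = rng.uniform(0, 100, size=(5, 2))
grid_z = rng.uniform(0, 1500, size=5)

def grid_correction(gxy, gz, radius_km=30.0):
    dist = np.linalg.norm(station_xy - gxy, axis=1)
    nearby = dist <= radius_km
    if not nearby.any():
        nearby = dist <= dist.min() + 1e-9  # fall back to the closest station
    idx = np.flatnonzero(nearby)
    best = idx[np.argmin(np.abs(station_z[idx] - gz))]
    return station_bias[best]

corrections = np.array([grid_correction(xy, z) for xy, z in zip(grid_xy, grid_z)])
print(corrections)  # subtract from the raw forecast at each grid point
```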

State of the art of EMOS and combination thereof

EMOS are much studied and used in the weather forecasting community, as reviewed in Gneiting (2014). The issue is addressed with parametric or nonparametric approaches. In parametric EMOS, the forecast distribution is taken from a family of probability distributions whose parameters depend on the ensemble. For wind speed, several distributions have been tried, such as the gamma distribution (Sloughter et al. 2010; Möller and Scheuerer 2013), the truncated normal distribution for the wind speed (Thorarinsdottir and Gneiting 2010) or a transformation thereof (Hemri et al. 2014), or the log-normal distribution (Baran and Lerch 2015). The parameters of the distribution are modelled by a linear regression with the forecast values, or statistics thereof, as inputs. This kind of approach is called non homogeneous regression (NR, Gneiting et al. 2005). The more flexible Bayesian model averaging (BMA), introduced by Raftery et al. (2005), builds a mixture of several parametric distributions whose weights are dynamically computed in a Bayesian framework (Baran et al. 2014). As noted in Gneiting (2014), other EMOS methods use kernel functions to estimate the forecast density or to fit density functions on the ensemble, but they can be interpreted in the framework of BMA. As for nonparametric EMOS methods, the quantile-to-quantile transformation, introduced by Bremnes (2007), is used operationally at the Hungarian Meteorological Service (HMS) to post-process ensemble forecasts (Ihász et al. 2010). This method establishes a bijection between quantiles of the same order in the climatology of the observation and in the climatology of the forecast. Each forecast value is then transformed into the associated observed value thanks to this bijection. Several variations exist to compute the required forecast climatology, such as using a rolling period as in Flowerdew (2012), or running the ensemble NWP model over past years as is done at the HMS. Hamill and Whitaker (2006) advocate the use of analog methods as an EMOS method, forecasting the observations of past days whose ensemble forecasts most resemble the current one. In Hamill and Colucci (1998), the rank histogram is used to estimate the forecast distribution. The rank histogram is simply the histogram of the rank of the observation in each forecast/observation pair. The proportion of observations of each possible rank in a training sample is used to associate an order with the future sorted forecast values. Between two consecutive sorted values, the distribution is assumed uniform. Extrapolating the probability function outside the range of the forecast values may require some parametric assumption. Taillardat et al. (2016) compare NR to EMOS based on random forests for several parameters.
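To illustrate the quantile-to-quantile transformation described above, the sketch below maps each raw ensemble member to the observed value of the same quantile order, using empirical quantiles of synthetic forecast and observation climatologies. It is a simplified stand-in, assuming invented climatologies, and not the HMS operational implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical climatologies: past forecast values and past observations
# of 10-m wind speed (m/s), e.g. collected over a rolling period.
past_fcst = rng.gamma(2.0, 2.5, size=3000)
past_obs = rng.gamma(2.0, 2.0, size=3000)

# Empirical quantiles of both climatologies define the bijection.
probs = np.linspace(0.01, 0.99, 99)
q_fcst = np.quantile(past_fcst, probs)
q_obs = np.quantile(past_obs, probs)

def q2q(raw_members):
    # Map each raw ensemble member to the observed value of the same quantile order,
    # interpolating linearly between the tabulated quantiles.
    return np.interp(raw_members, q_fcst, q_obs)

raw_ensemble = np.array([2.1, 3.4, 5.0, 6.7, 9.2])
print(q2q(raw_ensemble))
```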
The combination of EMOS does not seem to be common practice. In some ways, BMA can be considered a combination method. Baran (2014) uses non homogeneous regression for wind speed ensemble forecasts with a truncated normal and a log-normal distribution, then chooses between them according to the value of the ensemble mean. Baran and Lerch (2016) use as their final forecast a weighted average of the same two NR EMOS. The combination weight and the parameters of the two combined distributions are fitted by optimizing the CRPS or the likelihood. Baudin (2015) combines the pooled and sorted values of several ensembles with combination methods that come with theoretical guarantees that the combination cannot perform much worse than some reference forecast. The combination methods used are adaptations to probabilistic forecasts of methods designed to combine point forecasts (Cesa-Bianchi et al. 2006; Mallet et al. 2007). The weights are computed with functions of the CRPS as a measure of performance.
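The following sketch illustrates one simple combination strategy of the kind mentioned above: each ensemble is scored with a standard CRPS estimator over a training archive, and the inverses of the mean CRPS values are turned into mixture weights for the step-wise CDFs. The data, the ensemble sizes and the choice of inverse-CRPS weighting are assumptions for the example, not the specific schemes of Baudin (2015) or Baran and Lerch (2016).

```python
import numpy as np

rng = np.random.default_rng(3)

def crps_ensemble(members, obs):
    # Ensemble CRPS estimator: mean |X - y| - 0.5 * mean |X - X'|.
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    term2 = 0.5 * np.abs(members[:, None] - members[None, :]).mean()
    return term1 - term2

# Hypothetical training archive: two ensemble systems, 100 verification cases.
obs = rng.gamma(2.0, 2.0, size=100)
ens_a = np.clip(obs[:, None] + rng.normal(0.0, 1.0, size=(100, 20)), 0, None)
ens_b = np.clip(obs[:, None] + rng.normal(0.5, 1.5, size=(100, 35)), 0, None)

crps_a = np.mean([crps_ensemble(m, o) for m, o in zip(ens_a, obs)])
crps_b = np.mean([crps_ensemble(m, o) for m, o in zip(ens_b, obs)])

# Inverse-CRPS weights, applied as mixture weights on the two step-wise CDFs
# (equivalently, as weights on the pooled sorted members).
w = np.array([1.0 / crps_a, 1.0 / crps_b])
w /= w.sum()
print(w)
```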

Gridding 10-m windspeed measurements

In order to get gridded fields of 10-m wind speed measurements even where actual measurements are not available, several interpolation strategies exist. The most straightforward one is to use an existing analysis of some NWP model as the predictand. However, classical variational assimilation schemes such as 3D-Var (Courtier et al. 1991) or 4D-Var (Courtier et al. 1994) assimilate station measurements. Therefore, an objective verification of such an analysis against those measurements is not straightforward and may lead to overconfidence in the forecasts' performance, as will be shown later. Furthermore, since assimilation schemes mix forecasts and observations in some way, the obtained analysis could be affected by the forecast bias. As presented in Schaefer and Doswell (1979), it is also possible to work on the two-dimensional wind field, interpolating divergence and vorticity instead of the wind vector itself. This may allow imposing physical constraints, such as mass conservation, and working on the wind vector instead of the wind speed only. However, on a limited domain this solution requires boundary conditions, which may not be trivial. A third efficient method to interpolate measurements is to run a very high resolution model and find a statistical relationship between measurements and short lead-time forecasts at the same (or nearby) locations. This interpolation function is built for locations where the predictand and the predictors are available and applied to points where only the predictors are known, as presented in Burlando et al. (2013). This approach typically uses an NWP model with a resolution of the order of a few tens of meters. This is not feasible for a country as large as France, but a good compromise is to use a model over the whole of France with a grid size of a few kilometers. This statistical interpolation is the approach chosen here and compared to an existing analysis at Météo-France, a kilometer-scale analysis based on 4D-Var assimilation. The methodology is presented in more detail hereafter.


Methodology

Let us suppose we have at our disposal past predictand and predictor values, at times t = 1, . . . , T, for N_s stations located at sites s_i, i = 1, . . . , N_s. Let us denote by S a fine (model) grid covering the region of interest and by T a fine temporal grid covering (1; T). Then, for a generic spatio-temporal point (s, t), with s ∈ S and t ∈ T, let us denote by y(s, t) and x(s, t) the value of the predictand and the vector of predictors, respectively.
Interpolating the predictand consists in building some function f such that y(s, t) = f(x(s, t)) + ε(s, t), with ε an interpolation error. The function f is built to have the best generalization capability, that is, the lowest possible errors ε over the sites in S. It is fitted locally, that is, for a given spatio-temporal point (s_i, t) the training set D(s_i, t) is made of a subset of {s_1, . . . , s_{N_s}} × T depending on (s_i, t). Many interpolation strategies can be tried by varying the training set, the family of functions to which f belongs, the choice of the predictors x, and the optional modelling of the error ε. The error can be supposed deterministic (Hengl 2007), with no modelling at all. Alternatively, the error can be modelled with statistical models, either without spatio-temporal dependence (Hastie et al. 2009; Kuhn and Johnson 2013) or with spatial dependence explicitly modelled (Hengl 2007; Cressie and Wikle 2011).
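To fix ideas, here is a minimal sketch of this framework with an affine family for f, fitted by least squares on predictors of the kind used later (position and an NWP wind speed forecast). The synthetic data and the linear family are assumptions for illustration only, not the spline model actually retained in this chapter.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical training archive: for each station and hour we know the
# predictors x = (easting, northing, altitude, NWP wind forecast) and the
# predictand y (measured 10-m wind speed, m/s).
n_obs = 500
X = np.column_stack([
    rng.uniform(0, 1000, n_obs),   # s_x (km)
    rng.uniform(0, 1000, n_obs),   # s_y (km)
    rng.uniform(0, 1500, n_obs),   # s_z (m)
    rng.gamma(2.0, 2.0, n_obs),    # NWP wind forecast at (s, t) (m/s)
])
y = 0.9 * X[:, 3] + 0.0005 * X[:, 2] + rng.normal(0, 0.8, n_obs)

def fit_affine(X_train, y_train):
    # One possible family for f: an affine function of the predictors,
    # fitted by least squares over the (local) training set D(s_i, t).
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return lambda x_new: np.column_stack([np.ones(len(x_new)), x_new]) @ coef

f = fit_affine(X, y)
x_grid_point = np.array([[512.0, 630.0, 420.0, 5.3]])
print(f(x_grid_point))  # interpolated wind speed at an unobserved grid point
```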

Data description

The predictand is the hourly 10-m wind speed, defined as the average of the instantaneous wind speed measurements taken during the ten minutes before each hour. These measurements are available at 436 meteorological stations over mainland France (named above s_i, with i = 1, . . . , N_s), which are managed by Météo-France. In order to balance quantity and quality of measurements, the retained data are actually measured at heights between 8 and 13 m, for stations of environmental class lower than or equal to 3 according to the World Meteorological Organization's Guide to Meteorological Instruments and Methods of Observation (World Meteorological Organisation 2008, Chapter 1, Annex 1.B). For wind speed measurements, environmental class 3 requires that “the mast should be located at a distance of at least 5 times the height of surrounding obstacles” and that “sensors should be situated at a minimum distance of 10 times the width of narrow obstacles (mast, thin tree) higher than 8 m”. The mean distance between pairs of nearest stations is 21 km. The study period goes from January 2011 to March 2015.
For the best interpolation strategy described hereafter, the vector of predictors x at a site s is composed of the position of the site and the most recent wind speed forecast from an NWP model.
• Position: the position of each site s ∈ S is specified by its horizontal coordinates (s_x and s_y) in the extended Lambert-93 georeferencing system and its altitude (s_z). The value of s_z is obtained from the altitude of the nearest point in the digital elevation model (DEM), called BDAlti, of the French geographical institute (IGN, Institut national de l'information géographique et forestière). The freely available version of this DEM, which is used in this study, has a resolution of 75 m and covers France only.
• Most recent wind speed forecast from an NWP model: the NWP model used is AROME (Applications de la Recherche à l'Opérationnel à Méso-Echelle, or mesoscale applications of research for operational use), Météo-France's high-resolution NWP model. It is a limited-area, non-hydrostatic model. During the study period, it had a 2.5 km grid size over France (Seity et al. 2011). For a specific site, date and time, the wind speed forecast comes from the most recent run, excluding the analysis, and is noted W_AROME(s, t). Since AROME runs four times per day, the lead times used range from 1 to 6 hours. As an example, for an interpolation at 1600 UTC, the predictors come from the run of 1200 UTC with a lead time of 4 hours. The wind speed forecast used at station locations is AROME's forecast bilinearly interpolated from AROME's grid to these locations (a minimal sketch of such a bilinear interpolation is given after this list).
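Below is the bilinear-interpolation sketch referred to in the last item. The toy grid, coordinates and spacings are assumptions for the example and do not correspond to the actual AROME grid geometry or projection.

```python
import numpy as np

def bilinear(field, lon, lat, lon0, lat0, dlon, dlat):
    # field is a 2-D forecast array on a regular grid; (lon0, lat0) is the
    # coordinate of field[0, 0] and (dlon, dlat) the grid spacing.
    i = (lat - lat0) / dlat
    j = (lon - lon0) / dlon
    i0, j0 = int(np.floor(i)), int(np.floor(j))
    wi, wj = i - i0, j - j0
    return ((1 - wi) * (1 - wj) * field[i0, j0]
            + (1 - wi) * wj * field[i0, j0 + 1]
            + wi * (1 - wj) * field[i0 + 1, j0]
            + wi * wj * field[i0 + 1, j0 + 1])

# Toy gridded wind speed field (m/s) and one station location inside the grid.
rng = np.random.default_rng(5)
wind = rng.gamma(2.0, 2.0, size=(100, 100))
print(bilinear(wind, lon=3.071, lat=45.33, lon0=3.0, lat0=45.3, dlon=0.025, dlat=0.025))
```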

Verification strategy

Since no wind speed measurement is available at grid points, the assessment of the interpolation strategy is achieved through cross-validated interpolation towards some test stations. Cross-validation consists in splitting the available archive into two subsets: a training set used to fit the interpolation functions, and a test set used to assess the interpolation performance. Since cross-validation is time consuming, a subset of 150 test stations was chosen, representative of the French topography and hourly wind speed climatology. Ten lists of fifteen stations were built as test sets, so that each list gathers stations far enough from one another to ensure that the results are close to those of leave-one-out cross-validation. The closest test stations in each list are separated by at least 80 km. Interpolation is done towards each of these ten test lists separately and the performance is assessed. Consequently, the training is always done with 421 stations (fewer when data are missing).
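A sketch of how such spatially separated test lists could be built is given below: stations are drawn greedily so that any two selected stations are at least 80 km apart. The station coordinates are synthetic and the greedy procedure is an assumption for illustration, not necessarily the construction actually used here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical station coordinates (km, plane coordinates similar to Lambert-93).
stations = rng.uniform(0, 1000, size=(436, 2))

def spaced_test_list(coords, n_test=15, min_dist_km=80.0, seed=0):
    # Greedily pick stations so that any two selected stations are at least
    # min_dist_km apart, mimicking the construction of one test list.
    order = np.random.default_rng(seed).permutation(len(coords))
    chosen = []
    for idx in order:
        if all(np.linalg.norm(coords[idx] - coords[c]) >= min_dist_km for c in chosen):
            chosen.append(idx)
        if len(chosen) == n_test:
            break
    return chosen

test_list = spaced_test_list(stations)
train_list = [i for i in range(len(stations)) if i not in set(test_list)]
print(len(test_list), len(train_list))  # 15 test stations, 421 training stations
```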
Comparing this new analysis against the existing AROME analysis provides an assessment of its usefulness for operational purposes. However, the AROME assimilation scheme already assimilates station measurements, which biases its scores towards better performance. Thus, in order to get an accurate assessment of the analysis performance as an interpolator, 10 AROME assimilations were rerun, each without assimilating one test set of 15 stations. Since this reanalysis takes time, it was only run for 120 dates between July 2013 and July 2014, at 1500 UTC, corresponding to the maximum of the diurnal cycle of wind speed. This reanalysis is referred to hereinafter as AROMEcv, since it is computed with cross-validation.

Comparison to reference and cross-validated AROME analyses

The first two columns of Table 2.1 present the measures of performance for the TPRS analysis and the reference. For the whole sample, both analyses are unbiased. However, TPRS performs better than bilinear interpolation for the other measures of performance. The RMSE is improved by 16% and most of the errors are less than 4 m s−1 in absolute value.
This table also shows the same measures of performance, but for classes defined by the terciles of the wind speed distribution over France during the study period: weak (below 2.9 m s−1), average (between 2.9 m s−1 and 4.8 m s−1) and strong (above 4.8 m s−1). For the lowest measured wind speeds, TPRS and the reference tend to yield slightly too strong winds (positive bias), and the converse holds for the strongest measured wind speeds (negative bias). However, the bias remains low. Whatever the wind speed regime, TPRS outperforms bilinear interpolation for every other measure of performance considered.
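For reference, the following sketch computes bias and RMSE stratified by the observation terciles, mirroring the classes used in Table 2.1. The verification pairs are synthetic assumptions, so the tercile thresholds will differ from the 2.9 and 4.8 m s−1 values quoted above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical verification pairs: analysed and measured 10-m wind speeds (m/s).
obs = rng.gamma(2.0, 2.0, size=5000)
ana = obs + rng.normal(0.1, 0.9, size=5000)

def bias_rmse(a, o):
    err = a - o
    return err.mean(), np.sqrt((err ** 2).mean())

# Classes defined by the terciles of the measured wind speed distribution,
# as in the text (weak / average / strong regimes), plus the whole sample.
t1, t2 = np.quantile(obs, [1 / 3, 2 / 3])
for name, mask in [("weak", obs < t1),
                   ("average", (obs >= t1) & (obs < t2)),
                   ("strong", obs >= t2),
                   ("all", np.ones_like(obs, dtype=bool))]:
    b, r = bias_rmse(ana[mask], obs[mask])
    print(f"{name:8s} bias={b:+.2f} m/s  rmse={r:.2f} m/s")
```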
Figures 2.1 and 2.2 show the evolution of RMSE and BIAS with the time of day for the TPRS and reference analyses, computed over the 150 test stations and all the dates in the study period. The curves may show abrupt changes every 6 hours, when the predictors are taken from a different run: this is due to the better performance of the underlying forecast thanks to the proximity of the AROME assimilation. In any case, TPRS is consistently better than the reference.

Table 2.1 – Measures of performance for TPRS, the bilinear reference interpolation (ref.), the operational AROME analysis (AROME) and the AROME reanalysis computed with cross-validation (AROMEcv). These figures concern 150 test stations and 120 dates at 1500 UTC, for all wind speed values and three intervals of wind speed measurements. Bold figures indicate the best performance among TPRS, reference and AROMEcv.

Table of contents

Résumé en français
0.1 Introduction
0.2 Chapitre 2: Improved Gridded Wind Speed Forecasts with Block MOS
0.2.1 Motivation
0.2.2 Résultats
0.3 Chapitre 3: Estimation of the CRPS with Limited Information
0.3.1 Motivation
0.3.2 Résultats
0.4 Chapitre 4: Aggregation of Probabilistic Wind Speed Forecasts
0.4.1 Motivation
0.4.2 Résultats
0.5 Conclusion
1 Introduction and Summary 
1.1 Introduction
1.2 Chapter 2: Improved Gridded Wind Speed Forecasts with Block MOS
1.2.1 Motivation
1.2.2 State of the art of wind speed MOS and gridded MOS
1.2.3 Results
1.3 Chapter 3: Estimation of the CRPS with Limited Information
1.3.1 Motivation
1.3.2 State of the art of the estimation of the CRPS
1.3.3 Results
1.4 Chapter 4: Aggregation of Probabilistic Wind Speed Forecasts
1.4.1 Motivation
1.4.2 State of the art of EMOS and combination thereof
1.4.3 Results
1.5 Conclusion
2 Improved Gridded Wind Speed Forecasts with Block MOS 
2.1 Introduction
2.2 Gridding 10-m windspeed measurements
2.2.1 Methodology
2.2.2 Data description
2.2.3 Verification strategy
2.2.4 Results about the best interpolation strategy
2.3 Improving wind speed forecasts on a grid by block regression
2.3.1 Data
2.3.2 Block MOS
2.3.3 Results
2.4 Conclusion
3 Estimation of the CRPS with Limited Information 
3.1 Introduction
3.2 Review of available estimators of the CRPS
3.3 Study with simulated data
3.3.1 CRPS estimation with a random ensemble
3.3.2 CRPS estimation with an ensemble of quantiles
3.4 Real data examples
3.4.1 Raw and calibrated ensemble forecast data sets
3.4.2 Issues estimating the CRPS of real data
3.4.3 Issues on the choice between QRF and NR
3.5 Conclusion and discussion
3.A What is elicited when the 1-norm CRPS of an ensemble is minimized?
3.B Relationships between the estimators of the CRPS
3.B.1 Equality of dcrpsFair and dcrpsPWM
3.B.2 Equality of dcrpsNRG and dcrpsINT
3.B.3 Relationship between dcrpsPWM and dcrpsNRG
4 Aggregation of Probabilistic Wind Speed Forecasts 
4.1 Introduction
4.2 Theoretical framework and verification strategy
4.2.1 The individual sequence prediction framework
4.2.2 Sequential aggregation of step-wise CDFs
4.2.3 Verification strategy
4.3 Aggregation methods
4.3.1 Inverse CRPS weighting
4.3.2 Sharpness-calibration paradigm
4.3.3 Minimum CRPS
4.3.4 Exponential weighting
4.3.5 Exponentiated gradient
4.4 The experts and the observation
4.4.1 The TIGGE data set and experts
4.4.2 The calibrated experts
4.4.3 The observation
4.5 Results
4.5.1 CRPS, reliability and sharpness
4.5.2 Spatio-temporal characteristics of the most reliable aggregated forecast
4.6 Discussion about calibration and aggregation procedures
4.7 Conclusion and perspectives
4.A Formula for the gradient of the CRPS
4.B Bounds for the EWA aggregation
4.C Time series of the regret at each lead time
4.D Maps of rank histograms of raw ensembles
4.E Time series of the aggregation weights
5 Conclusion and Perspectives 
Bibliography 
