Observation impact on the short-range forecast
Abstract and Keywords
This chapter describes the concept of forecast error sensitivity to observations (FSO) and its use for diagnostic purposes. Assessment of the observational contribution to analysis and forecasting is among the most challenging aspects of diagnostics in data assimilation and numerical weather prediction. The FSO tool computes the contribution of all observations to the forecast error: a positive contribution is associated with an increase in forecast error and a negative contribution with a decrease. The technique is illustrated by an application to the weather prediction system of the European Centre for Medium-Range Weather Forecasts.
Keywords: observational contribution, forecast sensitivity to observations, FSO, forecast error, numerical weather prediction
Advanced Data Assimilation for Geosciences. First Edition.
Edited by É. Blayo, M. Bocquet, E. Cosme, and L. F. Cugliandolo.
© Oxford University Press 2015. Published in 2015 by Oxford University Press.

6 Observation impact on the short-range forecast
C. CARDINALI
This chapter illustrates the concept of forecast error sensitivity to observations and its use for diagnostic purposes. The tool presented here computes the contribution of all observations to the forecast error: a positive contribution is associated with forecast error increase and a negative contribution with forecast error decrease. The forecast range investigated is 24 hours. It can be seen that, globally, the assimilated observations decrease the forecast error. Locally, however, poor performance can also be found. The forecast deterioration can be related either to the data quality or to the data assimilation and forecast system. The data impact on the forecast is variable in both space and time. It depends on atmospheric regimes, which may or may not be well represented by the model or by the data. An example of a routine diagnostic assessment of observational impact on the short-range forecast performance is shown. The example also illustrates the tool's flexibility in representing different degrees of detail of forecast improvement or deterioration.
6.1 Introduction
The European Centre for Medium-Range Weather Forecasts (ECMWF) four-dimensional variational system (4D-Var; Rabier et al., 2000) handles a large variety of both space- and surface-based meteorological observations (more than 30 million a day) and combines the observations with the prior (or background) information on the atmospheric state. A comprehensive linearized and nonlinear forecast model is used, with a number of degrees of freedom of the order of ${10}^{8}$.
The assessment of the observational contribution to analysis (Cardinali et al., 2004; Chapnik et al., 2004; Lupu et al., 2011) and forecast is among the most challenging diagnostics in data assimilation and numerical weather prediction. For the forecast, the performance assessment can be achieved by adjoint-based observation sensitivity techniques that characterize the forecast impact of every measurement (Baker and Daley, 2000; Langland and Baker, 2004; Cardinali and Buizza, 2004; Morneau et al., 2006; Xu et al., 2006; Zhu and Gelaro, 2008; Cardinali, 2009). The technique computes the variation in the forecast error due to the assimilated data. In particular, the forecast error is measured by a scalar function of the model parameters, namely wind, temperature, humidity, and surface pressure, that are more or less directly related to the observable quantities.
In general, the adjoint methodology can be used to estimate the sensitivity measure with respect to any assimilation system parameter of importance. For example, Daescu (2008) derived a sensitivity equation of an unconstrained variational data assimilation system from the first-order necessary condition with respect to the main input parameters: observation, background, and observation and background error covariance matrices.
The forecast sensitivity to observations (FSO) technique is complementary to the observing system experiments (OSEs) that have been the traditional tool for estimating data impact in a forecasting system (Bouttier and Kelly, 2001; English et al., 2004; Lord et al., 2004; Kelly, 2007; Radnoti et al., 2010, 2012). OSEs are particularly valuable in combination with FSO, highlighting the contribution of, for example, a particular dataset and addressing the causes of the degradation or improvement that FSO measures.
The main differences between adjoint-based and OSE techniques are as follows:

• The adjoint-based observation sensitivity measures the impact of observations when the entire observational dataset is present in the assimilation system, while the observing system is modified in the OSE. In fact, each OSE experiment differs from the others in terms of assimilated observations.

• The adjoint-based technique measures the impact of observations separately at every analysis cycle versus the background, while the OSE measures the total impact of removing data information from both background and analysis.

• The adjoint-based technique measures the response of a single forecast metric to all perturbations of the observing system, while the OSE measures the effect of a single perturbation on all forecast metrics.

• The adjoint-based technique is restricted by the tangent linear assumption and is therefore valid for forecasts up to 2 days, while the OSE can measure the data impact on longer-range forecasts and in nonlinear regimes.
This chapter introduces the mathematical concept and the application of the forecast sensitivity to observations tool. The general ECMWF system performance in the 24-hour-range forecast, as derived with the diagnostic tool, is shown. In Section 6.2, the theoretical background of FSO and the calculation of the forecast error contribution (FEC) from observations are presented. The ECMWF forecast performance is illustrated in Section 6.3, and conclusions are drawn in Section 6.4.
6.2 Observational impact on the forecast
6.2.1 Linear analysis equation
Data assimilation systems for numerical weather prediction provide estimates of the atmospheric state x by combining meteorological observations y with prior (or background) information ${x}_{b}$. A simple Bayesian normal model provides the solution as the posterior expectation for x, given y and ${x}_{b}$. The same solution can be achieved from a classical frequentist approach, based on a statistical linear analysis scheme providing the best linear unbiased estimate (BLUE) (Talagrand, 1997) of x, given y and ${x}_{b}$. The optimal general least-squares solution to the analysis problem (see Lorenc, 1986) can be written as
$$x_a = x_b + K(y - Hx_b). \qquad (6.1)$$
The vector $x_a$ is called the analysis. The gain matrix K (of dimension $n\times p$, with n being the dimension of the state vector and p that of the observation vector) takes into account the respective accuracies of the background vector $x_b$ and the observation vector y, as defined by the ($n\times n$)-dimensioned covariance matrix B and the ($p\times p$)-dimensioned covariance matrix R, with
$$K = (I_n + BH^{\text{T}}R^{-1}H)^{-1}BH^{\text{T}}R^{-1}. \qquad (6.2)$$
$I_n$ is the $n\times n$ identity matrix. Here, H is a ($p\times n$)-dimensioned matrix interpolating the background fields to the observation locations and transforming the model variables to observed quantities (e.g. radiative transfer calculations transforming the model's temperature, humidity, ozone, etc. to brightness temperatures as observed by satellite instruments). In the 4D-Var context introduced above, H is defined to also include the propagation of the atmospheric state vector by the forecast model to the time at which the observations were recorded.
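As a concrete illustration, the analysis equation and two equivalent forms of the gain matrix can be checked numerically on a toy system (all matrices, dimensions, and numbers below are invented for illustration; this is not an NWP configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4, 3                                  # toy state and observation dimensions

H = rng.standard_normal((p, n))              # observation operator
B = 0.5 * np.eye(n)                          # background error covariance
R = 0.2 * np.eye(p)                          # observation error covariance
Rinv = np.linalg.inv(R)

x_b = rng.standard_normal(n)                 # background state
y = H @ x_b + 0.1 * rng.standard_normal(p)   # synthetic observations

# Gain matrix in "observation space" form: K = B H^T (H B H^T + R)^{-1}
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

# Equivalent "information" form, as in (6.2): K = (I_n + B H^T R^{-1} H)^{-1} B H^T R^{-1}
K_info = np.linalg.inv(np.eye(n) + B @ H.T @ Rinv @ H) @ B @ H.T @ Rinv
assert np.allclose(K, K_info)

# Analysis, as in (6.1): x_a = x_b + K (y - H x_b)
x_a = x_b + K @ (y - H @ x_b)
```

The equality of the two gain expressions is the Sherman–Morrison–Woodbury identity; in an operational system neither form is computed explicitly, but the toy check makes the algebra tangible.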
From (6.1), the sensitivity of the analysis system with respect to the observations can be derived as
$$\frac{\delta x_a}{\delta y} = K^{\text{T}} = R^{-1}H(B^{-1} + H^{\text{T}}R^{-1}H)^{-1}. \qquad (6.3)$$
Equation (6.3) provides the observational influence in the analysis (Cardinali et al., 2004; see also Chapter 4 of this volume).
6.2.2 Sensitivity equation
Baker and Daley (2000) derived the forecast sensitivity equation with respect to the observations in the context of variational data assimilation. Let us consider a scalar function J of the forecast error. The sensitivity of J with respect to the observations can be obtained using a simple derivative chain as
$$\frac{\delta J}{\delta y} = \frac{\delta x_a}{\delta y}\,\frac{\delta J}{\delta x_a} = K^{\text{T}}\frac{\delta J}{\delta x_a}, \qquad (6.4)$$
where $\delta J/\delta x_a$ is the sensitivity of the forecast error to the initial conditions (Rabier et al., 1996; Gelaro et al., 1998). The forecast error is mapped onto the initial conditions by the adjoint of the model, providing, for example, regions that are particularly sensitive to forecast error growth (see Section 6.2.3). By using (6.2) and (6.3), the forecast sensitivity to the observations becomes
$$\frac{\delta J}{\delta y} = R^{-1}H(B^{-1} + H^{\text{T}}R^{-1}H)^{-1}\frac{\delta J}{\delta x_a}, \qquad (6.5)$$
where $(B^{-1}+H^{\text{T}}R^{-1}H)^{-1}$ is the analysis error covariance matrix A. In practice, a second-order sensitivity gradient is needed (Langland and Baker, 2004; Errico, 2007) to obtain the information related to the forecast error, because the first-order sensitivity gradient only contains information on the suboptimality of the assimilation system (see Section 6.2.3 and Cardinali, 2009).
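The chain rule and the closed form above can be verified on the same kind of toy system (again, small random illustrative matrices, not a real assimilation setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 3
H = rng.standard_normal((p, n))
B = 0.5 * np.eye(n)
Rinv = np.linalg.inv(0.2 * np.eye(p))

# Analysis error covariance A = (B^{-1} + H^T R^{-1} H)^{-1} and gain K = A H^T R^{-1}
A = np.linalg.inv(np.linalg.inv(B) + H.T @ Rinv @ H)
K = A @ H.T @ Rinv

g = rng.standard_normal(n)       # stand-in for dJ/dx_a (sensitivity to initial conditions)
dJ_dy_chain = K.T @ g            # chain rule, as in (6.4)
dJ_dy_closed = Rinv @ H @ A @ g  # closed form, as in (6.5)
assert np.allclose(dJ_dy_chain, dJ_dy_closed)
```

Since A and R are symmetric, $K^{\text{T}} = R^{-1}HA$, which is exactly what the check confirms.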
The forecast error is defined by $J=\frac{1}{2}\langle e_t, Ce_t\rangle$, where t stands for the truth and e denotes the forecast error with respect to temperature, vorticity, and divergence, as well as surface pressure. In practice, the forecast error is computed as the difference between the 24-hour forecast and the analysis valid at the same time. This implies that the verifying analysis is considered to be the truth:

• The verifying analysis is only a proxy of the truth, and thus errors in the analysis can obscure the observation impact in the short-range forecast.
C is a matrix of weighting coefficients that integrates the elements of the forecast error into a scalar dry energy norm:

• The energy norm is a suitable choice, because it depends directly on the most relevant model parameters, which are also contained in the control vector x (the vector used in the minimization process in, e.g., 4D-Var). Nevertheless, alternative functions of model parameters can be used.
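A minimal sketch of the forecast error measure, with a diagonal stand-in for the weighting matrix C (the state vector, weights, and numbers are hypothetical, not ECMWF's energy-norm coefficients):

```python
import numpy as np

# Hypothetical 24-h forecast and verifying analysis for a 4-element state (u, v, T, p_s)
x_f = np.array([10.2, -3.1, 271.5, 1012.8])    # 24-h forecast
x_v = np.array([10.0, -3.0, 271.0, 1013.2])    # verifying analysis, used as 'truth'

e = x_f - x_v                                  # forecast error
# Diagonal energy-like weights per variable (illustrative values only)
C = np.diag([1.0, 1.0, 1004.0 / 300.0, 1e-4])
J = 0.5 * e @ C @ e                            # J = 0.5 <e, C e>, a single scalar
assert J > 0.0
```

Whatever the exact weights, C is symmetric positive definite, so J is a non-negative scalar summarizing the whole forecast error.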
Equation (6.5) can be solved using a Krylov method (Van der Vorst, 2003), and the forecast error sensitivity to all assimilated observations is then derived. The numerical method used is shown in Section 6.2.4 (see also Cardinali, 2009).
6.2.3 Sensitivity gradient
Let us consider two forecasts, f starting from the analysis $x_a$ and g starting from the background $x_b$ used in the $x_a$ analysis. Both forecasts verify at time t. Following Langland and Baker (2004) and Errico (2007), the second-order sensitivity gradient is defined as
$$\frac{\delta J}{\delta x_a} = \frac{\delta J_f}{\delta x_a} + \frac{\delta J_g}{\delta x_a}, \qquad (6.6)$$
where $J_f=\frac{1}{2}\langle (x_f - x_t), C(x_f - x_t)\rangle$ and $J_g=\frac{1}{2}\langle (x_g - x_t), C(x_g - x_t)\rangle$ are quadratic measures of the two forecast errors ($x_t$ being the verifying analysis) and C is the matrix of dry energy weighting coefficients. It is clear from (6.4) that the adjoint model maps the sensitivity (with respect to the forecast) of $J_f$ into $\delta J_f/\delta x_a$ along the trajectory f and the sensitivity of $J_g$ into $\delta J_g/\delta x_a$ along the trajectory g (for the first-order sensitivity gradient definition and computation, see Rabier et al., 1996; Gelaro et al., 1998). Equation (6.6) is schematically represented in Fig. 6.1. Let us now compare the first-order sensitivity gradient with the second-order one, expressing the variation of the forecast error due to the assimilation of observations, $J(e_a)-J(e_b)$, where $e_a$ and $e_b$ are the analysis and the background error. Following Langland and Baker (2004), the second-order Taylor series decomposition is used to map this variation:
$$J(e_a) - J(e_b) = \left\langle e_a - e_b, \frac{\delta J}{\delta e_b}\right\rangle + \frac{1}{2}\left\langle e_a - e_b, \frac{\delta^2 J}{\delta e_b^2}(e_a - e_b)\right\rangle. \qquad (6.7)$$
Because the error cost function is quadratic, (6.7) reduces to
$$J(e_a) - J(e_b) = \frac{1}{2}\bigl\langle e_a - e_b, C(e_a + e_b)\bigr\rangle, \qquad (6.8)$$
which at first order is
$$J(e_a) - J(e_b) \approx \bigl\langle e_a - e_b, Ce_a\bigr\rangle. \qquad (6.9)$$
In an optimal assimilation system, the right-hand side of this equation is on average zero (Talagrand, 2002), since $e_a - e_b = \delta x_a = Kd$ and, statistically, the innovation vector $d=y-Hx_b$ and the analysis error are orthogonal. Therefore, the results obtained using the first-order sensitivity gradient only provide a measure of the suboptimality of the analysis system, and it is necessary to include the second-order term in the FSO calculation.
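The reduction from the Taylor expansion to the half-sum form relies only on J being quadratic, so it can be checked exactly on a toy example (C and the error vectors below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
C = np.diag(rng.uniform(0.5, 2.0, n))   # SPD weighting matrix (illustrative)
e_a = rng.standard_normal(n)            # 'analysis' forecast error
e_b = rng.standard_normal(n)            # 'background' forecast error

J = lambda e: 0.5 * e @ C @ e           # quadratic error cost

# For quadratic J the second-order expansion is exact:
# J(e_a) - J(e_b) = 0.5 <e_a - e_b, C (e_a + e_b)>
lhs = J(e_a) - J(e_b)
rhs = 0.5 * (e_a - e_b) @ C @ (e_a + e_b)
assert np.isclose(lhs, rhs)
```

Dropping the quadratic term of the same expansion yields the first-order approximation $\langle e_a - e_b, Ce_a\rangle$, which is what vanishes on average in an optimal system.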
6.2.4 Numerical solution
In an optimal variational analysis scheme, the analysis error covariance matrix A is approximately the inverse of the matrix of second derivatives (the Hessian) of the analysis cost function $J_a$ (Rabier et al., 2000), i.e. $A = (J_a'')^{-1}$ (Rabier and Courtier, 1992). Given the large dimension of the matrices involved, $J_a''$ and its inverse cannot be computed explicitly. The minimization is performed in terms of a transformed variable $\chi = L^{-1}(x - x_b)$, with L chosen such that $B = LL^{\text{T}}$. The transformation L thus reduces the covariance of the prior to the identity matrix. In variational data assimilation, L is referred to as the change-of-variable operator (Courtier et al., 1998). Let us apply the change of variable in the analysis cost function and write
$$J_a(\chi) = \frac{1}{2}\chi^{\text{T}}\chi + \frac{1}{2}(HL\chi - d)^{\text{T}}R^{-1}(HL\chi - d), \qquad (6.10)$$
where $d = y - Hx_b$ is the innovation vector.
The Hessian becomes
$$J_a'' = I + L^{\text{T}}H^{\text{T}}R^{-1}HL. \qquad (6.11)$$
By applying the change of variable in (6.5) and using (6.11), the forecast sensitivity to the observations is expressed as
$$\frac{\delta J}{\delta y} = R^{-1}HL(I + L^{\text{T}}H^{\text{T}}R^{-1}HL)^{-1}L^{\text{T}}\frac{\delta J}{\delta x_a}. \qquad (6.12)$$
Using the conjugate gradient algorithm, the following equation is first solved for z, from which $\delta J/\delta y = R^{-1}Hz$:
$$(I + L^{\text{T}}H^{\text{T}}R^{-1}HL)\,L^{-1}z = L^{\text{T}}\frac{\delta J}{\delta x_a}. \qquad (6.13)$$
The solution z lies in the Krylov subspace generated by the vector $L^{\text{T}}z_a$, with $z_a = \delta J/\delta x_a$, and the matrix $I+L^{\text{T}}H^{\text{T}}R^{-1}HL$. The Krylov subspace dimension is the degree of the minimal polynomial of $I+L^{\text{T}}H^{\text{T}}R^{-1}HL$; therefore, if the degree is low, the Krylov method searches for the solution in a space of small dimension. The method is very efficient for the iterative solution of linear systems with large, sparse matrices (Van der Vorst, 2003). The forecast sensitivity to observations is then obtained by interpolating z into observation space (using the H operator) and normalizing by the observation error covariance matrix R.
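The solution procedure can be sketched with a plain conjugate gradient iteration on a small dense system. This is a toy stand-in: the real operators are matrix-free and of dimension of order $10^8$, while here L simply comes from a Cholesky factorization of an illustrative B:

```python
import numpy as np

def conj_grad(Amat, b, iters=100, tol=1e-12):
    """Plain conjugate gradient for a symmetric positive-definite system."""
    x = np.zeros_like(b)
    r = b - Amat @ x
    d = r.copy()
    for _ in range(iters):
        if np.linalg.norm(r) < tol:
            break
        Ad = Amat @ d
        alpha = (r @ r) / (d @ Ad)
        x = x + alpha * d
        r_new = r - alpha * Ad
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return x

rng = np.random.default_rng(3)
n, p = 6, 4
H = rng.standard_normal((p, n))
B = 0.5 * np.eye(n)
L = np.linalg.cholesky(B)                    # change-of-variable operator, B = L L^T
Rinv = np.linalg.inv(0.2 * np.eye(p))

g = rng.standard_normal(n)                   # stand-in for dJ/dx_a
M = np.eye(n) + L.T @ H.T @ Rinv @ H @ L     # Hessian in transformed space (SPD)

w = conj_grad(M, L.T @ g)                    # Krylov solve of M w = L^T g
z = L @ w
dJ_dy = Rinv @ H @ z                         # forecast sensitivity in observation space

# Cross-check against the closed form R^{-1} H A g, A = (B^{-1} + H^T R^{-1} H)^{-1}
A = np.linalg.inv(np.linalg.inv(B) + H.T @ Rinv @ H)
assert np.allclose(dJ_dy, Rinv @ H @ A @ g)
```

The cross-check works because $L(I + L^{\text{T}}H^{\text{T}}R^{-1}HL)^{-1}L^{\text{T}} = (B^{-1} + H^{\text{T}}R^{-1}H)^{-1} = A$.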
6.2.5 Observation impact measure
Once the forecast sensitivity has been computed, the variation $\delta J$ of the forecast error expressed by J can be found by rearranging (6.1) and using the adjoint property of the linear operator:
$$\delta J = \left\langle \frac{\delta J}{\delta x_a}, \delta x_a \right\rangle = \left\langle \frac{\delta J}{\delta x_a}, K\,\delta y \right\rangle = \left\langle K^{\text{T}}\frac{\delta J}{\delta x_a}, \delta y \right\rangle = \left\langle \frac{\delta J}{\delta y}, \delta y \right\rangle, \qquad (6.14)$$
where $\delta x_a = x_a - x_b$ are the analysis increments and $\delta y = y - Hx_b$ is the innovation vector. $\delta J$ is computed across the 12-hour window; the sensitivity gradients $\delta J/\delta x_a$, valid at the starting time of the 4D-Var window (09 and 21 UTC in the ECMWF system), are distributed by $K^{\text{T}}$, which incorporates the temporal dimension, over the 12-hour window. From (6.14), a few considerations should be taken into account:

• The forecast impact δJ (hereinafter called the forecast error contribution, FEC) of all observations assimilated depends on the forecast error ( $J(e)\to \delta J/\delta {x}_{a}$ ), the assimilation system ( ${K}^{\text{T}}$ ), and the difference between the observations and the model ( $yH{x}_{b}$ ).

• Positive forecast error variation $\delta J>0$ indicates forecast degradation; negative forecast error variation $\delta J<0$ indicates forecast improvement.

• The verifying analysis is only a proxy of the truth. Therefore, errors in the analysis can mask the observation impact in the forecast.

• Biases in the model can result in forecast degradation that is erroneously interpreted as an observation-related degradation.

• Since the computation is performed with the linearized model, only errors in the short-range forecast can be diagnosed.

• The forecast error is measured using a dry energy norm that depends on wind, temperature, and surface pressure. Therefore, observables depending on these parameters are rather well assessed. Moreover, the dependence of the forecast error on humidity is represented by the linearized moist processes, so that the forecast impact of humidity observations is also fully assessed (Janiskova and Cardinali, in preparation).

• The variation of the forecast error due to a specific measurement can be summed over time and space in different subsets to compute the average contribution of different components of the observing system to the forecast error. For example, the contribution of all AMSU-A satellites (s) and channels (i) over time T will be
$$\delta J_{\text{AMSU-A}}=\sum_{s\in S}\;\sum_{i\in \text{channels}}\;\sum_{t\in T}\delta J_{i,t}^{s}.$$
This is one of the most important characteristics of the tool, because it allows any necessary level of analysis granularity for a comprehensive investigation.
Given all the points above, it is clear that a full diagnostic assessment is necessary to establish the causes for a forecast error increase.
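The subset summation described above can be sketched as a simple aggregation of per-observation contributions (the records, satellites, and $\delta J$ values below are invented purely for illustration):

```python
from collections import defaultdict

# Hypothetical per-observation forecast error contributions:
# (satellite, channel, date, delta_J) -- negative delta_J = forecast improved
records = [
    ("METOP-A", 8, "2011-09-01", -0.8),
    ("METOP-A", 8, "2011-09-02", +0.3),
    ("NOAA-15", 8, "2011-09-01", -0.5),
    ("NOAA-15", 6, "2011-09-01", -0.2),
]

# Aggregate over time for each (satellite, channel) subset
fec = defaultdict(float)
for sat, chan, date, dj in records:
    fec[(sat, chan)] += dj

# Total contribution of the whole (toy) AMSU-A set over the period
total = sum(dj for *_, dj in records)
```

The same grouping key could be a geographical region or any other subset, which is what gives the diagnostic its flexible granularity.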
6.3 Results
The routinely computed observational impact from the operational ECMWF 4D-Var system (Rabier et al., 2000; Janiskova et al., 2002; Lopez and Moreau, 2005) is shown in Fig. 6.2 for September and October 2011. At ECMWF, the 'observation impact' suite runs one day behind the model suite, in time to recover the actual verifying analysis for the forecast error computation. The 24-hour forecast error contribution (FEC) of all the observing system components is computed and shown in Fig. 6.2(a) for different observation types as defined in Table 5.1 of Chapter 5. For technical reasons, microwave imagers (SSM/I and TMI) have not been considered in this study. The largest contribution to decreasing the forecast error is provided by AMSU-A (∼25%), while IASI, AIRS, AIREP (aircraft), and GPS-RO observations each account for about 10% of the total impact. TEMP and SYNOP surface pressure observations contribute 5%, followed by AMVs and HIRS (∼4%), then by ASCAT and DRIBU (3%). All other observations contribute less than 3%.
The error of the observation impact measure is also displayed in Fig. 6.2(a); it depends on the standard error and on the number of observations assimilated in that period. If the measured FEC variability is within the error range, the variation is not considered significant. In Fig. 6.2(b), the mean impact per individual observation is shown. In this case, the impact is independent of the observation number. The largest mean contribution is provided by DROP and DRIBU (surface pressure) observations, followed by the contribution of a second group of observations comprising MERIS, AMVs, ASCAT, GPS-RO, SYNOP, TEMP, AMSU-B, and AIREP. In contrast to the total forecast impact, which is largely provided by satellite observations, the largest per-observation impact is obtained from conventional observations. The difference between the two impact measures is mainly due to differences in observation accuracy, whereby a single conventional observation is on average more influential in the analysis than a single satellite measurement.
The monthly variation of forecast impact is shown in Fig. 6.3 per observation type for June–October 2011. The only significant temporal variation is observed for AMSU-A, with the largest forecast impact in August and September, and for GPS-RO and IASI in July and August, respectively.
The AMSU-A forecast impact has been analysed in more detail. In Fig. 6.4, the contribution of all channels to the forecast error decrease is shown. Channel 8 has the largest overall impact and the stratospheric channels (11–14) the smallest. There is no significant difference in performance between September and October. The geographical distribution of mean forecast improvement or deterioration from channel 8 is shown in Fig. 6.5 for September–October 2011. The METOP-A AMSU-A performance is compared with that of NOAA-15, since they have a similar satellite orbit. Nevertheless, there is a difference in the measurement time, since METOP-A crosses the equator at around 9:30 and NOAA-15 at 16:30. The overall impact of the instruments on the two satellites is comparable. The geographical location of the improvement instead differs quite substantially, with the exception of the polar and central Southern Hemisphere regions, where both perform similarly well. In the western part of the Southern Hemisphere, METOP-A reduces the forecast error, while NOAA-15 increases it. In contrast, in the eastern part, NOAA-15 shows a large and consistent improvement, whereas METOP-A shows small areas of degradation. A similar impact pattern is observed for the tropics and the Northern Hemisphere.
Once the area of degradation or improvement and the periods of interest have been determined, the addition of OSEs can help to determine the possible causes. For example, it can be necessary to identify the explicit contribution of AMSU-A channel 8 to the degradation over the Atlantic (METOP-A) or central Africa (NOAA-15). Comparison between the experiment in which channel 8 is not assimilated and the control experiment (in which it is assimilated) will add information for the specific case and will help in evaluating the suitability of the assimilation procedure for these data.
The variation of forecast impact with time for AMSU-A channel 8 is shown for the North Atlantic region in Figs. 6.6(a,b). Again, METOP-A (a) and NOAA-15 (b) are compared. METOP-A shows much larger temporal variability than NOAA-15 and displays more events of detrimental impact (positive values) than NOAA-15, which, except for a few occasions, performs rather well over the entire period. The observation departures are also different: the departures with respect to the background (black line in Figs. 6.6(c,d)) are smaller for METOP-A (on average 0.05 K) until the beginning of October, when the assimilation of METOP-A restarted after a break of three days due to routine satellite maintenance. After 2 October, METOP-A background departures become smaller, but the largest absolute decrease (0.025 K) is observed instead for NOAA-15. From 2 October onwards, the observation departures from the analysis (grey line in Figs. 6.6(c,d)) become very similar (close to zero on average), while before that date NOAA-15 shows a small positive bias. Interestingly, the forecast error reduction also changes: METOP-A shows larger variability than before and, to a lesser extent, so does NOAA-15. However, on average, as shown in Fig. 6.5, the impact of the two satellites is quantitatively similar, though different in terms of location. Over the Pacific, for example, the METOP-A and NOAA-15 time series of forecast performance are more similar, with METOP-A also showing a few large improvements (not shown). The number of measurements provided by the two satellites is very similar (Figs. 6.6(e,f)). The larger forecast error reduction of NOAA-15 with respect to METOP-A over the North Atlantic is due to the measurement time (Fig. 6.7). In fact, the NOAA-15 satellite crosses the Atlantic close to 9 UTC, which corresponds to the end of the 12-hour assimilation window in the 4D-Var system used (Fig. 6.7, light grey), while the METOP-A platform observes the Atlantic at the beginning of the assimilation window (Fig. 6.7, dark grey). Owing to the evolution of the background error covariance matrix B across the assimilation window, observations assimilated towards the end of the window are more influential than observations assimilated at the beginning of the window.
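The effect of the implicit evolution of B across the window can be illustrated with a scalar toy analysis, where the weight given to an observation grows with the effective background variance it sees (all numbers are purely illustrative):

```python
# Scalar toy: in 4D-Var the background error covariance is implicitly evolved
# by the model across the window, so an observation near the end of the window
# sees a larger effective background variance and receives more weight.
sigma_b2 = 1.0   # background error variance at the start of the window
sigma_o2 = 1.0   # observation error variance
M = 1.5          # hypothetical scalar "growth" of background errors over 12 h

# Scalar Kalman-type weight w = sigma_b^2 / (sigma_b^2 + sigma_o^2)
w_start = sigma_b2 / (sigma_b2 + sigma_o2)                # obs at window start
w_end = (M**2 * sigma_b2) / (M**2 * sigma_b2 + sigma_o2)  # obs at window end
assert w_end > w_start
```

With these numbers the weight rises from 0.5 to about 0.69, mirroring the larger influence of observations assimilated late in the window.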
6.4 Conclusion
Over the last few years, the potential of adjoint-based diagnostic tools has been widely exploited. Recently, a compact derivation of the 4D-Var sensitivity equations has been performed using the theoretical framework of the implicit-function theorem (Daescu, 2008). The analytical formulation of the sensitivity equations with respect to an extended set of input parameters has been shown, and numerical applications will soon follow. This chapter has introduced the use of the forecast sensitivity with respect to time-distributed observational data, for the first time in a 12-hour 4D-Var assimilation system, as a diagnostic tool to monitor the observation performance in the short-range forecast. The fundamental principles on which the forecast sensitivity diagnostic tool is based have been illustrated, and an example of a routine diagnostic has been provided.
The forecast sensitivity to observations can only be used to diagnose the impact on the short-range forecast, namely for periods of 24–48 hours, given the use of the adjoint model and the implied linearity assumption. The tool allows the computation and visualization of the impact for each assimilated measurement, and therefore the diagnostic can be performed from local to global scales and for any period of interest. The use of the second-order sensitivity gradient is necessary to identify the forecast impact of the observations; the first-order sensitivity gradient only contains information on the suboptimality of the assimilation system. The characteristics of the tool's use have been explained: in particular, its dependence on the verifying analysis used to compute the forecast error and on the scalar function representing the global forecast error (energy norm). The function of the global forecast error is first mapped onto the initial conditions (using the adjoint operator of the model forecast) and then into observation space (using the adjoint operator of the analysis system). The forecast error sensitivity of a specific measurement is transformed into a forecast error variation via a scalar product with the innovation vector.
The global impact of observations is found to be positive, and the forecast errors decrease for all data types when monthly averaged. In fact, because of the statistical nature of the assimilation procedure, the observation impact must be averaged over a long enough period to be significant.
An example of observation impact monitoring has been shown, and from the global performance assessment the specific performance of one AMSU-A channel has been illustrated for two polar-orbiting satellites, namely METOP-A and NOAA-15, covering a similar orbit. The causes of degradation or improvement can be further investigated using observing system experiments.
Given the dependence of some observation types on the meteorological situation, it is suggested that the forecast sensitivity to observations diagnostic tool be run on an operational basis and in relation to the operational suite error. Constant monitoring of the model forecast performance would allow the observation network to be used adaptively, with observations having a negative impact investigated and potentially denied in real time.
Acknowledgements
The author thanks Mohamed Dahoui, Anne Fouilloux, and Fernando Prates for their continued support in monitoring, displaying, and diagnosing the forecast performance of all observations assimilated at ECMWF.
Bibliography references:
Baker, N. L. and Daley, R. (2000). Observation and background adjoint sensitivity in the adaptive observation targeting problem. Q. J. R. Meteor. Soc., 126, 1431–1454.
Bouttier, F. and Kelly, G. (2001). Observing system experiments in the ECMWF 4D-Var data assimilation system. Q. J. R. Meteor. Soc., 127, 1469–1488.
Cardinali, C. (2009). Monitoring the observation impact on the shortrange forecast. Q. J. R. Meteor. Soc., 135, 239–250.
Cardinali, C., Pezzulli, S., and Andersson, E. (2004). Influence matrix diagnostics of a data assimilation system. Q. J. R. Meteor. Soc., 130, 2767–2786.
Cardinali, C. and Buizza, R. (2004). Observation sensitivity to the analysis and the forecast: a case study during ATreC targeting campaign. In Proceedings of the First THORPEX International Science Symposium, 6–10 December 2004, Montreal, Canada, WMO TD 1237 WWRP/THORPEX N. 6.
Chapnik, B., Desroziers, G., Rabier, F., and Talagrand, O. (2006). Diagnosis and tuning of observation error in a quasioperational data assimilation setting. Q. J. R. Meteor. Soc., 132, 543–565.
Courtier, P., Andersson, E., Heckley, W., Vasiljevic, D., Hamrud, M., Hollingsworth, A., Rabier, F., Fisher, M., and Pailleux, J. (1998). The ECMWF implementation of three-dimensional variational assimilation (3D-Var). Part I: Formulation. Q. J. R. Meteor. Soc., 124, 1783–1807.
Daescu, D. N. (2008). On the sensitivity equations of 4D-Var data assimilation. Mon. Weather Rev., 136, 3050–3065.
English, S., Saunders, R., Candy, B., Forsythe, M., and Collard, A. (2004). Met Office satellite data OSEs. In Proceedings of Third WMO Workshop on the Impact of Various Observing Systems on Numerical Weather Prediction, Alpbach, Austria. WMO/TD, 1228, pp. 146–156.
Errico, R. (2007). Interpretation of an adjointderived observational impact measure. Tellus, 59A, 273–276.
Gelaro, R., Buizza, R., Palmer, T. N., and Klinker, E. (1998). Sensitivity analysis of forecast errors and the construction of optimal perturbations using singular vectors. J. Atmos. Sci., 55, 1012–1037.
Janiskova, M., Mahfouf, J.F., Morcrette, J.J., and Chevallier, F. (2002). Linearized radiation and cloud schemes in the ECMWF model: development and evaluation. Q. J. R. Meteorol. Soc., 128, 1505–1527.
Kelly, G. (2007). Evaluation of the impact of the space component of the Global Observing System through observing system experiments. ECMWF Newsletter, Autumn.
Langland, R. and Baker, N. L. (2004). Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189–201.
Lorenc, A. (1986). Analysis methods for numerical weather prediction. Q. J. R. Meteorol. Soc., 112, 1177–1194.
Lopez, P. and Moreau, E. (2005). A convection scheme for data assimilation: description and initial tests. Q. J. R. Meteor. Soc., 131, 409–436.
Lord, S., Zapotocny, T., and Jung, J. (2004). Observing system experiments with NCEP US global forecast system. In Proceedings of Third WMO Workshop on the Impact of Various Observing Systems on Numerical Weather Prediction, Alpbach, Austria. WMO/TD 1228, pp. 56–62.
Lupu, C., Gauthier, P., and Laroche, S. (2011). Evaluation of the impact of observations on analyses in 3D- and 4D-Var based on information content. Mon. Weather Rev., 139, 726–737.
Morneau, J., Pellerin, S., Laroche, S., and Tanguay, M. (2006). Estimation of adjoint sensitivity gradients in observation space using the dual (PSAS) formulation of the Environment Canada operational 4D-Var. In Proceedings of Second THORPEX International Science Symposium, 4–8 December 2006, Landshut, Germany, WMO/TD 1355, WWRP/THORPEX 7, pp. 162–163.
Rabier, F. and Courtier, P. (1992). Four-dimensional assimilation in the presence of baroclinic instability. Q. J. R. Meteorol. Soc., 118, 649–672.
Rabier, F., Klinker, E., Courtier, P., and Hollingsworth, A. (1996). Sensitivity of forecast errors to initial conditions. Q. J. R. Meteorol. Soc., 122, 121–150.
Rabier, F., Järvinen, H., Klinker, E., Mahfouf, J. F., and Simmons, A. (2000). The ECMWF operational implementation of four-dimensional variational assimilation. Part I: Experimental results with simplified physics. Q. J. R. Meteorol. Soc., 126, 1143–1170.
Radnoti, G., Bauer, P., McNally, A., Cardinali, C., Healy, S., and de Rosnay, P. (2010). ECMWF study on the impact of future developments of the space-based observing system on numerical weather prediction. ECMWF Tech. Memo., 638.
Radnoti, G., Bauer, P., McNally, A., and Horanyi, A. (2012). ECMWF study to quantify the interaction between terrestrial and spacebased observing systems on numerical weather prediction skill. ECMWF Project Report.
Talagrand, O. (1997). Assimilation of observations, an Introduction. J. Meteorol. Soc. Japan, 75, 191–209.
Talagrand O. (2002). A posteriori validation of assimilation algorithms. In Proceedings of NATO Advanced Study Institute on Data Assimilation for the Earth System, Acquafreda, Maratea, Italy.
Tompkins, A. M. and Janiskova, M. (2004). A cloud scheme for data assimilation: description and initial tests. Q. J. R. Meteor. Soc., 130, 2495–2517.
Van der Vorst, H. A. (2003). Iterative Krylov Methods for Large Linear Systems. Cambridge University Press, Cambridge.
Xu, L., Langland, R., Baker, N., and Rosmond, T. (2006). Development and testing of the adjoint of NAVDAS-AR. In Proceedings of Seventh International Workshop on Adjoint Applications in Dynamic Meteorology, 8–13 October 2006, Obergurgl, Austria.
Zhu, Y. and Gelaro, R. (2008). Observation sensitivity calculations using the adjoint of the gridpoint statistical interpolation (GSI) analysis system. Mon. Weather Rev., 136, 335–351.