Advanced Data Assimilation for Geosciences. Lecture Notes of the Les Houches School of Physics: Special Issue, June 2012

Éric Blayo, Marc Bocquet, Emmanuel Cosme, and Leticia F. Cugliandolo

Print publication date: 2014

Print ISBN-13: 9780198723844

Published to Oxford Scholarship Online: March 2015

DOI: 10.1093/acprof:oso/9780198723844.001.0001



Observation impact on the short-range forecast


(p.165) 6 Observation impact on the short-range forecast
Advanced Data Assimilation for Geosciences

C. Cardinali

Oxford University Press

Abstract and Keywords

This chapter describes the concept of forecast error sensitivity to observations (FSO) and its use for diagnostic purposes. Assessment of the observational contribution to analysis and forecasting is among the most challenging aspects of diagnostics in data assimilation and numerical weather prediction. The FSO tool computes the contribution of all observations to the forecast error: a positive contribution is associated with an increase in forecast error and a negative contribution with a decrease. The technique is illustrated by an application to the weather prediction system of the European Centre for Medium-Range Weather Forecasts.

Keywords:   observational contribution, forecast sensitivity to observations, FSO, forecast error, numerical weather prediction


Advanced Data Assimilation for Geosciences. First Edition.

Edited by É. Blayo, M. Bocquet, E. Cosme, and L. F. Cugliandolo.

© Oxford University Press 2015. Published in 2015 by Oxford University Press.

(p.166) Chapter Contents

  1. 6 Observation impact on the short-range forecast 165


    1. 6.1 Introduction 167

    2. 6.2 Observational impact on the forecast 168

    3. 6.3 Results 178

    4. 6.4 Conclusion 178

    5. Acknowledgements 179

    6. References 180

(p.167) This chapter illustrates the concept of forecast error sensitivity to observations and its use for diagnostic purposes. The tool presented here computes the contribution of all observations to the forecast error: a positive contribution is associated with forecast error increase and a negative contribution with forecast error decrease. The forecast range investigated is 24 hours. It can be seen that, globally, the assimilated observations decrease the forecast error. Locally, however, poor performance can also be found. The forecast deterioration can be related either to the data quality or to the data assimilation and forecast system. The data impact on the forecast is spatially and temporally variable and depends on atmospheric regimes, which may or may not be well represented by the model or by the data. An example of a routine diagnostic assessment of observational impact on the short-range forecast performance is shown. The example also illustrates the tool's flexibility in representing different degrees of detail of forecast improvement or deterioration.

6.1 Introduction

The European Centre for Medium-Range Weather Forecasts (ECMWF) four-dimensional variational system (4D-Var; Rabier et al., 2000) handles a large variety of both space- and surface-based meteorological observations (more than 30 million a day) and combines the observations with the prior (or background) information on the atmospheric state. A comprehensive linearized and nonlinear forecast model is used, with a number of degrees of freedom of the order of 10^8.

The assessment of the observational contribution to the analysis (Cardinali et al., 2004; Chapnik et al., 2004; Lupu et al., 2011) and to the forecast is among the most challenging diagnostics in data assimilation and numerical weather prediction. The forecast performance can be assessed with adjoint-based observation sensitivity techniques that characterize the forecast impact of every measurement (Baker and Daley, 2000; Langland and Baker, 2004; Cardinali and Buizza, 2004; Morneau et al., 2006; Xu et al., 2006; Zhu and Gelaro, 2008; Cardinali, 2009). The technique computes the variation in the forecast error due to the assimilated data. In particular, the forecast error is measured by a scalar function of the model parameters (namely wind, temperature, humidity, and surface pressure) that are more or less directly related to the observable quantities.

In general, the adjoint methodology can be used to estimate the sensitivity measure with respect to any assimilation system parameter of importance. For example, Daescu (2008) derived a sensitivity equation of an unconstrained variational data assimilation system from the first-order necessary condition with respect to the main input parameters: observation, background, and observation and background error covariance matrices.

The forecast sensitivity to observations (FSO) technique is complementary to observing system experiments (OSEs), which have been the traditional tool for estimating data impact in a forecasting system (Bouttier and Kelly, 2001; English et al., 2004; Lord et al., 2004; Kelly, 2007; Radnoti et al., 2010, 2012). OSEs are particularly valuable as a complement to FSO: they can highlight the contribution of, for example, a particular data set and address the causes of the degradation or improvement that FSO measures.

(p.168) The main differences between adjoint-based and OSE techniques are as follows:

  • The adjoint-based observation sensitivity measures the impact of observations when the entire observational dataset is present in the assimilation system, while the observing system is modified in the OSE. In fact, each OSE experiment differs from the others in terms of assimilated observations.

  • The adjoint-based technique measures the impact of observations separately at every analysis cycle versus the background, while the OSE measures the total impact of removing data information from both background and analysis.

  • The adjoint-based technique measures the response of a single forecast metric to all perturbations of the observing system, while the OSE measures the effect of a single perturbation on all forecast metrics.

  • The adjoint-based technique is restricted by the tangent linear assumption and is therefore valid for forecasts up to 2 days, while the OSE can measure the data impact on longer-range forecasts and in nonlinear regimes.

This chapter introduces the mathematical concept and the application of the forecast sensitivity to the observation tool. The general ECMWF system performance in the 24-hour-range forecast is shown as derived by the diagnostic tool. In Section 6.2, the theoretical background of the FSO and the calculation of the forecast error contribution (FEC) from observations are shown. The ECMWF forecast performance is illustrated in Section 6.3 and conclusions are drawn in Section 6.4.

6.2 Observational impact on the forecast

6.2.1 Linear analysis equation

Data assimilation systems for numerical weather prediction provide estimates of the atmospheric state x by combining meteorological observations y with prior (or background) information xb. A simple Bayesian normal model provides the solution as the posterior expectation for x, given y and xb. The same solution can be achieved from a classical frequentist approach, based on a statistical linear analysis scheme providing the best linear unbiased estimate (BLUE) (Talagrand, 1997) of x, given y and xb. The optimal general least-squares solution to the analysis problem (see Lorenc, 1986) can be written as

xa = xb + K(y − Hxb)    (6.1)

The vector xa is called the analysis. The gain matrix K (of dimension n × p, with n being the dimension of the state vector and p that of the observation vector) takes into account the respective accuracies of the background vector xb and the observation vector y, as defined by the (n × n)-dimensioned covariance matrix B and the (p × p)-dimensioned covariance matrix R, with

K = (B^−1 + H^T R^−1 H)^−1 H^T R^−1,    so that    xa = Ky + (In − KH)xb    (6.2)

In is the n × n identity matrix. Here, H is a (p × n)-dimensioned matrix interpolating the background fields to the observation locations, and transforming the model variables to observed quantities (e.g. radiative transfer calculations transforming the model's temperature, humidity, ozone, etc. to brightness temperatures as observed by satellite instruments). In the 4D-Var context introduced above, H is defined to also include the propagation of the atmospheric state vector by the forecast model to the time at which the observations were recorded.
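
The linear analysis step above can be sketched numerically. The following toy example (all dimensions and numbers are purely illustrative: a state of dimension n = 2 and a single observation of the first state component) computes the gain K and the analysis xa of (6.1) and (6.2) with NumPy:

```python
import numpy as np

# Toy BLUE analysis: n = 2 state components, p = 1 observation of the first
# component. All matrices and values are purely illustrative.
x_b = np.array([1.0, 2.0])               # background state
y = np.array([1.5])                      # observation
H = np.array([[1.0, 0.0]])               # observation operator (p x n)
B = np.array([[0.5, 0.1],
              [0.1, 0.5]])               # background error covariance (n x n)
R = np.array([[0.5]])                    # observation error covariance (p x p)

# Analysis error covariance and gain: K = (B^-1 + H^T R^-1 H)^-1 H^T R^-1
A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
K = A @ H.T @ np.linalg.inv(R)

# Analysis: x_a = x_b + K (y - H x_b)
x_a = x_b + K @ (y - H @ x_b)            # -> [1.25, 2.05]
```

The equivalent form K = BH^T(HBH^T + R)^−1 yields the same gain; the analysis is pulled towards the observation in proportion to the relative accuracies encoded in B and R.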

(p.169) From (6.1), the sensitivity of the analysis system with respect to the observations can be derived from

δxa/δy = K^T = R^−1 H (B^−1 + H^T R^−1 H)^−1    (6.3)

Equation (6.3) provides the observational influence in the analysis (Cardinali et al., 2004; see also Chapter 4 of this volume).

6.2.2 Sensitivity equation

Baker and Daley (2000) derived the forecast sensitivity equation with respect to the observations in the context of variational data assimilation. Let us consider a scalar function J of the forecast error. The sensitivity of J with respect to the observations can be obtained using the chain rule as

δJ/δy = (δxa/δy)(δJ/δxa) = K^T δJ/δxa    (6.4)

where δJ/δxa is the sensitivity of the forecast error to the initial conditions (Rabier et al., 1996; Gelaro et al., 1998). The forecast error is mapped onto the initial conditions by the adjoint of the model, providing, for example, regions that are particularly sensitive to forecast error growth (see Section 6.2.3). By using (6.2) and (6.3), the forecast sensitivity to the observations becomes

δJ/δy = R^−1 H (B^−1 + H^T R^−1 H)^−1 δJ/δxa = R^−1 H A δJ/δxa    (6.5)

where (B^−1 + H^T R^−1 H)^−1 is the analysis error covariance matrix A. In practice, a second-order sensitivity gradient is needed (Langland and Baker, 2004; Errico, 2007) to obtain the information related to the forecast error, because the first-order sensitivity gradient only contains information on the suboptimality of the assimilation system (see Section 6.2.3 and Cardinali, 2009).
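
On a toy problem, the chain-rule form K^T δJ/δxa and the explicit form R^−1 H A δJ/δxa of the forecast sensitivity can be checked against each other. The dimensions, covariances, and the gradient δJ/δxa below are all illustrative stand-ins (in practice δJ/δxa is supplied by the adjoint model):

```python
import numpy as np

# Check that K^T dJ/dxa and R^-1 H A dJ/dxa agree on random SPD covariances.
rng = np.random.default_rng(0)
n, p = 4, 3
H = rng.normal(size=(p, n))                    # observation operator
Mb = rng.normal(size=(n, n))
B = Mb @ Mb.T + np.eye(n)                      # SPD background error covariance
Mr = rng.normal(size=(p, p))
R = Mr @ Mr.T + np.eye(p)                      # SPD observation error covariance
dJ_dxa = rng.normal(size=n)                    # stand-in for the adjoint-model output

A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)  # analysis error cov
K = A @ H.T @ np.linalg.inv(R)                 # gain matrix

dJ_dy_chain = K.T @ dJ_dxa                     # chain-rule form, as in (6.4)
dJ_dy_direct = np.linalg.inv(R) @ H @ A @ dJ_dxa   # explicit form, as in (6.5)
```

The two agree because A is symmetric, so K^T = R^−1 H A.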

The forecast error is defined by J = (1/2)⟨et, Cet⟩, where the subscript t indicates verification against the truth and et denotes the forecast error in temperature, vorticity, and divergence, as well as surface pressure. In practice, the forecast error is computed as the difference between the 24-hour forecast and the analysis valid at the same time. This implies that the verifying analysis is considered to be the truth:

  • The verifying analysis is only a proxy of the truth and thus errors in the analysis can obscure the observation impact in the short-range forecast.

C is a matrix of weighting coefficients that integrates the elements of the forecast error into a dry energy norm, which is a scalar:

  • (p.170) The energy norm is a suitable choice, because it depends directly on the most relevant model parameters, which are also contained in the control vector x (the vector used in the minimization process in, e.g., 4D-Var). Nevertheless, alternative functions of model parameters can be used.

Equation (6.5) can be solved (Krylov method; Van der Vorst, 2003) and the forecast error sensitivity to all assimilated observations is then derived. The numerical method used is shown in Section 6.2.4 (see also Cardinali, 2009).

6.2.3 Sensitivity gradient

Let us consider two forecasts verifying at time t: a forecast f starting from xa, and a forecast g starting from xb, where xb is the background field used in the xa analysis. Following Langland and Baker (2004) and Errico (2007), the second-order sensitivity gradient is defined as

δJ/δxa = δJf/δxa + δJg/δxb    (6.6)

where Jf = ⟨(xf − xt), C(xf − xt)⟩/2 and Jg = ⟨(xg − xt), C(xg − xt)⟩/2 are quadratic measures of the two forecast errors (xt being the verifying analysis) and C is the matrix of dry energy weighting coefficients. It is clear from (6.4) that the adjoint model maps the sensitivity (with respect to the forecast) of Jf into δJf/δxa along the trajectory f and the sensitivity of Jg into δJg/δxb along the trajectory g (for the first-order sensitivity gradient definition and computation, see Rabier et al., 1996; Gelaro et al., 1998). Equation (6.6) is represented schematically in Fig. 6.1. Let us now compare the first-order sensitivity gradient with the second-order one by expressing the variation of the forecast error due to the assimilation of observations, J(ea) − J(eb), where ea and eb are the analysis and background errors. Following Langland and Baker (2004), a second-order Taylor series decomposition is used to express this variation:

J(ea) − J(eb) = ⟨ea − eb, δJ/δe(eb)⟩ + (1/2)⟨ea − eb, (δ²J/δe²)(ea − eb)⟩    (6.7)


Fig. 6.1 Geometrical representation of the sensitivity gradient calculation expressed in (6.6).

(p.171) Because the error cost function is quadratic, (6.7) reduces to

J(ea) − J(eb) = ⟨ea − eb, Ceb⟩ + (1/2)⟨ea − eb, C(ea − eb)⟩ = (1/2)⟨ea − eb, C(ea + eb)⟩    (6.8)

which at first order is

J(ea) − J(eb) ≈ ⟨ea − eb, Cea⟩ = ⟨d, K^T Cea⟩    (6.9)

In an optimal assimilation system, the right-hand side of this equation is zero on average (Talagrand, 2002), since, statistically, the innovation vector d = y − Hxb and the analysis error are orthogonal. The results obtained using the first-order sensitivity gradient therefore only provide a measure of the suboptimality of the analysis system, and it is necessary to include the second-order term in the FSO calculation.
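
Both steps of this argument can be verified numerically on a toy problem: the exact quadratic reduction of the error difference, and the vanishing (in expectation) of the first-order term. The second check uses the standard error model ea = (I − KH)eb + Kε, with background error eb ~ N(0, B) and observation error ε ~ N(0, R), under which E[d ea^T] = RK^T − HB(I − KH)^T, an expression that is identically zero for the optimal gain. All matrices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 3

# (i) Exact quadratic reduction: J(ea) - J(eb) = <ea - eb, C(ea + eb)>/2
C = np.diag(rng.uniform(0.5, 2.0, size=n))        # dry-energy weighting coefficients
e_a = rng.normal(size=n)                          # analysis error (illustrative)
e_b = rng.normal(size=n)                          # background error (illustrative)

def J(e):
    return 0.5 * e @ C @ e

lhs = J(e_a) - J(e_b)
rhs = 0.5 * (e_a - e_b) @ C @ (e_a + e_b)

# (ii) Innovation / analysis-error orthogonality for the optimal gain:
# E[d ea^T] = R K^T - H B (I - K H)^T, which vanishes identically.
H = rng.normal(size=(p, n))
Mb = rng.normal(size=(n, n)); B = Mb @ Mb.T + np.eye(n)   # SPD background covariance
Mr = rng.normal(size=(p, p)); R = Mr @ Mr.T + np.eye(p)   # SPD observation covariance
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)              # optimal gain
cross_cov = R @ K.T - H @ B @ (np.eye(n) - K @ H).T       # = E[d ea^T]
```

The first identity holds exactly because the cost function is quadratic; the second shows why the first-order term averages to zero only when K is optimal.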

6.2.4 Numerical solution

In an optimal variational analysis scheme, the analysis error covariance matrix A is approximately the inverse of the matrix of second derivatives (the Hessian) of the analysis cost function Ja (Rabier et al., 2000), i.e. A = (Ja″)^−1 (Rabier and Courtier, 1992). Given the large dimension of the matrices involved, Ja″ and its inverse cannot be computed explicitly. The minimization is performed in terms of a transformed variable χ = L^−1(x − xb), with L chosen such that B = LL^T. The transformation L thus reduces the covariance of the prior to the identity matrix. In variational data assimilation, L is referred to as the change-of-variable operator (Courtier et al., 1998). Let us apply the change of variable in the analysis cost function and write

Ja(χ) = (1/2)χ^T χ + (1/2)(HLχ − d)^T R^−1 (HLχ − d)    (6.10)

The Hessian becomes

Ja″ = I + L^T H^T R^−1 H L    (6.11)

By applying the change of variable in (6.5) and using (6.11), the forecast sensitivity to the observations is expressed as

δJ/δy = R^−1 H L (I + L^T H^T R^−1 H L)^−1 L^T δJ/δxa    (6.12)

Using the conjugate gradient algorithm, the following equation is first solved for z:

(I + L^T H^T R^−1 H L) z = L^T za,    with za = δJ/δxa    (6.13)

(p.172) The solution z lies in the Krylov subspace generated by the vector L^T za and the matrix I + L^T H^T R^−1 H L. The Krylov subspace dimension is the degree of the minimal polynomial of I + L^T H^T R^−1 H L; if the degree is low, the Krylov method therefore searches for the solution in a low-dimensional space. The method is very efficient for the iterative solution of linear systems with large and sparse matrices (Van der Vorst, 2003). The forecast sensitivity to observations is then obtained by mapping the solution into observation space (using the H operator) and normalizing with respect to the observation error covariance matrix R.
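
A toy version of this solution procedure can be sketched with a hand-rolled conjugate gradient iteration. All dimensions and matrices below are illustrative; in the real system the Hessian is never formed explicitly and is only available through operator products:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 4
H = rng.normal(size=(p, n))                             # observation operator
R = np.diag(rng.uniform(0.5, 1.5, size=p))              # observation error covariance
L = np.tril(rng.normal(size=(n, n))) + 3.0 * np.eye(n)  # change of variable, B = L L^T
z_a = rng.normal(size=n)                                # stand-in for dJ/dxa

Ri = np.linalg.inv(R)
M = np.eye(n) + L.T @ H.T @ Ri @ H @ L                  # Hessian in the chi variable
b = L.T @ z_a

# Conjugate gradient iteration on the SPD system M z = b
z = np.zeros(n)
r = b - M @ z
d = r.copy()
for _ in range(n):                    # converges in at most n steps in exact arithmetic
    if np.sqrt(r @ r) < 1e-12:
        break
    Md = M @ d
    alpha = (r @ r) / (d @ Md)
    z = z + alpha * d
    r_new = r - alpha * Md
    beta = (r_new @ r_new) / (r @ r)
    d = r_new + beta * d
    r = r_new

# Forecast sensitivity to the observations: dJ/dy = R^-1 H L z
dJ_dy = Ri @ H @ (L @ z)
```

The Krylov structure is visible in the loop: each iteration enlarges the search space by one power of M applied to the initial residual, so a low-degree minimal polynomial means few iterations.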

6.2.5 Observation impact measure

Once the forecast sensitivity has been computed, the variation δJ of the forecast error expressed by J can be found by rearranging (6.1) and using the adjoint property for the linear operator:

δJ = ⟨δJ/δxa, δxa⟩ = ⟨δJ/δxa, Kδy⟩ = ⟨K^T δJ/δxa, δy⟩ = ⟨δJ/δy, δy⟩    (6.14)

where δxa = xa − xb are the analysis increments and δy = y − Hxb is the innovation vector. δJ is computed across the 12-hour window; the sensitivity gradients δJ/δxa, valid at the starting time of the 4D-Var window (09 and 21 UTC in the ECMWF system), are distributed by K^T, which incorporates the temporal dimension, over the 12-hour window. From (6.14), a few considerations should be taken into account:

  • The forecast impact δJ (hereinafter called the forecast error contribution, FEC) of all assimilated observations depends on the forecast error (through δJ/δxa), the assimilation system (K^T), and the difference between the observations and the model (y − Hxb).

  • Positive forecast error variation δJ>0 is synonymous with forecast degradation. Negative forecast error variation δJ<0 is synonymous with forecast improvement.

  • The verifying analysis is only a proxy of the truth. Therefore, errors in the analysis can mask the observation impact in the forecast.

  • Biases in the model can result in forecast degradation that is erroneously interpreted as an observation-related degradation.

  • Since the computation is performed with the linearized model, only errors in the short-range forecast can be diagnosed.

  • The forecast error is measured using a dry energy norm that depends on wind, temperature, and surface pressure. Therefore, observables depending on these parameters are rather well assessed. Moreover, the dependence of the forecast error on humidity is represented by the linearized moist process, so that the forecast impact of humidity observations is also fully assessed (Janiskova and Cardinali, in preparation).

  • (p.173) The variation of the forecast error due to a specific measurement can be summed over time and space in different subsets to compute the average contribution of different components of the observing system to the forecast error. For example, the contribution of all AMSU-A satellites (s) and channels (i) over time T will be

δJ_AMSU-A = Σ_(t∈T) Σ_s Σ_i δJ_s,i(t)    (6.15)

    This is one of the most important characteristics of the tool, because it allows any necessary level of analysis granularity for a comprehensive investigation.
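
The per-datum contributions and their aggregation by observing-system component can be sketched as follows: each assimilated datum contributes δJ_i = (δJ/δy)_i (y − Hxb)_i, and the contributions are summed over any chosen subset. The observation-type labels and numbers here are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 8
dJ_dy = rng.normal(size=p)                   # forecast sensitivity to each observation
innov = rng.normal(size=p)                   # innovations y - H xb (illustrative)
obs_type = np.array(["AMSU-A", "TEMP", "AMSU-A", "SYNOP",
                     "AMSU-A", "TEMP", "SYNOP", "AMSU-A"])  # hypothetical labels

fec = dJ_dy * innov                          # per-observation contribution to delta J
total = fec.sum()                            # global forecast error contribution

# Group the contributions by observing-system component
by_type = {t: fec[obs_type == t].sum() for t in np.unique(obs_type)}
```

Because the total is a plain sum, it decomposes exactly over any partition of the observations, which is what permits arbitrary granularity (per channel, per satellite, per region, per period).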

Given all the points above, it is clear that a full diagnostic assessment is necessary to establish the causes for a forecast error increase.

6.3 Results

The routinely computed observational impact from the operational ECMWF 4D-Var system (Rabier et al., 2000; Janiskova et al., 2002; Lopez and Moreau, 2005) is shown in Fig. 6.2 for September and October 2011. At ECMWF, the ‘observation impact' suite runs one day behind the model suite, in time to recover the actual verifying analysis for the forecast error computation. The 24-hour forecast error contribution (FEC) of all the observing-system components is computed and shown in Fig. 6.2(a) for different observation types as defined in Table 5.1 of Chapter 5. For technical reasons, microwave imagers (SSM/I and TMI) have not been considered in this study. The largest contribution to decreasing the forecast error is provided by AMSU-A (∼25%); IASI, AIRS, AIREP (aircraft data), and GPS-RO observations each account for ∼10% of the total impact. TEMP and SYNOP surface pressure observations contribute 5%, followed by AMVs and HIRS (∼4%), and then by ASCAT and DRIBU (3%). All other observations contribute less than 3%.

The error of the observation impact measure is also displayed in Fig. 6.2(a); it depends on the standard error and on the number of observations assimilated in the period. If the measured FEC variability lies within the error range, the variation is not considered significant. In Fig. 6.2(b), the mean impact per individual observation is shown; in this case, the impact is independent of the number of observations. The largest mean contribution is provided by DROP and DRIBU (surface pressure) observations, followed by a second group comprising MERIS, AMVs, ASCAT, GPS-RO, SYNOP, TEMP, AMSU-B, and AIREP. In contrast to the total forecast impact, which is largely provided by satellite observations, the largest per-observation impact is obtained from conventional observations. The difference between the two impact measures is mainly due to differences in observation accuracy, whereby a single conventional observation is on average more influential in the analysis than a single satellite measurement. (p.174)


Fig. 6.2 Observation contribution to the global forecast error reduction grouped by observation type as defined in Table 5.1 of Chapter 5. The measure is given in percent and for the months of September and October 2011. (a) Total forecast error contribution, where the error bars are computed using the standard error measure. (b) Average forecast error contribution (normalized by the number of observations used; unit J/kg).

The monthly variation of forecast impact is shown in Fig. 6.3 per observation type and for June–October 2011. The only significant temporal variation is observed for AMSU-A, with the largest forecast impact in August and September, and for GPS-RO and IASI in July and August, respectively. (p.175)


Fig. 6.3 Variation of total forecast error contribution in June, July, August, September, and October 2011 for the different observation types.

The AMSU-A forecast impact has been analysed in more detail. In Fig. 6.4, the contribution of all channels to the forecast error decrease is shown. Channel 8 has the largest overall impact and the stratospheric channels (11–14) the smallest. There is no significant difference in performance between September and October. The geographical distribution of mean forecast improvement or deterioration from channel 8 is shown in Fig. 6.5 for September–October 2011. The METOP-A AMSU-A performance is compared with that of NOAA-15, since they have similar satellite orbits. Nevertheless, there is a difference in measurement time, since METOP-A crosses the equator at around 9:30 and NOAA-15 at 16:30. The overall impact of the instruments on the two satellites is comparable. The geographical location of the improvement, however, differs quite substantially, with the exception of the polar and central Southern Hemisphere regions, where both perform similarly well. In the western part of the Southern Hemisphere, METOP-A reduces the forecast error, while NOAA-15 increases it. In contrast, in the eastern part, NOAA-15 shows a large and consistent improvement, whereas METOP-A shows small areas of degradation. A similar impact pattern is observed for the tropics and the Northern Hemisphere. (p.176)


Fig. 6.4 Total forecast error contribution for all AMSU-A instruments as a percentage. The impact is shown for all assimilated channels and for September and October 2011.


Fig. 6.5 Mean forecast error contribution for AMSU-A channel 8 onboard METOP-A (a) and NOAA-15 (b) for the whole globe. Units are joules.

Once the area of degradation or improvement and the periods of interest have been determined, additional OSEs can help to determine the possible causes. For example, it may be necessary to identify the explicit contribution of AMSU-A channel 8 to the degradation over the Atlantic (METOP-A) or central Africa (NOAA-15). Comparison between an experiment in which channel 8 is not assimilated and the control experiment (in which it is) will add information for the specific case and will help in evaluating the suitability of the assimilation procedure for these data.

The variation of forecast impact with time for AMSU-A channel 8 is shown for the North Atlantic region in Fig. 6.6(a,b). Again, METOP-A (a) and NOAA-15 (b) are compared. METOP-A shows much larger temporal variability than NOAA-15 and displays more events of detrimental impact (positive values) than NOAA-15, which, except on a few occasions, performs rather well over the entire period. The observation departures are also different: the departures with respect to the background (black line in Figs. 6.6(c,d)) are smaller for METOP-A (on average 0.05 K) until the beginning of October, when the assimilation of METOP-A restarted after a break of three days due to routine satellite maintenance. After 2 October, METOP-A background departures become smaller, but the largest absolute decrease (0.025 K) is observed instead for NOAA-15. From October onwards, the observation departures from the analysis (grey line in Figs. 6.6(c,d)) become very similar (close to zero on average), while, before that date, NOAA-15 shows a small positive bias. Interestingly, the forecast error reduction also changes: METOP-A shows larger variability than before and, to a lesser extent, so does NOAA-15. However, on average, as shown in Fig. 6.5, the impact of the two satellites is quantitatively similar, though different in terms of location. Over the Pacific, for example, the METOP-A and NOAA-15 time series of forecast performance are more similar, with METOP-A also showing a few large improvements (not shown). The number of measurements provided by the two satellites is very similar (Figs. 6.6(e,f)). The larger forecast error reduction of NOAA-15 with respect to METOP-A over the North Atlantic is due to the measurement time (Fig. 6.7). In fact, the NOAA-15 satellite crosses the Atlantic close to 9 UTC, which corresponds to the end of the 12-hour assimilation window in the 4D-Var system used (Fig. 6.7, light grey), while the METOP-A platform observes the Atlantic at the beginning of the assimilation window (Fig. 6.7, dark grey). Owing to the evolution of the background error covariance matrix B across the assimilation window, observations assimilated towards the end of the window are more influential than observations assimilated at the beginning. (p.177)


Fig. 6.6 Daily variation of mean FEC (a, b), background (black line) and analysis (grey line) departure (c, d) and observation number (e, f) over the Atlantic region from September to mid-November 2011 for METOP-A (a, c, e) and NOAA-15 (b, d, f) AMSU-A channel 8.


Fig. 6.7 Data coverage for METOP-A (a) and NOAA-15 (b) AMSU-A. The swath shading is related to the measurement time from 21 UTC to 9 UTC.

6.4 Conclusion

Over the last few years, the potential of adjoint-based diagnostic tools has been widely exploited. Recently, a compact derivation of the 4D-Var sensitivity equations has been performed using the theoretical framework of the implicit-function theorem (Daescu, 2008). The analytical formulation of the sensitivity equations with respect to an extended set of input parameters has been shown, and numerical applications (p.179) will soon follow. This chapter has introduced the use of the forecast sensitivity with respect to time-distributed observational data, for the first time in a 12-hour 4D-Var assimilation system, as a diagnostic tool to monitor the observation performance in the short-range forecast. The fundamental principles on which the forecast sensitivity diagnostic tool is based have been illustrated and an example of a routine diagnostic has been provided.

The forecast sensitivity to observations can only be used to diagnose the impact on the short-range forecast, namely for ranges of 24–48 hours, given the use of the adjoint model and the implied linearity assumption. The tool allows the computation and visualization of the impact of each assimilated measurement, and the diagnostic can therefore be performed from local to global scales and for any period of interest. The use of the second-order sensitivity gradient is necessary to identify the forecast impact of the observations; the first-order sensitivity gradient only contains information on the suboptimality of the assimilation system. The characteristics of the tool's use have been explained: in particular, its dependence on the verifying analysis used to compute the forecast error, and its dependence on the scalar function representing the global forecast error (the energy norm). The global forecast error function is first mapped onto the initial conditions (using the adjoint of the forecast model) and then into observation space (using the adjoint of the analysis system). The forecast error sensitivity of a specific measurement is transformed into a forecast error variation via a scalar product with the innovation vector.

The global impact of observations is found to be positive: the forecast error decreases for all data types when monthly averaged. Indeed, because of the statistical nature of the assimilation procedure, the observation impact must be averaged over a long enough period to be significant.

An example of observation impact monitoring has been shown, and from the global performance assessment the specific performance of one AMSU-A channel has been illustrated for two polar orbiting satellites, namely METOP-A and NOAA-15, covering a similar orbit. The causes of degradation or improvement can be further investigated using observing system experiments.

Given the dependence of some observation types on the meteorological situation, it is suggested that the forecast sensitivity to observations diagnostic tool be run on an operational basis and in relation to the operational suite error. Constant monitoring of the forecast performance would allow the observation network to be used adaptively, with observations having negative impact investigated and potentially denied in real time.


Acknowledgements

The author thanks Mohamed Dahoui, Anne Fouilloux, and Fernando Prates for their continued support in monitoring, displaying, and diagnosing the forecast performance of all observations assimilated at ECMWF.

(p.180) References

Bibliography references:

Baker, N. L. and Daley, R. (2000). Observation and background adjoint sensitivity in the adaptive observation targeting problem. Q. J. R. Meteor. Soc., 126, 1431–1454.

Bouttier, F. and Kelly, G. (2001). Observing system experiments in the ECMWF 4D-Var data assimilation system. Q. J. R. Meteor. Soc., 127, 1469–1488.

Cardinali, C. (2009). Monitoring the observation impact on the short-range forecast. Q. J. R. Meteor. Soc., 135, 239–250.

Cardinali, C., Pezzulli, S., and Andersson, E. (2004). Influence matrix diagnostics of a data assimilation system. Q. J. R. Meteor. Soc., 130, 2767–2786.

Cardinali, C. and Buizza, R. (2004). Observation sensitivity to the analysis and the forecast: a case study during the ATreC targeting campaign. In Proceedings of the First THORPEX International Science Symposium, 6–10 December 2004, Montreal, Canada, WMO TD 1237, WWRP/THORPEX No. 6.

Chapnik, B., Desroziers, G., Rabier, F., and Talagrand, O. (2006). Diagnosis and tuning of observation error in a quasi-operational data assimilation setting. Q. J. R. Meteor. Soc., 132, 543–565.

Courtier, P., Andersson, E., Heckley, W., Vasiljevic, D., Hamrud, M., Hollingsworth, A., Rabier, F., Fisher, M., and Pailleux, J. (1998). The ECMWF implementation of three-dimensional variational assimilation (3D-Var). Part I: Formulation. Q. J. R. Meteor. Soc., 124, 1783–1807.

Daescu, D. N. (2008). On the sensitivity equations of 4D-Var data assimilation. Mon. Weather Rev., 136, 3050–3065.

English, S., Saunders, R., Candy, B., Forsythe, M., and Collard, A. (2004). Met Office satellite data OSEs. In Proceedings of Third WMO Workshop on the Impact of Various Observing Systems on Numerical Weather Prediction, Alpbach, Austria. WMO/TD, 1228, pp. 146–156.

Errico, R. (2007). Interpretation of an adjoint-derived observational impact measure. Tellus, 59A, 273–276.

Gelaro, R., Buizza, R., Palmer, T. N., and Klinker, E. (1998). Sensitivity analysis of forecast errors and the construction of optimal perturbations using singular vectors. J. Atmos. Sci., 55, 1012–1037.

Janiskova, M., Mahfouf, J.-F., Morcrette, J.-J., and Chevallier, F. (2002). Linearized radiation and cloud schemes in the ECMWF model: development and evaluation. Q. J. R. Meteorol. Soc., 128, 1505–1527.

Kelly, G. (2007). Evaluation of the impact of the space component of the Global Observation System through observing system experiments. ECMWF Newsletter, Autumn.

Langland, R. and Baker, N. L. (2004). Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189–201.

Lorenc, A. (1986). Analysis methods for numerical weather prediction. Q. J. R. Meteorol. Soc., 112, 1177–1194.

Lopez, P. and Moreau, E. (2005). A convection scheme for data assimilation: description and initial tests. Q. J. R. Meteor. Soc., 131, 409–436.

(p.181) Lord, S., Zapotocny, T., and Jung, J. (2004). Observing system experiments with NCEP US global forecast system. In Proceedings of Third WMO Workshop on the Impact of Various Observing Systems on Numerical Weather Prediction, Alpbach, Austria. WMO/TD 1228, pp. 56–62.

Lupu, C., Gauthier, P., and Laroche, S. (2011). Evaluation of the impact of observations on analyses in 3D- and 4D-Var based on information content. Mon. Weather Rev., 139, 726–737.

Morneau, J., Pellerin, S., Laroche, S., and Tanguay, M. (2006). Estimation of adjoint sensitivity gradients in observation space using the dual (PSAS) formulation of the Environment Canada operational 4D-Var. In Proceedings of the Second THORPEX International Science Symposium, 4–8 December 2006, Landshut, Germany, WMO/TD 1355, WWRP/THORPEX 7, pp. 162–163.

Rabier, F. and Courtier, P. (1992). Four-dimensional assimilation in the presence of baroclinic instability. Q. J. R. Meteorol. Soc., 118, 649–672.

Rabier, F., Klinker, E., Courtier, P., and Hollingsworth A. (1996). Sensitivity of forecast errors to initial condition. Q. J. R. Meteorol. Soc., 122, 121–150.

Rabier, F., Järvinen, H., Klinker, E., Mahfouf, J. F., and Simmons, A. (2000). The ECMWF operational implementation of four-dimensional variational assimilation. Part I: experimental results with simplified physics. Q. J. R. Meteorol. Soc., 126, 1143–1170.

Radnoti, G., Bauer, P., McNally, A., Cardinali, C., Healy, S., and de Rosnay, P. (2010). ECMWF study on the impact of future developments of the space-based observing system on numerical weather prediction. ECMWF Tech. Memo., 638.

Radnoti, G., Bauer, P., McNally, A., and Horanyi, A. (2012). ECMWF study to quantify the interaction between terrestrial and space-based observing systems on numerical weather prediction skill. ECMWF Project Report.

Talagrand, O. (1997). Assimilation of observations, an Introduction. J. Meteorol. Soc. Japan, 75, 191–209.

Talagrand O. (2002). A posteriori validation of assimilation algorithms. In Proceedings of NATO Advanced Study Institute on Data Assimilation for the Earth System, Acquafreda, Maratea, Italy.

Tompkins, A. M. and Janiskova, M. (2004). A cloud scheme for data assimilation: description and initial tests. Q. J. R. Meteor. Soc., 130, 2495–2517.

Van der Vorst, H. A. (2003). Iterative Krylov Methods for Large Linear Systems. Cambridge University Press, Cambridge.

Xu, L., Langland, R., Baker, N., and Rosmond, T. (2006). Development and testing of the adjoint of NAVDAS-AR. In Proceedings of the Seventh International Workshop on Adjoint Applications in Dynamic Meteorology, 8–13 October 2006, Obergurgl, Austria.

Zhu, Y. and Gelaro, R. (2008). Observation sensitivity calculations using the adjoint of the gridpoint statistical interpolation (GSI) analysis system. Mon. Weather Rev., 136, 335–351. (p.182)