The Two-and-One-Half PC Solution

In his RealClimate post, West Antarctica Still Warming 2, Eric Steig discusses some of the criticisms made by Ryan O’Donnell et al. (Improved methods for PCA-based reconstructions: case study using the Steig et al. 2009 Antarctic temperature reconstruction) of his 2009 Nature paper, Warming of the Antarctic ice-sheet surface since the 1957 International Geophysical Year.

From the RC post:

“Second, that in doing the analysis, we retain too few (just 3) EOF patterns. These are decompositions of the satellite field into its linearly independent spatial patterns. In general, the problem with retaining too many EOFs in this sort of calculation is that one’s ability to reconstruct high order spatial patterns is limited with a sparse data set, and in general it does not make sense to retain more than the first few EOFs. O’Donnell et al. show, however, that we could safely have retained at least 5 (and perhaps more) EOFs, and that this is likely to give a more complete picture.”


Some background may be required for the non-statistically oriented reader. A singular value decomposition allows one to take a set of numerical data sequences (usually arranged as the columns of a matrix) and decompose it into a new set of sequences (called Principal Components or Empirical Orthogonal Functions), a set of weights indicating how much of the variability of the original set is accounted for by each PC (the singular values, often loosely called eigenvalues), and a set of coefficients which relate each PC to each of the original sequences. To reconstruct the original sequences, one takes each PC, multiplies it by its weight and then uses the coefficients to reproduce the original data.
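
For readers who like to see the mechanics, here is a minimal sketch in R using made-up numbers (not the actual satellite data); svd() is the base R function used later in the post.

    # Toy example: 300 "months" of data for 10 "grid cells" (made-up numbers).
    set.seed(1)
    X <- matrix(rnorm(300 * 10), nrow = 300, ncol = 10)

    s <- svd(X)          # X = U %*% diag(d) %*% t(V)
    pcs     <- s$u       # the PCs / EOFs, one column per pattern
    weights <- s$d       # the singular values (the "eigenvalues" in the loose sense above)
    coefs   <- s$v       # coefficients relating each PC to each original sequence

    # Reconstruction: multiply each PC by its weight and apply the coefficients.
    X.rebuilt <- pcs %*% diag(weights) %*% t(coefs)
    max(abs(X - X.rebuilt))    # effectively zero (machine precision)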

This decomposition has certain properties which can be very useful in understanding and analyzing the original data. In particular, when there are strong relationships between the data sequences, several of the eigenvalues may be much larger than the rest. Using only the PCs which belong to those eigenvalues can create a very good replica of the data, but with fewer “moving parts”.
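
Continuing the toy example, keeping only the leading PCs looks like this; the ratio of squared singular values gives the fraction of the total variability that is retained.

    # Truncated reconstruction using only the first k PCs.
    k <- 3
    X.approx <- s$u[, 1:k] %*% diag(s$d[1:k]) %*% t(s$v[, 1:k])

    # Fraction of the total sum of squares captured by the first k PCs:
    sum(s$d[1:k]^2) / sum(s$d^2)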

In the Steig paper, the authors divided the Antarctic into 5509 grid cells. They took a huge amount of satellite data and from it formed a monthly temperature sequence (from January 1982 to December 2006 – 300 months) for each of the grid cells. The problem was to estimate the behavior of various regions of Antarctica during the longer period from January 1957 to the end of 2006 (a total of 600 months). Since the data before the satellite era was sparse both geographically and temporally, it was decided to try to “extend” the satellite data to the earlier period by first relating it to the ground temperatures that were available and then using that relationship to guess what the satellite temperatures might have been prior to 1982.

This is a good idea, but, as always, the devil is in the details. Using the totality of the available satellite sequences was unwieldy, both from a mathematical and a statistical standpoint. This is where the decision to use a PC approach came in handy. The satellite temperature sequences are strongly interrelated – one would expect, for example, that geographically adjacent grid cells would behave very similarly – so it was apparent that this approach could reasonably produce something useful.

How many PCs should one use? This is the specific disagreement mentioned in the above quote from the RC post. Too many PCs mean a larger number of values to be estimated from the earlier station data (300 monthly values for each PC – this is where the “overfitting” claim arises). On the other hand, too few PCs mean that the reconstruction will be unable to properly separate the spatial and temporal temperature patterns: temperatures from the peninsula can be “smeared” into West Antarctica, there may be no ability to examine the temperatures of the various seasons separately, or any combination of these problems.
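
For concreteness: the pre-satellite period from January 1957 through December 1981 covers 25 × 12 = 300 months, so retaining k PCs means that k × 300 monthly values must be inferred from the sparse station network – 900 values for three PCs, 2100 for seven.
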
One could argue that the graph from O’Donnell et al. displayed in the RC post illustrates this difference:

Too few PCs will produce a monotone-colored plot: with a single PC, every grid cell will have exactly the same characteristics, since only a single multiplier and a single PC are available to reconstruct it. With two PCs, only two coefficients are available to differentiate each cell’s sequence from all of the others, and so on. As more PCs are added, more variation in the coloring becomes possible. Whether this added variation represents a greater reality or not is a separate issue.
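
In terms of the toy example above, a one-PC reconstruction makes every grid-cell series an exact scaled copy of that single PC:

    # Rank-1 reconstruction: every column is a multiple of the single PC.
    X.one <- s$d[1] * s$u[, 1] %*% t(s$v[, 1])
    cor(X.one[, 1], X.one[, 5])   # exactly 1 (or -1): same shape, different scaling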

What method should be used to “establish the relationship” between the satellite and ground data and to extend the sequences to the pre-1982 era? Several methods are available; they have different properties and it is important to understand what the drawbacks of each can be. The two mentioned in the RC post are TTLS (truncated total least squares – advocated by Mann and Steig) and ‘iridge’ (individual ridge regression – part of the methodology used by OD 2010). A proper discussion of these is a complicated matter that is beyond the scope of this post.

The point of this post is to look at the specific 3 PCs which were used by the Steig paper. One can download the Antarctic reconstruction from the paper’s web site (warning – it is a VERY large text file) and decompose it with the R svd function. Since the sequences have not been centered (their means are not all zero), the resulting “PCs” are not principal components in the usual sense, but this does not affect the point being made here. A plot of the 3 PCs used by Steig et al. (oriented so that all trends are positive) produces the following:
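
For those who want to reproduce this, a rough sketch of the R steps follows. The file name and layout are assumptions on my part (adjust them to match the actual download); everything else is base R.

    # Assumed layout: 600 monthly rows (Jan 1957 - Dec 2006) x 5509 grid-cell columns.
    recon <- as.matrix(read.table("ant_recon.txt"))   # hypothetical file name
    sv  <- svd(recon)
    pcs <- sv$u[, 1:3]                                # the three leading "PCs"

    # Orient each PC so that its overall trend is positive, as in the plot.
    time <- 1:nrow(pcs)
    for (j in 1:3) {
      if (coef(lm(pcs[, j] ~ time))[2] < 0) pcs[, j] <- -pcs[, j]
    }
    matplot(time, pcs, type = "l", lty = 1,
            xlab = "Month (Jan 1957 = 1)", ylab = "PC value")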

The third PC looks somewhat different from the other two: the (extended) portion prior to 1982 is very close to being identically zero. Any reconstruction of the satellite data using these PCs therefore becomes essentially a two-PC reconstruction prior to 1982 and a three-PC reconstruction afterward. The net effect of the third PC on the overall results is to put a bend in the trends at that point (upward if the coefficient is positive and downward if negative).
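
Continuing the sketch above, the pre-1982 flatness of the third PC is easy to check numerically (January 1982 is month 301 of a series starting in January 1957):

    # Standard deviation of each PC before and after the satellite era begins.
    pre  <- 1:300
    post <- 301:600
    round(rbind(pre1982  = apply(pcs[pre,  ], 2, sd),
                post1982 = apply(pcs[post, ], 2, sd)), 4)
    # The pre-1982 value for the third column (PC3) should be essentially zero.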

However, can this reconstruction differentiate well between the Antarctic regions in the early temperature record? I sincerely doubt it unless someone believes that the record is sufficiently homogeneous both spatially and temporally to justify that possibility. Perhaps, the authors of Steig et al. could explain this in more detail – I presume that they would have seen the same graph when they were writing the paper.

Why did this occur? My best guess is that it might be a result of using the total least squares function in the procedure.

I will give three more plots. Each is a plot of the relative size of the grid-cell coefficients for one of the PCs when the PCs are combined to create the three-PC satellite series. NOTE: These are NOT the values of the trends.
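
In terms of the earlier sketch, these grid-cell coefficients are just the columns of sv$v. The grid-cell coordinates needed to draw the maps are not part of the reconstruction file itself, so the plotting step below is left as a comment with a hypothetical ‘grid’ data frame.

    # Relative grid-cell coefficients ("loadings") for PC k.
    k <- 1
    loadings <- sv$v[, k]
    range(loadings)    # typically on the order of -0.4 to 0.4

    # With grid-cell coordinates in a data frame 'grid' (columns x and y):
    # plot(grid$x, grid$y, pch = 15,
    #      col = heat.colors(100)[cut(loadings, 100)],
    #      main = paste("Relative coefficients for PC", k))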

PC1 produces a general increasing trend throughout the continent. This trend is somewhat more pronounced in the central area.

PC2 is the main driver for the peninsula – West Antarctica relationship.

The main effects of PC3 are felt after 1982 – it adds to the cooling in the lower right and the warming in the upper left.


11 responses to “The Two-and-One-Half PC Solution”

  1. Layman Lurker

    Good to see you back at it Roman. Would you consider a part #2 to this post explaining how the uneven distribution of Antarctic surface station data would interact with Steig’s 3 PC scenario?

    • RomanM

      Good seeing you, LL. Everything in its time. I have some other things to look at first and I am a slow typist. 🙂

      What I would like to do is to get the original 300 x 5509 set of satellite observations from which the 3 PCs were calculated. I have some methodology that I have been working on for doing the type of reconstruction done in Steig et al., but by techniques not yet used by anyone else that I know of. I think they could be more advantageous in some ways than what has been done up to now.

  2. Greg Simpson

    Thanks, this helps a bit. At least I now know why they were talking about End Of Files, or rather Empirical Orthogonal Functions, so much.

    • RomanM

      People who are used to looking at specific material tend to forget that others may not be as familiar with it. Sometimes it is useful just to step back and try to think of stuff as simply as possible.

      If people can better understand the issues, one could hope they might back off the ad hom stuff a little.

  3. bernie

    Roman:
    Thanks for the very clear explication. Can you say a few more words on the rationale for using 3 as opposed to more PCs? Also, isn’t it important to check to ensure that a factor is not simply emerging because of a single source of data that itself has a strong and distinctive pattern – as in the infamous BristleCone Pine dependent Hockey Stick?

  4. AMac

    Nick Stokes ran O’Donnell’s Steig’09-like algorithm, retaining 3, 4, 5, 6, or 7 PCs. His findings seem broadly consistent with what you show here.

    • RomanM

      What Nick has graphed is somewhat different.

      He has calculated the trends which you obtain for each grid cell in the cases where you first use PC1, then the pair PC1 and PC2, the threesome PC1, PC2 and PC3, etc.

      What I have graphed is not the trends but the relative effect that a PC has on forming the anomaly value for the cells. Secondly, I consider the 3 PCs individually, not cumulatively combining their effects as Nick has done. This shows how each PC differentiates (or not) between cells in the various regions. The values typically go from -.4 to .4; however, if the PCs are combined, the combination is further weighted by the PC eigenvalues as well.

      My first graph should look very similar to Nick’s, but the similarity would decrease as the number of PCs used increases. The point I was trying to make is that the number of different patterns that can be replicated with these PCs is relatively small and it raises some doubts about the ability of the resulting reconstruction to, for example, show differing seasonal patterns for the various regions of Antarctica.
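
      To make the distinction concrete, a rough sketch (using the sv object from the decomposition sketch in the post; the variable names are mine):

        # Nick's style: per-cell trend of a cumulative k-PC reconstruction.
        time  <- 1:600
        k     <- 2
        rec.k <- sv$u[, 1:k] %*% diag(sv$d[1:k]) %*% t(sv$v[, 1:k])
        trends.k <- apply(rec.k, 2, function(y) coef(lm(y ~ time))[2])

        # This post's style: the loadings of a single PC, taken on its own.
        loadings.2 <- sv$v[, 2]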

      • bernie

        The physical meaning of each PC needs to be explained. Rolling them all up together seems a very strange idea – except that it does show the impact of whatever infilling has taken place, does it not?

  5. Pingback: Climate Blog and News Recap: 2011 02 19 « The Whiteboard

  6. Pingback: In Case You Missed It « the Air Vent

  7. Geoff Sherrington

    Roman M – thank you for the above.

    In a simpler sense, might we please look at some ground station data since 1982 and its incorporation into the satellite data. If there was a strong upward trend 1992-1998, followed by a flat response to now, would that have a different effect to a flat response from 1962 followed by a similar upward trend in the last 5 years?

    The question arose tangentially because of Macquarie Island, which was in some of the early Antarctic work. I did a manuscript that has a data start year of 1968 to get over the change to Australian decimal use in 1966. The graph of 40 years from 1968-2008 is here, Tmax in blue and Tmin in pink, and I do not detect any change of any significance:


    Extended further back, where more infilling is needed to make simple Excel use easier, one gets

    Now, here is part of an email from Dr David Jones, Head of Climate Analysis at BOM, mid July 2009 –
    “Macquarie Islands data shows strong warming – about 0.5C in the last 50 years. It also shows an unusual pattern of increasing rainfall and decreasing cloudiness, which is entirely consistent with climate change projections.
    An analysis was presented at a climate conference in Uni Tas 2 months back, and the science paper will follow.”

    (Unfortunately, I have been refused a copy of the paper by the BOM).

    I guess the question is, does the timing of a several-year anomaly, in the context of the S09 and O10 papers, have a rather complicated effect on the analysis of the satellite data and then on the conclusions – especially if it occurs before 1982, when only ground station data is available? I’m looking particularly at your plot of PC3 above.
