Precipitable Water as Temperature Proxy

Precipitable water is a measure of how high water would stack up if all the water vapor in the atmosphere rained down, right now! It typically ranges between 22.4 and 24.2 millimeters; in other words, all the water vapor raining down would add up to about 0.9 inches.
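
For the curious: precipitable water is just the column integral of specific humidity over pressure, PW = (1/(ρw·g)) ∫ q dp. A minimal Python sketch of that calculation, using a made-up humidity profile (my illustration, not NOAA's code):

    import numpy as np

    G = 9.81        # gravitational acceleration, m/s^2
    RHO_W = 1000.0  # density of liquid water, kg/m^3

    def precipitable_water_mm(q, p_pa):
        """Column precipitable water (mm) from specific humidity q (kg/kg)
        sampled at pressure levels p_pa (Pa)."""
        return abs(np.trapz(q, p_pa)) / (RHO_W * G) * 1000.0

    # hypothetical column: moist near the surface, dry aloft
    p = np.array([1000, 850, 700, 500, 300]) * 100.0  # hPa -> Pa
    q = np.array([0.010, 0.007, 0.004, 0.001, 0.0002])
    print(precipitable_water_mm(q, p))  # ~27.7 mm for this made-up column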

Now a little bit of logic: the amount of water vapor in the atmosphere depends on how hot the oceans, lakes, rivers, and whatever water is on or in the ground are. The hotter, the more evaporation. Simple. Therefore precipitable water should be a good proxy for surface water temperature. Let's see what the history of precipitable water looks like. For that we go to NOAA's ESRL.

We fill out the form, like this:

And this is what we get:

Precipitable Water Column Height, in millimeters

One would think that with constant warming, we should see precipitable water always going up. But we don't see that. We clearly see a CYCLE here, like an invisible letter U or V. In fact, it reminds me of something we discovered here:

Global Average Temperature Anomaly after Latitude Drift Adjustment

Let’s combine the two, while shifting temperatures forward 7 years:

Latitude Drift Adjusted Berkeley Temperature vs. Precipitable Water
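
Reproducing the overlay is just a reindexing exercise. A pandas sketch (the file and column names are placeholders for wherever you saved the two annual series):

    import pandas as pd

    # hypothetical CSVs holding the two annual series
    temp = pd.read_csv("latdrift_adjusted_temp.csv", index_col="year")["anomaly"]
    pw = pd.read_csv("esrl_precipitable_water.csv", index_col="year")["pw_mm"]

    # shift temperatures forward 7 years: the 1950 value plots at 1957, etc.
    temp.index = temp.index + 7

    both = pd.concat({"temp_plus7y": temp, "pw_mm": pw}, axis=1).dropna()
    print(both.corr())  # how well do the two cycles line up?
    both.plot(secondary_y="pw_mm")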

Now that makes sense. You know what doesn’t make sense? The “consensus” temperature data. Here it is:

Berkeley Global Summary Temperature vs. Precipitable Water

It is clear that Berkeley (and other similar outfits) do not perform proper latitude drift adjustment, and so their result does not match what we should expect to happen to the precipitable water level.

What we have here is a great confirmation that mainstream climate science has gone off the rails.

Enjoy 🙂 -Zoe

Published by Zoe Phin

https://phzoe.com

16 thoughts on “Precipitable Water as Temperature Proxy”

  1. I'm not sure I understand why drift isn't a problem for land-based temperatures in general. The actual satellite data doesn't have a problem, but that's not what is used in the other datasets like Berkeley or HadCRUT4.


    1. I’m not sure I understand the question?
      Latitude drift is a problem for both land and ocean temperatures because the missing data biases the composite average. The problem is indeed only for the pre-satellite era.

      What Berkeley, etc. do is take a simple area-weighted average of the available places without any regard for how the missing data biases their “global” average.

      So while they can claim there’s “global” warming, there is no actual GLOBAL warming.

      Imagine that yesterday all we had were temperatures above 65 degrees latitude, and today we have the whole earth. Such a chart would have a HUGE spike due to latitude drift. No sane scientist would take that averaging seriously, but if it happened slowly over time …
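
      A toy calculation makes the spike concrete (the zonal temperature profile below is invented, but the shape is right: warm equator, cold poles):

          import numpy as np

          lats = np.arange(-89.5, 90, 1.0)                    # 1-degree band centers
          t = 30.0 - 45.0 * np.sin(np.radians(np.abs(lats)))  # invented zonal-mean temps, C
          w = np.cos(np.radians(lats))                        # area weight per band

          polar = np.abs(lats) > 65
          print(np.average(t[polar], weights=w[polar]))  # "yesterday", poles only: ~ -13 C
          print(np.average(t, weights=w))                # "today", full coverage:  ~ +7.5 C
          # same planet, same day -- the ~20-degree jump is pure coverage artifact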


    2. Now I’m really puzzled. What the heck does latitude drift mean? I presumed it referred to orbital drift when I asked the original question.


        1. I just now read your earlier article and think I know what you mean. The “drift” term confused me, because of its association with satellite drift. If I understand it, you’re using “Latitude Drift Adjustment” to remove the warming bias in the data that’s due to an increase in temperature data from latitudes nearer the equator, that is, the low-latitude data increase.


  2. Zoe, let me make my point clearer. Clearly altitude and latitude matter to temperature. The grid cell approach recognizes this fact. We know there is lots of missing data in the early years as well as today. I read your post as saying that Berkeley Earth and other datasets have incorrectly calculated grid cell temperatures, and hence anomalies, because of incorrect adjustments for latitude and altitude.

    Suppose we had data for every cell, but the data in each cell came from altitudes well below the average for the cell. We would then expect the cell temperature data to be biased upward. How this would affect the anomaly calculation isn't clear. It could be that even though the cell temperature is biased upward, the anomaly calculation could still equal the “true” value. By true value I mean the average cell temperature we would get if we had observations for every square meter in the cell.
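
    A tiny numeric sketch of why that can happen (invented numbers): if the altitude bias is roughly constant over time, it shifts every absolute reading but cancels when you difference against a baseline.

        true_temps = [10.0, 10.5, 11.0]         # hypothetical true cell means, 3 years
        biased = [t + 2.0 for t in true_temps]  # stations sit too low: +2 C every year

        print([t - true_temps[0] for t in true_temps])  # [0.0, 0.5, 1.0]
        print([t - biased[0] for t in biased])          # [0.0, 0.5, 1.0] -- identical anomalies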

    So what happens when we have cells with no data? If you look at some of the earlier work on addressing this issue (I'm too lazy to find the cites), there is lots of statistical work showing correlations in temperatures by distance, etc. When I read through the analysis it totally left me cold. I'm an econometrician by training, and what struck me was how ad hoc it was. Anyway, lots of higher-latitude cells have missing data. My reading of your basic results for missing cell data is that lots of northern cells get artificially warmed.

    My point was that I don't understand what the beginning of the satellite data has to do with Berkeley Earth, HadCRUT4, etc. To my knowledge, the UAH satellite data doesn't play a role. I'm not sure why, all of a sudden in 1979, the land data used in Berkeley is “corrected” in a way that zeroes the latitude data on the equator. In my view, the issues you raise don't just disappear in 1979. It is true that the satellite data doesn't suffer from the same issues.

    What I find interesting is that the data set you calculated looks awfully like the US unadjusted land temperature data, as well as stations around the North Atlantic. However, it looks nothing like the Central England temperature record, which is the longest land record we have.

    I have never liked the idea of presenting a single temperature record for the world. I just don't think it means anything. If you look at actual records of stations by region, there are lots of parts of the world that have been cooling for many years. The SE US is a prime example.

    I think the right way to ferret out whether CO2 is having an impact is to exploit the cross-sectional time series properties of the data. All you need is a sample of temperature records from high-quality stations around the world. Forget about grid cells and anomalies. If you assume CO2 is well mixed and you have data on that, then it would be easy to estimate the relationship of temperature to changes in CO2.
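
    Concretely, that could be set up as a station fixed-effects regression of temperature on log CO2 (a sketch only, with a hypothetical input file):

        import numpy as np
        import pandas as pd

        # hypothetical long-format panel: columns station, year, temp, co2_ppm
        df = pd.read_csv("station_panel.csv")
        df["lnco2"] = np.log(df["co2_ppm"])

        # "within" transformation: demeaning by station absorbs station fixed effects
        df["temp_w"] = df["temp"] - df.groupby("station")["temp"].transform("mean")
        df["lnco2_w"] = df["lnco2"] - df.groupby("station")["lnco2"].transform("mean")

        beta = (df["temp_w"] * df["lnco2_w"]).sum() / (df["lnco2_w"] ** 2).sum()
        print(f"implied warming per CO2 doubling: {beta * np.log(2):.2f} C")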


    1. You'd be surprised, but Berkeley actually uses a lot of satellite data after 1979 to fill in the blanks. I've looked at their code. I don't fully understand their MATLAB code, but I can clearly see that they mix in satellite data post-1979.

      “Suppose we had data for every cell, but the data in each cell came from altitudes well below the average for the cell. We would then expect the cell temp data to be biased upward.”

      Completely correct. For this analysis I have to assume that the cells are perfectly correct. My issue can only be how they combine the cells to form a composite global average.

      “there are lots of parts of the world that have been cooling for many years. The SE US is a prime example.”

      Completely true. I’m in the SE US. Atlanta, to be specific.

      I don’t disagree with anything you said. Thank you for the comment.


  3. Zoe:

    I found your website more or less by accident and enjoyed reading through your various posts (though, I confess, my math is not up to assessing much of what you do). On a tangential note, when I went to try to find your site again, I used Google. I think you've made their naughty list, since I found it exceptionally difficult to get your site to return in a search with them (though not with Bing, using the same search terms).

    Personally, I’d take that as a compliment.

    Keep up the interesting work.

    Cheers,

    Ian.


    1. Thank you very much, Ian.
      It may be my own fault. I like to drop my links on YouTube. It's probably a bad idea to post my content in intolerant places, as others probably flag it.


  4. Zoe: I just discovered your site today. It is terrific. You are a great writer, an original thinker, and incredibly bright. I intend to share your stuff as widely as I can.

    Mrs. Smarty Pants, in the nicest possible way!


  5. Yeah, looks like dishonest temperature data for sure. And the precipitable water graph therefore looks like a better temperature proxy than the temperatures they are giving us. Except maybe for two things: there is a delay factor and an overshoot factor. The precipitable water level shoots up after the hottest days, and by more in absolute terms than the earlier years' high temperatures would suggest. Or at least that's what it looks like to my eyes. So if you put an averaging factor in that graph, and pulled it back a few years, we might have the best temperature proxy around.
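
    In pandas terms that smoothing-plus-lag would be something like the following (window and lag are guesses, and pw stands for an annual precipitable water series like the one in the sketch above):

        pw_smooth = pw.rolling(5, center=True).mean()  # the "averaging factor"
        pw_smooth.index = pw_smooth.index - 3          # pulled back a few years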


  6. “The hotter, the more evaporation. Simple.” Perhaps somewhat too simple, as it also depends on the air above that hotter water – air temperature, humidity and velocity play into it, as well as the surface area relative to depth of that water.
    Another factor to consider will be the lag time between solar influx of heat and eventual release by the oceans. As the cycle wanes, air temps drop, allowing for a more conducive evaporation coefficient. One half of one half of a complete ~22-year cycle is ~5.5 years, and so in this way of thinking, at least part of that seven-year discrepancy will be accounted for by natural causes.
    Forest cover (via transpiration) also affects TPW to some extent, mainly seasonally, but over Amazonia that season is long.
    As always, it’s nice to read what people are thinking about all of this. Learning better how to use their data to debunk their baseless conclusions is priceless.


