Are surface temperature records reliable?
What the science says...
The warming trend is the same in rural and urban areas, measured by thermometers and satellites, and by natural thermometers.
Climate Myth...
Temp record is unreliable
"We found [U.S. weather] stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.
In fact, we found that 89 percent of the stations – nearly 9 of every 10 – fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source." (Watts 2009)
Surveys of weather stations in the USA have indicated that some of them are not sited as well as they could be. This calls into question the quality of their readings.
However, when processing their data, the organisations which collect the readings take into account any local heating or cooling effects, such as might be caused by a weather station being located near buildings or large areas of tarmac. This is done, for instance, by weighting (adjusting) readings after comparing them against those from more rural weather stations nearby.
More importantly, for the purpose of establishing a temperature trend, the relative level of single readings is less important than whether the pattern of all readings from all stations taken together is increasing, decreasing or staying the same from year to year. Furthermore, since this question was first raised, research has established that any error that can be attributed to poor siting of weather stations is not enough to produce a significant variation in the overall warming trend being observed.
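The kind of neighbour-comparison adjustment described above can be sketched in a few lines of Python. This is a deliberately simplified illustration with made-up numbers, not any organisation's actual homogenisation algorithm:

```python
def adjust_station(station, neighbors):
    """Remove a station's persistent offset from the mean of nearby
    rural neighbors (a highly simplified homogenisation sketch)."""
    neighbor_mean = [sum(col) / len(neighbors) for col in zip(*neighbors)]
    bias = sum(s - m for s, m in zip(station, neighbor_mean)) / len(station)
    return [s - bias for s in station]

# Hypothetical data: three rural neighbors sharing a 0.02 C/yr trend,
# and one station reading a constant 0.8 C warm (say, nearby asphalt).
years = range(50)
rural = [[14.0 + 0.02 * t for t in years] for _ in range(3)]
urban = [14.0 + 0.02 * t + 0.8 for t in years]
adjusted = adjust_station(urban, rural)
# The constant local bias is removed, but the warming trend survives intact.
```

Note that a constant offset cannot change a trend: subtracting the same bias from every year shifts the whole series without altering year-to-year differences.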
It's also vital to realise that warnings of a warming trend — and hence Climate Change — are not based simply on ground level temperature records. Other completely independent temperature data compiled from weather balloons, satellite measurements, and from sea and ocean temperature records, also tell a remarkably similar warming story.
For example, a study by Anderson et al. (2012) created a new global surface temperature record reconstruction using 173 records with some type of physical or biological link to global surface temperatures (corals, ice cores, speleothems, lake and ocean sediments, and historical documents). The study compared their reconstruction to the instrumental temperature record and found a strong correlation between the two:
Temperature reconstruction based on natural physical and biological measurements (Paleo, solid) and the instrumental temperature record (MLOST, dashed) relative to 1901-2000. The range of the paleo trends index values is coincidentally nearly the same as the GST although the quantities are different (index values versus temperature anomalies °C).
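The correlation such a comparison reports is a standard Pearson coefficient. A minimal sketch, using invented series (the names `instrumental` and `proxy_index` are illustrative, not the study's actual data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two made-up anomaly series sharing a warming trend but differing in
# scale and offset (index values vs. deg C, as with Paleo vs. MLOST):
instrumental = [0.01 * t for t in range(100)]
proxy_index = [3.0 * v + 0.5 for v in instrumental]
r = pearson_r(instrumental, proxy_index)  # an exact linear relation gives r = 1
```

Because the coefficient is scale- and offset-invariant, the caption's point holds: the two quantities can agree strongly in trend even though their units differ.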
Confidence in climate science depends on the correlation of many sets of these data from many different sources in order to produce conclusive evidence of a global trend.
Last updated on 13 January 2013 by dana1981.
Let us not forget what the CA/McIntyre effort basically consists of, in summary: we do not like what the scientific research concludes on this issue, so we are going to review every single detail, fishing for anything that could lead in the direction we favor.
On the other hand, the actual climate research follows this basic process: study climate, by considering the physical laws governing atmospheric dynamics and their interrelations, by modeling these on supercomputers, by gathering as much data as can be obtained and carefully sorting through and analyzing that data.
It is not very surprising that when the CA folks actually adopt a scientific way of analyzing data, their conclusions confirm the prior ones from real researchers.
http://wattsupwiththat.wordpress.com/
So far, with 40% of this supposedly gold-standard network surveyed, 85% of sites are showing siting and operational errors likely to be > 1 degree C. In other words, larger than the entire GW signal to date.
People promoting catastrophic warming scenarios frequently refer to graphs from this very data set to support their claims. It is clear that we just can't make a useful reconstruction of surface temperatures using these sites.
I am somewhat dismayed by the idea that modeling with supercomputers is somehow climate research, while going through and evaluating actual data and methodology is apparently not what "real researchers" do. I could have saved so much time in grad school if I had only known that computer models were real research and the actual data wasn't.
Modeling is a valuable tool in science but the models are not evidence in any way of what is happening in the climate. Adding the word supercomputer does not make it science, in fact quite the opposite. By the way since Cray isn't making them anymore what makes a computer super these days?
The term "urban heat island" is probably not a good one; it is a land-use issue, not simply an urban issue. A station in NYC's Central Park may be just fine, while a station in the middle of nowhere can be bad if it is placed on asphalt next to an air-conditioner exhaust.
Last month, Energy and Environment 18:985-995 published a not very kind report by Douglas J. Keenan. It shows that two well-known and influential papers, which are still the basis for the IPCC claim that UHI has been removed from the global climate data sets, are in fact incorrect. In fact, the word used is "fraudulent".
While Tom Wigley has sent me some references on sea temperature that seem pretty robust, (thank you sir) the land surface temperature measurements are in serious trouble. It looks to me like at least half of the late 20th century warming signal in this data is about to vanish. We really need a data set that is not badly contaminated, that uses sites that are properly placed and maintained, USHCN is not it.
http://pubs.acs.org/subscribe/journals/esthag-w/2005/aug/policy/pt_skeptics.html
This article discusses how UHI affects observations:
http://www.ncdc.noaa.gov/oa/climate/research/population/article2abstract.pdf
As mentioned above, John V has plotted the data from the "good" sites (per Watts' definition) and has found very good agreement with GISTEMP, so they must be doing something right. It is worth emphasizing that Watts' effort concentrated on micro-siting effects, a different problem from UHI; nevertheless, the agreement was still there in the data.
I would not venture to say that climate science dispenses with going through and evaluating actual data. This RC post is of some interest as to how the UHI effect is accounted for:
http://www.realclimate.org/index.php/archives/2007/07/no-man-is-an-urban-heat-island/#more-454
I believe the post references this article:
http://www.ncdc.noaa.gov/oa/climate/research/population/article3abstract.pdf
Is that one of the two criticized by E&E?
I should have cited the actual papers, but I thought you'd rather look yourself; now I'll have to remember to dig them back out.
http://tamino.files.wordpress.com/2007/08/global2.jpg
The sources are:
http://data.giss.nasa.gov/gistemp/
http://www.remss.com/pub/msu/monthly_time_series/
Yes the graphs from the surface data agree with the graphs from the surface data.
You say:
"Compare to the balloon data and the satellite data, where the anomaly is much smaller, and the trend much less pronounced."
Graphs, sources, data, links?
We have been discussing the surface station data. I am not familiar with all of the ways these different data sets are compiled. I am pointing out that the USHCN clearly has problems at the data-collection end.
Is it your contention that the balloon and satellite data show the large anomaly that the surface station data does? The satellite and balloon data match each other well, but neither is nearly as dramatic as the "surface record".
That can't be right.
Are you really sure about that?
Did you also look at what reference period is used to compute anomalies?
I suspect that there must be something else in there that we aren't seeing. If they are using different reference periods for their anomaly calculations and then combining them in the graphs as they have, that would be amazing incompetence, so I doubt that's what it is.
It could be that, as I have suggested on other threads, we ought to quit using the land surface record until we get a better handle on what the heck the problems with it are. This meets enormous resistance because the warming signal from balloon measures has been so much weaker and the satellite record is so short.
About the satellite record:
The T2 channel, used for troposphere measurements, is influenced by the stratosphere. The T4 channel is all stratosphere.
http://www.ncdc.noaa.gov/oa/climate/research/rss-msu.pdf
http://climate.envsci.rutgers.edu/pdf/VinnikovGrody2003.pdf
http://climate.envsci.rutgers.edu/pdf/TrendsJGRrevised3InPress.pdf
http://www.ncdc.noaa.gov/oa/climate/research/nature02524-UW-MSU.pdf
About balloons:
http://www.sciencemag.org/cgi/content/short/309/5740/1556
"if the surface warms more than the atmosphere then the atmosphere cannot be the cause as this would violate the second law of thermodynamics,"
The surface is not really the surface as in the surface of a spheroid. Surface temperature measurements and estimates are really the lowest tropospheric temperatures and should be thought of that way. Sea surface temperatures would probably correspond better to the idea of "surface" as you use it in your thermodynamic view. But any AIR temperature cannot be considered "surface" in that sense; it is always atmospheric, even if it's 2 cm off the ground.
"if they are using different reference periods for their anomaly calculations then combining them in the graphs as they have; that would be amazing incompetence so I doubt that's what it is."
Actually, that's exactly what it is, and I don't know whom exactly you mean by "they." This graph:
http://tamino.files.wordpress.com/2007/08/global2.jpg
is a compilation by Tamino to show agreement in the trends, and agreement does show, in spite of the different reference periods used for anomaly computation.
The reference period for GISS is 1951-1980. Obviously, the satellite record cannot use this same period. Satellite records use 1979-2000, during which average temps were already higher, making warm anomalies smaller than those seen on GISS. It is not incompetence to represent these on the same graph, so long as we know what we're looking at; in fact, it is a good test of the true trend. Incompetence would lie rather in ignorance of the difference, or in using the graph for interpretations that ignore it.
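The effect of the two reference periods can be checked with a toy series: a later, warmer baseline shifts every anomaly down, but the trend is untouched. (The numbers below are invented; only the arithmetic matters.)

```python
def anomalies(series, start, end):
    """Anomalies relative to the mean over the slice [start:end]."""
    base = sum(series[start:end]) / (end - start)
    return [t - base for t in series]

def trend(series):
    """Least-squares slope per step of an evenly spaced series."""
    n = len(series)
    xm, ym = (n - 1) / 2, sum(series) / n
    num = sum((i - xm) * (y - ym) for i, y in enumerate(series))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

# A hypothetical 1951-2000 series warming at 0.02 C/yr:
temps = [14.0 + 0.02 * t for t in range(50)]
a_giss = anomalies(temps, 0, 30)   # 1951-1980 baseline
a_sat = anomalies(temps, 28, 50)   # 1979-2000 baseline
# Every anomaly is smaller against the later baseline, yet the slopes match.
```

Subtracting a baseline is just a constant shift, so the two anomaly series differ everywhere by the same amount and necessarily share the same trend.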
One last thing: satellite measurements are, in fact, lower troposphere measurements over a sizable layer of atmosphere, and even the T2 channel includes a strong stratospheric influence. The papers cited above give some details on that.
Another way of saying that 1979-2000 was already warmer, though, is to say that 1951-1980 was the coldest stretch in a century. Too bad GISS used this as a baseline, but what can you do about that.
Excellent first paragraph too. Though there is a lot of debate about where warming should be greatest vs where it is the greatest. I'm not sure who is right there. I think between the two things you took care of the problem I had with the anomaly numbers.
Hurrah.
On a scale of 1 - 10 how would we rate the accuracy of this data source? And how reliable does this make any model we try to construct?
Most of the stations are land-based, and the sea-based ones are limited to particular sea routes; this means we have less data about sea temps than land temps... despite the sea being somewhat bigger.
What skew does that put on any resultants?
Coincident with the fall in station numbers, the global mean temperature has apparently risen. (Any connection here?)
Look at a map of the current station locations and then tell me they are providing data that can be seriously used to construct a global model.
Yes, satellites provide additional coverage, but only during their overpasses, which are limited. Yes, their instrumentation is more accurate than land-based stations, but there are too few of them, so their 'correcting' effect on the overall dataset is diluted.
The earth has around 510 million sq km of surface: 150 million land and 360 million water. The vast majority of stations are land-based, and with around 4000 in use that works out to a station roughly every 38,000 sq km. To try to model from that low level of distribution would be rejected by most reasonable people. The fact that most of these stations are actually concentrated in a much smaller area, leaving HUGE areas un-monitored, simply makes the data collected even more worthless for constructing any realistic model.
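For what it's worth, the density figures quoted above follow directly from the stated areas:

```python
# Coverage arithmetic from the figures quoted above (areas in sq km):
total_surface = 510e6
land_surface = 150e6
stations = 4000  # approximate number of land stations in use
land_per_station = land_surface / stations      # 37,500 sq km of land each
surface_per_station = total_surface / stations  # 127,500 sq km of surface each
```

So "roughly every 38,000 sq km" is about right for land alone, and more than three times coarser if the whole surface is counted.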
http://science.nasa.gov/headlines/y2000/ast21jul_1m.htm
http://climate.geog.udel.edu/~climate/html_pages/ghcn_T_stn.html
Note the closing comments:
"The improved temperature record will guide efforts to refine computer models of the world's climate so that the behavior of the models more closely resembles the observed behavior of the atmosphere.
Current models suffer from several shortcomings.
For example, clouds are not well represented by the models. The resolution of current models is too coarse for features as small as clouds, Spencer said. Yet clouds clearly play a crucial role in climate due to their influence on humidity, precipitation and albedo (the percentage of solar energy reflected back into space as light).
"The role of clouds is still regarded as one of the biggest uncertainties in global warming predictions," Spencer said.
The ability of plants to remove carbon dioxide from the atmosphere and the role of soils have only recently been added to the models, and scientists aren't confident yet of how the models portray these factors, Spencer said.
"While we know that vegetation takes up some of the carbon dioxide we generate from burning of fossil fuels, how that sink of carbon will change in the future is still pretty uncertain," Spencer said.
Climate models are also limited by the computing power available.
"The global models would be much better if computers were much faster," Spencer said. "Instead, a lot of approximations are made to make the models simple enough to do climate simulations over the whole globe.
"Unfortunately," Spencer continued, "we know that many of the processes that are crudely represented are quite non-linear, and so have the potential to respond in unexpected ways."
Just to tack another thought onto this: as people generally have an overriding opinion on AGW, do you think the multitude of factors and questions such as the one I've asked above are generally explained to support one's own 'overriding opinion'? And further, how many factors and questions would it take for someone educated in this field to 'change' their overriding opinion?
I openly admit that I plead ignorance before I plead an opinion! The back and forth on this subject is dizzying.
Check out Wikipedia: It is (guess)timated that around 14 terawatts of heat is released from the earth's core through tectonic/vulcanic activities, around the same amount of energy that we currently consume.
Science is about facts, not opinions. Opinions are shaped by the kind of person you are and you will find a lot of people will deny facts because they do not fit 'their' model of reality. That's why we need science, not opinion, not emotional hype, not fear induced reactions to an un-proven hypothesis.
Science enables us to respond rather than react.
Yes, it does have an effect, producing a feedback through the release of CO2, which is a GHG. The argument on CO2 is climate sensitivity: Hansen claims a high sensitivity while Spencer claims a low one. The results thus far indicate Spencer is scientifically but not politically correct.
These questions have been addressed fairly conclusively by the science.
(i) You are correct (Tree) that the earth took many millions of years to sequester atmospheric CO2 in the form of fossil fuels (oil, gas, coal, shale and so on). Around 4000 billion tons of carbon is "stored" in this manner, and it's taken around 600 million years to do this.
In the last 100 years we've released around 500 billion tons of this carbon back into the atmosphere, of which around 200 billion tons has remained there (around 300 billion tons has been absorbed by the oceans and terrestrial environment).
see for example:
http://www.physicalgeography.net/fundamentals/9r.html
(ii) It's very clear that volcanic activity is on a minuscule scale with respect to our massive release of carbon dioxide. It's easy to demonstrate this. If one examines the high-resolution atmospheric CO2 record over the last 1000 years, for example, one can see that atmospheric CO2 levels remain rather constant over the period up to around the mid to late 19th century and then rise massively in response to our emissions. The absence of significant contributions from volcanoes can be observed in the absence of jumps in the atmospheric CO2 record following the truly humongous eruptions of the last 1000 years (e.g. Krakatoa and Tambora). Volcanic activity releases a good bit less than 1% of our current industrial emissions.
see, for example, the high resolution atmospheric CO2 record compiled on page 3 of the IPCC summary for policymakers:
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf (see page 3)
(iii) Heat from tectonic activity is trivial with respect to greenhouse gas warming. This is one of those fallacious "arguments" that is doing the rounds! Its practitioners avoid the three pertinent points. These are:
(i) is there any evidence for enhanced tectonic activity during the period of very large warming (especially last 30-odd years)? After all tectonic activity has been occurring for millions of years. Has it suddenly intensified? Evidence please!
(ii) how can it be that the areas of major tectonic activity show little match to the areas of temperature increase? For example, Iceland is one of the most tectonically active regions on earth. Yet it is one of the few places on earth that has undergone a tiny bit of COOLING during the period of global warming:
e.g. data on the scale and location of Arctic warming over the last 50 years from the Colorado University Arctic research center:
http://arctic.atmos.uiuc.edu/CLIMATESUMMARY/2003/IMAGES/annual.1954-2003.tchange.png
(iii) the heat released by undersea tectonic activity is around that of the geothermal background. This is around 0.1% of the heat energy from solar/greenhouse activity.
e.g. according to Jeff Severinghaus of the Scripps Institution of Oceanography
"... the average heat added from volcanoes to the ocean is of order 0.1 Watt per square meter. But the heat added (or removed) to the ocean from the sun and atmosphere is of order 100 Watt per square meter. So it is very hard for volcanoes to compete."
So it's not just a question of showing that tectonic activity on the ocean bottom is significant with respect to warming (the evidence indicates it isn't), but of showing that this activity has increased in the last several decades to an extent that can have contributed to warming (the evidence indicates that it hasn't)...
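The relative scale of the fluxes quoted from Severinghaus is easy to check:

```python
# Relative scale of the heat fluxes quoted above (both in W per sq m):
volcanic_flux = 0.1       # ocean-floor volcanic/geothermal heating
solar_atmos_flux = 100.0  # heat exchanged with the sun and atmosphere
ratio = volcanic_flux / solar_atmos_flux  # 0.001, i.e. about 0.1 %
```

A thousand-fold gap of this kind is why undersea volcanism would need an implausibly large and recent intensification to matter for the observed warming.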
What highres records are we talking about please?
Paleoproxies?
No, not paleoproxies. That's clear from the data I linked to:
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf (see page 3)
The atmospheric CO2 record is the directly measured atmospheric CO2, either sampled in the atmosphere (from many sites around the world, plus the continuous record from Mauna Loa since 1959) or trapped in bubbles in ice cores extending back many hundreds of thousands of years, at high resolution for the last 1000 years:
e.g. D. M. Etheridge et al (1996) "Natural and anthropogenic changes in atmospheric CO2 over the last 1000 years from air in Antarctic ice and firn" J. Geophys Res. 101, 4115 -4128.
and later extended to 2000 years:
CM Meure et al (2006) "Law Dome CO2, CH4 and N2O ice core records extended to 2000 years BP" Geophys Res. Lett. 33 Art. # L14810
This thread is about temperature records and how reliable /accurate/representative are they.
CO2 levels are assumed to vary only slightly due to effective atmospheric mixing, but this is very different from temperature which has much greater variation. Given the paucity of temperature recording stations I cannot accept that the data used for models is sufficiently representative of the global condition, and thus the resultant of the model is questionable.
Even satellite records are questionable as recently demonstrated by the modification needed to the attitude correction algorithm.
That's good... you agree that atmospheric CO2 levels vary only slightly due to effective atmospheric mixing. You say that this is "assumed", but of course, as we both know [http://www.skepticalscience.com/co2-measurements-uncertainty.htm], this isn't an "assumption" at all; it's a real-world observation [so long as we are careful to make CO2 measurements in isolated locations and average over the relevant timescales for mixing (yearly averages are appropriate)].
There are still a few problems with your post:
(i) Temperature data isn't "used" for the models, of course. So the "resultant" of the models isn't in any way "questionable" in relation to the temperature data, which is an entirely independent data set. Model output (as predictions or hindcasts) might well be compared with the real-world temperature record, but that's another matter altogether.
(ii) Notice that one doesn't need a huge number of "temperature recording stations" to assess changes in global temperature. Remember that the aim is not to determine the Earth's "average temperature" or "global temperature"; these are terms with little meaning (after all, the Earth's average sea-level surface temperature will differ from its average temperature at 200 metres altitude, and so on). The Earth's temporal temperature evolution is determined as a change in the "temperature anomaly", which is the change in temperature at single locations averaged over a very large number of locations. Thus temperature stations at a whole range of locations and altitudes provide valid data sets. Along similar lines, the fact that there is a strong correlation between temperature anomalies over large distances (hundreds of kilometres) means that the whole Earth doesn't need to be minutely sampled. Obviously we couldn't assess absolute global temperatures in this manner, but we're not assessing absolute global temperatures; we're assessing the change in absolute temperature at single locations and averaging these changes.
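The anomaly procedure described here (each station measured against its own baseline, then the changes averaged) can be sketched as follows; the station values are invented purely to show that absolute temperatures drop out:

```python
def station_anomalies(series, base_len=30):
    """Anomaly of each reading relative to the station's own baseline mean."""
    base = sum(series[:base_len]) / base_len
    return [t - base for t in series]

def global_anomaly(stations):
    """Average the per-station anomalies year by year."""
    per_station = [station_anomalies(s) for s in stations]
    return [sum(vals) / len(vals) for vals in zip(*per_station)]

# Three hypothetical stations at very different absolute temperatures
# (tropical, temperate, polar), all sharing the same 0.02 C/yr warming:
years = range(50)
stations = [[base + 0.02 * t for t in years] for base in (26.0, 12.0, -18.0)]
g = global_anomaly(stations)
# The absolute levels (26, 12, -18) never enter the result; only the
# common change survives: g rises by 0.02 per year.
```

This is why stations at wildly different altitudes and latitudes can all contribute to one anomaly series: each one reports only its own change.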
So one needs to be clear about what the surface temperature anomaly means and how it is determined before attempting to trash it! You might read the relevant descriptive papers cited below [*****]. Notice that, in relation to the subject of this thread, the Earth's temperature anomaly progression under the influence of a marked 20th (and especially late-20th) century warming is essentially unchanged if the entire set of urban stations is omitted from the analysis.
[e.g. Hansen et al (cited below) state in an analysis of urban heat effects that: “These examples illustrate that urban effects on temperature in specific cases can dominate over real climate trends. Fortunately, there are far more rural stations than urban stations, so it is not essential to employ the urban data in analysis of global temperature change.”]
So the "urban heat island effect" is somewhat of a red herring (or a stalking horse) in the context of global temperature anomaly measures.
[*****] Hansen et al (1999) GISS analysis of surface temperature changes J. Geophysical Res (Atmos) 104, 30997-31022
or (for the Hadley analyses):
Rayner NA et al (2003) Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century J. Geophys. Res. (Atmos) 108 (D14): Art. No. 4407 JUL 17 2003 etc. etc.
(iii) Of course the proof is in the pudding. We've observed a large warming, especially of the high Northern latitudes (as predicted by models) with large attenuation of Arctic sea ice....we've observed large scale retreat of mountain glaciers....we've observed increased concentrations of atmospheric water vapor in response to atmospheric warming much as predicted ......we've observed widespread increases in the sea surface temperature...and so on.
In fact it's possible to leave out direct surface temperature measures and construct a completely independent temperature scale by analysis of the record of mountain glacier retreat:
e.g. J. Oerlemans (2005) "Extracting a climate signal from 169 glacier records" Science 308, 675-677.
And as John Cook outlined in his top post, there are many other indicators of rising surface temperatures that are independent of direct temperature measures.
Notice that one doesn't need a huge number of "temperature recording stations" to assess changes in global temperature.
Now, I am glad she mentions this. There are enough rural sites, certainly in the US, to give complete coverage, and these rural sites show NO significant warming since 1900. Because of this problem (no warming), Hansen/GISS use over 1100 US weather stations, many urban, so they can then manipulate the raw data to push their cause.
http://www.youtube.com/watch?v=LcsvaCPYgcI
http://www.climateaudit.org/?p=1859
http://www.climateaudit.org/wp-content/uploads/2007/08/peters27.gif
Chris says,
So one needs to be clear about what the surface temperature anomaly means and how this is determined before attempting to trash it! [you might read the relevant descriptive papers here [*****]. Notice that in relation to the subject of this thread, the Earth's temperature anomaly progression under the influence of a marked 20th (and especially late-20th) century warming is essentially UNCHANGED if the ENTIRE SET OF URBAN STATIONS IS OMITTED from the analysis.
Fortunately, there are FAR MORE RURAL stations than urban stations, so it is NOT ESSENTIAL to employ the urban data in analysis of global temperature change.”]
So the "urban heat island effect" is somewhat of a red herring (or a stalking horse) in the context of global temperature anomaly measures.
I say perhaps Chris should tell Hansen that.
Perhaps Chris can inform us why Hansen uses station pairs instead of using the pristine rural station data alone that is readily available over the entire globe.
Station pairs disguise the actual temperature by suggesting, in flawed studies, that there is little UHI effect; consider London's 9 degree C difference, and that of every other major city and town on the planet.
Google population growth and the UHI effect; 9 degrees C is NOT high.
Since I am quoting Hansen directly from one of his papers, I don't really need to tell him anything.
Urban areas are generally warmer than their surrounds. Therefore one either eliminates urban areas from the record to establish the Earth's surface temperature evolution, or one corrects the data from urban stations by reference to nearby rural stations.
However one does this (leaving out the urban stations or correcting them), the Earth's surface temperature anomaly is pretty much the same.
You would benefit from reading John Cook's article on urban heat island effect.
Herein lies the crux of the problem. So, if I'm not mistaken, this is the AGW hypothesis:
1. The world has been warming for a century, and this warming is beyond any cyclical variation we have seen over the last 1000 or more years, and beyond the range of what we might expect from natural climate variations.
2. Almost all of the warming in the second half of the 20th century, perhaps a half a degree Celsius, is due to man-made greenhouse gases, particularly CO2
3. In the next 100 years, CO2 produced by man will cause a lot more warming, from as low as three degrees C to as high as 8 or 10 degrees C.
4. Positive feedbacks in the climate, like increased humidity, will act to triple the warming from CO2, leading to these higher forecasts and perhaps even a tipping point into climatic disaster.
5. The bad effects of warming greatly outweigh the positive effects, and we are already seeing the front end of these bad effects today (polar bears dying, glaciers melting, etc)
6. These bad effects, or even a small risk of them, easily justify massive intervention today in reducing economic activity and greenhouse gas production. [1] http://www.conservapedia.com/AGW_hypothesis
In order for this to be proven true at this point in time, the surface temperature record needs to be accurate, because the other forms of temperature data collection have not been around long enough to be relied on. We simply do not have upper-atmosphere temperature measurements for long enough to see any long-term trends, let alone trends that are not expected. This is also true of the surface temperature record, although it is slightly older.
Let's put it into perspective, if we scaled earth's total existence in time to a period of 1 year, the 50-100 years of data collection we now have would still be a fraction of a second on that time scale.
So... claiming that you "Don't need" the temperature record is simply an act of hand waving by those too stubborn to admit defeat. At least for now, there is more work to be done.
I'm sure nobody would suggest that it doesn't matter if we "lose the validity of the surface temperature record". I've had a look through the thread and haven't found any post which claims that, let alone "claiming that you "Don't need" the temperature record"...that would be an odd claim indeed!
Notice that in order to take action in response to real world observations we don't need "proof". Proof is a mathematical/philosophical concept. What we need is strong evidence.
So the pertinent question is: "Is there strong evidence that the temperature record is robust, to the extent that we can reliably assess the Earth's temperature response in relation to our understanding/predictions of massive enhancement of greenhouse gas concentrations?"
The answer is yes I suspect we would agree for some of the reasons already outlined on this thread:
(i) The record is independently assessed by three different organizations. Although there are differences in data compilation/analysis methods and some differences that relate to the nature of covering sparsely-monitored regions, the different compilations yield a consistent interpretation of the surface temperature evolution over the last 100 and a bit years.
(ii) The surface record seems not to have significant contamination from the UHI, since (a) a number of direct analyses indicate that the UHI isn't significant [comparison of temperatures on windy days (with rapid excess-heat dispersal) vs. calm days, and other types of analysis, for example as described here: http://www.skepticalscience.com/Does-Urban-Heat-Island-effect-add-to-the-global-warming-trend.html, or in John Cook's introductory summary on this thread]; (b) one can remove all of the urban records from the analysis, and the temperature profile is pretty much unaffected; (c) those regions showing the largest warming are far, far away from urban centres, and generally there is no correlation between local temperature evolution and local urban density [see for example: http://www.skepticalscience.com/urban-heat-island-effect.htm]
(iii) completely independent records of the consequences of a warming Earth are consistent with the surface record [these include high latitude ice recession; independent temperature scales constructed from the record of high altitude glacier recession; tropospheric warming; enhanced tropospheric absolute humidity and so on].
So the evidence supports the interpretation that the temperature record is robust.
Your point about scaling the record against the Earth's "total existence" isn't an important comparison with respect to the consequences of massive enhancement of atmospheric greenhouse gas concentrations at this particular time in the Earth's long history. In any case, we have a huge amount of information about temperatures in the recent and much more distant geological past. This also informs our understanding and provides strong evidence in support of the expected surface warming response to enhanced greenhouse gas concentrations. For example, there is a good correlation between atmospheric CO2 concentrations and the Earth's "temperature" in proxy CO2 and proxy temperature data stretching back right through the Phanerozoic.
So in general, the paleorecord reinforces the data from our contemporary temperature record and all of the vast amount of information from understanding of basic atmospheric physics, to the spectroscopy of greenhouse gases, ice core records and so on and on, that informs us on the consequences of massive enhancement of greenhouse gas concentrations.
That’s not to say that there isn’t much more work to be done!
Another question is what science says about the type of average used to calculate the present state of the earth's "fever" (to borrow your spokesman Al Gore's expression for it).
Do we do RMS averages or a simple arithmetic mean, or is a modal or median average most appropriate in determining the state of this "fever"?
Maybe we can mix them all up and wave some abracadabra ("correcting" the data) over it, and voila: the current state of the earth's fever is determined to 5 decimal places.
How does science deal with migrating weather stations? If I decide to place 500 weather stations in Arizona next year and call them "official" will the USA develop a strong "global warming" signal or just Arizona? or doesn't it matter at all? Conversely if I fund 10,000 weather stations in Siberia can I cool the planet's present fever?
Unfortunately most of the weather stations in Siberia have been shut down and if you look up the current distribution of weather stations globally they are distributed very unevenly...the highest density being in the USA. Many other parts of the world are not 'thermally' represented so any global mathematical average (however it is derived) is going to be wrong.
Satellite measurement has been around now for only 30 years so whilst we have a more even distribution of data (not necessarily more accurate) the data series is too short for any predictive climate modelling.
Interestingly, the current series of satellite temperature data shows a clear cooling trend since 2002 despite increasing CO2 levels.
When some outfit like Hadley or GISS offers an estimate of the global mean temp for a given year, do they present along with it an error estimate? e.g. In 2008 the average temp was 25 degrees C + or - 5 degrees. Seems like they would have to, given all that goes into coming up with an estimate. How do they assess the range of error, and how much confidence can we place on such estimates?
cordially
Frank
Another point is this, why do skeptics refuse to pillory Dr Spencer (a skeptic) for his errors, yet are quick to attack any apparent errors made by climatologists with CRU or GISS? That smacks of hypocrisy to me!