  LongeCity
              Advocacy & Research for Unlimited Lifespans





[Video] States of Fear: Science or Politics


22 replies to this topic

#1 rwac

  • Member
  • 4,764 posts
  • 61
  • Location:Dimension X

Posted 24 January 2011 - 03:28 AM


Name: States of Fear: Science or Politics
Category: Policy, Economics, Philosophy
Date Added: 24 January 2011 - 04:28 AM
Submitter: rwac
Short Description: Michael Crichton

Michael discusses Chernobyl, the origins of the novel State of Fear, the language of ecological scare-tactics, the ongoing 150-year trend toward energy decarbonization, and the concept of "information invalids": people sickened by bad information.

The Independent Institute
November 15, 2005

View Video

#2 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 24 January 2011 - 05:12 AM

I just watched this video in its entirety, and I'm not very impressed with his message. He cites a number of cherry-picked instances of grossly inaccurate predictions and half-assed attempts at the management of complex systems in order to suggest that our understanding of complex systems is so poor that we should... what? Do nothing? Well, only a certain kind of nothing. There's no suggestion that we should slow down in the least in our mad rush to exploit the riches of the world around us. There only seems to be an implication that we should cease any attempt to prevent damage to our planet, because we are too ignorant to understand its complexity. We're certainly not too ignorant to keep on doing whatever we want if we can make a buck at it, though.

He takes particular aim at environmental fear-mongering, and that's a reasonable target. I would have been a lot more impressed if Crichton had pointed out that not everyone has been wrong about the environment. There were plenty of woodland management experts who had known for years that putting out all wildfires is a bad idea. Crichton gave the impression that the big Yellowstone burn happened because we just didn't understand that. It happened because we didn't listen to the people who knew better. The thrust of his talk seems to be that we should continue to do whatever we want and that we shouldn't listen to anyone whose job it is to understand the complexity of the environment, because, gosh, the environment's just too dang complicated for anyone to understand.

Crichton was a guy who could spin a hell of an entertaining yarn, and I've devoured more than one of them. I'm afraid, however, that he might have known enough science to be dangerous. He presented the interesting concept of "information invalids": people sickened by bad information. So why did he put out more of it? This might be a good lecture to show to Earth First! or other crunchy granola types, but I'm afraid it will get a lot more traction with AGW denialists, a camp with which Crichton was famously sympathetic.

#3 firespin

  • Guest
  • 116 posts
  • 50
  • Location:The Future

Posted 29 January 2011 - 11:23 AM

Crichton gave the impression that the big Yellowstone burn happened because we just didn't understand that. It happened because we didn't listen to the people who knew better.

Niner, how and by whom is it decided that someone "knows better" about a complex situation? Historically, everyone listening to people who supposedly "knew better" about complex situations has produced far more negative and destructive results than positive ones around the world, creating cults, dictators, socialism, Nazi eugenics, pseudoscience, pathological science, government abuse, and even genocide in certain cases. The bad results far outnumber any good ones so far.

People should be very critical of anyone claiming to know better, because most people, including professionals, do not.

Edited by firespin, 29 January 2011 - 11:37 AM.


sponsored ad

  • Advert

#4 maxwatt

  • Member, Moderator LeadNavigator
  • 4,952 posts
  • 1,626
  • Location:New York

Posted 29 January 2011 - 03:10 PM

...He takes particular aim at environmental fear-mongering, and that's a reasonable target. ...


Speaking of environmental fear-mongering:
Hot: Living Through the Next Fifty Years on Earth by Mark Hertsgaard.

Scares the shite out of me. He interviews many credible scientists, and some of his nuggets: the predicted 3-foot sea level rise at current trends is too conservative; we'll probably see 3 feet in 20 years. Or in twenty years the climate of Chicago will resemble that of Houston. That's just from a quick skim; there's more. The problem I have is that the science seems to be sound.

...
People should be very critical of anyone claiming to know better, because most people, including professionals, do not.


I prefer the "know betters" to the "know nothings".

#5 rwac

  • Topic Starter
  • Member
  • 4,764 posts
  • 61
  • Location:Dimension X

Posted 29 January 2011 - 03:33 PM

He interviews many credible scientists, and some of his nuggets: the predicted 3-foot sea level rise at current trends is too conservative; we'll probably see 3 feet in 20 years.



Right now, the sea level rise is a few mm per year. He's talking about a rise of a few cm per year. We have seen no signs of such a large sea level rise.
Current trends: http://tidesandcurre...trendstable.htm
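
A quick back-of-the-envelope check makes the gap concrete (a sketch; the ~3 mm/yr figure is the commonly cited recent global average, taken here as an assumption):

```python
# Rough comparison of the observed sea level trend with the "3 feet in
# 20 years" claim. The 3 mm/yr observed rate is an assumed round number.
MM_PER_FOOT = 304.8

observed_mm_per_yr = 3.0                      # approximate current trend
claimed_mm_per_yr = 3 * MM_PER_FOOT / 20      # "3 feet in 20 years"

print(f"observed trend: {observed_mm_per_yr:.1f} mm/yr")
print(f"claimed trend : {claimed_mm_per_yr:.1f} mm/yr")   # ~45.7 mm/yr
print(f"ratio         : {claimed_mm_per_yr / observed_mm_per_yr:.0f}x")
```

The claim implies roughly a fifteen-fold jump over the present rate.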

This fits perfectly into the category of scare mongering.

#6 maxwatt

  • Member, Moderator LeadNavigator
  • 4,952 posts
  • 1,626
  • Location:New York

Posted 29 January 2011 - 05:30 PM

He interviews many credible scientists, and some of his nuggets: the predicted 3-foot sea level rise at current trends is too conservative; we'll probably see 3 feet in 20 years.



Right now, the sea level rise is a few mm per year. He's talking about a rise of a few cm per year. We have seen no signs of such a large sea level rise.
Current trends: http://tidesandcurre...trendstable.htm

This fits perfectly into the category of scare mongering.

You are correct, unless the rise is accelerating, which does appear to be the case: Greenland glacier melt is accelerating. The question, of course, is the rate of increase.


The IPCC synthesis reports offer conservative projections of sea level increase based on assumptions about future behavior of ice sheets and glaciers, leading to estimates of sea level roughly following a linear upward trend mimicking that of recent decades. In point of fact, observed sea level rise is already above IPCC projections and strongly hints at acceleration while at the same time it appears the mass balance of continental ice envisioned by the IPCC is overly optimistic (Rahmstorf, Nature 2010).

A meter by 2100 seems, from the data alluded to by Rahmstorf, highly likely, but recent measurements showing acceleration make three feet in 20 years not out of the question. The Dutch are taking this very seriously; they have to.
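
To make "acceleration" concrete, here is a minimal sketch of how one tests for it: fit both a linear and a quadratic trend and look at the quadratic term. The data below are synthetic, with invented coefficients; this illustrates the method, not Rahmstorf's analysis.

```python
import numpy as np

# Synthetic tide-gauge-style record: linear rise + small acceleration + noise.
rng = np.random.default_rng(0)
years = np.arange(1900, 2011)
t = years - years[0]
level_mm = 1.8 * t + 0.01 * t**2 + rng.normal(0, 8, t.size)

lin = np.polyfit(t, level_mm, 1)    # [rate, offset]
quad = np.polyfit(t, level_mm, 2)   # [half-acceleration, rate, offset]

print(f"linear rate       : {lin[0]:.2f} mm/yr")
print(f"acceleration term : {2 * quad[0]:.3f} mm/yr^2")
print(f"residual std, linear vs quadratic: "
      f"{(level_mm - np.polyval(lin, t)).std():.1f} vs "
      f"{(level_mm - np.polyval(quad, t)).std():.1f} mm")
```

If the quadratic term is significantly nonzero, extrapolating the recent linear rate understates the rise ahead.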

#7 rwac

  • Topic Starter
  • Member
  • 4,764 posts
  • 61
  • Location:Dimension X

Posted 29 January 2011 - 09:07 PM

The IPCC synthesis reports offer conservative projections of sea level increase based on assumptions about future behavior of ice sheets and glaciers, leading to estimates of sea level roughly following a linear upward trend mimicking that of recent decades. In point of fact, observed sea level rise is already above IPCC projections and strongly hints at acceleration while at the same time it appears the mass balance of continental ice envisioned by the IPCC is overly optimistic.


The issue is that this depends directly on the IPCC projections, which are in turn based on computer models. These computer models are entirely based on empirically determined constants: basically, curve fitting. For instance, there is some argument about whether clouds are a negative or a positive feedback. These models also have not been rigorously verified, and we have no reason to believe that they are capable of predicting future temperatures.

The predictions that these models make are so vague that it's not even possible to say whether these models are accurate. How can we consider this good science when it's impossible to verify a model, especially when the worst-case scenario is always hyped to scare everyone?

#8 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 29 January 2011 - 10:24 PM

Crichton gave the impression that the big Yellowstone burn happened because we just didn't understand that. It happened because we didn't listen to the people who knew better.

Niner, how and by whom is it decided that someone "knows better" about a complex situation?

You look to science. You look at credentials. Do they have the requisite training to fully understand what they are talking about? Do they have a paper trail? (papers, patents, presentations, society memberships, awards, grants...) What do their peers think of their work? I wouldn't ask a climate scientist to build me a bridge that would be guaranteed not to collapse, and I wouldn't ask an engineer, a physicist, or a weatherman for a definitive answer on climate science.

Historically, everyone listening to people who supposedly "knew better" about complex situations has produced far more negative and destructive results than positive ones around the world, creating cults, dictators, socialism, Nazi eugenics, pseudoscience, pathological science, government abuse, and even genocide in certain cases. The bad results far outnumber any good ones so far.

Everyone? C'mon. I'm not talking about people who simply stand up and say "trust me". I'm talking about people who have appropriate training and experience. See above. This has nothing to do with Adolf Hitler or Jim Jones.

People should be very critical of anyone claiming to know better, because most people, including professionals, do not.

Who would you rather have replace your heart valve, a cardiac surgeon or a WWF wrestler?

#9 Connor MacLeod

  • Guest
  • 619 posts
  • 46

Posted 29 January 2011 - 11:17 PM

The predictions that these models make are so vague that it's not even possible to say whether these models are accurate. How can we consider this good science when it's impossible to verify a model...


I have heard almost those same exact words during discussions I have participated in with climate scientists. Even if one is willing to assume the correctness of the models (a very big assumption), the problem of estimating the statistical variability of the predictions coming out of the more physically realistic models is incredibly difficult and, as far as I can tell, nowhere close to being solved.
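
A toy sketch of why that's hard, with invented numbers: when each run costs a week, you can only afford a handful of ensemble members, and the estimate of the prediction spread is itself wildly uncertain.

```python
import numpy as np

# Suppose the 'true' spread of a model's 2100 warming prediction across
# plausible parameter settings is 1.0 C (an assumption for illustration).
# How well do tiny ensembles pin that spread down?
rng = np.random.default_rng(1)
true_spread_C = 1.0

for n_runs in (5, 20, 100):
    estimates = [rng.normal(0, true_spread_C, n_runs).std(ddof=1)
                 for _ in range(10_000)]
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"{n_runs:3d} affordable runs: estimated spread ranges "
          f"[{lo:.2f}, {hi:.2f}] C across repeat experiments")
```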

#10 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 29 January 2011 - 11:34 PM

The predictions that these models make are so vague that it's not even possible to say whether these models are accurate. How can we consider this good science when it's impossible to verify a model...

I have heard almost those same exact words during discussions I have participated in with climate scientists. Even if one is willing to assume the correctness of the models (a very big assumption), the problem of estimating the statistical variability of the predictions coming out of the more physically realistic models is incredibly difficult and, as far as I can tell, nowhere close to being solved.

Are they able to use inputs from the distant past to predict the climate of the less-distant past? If they can do that, and they have a reasonable enough data stream, they should at least be able to say something about the expected range of variation, within some parameter space. I can't believe that it's impossible to verify their models in any way. Do these models not have some analog of a training and test set? How do you even build a model without reference to actual data?

#11 Soma

  • Guest
  • 341 posts
  • 105

Posted 30 January 2011 - 01:23 AM

Classic straw-man argumentation.


There was great fear that thousands would die from Chernobyl. They did not. Therefore the current fears concerning the environment are unfounded.

There was great fear concerning Y2K. It turned out to be unfounded. Therefore the current concerns about environmental degradation are unfounded.


I would have thought Crichton could have done a little better than this. Nonsense.

#12 rwac

  • Topic Starter
  • Member
  • 4,764 posts
  • 61
  • Location:Dimension X

Posted 30 January 2011 - 02:06 AM

Classic straw-man argumentation.


It's not a straw-man as much as it is a pattern of fear-mongering.

#13 maxwatt

  • Member, Moderator LeadNavigator
  • 4,952 posts
  • 1,626
  • Location:New York

Posted 30 January 2011 - 05:06 AM

The IPCC synthesis reports offer conservative projections of sea level increase based on assumptions about future behavior of ice sheets and glaciers, leading to estimates of sea level roughly following a linear upward trend mimicking that of recent decades. In point of fact, observed sea level rise is already above IPCC projections and strongly hints at acceleration while at the same time it appears the mass balance of continental ice envisioned by the IPCC is overly optimistic.


The issue is that this depends directly on the IPCC projections, which are in turn based on computer models. These computer models are entirely based on empirically determined constants: basically, curve fitting. For instance, there is some argument about whether clouds are a negative or a positive feedback. These models also have not been rigorously verified, and we have no reason to believe that they are capable of predicting future temperatures.

The predictions that these models make are so vague that it's not even possible to say whether these models are accurate. How can we consider this good science when it's impossible to verify a model, especially when the worst-case scenario is always hyped to scare everyone?



The predictions that these models make are so vague that it's not even possible to say whether these models are accurate. How can we consider this good science when it's impossible to verify a model...

I have heard almost those same exact words during discussions I have participated in with climate scientists. Even if one is willing to assume the correctness of the models (a very big assumption), the problem of estimating the statistical variability of the predictions coming out of the more physically realistic models is incredibly difficult and, as far as I can tell, nowhere close to being solved.

Are they able to use inputs from the distant past to predict the climate of the less-distant past? If they can do that, and they have a reasonable enough data stream, they should at least be able to say something about the expected range of variation, within some parameter space. I can't believe that it's impossible to verify their models in any way. Do these models not have some analog of a training and test set? How do you even build a model without reference to actual data?


Let me put to rest the shibboleth that climate models are unreliable. While there are uncertainties in climate models, they successfully reproduce the past and have made predictions that were subsequently confirmed by observations. Models reproduce the record since 1900 with great accuracy. Climate models have to be tested to find out if they work, but we can't wait 30 years to see whether a model is any good; instead, models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we can expect it to predict with reasonable certainty what might happen in the future.

So climate models are first tested by 'Hindcasting'. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. Nothing else could account for the rise in temperatures over the last century.
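
The logic is easy to demonstrate in miniature. The sketch below is a toy regression on made-up forcing series, not a GCM: fit on an early window, hindcast a later window, and compare the error with and without a CO2 term.

```python
import numpy as np

# Toy attribution exercise; all series and coefficients are invented.
rng = np.random.default_rng(2)
n = 111                                          # 'years' 1900-2010
solar = rng.normal(0, 0.1, n).cumsum() * 0.1     # slow random drift
volcanic = -np.abs(rng.normal(0, 0.05, n))       # occasional cooling
co2 = np.linspace(0, 1, n) ** 2                  # accelerating forcing
temp = 0.3 * solar + 0.5 * volcanic + 0.8 * co2 + rng.normal(0, 0.05, n)

train, test = slice(0, 70), slice(70, n)

def hindcast_rmse(*predictors):
    """Fit on the early window, report error on the later window."""
    X = np.column_stack([np.ones(n), *predictors])
    coef, *_ = np.linalg.lstsq(X[train], temp[train], rcond=None)
    return float(np.sqrt(((temp[test] - X[test] @ coef) ** 2).mean()))

print("hindcast RMSE with CO2   :", round(hindcast_rmse(solar, volcanic, co2), 3))
print("hindcast RMSE without CO2:", round(hindcast_rmse(solar, volcanic), 3))
```

In this toy setup, dropping the CO2 term leaves the later warming unexplained, which is the shape of the argument from the instrumental record.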

Models have also made accurate predictions. For example, the eruption of Mt. Pinatubo allowed modelers to test the accuracy of their models by feeding in the data about the eruption. They successfully predicted the climatic response after the eruption. Models have also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

If anything, the IPCC models are overly optimistic. We are on a path to increase temperatures by 5°C, which would trigger something catastrophic: a positive feedback effect. The geological record contains evidence of sudden, sharp increases in warming when this tipping point is passed. It is likely to happen again, because such warming would trigger the release of some 3,000 Gt of methane from Arctic and deep-sea clathrates. This is not, to my knowledge, included in the IPCC models. We have empirical data from the geologic record that shows us what this looks like, and it is far worse than even the most dire forecasts we are now looking at. Ironically, it is, at this point, the most likely scenario.

#14 Connor MacLeod

  • Guest
  • 619 posts
  • 46

Posted 30 January 2011 - 09:33 AM

So climate models are first tested by 'Hindcasting'. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong.


Do you know what over-fitting is? Do you see how that might happen over 15 years, with many researchers using the period from 1900 to the present to test the accuracy of their models?
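
For the record, the effect is easy to exhibit. A sketch under assumed numbers: give fifty "research groups" models with zero real skill, score them all against the same shared evaluation record, and the best of the fifty looks skillful by selection alone.

```python
import numpy as np

# Community-level selection effect with skill-free 'models'.
rng = np.random.default_rng(3)
n_groups, n_years = 50, 30
record = rng.normal(0, 1, n_years)     # shared evaluation period

scores = [np.corrcoef(rng.normal(0, 1, n_years), record)[0, 1]
          for _ in range(n_groups)]    # each 'model' is pure noise

print(f"typical correlation of one noise model: {np.mean(scores):+.2f}")
print(f"best of {n_groups} on the same record      : {max(scores):+.2f}")
```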

#15 Connor MacLeod

  • Guest
  • 619 posts
  • 46

Posted 30 January 2011 - 12:16 PM

The predictions that these models make are so vague that it's not even possible to say whether these models are accurate. How can we consider this good science when it's impossible to verify a model...

I have heard almost those same exact words during discussions I have participated in with climate scientists. Even if one is willing to assume the correctness of the models (a very big assumption), the problem of estimating the statistical variability of the predictions coming out of the more physically realistic models is incredibly difficult and, as far as I can tell, nowhere close to being solved.

Are they able to use inputs from the distant past to predict the climate of the less-distant past? If they can do that, and they have a reasonable enough data stream, they should at least be able to say something about the expected range of variation, within some parameter space. I can't believe that it's impossible to verify their models in any way. Do these models not have some analog of a training and test set? How do you even build a model without reference to actual data?


Some of the more physically realistic models require a week of computation on a really *fast* computer to get the results for a *single* setting of the parameters. Given that the parameter space can have over 100 dimensions, and that the model output (the solution of a massive PDE) is highly non-linear in the parameters, it is currently computationally impossible to determine whether or not a given set of parameters is really the best fit to the historical data. What this means is that there may be values of the parameters that would give a better fit to the historical data but also predict a less significant increase in global temperatures, or even no increase at all.
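
The combinatorics behind that claim, using the numbers above (one week per run, 100-plus dimensions), as a back-of-the-envelope sketch:

```python
# Cost of even a crude exhaustive search of the parameter space.
values_per_dim = 3           # a minimal 3-point grid per parameter
dims = 100                   # dimensions, per the post above
runs = values_per_dim ** dims
machine_years = runs / 52 / 1_000_000   # one week per run, a million machines

print(f"grid points                  : {runs:.2e}")      # ~5.2e47
print(f"years with a million machines: {machine_years:.2e}")
```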

Edited by Connor MacLeod, 30 January 2011 - 12:21 PM.


#16 maxwatt

  • Member, Moderator LeadNavigator
  • 4,952 posts
  • 1,626
  • Location:New York

Posted 30 January 2011 - 01:17 PM

So climate models are first tested by 'Hindcasting'. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong.


Do you know what over-fitting is? Do you see how that might happen over 15 years, with many researchers using the period from 1900 to the present to test the accuracy of their models?


Of course I know what overfitting is, and so do the modelers. They are sophisticated enough not to use the same observations to fit parameter values, precisely to avoid the risk of overfitting. When I was involved in using back-propagating neural networks to predict Japanese equity prices 20 years ago, we called it overtraining, and we assiduously avoided it. Similarly, limits on predictor selection are imposed in climate models to reduce overfitting.
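
For anyone unfamiliar with the term, a minimal sketch of the overtraining picture on a toy curve-fitting problem (synthetic data; not a climate model): in-sample error keeps falling as complexity grows, while held-out error typically bottoms out and then worsens.

```python
import numpy as np

# Fit polynomials of growing degree to noisy data; evaluate on points
# held out from the fit. Synthetic data for illustration only.
rng = np.random.default_rng(4)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)
fit_idx, hold_idx = np.arange(0, 40, 2), np.arange(1, 40, 2)  # disjoint halves

for degree in (1, 3, 9):
    coef = np.polyfit(x[fit_idx], y[fit_idx], degree)
    fit_rmse = np.sqrt(np.mean((y[fit_idx] - np.polyval(coef, x[fit_idx])) ** 2))
    hold_rmse = np.sqrt(np.mean((y[hold_idx] - np.polyval(coef, x[hold_idx])) ** 2))
    print(f"degree {degree}: fit RMSE {fit_rmse:.2f}, holdout RMSE {hold_rmse:.2f}")
```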

...
Some of the more physically realistic models require a week of computation on a really *fast* computer to get the results for a *single* setting of the parameters. Given that the parameter space can have over 100 dimensions, and that the model output (the solution of a massive PDE) is highly non-linear in the parameters, it is currently computationally impossible to determine whether or not a given set of parameters is really the best fit to the historical data. What this means is that there may be values of the parameters that would give a better fit to the historical data but also predict a less significant increase in global temperatures, or even no increase at all.

Possible? Maybe, but damn unlikely. Computational power has advanced considerably since that claim was first made. Parallel computing networks have, to an extent, obviated the need for supercomputers to perform these calculations in a reasonable time. The modelers are quite aware of this limit, and work within it.

I don't understand what is with the right-wing denial of science. Religious zealots attacking evolution is one thing, but it seems to have rubbed off into other areas. Attacking the messenger is not going to change the message.

#17 Connor MacLeod

  • Guest
  • 619 posts
  • 46

Posted 30 January 2011 - 07:30 PM

Of course I know what overfitting is, and so do the modelers. They are sophisticated enough not to use the same observations to fit parameter values, precisely to avoid the risk of overfitting. When I was involved in using back-propagating neural networks to predict Japanese equity prices 20 years ago, we called it overtraining, and we assiduously avoided it. Similarly, limits on predictor selection are imposed in climate models to reduce overfitting.

No, you are not seeing it. Over-fitting can occur even with each individual researcher fitting their models on a different period than the one they use for evaluation. I'm not going to explain it; you should be able to figure it out.

...
Some of the more physically realistic models require a week of computation on a really *fast* computer to get the results for a *single* setting of the parameters. Given that the parameter space can have over 100 dimensions, and that the model output (the solution of a massive PDE) is highly non-linear in the parameters, it is currently computationally impossible to determine whether or not a given set of parameters is really the best fit to the historical data. What this means is that there may be values of the parameters that would give a better fit to the historical data but also predict a less significant increase in global temperatures, or even no increase at all.

Possible? Maybe, but damn unlikely. Computational power has advanced considerably since that claim was first made. Parallel computing networks have, to an extent, obviated the need for supercomputers to perform these calculations in a reasonable time. The modelers are quite aware of this limit, and work within it.

Damn unlikely? Can you quantify what "damn unlikely" means? Can anybody? No. And that's a problem. When I said it takes a week to solve the PDEs for a single setting, I was talking about today's technology.

#18 maxwatt

  • Member, Moderator LeadNavigator
  • 4,952 posts
  • 1,626
  • Location:New York

Posted 30 January 2011 - 07:44 PM

Of course I know what overfitting is, and so do the modelers. They are sophisticated enough not to use the same observations to fit parameter values, precisely to avoid the risk of overfitting. When I was involved in using back-propagating neural networks to predict Japanese equity prices 20 years ago, we called it overtraining, and we assiduously avoided it. Similarly, limits on predictor selection are imposed in climate models to reduce overfitting.

No, you are not seeing it. Over-fitting can occur even with each individual researcher fitting their models on a different period than the one they use for evaluation. I'm not going to explain it; you should be able to figure it out.

So who should I believe? You or my lying eyes? ;)

...
Some of the more physically realistic models require a week of computation on a really *fast* computer to get the results for a *single* setting of the parameters. Given that the parameter space can have over 100 dimensions, and that the model output (the solution of a massive PDE) is highly non-linear in the parameters, it is currently computationally impossible to determine whether or not a given set of parameters is really the best fit to the historical data. What this means is that there may be values of the parameters that would give a better fit to the historical data but also predict a less significant increase in global temperatures, or even no increase at all.

Possible? Maybe, but damn unlikely. Computational power has advanced considerably since that claim was first made. Parallel computing networks have, to an extent, obviated the need for supercomputers to perform these calculations in a reasonable time. The modelers are quite aware of this limit, and work within it.

Damn unlikely? Can you quantify what "damn unlikely" means? Can anybody? No. And that's a problem. When I said it takes a week to solve the PDEs for a single setting, I was talking about today's technology.


Since there is some uncertainty, we should ignore the risk clearly demonstrated by a wide range of models? No model has been brought forth that predicts little or no warming and can also account for past climate data.
I suggest the American right stop ignoring the warnings of 85% of the world's scientists and 99.9% of the bona fide climate scientists, and deal with a very real risk.

#19 firespin

  • Guest
  • 116 posts
  • 50
  • Location:The Future

Posted 30 January 2011 - 08:01 PM

Crichton gave the impression that the big Yellowstone burn happened because we just didn't understand that. It happened because we didn't listen to the people who knew better.

Niner, how and by whom is it decided that someone "knows better" about a complex situation?

You look to science. You look at credentials. Do they have the requisite training to fully understand what they are talking about? Do they have a paper trail? (papers, patents, presentations, society memberships, awards, grants...) What do their peers think of their work? I wouldn't ask a climate scientist to build me a bridge that would be guaranteed not to collapse, and I wouldn't ask an engineer, a physicist, or a weatherman for a definitive answer on climate science.


When our present science actually becomes better, and humans become better at weeding out bias and agendas, then I'll trust it more. The science we have is extremely young; things we believe we know now often become obsolete in 10-20 years. I prefer to look at good proof and evidence for a claim, not just a "science degree" or credentials. Take global warming: there is no good proof. If the earth is warming, where is the proof that humans are a significant cause? The earth cooled and warmed many times in the past, and it has historically been warmer than it is now several times. How is a warmer earth (if it is really becoming warmer), which has happened before, a bad thing? If the earth is really becoming warmer, it may just be another natural cycle.

There are various groups claiming global warming is increasing, while at the same time the news is telling me my area is receiving back-to-back snowstorms and I experienced snow up to my knees. :laugh:
We know Europe recently experienced its coldest winter. Also, did you know the National Climatic Data Center said last winter was the coldest in the USA in 25 years?
http://www.ncdc.noaa...national/2010/2

This global warming ideology does not match reality.


Everyone? C'mon. I'm not talking about people who simply stand up and say "trust me". I'm talking about people who have appropriate training and experience. See above. This has nothing to do with Adolf Hitler or Jim Jones.

Even Adolf Hitler and Jim Jones were once thought by their followers to be good, qualified men. We learned they were not only AFTER the horrible events they caused.

People should be very critical of anyone claiming to know better, because most people, including professionals, do not.

Who would you rather have replace your heart valve, a cardiac surgeon or a WWF wrestler?

A cardiac surgeon, but only if there is good proof and evidence that the surgery they want to perform on me can work. I also want to know how many actual patients they have performed successful surgeries on. I am not going to believe them immediately just because they happen to have a degree or some written paper. I want some real proof that they are right; otherwise they might be a bad or crazy surgeon who happened to gain a degree, and they could kill me.

Edited by firespin, 30 January 2011 - 08:08 PM.


#20 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 31 January 2011 - 01:08 AM

The predictions that these models make are so vague that it's not even possible to say whether these models are accurate. How can we consider this good science when it's impossible to verify a model...

I have heard almost those same exact words during discussions I have participated in with climate scientists. Even if one is willing to assume the correctness of the models (a very big assumption), the problem of estimating the statistical variability of the predictions coming out of the more physically realistic models is incredibly difficult and, as far as I can tell, nowhere close to being solved.

Are they able to use inputs from the distant past to predict the climate of the less-distant past? If they can do that, and they have a reasonable enough data stream, they should at least be able to say something about the expected range of variation, within some parameter space. I can't believe that it's impossible to verify their models in any way. Do these models not have some analog of a training and test set? How do you even build a model without reference to actual data?

Some of the more physically realistic models require a week of computation on a really *fast* computer to get the results for a *single* setting of the parameters. Given that the parameter space can have over 100 dimensions, and that the model output (the solution of a massive PDE) is highly non-linear in the parameters, it is currently computationally impossible to determine whether or not a given set of parameters is really the best fit to the historical data. What this means is that there may be values of the parameters that would give a better fit to the historical data but also predict a less significant increase in global temperatures, or even no increase at all.

Normally, one would validate a parameter in a smaller context; you don't develop a multifactorial model on a giant problem, but on a number of well-characterized small systems. You then work up to validations based on larger systems. Sensitivity analyses from smaller simulations can point the way to parameters that need to be examined more carefully in a larger system. A given model might have 100 dimensions, but I'll bet that some of those are trivial dimensions that don't really need to be explored carefully. I don't think the models are as unreliable as some people are trying to cast them. Modelers understand these issues better than a lot of people seem to think.
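
A minimal sketch of the kind of screen I mean (a toy black-box stands in for the cheap, smaller simulation; everything here is invented for illustration): perturb one parameter at a time around a baseline and rank the responses.

```python
import numpy as np

# One-at-a-time sensitivity screen over a toy 10-parameter 'model'.
rng = np.random.default_rng(5)
dims = 10
weights = np.where(np.arange(dims) < 3, 1.0, 0.01)   # only 3 parameters matter

def model(p):
    return float(np.sum(weights * p) + 0.1 * p[0] * p[1])  # toy response

baseline = np.zeros(dims)
base_out = model(baseline)

influence = []
for i in range(dims):
    p = baseline.copy()
    p[i] += 1.0                       # unit perturbation of parameter i
    influence.append(abs(model(p) - base_out))

print("parameters ranked by influence:", list(np.argsort(influence)[::-1]))
# The trivial dimensions sort to the bottom and can be frozen in the
# expensive large-system runs, shrinking the space to explore.
```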

#21 Connor MacLeod

  • Guest
  • 619 posts
  • 46

Posted 31 January 2011 - 04:51 AM

I don't think the models are as unreliable as some people are trying to cast them. Modelers understand these issues better than a lot of people seem to think.

There is considerable uncertainty concerning the degree to which human activity has impacted global temperature, but more practically relevant is the fact that the confidence intervals around model predictions under various possible interventions are so wide as to be useless.

Modelers understand these issues better than a lot of people seem to think.

They understand they have a problem, but they are still looking for the solution.

#22 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 31 January 2011 - 05:49 AM

I don't think the models are as unreliable as some people are trying to cast them. Modelers understand these issues better than a lot of people seem to think.

There is considerable uncertainty concerning the degree to which human activity has impacted global temperature, but more practically relevant is the fact that the confidence intervals around model predictions under various possible interventions are so wide as to be useless.

Modelers understand these issues better than a lot of people seem to think.

They understand they have a problem, but they are still looking for the solution.

Useless for what? Do you mean that the predictions are utterly meaningless? If so, how do these things get published? My own area is the molecular sciences, and I couldn't publish a paper with the sort of statistical gibberish that I'm hearing the climate community being accused of. I just find it hard to believe that a whole branch of the physical sciences is so bereft of statistical knowledge that they wouldn't understand something as basic as an overtrained model or some of the other things they're accused of. Who are you referring to when you say that they understand they have a problem? Nobody's model is perfect, and everyone wants to improve their model. The question is, does the model provide any useful information? Even models that aren't capable of quantitative prediction can sometimes provide valuable insight into a system. Are you saying that climate scientists are unaware of the limitations of their models, or that they are so ethically bankrupt that they are just lying about them? Or are the modelers ok but some third party is taking their conclusions and misusing them? Sorry to pepper you with questions; I'm just trying to understand what you're saying.

#23 Connor MacLeod

  • Guest
  • 619 posts
  • 46

Posted 31 January 2011 - 07:54 AM

Useless for what?

Useless in terms of making policy decisions, i.e., where to spend the money, or even whether to spend it.

The question is, does the model provide any useful information? Even models that aren't capable of quantitative prediction can sometimes provide valuable insight into a system.

Yes, that is certainly true.

Sorry to pepper you with questions; I'm just trying to understand what you're saying.

I'm sorry. I don't have the time or the inclination. You have raised some good questions, but it would take too much effort to answer them satisfactorily.




