to continue the discussion from :
http://www.fluwikie2.com/pmwiki.php?n=Forum.ChancesForAPandemic
that thread was closed for length. Long threads are currently slower to load here at fluwiki.
>European at 12:13
>anon,
>you are running around asking for data that does not really exist.
European, not data, but estimates.
>We are dealing with far too many unpredictable and unknown items here.
in areas with even more unpredictables and unknown items predictions are made frequently. E.g. share-prices and betting markets.
>There are no historical statistics (not history)
?
>that we can use either.
books are full of it.
>The scenario is actually extremely fuzzy in terms of predictability.
not more than e.g. long term political predictions.
>What we know today is that this virus kills, and kills very
>efficiently at that, once it infects a host. We do not even
>know if the virus will ever develop the capability of infecting
>easily and uniformly across the human population. There is not
>enough knowledge around to even make valid guesses about its
>future capabilities.
you can always guess. What you probably mean is that there will be a large deviation among different “guesses” (= estimates).
>It continues to surprise us.
so does the weather. But we still have weather forecasts.
>Even if all available data had been in the public domain,
>I assume we would not have had much more success in making
>predictions about its future. I believe this is because it
>is a biological entity, and we do not know enough about how
>they evolve to make such predictions.
predictions _are_ being made, just unclearly worded. Presumably we all agree that e.g. Osterholm considers the probability to be larger than 10% that there will be an H5N1 pandemic in the next 5 years, although he avoids giving numbers.
>What we can say on the other hand is that if so and so happens,
>then we would expect to see the following, etc. In my book that
>is guesswork, not valid predictions.
different words for the same thing
>These guesses are useful for contingency planning,
… so they _are_ useful ?!
>but they do not give me a prediction that I would put my name under.
why not ? It’s just your prediction. No one will blame you if it turns out to be unfounded later. It’s just the best you can do. Of course, if you continue to give silly estimates, at some point your qualification might be questioned.
>They would be based on a lot of preconditions, most of them unstated,
>like how much Tamiflu is available, the time to manufacture vaccines,
>population mobility, population density, the effectiveness of the
>precautions people take, the length of the pandemic, general
>availability of food, drugs, and other goods, etc., etc.
yes. It’s a good strategy to estimate these probabilities independently and then compute the overall estimate from these.
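A minimal sketch of that strategy in Python (an illustration added here, not from the thread; the probabilities below are made-up placeholders, and treating the components as independent is itself an assumption):

p_pandemic = 0.10            # hypothetical: P(pandemic starts within the planning horizon)
p_no_antivirals = 0.40       # hypothetical: P(antiviral stockpile insufficient | pandemic)
p_no_timely_vaccine = 0.70   # hypothetical: P(no matched vaccine within 6 months | pandemic)

# under the independence assumption, the overall worst-case probability is the product
p_worst_case = p_pandemic * p_no_antivirals * p_no_timely_vaccine
print("P(pandemic with neither antivirals nor a timely vaccine) =", round(p_worst_case, 3))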
>Basically far too many datapoints to make them useful for anything.
huh ? The more datapoints the better.
>They remain guesses and speculations.
other words for the same thing. But yes, of course, it’s just a probability estimate and a speculation. Still useful, but no certainty about the outcome.
>When experts answers enquettes about a subject they will
what’s “enquette” ?
>most probably tend to answer based on personal outlook and
>experience. If they on the other hand were asked to write
>a formal paper on the same subject the answer would be
>completely different. The data would be backed up by sources
>and reasoning. The methods they employed to get to their
>answer would be described, etc. Repeatability of the procedure
>is the important factor. An enquette is nowhere near that.
>To use data collected by enquettes is a bit like trying to
>predict the stock market based on registered stock transactions
>- very, very difficult ;-).
yes. You get the same effect, probably even better, if you can make the various experts enter an engaged discussion about their positions. And if you make them think about these “Alexander-questions”.
>DemFromCT – at 12:14
>thread closed for length.
>Paralegal – at 12:15
>DemFromCT, I am in complete agreement that all points of view
>should be welcomed. However, a discussion conducted solely for
>the purpose of being argumentative (not you) seems pointless.
>I am probably missing the entire point of the exchange (I can
>be a little dim sometimes!).
I thought the fact that the CDC formed the two panels in 1976 with the purpose of coming up with probability estimates should already prove that this is an important issue.
Ok, the chances for a pandemic are 100%, but the question is when? I’d say, based on current events, I would not be surprised if it started this week. I need it to wait till the winter so I can prepare more, but I hope it takes a couple of years to come.
So those are my three answers.
Oil and Water. Anonymous demands everyone validate their points. This is a discussion, not a scientific paper to be proved. I rely on my intuition in every aspect of my life. If I have a feeling or a thought I check it out, and find it valid 99% of the time. So I trust it. I don’t expect anyone else to do so; it’s for my use alone. But how many times have data proved false? Butter is bad, now it’s better than margarine. HRT is wonderful for how many years, then it is bad, now it’s good. All data, all statistics, and I’m sure every one of you has been bombarded with medical data that proved skewed, inaccurate and sometimes dangerous. I think the sun has a great impact on our lives. Without it we would not exist. The world as we know it would not exist.
>>These guesses are useful for contingency planning,
>… so they _are_ useful ?!
These guesses are boundary case predictions, and not dependent on anything “real”.
Sequential Decision Making Under Uncertainty
…. “Finding the optimal strategy turns out to be a very hard problem, both computationally and statistically, and it seems staggeringly unlikely that most human beings, when faced with such situations, respond in anything like the optimal manner. Or, rather, if we do act optimally, it’s with respect to a non-obvious criterion.” ….
http://cscs.umich.edu/~crshalizi//notebooks/sequential-decisions.html
LOL. I remind some newer participants this is an old discussion, dating back months. ;-P.
Don’t expect to convince our friend. His mind is set on this. But of all of us, he’s the one most likely to come up with the probabilities if anyone actually publishes them.
I know, I saw the last one as well :-)
Interesting idea/link by Northstar – at 10:46 (Making Your Own Vaccine thread)
Smithsonian Magazine article about Dr. Robert Webster (Jan ‘06, pg 40) when he and his staff began investigating H5N1. ’’They created their own crude, inactivated virus vaccine and dripped it in their noses’’
“They declined to discuss the process in detail,” the article stated.
DemFromCT – at 14:01
Maybe what we need is a “How’s your gut” thread where people can give us their “gut feelings,” intuition, or “reading of the tea leaves” on where we stand with this possible pandemic. We would purposely leave out any necessity to defend one’s position, since it is after all not science that we are after.
Let’s just have a thread where people can compare their own feelings/predictions with others who have similar concerns.
There are places where people can post answers and an automagic statistic thing comes out the other end. I’d call it an “intuitometer”, and it would give an exact and accurately precise measure … of intuitions. If “the wisdom of crowds” works (google for that, there was a book about it) then so be it.
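For what it is worth, a toy version of such an “intuitometer” is easy to sketch in Python (an illustration added here; the submitted percentages are invented):

import statistics

gut_feelings = [10, 25, 50, 50, 75, 80]   # hypothetical gut-feeling percentages submitted to the thread

print("mean  :", round(statistics.mean(gut_feelings), 1))   # ~48.3
print("median:", statistics.median(gut_feelings))           # 50
print("spread:", round(statistics.stdev(gut_feelings), 1))  # ~27.3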
I am just going with what I have read many times here: “Hope for the best, prepare for the worst.” I don’t know, can’t know, when/what/if it will happen. I am mistrustful of the WHO and the CDC, among others, and feel it’s incredibly irresponsible not to prepare, since I have a small child. I do know that since I began making preparations I am less anxious about avian flu. Nothing has changed except my ability to cope with it.
Greetings. I am new to flu wiki but not to avian flu; I have been following a Canadian blog for longer than I can remember and the Wall Street Journal tracker since it was initiated. I subscribe to Nature and Science, and there are frequent and fascinating articles on avian flu in both.
I decided to make more formal preparations today, and headed to Costco. I am back with a (much) lighter wallet and in a quandary about where to store everything in my small Manhattan apartment, and still have access to those areas for dusting, etc. Anyone else in small urban quarters? How are you coping? Do you plan to SIP in the city or try to get outside? Would love some insight on SIP in major metropolitan areas. Thank you.
Welcome CityGirl! There are quite a lot of people in small urban quarters (I’m not one of them, sorry). Try your questions on the link to the main current prep thread. You will be sure to find lots of helpful advice there.
CityGirl,
Welcome! I will also be SIP in my urban condo. I got storage containers for under the bed and pushed all of the books back on a couple of bookshelves to line with canned goods.
melanie- SO as you consume your canned goods, you might see titles like:
War and Peas and A Tale of Two Tuna Fish
<I’m sorry. I’ll go back into my lab and shock myself silly with one of my Dr. Eccles Do it Yourself Frankenstein kits>
Eccles,
Would that be Poindexter Frankenstein?
That’s FrahnkenShteen.
Note to self: buy DVD of Young Frankenstein, still the funniest movie I’ve ever seen.
Oy!
CityGirl…what is Dusting? And why do you do it? Must be a woman thing! Dust can be controlled easily by stacking more stuff on top of it. See…problem solved.
3L120,
Doesn’t everybody do it that way?
From a guy perspective---The easiest way to “dust”? Compressed air. And I only vacuum the floor where I’ve walked. If I didn’t walk over there since last month, there’s no sense in vacuuming over there ‘til next month. ;-) ;-)
The problem with predicting the Bird Flu rests on the idea of a standard error of estimate. In psychology we know that when we tell someone their IQ score that is actually an “estimate” plus or minus 10 points. That is the SEM for the traditional IQ. So actually your IQ estimate of 100 is in the range of 90 – 110. This places you above “Dull normal” and below “Bright Normal.” This is a reasonable estimate that you are neither college material nor a candidate for a group home.
The problem with predicting a pandemic is with the SEM. The fuzzy data set and the low level of validity with regard to many of the significant variables lead to a predictive system that would have an SEM in the range of 40. So if you were to say the chances are 50%, the actual prediction could range from 10% to 90%. Under these circumstances the estimate is basically useless from a numerical and from a practical perspective. And that, Anonymous, is why you find so few people willing to make a prediction. What is left is pattern recognition; some people can see it coming but not on any numerical basis.
To ask people to make a prediction when the SEM is so large is inappropriate until such time as the SEM is a reasonable number. BTW, the Wiki survey participants did provide estimates of a pandemic. See the “Flu Wiki Summary” results.
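A small sketch of the bracketing arithmetic JoeW describes (an illustration added here, reusing the numbers from his post):

def band(estimate, sem):
    # plus/minus one SEM around a point estimate
    return estimate - sem, estimate + sem

print(band(100, 10))  # IQ example from the post: (90, 110)
print(band(50, 40))   # the pandemic case described above: (10, 90), too wide to act on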
The post should be using the Standard Error of Measurement (SEM) as an example, not the standard error of estimate. These are not the same things. Without getting technical I’ll just leave it at that.
JoeW,
You summed it up very nicely, thanks :-)
Anonymous, I am just a layperson, but it seems to me that the experts may be intimidated in this day of strong criticism of science in general in the US, and the changing of science findings by non-scientists in particular. That in itself would be a reason for the experts to hesitate.
Back in 1976 science was more accepted and well thought of (I remember, showing my age :-D). People thought of science as something that would help us in our quest to better ourselves. Not so much nowadays.
I could be wrong on this, but it seems to be a possibility as to one reason why they won’t give probabilities.
>European at 13:53
>>> These guesses are useful for contingency planning,
>> so they _are_ useful ?!
> These guesses are boundary case predictions, and not
> dependent on anything real
sorry, I do not understand
===============================================
>informatic at 14:01
>Sequential Decision Making Under Uncertainty
>Finding the optimal strategy turns out to be a very hard problem,
>both computationally and statistically, and it seems staggeringly
>unlikely that most human beings, when faced with such situations,
>respond in anything like the optimal manner. Or, rather, if we do
>act optimally, it is with respect to a non-obvious criterion.
>http://cscs.umich.edu/~crshalizi//notebooks/sequential-decisions.html
suboptimal strategies are also fine. Better than no strategy at all
or just strategies based on random guesses for probabilities.
Because it is a hard problem we need the experts to work on it.
=========================================================
>DemFromCT at 14:01
>LOL.
I do not understand why it is funny.
>I remind some newer participants this is an old discussion,
>dating back months. ;-P.
as I mentioned there is a new aspect in this thread with the
Neustadt-Fineberg-May analysis which seems to prove my points
and which elaborates on the probability estimates. It is also
in the book “Influenza” by Gina Kolata (you can get it for $6).
Nobody ever commented on this :-(
>Do not expect to convince our friend.
do not discourage people from trying. But please with arguments,
not just by repeating the “no data” nonsense.
>His mind is set on this.
maybe there is a -however remote- chance that he is right ?
Can you exclude that ?
>But of all of us, he is the one most likely to come up with
>the probabilties if anyone actually publishes them.
I am not doing this to publish it. I could post here when
I find something interesting, if there is interest.
But would you be interested at all ?
Wouldn’t this be meaningless by your logic anyway ?
Well, maybe later generations will examine this in the
aftermath of the H5N1-pandemic…
========================================================
JoeW at 19:10
>The problem with predicting the Bird Flu rests on the idea of a
>standard error of estimate. In psychology we know that when we
>tell someone their IQ score that is actually an estimate plus
>or minus 10 points. That is the SEM for the traditional IQ.
>So actually your IQ estimate of 100 is in the range of 90 to 110.
>This places you above Dull normal and below Bright Normal.
>This is a reasonable estimate that you are neither college material
>nor a candidate for a group home.
>The problem with predicting a pandemic is with the SEM.
>The fuzzy data set and the low level of validity with regard
>to many of the significant variable leads to a predictive system
>that would have an SEM in the range of 40.
I do not know what psychologists do and how you define SEM.
Usually I just calculate expectation value and deviation or
variance. How do you know it is 40 ? Have you measured it ?
Let’s look at the Tufts data at
http://www.setbb.com/fluwiki2/viewtopic.php?t=29&mforum=fluwiki2
We have mu=28.55 and sigma=27.31. Here the deviation sigma is
indeed almost as high as would be expected with random values.
However the expectation value mu is considerably lower than the 50
expected with random values, so this is still useful data.
I think the reason for the high sigma is the formulation of the
question. They say “efficient h2h” but then they explain
“capable of at least a 2-chain h2h”, so “efficient” is missing
from the explanation, and that last condition is apparently
already fulfilled by now.
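As a quick check on the “random values” benchmark (an aside added here, assuming “random” means answers drawn uniformly between 0 and 100):

import math

uniform_mean = 100 / 2              # mean of uniform answers on 0..100
uniform_sd = 100 / math.sqrt(12)    # standard deviation of that uniform distribution
print(uniform_mean, round(uniform_sd, 1))   # 50.0 28.9 -- so sigma=27.3 is indeed close to "random", mu=28.6 is not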
>So if you were to say the chances are 50% the actual prediction
>could range from 10% to 90%. Under these circumstances the estimate
>is basically useless from a numerical and from a practical perspective.
>And that, Anonymous, is why you find so few people willing to make
>a prediction.
this is just your hypothesis. It is not what we observe.
If it were this way then there would be much more disagreement,
and even the inexact statements of experts which we see
frequently (like “the danger is real” or “pandemics happen”
or “we are underprepared”) would not make any sense.
If experts were indeed as indifferent about the probability
as you indicate, then they should reasonably avoid statements
like that as well.
But this just shows that experts have a deficit here and that
they should work on this and discuss it to improve their estimates.
If their research is meaningful at all then the deviation should
decrease with more discussion.
>What is left is pattern recognition; some people can see it coming
>but not on any numerical basis.
It is always on a numerical basis, even when you do not recognise it.
You intuitively assign probabilities. Probabilities are defined as a model
to describe this feeling numerically. It is like saying some people
have a feeling for the time but cannot express it in hours.
>To ask people to make a prediction when the SEM is so large
>is inappropriate
yes, if the average is also near 50%, then it would be pretty useless.
But you cannot know the SEM before you actually ask the experts and
do the statistics.
And if the answers were indeed distributed continuously over [0,1] then this
in itself would be interesting to know. But I bet that the SEM would
be less than 40 (assuming the SEM of a continuous distribution is 0.4).
So, let us do it. If it is indeed 40, then what did we lose ?
Just refusing to answer because the SEM might be 40 (who knows ?)
is not an acceptable expert position, IMO.
>until such time as the SEM is a reasonable number.
>BTW the Wiki survey participants did provide estimates of a pandemic.
>See Flu Wiki Summary.Results.
ahh, yes, your fluwiki survey. You asked many questions which were
even harder to answer than the question of a probability estimate
for a pandemic. Yet people did answer and did not resort to some
“no one knows” excuse, as the experts do.
Here is what I found:
>Use a percentage scale for the following items where 100% = 100%
>positive, 75% = 75% sure, 50%= 50 percent sure, 0%= will not happen.
>Use any percentage estimate that is appropriate for you.
>Your estimate of a pandemic in the next six months
>Your estimate of a pandemic in the next year
>
> Respondents are 45% sure that a pandemic will occur in the
> next six months (range =0 to 100%). They are 72% sure that
> a pandemic will occur in the next year (with nearly dual
> modes at 50% and 75%).
it is not clear: was only one of the four answers (100, 75, 50, 0) allowed ?
I assume quite a few people thought so. And by omitting 25%
you already suggest that the probability is high.
Can you please calculate the deviation (or SEM) from these data ?
Or just provide the numbers so we can calculate it.
Isn’t it strange that fluwiki members did answer that question,
while experts refuse to do so ? Were fluwikians just wrong or silly
when they decided to answer ? You seem to think the only reasonable
answer would have been: “no one knows”.
>JoeW at 19:23
>The post should be using the Standard Error of Measurement (SEM)
>as an example, not the standard error of estimate. These are not
>the same things. Without getting technical I will just leave it at that.
hard for me to find the definition online. Not at wikipedia
or mathworld. Maybe you have a quick link. Anyway, I will go with
the deviation; that should also demonstrate the degree of
uncertainty in the estimates.
I tried it with the curevents-poll. Question on Apr.1st was:
>
>how likely do you think is a pandemic in 2006 ?
>
mu=.47, sigma=.257, n=134. That indeed looks almost random
(continuous distribution). I expect the experts’ estimates to
show a smaller sigma. And I expect it would decrease once they
start to discuss the estimates.
And for the estimates to be useless we would also need
mu = .5, but we could change mu by changing the time duration.
So, if it’s 50% in 9 months then it should be almost 6% in 1 month.
I mean, it can’t be mu=.5 and sigma=.28 for all time ranges.
So the data _must_ be useful…
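A small sketch of that time-window conversion (arithmetic added here; the post does not say which rule it assumes, so both a constant-monthly-hazard and a simple linear split are shown):

# assuming each month carries the same independent chance p of a start,
# P(start within 9 months) = 1 - (1-p)**9; solve for p when that equals 0.5
p_month = 1 - (1 - 0.5) ** (1 / 9)
print(round(p_month, 3))   # ~0.074 under constant hazard
print(round(0.5 / 9, 3))   # ~0.056 with a simple linear split, close to the "almost 6%" above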
Anonymous - it seems to me that you tend to argue each and every sentence and each and every word that anyone says just to hear yourself talk. No matter what answers or estimates are given, you have an argument against everything being said. Why can’t you accept any arguments against what you have to say?
mountainlady – at 02:48
I have no feeling for the change in science-acceptance over time
since 1976. Any idea how to find this examined somewhere ?
Any suitable keyword for keyword-search ?
A chart would be fine. I’ll keep searching…
anonymous – at 03:15
It’s just what I have seen in my lifetime, and totally unprovable by me from a statistical, numerical, or whatever form of proof is needed sort of way. I have no idea how to find out whether I am right, and I probably should not have even mentioned it because of that.
Carry on folks…
anonymous,
We’ve argued this out for months. What’s your point?
Gold Dust – at 03:04
first, it is normal to pick the points where you disagree since those where
you agree need not be discussed. So you often get the impression that
there is more dissent than there actually is. I.e. I did not feel so much
dissent with JoeW above.
second, I’m disappointed that fluwiki doesn’t support multi-quoting
and answering inline to sentences. So others rarely use it, while I’m editing this offline.
third, I fail to see the arguments against what I say. I feel that my points are being ignored and not addressed. People say “no data” when actually I showed the data. Why is that data not valid ? What about the CDC panels and the
Neustadt aftermath analysis of the 1976 swine flu ? Why can experts make unclear predictions but not give numbers ? Why can non-experts give numbers but not experts ?
Why can experts give numbers anonymously or privately but not in public ?
No one addressed these points.
Melanie – at 03:21
I gave my points in the posts here.
You are just opposing without ever arguing.
What’s your point ?
Are you really convinced about your position,
or is it just some agenda because you think a discussion of probability estimates
isn’t good for you ?
Anonymous - I can’t answer for anyone other than myself, but I see university libraries full of papers and studies and theory that can prove or disprove any point - just as selective use of biblical passages can be used to justify anything from compassion to genocide.
Humanity has survived a few million years on primary instinct. I find it useless to argue over the statistical minutiae of whether a raindrop will hit me or not when I can just get under some shelter and be sure it won’t. Can you argue with that logic? The intent of fluwikie is to prepare for a pandemic, to save lives, and to preserve some sort of ordered society in the end if the pandemic comes sooner rather than later. I just want my kids to stay alive long enough to grow to adulthood with most of the world they know still intact.
So going with mountainlady and Melanie, let it go; let’s move on to do something useful with that brain of yours. You would be more of an asset helping calculate the needs of a healthcare facility to have adequate supplies for a disaster. All IMHO.
TRay 75 at 04:06
>Anonymous - I can not answer for anyone other myself,
>but I see university libraries full of papers and studies
>and theory that can prove or disprove any point - just as
>selective use of biblical passages can be used to justify
>anything from compassion to genocide.
yet scientists do consider most books useful. You have to
select the good ones.
>Humanity has survived a few million years on primary instinct.
>I find it useless to argue over statistical minutia of whether
>a raindrop will hit me or not
so you compare panflu with a raindrop…
>when I can just get under some
>shelter and be sure it will not.
preparing for panflu is not so easy. It costs many billions, and even
then it can only reduce the impact.
>Can you argue with that logic?
not sure what you mean. We could all easily get some shelter -
problem solved ? The economy goes down, 3rd-world people can’t
afford shelters, there are not enough shelters in towns,
people become discontent, revolution,…
>The intent of fluwikie is to prepare for a pandemic, to save
>lives, and to preserve some sort of ordered society in the
>end if the pandemic comes sooner rather than later.
>I just want my kids are alive long enough to grow to
>adulthood with most of the world they know still intact.
>So going with mountainlady and Melaine, let it go, let us
>move on to do something useful with that brain of yours.
>You would be more of an asset helping calculate the needs
>of a healthcare facility to have adequate supplies for a
>disaster. All IMHO.
but how much should we prepare ?
when is the right time ?
what to concentrate on , vaccines ? antivirals ? isolation ?
quarantine ? economics ? military ? homeland or 3rd world ?
These decisions all depend on the probabilities.
I am away from home and have use of a friend’s computer only briefly, but I am amused at Guenter’s behaviour and a trifle annoyed. Is he leading us on a merry chase partly for his own ego gratification?
Anyway, as to the thread. Look at it this way: the problem posed CAN, as unlikely as this will first sound, be reduced to the analogy of black and white balls drawn blind from a large container Universe! Let us say there are a very large number of balls, and that NOT ALL are white.
Now to bring this into real-world focus and H5N1 emergence, let production of a black ball draw represent an emergent pandemic, of whatever magnitude of qualities: transmissibility, lethality, etc.
In the world of virus emergence, EACH infected fowl, each infected mammal or person, is a sub-set of this Universal set of black and white balls. From the evidence of the past ten years, it appears statistically likely there are far, far more white balls than black in this game.
Where reality diverges most markedly from the usual high-school probability illustration is that the speed and frequency of the draws is very, very high indeed, as befits the life span and reproduction speed of this virus.
That means, if not ALL balls are white, the Probability of pandemic outbreak, at whatever level of potency, is not merely near certain, but even frequent if the number of black balls is even a very, very modest proportion.
It follows, being brief, that we can confidently expect not merely “a” but “many, many” pandemic outbreaks, most of which are apt to not have access to adequate fuel to ignite a general conflagration … but from a large population of attempts, some realistically will.
As has been agreed elsewhere, an outbreak in Indonesia in NO WAY precludes other, independent, emergences in Nigeria or in China. Outbreaks in 2006 should not extinguish expectations for other cases or strains in 2007 … 2008 …
Sorry at the haste, and have to run, but hopefully others (NOT everyone!) can consider this idea sympathetically, for it is merely extemporaneous.
Cheers!
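A minimal sketch of that ball-drawing arithmetic (an illustration added here, not Nikolai’s own numbers; both values below are invented placeholders):

p_black = 1e-6        # hypothetical per-infection ("per draw") chance of a pandemic-capable emergence
draws = 10_000_000    # hypothetical number of infected hosts over the period

p_at_least_one = 1 - (1 - p_black) ** draws
print(round(p_at_least_one, 5))   # ~0.99995: a tiny per-draw chance becomes near-certain over very many draws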
Sorry! I’m so used to author name being set! Nikolai, being forward again.
anonymous at 04:38
>I am away from home and have use of a friend’s computer only briefly,
>but I am amused at Guenter’s behaviour and a trifle annoyed.
>Is he leading us on a merry chase partly for his own ego gratification?
sigh. I knew this would happen. The more I ask people to concentrate on arguments, the more they will try to decide this from the author’s behaviour and their subjective judgement about it.
>Anyway, as to the thread. Look at it this way: the problem posed CAN,
>as unlikely as this will first sound, be reduced to the analogy of
>black and white balls drawn blind from a large container Universe!
>…
sounds unlikely first and second
SEM = SD * sqrt(1 - rtt)
Where:
SD = the standard deviation of the score distribution (the estimate in this case)
rtt = the reliability of the score distribution (for now we will assume Cronbach’s alpha)
SEM is driven by the reliability of the measure: a higher level of reliability yields a smaller SEM.
However, when rtt is constant, a smaller SD yields a smaller SEM.
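The same formula as a small code sketch (added here for illustration; the SD and reliability values are rough stand-ins suggested by the survey numbers discussed in this thread, not measured quantities):

import math

def sem(sd, reliability):
    # standard error of measurement: SEM = SD * sqrt(1 - rtt)
    return sd * math.sqrt(1 - reliability)

print(round(sem(28, 0.4), 1))   # ~21.7: with low reliability the SEM is nearly as large as the SD itself
print(round(sem(28, 0.9), 1))   # ~8.9: high reliability shrinks the SEM considerably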
For further study of SEM see any graduate text on “Tests and Measurements in Psychology and Education.” Some undergraduate texts cover the topic briefly. For a more scholarly introduction see Cronbach (19xx), “Intro to Test and Measurements,” or Magnusson (~1989), “Test Theory.” Or see http://tinyurl.com/hte4m Select the power point presentation for testing 2005. Tiny URL would not list this long URL and you will need to go to the parent directory (via tiny URL) then select testing 2005. The discussion with equations is about 75% of the way through the presentation and follows a general introduction to test score theory for graduate students.
In essence what this discussion of SEM shows is that the reliability of the estimate drives the usefulness of the estimate. It is assumed that estimates are composed of true score variance plus error score variance, where error score variance is assumed to be normally distributed with a mean of 0. True scores are actually more complicated than that, but for present purposes this definition will suffice.
Therefore, when you have unreliable estimates the usefulness of the estimate is null. In general, test reliability estimates must be greater than .89 for the estimate to be useful.
The current state of affairs with regard to the validity of the variables (and their measurements) that would be used to arrive at an estimate of a pandemic is suspect by nearly everyone’s definition. With no validity one does not know what variables to use, and the estimates are based on measures with unknown reliability.
Think of the estimate as a composite variable, Estimate = X1 + X2 + X3 (for example purposes I am using a simple linear composite; other equations can be submitted to the same type of analysis). The estimate can be no better than the reliability of the composite measure, though each variable may contribute more or less to the overall measure. If any one variable is too unreliable it will affect the overall reliability. Thus, one would need to carefully select all variables to be included in the estimate.
At this point no one seems to know what variables to use and what the reliability of these variables and the overall estimator would be. Thus, the SEM is unknown.
BTW, rtt coefficients were not calculated for the Flu Wiki survey and so we have no way of calculating the SEM. I gave samples of estimates from the survey. If you would like to see the actual results they are on the WIKI side under “Flu Wiki Survey Results.” Past experience leads to the conclusion that in a survey of this sort the reliability could not be expected to be much higher than .4 or .5; with a range of estimates as disparate as was found, the SEM would be quite high and the measures of central tendency could realistically range from 0 to 100%.
Note that while the estimators might or might not be normally distributed, the SEM is by definition an error term and is normally distributed.
The conclusion that this leads to is that until we have reliable variables that can be used in some sort of composite estimator, no useful prediction can be made. Others might have attempted to construct predictions in the past, and they may or may not have known about the components that are the basis of an estimator’s usefulness. Such prior work does not justify creating an unreliable estimator until we know how well it can be expected to work. Estimators can be constructed, but in general it would take people sophisticated in the characteristics of estimators.
People who work in the areas of virology and epidemiology would certainly need to work with individuals who know how to construct estimators to arrive at useful tools. To my knowledge this is not being done and whatever estimates we are getting they are based on composites wherein the individual does not specify what variables they are using and are thus assumed to be using variables with unknown reliability. The estimates are essentially useless. One can ask others to estimate the number of green men on the back side of the moon. These estimates are not of much use.
When various authorities are saying such things as “it will happen,” “it may happen,” “it won’t happen,” they are simply presenting their educated guess based on their own personal experience which has not been quantified, let alone submitted for analysis. It is for this reason that people give vague, wordy estimates.
I agree with anonymous that everything can be quantified and that all estimates rest on some sort of quantifying principles by whatever metric is used. However, this does not mean that these people, even groups of scientists not trained in the analysis of measurements, know how to assess the effectiveness of their work.
Paul Meehl in the 1950s (I believe) demonstrated that an algorithm will outperform the clinical decisions of the experts whose knowledge was used to create the system. This is so because the mathematical model never deviates and therefore has a better overall hit rate. We all use math to estimate all the time; we simply do not know that we do it.
With regard to the idea that policy makers should be using some specified (who cares what) model to make their public statements, I would suggest that, we need more time to determine the pertinent variables and we need improvement in our ability to generate good estimators through the collection of data. In the mean time, our governments are spending our dollars flying by the seat of their pants. You may not like it but that is the way it is done.
Imagine trying to estimate if the USSR was actually going to use those Cuban missiles in the 1960s. Unique events are impossible to predict. In the mean time, we need some decisions. I think that most policy makers attempt to get the best estimators possible before making a decision to say, “Plan for a pandemic.” However, good mathematical estimators are not available so we use pattern recognition, which, BTW is a form of math analysis if you think about it.
JoeW, as I read your scholarly treatise [understanding individual words, but unable to grasp the math] the thought crossed my mind that in physics they have proven that the observer affects that which is being observed. The implication being that no scientific test can be totally objective. This might be the reason behind the saying “We create our own reality.” Now, I’ll attempt to study your worthy contribution to see if I can glean anything else from it!
Nikolai: I like your analogy. Did you know that William James, many years ago, said that in order to prove there are white crows one need not study all the crows? You only need one white crow.
I wonder, what is the color of this virus?
Observer effects can be taken into consideration and they can, to some extent be removed from the estimator.
MainVA: What this boils down to is this. Let’s say that you estimate the speed a car is going by listening to the engine as it travels. We can further assume that you are not very good at it. Sometimes you are right, sometimes you are off by 10 MPH, sometimes you are off by 20 MPH. We could determine how much you are usually off and then bracket your next estimate. Let’s say that we determine that you are usually off by 7 MPH and you now say that car went by doing 50 MPH. So we would then say the car was going between 43 MPH and 57 MPH. Not bad, and somewhat useful if I have to cross the street blindfolded when you say “Go.”
Now let’s say that we determine that you are usually off by about 40 MPH and you tell me the car going by is doing 50 MPH. The thing is going anywhere from 10 – 90 MPH. This is not very useful information. I think I will go ask someone else.
And therein lies the problem. The estimators (not the people) are probably off (my guess) by 40 – 50%, so if they say the chance of a pandemic is 50% the actual odds are from 10 – 90% and of little use.
How is that – no math.
Much easier to understand!! Thanks for translating for me. It is clear you are a gentleman and a scholar.
I’m one of those “intuitive” people. My sense is that we are moving towards a crisis or a “crescendo” around mid-August to early September. Is that the real opener or is it a peak with a drop off after that time? Can’t tell yet. Such lack of precision would obviously not be welcome to Mr. A, however.
I think it was in one of the Beatles’ movies: “If you’re going to give us guesses, give us easy ones.” That is what we need.
29 May 2006 JoeW at 01:54
>SEM …
I just need a computer program to calculate these.
Why is it better than deviation here ?
>The current state of affairs with regard to the validity of
>the variables (and their measurements) that would be used
>to arrive at an estimate of a pandemic are suspect by nearly
>everyones definition. With no validity one does not know what
>variables to use and the estimates are based on measures with
>unknown reliability.
>
>Think of the estimate as a composite variable, Estimate = X1 + X2 + X3
>(for example purposes I am using a simple linear composite,
>other equations can be submitted to the same type of analysis).
>The estimate can be no better than the reliability of the
>composite measure though each variable may contribute more
>or less to the overall measure. If any one variable is too
>unreliable it will effect the overall reliability.
but not necessarily dramatically. It depends on the ranges of the variables.
>Thus, one would need to carefully select all variables to be
>included in the estimate. At this point no one seems to know
>what variables to use and what the reliability of these variables
>and the overall estimator would be. Thus, the SEM is unknown.
>BTW rtt coefficients were not calculated for the Flu Wiki
>survey and so we have no way of calculating the SEM.
let’s take the expectation value and the deviation instead
and compare it with a random sample (0.5,0.28)
>I gave samples of estimates from the survey. If you would like
>to see the actual results they are on the WIKI side under
>Flu Wiki Survey Results.
yes, I’d like the numbers but can’t find them with that keyword
>Past experience leads to the conclusion that in a survey of this
>sort the reliability could not be expected to be much higher
>than .4 or .5 with a range of estimates as disparate as was
>found the SEM would be quite high and the measures of central
>tendency would realistically range from 0 to 100%.
>
>Note that while the estimators might or not be normally
>distributed the SEM is by definition an error term and
>is normally distributed.
>The conclusion that this leads to is that until we have reliable
>variables that can be used in some sort of composite estimator,
>no useful prediction can be made. Others might have attempted to
>construct predictions in the past and they may or not have known
>about the components that are the basis of an estimators usefulness.
>Such prior work does not justify creating an unreliable estimator
>until we know how well it can be expected to work. Estimators can
>be constructed but in general, it would take people sophisticated
>in the characteristics of estimators.
>
>People who work in the areas of virology and epidemiology would
>certainly need to work with individuals who know how to construct
>estimators to arrive at useful tools.
or they inform themselves about this. Neither presents a serious problem.
>To my knowledge this is not being done
but it should
>and whatever estimates we are getting they are based on composites
>wherein the individual does not specify what variables they are
>using and are thus assumed to be using variables with unknown
>reliability. The estimates are essentially useless.
they can be improved later by discussion
>One can ask others to estimate the number of green men on the
>back side of the moon. These estimates are not of much use.
huh ? Anyone knowledgeable will estimate this as zero.
>When various authorities are saying such things as it will happen,
>it may happen, it won’t happen, they are simply presenting their
>educated guess
define “educated guess” and explain why it is different from a probability
estimate.
>based on their own personal experience which has not been quantified,
>let alone submitted for analysis.
it is also based on what they read about H5N1. Papers
are being published to better understand H5N1 and thus better
understand the chance that it goes pandemic.
>It is for this reason that people give vague, wordy estimates.
and for what reason do people give numbers for sport events ?
For what reason do physicians and fluwikians give numbers but experts not ?
>I agree with anonymous that everything can be quantified and
>that all estimates rest on some sort of quantifying principles
>by whatever metric is used. However, this does not mean that
>these people, even groups of scientists not trained the analysis
>of measurements, know how to assess the effectiveness of their work.
we can easily decide this by trying it out. Form the CDC panels etc.
as in 1976. We will see how big the deviation is and whether it’s useless.
What can we lose ? Why don’t you want to try it ? I bet the results
will not be as useless as you think.
>Paul Mheel in the 1950s (I believe) demonstrated that an algorithm
>will outperform the clinical decisions of the experts whose knowledge
>was used to create the system. This is so because the mathematical
>model never deviates and therefore has a better overall hit rate.
yes, but this clearly depends on the sort of clinical decision.
And it could be improved if experts were allowed to make use of that
program to improve their decisions.
>We all use math to estimate all the time, we simply do not
>know that we do it.
you told us - now we do know that we do it.
>With regard to the idea that policy makers should be using
>some specified (who cares what) model to make their public
>statements, I would suggest that, we need more time to
>determine the pertinent variables and we need improvement
>in our ability to generate good estimators through the
>collection of data. In the mean time, our governments
>are spending our dollars flying by the seat of their pants.
>You may not like it but that is the way it is done.
we do not have that much time. Let’s do the best we can now.
The probability estimate is not a political decision; it
should be made by scientists.
>Imagine trying to estimate if the USSR was actually going
>to use those Cuban missiles in the 1960s. Unique events are
>impossible to predict.
Every event is unique and can still be predicted. Some may show that
behaviour with a large SEM, but most won’t.
We could and did estimate the probability of those Cuban missiles being used in the 1960s.
We were quite alarmed, and later analysis
showed that we were right to be alarmed.
>In the mean time, we need some decisions. I think that most
>policy makers attempt to get the best estimators possible
>before making a decision to say, Plan for a pandemic.
>However, good mathematical estimators are not available
>so we use pattern recognition, which, BTW is a form of
>math analysis if you think about it.
let’s use virology experts and train them in “pattern recognition”.
Let’s give them the chance to discuss and improve their estimates.
I can’t see why you are so pessimistic about the SEM for that estimate.
If it were indeed useless and had that large an SEM, then all these unclear
statements by experts, like “there could well be a pandemic this year”,
“it’s more likely than in any other year since 1970”, etc.,
would be completely meaningless too and should be refused,
just as the giving of numbers is refused.
Also, with this common withholding of data, we expect the experts
to have access to data which we do not. That should give
them an advantage over our estimates and an opportunity to
inform the public without revealing the details of the secret data.
To review the actual Flu Wiki Survey results go to http://tinyurl.com/gq92y then proceed to DemfromCT at 21:55 and select the link to “Part 2” to download the large (1.5 MB) DOC file.
The SEE (standard error of estimate) is derived from the dispersion of scores within a distribution and yields information about the location of a particular score. That is, it tells us how far a person’s estimate is likely to deviate from the measure of central tendency (usually the mean).
The SEM (Standard Error of Measurement) is derived based upon the reliability of the measuring device. It answers the question, “Do we get the same measurement each time we measure?” It is used to determine if one is measuring “length” with a “rubber ruler” or with a “stainless steel ruler.”
One uses some set of variables to construct an estimate of a pandemic. These variables constitute a measuring device and we would have to determine (or at least consider) the reliability of the device used to measure. In general, it is known that one must have devices with rtt >.89 to be useful. To achieve this level of reliability all variables used in the construction must be quite reliable or there must be a lot of items with moderate levels of reliability and a small SEE. A lot is here defined as 16+ items (variables).
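JoeW does not name a formula for the “16+ items” rule of thumb; the standard test-theory result for parallel items is the Spearman-Brown prophecy formula, sketched here with assumed numbers:

def composite_reliability(r_item, n_items):
    # Spearman-Brown prophecy formula for a composite of n parallel items of reliability r
    return n_items * r_item / (1 + (n_items - 1) * r_item)

# with a modest per-item reliability of 0.35, about 16 items are needed to clear .89
print(round(composite_reliability(0.35, 16), 3))   # ~0.896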
To determine the usefulness of any estimate we need to know what is being used and the reliability of the observations. At this time neither of these conditions is being met, and what we have is “pattern recognition,” in which the variables are not identified per se but some people are better at seeing the pattern than others. Who these people are is not necessarily constrained to virologists and epidemiologists. There could be others, and policy makers need to listen to those who have been good at forecasting in the past. Hell, it might be a gypsy for all I know, and the person probably does not use a numerical estimate. They just say “it’s gonna happen.” In hindsight we find they were right and we do not know (and neither do they) which variables they used.
Simply put we need better data.
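One way to make “listen to those who have been good at forecasting in the past” concrete (a sketch added here, not something proposed in the thread) is to score past probability forecasts with a Brier score once the outcomes are known:

def brier(forecasts, outcomes):
    # mean squared error of probability forecasts against 0/1 outcomes; lower is better
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# hypothetical track records over five past yes/no calls
print(round(brier([0.9, 0.2, 0.7, 0.1, 0.6], [1, 0, 1, 0, 1]), 3))   # 0.062
print(round(brier([0.5] * 5, [1, 0, 1, 0, 1]), 3))                   # 0.25, an uninformative "always 50%" forecaster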
bump
Oh my. Tell me, is all the above worth reading? I’ll look it over tomorrow if it is. As a skimmer of most novels and unimportant data, I wonder: are we going to make this into an endless thread of estimate, estimate, estimate?
the data is at:
http://www.fluwikie2.com/uploads/Forum/survey2.doc
it’s in Word format, not computer-readable. It took me a lot of time
to convert the data.
Here are the values:
probability of pandemic [starting?] within 6 months:
0,0,0,0,0,0,5,5,5,6,10,10,10,10,10,10,10,10,10,10,15,15,15,17,20,20,20,20,20,20,25,25,25,25,25,25,25,25,25,25,25,25,25,25,30,30,33,33,40,40,40,40,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,55,70,75,75,75,75,75,75,75,75,75,75,75,75,75,75,80,90,100,100\\mean=40.4,deviation=23.8
probability of [at least one] a pandemic [starting] within 12 months:
10,10,10,14,15,15,20,20,25,25,25,25,25,25,25,30,30,30,30,40,40,40,40,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,60,60,67,70,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,75,80,80,80,80,80,80,80,90,90,90,90,90,100,100,100,100,100,100,100,100,100,100,100,100,100\\mean=62.2 , deviation=24.3
Let X be the random variable representing the 1st dataset (6 months) and Y the one for the 2nd dataset (12 months).
With random data I would have expected a deviation of 26.2 for X and 29.9 for Y, so this is only slightly better than random.
Then - assuming each day has the same likelihood for the start of a pandemic within the 12 months - we would expect the distribution of Y to be the same
as 100-(100-X)*(100-X)/100.
In that case I get an expected mean of 58.8 and a deviation of 28.3, so people seem to think
that a start of a pandemic in months 7–12 is a bit more likely than a start in months 1–6.
As I said, I’d expect the deviation of experts’ estimates to be smaller, and to decrease further once they start discussing and improving their estimates.
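A small sketch of that calculation (added here; the lists below are truncated placeholders, so paste in the full data above to reproduce the posted numbers):

import statistics

X = [0, 0, 5, 10, 25, 50, 75, 100]   # placeholder; use the full 6-month list posted above
Y = [10, 25, 50, 75, 75, 100]        # placeholder; use the full 12-month list posted above

print(round(statistics.mean(X), 1), round(statistics.pstdev(X), 1))
print(round(statistics.mean(Y), 1), round(statistics.pstdev(Y), 1))

# consistency check: if a start is equally likely in months 1-6 and 7-12,
# an individual 6-month answer x implies 100 - (100 - x)**2 / 100 for 12 months
implied = [100 - (100 - x) ** 2 / 100 for x in X]
print(round(statistics.mean(implied), 1))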
sorry again for the formatting. Now the whole thread is corrupted :-(
I will start a new thread : chance of a pandemic 3 here:
http://www.fluwikie2.com/pmwiki.php?n=Forum.ChanceForAPandemic3
it’s hard for me to predict how pmwiki will handle the formatting
Old thread - Closed to increase Forum speed.