Why statistics are not reliable

Providing only the percentage of change, without the underlying totals or sample size, can be thoroughly misleading. Likewise, the sample size you need is influenced by the kind of question you ask, the statistical significance you require (a clinical study versus a business study), and the statistical technique you use. If you perform a quantitative analysis on a sample that is too small, the results are usually invalid. Misleading statistics in the media are quite common.
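As a rough sketch of that first point, using entirely made-up numbers, the same headline percentage can sit on top of very different totals:

```python
# Minimal sketch (made-up numbers): the same "+50%" headline can describe
# wildly different realities depending on the totals behind it.

def describe_change(before, after, label):
    pct = (after - before) / before * 100
    print(f"{label}: {before:,} -> {after:,} ({pct:+.0f}%)")

# Two incidents becoming three is a 50% increase...
describe_change(2, 3, "Rare event, tiny base")
# ...and so is 200,000 becoming 300,000 -- a very different story.
describe_change(200_000, 300_000, "Common event, large base")
```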

One September, a congressman presented a chart whose structure does in fact appear to show that the number of abortions has grown substantially over the period covered, while the number of cancer screenings has substantially decreased. The intent is to convey a shift in focus from cancer screenings to abortion. The chart's plotted points appear to indicate that the abortions performed now outnumber, or outweigh in inherent value, the cancer screenings.

Yet closer examination reveals that the chart has no defined y-axis. This means there is no definable justification for the placement of the two plotted lines relative to one another. Politifact, a fact-checking website, reviewed the congressman's chart and re-plotted the same information on a clearly defined scale. Once placed on such a scale, it becomes evident that while the number of cancer screenings has in fact decreased, it still far outnumbers the quantity of abortion procedures performed each year.

As such, this is a textbook example of misleading statistics, and some could argue bias as well, considering that the chart originated not from the congressman himself but from Americans United for Life, an anti-abortion group. This is just one of many examples of misleading statistics in the media and politics.
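To see how much work an undefined y-axis can do, here is a minimal matplotlib sketch, using illustrative values rather than the real figures, that draws the same two series once with hidden, independent scales and once on a single labeled axis:

```python
# Minimal sketch (illustrative values only): the same data with and without
# a defined, shared y-axis. Requires matplotlib.
import matplotlib.pyplot as plt

years = [2006, 2008, 2010, 2013]
screenings = [2_000_000, 1_700_000, 1_300_000, 900_000]  # hypothetical counts
abortions = [300_000, 310_000, 320_000, 330_000]          # hypothetical counts

fig, (ax_bad, ax_good) = plt.subplots(1, 2, figsize=(10, 4))

# Misleading: each line gets its own arbitrary, hidden scale, so they appear to cross.
ax_bad.plot(years, screenings, color="tab:pink")
twin = ax_bad.twinx()
twin.plot(years, abortions, color="tab:red")
ax_bad.set_yticks([])
twin.set_yticks([])
ax_bad.set_title("No defined y-axis")

# Honest: one shared, zero-based axis keeps the magnitudes comparable.
ax_good.plot(years, screenings, label="Cancer screenings")
ax_good.plot(years, abortions, label="Abortions")
ax_good.set_ylim(bottom=0)
ax_good.legend()
ax_good.set_title("Shared, labeled y-axis")

plt.tight_layout()
plt.show()
```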

Colgate's well-known advertising claim about the share of dentists who recommend its toothpaste is another case in point. The claim, which was based on surveys of dentists and hygienists carried out by the manufacturer, was found to be misrepresentative because it allowed participants to select one or more toothpaste brands. Based on the misuse techniques we have covered, it is safe to say that this sleight of hand by Colgate is a clear example of misleading statistics in advertising, and it would fall under both faulty polling and outright bias.
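A small sketch of that polling flaw, with invented responses, shows why "select one or more brands" is so generous to every brand in the survey:

```python
# Minimal sketch (invented responses): when respondents may pick several brands,
# nearly every brand can truthfully claim that "most dentists recommend" it.
from collections import Counter

# Each hypothetical dentist recommends a set of brands, not one exclusive favorite.
responses = [
    {"Colgate", "Crest", "Sensodyne"},
    {"Colgate", "Crest"},
    {"Crest", "Sensodyne"},
    {"Colgate", "Sensodyne"},
    {"Colgate", "Crest", "Sensodyne"},
]

counts = Counter(brand for answer in responses for brand in answer)
for brand, n in sorted(counts.items()):
    print(f"{brand}: recommended by {n / len(responses):.0%} of dentists surveyed")
# All three brands hit 80% here, even though no dentist named a single favorite.
```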

Much like abortion, global warming is another politically charged topic that is likely to arouse emotions, and it happens to be a topic whose opposing camps each back their position with studies. Opponents of global warming point out that the global mean temperature measured in one recent year was a fraction of a degree lower than the temperature recorded in an earlier, carefully chosen reference year, and they argue that this small decrease disproves warming. The graph most often referenced to make that case shows the change in air temperature (in degrees Celsius) between just those two points in time. It is worth noting, however, that because there is a large degree of variability within the climate system, temperatures are typically averaged over cycles spanning many years.

The chart below expresses the change in global mean temperatures over a much longer period. While a short stretch of the data may appear to reflect a plateau, the long-term record clearly paints a picture of gradual warming.
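As a rough numerical sketch, with synthetic data standing in for the real record, you can see how two cherry-picked years can tell a different story than multi-decade averages:

```python
# Minimal sketch (synthetic data): year-to-year noise can dwarf a slow warming
# trend, so two cherry-picked years may show a dip while long averages rise.
import random

random.seed(42)
years = list(range(1900, 2021))
# Hypothetical anomalies: a small upward trend plus much larger random variability.
anomalies = [0.008 * (y - 1900) + random.gauss(0, 0.15) for y in years]

def change_between(start, end):
    return anomalies[years.index(end)] - anomalies[years.index(start)]

def period_mean(start, length=30):
    window = [a for y, a in zip(years, anomalies) if start <= y < start + length]
    return sum(window) / len(window)

print(f"Change between two hand-picked years: {change_between(1998, 2012):+.2f} C")
print(f"Mean anomaly, 1900-1929: {period_mean(1900):+.2f} C")
print(f"Mean anomaly, 1990-2019: {period_mean(1990):+.2f} C")
```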

Therefore, using the first graph, and only the first graph, to disprove global warming is a perfect misleading statistics example. But you cannot know whether a statistic is trustworthy until you ask yourself a few questions and analyze the results in front of you. As entrepreneur and former consultant Mark Suster advises in an article, you should ask who did the primary research behind the analysis: an independent university study group, a lab-affiliated research team, a consulting company?

From there naturally stems the next question: who paid them? As no one works for free, it is always worth knowing who sponsors the research. Likewise, what are the motives behind the research? What did the scientists or statisticians try to figure out? Finally, how big was the sample set, and who was part of it? How inclusive was it?

These are important questions to ponder and answer before spreading skewed or biased results everywhere, even though that happens all the time because of amplification. A typical example of amplification occurs with newspapers and journalists, who take a single piece of data and need to turn it into a headline, often pulling it out of its original context in the process.

One sign of biased statistics is that respondents had incentives to answer a certain way. There are other ways to look at incentives, too. For example, a journalist could go on and on about Americans wanting hybrid cars based on a single statistic, yet never mention the context of the study behind that statistic. That is a red flag that the statistic is misleading.

Remember: just because something sounds authoritative does not mean it actually is authoritative. When a statistic says that people are now twice as likely to die from something, that could be an example of context not being reported. What were the odds of dying from that cause in the first place?

If they were tiny to begin with, on the order of a fraction of a percent, then doubling them still leaves a very small risk, and the alarming-sounding headline says little on its own.
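Here is a tiny worked example, with an assumed baseline risk chosen purely for illustration, of why the relative figure alone tells you so little:

```python
# Minimal sketch (assumed baseline): "twice as likely to die" sounds alarming,
# but the absolute change can still be tiny.
baseline_risk = 0.0001           # hypothetical 0.01% chance per year
doubled_risk = baseline_risk * 2

print("Relative change: 2x ('twice as likely')")
print(f"Absolute change: {baseline_risk:.2%} -> {doubled_risk:.2%} "
      f"(an increase of {doubled_risk - baseline_risk:.2%})")
```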

What might someone think if a survey came out tomorrow saying that skin cancer is actually not all that common? It is, in fact, the most common of all cancers, according to the American Cancer Society and many other organizations. Beware of statistics that go against the grain.

Look at the groups sponsoring or carrying out the research. Many consumers are savvy: they know when something is not on the up and up, and it's often best for businesses to be straightforward about how they conducted research and reached their conclusions.

Businesses need to be sure that the companies they work with for, say, tracking consumer data, are presenting information accurately. By being aware of these pitfalls of misleading data and looking at signs such as sample size, methodology, and sample representation, a company can get a good idea of whether research is being performed accurately.

Curious about what can happen when companies get the data wrong or ignore it entirely? No matter how big or small, each company functions in a unique market, and TraQline continuously conducts market research to bring unbiased, reliable statistics to businesses. When you are ready to get access to accurate data and statistics for your industry, the market research experts at TraQline are here for you. Contact our market experts to learn more or to get started today!

What you must examine, if you wish to use statistics as evidence, are the above questions.

Who Did the Study?

Let us first examine "who did the study."

The problem arises when you find statistics that support every way of viewing an idea. You can find statistics that show cigarettes are killers and that they have no effect on anyone's health. You can find statistics that say you should cut down on the consumption of dairy products and that dairy products are good for you. You can find statistics that prove that soft drinks will give you cancer and that they have no effect on anything but your thirst, or even that they make you thirstier.

Every one of these sets of statistics is absolutely true. The phrase "numbers don't lie" holds; what you need to examine is who is publishing the numbers and what they are trying to prove with them. Consider, for instance, who might publish each half of those contradictory pairs: an independent health organization on one side, and the industry selling the product on the other. Did the latter give you pause? It should. Both are reputable, yet both hold differing opinions based on statistics.

Every point of view uses statistics to support its ideas. It's your job to examine all the statistics supporting all the points of view and to arrive at your own conclusions based on all of them. If you can't arrive at a conclusion, do your own study. An easier course, naturally, is to find out what all possible sides have to say and what other evidence they have in support of their statistics. Once you have determined whether or not there is prejudice involved in the statistics (please recall that subjectivity is unavoidable), it is time to move on to the next question: what are the statistics measuring?

What Are the Statistics Measuring?

When asking yourself "what are the statistics measuring," bear in mind the old saw about comparing apples and oranges. Most people will say that you can't compare apples and oranges. This is both true and false; it depends on what you are measuring. Overall appearance? Sugar content? Vitamin, mineral, carbohydrate, or fat content? As you can see, it is possible to compare apples and oranges, if you know what you are measuring. Your job, in using statistics as evidence, is to determine exactly what is being measured, and not simply spout numbers that seem to apply to your topic.

If your topic is "Nutritional Value of Oranges," statistics proving that apples are nothing like oranges may be measuring the wrong things.

Who Was Asked?

Once you've determined what the statistics are measuring, you next need to find out how the research was done. Many studies, the results of which are disseminated using statistics, are done by asking people their opinions, or what they do, think, feel, or believe. Such areas are often referred to as "soft sciences," as opposed to "hard sciences," which do research designed to minimize the human factor in the evidence and conclusions as much as possible.

The "human factor" is, naturally, impossible to eliminate totally as long as humans are involved, but the studies, to be "scientific," must be repeatable and predictive in nature.

That is, once a study has been done, equivalent results must appear when the study is done again by other researchers who have no connection with the original researchers, and the results should allow researchers to say what will happen next. Let us say that scientific statistics show meteors fall during a specific period (say, August) at an average rate (say, 60 per hour).

This study is repeated several years during August and the rate stays the same. Thus the study is repeatable. From those statistics it is possible to predict that in future years the average rate of shooting stars in August will continue to be 60 per hour.
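A small simulation, standing in for the real observations, makes the repeatable-and-predictive idea concrete: independent re-runs of the same count keep landing near the same rate.

```python
# Minimal sketch (simulated counts): independent repetitions of the same
# observation cluster around the same rate, which is what makes the prediction
# of roughly 60 meteors per hour next August a safe one.
import math
import random

random.seed(0)
TRUE_RATE = 60  # assumed average meteors per hour during the shower

def observe_august(hours=40):
    """One team's study: hourly counts, approximated as normal around the true rate."""
    counts = [random.gauss(TRUE_RATE, math.sqrt(TRUE_RATE)) for _ in range(hours)]
    return sum(counts) / hours

for study in range(1, 6):
    print(f"Independent study {study}: about {observe_august():.1f} meteors per hour")
```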

In this case, "who is being asked" are the impersonal forces of nature. It is the soft sciences that most often, intentionally or unintentionally, misuse or misapply statistics. The studies are often not repeatable and usually not predictive.

The reason for this is that people, and what they say or do, are the bases of the statistics. It seems axiomatic that people will perversely refuse to say or do the same thing twice running, or to let anyone predict what they will do.

In fact, many people consider themselves insulted when called predictable, and anything from the weather to the time of day to who's asking the question can change what they will say or do about something.

What does this mean to you as you examine the statistics you plan on using as evidence? First, try to determine whether the statistics are hard or soft science based. The simplest way to do this is to find out whether people or nature is being studied: if nature, it's hard science; if people, it's soft.

Second, if the statistics are hard science, check to see what results other researchers who have repeated the study obtained.

Of course, even hard science statistics often require that you examine who was asked. The entire population of the US? The population of New York or San Francisco? The population of Ottumwa, Iowa? Or a selection of towns and cities, rural, urban, and suburban, in all parts of the country? Statistics on the incidence of rape in the US, for instance, vary wildly depending on whether the study asks law enforcement or rape counseling centers: one set is based on the number of reported rapes, the other on the number of women seeking counseling, whether or not they reported the rape to law enforcement.
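A brief sketch, with a fabricated population, shows how much the answer to "who was asked" can move an estimate:

```python
# Minimal sketch (fabricated population): asking only one city gives a very
# different estimate than drawing a sample from the whole population.
import random

random.seed(1)
# Hypothetical population: opinion rates differ sharply between the two places.
population = (
    [{"place": "New York", "agrees": random.random() < 0.70} for _ in range(5000)]
    + [{"place": "Ottumwa", "agrees": random.random() < 0.35} for _ in range(5000)]
)

def share_agreeing(sample):
    return sum(person["agrees"] for person in sample) / len(sample)

new_york_only = [p for p in population if p["place"] == "New York"]
national_sample = random.sample(population, 1000)

print(f"Asked only New Yorkers:  {share_agreeing(new_york_only):.0%} agree")
print(f"Asked a national sample: {share_agreeing(national_sample):.0%} agree")
```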

Both of these examples appear to be hard science, since they are based on "hard" facts, but they nonetheless must be examined for who was asked. Soft science statistics are even more slippery than hard science statistics. First, there are few hard, repeatable, non-subjective facts on which to base the statistics. If you wish to show how people react to violence, how do you define violence? And how do the people in your study define it? A victim of a mugging may define violence as anyone getting within five feet of him, while a mugger may define it as anything that causes him physical damage (what he does to others is simply high spirits).

Also bear in mind that any study that uses human subjects is almost impossible to conduct under laboratory conditions, in which all factors that could affect the outcome of the experiment are controlled, including the variable under study. For a truly statistically valid study showing the effects of television violence on children, the children would have to be isolated from all other factors that could have an influence.

These other factors would include contact with other human beings and with other expressions of violence (people, reading, radio, movies, newspapers, video games, etc.). This would obviously work to the social and developmental detriment of the children.

As a matter of fact, a recent controversy arose over using medical data collected by the Nazis in the concentration camps. These data were collected with absolutely no regard for the fact that the test subjects were human beings; they were treated far worse than any laboratory animal in the world today.

Ethical and moral considerations aside, the data are viewed as valuable. However, there are people who believe that the ethical and moral considerations are paramount, and that the data, no matter how valuable, should be destroyed because of the way they were gathered.

In addition to the fact that any study involving humans must take into account human and humane considerations, you should never underestimate the perversity of a human being. In studying comedy, one of the first things I learned was never to tell the audience I was going to be funny. The moment a comedian says to an audience, "You're really going to find this funny," the same audience that moments before was falling out of their chairs laughing will turn cold and silent, with an "Oh, yeah? Prove it" attitude.


