On Numeracy, Naïveté, Google & Pew
Pew Religion in American Life says 21% of atheists believe in God. Or so our metro newspaper reported—and refused to clarify the reporting.
A review of Dirty Words: A Literary Encyclopedia of Sex in that same paper notes that the Google searches in the book are “revealing, if not exactly saucy.” It then quotes from the book, comparing the number of “Google pages” for a one-word sexual term I won’t use here with the number for Nabokov—the first being almost three times as high as the second.
It is quite possible that my discussion of Google search results last issue was fundamentally misguided—and there’s no real way to know whether that’s true or not.
Checking the Schwab website just now, I see a truly dramatic rise in stocks—they’re just climbing like wildfire. (It’s not just Schwab: I’d see the same thing on most stock sites.)
How do these four items fit together? Numeracy—or some combination of numeracy, naïveté and common sense. When I included a chapter on numeracy in Being Analog: Creating Tomorrow’s Libraries, at least one reviewer sneered at the inclusion, since everybody learns this stuff in grammar school. But it’s fairly clear that people don’t (or at least don’t retain it)—and, I’m afraid, “people” sometimes includes the librarians who should be helping other people understand what they’re dealing with.
Consider the four examples:
This one’s not so much numeracy as sloppy reporting—sloppy reporting that Pew almost certainly knew would happen. The Pew Religion in American Life survey did not ask “Do you believe in God?” Instead, it biased its survey toward a positive response: It asked “Do you believe in God or a universal spirit?” That last clause is vague enough that almost anyone who feels there’s something more important than themselves would answer Yes.
I did a whole piece on Google result counts last time. Bluntly, large result counts from Google simply don’t have any clear meaning—and can’t be used to make valid comparisons between different topics. That’s particularly true when one of the terms is sex-oriented: Spam alone can add literally millions of hits that don’t relate back to any actual content. On the other hand, I’d guess there are very few uses of “Nabokov” in spam. The comparison isn’t “revealing”—it’s pointless. Does the web contain more actual content on this particular sexual activity than it does on Nabokov? There’s no real way to know (and I’m not about to do this particular exploration, thank you).
Seth Finkelstein suggests that part of my discussion of Google search counts was based on false assumptions. To wit, where I found substantially fewer displayable results than the 1,000-result limit for some terms showing very high result counts, Finkelstein believes Google’s just grabbing the first 1,000 results (all it will ever give you in any case) and eliminating duplicates and spam from that result before presenting it. In which case, elements of my discussion might not be right—but there’s no way of knowing. Google searching is a black box with no instruction book: You can only judge it based on what emerges. If my analysis was naïve, it was a naïveté that 99% of users who investigated would share. Unfortunately, users who investigate at all are probably fewer than one in ten; the rest will simply take the big numbers as meaningful.
Schwab’s daily stock chart is classic chartjunk, of a type that’s incredibly prevalent, particularly in financial reporting. The daily chart is a non-zero chart: Neither axis begins at zero. It is, in fact, always scaled to show the most dramatic possible interpretation. The scale and numbers on the chart are designed so the day’s low and day’s high define the bottom and top of the chart itself. In this case, what looks like an astounding bull market actually amounts to just over a 1% gain—which is nice given the last couple of weeks, but would be nearly invisible on a proper chart.
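A quick sketch of that scaling trick, using hypothetical prices (the actual Schwab figures aren’t given here)—once the axis runs from the day’s low to the day’s high, even a roughly 1% move fills the entire chart:

```python
# Hypothetical prices, chosen only to illustrate the scaling effect.
prev_close = 100.00
day_low, day_high = 100.10, 101.20

gain = (day_high - prev_close) / prev_close
print(f"actual gain: {gain:.1%}")                 # about 1.2%

# Fraction of the chart's height that the day's range occupies:
zero_axis_span = (day_high - day_low) / day_high  # on a zero-based axis
scaled_span = 1.0                                 # on an axis scaled to low/high
print(f"zero-based axis: {zero_axis_span:.1%}")   # barely visible
print(f"low/high axis:   {scaled_span:.0%}")      # the whole chart
```

The same 1% move that would be a nearly flat line on a zero-based chart becomes a dramatic climb when the axis is clamped to the day’s range.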
I’m not going to say much more about either Google or Pew. So the title of this essay is misleading—it’s really about numeracy and naïveté, using Pew and Google as examples.
Sometimes, numeracy problems are obvious—or they should be, if you understand basic arithmetic and have common sense. They deal with non-reversibility of percentages, being able to do basic multiplication and division, meaningful and non-meaningful digits in reports—and one form of survey bias.
Let’s look at a few others.
A survey can be no better than the quality of its sampling and the wording of its questions. Unfortunately, sampling quality is getting harder and harder to assure. As far as I know, no survey outfit attempts to compensate for the kinds of people who simply won’t answer telephone surveys. We don’t (and we probably average one survey request a week); do you?
If you don’t have a landline telephone, the answer’s simple: You don’t get called. If you just don’t have time for extended surveys, you may get called but you won’t be included. As for internet surveys, they have other sets of problems. (I’ve seen surveys where you can’t complete the survey without stating your income range; lots of us simply will not do that.)
Question bias is difficult to spot, especially since most reporting of survey results won’t include the questions. I regard the Pew Religion question as deliberately biased toward a positive result—after all, “Do you believe in God?” is a straightforward question (and could be varied for adherents to other religions).
You can usually count on surveys from reputable firms having a large enough sample so that first-level breakdowns are statistically meaningful. But that can break down when you get to subsamples.
Let’s say a survey asks 2,000 adults about ebook reading but also asks them about their computer platform. Let’s say 4% of the respondents use Macs and 2% use Linux. So far, so good. Then the survey reports “20% of Mac users and a remarkable 30% of Linux users are interested in buying ebook readers.”
Remarkable? They found 12 people who use Linux and are interested in buying ebook readers—and 16 who use Macs and have similar interests. Neither result is particularly meaningful. (I’ve seen widely-publicized survey results where the magic number was four people, extrapolated into a trend likely to include millions.)
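The subsample arithmetic above is worth checking directly—and a rough rule of thumb for the margin of error (about one over the square root of the subsample size, for a simple random sample at 95% confidence) shows just how shaky a 40-person subsample is:

```python
import math

# Figures from the example: 2,000 respondents, 4% Mac, 2% Linux.
respondents = 2000
mac_users = round(respondents * 0.04)       # 80 people
linux_users = round(respondents * 0.02)     # 40 people

mac_interested = round(mac_users * 0.20)    # "20% of Mac users"
linux_interested = round(linux_users * 0.30)  # "a remarkable 30%"
print(mac_interested, linux_interested)     # 16 12

# Rough 95% margin of error for a simple random subsample of n: ~1/sqrt(n)
print(f"Linux subsample: ±{1 / math.sqrt(linux_users):.0%}")
```

With a margin of error around ±16 percentage points, the “remarkable” 30% could plausibly be anywhere from the mid-teens to the mid-forties.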
Non-zero axes are one common form of chartjunk, serving to magnify the apparent significance of any change. (Doing the opposite—scaling a chart so that changes are minimized—is fairly obvious, since most of the chart is empty.) There’s a much worse form that turns up in PowerPoint presentations and sometimes elsewhere: Unlabeled and partially labeled axes. You can make results show almost any trend you want if you’re willing to combine the two. (I can imagine a chart on blogging frequency that has days per post rather than posts per day as a vertical axis…)
Being Analog: Creating Tomorrow’s Libraries was published in 1999. I’m ending this essay with portions of Chapter 4 from that book, “Coping with Nonsense: Numeracy and Common Sense.”
The following questions test some aspects of your real-world numeracy. If you’re sure you know all the answers, you may not need to read further—but otherwise you do need to read on, particularly if you say, “Who cares?”
2. Define the user population of an ARL library as being the sum of FTE faculty and FTE students on the campus. Given that definition, the average per capita library funding for 1992/93 at Arizona State University, Princeton University, Stanford University, and the University of Houston was $1,467. Is that statement: a. True? b. Meaningful?
3. Your city council says there is a budget crisis and your library budget must be cut one-third (33 percent) for the new fiscal year. When that year begins, the city treasurer finds there was a mistake: there is no crisis. The council adds one-third (33 percent) to your library’s budget. Does this make you happy?
4. A professor asks how your million-volume library’s focus on French literature compares with national averages for academic libraries. Consulting the National Shelflist Count tables, you find that the national average was 0.5025 percent in French literature, where your library’s figure was 0.5021 percent. What should you report back to the professor?
5. You read that a new computer “cuts retrieval time by 200 percent.” Should you be excited?
6. Your local newspaper runs the results of a survey on the areas local taxpayers are most willing to pay more for. Longer library hours or better library collections aren’t in the top ten. Neither are other library issues. Does this mean your community doesn’t care about libraries or feels they’re adequately funded?
There’s the quiz. How did you do? If you’re not sure, read on.
Here are my answers and why I think the answers and questions are important.
The statement is factual as an average of averages, but “true” only in that limited sense. It is not at all meaningful. No meaningful average can be stated for a population of two large and lean public universities combined with two wealthy private universities. The population is too small and too heterogeneous. It’s also not true in the proper sense of averages: that is, if you added the funding for all four libraries and divided by the total of the four campus populations, the result would be lower than $1,467.
For that year, Arizona State’s per capita library funding was $355; Stanford’s was $2,325; Princeton’s was $2,932; and the University of Houston had $257. The $1,467 number is wildly misleading for any one of the four institutions, and cannot be used to draw any judgments about them.
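A minimal sketch of the averaged-versus-pooled distinction. The per-capita dollars come from the text above; the campus populations are hypothetical, chosen only to make the effect visible (the large campuses have the low per-capita funding, so they dominate a proper pooled figure):

```python
# Per-capita funding from the text; populations are HYPOTHETICAL,
# illustrative FTE counts, not the actual 1992/93 figures.
per_capita = {"ASU": 355, "Stanford": 2325, "Princeton": 2932, "Houston": 257}
population = {"ASU": 45_000, "Stanford": 16_000,
              "Princeton": 6_500, "Houston": 33_000}

avg_of_avgs = sum(per_capita.values()) / len(per_capita)
pooled = (sum(per_capita[k] * population[k] for k in per_capita)
          / sum(population.values()))

print(f"average of averages: ${avg_of_avgs:,.0f}")   # $1,467
print(f"pooled per capita:   ${pooled:,.0f}")        # far lower
```

Whatever the real populations, the principle holds: averaging four per-capita figures weights tiny Princeton the same as huge Arizona State.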
Moral: An average means nothing without knowing the size and characteristics of the sample population. Since you can’t escape averages, you need to be able to demonstrate their fallacies when that’s appropriate.
You lost 33 percent, then immediately gained 33 percent. You might be relieved, but you should not be happy: you are down more than 11 percent from the original budget!
Percentages are not symmetrical. A reduction of a certain percentage is always more significant than an increase of the same percentage. This is one of the most common real-world mathematical problems and one of the most dangerous.
Look at the numbers in this case. Your library was to have a $1,000,000 budget. Cutting that by 33 percent makes the budget $666,667. Adding 33 percent to $666,667 means adding $222,222 (666,667 over 3), bringing the budget up to $888,889. Ouch!
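The cut-then-restore arithmetic, step by step:

```python
# Cut one-third, then add one-third back.
budget = 1_000_000
after_cut = budget * (1 - 1/3)     # 666,666.67 (the text rounds to $666,667)
restored = after_cut * (1 + 1/3)   # 888,888.89 (the text's $888,889)

net_change = (restored - budget) / budget
print(f"net change: {net_change:.1%}")   # -11.1%
```

The net effect is always (1 − p)(1 + p) = 1 − p², so equal cut-and-restore percentages always leave you below where you started.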
Moral: Percentages are not symmetrical and can be the most dangerous numbers when used loosely.
You should tell the professor that you are right at national averages, with about half of one percent of your collection being French literature. The difference between 0.5025 percent and 0.5021 percent is meaningless. “About half of one percent” is as precise as you would want to be—and if the number was 0.5993 percent, you should probably still say “about half of one percent.”
If your library has absolutely accurate reporting mechanisms, then 5,021 of your million volumes are in French literature. If every library reporting in the count had accurate reporting mechanisms, then the overall average would be 5,025 out of a million: a difference of four books, not significant under any plausible circumstances…
It’s rare for anything past the second non-zero digit of any result to mean much—e.g., so what if your collection is 0.503% rather than 0.504%?
Moral: Calculating something to four decimal places does not make those decimals meaningful.
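The four-book difference hiding behind those four decimal places can be checked in three lines:

```python
# 0.5025% vs. 0.5021% of a million-volume collection.
volumes = 1_000_000
national = round(volumes * 0.005025)   # 5,025 volumes
yours = round(volumes * 0.005021)      # 5,021 volumes
print(national - yours)                # 4 books
```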
Yes, you should be excited—in fact, you should be outraged by the sloppiness of the writer. Either that or you should be in awe, as the computer has achieved faster-than-light communication.
To “cut retrieval time by 200 percent,” the computer would have to return data as long before the data was requested as the earlier model returned it afterwards. Similarly, if a computer store advertises that it has “cut prices 200 percent,” you may be entitled to go in, pick up a product, and expect to be paid for it: a 200 percent cut from $1,000 means giving you $1,000.
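Taken literally, the arithmetic of a 200 percent cut:

```python
# "Cut prices 200 percent," computed as written.
price = 1000
new_price = price * (1 - 2.00)   # a 200% reduction
print(new_price)                 # -1000.0: the store owes you $1,000
# The largest cut a positive quantity can sustain is 100 percent (to zero).
```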
If you’re the head of the local public library or the Friends organization, you need to talk to the newspaper—or whoever provided them with the survey—and find out two things:
What questions were on the survey, and with what wording?
How was the survey conducted—who was surveyed, and using what methodology?
There’s a good chance that the survey listed a group of possible answers and asked respondents to choose those they considered most important—and that there were no library issues on the list. That happened in Santa Cruz, California (in a survey taken by one city department) and it’s probably happened elsewhere. Even with the possibility of adding new issues, most survey respondents will deal only with what they’re given. If libraries aren’t on the list, they won’t be in the responses.
If the survey was conducted entirely among business executives, it’s quite possible that most of them simply aren’t aware of the public library’s importance or problems.
It’s possible that your library is adequately funded, but it’s also possible that the survey is flawed—or that you haven’t done enough to keep the public informed about your strengths and shortfalls.
This test omits some important aspects of real-world numeracy because there is no easy way to state them as questions. For example, real-world numeracy will help you to scan a set of figures and spot possible problems, things that “stand out” and may need double-checking. Numeracy can help you to scan a spreadsheet and spot significant facts that would otherwise stay hidden—and can certainly help you to spot the flaws in conclusions drawn from the spreadsheet. Numeracy is vital in evaluating responses to a Request for Proposal. Any time you see a graph, you must bring numeracy to bear.
Setting aside deliberate lies, problems with real-world numbers come in two major flavors: mistakes and distortions. Mistakes, honest errors, can come about because someone has used inappropriate statistical tools, because of transcription error, or because of spreadsheet disasters or other mechanical problems. The nice thing about mistakes is that they can be corrected without controversy. Sometimes those who make the mistakes will even be grateful for the corrections. The bad thing about mistakes is that they so often avoid detection—after all, if someone you trust and know to be ethical presents you with a set of number-based conclusions, you probably won’t investigate the conclusions and the numbers behind them.
Ethical, trustworthy people can also produce distorted figures, usually by accident or misunderstanding. I have produced charts that were distorted, simply because the software I was using had unfortunate defaults and I didn’t immediately catch the problem. In most cases, I am willing to assume that distortions are innocent—except when it becomes fairly clear that they are intentional. Intentional distortions are perhaps the most dangerous, because the underlying numbers may be sufficiently complex or sophisticated that the distortion will be difficult to uncover.
Pay attention. Think it through. Ask tough questions, and never assume that the computer is always right. Those are all easier said than done, but they are at the heart of effective numeracy.
The engineer asks another question, frequently and urgently: What factors have been missed? Nothing is ever as simple as people would have you believe. No new development takes place in a vacuum; no product can be sold without customers; the most “logical” distribution change does not make any sense if people don’t like the results.
Tomorrow’s librarians will face nonsensical projections and calculations just as much as today’s do. Real-world numeracy helps you to deal with such nonsense. It’s not uncommon to say, “Ugh. Math,” but it’s a mistake.
Cites & Insights is sponsored by YBP Library Services, http://www.ybp.com.
Opinions herein may not represent those of PALINET or YBP Library Services.
Comments should be sent to email@example.com. Cites & Insights: Crawford at Large is copyright © 2008 by Walt Crawford: Some rights reserved.
All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.