disContent
A Twofer: Two Favorite disContent Columns
Just for fun, I’m throwing in two of my favorite “disContent” columns from EContent Magazine—one recent and short, one older and longer.
10. Putting things together into a list seems to connect them. Surely you’ve seen lists where some elements don’t quite seem to fit—or where the organizing principle seems forced. Not a problem. It’s a list. The title connects individual elements, even if that connection is artificial. You can be philosophical about this: Bogus lists encourage people to think about possible connections. Or you can be realistic: A lazy writer spots 10, 15, 25 or 42 items that can fit under a title, no matter how ill the fit.
9. Lists are quotable, searchable, Tweetable. Honorable bloggers, Tweeters, Facebookers, and FriendFeeders will link back—but they’ll probably use one item at a time. Great! Just make sure topic phrases are less than 140 characters long and paragraphs run less than 140 words. You’re on your way to big-link love. A good 20-item 1,600-word list probably results in 10 times the links of a single discursive 1,600-word post or article and probably takes less than half as long to write.
8. Lists are typically made up of short independent paragraphs, great for people with short attention spans. If you believe some gurus, we’re all losing our ability to concentrate for long periods of time—and a “long period of time” might be the time required to read a coherent, single-focus article or even an 800-word column. But almost anybody (except possibly those who have become true Twitterphiles) can focus long enough to read an 80-word paragraph—like this one.
7. Lists almost write themselves. Not only can you throw in things that don’t belong, you can reuse the same topic phrases (full sentences are so 20th century!) with slightly different slants and wordings. Once you have your topic phrases (or websites, or what have you), writing the paragraphs couldn’t be easier. If your list is websites, you describe each one. If there’s substance, it’s still easier to write a list element than most any other paragraph. That’s particularly true because…
6. Lists eliminate the need for smooth transitions. Hey, it’s 2009. Writing a coherent sentence is becoming a postgrad skill. Writing a coherent paragraph is hot stuff. Good editors expect that you’ll connect those paragraphs to create a narrative flow. Why, I’ve had editors (hi Michelle!) forbid subheadings in columns to force me to think about the flow of an entire column. But nobody expects one list entry to flow into the next entry; they’re supposed to change abruptly.
5. Lists neither require nor reward full attention or close reading. We’re all supposed to be multitasking—reading while watching TV while texting on a cell phone. Lists suit multitasking: Half a minute’s reading (10 seconds’ reading!) gets you through a single paragraph, and if all you really get is the topic phrase, that’s OK. For that matter, slowing down and paying full attention to the list won’t help much: There’s nothing deeper to understand.
4. With luck, you can expand a list into a manifesto, then into a best-selling book. Not only can you build popular blogs from nothing but lists, you can make much more from them. What might have been a plain list can, with lots of near-repetition and other easy creative effort, become a manifesto. Then you need only add a couple more paragraphs after each point and shazam! You’re a best-selling author.
3. Numbered lists imply ranking without requiring actual effort. After all, this article isn’t just some random number of items. It’s 10 items and they’re numbered from 10 to 1. That must mean the 10th item is least significant and the first item most, right? The beauty here is that you don’t have to demonstrate significance—it derives from the act of numbering. What? You think No. 4 is more important than No. 1? Well, you’re entitled to your (obviously wrong) opinion.
2. People love lists. Why not? They’re easy to read, they rarely require deep thought (or even shallow thought), they can be quotable. Sometimes you get entire magazine issues consisting of nothing but lists—and you can bet those issues are widely read. Fifteen ways to seduce your neighbor; 10 ways to speed up Vista; the top 25 reasons X will do Y. The possibilities are endless, but the lists are never long enough to pose reading challenges.
1. Lists are easy ways to write articles and columns—much easier than actual writing. This column was inspired by a worldly personal computer magazine that had a “special list issue” where all the articles were numbered lists (instead of half or so, which would be typical). I noticed that the issue was remarkably fluffy and must have been unusually easy to put together. So was this column.
Quod erat demonstrandum. No, Michelle, I won’t pull this stunt again for at least five more years.
Let me list the 25 reasons this is one of my favorite columns. On second thought, I won’t bother. Lists still strike me as lazy substitutes for journalism and writing.
You probably create econtent that quotes the results of surveys and statistical analysis. You probably run stories with headlines and lead paragraphs that overstate results and may be misleading in other ways. I’m not calling on all econtent creators to avoid overstated, misleading, and badly justified projections (though that isn’t such a bad idea). I am suggesting that it wouldn’t hurt to be aware of some of the problems with surveys and statistics.
What’s wrong with online surveys? For the insta-polls on so many web sites, a better question is “What isn’t?” The questions are frequently badly worded, but that’s the least of it. Some online polls register all responses—including those from bored people and axe-grinders who just click, and click, and click again. Others make some attempt to prevent multiple voting, either by cookie (easy to defeat!) or by checking IP address. That may be a little better, but not all that much.
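Why IP checks are weak is easy to see in a sketch—this is a hypothetical poll back end, not any real site’s code:

    # naive "one vote per IP" deduplication for an online poll (hypothetical)
    votes = {}

    def record_vote(ip: str, choice: str) -> bool:
        if ip in votes:
            return False                     # rejected as a duplicate
        votes[ip] = choice
        return True

    record_vote("198.51.100.7", "yes")       # accepted
    record_vote("198.51.100.7", "yes")       # rejected -- yet a reconnect or open
                                             # proxy hands the same person a fresh
                                             # IP, while an office full of distinct
                                             # people behind one address shares a
                                             # single vote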
High-profile online polls tend to be dominated by special interest groups with instant response lists, true believers with time on their hands, and others intent on showing that their version of the truth is the only one that matters. Even without deliberate attempts to unbalance online polls, they’re mostly a toy for people who spend too much time online. Low-profile polls—those that aren’t political or are held within a relatively closed community—may be a bit more plausible, but it’s hard to take most of those seriously.
Remember “nine out of ten doctors”? Ever wonder whether that really meant ten specific doctors, one of whom wouldn’t take the cigarette company’s consultancy fee? You see plenty of statistics and results these days based on little more than a handful of responses. That isn’t to say small studies are meaningless—just that their meaning is anecdotal, not statistical. When a hundred people tell you something about any aspect of American society, projecting those results to society as a whole is worthless.
Sometimes a study’s overall size is large enough to give it some likelihood of meaningfulness, but the results include all sorts of demographic breakdowns, sometimes involving much smaller numbers. If you see comments about the answers provided by male Caucasians ages 40-54 with master’s degrees or better, who earn less than $25,000 per year…take a good look at the number of such responses in that big survey. I’ve seen more than one major study where at least one “important” result was based on fewer than 50 survey responses.
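The column doesn’t spell out the math, but the textbook margin-of-error formula shows why 50 responses can’t carry much weight. A minimal sketch in Python, assuming a simple random sample and the worst-case 50/50 split:

    import math

    # 95% margin of error for a simple random sample, worst case p = 0.5
    def margin_of_error(n: int) -> float:
        return 1.96 * math.sqrt(0.25 / n)

    for n in (1000, 100, 50):
        print(n, f"±{margin_of_error(n):.0%}")
    # 1000 -> ±3%, 100 -> ±10%, 50 -> ±14%: a subgroup "finding" built on 50
    # responses can swing by double digits before any other flaw even enters.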
Then there’s faulty extrapolation—drawing trend lines from just two data points. That’s always iffy and sometimes worse. Say 54% of those surveyed in 1992 did something but only 47% of those surveyed in 2002 did the same thing. Can you reasonably project that the percentage will drop by a flat seven percentage points every ten years, so the activity in question disappears entirely in a little less than 70 years? Or is it a relative drop of 13% per decade (47% is 87% of 54%)—and, if so, what do you project, since you can keep dropping 13% indefinitely? (After 70 years, that would still leave 18%.) These are nonsensical questions. Without a longer series of data points, any extrapolation is unreasonable.
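Here is a minimal sketch of those two projections with the column’s figures plugged in (the loop and printout are purely illustrative):

    # Two naive extrapolations from the same pair of data points:
    # linear (flat point drop) vs. geometric (flat relative drop).
    start, end = 54.0, 47.0                  # percentages from 1992 and 2002

    point_drop = start - end                 # 7 percentage points per decade
    ratio = end / start                      # ~0.87: a 13% relative drop per decade

    for decades_after_2002 in range(8):
        linear = end - point_drop * decades_after_2002
        geometric = end * ratio ** decades_after_2002
        print(decades_after_2002 * 10, round(linear, 1), round(geometric, 1))
    # The linear trend goes negative before 70 years are out; the geometric
    # trend still sits near 18%. Same two numbers, wildly different futures.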
Faulty extrapolation makes for amusing looks back after a decade or so, but that’s of little comfort to those who have warned of crises or based business plans on small studies and faulty extrapolation.
When you see press releases and news stories based on polls and surveys, do they show the precise questions asked? If not, be on the lookout for slanted questions. You see them most often in online surveys, particularly at sites that favor a certain outcome.
You’ve certainly seen multiple-choice questions that don’t offer a reasonable choice. You’ve seen satisfaction surveys where a disastrous consumer experience could wind up looking pretty good if all the questions are answered: Customers were overcharged and got terrible information, but the stock was good, bathrooms were clean, service was prompt, and the store was laid out well. That comes out as “67% of responses were favorable.”
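The arithmetic behind that “67%” takes three lines—a sketch with invented item names standing in for the survey questions:

    # the six survey items from the example; True = favorable response
    responses = {
        "pricing accurate": False,     # overcharged
        "information helpful": False,  # terrible information
        "items in stock": True,
        "bathrooms clean": True,
        "service prompt": True,
        "store layout good": True,
    }
    favorable = sum(responses.values()) / len(responses)
    print(f"{favorable:.0%} of responses were favorable")  # 67%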
Any good political pollster knows how to word “Have you stopped beating your wife?”-type questions so they seem objective on first reading. But that assumes you even see the actual questions rather than a polished interpretation of the results.
Here’s one that may be less common these days: Confusing correlation and causation. Are rainy days caused by people carrying umbrellas? The correlation is certainly strong, and (given decent weather forecasting) the umbrellas typically appear before the rain. If that example seems ludicrous, how do you know that other claimed causative factors aren’t equally ludicrous?
Quite apart from inappropriate claims of causation, we see too many silly correlations. With statistical software it’s trivially easy to run a full set of correlations and backing statistics within a set of survey results—even if there’s no reason to believe that two factors could be correlated. Unfortunately, there’s an all-too-human tendency to accept mild correlations that fit our own prejudices, and to assume that such correlations imply causation.
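Just how trivially easy is worth demonstrating. A minimal sketch using only the Python standard library, with invented factor names and pure random noise standing in for survey results:

    import random
    from itertools import combinations

    random.seed(1)
    # 20 unrelated "survey factors," 50 respondents each -- pure noise
    data = {f"factor_{i}": [random.random() for _ in range(50)]
            for i in range(20)}

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    best = max(combinations(data, 2),
               key=lambda pair: abs(pearson(data[pair[0]], data[pair[1]])))
    print(best, round(pearson(data[best[0]], data[best[1]]), 2))
    # Among 190 pairs of noise columns, the strongest |r| typically lands
    # around 0.3-0.4: "correlated" factors with no connection at all.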
“As many U.S. adults read literature in 2002 as in 1982.” If the NEA’s Reading at Risk survey is correct, that’s a true statement—but it’s not one the NEA highlights. Here’s one that was highlighted: “In 1992, 76.2 million adults in the United States did not read a book. By 2002, that figure had increased to 89.9 million.” Here’s exactly the same information restated: “In 1992, 113.8 million adults in the United States read at least one book. In 2002, that figure increased to 125.2 million.” Not quite as desperate a situation? It’s the same set of facts.
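The restatement is nothing more than subtraction. A two-line check, where the adult-population totals are simply the sums implied by the column’s pairs of figures:

    # figures from the column, in millions of U.S. adults
    nonreaders = {1992: 76.2, 2002: 89.9}
    adults = {1992: 190.0, 2002: 215.1}    # implied totals (readers + nonreaders)
    readers = {y: round(adults[y] - nonreaders[y], 1) for y in adults}
    print(readers)  # {1992: 113.8, 2002: 125.2} -- same facts, opposite headline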
Did your site feature the “big drop” in book purchases in 2003—when 23 million fewer books were purchased in the U.S. than in 1992? That was a drop of 1.02% in unit sales (and a small rise in revenue)—but I’m guessing your headline didn’t feature that, or the 2.222 billion books that were sold. “American adults only buy an average of 11.7 books; literacy doomed” just doesn’t make it as a headline. But, of course, when sales of a niche technology jump from 1,000 to 3,000, that’s a “200% rise in sales!”—why mention the actual numbers?
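Both percentages check out in a couple of lines (the niche-technology figures are the column’s hypothetical):

    # the "big drop": 23 million fewer books on the implied 1992 base
    base_1992 = 2.222e9 + 23e6             # 2003 unit sales plus the drop
    print(f"{23e6 / base_1992:.2%}")       # 1.02% of unit sales

    # the niche technology: 1,000 units to 3,000 units
    print(f"{(3000 - 1000) / 1000:.0%}")   # 200% -- same math, louder headline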
Caution: Wild speculation ahead. What about surveys that use a sufficiently large sample, chosen with appropriate care, with carefully worded questions and cautious statistical analysis? Surely they must be as meaningful as ever.
Maybe not. We could be seeing the flip side of Dewey’s presidency. Remember? Polls taken by telephone resulted in a confident projection that Dewey would win—because the people with telephones back then tended to be wealthier and more conservative than most voters. What if substantial portions of America’s population just don’t respond to telephone surveys any more? What if those portions have things in common that tend to throw off survey results?
In 1982, I would answer telephone surveys. In 1992, I might have. Now I almost never do—and we get far more requests to participate than we used to. If it’s a survey on book reading, my wife and I may both be too busy reading books to spend five or ten minutes answering intrusive questions. So someone else with similar demographics answers instead—maybe because they don’t waste time reading books.
We’re just one case. Or are we? Do you respond to telephone surveys? Do your friends? (I’d take a survey, but…) What if a quarter of those who are well educated, involved in society and their communities—readers, thinkers, and doers—just don’t respond to surveys? What if that’s the quarter that’s most involved, that reads the most, that works long enough hours that home time has to go to all those other activities? What does that mean for survey results?
Maybe this is nonsense. Part of me hopes my wife and I are statistical outliers—that everyone else is only too happy to respond to surveys. But part of me doesn’t quite believe that. Even without this wild speculation, there’s plenty to watch out for when reporting on surveys and statistics.
People continue to misquote surveys—and surveys continue to have all sorts of flaws. Increasingly, you see organizations (especially Pew Internet) quoting a plurality result—even one as low as, say, 23% of those responding—as a universal result. (That is: if more people within a given “generation” answer A to a question than give any other answer, even when far fewer than a majority do so, the press release tells us that “generation X prefers A.”)
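A toy illustration, with invented answer shares, of how a 23% plurality becomes a blanket claim:

    # hypothetical answer shares for one question within one "generation"
    shares = {"A": 0.23, "B": 0.21, "C": 0.19, "D": 0.18, "no answer": 0.19}
    top = max(shares, key=shares.get)
    print(f'"Generation X prefers {top}"')       # the press-release version
    print(f"(only {shares[top]:.0%} actually chose {top})")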
As for my wild speculation—I’m increasingly inclined to believe it. I would also note that most telephone surveys don’t reach cell phones at all. But there are so many problems with how questions are worded and how results are presented that nonresponse bias—unwillingness (or inability) to respond—may not even be the most important factor.
Cites & Insights: Crawford at Large, Volume 11, Number 6, Whole # 141, ISSN 1534-0937, a journal of libraries, policy, technology and media, is written and produced by Walt Crawford.
Comments should be sent to waltcrawford@gmail.com. Cites & Insights: Crawford at Large is copyright © 2011 by Walt Crawford: Some rights reserved.
All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
URL: citesandinsights.info/civ11i6.pdf