The Social Network Scene, Part 1
I’m one of those grumps who regard “social media” as a nonsense term (all media, whether online or offline, are to some extent social, and I don’t find that the term defines anything useful)—but social networks are real, as they have been ever since humans and animals started congregating in groups of more than two.
Most folks mean internet social networks when they say “social networks,” to be sure. The Rotary is a network of social networks, as is the PTA, as are the Girl Scouts, as are churches. That may not be all they are, but it’s part of what they are. In some ways internet social networks are weaker than face-to-face social networks: You’re mostly dealing with text, a low-res version of person-to-person interaction, and there are probably people in your internet social networks who you’d never dream of having in your face-to-face social networks. (I was about to say “real-life” or “real-world,” but LinkedIn, Twitter, FriendFeed and those other ones are certainly real-world enough, although I’m not sure I’d make that claim for Second Life.)
This relatively specific section heading has emerged from the Great Cites & Insights Reduction of 2012 because it doesn’t fit well elsewhere and because I’ve gotten more involved in thinking about and researching social networks. (Originally, I’d been researching blogs, and I do not regard blogs as social networks—they’re online publishing, a whole different ballgame.) If nothing else, my survey of the social networking presence of 5,958 public libraries in 38 states establishes me as a tenacious social network researcher, if not necessarily a guru or big success at it.
All of which is preface for a set of cites & insights covering social networks in general (and, for a few lesser ones, in specific)—the “sn” tag in my Diigo library. (Don’t bother looking. I changed these items to “snx” as I printed out leadsheets, and I delete Diigo tags as I write about items.) That’s as compared to a number of more specific tags for possible future essays: As of February 24, 2012, the list includes sn-delicious (12 items), sn-fb (29 items), sn-googleplus (53 items) and sn-twitter (46 items). Meanwhile, some notes about a variety of social network commentaries over the past three or four years…mostly arranged chronologically, oldest first.
Part 1? It’s already clear that the 43 items that were tagged “sn” without a qualifier offer me too much opportunity for comment to put them all into one essay, especially since this essay doesn’t begin to be substantive enough for a one-essay issue. So I’ll have to split them into several parts—at least two, probably three or four.
That’s Marcia Conner’s title for this April 6, 2008 post on the Fast Company blog—a post that begins with an odd disclaimer: “This blog is written by a member of our expert blogging community and expresses that expert's views alone.” The title “expert blogger” is interesting by itself, as is the felt need to explicitly disclaim Fast Company endorsement. Here’s the opening:
A woman, who as a girl in gradeschool taunted me enthusiastically, contacted me through a social network site asking if I planned to attend an upcoming reunion.
At first I didn't think much about it. I assumed she was on some committee for the gathering of once inelegant adolescents and she was contacting me as part of her new do-good campaign.
I replied in a perfunctory noncommittal way, and tucked her married name into my mental rolodex of people to avoid calls from if they appear on callerID.
She wrote again, reporting I looked healthy in my miniature photo and that I must be happy, how did I do it? Then she asked if we could connect directly on the site so we could correspond again.
After a little more discussion of this particular case she gets to the meat of the issue:
Should our social networks include only people we like, those we want to socialize with, and as my friend Jimm says, "Those we’d agree to take camping"? I don't believe they were designed to be personal discomfort-free zones. Do you?
No. Or sort of. Or…well, consider the last two paragraphs carefully:
If this former mean-girl (who has been nothing but sweet and cheerful in our recent communiqué) has a relationship with someone who can help me close an important deal or land a dream assignment, it should not matter she invited my friends to a slumber party in fifth grade while stridently leaving me out. However, what about announcing to everyone in the junior high cafeteria I'd sneezed peas out my nose (which I hadn't, it was mustard)?!
All social situations offer us the opportunity to be uncomfortable in unexpected ways. We shouldn't expect online social networks to be any different. It just seems easier to avoid the awkwardness when there's no auto-reminder in seven days you haven’t yet engaged.
I have loads of people who I chat with (somewhat asynchronously) on Friendfeed and would never take camping (but then, I’m an introvert and a little shy, and there are very few people I would take camping)—but nobody who’s gone out of their way to torment me in the past. If I treated social networks as primarily business tools, as ways to “close an important deal or land a dream assignment,” I’d probably cast an even broader net. But to me, that’s LinkedIn. I’d like to think that most people who are active participants on Facebook, Twitter, Google+ and especially places like Friendfeed aren’t only there for the money and self-advancement. That’s not social networking, it’s business networking: “How can this person forward my own agenda?” I guess business networks should be socially awkward, and I’d probably be a real downer at a Tupperware “party.”
After nearly four years, there are no comments whatsoever. Given the thrust of this post, I found myself clicking through to Conner’s website. She’s a consultant on “social learning” and collaboration. Why am I not surprised? Here’s the first paragraph of the “Work with me” section of her fancy, rotating-billboard site:
For large corporations, I address change readiness and overcome stymied collaboration with strategic consulting, cultural assessments, level-setting education, and blueprints to remove the obstacles in your path to success.
I’m impressed. Or not. I do sometimes wonder how many people effectively treat all social networks as variants of LinkedIn, as places where they’re primarily pushing their own agenda. Maybe I don’t want to know…
Remember the odd claim that the power of a network increased as the square of its numbers, or something like that? Metcalfe’s Law, “the value of a telecommunications network is proportional to the square of the number of connected users of the system.” Whether the “law” makes sense or not from a pure telecommunications perspective, it’s bizarre-world when you substitute “social” for “telecommunications,” yet some social networks seem predicated on the idea that you should be constantly adding more connections. If you find it great to connect with 100 people on Facebook, it would be nine times as great to connect with 300, and 100 times as great if you just had 1,000 contacts. (David P. Reed ups the ante with “Reed’s Law,” which says the factor is really 2 to the nth, where n is the number of contacts. That makes a network of 20 people one thousand times more valuable than a network of 10 people—actually 1,024—and so on, with each additional 10 people making the network 1,024 times more powerful.)
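If you want to check the arithmetic behind these “laws,” a few lines of Python will do it. This is purely illustrative: the functions compute the proportional “value” each law assigns, and the numbers are ratios, not measurements of anything real.

```python
# Toy comparison of "network value" under Metcalfe's Law (n squared)
# and Reed's Law (2 to the n). Illustrative arithmetic only.

def metcalfe_value(n):
    """Metcalfe's Law: value proportional to the square of the user count."""
    return n ** 2

def reed_value(n):
    """Reed's Law: value proportional to 2^n, the count of possible subgroups."""
    return 2 ** n

# Metcalfe: 300 contacts vs. 100 contacts -> 9 times the value
print(metcalfe_value(300) // metcalfe_value(100))  # 9

# Reed: 20 people vs. 10 people -> 2^10 = 1,024 times, not 1,028
print(reed_value(20) // reed_value(10))  # 1024
```

The second ratio is where the “1,028” figure goes wrong: each additional 10 contacts multiplies Reed’s value by exactly 2^10 = 1,024.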
Or not. This theme will come up a couple of times in this meandering journey. This time, it’s actually the name of a wiki page, CommunityMayNotScale. (At first, I thought this was a wiki—but it turns out to be just one page on MeatballWiki, “a community of active practitioners striving to teach each other how to organize people using online tools.”) If “community” equates to social network (or face-to-face social network), it’s an interesting perspective. The first two paragraphs on the page:
A community relies on trust and respect. These qualities are easy to find in communities where all of the members recognize and know each other. They are much harder to find among people who are interacting mostly as strangers.
When a group grows from dozens of individuals to thousands, it becomes impossible to feel any real acquaintance with more than a fraction of the population. When this happens, community standards and unwritten rules stop working. The group loses focus. Things fall apart.
Maybe the best quick response to any of these “laws” is one that appears on the wiki: The anti-reductionism law, “Every attempt to capture a human-interaction phenomena by just one number, however smartly derived, is doomed to failure.”
The page includes some good commentary about the need for (and danger of) subcommunities. There’s philosophizing, some of it pointed when you consider social networking issues—such as this (slightly excerpted and anonymous):
Currently what intrigues me is that communities are doomed to failure. So when activity dies down on this Wiki (or when it gets too much to handle!) I'll (gasp!) have to find some other place to carry on this sort of conversation? What if there are other communities that I'll never find that talk about this, and I'll never benefit from their conversation? These sorts of questions nag me a tad, but I guess I'll file it under the unfortunate consequences of HumanNature?. I guess I'm simply sad to see the passing of communities. (E.g., SlashDot is certainly not what it was, though opinions differ on whether what it has become. My own opinion is that it's degrading terribly.) Maybe I should focus more on the excitement of discovering new ones like MeatballWiki.
My other point of curiosity is why we even pursue OnlineCommunity. There are perfect outlets for community in the RealWorld amongst our neighbors, coworkers, etc. Granted, OnlineCommunity enables us to commune on the basis of topics of interest, rather than physical proximity, but is either one intrinsically more worthwhile than the other? Why have I only met one of my neighbors? (My wife and I try to fix cookies for new neighbors.) What do we gain from OnlineCommunity that we cannot get from PhysicalCommunity?? And vice-versa? Nebulous thoughts in my mind right now as I explore my priorities wrt computing, my job, and simply enjoying life.
For those of you who get as tired of CombinedWordWikiSpeak as I do, apologies—I’ll avoid more long quotations. Indeed, I think that’s all I’ll say about the page—and about the larger wiki, which soon becomes too self-referential and layered for my liking. The wiki seems to be more about wikis as communities than social networks as such, and is interesting in its own right. My sense is that wikis as a movement or methodology have declined from their glory years, perhaps even more so than blogging, but I could be wrong. As far as I can tell, the page referenced here has been largely inactive since late 2009, although the wiki continues to have some activity. You might find it interesting.
Clive Thompson’s January 25, 2010 Wired column addresses the issue of social network scaling, and as is frequently the case for Thompson, he makes better sense than I’m used to from Wired. After noting conventional wisdom—the bigger your social network, the better—he’s “been thinking about the downside of having a huge online audience. When you go from having a few hundred Twitter followers to ten thousand, something unexpected happens: Social networking starts to break down.”
He offers a case in point: Maureen Evans, who started tweeting in 2006, got almost 100 followers, enjoyed the conversational nature of Twitter, and started tweeting recipes.
She soon amassed 3,000 followers, but her online life still felt like a small town: Among the regulars, people knew each other and enjoyed conversing. But as her audience grew and grew, eventually cracking 13,000, the sense of community evaporated. People stopped talking to one another or even talking to her. “It became dead silence,” she marvels.
Why? I think I’m with Thompson on this:
Because socializing doesn’t scale. Once a group reaches a certain size, each participant starts to feel anonymous again, and the person they’re following—who once seemed proximal, like a friend—now seems larger than life and remote. “They feel they can’t possibly be the person who’s going to make the useful contribution,” Evans says. So the conversation stops. Evans isn’t alone. I’ve heard this story again and again from those who’ve risen into the lower ranks of microfame. At a few hundred or a few thousand followers, they’re having fun—but any bigger and it falls apart. Social media stops being social. It’s no longer a bantering process of thinking and living out loud. It becomes old-fashioned broadcasting.
His “lesson” is in the title: There’s value in obscurity. You can have lively, strange, open conversations among a few dozen (or maybe a few hundred) people—but when the conversation gets a little too big, it starts to shut down. “Not only do audiences feel estranged, the participants also start self-censoring. People who suddenly find themselves with really huge audiences often start writing more cautiously, like politicians.”
It’s the problem of the middle.
If someone’s got 1.5 million followers on Twitter, they’re one of the rare and straightforwardly famous folks online. Like a digital Oprah, they enjoy a massive audience that might even generate revenue. There’s no pretense of intimacy with their audience, so there’s no conversation to spoil. Meanwhile, if you have a hundred followers, you’re clearly just chatting with pals. It’s the middle ground—when someone amasses, say, tens of thousands of followers—where the social contract of social media becomes murky.
Admittedly, I’m one of those confused old souls who find Twitter’s “conversations” unsatisfactory, maybe because I’m not camped there all the time. Friendfeed’s conversational mode works far better for me (as would, I suspect, the clone of that mode in Facebook—and the emulation in Google+). I think one reason Friendfeed works well for me in general is that it’s a “failure” as a social network: It’s never grown much beyond a million or two. Even there, though, the LSW community has more than 700 members; if those members were all active (they’re not), I wonder whether the conversations would diminish. Right now, I suspect, we LSW folks censor ourselves a bit more than we might like, at least at times.
The comment stream? A couple of realists point out that the primary purpose of most social networks is to expose as many people as possible to ads in as many ways as possible, so companies have no motivation to encourage obscurity. One interesting point becomes clear toward the end of the comments: Wired does a crappy job of monitoring older content for spam comments.
When I Get Back To You. That’s the full title of a June 10, 2009 post by John Scalzi at Whatever—and as you might expect, it’s just full of good sense. Scalzi starts off with a New York Times piece on smartphones “morphing from luxury to necessity” with this observation on responding to email or text:
“The social norm is that you should respond within a couple of hours, if not immediately,” said David E. Meyer, a professor of psychology at the University of Michigan. “If you don’t, it is assumed you are out to lunch mentally, out of it socially, or don’t like the person who sent the e-mail.”
Here’s Scalzi’s one-paragraph response, which might actually be all that needs to be said:
All together, now: Bullshit.
MSWord thinks “All together” should be “Altogether.” Word is wrong, as is frequently the case with its grammar/spelling corrections. Scalzi, on the other hand, is right. He does provide a little expansion of his one-word summary. He makes three basic points, and I’ll quote just the first sentence of each—after all, you really need to get the Full Scalzi by reading his blog (and the extended comment streams), and I don’t see a waiver of copyright:
First: If you are the sort of person who believes that all your e-mails/texts must be responded to instantaneously or sooner, you may be a self-absorbed twit…
Second: If you’re the sort of person who believes that all e-mails/texts must be responded to instantaneously or sooner, that probably means you’re ignoring something important right in front of you, like the other person at the table, or traffic on the freeway, or a large dog about to savage you because you’re carelessly walking on his lawn…
Third: Can we all agree that we don’t want to live in a world where we are obliged to respond to e-mails/text in an unrealistically short period of time, lest we be thought an enormous douchenozzle?...
Oh my yes. Scalzi does have a smartphone and does use it for email and texting, but, well… (and with a disclaimer regarding his wife, who of course he responds to as rapidly as possible)
Not answering immediately does not mean I don’t like you; it means I have my own life and I’m busy with it. If you can’t manage to grasp that basic and obvious fact, that goes into the bin marked “your problems,” not mine.
I have to say this has almost never been an issue with me: Either library folk are more understanding of asynchronicity, or those who tried to converse with me have already given up and regard me as an antisocial jerk. I’d like to think it’s the former. I use Friendfeed a lot, more than I probably should—but I also keep it on pause, all the time, because I can’t cope with the InstaUpdates.
Only 87 comments, which isn’t a lot for Whatever. I see a lot of expansion on Scalzi’s response and very little pushback. For example, JJS:
One thing I like about e-mail is that it will sit there patiently until I get home or get unbusy. Even though I am retired and have lots of free time, I still am not going to sit in front of my computer all day just to see if I get an e-mail that “must be answered immediately.”
And if some self-important twit decided that means I don’t like him/her, s/he is probably correct.
Or MattMarovich (before a much longer paragraph):
I’ve had some one try to tell me that it was rude that I didn’t respond to their e-mail right away.
I told them that it was rude for them to assume they had any say in how I ran my personal life.
About the only pushback regards work email, and of course that’s a different situation (as Scalzi notes in the comment stream), although even there it’s absurd to expect instant responses to most email (that’s what phones are for—or, better yet, walking over to the other person’s desk/cubicle/office/phone booth). I’m a little bemused by this, from “rick”:
Now, at some point a lack of a reply is either rude or unprofessional assuming the person involved is someone with whom you have a relationship (personal or professional).
Really? A fair number of the semi-personal/semi-professional emails I receive don’t appear to require a response. Should I be saying “Thank you for sending that!” each time somebody emails something? Really? (Yes, I know, I’m supposed to do a “Thanks for the comment!” every time somebody comments on Walt at Random. I’m such a baaaad blogger.)
And this, from “coolstar”:
hmmm, I consider NOT answering emails after you read them to be uncivilized. I’m in academia, and I tell students I’ll NEVER answer the phone if they call, but can get back to them very quickly thru email. I treat friends mostly the same way in regards to email. On the third hand, I consider smartphones and cell phones in general to be the height of uncivilzed behavior and only own a cell phone for emergencies. I suspect most people in academia feel more or less the same about email. Twitter? Text messaging? corporal punishment isn’t ALL BAD. (ask your local k-12 or college teacher about texting…..)
So not answering the phone is entirely reasonable and polite, but not responding to emails is “uncivilized”? As Scalzi would say, Whatever.
The best comment of the bunch might be this from George William Herbert, even if it is slightly offtopic:
“But the main reason I have the phone is so that if my car flips and I’m pinned under two tons of Honda steel, I can call for help.”
Wow, is it very common or standard that whenever a Honda flips, another pair of Hondas immediately dogpile on top of it to get enough combined mass for the police and passers-by to take the incident seriously?
A basic 2012 Civic weighs just under 2,900 pounds and an Accord EX is about 1.8 tons, but what a great line!
Here’s a somewhat unusual perspective from 2.5 years ago, as expressed by Paul Benjou on July 26, 2009 at Media Life: “Listen for the pop of social media.” By which Benjou means the pop of the balloon:
How foolish can we be? Plenty, it appears, even after we said we learned our lesson after the dot.com meltdown eight years ago.
We have billions being invested in what's called social media, from Facebook to MySpace to Twitter, and billions more to come, and yet no one has yet to figure out how to monetize them--make money.
He uses the $580 million purchase of MySpace as a horrible example of absurd valuation, and in that particular case it’s hard to argue: Rupert Murdoch managed to buy high and sell low, paying $580 million in July 2005 and selling for $35 million just six years later (in June 2011). Benjou seems to think MySpace is typical:
Facebook still has no business model that offers even a hint of promise for making money, and Zuckerberg has said, hey, no hurry, in three years we set about figuring that out. More growth is the near-term focus.
In just three years, Twitter had leapt to become the third-most-popular social network, and it too has no business model offering even a hint of return for investors…
Twitter is just the latest pretty, helium-filled balloon that everyone wants to hold until the novelty wears thin or the gas escapes.
The fact is, it's a good bet these social networking sites will never figure out a workable business model because there may not be one…
He believes that advertising just won’t work on social network sites because it’s “social interference” and because telemarketing has been so badly received. (Which is why there are no more telemarketing firms, right?)
One might argue that over time internet users will give in and accept advertising on their social networking sites. One might also reason that over time hell will indeed freeze over and Canada will indeed run dry. But is it the sort of bet anyone in their right mind would place billions on? No.
Or one might argue that smart site designers will find ways to add advertising that don’t bother users too much. Maybe Facebook’s sidebar full of right-wing ads (at least for me) is a case in point. Maybe Twitter’s “sponsored tweets” (which seem less deranged in general than Facebook’s “click here for a nutcase rightwing survey” ads) will work. Google seems to have found a way to sell a buck or two in advertising without wholly losing us.
He offers lessons to learn—e.g., that “we still don’t fully appreciate how different and unique a medium the internet really is.” That’s true. I, for one, don’t regard the internet as a medium (in any meaningful sense) any more than I regard paper as a medium (in any meaningful sense). The internet is a way of carrying messages; it includes many different media, just as print includes many different media. Newspapers aren’t magazines aren’t textbooks aren’t print-on-demand micropublications aren’t…why should the internet be different? The internet is a carrier, just as paper is a carrier.
In old media, if you were a Murdoch, you could throw billions at something, newspapers or magazines or television, and gain market share.
On the internet, you can throw billions at something and watch those billions disappear into a netherworld, never to be seen again. What matters on the internet is not bucks but imagination.
What matters in “old media” is also thought and imagination, at least in the long run—and it’s naïve to say that money doesn’t matter at all on the internet.
It’s been 2.5 years since this post. Facebook apparently had $3.7 billion in revenues in 2011, most of it from advertising, most of that from Microsoft-supplied banners. Even Twitter has some revenue ($140 million estimated for 2010—a little more than MySpace’s $109 million estimate for 2011), although not much.
Am I certain Twitter will be around for the long term? Not really. For that matter, I think it no more certain that Facebook will be a major player ten years from now than I do that MySpace would be a profitable investment for Rupert Murdoch. But this article’s considerably overstated: Sometimes, ads do work.
The title of Steven Hodson’s August 27, 2009 piece at The Inquisitr is “Is Social Media ruining the good old heated debate”—without a question mark. Hodson’s answer is clearly yes—and he seems to think this is a bad thing. He cites other posts with a common thread that “we are all becoming a bunch of agreeable wishy washy Charlie Brown types,” excludes trolls and issue-oriented blogging (and Slashdot) from the discussion, and defines his study space as follows:
What we are talking about is the Social Media arena where services like Twitter and Facebook are the face of social media networks. We are talking about those bloggers who deal with the whole social media ecosphere. We are talking about the marketers, PR people and other promoters of the whole idea of Social Media.
Maybe that last phrase is key: He’s focusing on “promoters of the whole idea of Social Media.” In which case, my response to the pseudo-question would be “Who cares?” As he discusses comments on other posts, he reveals that even those who Believe in Social Media (the repeated capital S and M can’t just be a typing problem) have problems defining it—e.g., the person who disagrees cites online journalism as a counterpoint. “Excuse me but none of those examples have anything to do with social media.” Aha: So media that are online and encourage comments are not social media?
If the point is that social networks tend to encourage agreement more than disagreement, well, yes, that’s probably true. After all, Facebook doesn’t have an “enemies and antagonists” flag, where you and someone else agree that you want to argue with each other constantly. Most social networks are social—they are designed to bring together people who have things in common. And social networks represent just one aspect of online communications, including places where disagreement is frequent and sometimes sharp, even without necessarily being hostile or trollish.
I could cite Friendfeed, and especially the LSW contingent, as a counterexample, but even there, it’s easier to agree than it is to disagree—and sharp disagreements are frequently misunderstood.
Hodson offers four basic reasons why “can’t we all just get along?” seems to be the prevailing theme in social networks: Time (it takes time to craft a reasoned objection and to defend one’s own viewpoint), attention span (some folks aren’t willing to follow lengthy discussions), fear (nobody wants to be called a troll) and closed circles (“closed” is the wrong word, but yes, social networks tend to encourage circles of people with similar views—explicitly so in Friendfeed, Google+, and Facebook’s new Circles feature).
The rest of the discussion makes it even clearer that Hodson’s audience of interest is solipsistic: It’s the Social Media Gurus, talking about Social Media to other Social Media Gurus. Talk about your closed circles! Think I’m joking?
This idea that Social Media is all about “goodness and light” can be seen in the popularity and reader, follower/friends, numbers. Take a look on Twitter and the Social Media leader board there and you will see that the “always positive” contingent has follower numbers that are through the roof, whereas those that like to push the limits, those that question the ‘status quo’ have a lot less followers e.g.: @1938media. When it comes to blogs it is people like Chris Brogan, Louis Gray and others who find their readership grow by leaps and bounds. Those on the other hand who constantly question the “social media party line” often find themselves relegated to the blogging hinterland.
Now blogs do seem to be part of social media, whereas when it suited Hodson’s argument he excluded them. In any case, we’re talking about the promoters here. The problem Hodson sees is that “all this warm and fuzzy can make things very boring and eventually drain the life out of Social Media.” If that means less blather from promoters and maybe the term Social Media disappearing altogether, I can only say “Hooray!” and pat Hodson on the virtual back.
There do not appear to be any comments. Maybe because the post is a trifle disagreeable?
In practice, there’s loads of discussion on the internet. I don’t believe social networks are the natural homes for sharp disagreement, and I don’t believe they need to be: There are lots of other venues, lots of other media carried over the internet.
There’s a post that I never got around to posting, having to do with a flavor of Hodson’s issue. To wit: It’s sometimes unclear whether somebody offering an opinion while on Friendfeed or Facebook is interested in alternative opinions—or only in agreement and support. That can be very troublesome.
For example, let’s say you post “I really like Veal Scallopini” on Friendfeed. You’ll get some people saying “Yum” or various badly spelled cute sayings, some mentioning restaurants that serve great veal scallopini, some mentioning other veal treatments. But you’re also likely to get someone saying “I don’t eat baby calves,” possibly somebody saying “Yuck!” and maybe somebody starting in on the merits of a vegetarian or vegan lifestyle.
That’s just one example. Somebody could be espousing the merits of a musical group, or an author they love, or a flick or TV show, or…
Sometimes there ensues a fascinating range of opinions. But sometimes the person making the original comment lashes out at anybody who disagrees, in essence telling them that their negativity is not welcome here.
Maybe it’s because I’m an introvert, maybe it’s because this is all happening in a text environment rather than face to face, maybe I’m not sufficiently tuned in to nuance—but damned if I can tell when somebody’s interested in alternative ideas and when they only want agreement. Maybe there should be a special emoticon that means “agree or shut up!” or one that says “I welcome disagreement.” Or maybe I’m just not wholly attuned to the idea that social networks don’t deal well with disagreement?
Dan Wallach used that title for this October 2, 2009 post at Freedom to Tinker (which has been a group blog for some time). It’s initially about Google Wave (remember Google Wave?), but it’s really about inconsistencies in the way social networks handle data sharing and comments.
It’s an interesting perspective, and since it carries a Creative Commons BY-NC license, I’ll share most of it (omitting the first para) before taking mild issue with it (and noting how things have and haven’t changed):
How am I supposed to know that there's something new going on at Wave? Right now, I need to keep a tab open in my browser and check in, every once in a while, to see what's up. Right now, my standard set of tabs includes my Gmail, calendar, RSS reader, New York Times homepage, Facebook page, and now Google Wave. Add in the occasional Twitter tab (or dedicated Twitter client, if I feel like running it) plus I'll occasionally have an IM window open. All of these things are competing for my attention when I'm supposed to be getting real work done.
A common way that people try to solve this problem is by building bridges between these services. [Describes some of those ways.]
The bigger problem is that these various vendors and technologies have different data models for visibility and for how metadata is represented…
Comments are a favorite area for people to complain…
Given these disparate data models, there's no easy way to unify Twitter and Facebook, much less the commenting diaspora, even assuming you could sort out the security concerns and you could work around Facebook's tendency to want to restrict the flow of data out of its system. This is all the more frustrating because RSS completely solved the initial problem of distributing new blog posts in the blog universe. I used to keep a bunch of tabs open to various blog-like things that I followed, but that quickly proved unwieldy, whereas an RSS aggregator (Google Reader, for me) solved the problem nicely. Could there ever be a social network/microblogging aggregator?
There are no lack of standards-in-the-wings that would like to do this. (See, for example, OpenMicroBlogging, or our own work on BirdFeeder.) Something like Google Wave could subsume every one of these platforms, although I fear that integrating so many different data models would inevitably result in a deeply clunky UI.
In the end, I think the federation ideas behind Google Wave and BirdFeeder, and good old RSS blog feeds, will ultimately win out, with interoperability between the big vendors, just like they interoperate with email. Getting there, however, isn't going to happen easily.
Among the relatively small group of comments is one from Khürt L Williams who says he’s “never had a problem getting twiiter, facebook, friendfeed, and my blog comments to follow me around the Web” and notes some of the paths and special tools (both Williams and Wallach are, or have been, Friendfeed users, and Wallach notes that Friendfeed could be a pretty good aggregator). Wallach says the third-party tools “really strike me as a kludge” and that interoperability should be integrated by design; “anonymous” asserts that the internet itself is a kludge. “Henson” says “I think we all just want a way to have a single presence online.” He offers another “we all” that I regard as only slightly less probable.
I dunno. I’m fairly active (by my somewhat asocial standards) in Friendfeed (nearing 10,000 posts and comments, and I spend a lot of time there), a little more active than I expected on Facebook and Google+ (although still infrequently, still with very few actual comments), vaguely present on Twitter and almost wholly inactive on LinkedIn and ALA Connect. I don’t want a “single presence online.” I don’t feed all my Netflix queue additions to Friendfeed or Facebook or anywhere else. While my blog posts automatically pop up on Friendfeed, they don’t on Twitter, Facebook or Google+, and that’s deliberate. I really don’t want Facebook as the only game in town, any more than I want Google+ to serve that purpose.
But that’s what they want. The services increasingly make it easy to import stuff automatically—and hard to export stuff automatically. That’s reasonable from the services’ perspectives: They need as many eyeballs for as many hours as possible, so they have product to sell to their actual customers, the advertisers. That’s reality as long as we have free social networking services. I suspect it limits interoperability—and I’m not at all certain that’s a bad thing. Of course, I’m not Wallach: I rarely have more than three tabs open at any time.
The previous set of section names for Cites & Insights included one that not only didn’t make the cut, it’s not reflected in the replacements: The Zeitgeist. It wasn’t used that often—five times in all, as far as I can tell—and the last one landed with such a “tree in the forest” non-effect that I pretty much gave up the idea. Iris Jastram suggested the name (actually “preserving the zeitgeist”) as something Cites & Insights does or has done, for which I thank her: Even if I dropped the section name, I like the idea.
The very first essay tagged as The Zeitgeist appeared in the Spring 2010 issue (and was the entirety of that issue other than a Bibs & Blather on sponsorship and the surprise loss of my part-time job). The essay-specific subtitle was hypePad and buzzkill.
I reread that essay recently as part of my ongoing process of interleaving old Cites & Insights printed issues in with my flow of other magazines. At this writing, I’m about two months behind on other magazines and slightly less than two years behind on C&I, but the latter’s deliberate: I insert one issue of C&I in front of each Condé Nast Traveler when that magazine arrives. By the time I reread an issue, I’ve long since forgotten it, so I can read it freshly. That’s an attempt to replicate the experience of reading my magazine columns (all of which are now defunct, but it was a good 27 years) in print, a few months after writing them.
So I read this essay. At first I thought it would be a prime candidate for a “wrong, wrong, wrong” mea culpa about how badly off I was on my projections. Nothing wrong with being wrong once in a while—and admitting it.
Except that I didn’t make any projections regarding sales for the iPad: That part of the article wasn’t about the iPad itself, it was about the sheer hype and hyperbole (not quite the same thing) before and immediately after its introduction. I don’t see any need to apologize for anything I said in the article. In fact, while the iPad has sold much better than most non-Apple-centric observers expected, it has not destroyed ereaders, it has not wiped out netbooks or PCs or open computing (unless you’re one of those for whom a slowing of sales increases constitutes “wiped out”), and I don’t believe it’s changed everything. I’m still not part of the target market. My brother and sister-in-law are (they travel a lot more, for one thing), and they both have iPads (one of them is on a second-generation unit). They love them. They’re very intelligent people. We’ve tried them out. So far, we’ve found no particular desire to buy one—although there have been uses for which I’ve suggested that my wife might want one. So far, she doesn’t. If we wanted to spend more on computing and media consumption, switching to cable broadband from our increasingly-flaky DSL would probably come way ahead of buying iPads. (By the way, Apple’s down to 57% of the tablet market…but you can’t prove that by pundits who still proclaim that there is no tablet market, only an iPad market. Using that logic, there is no personal computing market, only a Windows market—except that Windows still has more than 90% market share.)
As for the buzzkill section, for which the actual section heading was Buzzkill: Google Screws Up, I still think that’s a fair summary. Remember Google Buzz? How it was an instant success—because Google simply dumped everybody into it, populating your “social network” with email contacts? It was pretty much a disaster, and Google bailed out. Google+ may not be perfect (not by a long shot!), but it’s better.
I’m going to quote the final subsection of that essay, “Thinking about the Parallels.” I believe it’s held up pretty well:
Both Google and Apple are large companies in Silicon Valley, both of which rely heavily on user trust and faith. Both have groups of admirers who proclaim they can do no wrong and assail doubters.
As far as I can tell, Apple didn’t actively generate the level of hype, although the company certainly did its share of leaking and dissembling. Most of the hypePad story is about reactions and expectations, not about the device itself or Apple’s handling of it. I’ve never been much of an Apple person, and I’m not a great fan of Steve Jobs. That said, and discounting nonsense like “magical” and “revolutionary,” the iPad will succeed or fail largely on its own merits. While those merits may not meet my needs—and while I do believe you’re better off thinking of the iPad as an appliance, not another kind of computer, and that the closed model is dangerous—there’s no doubt its merits are real. It’s up to the public, early adopters and others, to decide whether the tablet form factor finally makes sense. It’s up to other companies to raise the bar that the iPad sets—which, depending on what people are looking for, may be easy or difficult.
Google was in charge of its own destiny. Google screwed up big time. I’ve generally been a cautious fan of Google. I like Gmail a lot. I think the Google Books project has many good aspects and could have been a blow for fair use (if Google hadn’t caved). I’ll be more cautious in the future about turning any part of my virtual life over to my former neighbors in Mountain View. Where I’ve usually been negatively disposed toward Apple, I’ve usually been positive (if cautious) about Google. In this case, Google screwed up. With any luck, Buzz will go the way of Orkut and Google users will get a lot more cautious.
Apple +1, Google -1. Is that a fair parallel?
Now a quick confession: This began as a blog post and was copied for use in The Front in this issue—but as I was organizing items in Diigo, I found three that relate primarily to Google Buzz. So I’ve moved the other stuff here, followed by notes on those three items.
I may not have the right number of zs in the title of this February 10, 2010 post at Informationoverlord, but you get the idea.
And so it came to pass that Google decided it wanted to be Friendfeed. Yes, the Gman has rolled out its attempt to get in on some Twitter/Friendfiend/Facebook Lifestreaming action. Are you excited? No, neither is anyone else really. We remember that Google bought Jaiku a few years back, sat on it, did nothing and then stopped supporting it and left it essentially to die. In case you don’t know, Jaiku was the first real challenger to Twitter—and, get this, it was BETTER. No, really, it was. When Google bought it I was one of a number of people who thought that they were going to wipe the floor with Twitter with it. Back then they could have done it, Twitter was still mostly free of celebs and indeed anyone other than web2.0 obsessives, but they did nothing.
The writer throws in a quick slap at Google Wave—“a ‘er, sorry but no one is really sure what the hell this is actually for yet’ system”—and provides a fairly thorough discussion of Buzz. The good? You were likely to be tempted to try it if you already had a Google account and Gmail. But without a Gmail account and Google profile, you really couldn’t use it, and if you already liked Facebook jes’ fine, why would you care? (That’s a huge “if,” to be sure.) Also, routing everything through your Gmail inbox was a bad idea. The writer also offers some comments about Google’s skill at seamlessness, and gets at what Buzz really is all about: “Mobile and advertising.” Oddly, as a non-mobile user, I didn’t get that first part.
Does it Fly
Yes and No. As with Yahoo’s attempt last year, if you live your life in the email client then there is a good chance that you might find yourself using Buzz, even if you are only using it as a lifestreaming service. Are people, even Google geeks, going to abandon Twitter or Facebook for it, no. Could Google conceivably get them to use buzz to interact with those services—especially for status updates—absolutely.
In practice, this didn’t happen. Buzz killed itself off pretty rapidly, and Google came back a couple of years later with Google+.
That’s Johnny Worthington at JohnnyWorthington.com, posted February 11, 2010. After a note about “being let into the fairground while they’re still setting it up” and how his wave account’s become unusable, he gets to the point:
[t]he use of social media has now matured. There is a certain level of expectation for features such as selective hide and lists. I want a scalpel, not a sword. I have spent many hours crafting my FriendFeed and Facebook instances into carefully managed gardens. Just because you’re shiny and new doesn’t make me want to invest the same amount of care if the tools for such management are still ‘coming soon’.
I don’t need ANOTHER social media space, so you better shit gold bars straight out of the gate or you get put in the ‘meh, I’ll keep an eye on you’ box.
Or you drop it, wait a while, add a lot more development and restart with a new name…
Long title and an interesting post, by Jeremiah Owyang on February 11, 2010 at Web Strategy—and I’m mostly pointing to it as a snapshot in time. Owyang says his “career mission” is “To cut out the hype and help companies make sense of what to do. For those fraught with information overload, this definitive matrix distills what matters.” Note “companies,” not “people,” and that’s probably significant. Indeed, his “executive summary” on Google Buzz is full of the bafflegab I’ve come to associate with “social media” folks, especially those selling to companies. And, to be sure, he’s got that bottom-line attitude: “The feature set of newly spawned Google Buzz isn’t important, what matters is their ability to aggregate social content which will impact search strategy for businesses trying to reach consumers…”
That’s followed by a long five-column table offering his take on four social networks in each of several areas. He doesn’t think Twitter will be a destination; as of 2010, he regards Myspace users as “heavily engaged” and thinks that will continue; and lots more. He’s surprisingly negative about Twitter and positive about Myspace (he’s repetitive about his assertion that Twitter will become an invisible utility) and if you have the proper corporate mindset, it’s at least interesting. Quite a few comments, but given that most of those I checked were back-slapping agreement from other ad and SEO folks, I didn’t read the whole group. One thing becomes clear, and is probably something useful for people to remember: Those who make money from social networks think of them as “B2C channels”—business to consumer. Any actual conversation among “consumers” (not people, not citizens, consumers) is peripheral.
What better way to end this assortment of mostly two- to three-year-old blather about social networks than with a post related to one of my favorite bugaboos, “personal branding.” Not the kind that happens in the Haight and involves heated metals and flesh, but the far less wholesome idea that we should all treat ourselves as brands, as little tiny corporations intent on selling ourselves.
This particular screed is entitled “10 Ways to Get Fired For Building Your Personal Brand,” it’s dated October 19, 2009 and it’s by Dan Schawbel, who is “the Managing Partner of Millennial Branding LLC, a world renowned personal branding expert. He is the international bestselling author of Me 2.0, and the publisher of the Personal Branding Blog.” If you believe in personal branding as a healthy or necessary activity, you may already subscribe to his blog—and you’re probably not reading this anyway, or doing so only to sneer at my Luddite lack of enthusiasm for treating self as corporation.
In this case, Schawbel’s focusing on what you probably shouldn’t do if you’re currently employed and want to stay that way.
I view web 2.0 technologies as the driving force that converges our professional and social lives. Who you are and how you behave outside of work can impact how you’re perceived inside of work and vice versa. The way the world works now is that you have to spend more time thinking about your actions than you did ten years ago because words spread faster and they are accessible by everyone.
The ten ways, without the sometimes-lengthy commentaries?
1. Friending your manager on Facebook and then complaining about your job.
2. Putting your personal brand in front of your company’s brand.
3. Complaining that your company blocks social networking sites.
4. Attracting the wrong attention to your company’s brand because of your own.
5. Announcing your new job on Twitter when you’re still employed.
6. Thinking you’re superior to older workers because you’re tech literate.
7. Wearing rags to work because it’s part of your brand.
8. Posting inappropriate photos on Facebook, forgetting that your profile is public.
9. Spending more time on yourself than being productive during work hours.
10. Calling in sick, when you’re not, so that you can focus on your brand.
I gotta love #9. I guess as long as 60% of your work hours go toward your (current) employer, you’re OK. Otherwise…well, this all seems to boil down to “If you’re working for someone else, you might try to be as little of a douchebag as a ‘personal brand’ builder can be.” Schawbel closes by saying three times that you should use common sense—but, in my worldview, if you had common sense you’d drop the “personal brand” nonsense anyway.
Comments should be sent to firstname.lastname@example.org. Cites & Insights: Crawford at Large is copyright © 2012 by Walt Crawford: Some rights reserved.
All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.