The future may seem to be closer or farther off, depending on the era you're living in. That's one of the possible conclusions you can draw from this chart (embedded below), created by Stephanie Fox for io9, based on research we've done over the past month. We wanted to know whether there are historical trends in how far in the future we set our science fiction — and there definitely are. Here we present our data, as well as some preliminary conclusions about why the future changed so much from decade to decade over the past 130 years.
This all started with a relatively innocent musing on Twitter, from io9 pal Tom Coates:
Infographic I'd like to see: How far ahead did we set our science fiction at various points in history. That would be interesting.
To get our data, we worked with intrepid researchers Ben Vrignon and Gordon Jackson, who helped track down when "the future" was in a random sampling of over 250 works of science fiction (books, movies, TV, and some comics) created between 1880 and 2010. Purely for sanity purposes, we narrowed our search to pieces of science fiction widely available in English, in America, though the works sampled include several pieces of European and Japanese SF. (We were not able to sample nearly as many works as we would have liked, so if you would like to expand on this project, please get in touch and I can share our dataset with you.)
Once we had our data, we divided it up into works set in the Near Future (0-50 years from the time the work came out), Middle Future (51-500 years from the time the work came out) and Far Future (501 years from the time the work came out).
Why did we pick these boundaries? In part they were just necessary (and slightly arbitrary) cutoffs for categories that are arguably much softer than such rigid demarcations can capture. Still, they are justified for a few reasons. First of all, I wanted an idea of "near future" SF that encompasses works set just barely into the future, works generally intended to show how the present day is already science fictional. George Orwell's 1984 was probably the first work of SF to popularize this notion of the near future, and William Gibson's and Ken MacLeod's recent works also take it up. I picked 51-500 years as the "mid future" because, frankly, it includes the Star Trek universe, which I consider a kind of model of mid-future SF: it includes radically new technologies and social structures, but the world is still recognizably our own. There is a ton of science fiction set in this mid-future that functions similarly - we're still the same old humans, just in space. And finally, works set more than 500 years in the future are often of a markedly different character than mid-future ones. We see a humanity that's radically altered, like the one in The Time Machine or Alastair Reynolds' novels. The Earth is unrecognizable or long gone. This is Deep Time territory, where anything goes.
Some caveats: I thought about making Near Future 0-100 years in the future, but decided that generally once you get beyond 50 years you start seeing SF that includes really radical changes and isn't intended to be "five minutes into the future" like recent William Gibson novels or George Orwell's 1984. I also thought about adding another "mid future" category between 51-200 years, since that's such a popular time period. If we had more data, I think that would have been reasonable.
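The bucketing described above is simple enough to sketch as a small classifier. This is just an illustration of the cutoffs from the text; the function and example titles are mine, not part of the original dataset:

```python
def classify_future(year_published, year_set):
    """Bucket a work of SF by how far ahead of its publication it is set."""
    gap = year_set - year_published
    if gap <= 50:
        return "Near Future"    # 0-50 years out, e.g. 1984 (published 1949)
    elif gap <= 500:
        return "Middle Future"  # 51-500 years out, e.g. the Star Trek universe
    else:
        return "Far Future"     # 501+ years out, e.g. The Time Machine

# A few illustrative calls:
print(classify_future(1949, 1984))     # Near Future
print(classify_future(1966, 2266))     # Middle Future
print(classify_future(1895, 802701))   # Far Future
```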
The Analysis and Conclusions
I would like to say at the outset that these conclusions are preliminary, as we'll need a lot more data before we're on solid ground — and I would also like to see some cross-cultural comparisons, too. There are, however, a few things we observe right off the bat.
There are a few moments in history when all futures are almost equally represented, notably in the 1920s and the 1960s. Those are both periods of liberalization in the United States, when social roles were changing rapidly and the economy was booming. Perhaps these eras of rapid change turned people's eyes to both the near and far future. Interestingly, both eras were followed by periods of economic downturn that led to opposite effects: In the 1930s, we saw a spike in far future stories (indeed, the most of any era in our data); and in the 1970s we saw a spike in near future stories.
At other times, the future seems right around the corner. In the 1900s and the 1980s, there were huge spikes in near-future science fiction. What do these eras have in common? Both were times of rapid technological change. In the 1900s you begin to see the widespread use of telephones, cameras, automobiles (the Model T came out in 1908), motion pictures, and home electricity. In the 1980s, the personal computer transformed people's lives.
In general, the future got closer at the end of the twentieth century. You can see a gradual trend in this chart where after the 1940s, near-future SF grows in popularity. Again, this might reflect rapid technological change and the fact that SF entered mainstream popular culture.
The future is getting farther away from us right now. One of the only far-future narratives of the 1990s was Futurama. Then suddenly, in the 2000s, we saw a spike in far-future stories, many of them about posthuman, postsingular futures. It's possible that during periods of extreme uncertainty about the future, as the 00s were in the wake of 9/11, creators and audiences turn their eyes to the far future as a balm.
Again, these are all speculative comments. More data and analysis are needed.
Supplemental Materials: More Detail on Middle Future Dates
One of the interesting discoveries we made was that the mid-future (51-500 years in the future) seems to be the most popular period for science fiction, across the last 130 years. So Stephanie created this chart breaking out our data on mid-future SF so you can get a sense of which periods are most popular — you can see that the 100-200 year future is common.
Research by Ben Vrignon and Gordon Jackson.
Saturday, May 19, 2012
Wednesday, May 16, 2012
According to a new survey, the number of women in senior technology positions at U.S. companies is down for the second year in a row.
The survey, published by the U.S. division of the British tech recruitment group Harvey Nash, finds that just 9% of U.S. chief information officers (CIOs) are female, down from 11% last year and 12% in 2010. According to Reuters, 30% of the 450 American tech executives polled said their IT groups have no women at all in management positions. What’s more, when the same group of executives was asked whether women were underrepresented, roughly half said no.
Which, I concede, is all bad news for women. To the boys’ club of CIOs in America, women aren’t around and nobody seems to have a problem with it.
But I do. I think it’s wrong and bad and exactly the attitude that’s keeping women from earning anything close to our brothers, boyfriends and husbands. But that’s not what this post is about.
This post is about whether surveys and research like this are bad for women in another way: whether they’re looking at women in tech with a set of blinders on. Determined to find some good in the ongoing conversation about the black hole for women that is the tech debate, I set about looking to prove that this latest research was misrepresenting women in technology by only looking at a particular group of companies at the top.
Unfortunately, as Bob Miano, president and CEO of Harvey Nash USA, soon informed me, I was wrong. His isn’t a study that looks only at the 500 most profitable companies in America, but rather a sampling of over 450 companies that range from Silicon Valley startups to “a large computer software company with three letters in its name.” And across these companies, fewer than 40 chief information officers were female.
In terms of women at the top, at least, the reports of decline are verified. But what’s worse, where I had hoped that the recent blizzard of startup activity might be helping to change the ratio in favor of the fairer sex, Miano says the opposite is true. He blames the decline in women in tech roles on the uptick of startup companies, which he says tend to be less interested in diversity than many of his older, more established clients who often put major emphasis on recruiting female talent.
But as a woman who covers women for a living, I know anecdotally that this research is not indicative of the number of girls, women, ladies I meet every week who are kicking tech’s butt in the startup world. I look at companies like Joyus and Fab, both highly funded ventures with women leading their tech teams. I look at women like Caterina Fake, Allyson Kapin and programs like Black Girls Code and Women 2.0. I look at numbers that say women are starting businesses at 1.5 times the national average.
“This is an issue that has plagued us for too long,” says Tara Hunt, CEO of Buyosphere. “Even though we’re seeing an increase in the numbers of women enrolling in and graduating college with technical degrees, and even though there is an upswing in women joining and launching startups, we are quite far from parity. The more women we see in high-profile technical roles at these companies, the more young women will be inspired to pursue a career in technology.”
Monday, May 14, 2012
A group of activists and mothers in Oakland, Calif. have started an annual Mother’s Day tradition that would probably put Hallmark to shame. Fed up with the mainstream image of mothers as domestic, middle class, and white, they’ve made a real effort over the past two years to celebrate who they call “mamas on the margins”: all those single, queer, immigrant, and young mothers whose stories are often glossed over by corporate card makers.
“I can’t find a Mother’s Day card that looks at our identities in a way that is sentimental for me and my mom,” says Shanelle Matthews, communications coordinator at Forward Together, an Oakland-based organization that’s leading the e-Card drive through its Strong Families initiative. Matthews grew up as one of three kids in a single-parent black household, and wants to celebrate her mother’s hard work. “This campaign is personally close to me because I can finally say something to my mom on Mother’s Day that’s actually of cultural relevance and value.”
Friday, May 11, 2012
Can Tech Companies Continue To Innovate With No Women At The Table?
By Allyson Kapin | 05-08-2012 | 10:17 AM
Women dominate social networks, according to the latest Nielsen report. This is not news. Women have been ruling social networks like Facebook, Twitter, and social gaming platforms for the past few years. Women also bring in half or more of the income in 55% of U.S. households. And women ages 50 and older control a net worth of $19 trillion and own more than three-fourths of the nation’s financial wealth, according to MassMutual Financial Group. Simply put, women are influential and drive the economy.
Wednesday, May 9, 2012
frog’s Creative Director, Scott Jenson, the first UI designer at Apple and recently head of UX at Google mobile, blogged about smart devices and how they change the design process. This is relevant to the very real near future of robotics. I’m continuing the zeitgeist sampling here.
Triumph of the Mundane
By Scott Jenson - April 18, 2012
Smart devices require a significant shift in thinking
This blog explores how to design smart devices. But these devices are so new, and require such new insights, that our quaint, old-school notions of UX design are completely blinding us. We are stuck between the classic paradigm of desktop computers and the futuristic fantasy of smart dust. The world is either fastidious or fantastic. The path ahead is hard to see. Alan Kay said the best way to predict the future is to invent it… but what if we don’t know what we want?
Coffee Maker Syndrome
I’ve long proposed just-in-time interaction as a core approach to smart devices, but while presenting on this topic over the past year, it has astounded me how hard a time people have just thinking about the overall landscape of smart devices. Take, for example, this tweet:
Overheard at #CES: “I’m pretty sure my coffee maker doesn’t NEED apps.”
On the face of it, this makes perfect sense. It seems unlikely you’ll be reading your email on your coffee maker. But this dismissive approach is an example of what Jake Dunagan has called “the crackpot realism of the present”. We are so entrenched in our current reality that we dismiss any exploration of new ideas. By stating that apps on a coffee maker would be silly (which is true), we easily dismiss any discussion of other potential visions of functionality.
When television was first introduced, the earliest programs were literally radio scripts read aloud in front of the camera. Radio had been dominant for decades so broadcasters just coasted into TV without thinking creatively about how to approach the medium differently. As Marshall McLuhan said, “We look at the present through a rearview mirror; we walk backwards into the future.”
Smart devices require three big shifts
Assuming that smart devices require apps is like walking backwards into the future. We don’t need our smart devices to run Microsoft Office; we just need them to, say, log their electrical usage (internal, invisible functionality) or give us quick how-to videos (simple user-facing functionality).
If we want to properly discuss how to design smart devices, we must appreciate how they shift away from standard computers in three significant ways: micro functionality, liberated interaction, and a clustered ecosystem.
Shift 1: Micro Functionality
In my last post I discussed a fundamental UX axiom that Value must be greater than Pain. This handy little axiom implies many useful theorems. The most radical is that if pain gets very low, the actual value can also be low. While expensive tablets demand significant functional apps, cheap processing allows for more humble micro functionality. It’s one of the biggest hurdles that people have in discussing smart devices. They are so entrenched in the PC paradigm that they assume every device with a CPU must be bristling with functionality.
However, simple doesn’t equate to useless. For example, whenever I offer up the possibility of a ‘smart toaster’ people often chuckle; it’s the coffee maker syndrome all over again. But there are lots of small and even fun things a toaster could do: log its electrical usage, offer up an instructional video on how to clean the crumb tray, report any diagnostic errors, call the support line for you, or even tweak its ‘toast is done’ sound. All of these are fairly trivial but still useful if a) the value is genuine and b) the cost of adding the functionality is small. $600 tablets must do quite a bit, but this isn’t true for a $40 toaster.
The biggest impact of micro functionality is in how little real interactive power is required. So often when I talk of using HTML5 as the lingua franca of smart devices, people trot out the ‘it can’t do X’ argument, extolling the superiority of native apps. But micro functionality is so basic and simple that HTML5 is more than adequate: you’ll normally only need to view or change a simple value. Micro functionality only requires micro expressivity.
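To make “view or change a simple value” concrete, here is a minimal sketch of what a micro-functionality surface might amount to. The class and key names are purely hypothetical illustrations of the idea; the post doesn’t specify any API:

```python
# Hypothetical smart-toaster state: micro functionality amounts to little
# more than reading or writing a handful of named values.
class SmartToaster:
    def __init__(self):
        self.state = {
            "kwh_logged": 0.0,      # running electrical-usage total
            "done_sound": "chime",  # the 'toast is done' sound
            "last_error": None,     # most recent diagnostic code, if any
        }

    def get(self, key):
        """'View a simple value' - all a remote display needs to read."""
        return self.state[key]

    def set(self, key, value):
        """'Change a simple value' - all a remote display needs to write."""
        self.state[key] = value

toaster = SmartToaster()
toaster.set("done_sound", "bell")
print(toaster.get("done_sound"))  # bell
```

An interface this thin is exactly why something as modest as HTML5 forms is expressive enough for the job.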
Shift 2: Liberated Interaction
Remember that Value must be > Pain. Micro functionality requires micro pain to be viable. No one is going to argue with their toaster; this type of functionality has to be quick, fast, and easy. Unfortunately, the trend today is that any device with functionality will usually have a tiny display, tinier buttons, a complex user manual, and a tech support line.
Smart devices need to be liberated from being solely responsible for all interaction. I’ve written previously about just-in-time interaction which allows any smart display (a phone, tablet, TV, interactive goggles, and yes, a laptop) to interact with a smart device. Using a significantly more capable device is so much better than cobbling together a cheap LCD display with fiddly little buttons on the device itself. A generation raised on rich phone interaction will expect, even demand better.
Moving interaction to smart displays also has a huge benefit for manufacturers. The cost of computation will likely be the least of a manufacturer’s concerns. Small displays, buttons, complex instruction manuals, and tech support lines are all very expensive. What if manufacturers could assume that any smart device they built would have free access to a big interactive color screen? Not only that but it would have excellent computing power, smooth animated graphics, a robust programming environment and to top it off a universally accepted consumer interaction model that didn’t require any training? Using these displays would allow enormous cost reductions, not only in parts costs, but in simpler development costs as well.
Shift 3: A Clustered Ecosystem
Once we liberate the interaction from the device, we’ve unlocked its functionality across many locations. Not only can I use my phone in front of my thermostat but also with my TV across the room and with my laptop at work across the city. By liberating functionality from devices, we liberate our ability to use these devices from anywhere. My previous post was called “Mobile Apps must die” not because apps will actually die (they will always have a role) but the shortsighted desire to funnel functionality exclusively through them must stop. If these very simple apps are written in HTML5, they can be used across any device which is very powerful indeed.
It is inevitable that devices will ship with interactivity built in. But as more devices become functional, it’s going to become overwhelming to have each device be its own island. The three shifts discussed here (micro functionality, liberated interaction, and a clustered ecosystem) all point to a new pattern of thinking: small devices with small functionality that all work together in profound ways. This is a triumph of the mundane, a challenge to our PC-soaked way of thinking.
But this new approach requires an open standard that all devices would use to announce their presence and offer their functionality in a universal language like HTML. In many ways we are at the cusp of a new hardware era of discoverability, much like the one the web first had in the early 90s.
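As one illustration of what such an announcement could look like, a device might broadcast a small self-description pointing at the HTML page that exposes its functionality. This format is entirely made up for the sketch (no such standard exists in the post); the field names and URL are hypothetical:

```python
import json

# Hypothetical discovery message: the device announces itself and points
# at an HTML5 page serving its micro functionality.
announcement = {
    "device": "toaster",
    "vendor": "ExampleCo",                     # made-up manufacturer
    "ui_url": "http://192.168.1.20/ui.html",   # interface served by the device
    "functions": ["usage_log", "crumb_tray_howto", "diagnostics"],
}

# Serialized for broadcast on the local network.
message = json.dumps(announcement)
print(message)
```

Any smart display that receives the message could then render the device’s page with no device-specific software installed.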
What’s holding smart devices back is our oh-so-human ability to misunderstand their potential. These three shifts are a big step in understanding what we need to do. Let’s be clear, this is not technically challenging! We just need to understand what we want. Alan Kay is right: we have to invent our future. frog, where I work, is just starting to build simple prototypes to validate these concepts. As they mature, I’ll be sharing more information about them. It’s clear that technology is not the limiting factor, it’s just our desire to imagine a different future.
blog post in the wild at frog design’s designmind
Monday, May 7, 2012
There is a wave of excitement about the very real future of robotics, which is coming very soon. I’m posting some of the zeitgeist here.
from Wired Magazine.
A longtime technology forecaster, Saffo is a managing director at the Silicon Valley investment research firm Discern. Formerly the director of the Institute for the Future, he is also a consulting professor in Stanford University’s engineering department.
There are four indicators I look for: contradictions, inversions, oddities, and coincidences. In 2007 stock prices and gold prices were both soaring. Usually you don’t see those prices high at the same time. When you see a contradiction like that, it means more fundamental change is ahead.
The second indicator is an inversion, where you see something that’s out of place. When the Mexican police captured the head of a drug cartel, in the photos the perpetrators were looking proudly at the camera while the cops were wearing ski masks. Usually it’s the reverse. To me that was an indicator that Mexico was very far from winning its war against the cartels.
Then there are oddities. When the Roomba robot vacuum was introduced in 2002, all the engineers I know were very excited, and I don’t recall them owning vacuums. I said, this is damn strange. This is not about cleaning floors, this is about scratching some kind of itch. It’s about something happening with robots.
Finally, there are coincidences. At the fourth Darpa Grand Challenge in 2007, a bunch of robots successfully drove in a simulated suburb. The same day, there was a 118-car pileup on a California highway. We had robots that understand the California vehicle code better than humans, and a bunch of humans crashing into each other. That said to me, really, people shouldn’t drive.