Wed December 5, 2012
Editor's note: Amber Case is the Director of the Esri R&D Center, Portland. Esri aims to understand our world and "unleash the power of location." She was formerly the CEO of Geoloqi, Inc., a geolocation platform company acquired by Esri in October 2012. As a student of cyborg anthropology, Case studies the symbiotic interactions between humans and technology.
What exactly is cyborg anthropology?
Cyborg anthropology is the study of the interaction between humans and technology, and how technology affects culture. Mobile technology allows one to stand almost anywhere in the world, whisper something, and be heard elsewhere. These devices that live in our pockets need to be fed every night and require our frequent attention. In only a few years they have become stitched into the fabric of our everyday lives. Phones offer us respite from the boredom of waiting in lines, but they also inhibit us when they run out of battery.
I'm fascinated with mobile devices for another reason -- they are a bundle of sensors that we walk around with every day. That sensor data can be used to do very interesting things, such as automatically turn on the lights in your house when you get home, or turn the lights off when you leave.
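The home-automation trigger described above is essentially a geofence check: compare the phone's reported position against a stored home location. A minimal sketch in Python (the coordinates, radius, and function names here are hypothetical, not part of any specific product):

```python
import math

# Hypothetical home location (Portland, OR) and geofence radius in meters.
HOME_LAT, HOME_LON = 45.5231, -122.6765
RADIUS_M = 100.0

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two points (haversine formula)."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def lights_should_be_on(lat, lon):
    """True when the phone's reported position is inside the home geofence."""
    return distance_m(lat, lon, HOME_LAT, HOME_LON) <= RADIUS_M
```

In practice a system like this would react to geofence enter/exit events pushed by the phone rather than polling positions, but the underlying test is the same distance comparison.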
In traditional anthropology, somebody goes to another country and says: "How fascinating these people are! How interesting their tools and their culture are." Then they write a paper, maybe a few other anthropologists read it, and we come to think of these cultures as very exotic. Cyborg anthropologists step back from the modern world and look at how the people around us are influenced by technology in everyday life.
Why did you decide to study cyborg anthropology?
When I was little, I was very interested in technology, science and mathematics. I grew up in the '80s, but read my dad's copy of the 1960 World Book Encyclopedia. My favorite entry was on the modern computer. The machine filled an entire gymnasium and was used for military and business purposes. As I grew up, I saw technology transition toward being used in everyday life. The only problem was that technology was still a pain in the neck to use. Most systems had too many menus and buttons.
During my freshman year of college, I was introduced to the field of cyborg anthropology -- the study of humanity and technology. What I really liked about cyborg anthropology is that it crossed multiple fields of study. In academia, you can learn a lot about a certain field, but know nothing about another. Technology is so intertwined with humanity at this point that it takes multiple fields to understand both tools and people.
How would you define your cyborg self?
A cyborg is simply someone who interacts with technology. The technology can be a physical or a mental extension, and doesn't need to be implanted in the person. The word cyborg comes from a 1960 paper on space travel, where it described the attachment of external devices and clothing to a human to make them fit for space travel.
For thousands and thousands of years, everything has been a physical modification of self. It has helped us to extend our physical selves, go faster, hit things harder, and there's been a limit on that. But now what we're looking at is not an extension of the physical self, but an extension of the mental self. And because of that, we're able to travel faster and communicate differently through the use of technology.
A cyborg is not Terminator or Robocop, but the experience of everyday life that's been altered by technology. Everyone who uses technology is a superhuman. It's not so strange anymore because it's the norm -- most everyone else around us is also a superhuman. The only time we notice it is when our devices run out of power. We're all superhumans until our devices lose energy.
You talk about a new form of "human connection," can you explain this to us?
A vehicle is a physical transportation device, but there are limits to how small it can be made. A computer is a mental transportation device, but it need not be limited by its size and shape. We can put anything we want into computers and phones, and they don't get heavier because that information is invisible and weighs nothing, or is stored elsewhere, and then we can take anything out. What does the inside of your computer actually look like? If you print it out, it looks like a thousand pounds of material that you're carrying around all the time.
When you use a social network, your sense of self extends into that virtual space. Getting a "Like" on Facebook or a comment on a status is a dopamine hit equivalent to getting a hug. This isn't really a new form of communication, but a new way of connecting.
Can you tell us about some projects you're working on?
I've always been inspired by the work of Mark Weiser at Xerox PARC in the late '80s and '90s, especially ubiquitous and calm computing. My projects have always been about using data in new ways, the future of the interface and the button, and the future of location.
For instance, you should always be able to get information based on when you need it. Location plays a big role in that. Right now, data is stuck on the web, not where you are. When you land at the airport, you often have to look through your email to get to the information you need in order to get to your destination. It should already be there on your phone.
There is a lot of talk that we're finally entering an era of the "Internet of Things." The exciting part is that we have all of these devices now that are sensors for reality -- sound, temperature, images, location, air quality and so on. We can wear trackers to count our steps or measure our weight, and we can use tech to make a picture of where we've been. All of this data is interesting by itself, but the devices speak different languages. Devices made by different companies use different protocols. Some are open; others are closed. It's a modern-day Tower of Babel.
Taking data from across many different silos is where the opportunity is. If I knew my mood, hunger level, and location at a given time of the day, I could figure out if my mood caused me to want to eat, or if I was unhappy at work and needed a different job. I could correlate amount of sleep with weight gain, and so on. What we need is a common language that allows all of these devices to communicate with one another. We saw this with SMS (allowing different phones to talk to each other across networks), SMTP/POP (allowing modern email accounts to talk to each other across providers), and Interpress (allowing the modern printer industry to form). That hasn't been solved for devices.
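The cross-silo correlation idea above can be sketched in a few lines: translate each device's records into a common shape, join them on a shared key (the date), and compute a correlation. A minimal sketch in Python, with entirely made-up readings for illustration:

```python
from datetime import date

# Hypothetical per-day readings from two different device "silos",
# already translated into a common record shape: {date: value}.
sleep_hours = {date(2012, 12, d): h for d, h in
               [(1, 7.5), (2, 6.0), (3, 8.0), (4, 5.5), (5, 7.0)]}
weight_kg = {date(2012, 12, d): w for d, w in
             [(1, 70.1), (2, 70.4), (3, 69.9), (4, 70.6), (5, 70.2)]}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Join the two silos on their shared key, then correlate the aligned values.
days = sorted(sleep_hours.keys() & weight_kg.keys())
r = pearson([sleep_hours[d] for d in days], [weight_kg[d] for d in days])
```

The hard part in reality is not this arithmetic but the "common language" step: getting every closed, proprietary device format into that shared `{key: value}` shape in the first place.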
Once we get a lot of data onto maps, we can really begin to understand reality in much different ways, like where to build a house to increase the happiness of the people inside it, how to make routes that avoid accidents, and how to design better urban systems. I'm really excited about the platform that Esri provides. We'll be integrating the Geoloqi location technology into the Esri platform in July 2013, opening up an entire world of datasets and solutions previously accessible only to geographers, scientists and researchers.