Note from the Editor, Tricia Wang: Next up in our Co-designing with machines edition is Steven Gustafson (@stevengustafson), founder of the Knowledge Discovery Lab at the General Electric Global Research Center in Niskayuna, New York. In this post, he asks what the role of humans will be in the future of intelligent machines. He makes the case that, in the foreseeable future, artificially intelligent machines are the result of creative and passionate humans, and as such, we embed our biases, empathy, and desires into the machines, making them more “human” than we often think. I first came across Steven’s work while he was giving a talk hosted by Madeleine Clare Elish (edition contributor) at Data & Society, where he spoke passionately about the need for humans to move up the design process and to bring ethical thinking into AI innovation. Steven is a former member of the Machine Learning Lab and Computational Intelligence Lab, where he developed and applied advanced AI and machine learning algorithms for complex problem solving. In 2006, he received the IEEE Intelligent Systems “AI’s 10 to Watch” award. He currently serves on the Steering Committee of the National Consortium for Data Science, based out of the University of North Carolina. Recently, he gave the keynote at SPi Global’s Client Advisory Board Summit in April 2016, titled “Advancing Data & Analytics into the Age of Artificial Intelligence and Cognitive Computing.”
Recently we have seen how Artificial Intelligence and Machine Learning can amaze us with seemingly impossible results like AlphaGo. We also see how machines can generate fear with perceived “machine-like” reasoning, logic, and coldness, producing potentially destructive outcomes through a lack of humanity in decision making. A popular example of the latter is how self-driving cars must choose between two bad outcomes. In these scenarios, the AI and ML are embodied as a machine of some sort, either physical, like a robot or car, or a “brain,” like the predictive crime algorithm made popular in the short story and film “Minority Report” and, more recently, the TV show “Person of Interest.”
I am a computer scientist with expertise in and a passion for AI and machine learning, and I’ve been working across a broad range of technologies and applications for the past decade. When I see these applications of AI, and the fear or hype about their future potential, I like to remember what first inspired me. First, I am drawn to computers because they are a great platform for creation and instant feedback. I can write code and immediately run it. If it doesn’t work, I can change the code and try again. Sure, I can write proofs and develop theory, which has its own beauty and necessity at times, but I remember one of the first database applications I created and how fun it was to enter sample data and queries and see it work properly. I remember the first time I developed a neural network and made it play against itself to learn, with no background knowledge, how to play tic-tac-toe. This may be a very trivial example, but it is inspiring nonetheless.
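To make that anecdote concrete, here is a minimal sketch of the self-play idea. It is my own illustration rather than the author’s original program: for brevity it learns a tabular value function with a simple temporal-difference-style update instead of training a neural network, and the constants and names in it are purely illustrative.

```python
import random
from collections import defaultdict

# The eight winning lines on a 3x3 board stored as a 9-character string.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

# value[state] estimates how good a position is for 'X' (1 = X wins, 0 = O wins).
value = defaultdict(lambda: 0.5)
ALPHA, EPSILON = 0.1, 0.1   # learning rate and exploration rate (illustrative numbers)

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if random.random() < EPSILON:
        return random.choice(moves)                   # explore occasionally
    def after(m):                                     # board after playing move m
        return board[:m] + player + board[m + 1:]
    pick = max if player == 'X' else min              # X prefers high values, O low
    return pick(moves, key=lambda m: value[after(m)])

def play_one_game():
    board, player, history = ' ' * 9, 'X', []
    while True:
        move = choose_move(board, player)
        board = board[:move] + player + board[move + 1:]
        history.append(board)
        win = winner(board)
        if win or ' ' not in board:
            target = 1.0 if win == 'X' else 0.0 if win == 'O' else 0.5
            break
        player = 'O' if player == 'X' else 'X'
    # Back the final outcome up through every position the game visited.
    for state in reversed(history):
        value[state] += ALPHA * (target - value[state])
        target = value[state]

for _ in range(20000):   # learn purely from the outcomes of self-play games
    play_one_game()
print('positions evaluated:', len(value))
```

The point of the sketch is the same one the anecdote makes: the program starts with nothing beyond the rules of the game and improves only from the outcomes of games it plays against itself.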
Can a machine write its own code? Can a machine design a new, improved version of itself? Can a machine “evolve,” like humans, into a more intelligent species? Can a machine talk to another machine using a human language like English? These were all questions that excited me as an undergraduate computer scientist and led me to study AI and ML in grad school, and they are all questions that can be answered with a yes! Machines, or computers and algorithms, have been shown in different circumstances to achieve these capabilities, yet both the idea that machines have these capabilities and the idea that machines can learn are, in a general sense, scary concepts to humans. But when we step into each one of these achievements, we find something that I believe is creative, inspiring, and human.
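As one small, self-contained illustration of the “can a machine evolve its own code?” question, here is a toy sketch I wrote for this post: a (1+1) evolutionary loop that mutates tiny arithmetic expression trees until one reproduces a target function. It is not the author’s work, and real genetic programming is far richer, with populations, crossover, and much more expressive program representations; every detail here is an assumption chosen for brevity.

```python
import random

# Tiny expression "programs": a leaf ('x' or a small constant) or a tuple (op, left, right).
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 1, 2, 3])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree):
    # Usually descend into one branch; occasionally replace the subtree wholesale.
    if isinstance(tree, tuple) and random.random() < 0.7:
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))
    return random_tree(depth=2)

def error(tree, target=lambda x: x * x + x + 1):
    # Squared difference from the target function at a few sample points.
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in range(-5, 6))

# (1+1) evolution: keep the parent unless a mutated child does at least as well.
parent = random_tree()
parent_err = error(parent)
for _ in range(20000):
    child = mutate(parent)
    child_err = error(child)
    if child_err <= parent_err:
        parent, parent_err = child, child_err
print('best error:', parent_err, 'evolved program:', parent)
```

Whether or not a particular run rediscovers the target exactly, the shape of the loop is the point: variation, evaluation, and selection, every piece of which was specified by people.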
But let me step back for a minute. Machines cannot do those things above in a general sense. For example, if I put my laptop in a gym with a basketball, it can’t evolve a body and learn to play basketball. That is, it can’t currently do that without the help of many bright engineers and scientists. If I downloaded all my health data onto my phone, my phone is not going to learn how to treat my health issues and notify my doctor. Again, it can’t currently do that without the help of many smart engineers and scientists. So while my machine can’t become human today on its own, with the help of many engineers and scientists solving some very interesting technology, user experience, and domain-specific problems, machines can do some very remarkable things, like drive a car or engage in conversation.
The gap that creative, intelligent, and trained engineers and scientists fill today is a gap that must be closed for intelligent machines that both learn and apply that learning. That gap is also a highly human one: it highlights the drive of our species, our accumulation of knowledge, our ability to overcome challenging problems, and our desire to collaborate and work together to solve meaningful problems. And yes, it can also highlight our failures to do the right thing. But it is a human thing, still.
So when we think of something like machine learning models that detect fraud, play Go, or park a car, I see the brilliance and passion of humans who take those problems, formulate them into something achievable with today’s technology, and then persevere to build and deploy their solutions. They are the results of human passion. When we think of agents on our phones or computers that talk to us or hold instant messaging dialogues, I see the thoughtfulness of the user experience designers who spent days and weeks understanding how a human would interact with the device, what kind of voice the machine should have, and how to display empathy and build a rapport with the human to jointly solve problems like paying a credit card bill, reordering medication, or reading the news.
I believe we are still in the early phase of understanding computing technologies and how they fit into the human world. Recent years have seen computers draw our eyes and attention to themselves, their interfaces, and their capabilities. But now, with progress in the areas of the Internet of Things, wearables, virtual reality, and, yes, artificial intelligence, I believe we will start “seeing” machines and computers less, and start doing what most of us really want: spending time with other people, people we care about, people we can help, and people we need to help us. To achieve that vision, we need creativity, empathy for the end users, and human engineers who continue to move up the design chain, shaping and then “nudging” the machines toward the desired behaviors. We still do not fully understand our own human biases when it comes to machines, so this will be a journey of developing both human and machine understanding.
Today, machines are human, insofar as they are extensions and creations of our own desires, creativity, and passions. And the systems that make machines display human-level attributes are the result of hundreds and thousands of hours of time from highly creative and intelligent humans. One question I find myself asking more often than not when I see a cool new AI or ML application is “how’d they do that?” The process and people that produced that result are often much more interesting than the result itself right now, as they mark a step of our species in creating technology and a future that is unknown but ultimately very human.
This article is part of the Co-designing with Machines Edition. Read other articles in this edition.