
Co-designing with machines: moving beyond the human/machine binary

Letter from the Editor: I am happy to announce The Co-designing with Machines edition. As someone with one foot in industry, redesigning organizations to flourish in a data-rich world, and another foot in research, I’m constantly trying to take an aerial view of technical achievements. Lately, I’ve been obsessed with the future of design in a data-rich world increasingly powered by artificial intelligence and its algorithms. What started out as a kitchen conversation with my colleague Che-Wei Wang (a contributor to this edition) about generative design and genetic algorithms turned into a big chunk of my talk at Interaction Design 2016 in Helsinki, Finland. That chunk then took up more of my brain space and expanded into this edition of Ethnography Matters, Co-designing with Machines. In this edition’s introductory post, I share a more productive way to frame human and machine collaboration: as a networked system. Then I chased down nine people who are at the forefront of this transformation to share their perspectives with us. Alicia Dudek from Deloitte will kick off the next post with a piece of speculative fiction on whether AI robots can perform any part of qualitative fieldwork. Janet Vertesi will close this edition, giving us a sneak peek at her upcoming book with an article on human and machine collaboration in NASA’s Mars Rover expeditions. And in between Alicia and Janet are seven contributors, from marketing to machine learning, with super thoughtful articles. Thanks for joining the ride! And if you find this engaging, we have a Slack where we can continue the conversations and meet other human-centric folks. Follow us on Twitter @ethnomatters for updates. Thanks. @triciawang


Who is winning the battle between humans and computers? If you read the headlines about Google’s artificial intelligence (AI) DeepMind beating the world-champion Go player, you might think the machines are winning. CNN’s piece on DeepMind proclaims, “In the ultimate battle of man versus machine, humans are running a close second.” If, on the other hand, you read the headlines about Facebook’s Trending News section and its personal assistant, M, you might be convinced that the machines are less pure and perfect than we’ve been led to believe. As the Verge headline puts it, “Facebook admits its trending news algorithm needs a lot of human help.”

The headlines on both sides are based on a false, outdated trope: the binary of humans versus computers. We’re surrounded by similar arguments in popular movies, science fiction, and news. Sometimes computers are intellectually superior to humans; sometimes they are morally superior and free from human bias. Google’s DeepMind is winning a zero-sum game. Facebook’s algorithms are somehow failing by relying on human help, as if collaboration between humans and computers in this epic battle were somehow shameful.

The fact is that humans and computers have always been collaborators. The binary human/computer view is harmful. It’s restricting us from approaching AI innovations more thoughtfully. It’s masking how biased we are toward believing that machines don’t produce biased results. It’s allowing companies to avoid taking responsibility for their discriminatory practices by saying, “it was surfaced by an algorithm.” Furthermore, it’s preventing us from inventing new and meaningful ways to integrate human intelligence and machine intelligence to produce better systems.

As computers become more human, we need to work even harder to resist the binary of computers versus humans. We have to recognize that humans and machines have always interacted as a symbiotic system. Since the dawn of our species, we’ve changed tools as much as tools have changed us. Up until recently, the ways our brains and our tools changed were limited by the amount of data input, storage, and processing both could handle. But now data growth has outpaced Moore’s Law, and we’re sitting on more data than we’re able to process. To make the next leap in getting the full social value out of the data we’ve collected, we need to make a leap in how we conceive of our relationships to machines. We need to see ourselves as one network, not as two separate camps. We can no longer afford to view ourselves in an adversarial position with computers.

To leverage the massive amount of data we’ve collected in a way that’s meaningful for humans, we need to embrace human and machine intelligence as a holistic system. Despite the snazzy zero-sum-game headlines, this is the truth behind how DeepMind mastered Go. While the press portrayed DeepMind’s success as a feat independent of human judgment, that wasn’t the case at all.

Google’s DeepMind was trained on Go techniques from the best Go players in the world. The more the AI played against itself, the more it learned, to the point where it started coming up with its own moves. But the key here is that it learned using data from humans. Second, it took a team of computer scientists to come up with new ways of combining machine learning with other AI techniques. DeepMind didn’t spontaneously decide to pair its neural networks with Monte Carlo tree search; it was computer scientists on the DeepMind team who made that connection. DeepMind’s win is clearly a feat of human and machine intelligence combined. DeepMind is evidence of what we can achieve when humans co-design with machines.
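
The training pipeline described above, learning from human games first and then improving through self-play, can be sketched in miniature. Everything in the sketch below is invented for illustration: the “game” is a toy counting game and the policy is a simple table of move counts, whereas AlphaGo used deep neural networks and Monte Carlo tree search at an entirely different scale.

```python
import random
from collections import defaultdict

# Toy stand-in for Go: players alternate adding 1 or 2 to a counter;
# whoever lands exactly on 9 wins. Purely illustrative.
WIN, ACTIONS = 9, (1, 2)

def legal(state):
    return [a for a in ACTIONS if state + a <= WIN]

# Phase 1: seed the policy from "human games" (a hypothetical expert
# dataset of (state, move) pairs, standing in for recorded Go matches).
human_games = [(s, 2 if (WIN - s) % 3 != 1 else 1) for s in range(WIN)] * 5
counts = defaultdict(lambda: defaultdict(int))
for state, move in human_games:
    counts[state][move] += 1

def policy(state, explore=0.1):
    """Mostly follow the learned move counts, sometimes explore."""
    moves = legal(state)
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda a: counts[state][a])

# Phase 2: self-play. The policy plays against itself, and the moves made
# by the winning side are reinforced (a crude stand-in for the
# reinforcement-learning stage of the real pipeline).
def self_play_episode():
    state, player, history = 0, 0, []
    while state < WIN:
        move = policy(state)
        history.append((player, state, move))
        state += move
        player ^= 1
    winner = history[-1][0]  # whoever landed exactly on WIN
    for p, s, m in history:
        counts[s][m] += 1 if p == winner else -1

for _ in range(2000):
    self_play_episode()
print({s: dict(counts[s]) for s in range(WIN)})
```

Even in this toy, both ingredients the paragraph names are visible: the human data gives the policy a sensible starting point, and self-play sharpens it beyond what the seed data contained.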

And yet, the popular narrative circulates the idea that the ultimate technological triumph will happen when machines are designed to work without any human involvement. It’s this narrative that casts DeepMind as a milestone in an inevitable march toward the day when the binary becomes a clear hierarchy: computers on top, humans on the bottom. This binary is what feeds the larger cultural narrative that algorithms are unbiased and machines are intelligent, that we can right our human frailties by handing everything over to the machines.

But we’re starting to see this popular narrative crumble. We’re hearing more public discourse about algorithmic discrimination in areas like criminal justice, content labeling, public policy, and job search. Even data scientists such as DJ Patil have moved from the tech industry into policy to work on this topic. He and his team at the White House just released a report about the potential for algorithms to discriminate. The thread running through all of these developments is that machines are highly capable of creating discriminatory outcomes that negatively impact human life. And this is one of the most important reasons why a purely machine/AI/computer-run world isn’t possible. We will always need humans to course-correct the machine.
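
To make “course-correcting the machine” concrete, here is a minimal sketch of the kind of audit a human reviewer might run over a model’s decisions, checking whether approval rates differ sharply across groups. The loan-scoring data is hypothetical; the four-fifths threshold is borrowed, in simplified form, from a real screening heuristic used in US employment law.

```python
# Hypothetical output of a loan-scoring model: (group, approved) pairs.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50

def selection_rates(decisions):
    """Compute the approval rate for each group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + ok
    return {g: approved[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
print(rates)  # {'A': 0.8, 'B': 0.5}

# Four-fifths rule as a rough screen: flag the model if any group's
# approval rate falls below 80% of the best-treated group's rate.
if min(rates.values()) / max(rates.values()) < 0.8:
    print("Potential disparate impact: a human needs to review this model.")
```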

There are several other key flaws in the narrative of the human-vs-machine binary. For one, there is the simple fact that not everything is quantifiable. There will be limits to what an AI can do simply because large enough and clean enough data sets won’t always exist. In addition, AI does well in structured situations with a set of tightly constrained rules. There’s a reason why Deep Blue worked on chess and DeepMind tackled Go, rather than climbing Mount Everest or deciding how to clean up a natural disaster. The harder task is to model human intuition in unstructured situations. Furthermore, the binary fails to capture that we’ve always evolved along with our tools just as much as we’ve designed our tools. Lastly, there is the basic epistemological limitation of machines: they reflect the intelligence and biases of their creators. Humans tell the machine what patterns to recognize, which selection criteria privilege a solution, and more importantly, how to execute it.

Despite a strong chorus of voices arguing for a pure AI-run world without human intervention, there is a new crop of scientists who argue that any AI model that does not involve a human is missing the opportunity to deliver better results, and, some would even say, is inherently flawed. For example, look at the new developments coming out of MIT, where researchers like Karthik Dinakar are working on human-in-the-loop machine learning, integrating the human into how an AI is trained. Or consider AI pioneer Stuart Russell’s work on using inverse reinforcement learning to “provably align” machines with human values. Or note super practical efforts such as Solon Barocas and Andrew Selbst’s traveling workshop, Fairness, Accountability, and Transparency in Machine Learning, designed to be a primer on the topic for anyone (yes, anyone). Or recognize computer scientists like Sorelle Friedler, who are straddling law and programming to prevent machine learning algorithms from discriminating. AI is moving out of computer science labs, and as it broadens into practical applications, it’s finally partnering with the liberal arts.
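
Human-in-the-loop can mean many things, but one of its simplest forms is uncertainty-based routing: the model handles the cases it is confident about and escalates the rest to a person, whose answers can then feed back into training. The sketch below assumes a made-up confidence function and labeling interface; it is a generic illustration of the pattern, not a description of Dinakar’s actual systems.

```python
def model_confidence(text):
    """Stand-in for a trained classifier's confidence on one example.
    (Invented heuristic: pretend longer texts are less ambiguous.)"""
    return min(1.0, len(text) / 40)

def ask_human(text):
    """Placeholder for a real human-labeling interface."""
    return f"human-label({text!r})"

def triage(texts, threshold=0.6):
    """Auto-label confident cases; route uncertain ones to a person."""
    results = []
    for t in texts:
        conf = model_confidence(t)
        if conf >= threshold:
            results.append((t, "auto-label", conf))
        else:
            # Low confidence: escalate. In a full system the human's
            # answer is folded back into the next round of training.
            results.append((t, ask_human(t), conf))
    return results

for row in triage(["ok?", "this is a long, unambiguous message about invoices"]):
    print(row)
```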

In recognition of this important and growing field, this month’s Ethnography Matters features writers from both the technical and the humanities sides who think about machines and humans in a symbiotic way. These are practitioners who haven’t fallen for the human-vs-machine binary. They have pushed the boundaries of their work to imagine what a world where computers act more like humans would look like. Above all, these are people who have embraced the complexity of where we’re going, and who are encouraging us to adopt a new lens for interacting with that complexity:

  • Alicia Dudek, design ethnographer at Deloitte, speculates whether an AI can do the job of an ethnographer. Her science fiction-esque article is the first of its kind to re-imagine what parts of fieldwork could be accomplished by a robot.
  • Che-Wei Wang, architect, designer, and Autodesk fellow, contemplates why engineers and architects will need to become more like ethnographers as generative design takes hold. He asks whether it is possible to convert ethnographic data into quantitative data to use as algorithmic input.
  • Angèle Christin, sociologist, studies how people deal with technologies of quantification in data-rich and data-poor environments. She shares insights from a decade of witnessing firsthand the impact of “big data” inside courtrooms and newsrooms.
  • Astrid Countee, anthropologist and software developer, reveals how her anthropological background informs her new line of work as a computer programmer.
  • Madeleine Clare Elish, cultural anthropologist and fellow at Data & Society, gives us a peek into her study on large scale automated and autonomous systems, like military drones and labor platforms.
  • Molly Templeton, audience development expert, digital ethnographer, and one of the first YouTube stars, writes about why it’s important for brands’ social media strategy to look beyond the numbers in the digital entertainment and marketing industry. Her article looks at how to balance the emotional labor of audience management with data analysis.
  • Steve Gustafson, computer scientist and AI expert at GE, shares why machine learning is very human.
  • Liz Kaziunas, researcher at the University of Michigan, studies open-source DIY health communities. Her ethnographic work reveals how Type 1 diabetics are hacking into their FDA-approved medical devices (e.g., continuous glucose monitors) in order to access their personal medical data, and how this practice differs across class lines.
  • Janet Vertesi, sociologist and historian of science, gives us a sneak peek into her next book on NASA’s Mars Rover project, where she looks at how organizational context changes how a machine is defined.

These writers are all wrestling with important questions in this moment, as we are at a turning point at the intersection of AI and personal computing. We finally have the technical capabilities in machine learning and natural language processing to harness the data that is being collected. But to achieve this, we need to match technical capabilities with social capabilities, such as empathy and fairness.

If we look through the lens of human-machine systems, we stop asking, “How do we make computers more empathetic and fair?” and start asking, “How do we design entire systems to be more empathetic and fair?” We stop asking, “How do we make machines smarter than humans?” and start thinking about how we make our machines work for us and with us.

To do this, we need to wrestle with what the philosopher and cognitive scientist David Chalmers calls the “hard problem of consciousness”: how do we explain why we experience the world the way we do? A machine doesn’t understand discrimination, but a human can. This means we need people with the skill sets to provide the human intelligence that makes the machines smarter. We need people who can understand not just technical systems, but the entire social systems in which the technology is used. We need people who know how to integrate big and thick data. AI needs to move into the territory of meaning, values, morals, and ethics.

To do that, we have to turn to ethnographers and, more broadly, anyone who takes a human-centered approach, be they designers, marketers, doctors, or engineers. Human-centered experts, especially ethnographers, have to stop being afraid of machines. Technologists, especially programmers and engineers, have to stop idealizing a human-free interaction system.

In this new world of human-machine systems, companies pushing new algorithmic systems would celebrate having teams of computer and data scientists working alongside human-centered designers and ethnographers. Venture capitalists would expect companies selling AI and machine learning products to prove that they’ve incorporated moral and ethical considerations into their product workflow. Facebook would be proud, not ashamed, of integrating humans into all of their bots and algorithmic content products. DeepMind’s future wins over humans would be portrayed as an accomplishment of human and machine intelligence.

And when we start doing that, maybe we’ll stop calling people users, and we’ll start calling them participants, or just straight up, humans.


This article is part of the Co-designing with Machines edition. Read the other articles in this edition.

Like what you’re reading? Ethnography Matters is a volunteer-run site with no advertising. We’re happy to keep it that way, but we need your help. We don’t need your donations, we just want you to spread the word. Our reward for running this site is watching people spread our articles. Join us on Twitter, tweet about articles you like, share them with your colleagues, or become a contributor. Also join our Slack to have deeper discussions about the readings and/or to connect with others who use applied ethnography; you don’t have to have any training in ethnography to join, we welcome all human-centric people from designers to engineers.

