The future of designing autonomous systems will involve ethnographers

Note from the Editor, Tricia Wang: Next up in our Co-designing with machines edition is Madeleine Clare Elish (@mcette), an anthropologist and researcher at Data & Society, who presents a case for why current cultural perceptions of the role of humans in automated systems need to be updated in order to protect against new forms of bias and worker harms. Read more about her research on military drones and machine intelligence at Slate. Madeleine also works as a researcher with the Intelligence & Autonomy Initiative at Data & Society, which develops empirical and historical research in order to ground policy debates around the rise of machine intelligence.

“Why would an anthropologist study unmanned systems?” This is a question I am often asked by engineers and product managers at conferences. The presumption is that unmanned systems (a reigning term in the field, albeit unreflexively gendered) are just that, free of humans; why would someone who studies humans take this as their object of study? Of course, we, as ethnographers, know there are always humans to be found. Moreover, few if any current systems are truly “unmanned” or “autonomous.” [1] All require human planning, design and maintenance. Most involve collaboration between human and machine, although the role of the human is often obscured. When we examine autonomous systems (or any of the other terms invoked in the related word cloud: unmanned, artificially intelligent, smart, robotic, etc.), we must look not to the erasures of the human, but to the ways in which we, as humans, are newly implicated.

My dissertation research, as well as research conducted with the Intelligence and Autonomy Initiative at Data & Society, has examined precisely what gets obscured when we call something “unmanned” or “autonomous.” I’ve been increasingly interested in the conditions and consequences of how human work and skill become differently valued in these kinds of highly automated and autonomous systems. In this post, Tricia has asked me to share some of the research I’ve been working on around the role of humans in autonomous systems and to work through some of the consequences for how we think about cooperation, responsibility and accountability.

Modern Times, 1936 [giphy]

The Driver or the System?

Let me start with a story: I was returning to New York from a robot law conference in Miami. I ordered a Lyft to take me to the Miami airport, selecting the address that first populated the destination field when I typed the phrase “airport Miami” into the Lyft app. The car arrived. I put my suitcase in the trunk. I think the driver and I exchanged hellos–or at the very least, a nod and a smile. We drove off, and I promptly fell asleep. (It had been a long week of conferencing!) I woke up as we were circling an exit off the highway, in a location that looked distinctly not like the entrance to a major airport. I asked if this was the right way to the airport. He shrugged, and I soon put together that he did not speak any English. I speak passable Spanish, and again asked if we were going to the right place. He responded that he thought so. Maybe it was a back way? We were indeed at the airport, but not on the commercial side. As he drove on, I looked nervously at the map on my phone.

Finally, the driver stopped in front of a nondescript hangar and said, “We’re here.” Quite upset, I pointed out that we were not “here,” and that this was clearly not the airport. He responded with the explanation that this was where the app/map had told him to go. Exasperated and worried I would miss my flight, I nearly cried, “Haven’t you ever driven anyone to the airport?” (It turned out this was his first week of driving.) I pulled up Google Maps on my phone, found what seemed to be the right address and handed him my phone to follow. After driving twenty more minutes around the perimeter of the airfield, we arrived at the correct commercial air passenger entrance.

He clicked off the ride, sending the message to the app that the trip was complete. (Fortunately, I made my flight just in the nick of time.) I admit I did not give him a tip. I knew, however, that I would be faced with another decision: how would I rate the trip? I closed the app and got on the plane, deciding to wait until my emotions cooled and I could think carefully about what I would do in a situation that I had, in fact, studied in other forms: what are the ways in which human operators can become liability sponges for the failures of intelligent or automated systems?

The failure of this interaction, I would argue, is uniquely symptomatic of the design principles that still shape highly automated and autonomous systems. I will return to this larger observation below. First, let me explain more about my dilemma in this specific instance, as I saw it. From one perspective, my driver deserved a bad rating. He should have known that I wanted to go to the passenger terminal of the airport. He had picked me up from a hotel. I had a suitcase. He drove a car for hire in Miami. This should not have been an unusual trip. I wanted to go to the airport and he did not take me there.

From another perspective, my driver did not deserve a bad rating because I had to take into consideration what role I, the passenger, and what role Lyft, the app, had played in the outcome. Who really deserved the blame? I am quite certain I never said out loud, “I’m going to the airport.” Moreover, I fell asleep. Had I been awake, I would have seen that we missed the main signed exit to the airport and realized the situation sooner. Finally, it seems reasonable to find fault with Lyft’s autocomplete map algorithm, which selected a pinpoint at the airport but not at the passenger terminal.

Was it the responsibility of the driver to second-guess the app? The driver had done his job as it was defined by Lyft. He had taken me quite literally from point A to point B, both of which I had selected. All of this was not only complicated but also consequential because ratings are directly tied to drivers’ reputation and ability to work. Because of the way the feedback system within the app is designed, I can only rate the driver, not Lyft or the overall experience. I am structurally required to put all the responsibility–for good or bad–on the driver, while in fact, the responsibility is ultimately shared. In the end, I ignored the app for a few weeks, and never went back to rate the driver.

Patterns from the Past

This experience with Lyft and the Lyft driver is an example of the new ways in which the relationship between control and responsibility in autonomous or intelligent systems does not necessarily align with traditional conceptions of these terms. As ethnographers, I think we are uniquely equipped to catch sight of the ways these misalignments are emerging. Hopefully, we can also be part of creating alternative structures–whether formal regulations, design principles, or informal understandings around the conference table–that address the ways in which automated and autonomous technologies can mask the role of humans in unintended but harmful ways.

Trained as an anthropologist and ethnographer, I have been deeply informed by looking at the histories of proto-autonomous systems, such as autopilots and cruise controls. In a previously published case study of the history of aviation autopilot litigation, Tim Hwang and I documented a steadfast focus on human responsibility in the arenas of law and popular culture, even while human tasks in the cockpit have been increasingly replaced and structured by automation. Our analysis led us to think about the incongruities between control and responsibility and the implications for future regulation and legal liability in intelligent systems. The dilemma, as we saw it, was that as control has become distributed across multiple actors (human and nonhuman), our social and legal conceptions of responsibility have remained generally focused on the individual. We developed the term moral crumple zone to describe the result of this ambiguity within systems of distributed control, particularly automated and autonomous systems. Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions. The metaphor of the moral crumple zone isn’t just about scapegoating. The term is meant to call attention to the ways in which automated and autonomous systems deflect responsibility in unique, systematic ways. While the crumple zone in a car is meant to protect the human driver, the moral crumple zone protects the integrity of the technological system itself.

Chrysler Auto-Pilot Brochure, 1958

Building on this work, I presented a paper (at the conference in Miami) about a set of challenges at stake in shared control between humans and machines, examining specifically how responsibility for the operation of a system may be misaligned with how control is exercised within that system. I described two cases where moral crumple zones emerged: first, the partial nuclear meltdown at Three Mile Island, and second, the fatal crash of Air France Flight 447. The circumstances surrounding these accidents demonstrate how accountability appears to be deflected away from the automated parts of the system (and the humans whose control is mediated through this automation) and focused on the immediate human operators, who possess only limited knowledge and control.

Cultural perceptions of the role of humans in automated and robotic systems need to be updated in order to protect against new forms of consumer and worker harms. The symptoms of moral crumple zones (at the risk of mixing metaphors) are some of the phenomena that human factors researchers have been studying for years, such as deskilling, skill atrophy, and impossible cognitive workloads. One of the consequences is that the risks and rewards of technological development are not necessarily distributed in the broader public interest. As with previous transitions in the history of automation, new technologies do not so much do away with the human as obscure the ways in which human labor and social relations are reconfigured.

Detecting Moral Crumple Zones

Ethnography is about finding the people, and this is why ethnographic methods must be central to the study and design of autonomous and complexly automated systems. When we think about the future designs of autonomous systems, we should conduct research among engineers, designers and end users, but we also need to expand our frame of analysis to encompass the new ways in which humans are pushed just beyond immediate sight. In my own work, I always try to be vigilant about the ways in which the automated parts of systems effectively deflect responsibility or accountability while also seeming to represent infallibility.

By way of conclusion, I’d like to end this post with some suggestions for ways to approach the study and design of human-machine systems that take into account the unintended consequences of keeping a human in the loop and creating moral crumple zones. As ethnographers, designers, engineers and also users of complex systems, we all should be asking: How do machines and automated systems distribute the work of accomplishing an action or goal? How is control composed in each part–is there an element of physical control and also an element of representational or otherwise substantive control, like the difference between executing code and writing code? And, how do users and managers perceive the locus of control and with what implications for the assignment of blame, praise and value to the humans involved?

____________

[1] In this piece I use the terms autonomous, automation and robot to refer to related technologies on a spectrum of computational technologies that perform tasks previously done by humans. A framework for categorizing types of automation proposed by Parasuraman, Sheridan and Wickens is useful for analyzing the specific types of perceptions and actions at stake in autonomous systems. Parasuraman et al. define automation specifically in the context of human-machine comparison, as “a device or system that accomplishes (partially or fully) a function that was previously, or conceivably could be, carried out (partially or fully) by a human operator.” This broad definition positions automation, and autonomy by extension, as varying in degree, not as an all-or-nothing state of affairs. They propose ten basic levels of automation, ranging from the lowest level, in which a computer offers no assistance to a human, to the highest level, in which the computer makes all the decisions without any input at all from the human. Parasuraman et al. 2000. “A Model for Types and Levels of Human Interaction with Automation.” IEEE Transactions on Systems, Man, and Cybernetics 30(3).
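For readers who think in code, here is a minimal sketch of that spectrum, written in Python. The class and function names are my own, the level descriptions are paraphrased from the cited paper, and the is_autonomous cutoff is purely illustrative, not something the framework itself prescribes.

```python
from enum import IntEnum

class LevelOfAutomation(IntEnum):
    """Parasuraman, Sheridan and Wickens' ten levels of automation, paraphrased.
    Higher values mean the computer takes on more of the decision and action."""
    NO_ASSISTANCE = 1           # the human takes all decisions and actions
    OFFERS_ALTERNATIVES = 2     # the computer offers a complete set of alternatives
    NARROWS_ALTERNATIVES = 3    # the computer narrows the selection down to a few
    SUGGESTS_ONE = 4            # the computer suggests one alternative
    EXECUTES_IF_APPROVED = 5    # executes that suggestion if the human approves
    EXECUTES_UNLESS_VETOED = 6  # allows the human limited time to veto before acting
    EXECUTES_THEN_INFORMS = 7   # acts automatically, then necessarily informs the human
    INFORMS_IF_ASKED = 8        # informs the human only if asked
    INFORMS_IF_IT_CHOOSES = 9   # informs the human only if it decides to
    FULL_AUTONOMY = 10          # decides and acts entirely without human input

def is_autonomous(level: LevelOfAutomation) -> bool:
    """Illustrative only: any cutoff like this is a design and policy choice,
    not a property of the technology, because autonomy is a matter of degree."""
    return level >= LevelOfAutomation.EXECUTES_THEN_INFORMS
```

The point of the sketch is not the particular threshold but the shape of the scale: “autonomous” names a position on a continuum, and the human work above and below any chosen cutoff does not disappear.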


This article is part of the Co-designing with Machines Edition. Read other articles in this edition.

Like what you’re reading? Ethnography Matters is a volunteer-run site with no advertising. We’re happy to keep it that way, but we need your help. We don’t need your donations; we just want you to spread the word. Our reward for running this site is watching people spread our articles. Join us on Twitter, tweet about articles you like, share them with your colleagues, or become a contributor. Also join our Slack to have deeper discussions about the readings and/or to connect with others who use applied ethnography; you don’t have to have any training in ethnography to join, and we welcome all human-centric people, from designers to engineers.

