Sounding an Alarm About AI

Although Liz O’Sullivan ’12 loves the “magic” of technology, she says she is concerned about companies deploying artificial intelligence with “their fingers crossed that … it won’t do harm.” (Jason D. Smith ’94)

Liz O’Sullivan ’12 made national news when she quit her job with a company employing the growing power of artificial intelligence, becoming a public voice for concern within the tech industry about AI’s expanding reach and how it’s used.

O’Sullivan left her job in 2019 as director of customer solutions for New York-based Clarifai to protest the company’s work on Project Maven, a Department of Defense program.

Clarifai’s technology specializes in computer vision and machine learning, used for tasks such as helping social media monitors spot obscene photos or doctors look for signs of disease. The military wanted to harness that power to help its analysts sort the blizzard of still and moving images that bombard them, picking out and prioritizing targets for drones.

Liz O’Sullivan ’12

O’Sullivan and other Clarifai workers feared the company’s artificial intelligence would be used to build autonomous weapons, programmed to find targets and kill people without human intervention. “With respect to military contracts, is it even possible for us (or any private sector company) to know whether our aerial photography object recognition will be combined with drones that have the power to make a lethal decision without a human in the loop?” she asked in an open letter to Clarifai’s CEO on behalf of concerned employees shortly before she quit.

CEO Matt Zeiler defended the work. “We’re not going to be building missiles or any kind of stuff like that at Clarifai,” he told National Public Radio. “But the technology … is going to be useful for those. And through partnerships with the DOD and other contractors, I do think it will make its way into autonomous weapons.”

Zeiler, noting that other countries already were developing such technology, said the improved accuracy could save soldiers’ and civilians’ lives. “If we can provide the best technology so that they can accurately do their mission, in the worst case, there might be a human life at the other end that they’re targeting,” he told NPR. “But in many cases it might be a weapons cache, [without] any humans around, or a bridge, to slow down an enemy threat.”

Current Department of Defense policy requires that such weapons allow military personnel “to exercise appropriate levels of human judgment.”

In a later New York Times article about companies’ efforts in developing facial recognition technology — including Clarifai’s using images from the OkCupid dating app and an unidentified social media company — Zeiler positioned Clarifai’s intentions in the industry as benign. “There has to be some level of trust with tech companies like Clarifai to put powerful technology to good use and get comfortable with that,” he told the Times.

Other tech companies have been drawn into controversies over military use of artificial intelligence. Google didn’t renew its Project Maven contract after thousands of employees demanded it withdraw. Microsoft employees have objected to the company’s work with Immigration and Customs Enforcement.

“It’s a historic moment of the employees rising up in a principled way, an ethical way and saying, ‘We won’t do this,’ ” Ben Shneiderman, a computer scientist at the University of Maryland who researches human and computer interaction, told NPR about O’Sullivan’s and other tech workers’ concerns.

O’Sullivan, who grew up in Staten Island, N.Y., and Charlotte, calls herself a conscientious tech objector. Since leaving Clarifai, she has co-founded ArthurAI, where she is VP/commercial; the company works with financial services, insurance, health care and other industries to develop, deploy and keep control over their AI products and services. She also is a leader of a tech-watchdog group, the Surveillance Technology Oversight Project (STOP), where she monitors and draws attention to potentially harmful effects of technology, even as she says she has loved the “magic” of it her entire life.

You’ve been described as an artificial intelligence activist. What is that?

Politicians are, in a lot of cases, not very deeply informed about technology, and they just don’t know how powerful it is. For instance, the reason the [Federal Trade Commission] allowed Facebook to acquire WhatsApp and Instagram — looking back on that, we can see there was a huge issue there about combining data, which causes a consolidation of power. But I think they missed it, and they didn’t understand the potential implications of giving that much surveillance power to one organization. So, my activism typically takes the form of trying to get people talking about facial recognition or autonomous killer weapons, and the goal, hopefully, is that they reach out to their lawmakers and tell them that they don’t want this technology.

Would you have quit Clarifai had it been working on anything weapons related — or what about this particular project could you not be a part of?

It would be hard for me to have felt comfortable working on regular weapons. But object localization is exactly the capability a computer vision-powered drone would need to select, acquire and destroy its own targets. And because there is no regulation prohibiting it from being used in a fully autonomous weapon, it’s hard to imagine that it won’t be. I don’t believe that we’re exaggerating when we worry that this could be part of an extinction event, or, at the very least, ethnic cleansing and oppression of marginalized communities around the world.

Positive and negative, what were the reactions to your decision to quit?

Any time you kind of go out on a limb and take a stand like that, the fear is that you’re never going to work in tech again and that you’re always going to be a pariah. I had pretty much given up hope of having a career in tech after that. And certainly it’s closed a bunch of doors; I don’t think Google or Microsoft would ever hire me. But it doesn’t matter, because there’s this whole other world of people who are monitoring things, the social scientists and the nonprofits. They were there kind of waiting with open arms.

You’re also worried about facial recognition technology. Why?

This is just one part of this broad encroachment of government surveillance and private surveillance into the lives of everyday people, both from a law enforcement standpoint and from a private consumer standpoint. Facial recognition is biometric ID at a distance, which means that if you have a camera that can see clearly from a distance, you can hypothetically track anybody going anywhere, doing anything. And right now, there’s a complete lack of [federal] regulation governing when and how and under what circumstances this technology can be used.

You’ve also launched ArthurAI. What’s that about?

I work for an AI monitoring platform where we’re trying to basically put in some safety controls for companies that are building AI. I still very much believe in the power of AI; I think it has the potential to transform the world. But it’s become painfully clear that it needs limits and controls; and right now, most companies are just kind of putting it out there, fingers crossed that it will do well and that it won’t do harm. But we’re seeing more and more that that’s just not the case. … Self-driving cars have killed pedestrians because they aren’t programmed to look for humans outside of crosswalks. Hiring algorithms have discriminated against women candidates at Amazon. Deepfakes [manipulated media in which a person in an existing image or video is replaced with someone else’s likeness] have given rise to revenge-porn attacks, and one developer tried to monetize the technology. Facial recognition and predictive policing are the driving forces behind the internment of the Uighurs in Xinjiang [China]. In Michigan, there’s a class-action lawsuit on behalf of 40,000 people who were falsely accused of fraud by an AI; some had their wages garnished, leading to bankruptcy, divorce and suicide. And AI has only recently come into widespread production use. Imagine what it can do unchecked at scale.

Technology seems so overwhelmingly powerful and ever-rapidly advancing. Is there really anything individuals can do to protect themselves?

Absolutely. Regulation is being discussed right now at the highest levels of government, but where it’s been most successful is in the states and cities. San Francisco, Oakland and Somerville, Mass., have banned facial recognition and, in some cases, put constraints on how law enforcement can use it. There’s a statewide ban being proposed in Michigan. Those are so important to prevent a ham-handed federal regulation that would supersede localities. I think there’s a real moment here for state and local governments, where we really do have a voice. They have the ability to put constraints on these grave abuses of power. … There’s much in discussion at the federal level, but nothing has been adopted. The Algorithmic Accountability bill [authorizing the FTC to require companies to conduct impact assessments of highly sensitive AI systems and correct any bias issues uncovered] is stuck in the Senate. [U.S. Rep.] Yvette Clarke [of New York] has proposed a deepfake bill, which has also stalled. Michigan hasn’t moved forward in some time, but there’s a new bill proposed in New York state that we’re excited to help along its way. The scariest thing on the table right now is White House guidance on AI regulation that subtly encourages regulators to consider federal preemption [of state and local ordinances].

— Tom Kertscher


 
