It’s a Good Thing: Is ethical AI possible?

From manufacturing to financial services, artificial intelligence offers huge opportunities for economic transformation. It also presents some ethical traps.

Credit: Samineh Afrough

Shalaleh Rismani, who researches the ethics of AI at UBC, is also a founding director of the Open Roboethics Institute

Artificial intelligence is the most significant technological shift of the past decade. With it comes a raft of moral considerations.

Last November, more than 100 people gathered in Microsoft’s airy downtown Vancouver offices to learn about the future of artificial intelligence. They listened as the panellists—Tim O’Brien, Microsoft’s GM of AI programs; Maya Medeiros, an intellectual property lawyer at Norton Rose Fulbright; and Tim O’Connell, CEO of Vancouver-based medical data company Emtelligent—hotly debated how to “design and deploy ethical AI.” 

Few in the crowd—or the public at large—would doubt that intelligent machines are taking over. The evidence is everywhere: from Google Maps recommending how we get from A to B, to Netflix offering a selection of movies “we might like to watch,” to Amazon suggesting a list of goods “we might like to buy.” AI is reshaping sectors from manufacturing to financial services, creating massive opportunities for economic transformation—as well as a few ethical traps.

Natalie Cartwright has her eyes trained on both. Cartwright’s Vancouver-headquartered firm, Finn AI, builds virtual banking platforms that are powered by artificial intelligence: white-label virtual assistant products, for retail banks such as BMO, that enable digital self-service, customer acquisition, fraud detection and smarter money management. But Cartwright, who has a BA in psychology and pathology, as well as a master’s degree in public health, is keenly aware of the social implications of this AI revolution. 

“Artificial intelligence is probably the most fundamental transition in technology that we will see in our lifetimes,” she says. “There’s a lot of opportunities to leverage it to do good in the world—and there’s also some potential risks that we need to be actively managing for.” To that end, she sits on the federal government’s Advisory Council on Artificial Intelligence, providing advice on how to build an AI industry that (per the government) “reflects Canadian values.”

“One of the core principles is building artificial intelligence that serves us,” Cartwright says. “We need to ensure that we’re building something that creates the world that we want to live in.” Is the AI inclusive? Is it human-centric? And how is bias accounted for? 

One of the much-discussed biases is the gendered nature of personal digital assistants. From Alexa to Siri, to the bobbing avatars on self-serve web pages, women are the ones in the “helper” role, programmed to talk in soothing, subservient ways (see below). “If you’re attaching gender to that, what does that mean for reinforcing the problems within our society?” Cartwright asks. “It’s a really easy thing to not gender your virtual assistants.”
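
Cartwright’s point is partly an engineering one: an assistant’s persona is, under the hood, just configuration. As a purely hypothetical illustration (this is not Finn AI’s code, and the field names are invented), a neutral persona is no harder to write than a gendered one:

```python
# Hypothetical sketch: an assistant's persona as plain configuration.
# Choosing non-gendered values is a one-line decision per field.
from dataclasses import dataclass

@dataclass
class AssistantPersona:
    name: str      # a product name rather than a human one
    pronoun: str   # how the assistant refers to itself
    greeting: str

persona = AssistantPersona(
    name="Finn",
    pronoun="it",
    greeting="Hi, I'm Finn. How can I help with your banking today?",
)

print(persona.greeting)
```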

Developing ungendered digital assistants is, arguably, the lowest-hanging fruit; harder to pick out are some of AI’s more hidden biases. Shalaleh Rismani, who researches the ethics of AI as an adjunct professor at UBC, is also a founding director of the Vancouver-based Open Roboethics Institute. Ethical machine learning can only come as a result of decisions made early on in the development of predictive algorithms, Rismani says. 

She points to a recent client that does safety inspections. The client was trying to develop an AI solution that would predict which work sites are most likely to be hazardous; AI wouldn’t replace the inspectors but guide them to where they’re needed most. Rismani helped the client re-evaluate the data being used and ensure that fields prone to bias—such as where the contractor was based, or their level of schooling—weren’t included, in favour of strictly technical attributes. If you don’t choose the right data sets, Rismani says, bias “just gets reinforced” in AI.
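
Neither Rismani nor her client has published code, so the following is only a minimal sketch in Python of the step she describes: dropping bias-prone fields before a predictive model ever sees them. The column names, the data and the choice of model are all hypothetical.

```python
# Minimal sketch (hypothetical data and columns): excluding bias-prone
# fields before training a hazard-prediction model.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

site_data = pd.DataFrame({
    # Bias-prone fields: proxies for who the contractor is
    "contractor_city":   ["Surrey", "Burnaby", "Surrey", "Richmond"],
    "education_level":   ["trade", "none", "degree", "trade"],
    # Strictly technical attributes of the work site itself
    "scaffold_height_m": [12.0, 3.5, 20.0, 8.0],
    "past_violations":   [2, 0, 5, 1],
    "crew_size":         [15, 4, 30, 9],
    "had_incident":      [1, 0, 1, 0],   # label: did an incident occur?
})

BIAS_PRONE = ["contractor_city", "education_level"]
features = site_data.drop(columns=BIAS_PRONE + ["had_incident"])
labels = site_data["had_incident"]

# Train only on technical attributes, so the model can't learn
# shortcuts based on where a contractor is from or their schooling.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(features, labels)
print(model.predict(features))
```

One caveat any practitioner would add: dropping a column is a first step, not a cure, since the remaining fields can still correlate with the removed ones—which is why Rismani frames the work as re-evaluating the data set as a whole.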

Is a truly ethical AI even possible? According to Microsoft’s Tim O’Brien, not really. “Mitigation, I think, is a more realistic goal. But even then, there’s a lot of subjectivity,” he told the crowd. “It’s a bit of a Russian nesting doll: with each ethical decision we make, there’s another ethical decision nested inside it.”