Making AI accountable easier said than done, says U of A expert

As artificial intelligence reshapes society, experts discuss how to make it transparent and accountable to the people it's meant to serve.

If you had to program a self-driving car, which option would you choose if only two were available: hit a pedestrian who suddenly appears in front of the vehicle or veer off into a baby carriage on the sidewalk?

It's the kind of ethical conundrum that could shape artificial intelligence in years to come, and one of many the University of Alberta's Geoffrey Rockwell has been pondering lately.

Earlier this month, the professor of philosophy and digital humanities joined a national brainstorming forum on the ethics of AI in Montreal, along with industry leaders, federal government officials and other academics, including philosophers.

They gathered to grapple with an industry currently worth US$7.4 billion, according to figures circulated at the forum, and expected to reach almost US$16 trillion by 2065, amounting to a seismic shift in how we live and work.

The forum followed the signing last June of the Canada-France Statement on Artificial Intelligence, meant to jump-start an international coalition charged with exploring the societal implications of a technology that promises to soon be as ubiquitous as the internet, only with the power to potentially make life-and-death decisions on our behalf.

The consensus among experts, said Rockwell, is that algorithms that make a difference in our lives, such as those that produce medical diagnoses or decide credit approvals, should be accountable and transparent. But the reality is that may be easier said than done.

"If you get turned down for a mortgage, you should be able to petition what data, what algorithm, it was based on," he said. And you should be able to petition the assumptions underlying the calculations.

"Some of the new, successful algorithms in AI produce black boxes that, by their very nature, are very hard to open up and inspect," said Rockwell, since machine learning relies on computers adapting to "thousands upon thousands of examples.

"It's very hard to look inside and get the transparency, but those are issues we have to start negotiating," said Rockwell. "It may require regulation, and even business people who are usually resistant to that are talking about it."

Including women

Another issue that dominated discussions at the Montreal forum was the inclusion of women in an industry dominated by men.

"The field in general has seen dropping participation by women, and it's been dropping since the '80s, to around 13 per cent," said Rockwell. "If AI is taking off now, now is the time to make sure they are included in the pipeline."

While more women were involved in software programming in the 1950s and '60s, that changed in the early 1980s with the introduction of the personal computer, which was heavily marketed to boys. The resulting gender gap was then reinforced in academia, he said.

"If you have a computer science program with no women in it, it's very hard to attract women students. You get a very male culture. It's fortunately not a problem here at the U of A, but in many other places in Canada it is."

Encouraging women to pursue computing science is one solution, but looking to that discipline alone to propel artificial intelligence into the future could prove to be drastically short-sighted, said Rockwell.

"If you look to other areas-whether library and information science or digital humanities programs like we have here at the U of A-there are lots of women.

"We also really need to be thinking about inclusion of Indigenous students too, and I don't think anyone has wrapped their head around how to do that."

Automation angst

Top of mind for many, said Rockwell, are the jobs lost as automation replaces traditional labour in mining, farming, manufacturing and other large-scale industries. The biggest losses could be among rural men, he said, which would add to growing discontent in western industrial societies.

"Physical or mental jobs that are repetitive will be the first targets for automation, and that could include farming and mining," he said.

"What's going to happen politically if you have a lot of rural, white men out of work? We need to be prepared for that. AI could lead to the increased disgruntlement of white men."

Jobs most likely to be "robot-proof" are those with a high interpersonal component, such as those in the service sector, or those requiring advanced education and creativity, he said.

The U of A, along with the universities of Toronto and Montreal, is one of three universities spearheading the social inquiry around AI, said Rockwell.