Article 9 min read

Regulating AI—a call for transparency and ethical use

By Susan Lahey

Last updated September 21, 2021

Artificial Intelligence (AI) is often considered a scary thing. It can be used, and has been, to destabilize governments, as in the case of Russian bots. It can be used to collect your personal information and disseminate it for a profit. In some cases, it can do your job, even if you’re an artist. It can be an engine of bias and discrimination.

AI can also be really useful. We already use it all day, every day—in our phones, on social media, searching the internet, requesting customer service; the list goes on. AI is used globally for everything from agriculture to defense. It’s even used in space. So it’s not surprising that many sessions at SXSW 2019 focused on or touched on AI. Discussions ranged from practical implications—like AI-enabled scientific research—to emotional and aesthetic uses. One woman, for example, is using AI to capture interviews with people so their loved ones can “converse” with them after they die. And many people talked about how AI is going to impact work.

As rapidly as AI has mushroomed and seemingly ‘taken over the world,’ its version of intelligence is still limited. It can gather and process data with incredible speed, but it has no moral center and no comprehension of nuance. It is capable of doing great harm as well as great good. Yet many of those in a position to regulate AI seem hesitant to impose laws regarding its creation or use.

Who is in charge of governing or regulating AI?

One panel, “Algorithms Go to Law School: The Ethics of AI” included Lucilla Sioli, director of AI & Digital Industry for the European Commission; Francesca Rossi, AI Ethics Global Leader for IBM; Tess Posner, CEO of AI4All; and Lynne Parker, assistant director for Artificial Intelligence for the White House Office of Science and Technology Policy.

This international panel, like others, concurred that rules created around AI in one part of the world have to apply in another, since so much software and so many devices are used around the globe. But while the panelists all discussed the importance of transparency and respect for human dignity, few specific laws or regulations governing the programming or use of AI were named. The panel gave the impression that AI regulation would be a long time coming; meanwhile, more applications are being devised and deployed every day.

A group out of the European Union has created a set of guidelines for the programming and use of AI. The only part of this that has any teeth, at present, is governed by the General Data Protection Regulation (GDPR), which requires companies collecting and using information through cookies on websites to notify users that they’re doing so. The White House has also issued an executive order laying out broad guidelines for how various departments should approach AI. The guidance speaks in general terms about leading the world in AI and protecting civil liberties, but offers no funding and leaves specifics up to the agencies using the technology.
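To make the cookie requirement concrete, here is a minimal sketch of consent-gated cookies in Python with Flask. The route names, cookie names, and messages are all hypothetical illustrations, not language from the GDPR or any real site.

```python
# A minimal sketch of GDPR-style cookie consent gating, assuming a Flask app.
# All names ("analytics_consent", "visitor_id", the routes) are invented.
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    consented = request.cookies.get("analytics_consent") == "yes"
    body = "Welcome!" if consented else "Welcome! We use cookies; see our banner."
    resp = make_response(body)
    if consented:
        # Non-essential (tracking) cookies are set only after explicit opt-in.
        resp.set_cookie("visitor_id", "abc123")
    return resp

@app.route("/consent", methods=["POST"])
def consent():
    # Record the user's opt-in so later requests may set tracking cookies.
    resp = make_response("Thanks, preference saved.")
    resp.set_cookie("analytics_consent", "yes", max_age=60 * 60 * 24 * 365)
    return resp
```

The point of the sketch is the ordering: notification and consent come first, and tracking cookies are only ever set afterward.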

There are no laws, at present, requiring that, say, a company deploying discriminatory AI be subject to the same penalties as a company committing discrimination in the “real” world.

As the panelists hedged on naming specific regulations, one audience member stated: “I don’t know why we’re being precious about innovation. AI is not in its infancy anymore. It’s a teenager; and it’s behaving badly.”

[Read also: How AI assistants close the gaps in customer service]

Too many use cases

If you know how to build an AI-powered product, there’s no one looking over your shoulder to make sure it’s not designed to poke around in places where it shouldn’t. That’s a little chilling, since AI is being used in all kinds of ways. One set of experiments puts an implant in the brains of amputees or people with spinal injuries so they can use their limbs. Cool, right? It’s a proof of concept, but it raises the question: what else could be programmed into a brain implant?

As many pointed out in various SXSW sessions, your phone could be listening to you all the time and generating suggestions for things to buy based on your “private” conversations. On most websites, people sign away their privacy rights without even thinking. Facebook claimed the 10-year challenge, for example, was a user-generated meme and not a sneaky effort to crowdsource inputs to train facial recognition software. But it could have been.

And what about the fact that AI is constantly being deployed that perpetuates racial, gender, and other biases? The examples are legion.

A 2019 study from the Georgia Institute of Technology showed, for example, that the object-detection systems used in self-driving cars are worse at recognizing pedestrians with darker skin. AI used for screening job applicants often assigns data from one person to another, costing candidates work. Nonny de la Peña, an immersive storyteller who spoke in a session on women and branding, noted that Amazon couldn’t figure out why—despite all its efforts to hire more women—it kept hiring men. Then Amazon realized that its AI was favoring applicants who used the words “capture” and “execute,” terms used by significantly more men than women.
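To see how that kind of skew arises mechanically, here is a deliberately toy sketch in Python (not Amazon’s actual system, which was never published): a classifier trained on historical hiring decisions assigns positive weight to whatever words co-occur with past hires, including gendered proxies. The resumes and labels below are invented.

```python
# Illustrative only: how a resume screener trained on historical decisions
# can latch onto word choice as a proxy for gender. Toy data, not real resumes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed the launch and captured key accounts",      # historically hired
    "captured market share, executed the roadmap",        # historically hired
    "coordinated the launch and supported key accounts",  # historically rejected
    "organized outreach, facilitated the roadmap",        # historically rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model now scores any resume containing those verbs higher,
# reproducing the historical skew rather than measuring merit.
for word, coef in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                         key=lambda pair: -pair[1])[:5]:
    print(f"{word}: {coef:+.2f}")
```

Nothing in the pipeline is told about gender; the bias rides in entirely on correlations in the training data, which is what makes it hard to spot.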

There is no governing body to address those issues. Panelists talked about the importance of transparency, civil liberties, and human dignity, but offered few specifics for implementation. Everyone seems tentative about making—and having to enforce—hard-and-fast, ethics-based rules about how AI is programmed and used.

Free market or international law?

The EU guidelines do say that AI should be transparent—similar to the “bot law” recently passed in California, which will require anyone doing business with California customers, beginning July 1, 2019, to let them know when they’re talking to a chatbot instead of a human. This could impact the customer service industry in ways as yet unknown. While many companies are working diligently to make chatbots more human and blur the distinction, this law might push in the other direction. Instead of making people uncertain about the “Am I talking to a human?” question, chatbots might become more botlike, a tool for self-service-oriented customers. That’s just a guess.
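For illustration, a disclosure requirement like California’s could be satisfied with something as simple as the following Python sketch. The wording and function names are hypothetical, not language from the statute.

```python
# A minimal sketch of bot-status disclosure on first contact.
# `generate_answer` stands in for whatever chatbot backend a company uses.
DISCLOSURE = "You're chatting with an automated assistant, not a human."

def reply(user_message: str, history: list[str]) -> str:
    """Answer a customer message, disclosing bot status on the first turn."""
    answer = generate_answer(user_message)
    if not history:  # first turn of the conversation
        return f"{DISCLOSURE}\n\n{answer}"
    return answer

def generate_answer(msg: str) -> str:
    # Placeholder for a real chatbot model or rules engine.
    return "Thanks for reaching out! How can I help with your order?"
```

The interesting design question is exactly the one raised above: once the disclosure is mandatory, there is less payoff in making the bot sound human.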

The guidelines state that whatever function the AI serves, AI training should take care to respect civil liberties and avoid discrimination. They also say that AI should be ethical and human-focused—serving human autonomy and doing no harm.

“The correct approach…heavily depends on specific details of the AI system, its area of application, its level of impact on individuals, communities or society and its level of autonomy,” the EU guidance says. “The level of autonomy results from the use case and the degree of sophistication needed for a task. All other things being equal, the greater degree of autonomy that is given to an AI system, the more extensive testing and stricter governance is required.”

It continues: “Mechanisms can range from monetary compensation in circumstances where AI systems caused harm, to negligence or culpability-based mechanisms for liability, to reconciliation, rectification and apology without the need for monetary compensation.”

But what that actually means in terms of specific regulation is unclear.

“Standards are usually not enforced, just adopted,” said the White House’s Lynne Parker. “When a standard has been defined, if the big players decide to adopt it, all the other players want to come in.”

Parker’s approach seemed to favor the idea that a free market economy would force decent behavior in the AI world. “We don’t want fear mongering: ‘AI might be biased against me,’” she said. “It’s not the bogeyman behind the scenes making decisions against you. Part of the education is recognizing that AI is not out to get us…though the AI techniques can have shortcomings, such as decisions that have unbalanced effects on different groups of people.”

Sioli took a different tack. The EU, she said, is working to align its AI ethics policies with the United Nations’ 17 Sustainable Development Goals—though she, too, gave no concrete examples of regulations. And the EU is not leaving it up to the market to play nicely. “International cooperation is very important,” she said, “because it’s crucial that our citizens in Europe can also trust the AI that comes from other parts of the world.”

[Read also: Why AI will transform how customer service teams work]

After all, no one has forgotten the (still unverified) reports of spy chips in Chinese-manufactured hardware. And Europe’s GDPR was, in part, a direct response to U.S.-based social media companies collecting and selling information without disclosing what they were doing. The Cambridge Analytica scandal involved data mining and AI-driven information dissemination meant to sway the 2016 U.S. presidential election. Cambridge Analytica was fined roughly $19,700 over its handling of personal data.

The dialogue is part of the process

The European Union has set up a forum for people to weigh in on topics like the ethics of biometrics and discrimination in AI. An examination of standards will begin in the summer of 2019, and next steps will be discussed in 2020.

Beyond making AI transparent, ethical, and fair, panelists across SXSW suggested democratizing it by making it more of a commodity. Right now, only a fraction of the population understands the mechanics of AI. On the other hand, plenty of people drive cars who can’t explain the workings of the internal combustion engine. AI, it was argued, should not be the purview of the few, but the commodity of the many.

Slowly, the world is moving from being amazed and terrified by what AI can do to taking practical steps to ensure it doesn’t fuel the dystopian nightmare we fear, in which our worst qualities and biases are learned and amplified at an alarming rate. But it’s going to be a process—one that begins, at least, with talking about the ramifications.
