AI Summit Silicon Valley 2021: Accelerating AI Through Trust and Ingenuity

The first day of the conference saw experts from Microsoft and IBM Watson offer insights into their AI work.

Ben Wodecki, Junior Editor - AI Business

November 5, 2021


Business leaders in AI must rely on the resources of their teams and not solely on technology, according to Mitra Azizirad, corporate vice president for Microsoft AI and Innovation.

Azizirad told attendees of the 2021 AI Summit Silicon Valley that bringing together the ingenuity of people with technology can “truly change the world.”

“Adapting and changing is about so much more than tech – it’s the combination of human and machine that will help organizations both reimagine and transform their businesses,” she said, adding, “human ingenuity with AI is truly a force multiplier.”

Azizirad cited a McKinsey report that found that 61% of high-performing companies increased their investment in AI during the COVID-19 crisis.

“This underscores just how integral AI capabilities have become in terms of how work gets done,” she said.

“Even before the pandemic, my team and I were working with many customers around the best ways to inculcate an AI-ready culture in their organizations.”

Transparency and trust

In a later session, IBM Watson’s chief AI officer, Seth Dobrin, said that trust is a key part of how to enable the adoption of AI and drive it at scale.

Dobrin told the AI Summit attendees that achieving trustworthy AI requires thinking holistically.

“In business, we need to understand context, jargon, and code to get a better sense of data that hasn’t been mined in a while.”

Dobrin touched on potential regulations related to trust in AI.

One jurisdiction that’s pressing ahead on this is the EU, with the bloc’s proposed ‘Artificial Intelligence Act’ potentially forcing all AI systems to be categorized in terms of their risk to citizens’ privacy, livelihoods, and rights.

Any system determined to pose ‘unacceptable risk’ would be outright banned, while those deemed ‘high risk’ would be subject to strict obligations before they can be put on the market.

Dobrin said that such governance shouldn’t cover all AI, but only systems that affect human health and employment.

“As corporations, it’s our responsibility that not just us produce trustworthy AI,” he said, adding that teams like his need to engage with the wider community on AI governance.

“Transparency drives trust; without it, you’re not going to get people to trust AI.”

He likened his ideal for AI transparency to the nutritional labels on food, saying AI disclosures should be just as easy to understand.

About the Author

Ben Wodecki

Junior Editor - AI Business

Ben Wodecki is the junior editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to junior editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others.
