Knowledge gaps, inaccuracies remain a weak spot for AI

Deborah Yao, Editor, AI Business

November 4, 2022

6 Min Read

Twenty-five years ago, a supercomputer faced a world chess champion and won in a rematch seen around the world. In a tense 6-game competition in New York, IBM’s Deep Blue prevailed over Garry Kasparov, still regarded as one of the greatest chess players of all time.

Today, Kasparov met one of the founding members of the Deep Blue team, IBM Distinguished Research Scientist Murray Campbell, at IoT World and the AI Summit Austin in Texas. The years have made Kasparov more contemplative, but he still bristled slightly over his 1997 defeat.

“This match, by the way, was a rematch. I won the first one,” Kasparov pointed out at the event.

That is true. Kasparov first competed against Deep Blue in 1996 in Philadelphia and won the 6-game match 4-2. A year later, IBM asked for a rematch in New York City, and its team spent the intervening year enhancing Deep Blue’s capabilities. Kasparov wanted a third match, but IBM declined.

Reflecting on the pace of innovation at the time, Kasparov said a machine prevailing over human champions was “just a matter of time.” Now the pace of technology has advanced such that “today if you have a chess app on your phone it’s better than Deep Blue,” he said.

Kasparov said he also felt it was “my duty as a chess champion to accept the challenge. Yes, I lost the match and it was a personal sacrifice, but I think it inspired tens of thousands of young computer experts.”

“It was a milestone not just for AI but it opened up new horizons,” Kasparov said. 

However, IBM purposely avoided using the term artificial intelligence at the time to describe Deep Blue.

“There was the movie, 2001, that had come up,” Campbell said. “Everybody knew AI had some concerns, so we often used the term data analytics or data science or other terms like that in order to soften it. But in fact, Deep Blue was an AI system, as we now call it.”

But Kasparov rejects a black-and-white view of AI. “We should consider all these technologies as products of human ingenuity. … AI is not a magic wand. It’s not a harbinger of utopia or dystopia. It’s something that we invented. And as machines in the past made us stronger and faster, I believe AI will make us smarter.”

“I’m very skeptical about all these singularities and Doomsday predictions, for a simple reason,” Kasparov continued. AI still has a long way to go. “Everything that we have been dealing with now … we’re dealing with closed systems: chess, Go, all the video games, StarCraft. The problem is machines have yet to find a way to transfer data from one closed system to another.”

Kasparov argued that if one trains a computer to play one game, it has to be trained again to play another game. “There’s always room for humans to play a pivotal role. 

“So machines can do better in 95% (of the cases) but still there’s room and the future very much depends on our ability to work with smart machines,” Kasparov added. “We have to recognize what the machine needs for these tasks and what is missing − what is the human contribution to make this … as close as possible as 100%.” 

“The human role has been shrinking but it will never disappear.”

Plus ça change … 

But while AI has made big strides, fueled by an explosion of data and computing power, in many ways things have remained the same.

“AI systems tend to be quite narrow, focused on a particular task. In the past two to three years, I think that trend has started to shift where AI systems, particularly based on large language models, can do more than one thing,” Campbell said. 

“But for the most part that hasn’t changed much,” Campbell continued. “In 25 years … AI systems for the most part tend to be as narrow now as they were in 1997.”

Priya Krishnan, IBM’s director of governance and data science product, agreed that even with the explosion in data and computing power of the last 25 years, “it’s still a challenge out there.”

Campbell added that “computer chess gives us a good preview of what we’re going to see in many other fields.” 

Campbell noted four phases in AI development. In the first phase, computers are too weak to help even a strong player. In the second, computers do certain things quite well but are still not as good as people, and humans remain in charge.

The third phase is when the computer gets “very good but with gaps in knowledge where humans can influence it and make it better,” Campbell said. The fourth phase is when it becomes “very hard for humans to influence computers to make it play better.” He said chess is in this fourth phase.

However, software engineering is still in the second phase, where computers are “very good at doing certain things and can help software engineers develop their systems,” Campbell added. For example, a system called Copilot is used by many software engineers to suggest small snippets of code. He believes this trend will continue to make developers more productive for years.

Another factor that continues to plague computers is knowledge gaps and inaccuracies, Kasparov said.

For example, even in powerful chess-playing AI engines of today, there could be a “gap between the value of a bishop and a knight,” Kasparov noted. Due to the millions of chess games the computer was trained on, it would statistically value the bishop much higher than the knight. 

The truth is it all depends on what the player wants to accomplish. But “for the machine to actually understand the inconsistency … it will need (to experience) tens of thousands” of losses before adapting to something a human picks up immediately, Kasparov said.
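As a rough illustration of the valuation gap Kasparov describes (this sketch is not from the article), classical chess engines traditionally start from static material values in which a bishop and a knight are worth roughly the same, while an engine trained on millions of games may statistically prefer one over the other regardless of the position:

```python
# Hypothetical sketch of naive material evaluation, assuming the
# conventional textbook piece values (pawn=1, knight=3, bishop=3,
# rook=5, queen=9). Real engines weigh position, mobility and pawn
# structure as well -- which is exactly where the "gap" appears.

PIECE_VALUES = {"pawn": 1.0, "knight": 3.0, "bishop": 3.0, "rook": 5.0, "queen": 9.0}

def material_score(white_pieces, black_pieces):
    """Sum static piece values; a positive score favors White."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# A static table calls two bishops vs. two knights dead equal, yet a
# human player instantly sees that the answer depends on what the
# position demands -- open lines favor bishops, closed ones knights.
score = material_score(["bishop", "bishop", "pawn"], ["knight", "knight", "pawn"])
print(score)  # 0.0 under static values
```

The point of the sketch is that a fixed table, or a statistically learned preference, encodes one answer to a question whose real answer is contextual, which is the kind of inconsistency Kasparov says a machine needs thousands of losses to discover.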

Campbell cited a new research paper that found a way to defeat a top AI system at Go, a game more complex than chess. The researchers “showed that this highly proficient system can be defeated by a very simple system, by using a technique showing that there are gaps.”

“I’m concerned that whether it be games or software development or other fields, no matter how good these AI systems get, there are going to be gaps that a human can see just from our experience and world knowledge that the AI systems can’t recognize themselves,” Campbell said.

This article first appeared in IoT World Today’s sister publication AI Business.

About the Author(s)

Deborah Yao

Editor, AI Business

Deborah Yao is an award-winning journalist who has worked at The Associated Press, Amazon and the Wharton School. A graduate of Stanford University, she is a business and tech news veteran with particular expertise in finance. She loves writing stories at the intersection of AI and business.


