Artificial intelligence

AI Innovation: Unveiling the Power of Knowledge-Driven Artificial Intelligence

The winter of 1958 marked a significant milestone in the field of artificial intelligence (AI). Frank Rosenblatt, a 30-year-old psychologist, was on his way from Cornell University to the Office of Naval Research in Washington when he stopped for a coffee with a journalist. During their conversation, Rosenblatt unveiled a discovery that captured the attention of the nascent field of computing: he had created “the first machine capable of having original ideas.”

Rosenblatt’s creation was the perceptron, a program inspired by human neurons, which ran on a new generation of computers: an IBM mainframe that weighed five tons and towered like a wall. Give the perceptron a deck of punched cards, and it would learn to distinguish those marked on the left from those marked on the right. The task was trivial, but the machine was learning.
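For readers who want to see the idea in code, below is a minimal sketch of a perceptron learning a left-versus-right card task similar to the one described above. The data, the learning rate, and the function names are illustrative assumptions, not Rosenblatt’s original setup.

```python
# A minimal perceptron sketch (illustrative, not Rosenblatt's original program).
# Each "card" is a list of 0/1 features; the label is -1 for cards marked on
# the left and +1 for cards marked on the right.

def train_perceptron(samples, epochs=20, lr=0.1):
    n_features = len(samples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in samples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation >= 0 else -1
            if prediction != label:  # update weights only on mistakes
                weights = [w + lr * label * x for w, x in zip(weights, features)]
                bias += lr * label
    return weights, bias

# Toy punched cards: a 1 means a hole in that position.
cards = [
    ([1, 0, 0, 0], -1), ([0, 1, 0, 0], -1),  # marked on the left
    ([0, 0, 1, 0], +1), ([0, 0, 0, 1], +1),  # marked on the right
]
weights, bias = train_perceptron(cards)
print(weights, bias)
```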

Rosenblatt believed that this discovery marked the dawn of a new era, and the New York journalist clearly agreed. “It seems to be the first serious rival to the human brain,” the journalist wrote. Asked what the perceptron couldn’t do, Rosenblatt pointed to love, hope, and despair. In short, the complexities of human nature. “If we don’t understand the human sexual instinct, why would we expect a machine to?” he remarked.

The perceptron was the first neural network, a primitive forerunner of the far more sophisticated “deep” neural networks that underpin most modern AI. Yet almost 70 years after Rosenblatt’s discovery, there is still no serious rival to the human brain. “What we have today are artificial parrots,” says Professor Mark Girolami, Chief Scientist at the Alan Turing Institute in London. “It’s fantastic progress in itself, providing us with great tools for the benefit of humanity, but we shouldn’t get carried away.”

The history of AI, as it is told today, is crowded with origin stories, many of which name the same founding fathers. Rosenblatt is sometimes called the father of deep learning, a title he shares with three others. Alan Turing, the codebreaker and founder of computer science, is also considered the father of AI. He was one of the first to seriously contemplate the idea that computers could think.

In his 1948 report “Intelligent Machinery,” Turing explored how machines could mimic intelligent behavior. One possible path to a “thinking machine,” he noted, was to replace parts of a person with machinery: cameras for eyes, microphones for ears, and “some kind of electronic brain.” To acquire knowledge, the machine would be allowed to “roam the countryside,” Turing quipped, though the “danger to the ordinary citizen would be serious.” He dismissed the idea as too slow and impractical.

However, many of Turing’s ideas endured. Machines could learn, just as children do, he said, with the help of rewards and punishments. Some machines could modify themselves by changing their own code. Today, machine learning, rewards, and self-modification are fundamental concepts in artificial intelligence.
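As a rough illustration of learning from rewards and punishments, here is a minimal sketch of a program that tries two actions and gradually favors the one that pays off. It is a toy in the spirit of Turing’s remark, not any specific historical system; the action names and reward values are made up.

```python
import random

# Illustrative rewards: a "good" action is rewarded (+1), a "bad" one punished (-1).
REWARDS = {"press_button": 1.0, "pull_lever": -1.0}

def learn_by_reward(rewards, steps=1000, lr=0.1, epsilon=0.1):
    values = {action: 0.0 for action in rewards}         # current value estimates
    for _ in range(steps):
        if random.random() < epsilon:                     # occasionally explore
            action = random.choice(list(values))
        else:                                             # otherwise pick the best so far
            action = max(values, key=values.get)
        reward = rewards[action]                          # reward or punishment
        values[action] += lr * (reward - values[action])  # nudge estimate toward the outcome
    return values

print(learn_by_reward(REWARDS))  # the rewarded action ends up valued near +1
```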

As a measure of progress toward thinking machines, Turing proposed the Imitation Game, known today as the Turing Test, which asks whether a human, judging from a series of written exchanges, can tell whether they are conversing with another human or with a machine.

It’s a brilliant test, but attempts to pass it have caused a great deal of confusion. In one recent claim, researchers said they had passed it with a chatbot posing as a 13-year-old Ukrainian boy with a pet guinea pig named Beethoven that squealed the Ode to Joy.

Girolami says Turing made another major contribution to AI that often gets overlooked. A declassified document from Turing’s time at Bletchley Park reveals how he relied on a method called Bayesian statistics to help decipher encrypted messages. Word by word, Turing and his team used the statistics to answer questions like, “What is the probability that this particular German word generates this encrypted set of letters?”
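To make the idea concrete, here is a minimal sketch of the Bayesian update behind that kind of question: a prior belief that a guessed German word sits at a given position is combined with how likely the observed ciphertext letters are under each hypothesis, yielding an updated (posterior) belief. The probabilities below are purely illustrative, not values Turing used.

```python
# Bayes' theorem:
#   P(hypothesis | evidence) = P(evidence | hypothesis) * P(hypothesis) / P(evidence)

def posterior(prior, likelihood_if_true, likelihood_if_false):
    # Total probability of the evidence under either hypothesis.
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Illustrative numbers: a 30% prior that the guessed word is present, and
# ciphertext letters that are five times more likely if the guess is right.
print(posterior(prior=0.30, likelihood_if_true=0.05, likelihood_if_false=0.01))
# -> about 0.68: the evidence strengthens the hypothesis.
```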

A similar Bayesian approach now powers generative AI programs that create essays, artworks, and images of people who never existed. “There has been an entire parallel universe of Bayesian-based activities in the last 70 years that has made generative AI possible today, and we can trace it back to Turing’s work on cryptography,” says Girolami.

The term “artificial intelligence” wasn’t coined until 1955, when John McCarthy, a computer scientist at Dartmouth College in New Hampshire, used it in a proposal for a summer research workshop. He was remarkably optimistic about the prospects for progress.

“We believe that significant progress can be made … if a carefully selected group of scientists work on it together,” he wrote.

“It was the post-war period,” says Dr. Johnny H.P. Poon, Professor of AI Ethics at the University of Cambridge. “The U.S. government realized that nuclear weapons had won the war, so science and technology could not have been held in higher regard.”

In the end, those who gathered made less progress than hoped. Nevertheless, researchers plunged into a golden age, building programs and sensors that equipped computers to perceive and react to their environment, solve problems, plan tasks, and engage with human language.

Computer programs carried out commands given in simple English on unassuming cathode-ray monitors, while laboratories showed off robots that trundled about, bumping into tables and drawers. Speaking to Life magazine in 1970, Marvin Minsky, a leading AI figure at the Massachusetts Institute of Technology, said that within three to eight years the world would have a machine with the general intelligence of an average person. It would be able to read Shakespeare, grease a car, tell jokes, play office politics, and even get into fights. Within a few months of educating itself, its capabilities would be “unbounded.”

The bubble burst in the 1970s. In the UK, the leading mathematician Sir James Lighthill penned a scathing review of AI’s meager progress, which led to immediate funding cuts.

Revival came with a new wave of scientists who saw knowledge as the solution to AI’s problems.

Their goal was to transfer human knowledge directly into computers. The most ambitious of these efforts was Cyc. It was supposed to contain all the knowledge a person has and […]
