Artificial Intelligence

Can Your Vacuum Robot Kill Your Pet…or You?


As the inherent dangers of artificial intelligence (AI) and the machines that carry it are currently the subject of much debate, it’s worth considering the most serious consequence of these machines: death by robot. Unfortunately, robot killings are all too common. Since the first recorded robot-related fatality in 1979, more than 50 people have lost their lives after being crushed or otherwise struck by malfunctioning machines. A study of fatalities between 1991 and 2017 documented 42 such deaths, and other cases have almost certainly occurred outside that timeframe.

I am interested in this topic for two main reasons. First, AI’s real-world utility still seems limited to mundane tasks such as household cleaning and indoor climate control. Second, there appears to be little research on its ethical and legal implications beyond commercial gain.

To illustrate a similar dilemma, recall what happened after the American physicist Robert Oppenheimer led the development of the atomic bomb in the 1940s. His concerns about military control over the further development of nuclear weapons drew him into seemingly endless disputes. After the war, during the American anti-communist movement of McCarthyism, Oppenheimer was effectively accused of treason. Although he was eventually declared loyal to his country, he was excluded from government roles that required security clearance. This witch hunt harmed many scientists who respected Oppenheimer, including Albert Einstein, and led to conflicts with Edward Teller, the principal developer of the far more destructive thermonuclear bomb. Are we facing a similarly sharp but ultimately fruitful debate about AI? It seems to be heading in that direction.

Particularly lively is the debate now under way in the United States ahead of the next presidential election, in which candidates and experts argue over the dangers of fake images and documents generated by artificial intelligence. Controversies in this area are nothing new. I remember a day in 1973 when an angry truck driver burst into my office at the Taranaki Herald newspaper, demanding that we retract a story showing he had broken the law. He denied crossing the center line while negotiating a bridge between New Plymouth and Inglewood, claiming we had used darkroom tricks to move his truck. We had done no such thing, and I doubt there was even a way to do it back then.

A similar claim was made towards the end of that decade, when the New Zealand Herald published a photograph of bulls that had escaped from a livestock truck and had to be caught on the approaches to a bridge in Auckland. The picture showed an animal with all four hooves off the road but no shadow beneath it. Clearly, outraged readers claimed, the bull had been pasted in. It hadn’t, but they wouldn’t let it go.

As far as I know, such tricks became widely available in the United States in the mid-1980s. I was touring some of America’s top newspapers, and when I got to the Los Angeles Times, they showed me that kind of image processing in operation. They photographed me, then demonstrated on a computer how the press pass on my jacket could be made to disappear. That is easy to do today with Adobe Photoshop and similar programs, but it was new in 1986. I was impressed but wondered about the ethical risks. No problem, said the illustrations editor: “We’re not allowed to use it for editorial purposes. It’s just for the advertising department…”

Major media outlets around the world can point to internal codes that limit such manipulation, but most, if not all, of these guidelines are voluntary, subject only to industry-led consequences and to the expensive legal oversight that will likely emerge as artificial intelligence plays a growing role. Even so, this is at least better than what passes for “news gathering” on social media. As you can see, I am skeptical of artificial intelligence and, as I have argued before, its literacy is still primitive. I agree with Gwynne Dyer: the focus of AI is still narrow and commercial. Meanwhile, packaging workers will continue to die at the “hands” of robots, and some people will continue to split infinitives.

FAQ:

1. How many people have been killed by robots?

Since the first recorded robot-related fatality in 1979, more than 50 people have lost their lives after being crushed or otherwise struck by these machines.

2. How many fatalities were recorded between 1991 and 2017?

A study documented 42 robot-induced deaths during this period.

3. Are there ethical research studies on artificial intelligence?

Although there are many discussions on the commercial benefits of artificial intelligence, there seems to be a lack of research on the ethical implications.

4. Is there a danger of fake images and documents generated by artificial intelligence?

Discussions about the dangers of fake images and documents generated by artificial intelligence are particularly intense leading up to the presidential elections in the United States.