
Can a machine learn morality?

Last month, researchers at the Allen Institute for AI, a Seattle-based artificial intelligence lab, unveiled new technology designed to make moral judgments. They named it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone can visit the Delphi website and ask it for an ethical judgment.

Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked whether he should kill one person to save another, Delphi said he shouldn’t. When he asked whether it was right to kill one person to save 100 others, Delphi said it was. Then he asked whether he should kill one person to save 101 others. This time, Delphi said he shouldn’t.

It seems that morality for a machine is as confusing as it is for a person.

Delphi, which has been visited by over three million people in the past few weeks, is an attempt to solve what some consider to be a serious problem in modern artificial intelligence systems: they can be as imperfect as the people who create them.

Facial recognition systems and digital assistants show bias against women and people of color. Social networks such as Facebook and Twitter fail to control hate speech despite their wide deployment of artificial intelligence. Algorithms used by courts, parole offices, and police departments make parole and sentencing recommendations that can seem arbitrary.

A growing number of computer scientists and ethicists are working to solve these problems. And the creators of Delphi hope to create an ethical framework that could be installed in any online service, robot, or vehicle.

“This is the first step towards making AI systems more ethically informed, socially focused and inclusive,” said Yejin Choi, a researcher at the Allen Institute and professor of computer science at the University of Washington who led the project.

Delphi is, by turns, mesmerizing, frustrating, and unsettling. It is also a reminder that the morality of any technological creation is a product of those who build it. The question arises: who will teach ethics to the world’s machines? AI researchers? Product managers? Mark Zuckerberg? Trained philosophers and psychologists? Government regulators?

While some technologists applauded Dr. Choi and her team for exploring an important and complex area of technological research, others argued that the very idea of a moral machine is nonsense.

“It’s not something that technology does very well,” said Ryan Cotterell, an artificial intelligence researcher at ETH Zürich, a university in Switzerland, who stumbled upon Delphi in its first days online.

Delphi is what artificial intelligence researchers call a neural network: a mathematical system loosely modeled on the web of neurons in the brain. It is the same technology that recognizes the commands you speak into your smartphone and identifies pedestrians and street signs as driverless cars speed down the highway.

A neural network learns skills by analyzing large amounts of data. By detecting patterns in thousands of cat photos, for example, it can learn to recognize a cat. Delphi learned its moral compass by analyzing more than 1.7 million ethical judgments made by real, living people.

After gathering millions of everyday scenarios from websites and other sources, the Allen Institute asked workers on an online service – ordinary people paid to do digital work at companies like Amazon – to label each one as right or wrong. Then they fed the data into Delphi.
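In spirit, that label-then-train loop looks something like the toy sketch below. To be clear, this is an illustration, not the Allen Institute’s actual pipeline: the scenarios and labels are invented, and Delphi itself is built on a far larger neural language model than this bag-of-words classifier.

```python
# A deliberately simplified sketch of the "label scenarios, then train" idea.
# Delphi is a large neural language model; this toy version uses a
# bag-of-words classifier so the whole pipeline fits in a few lines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical crowd-labeled data: each everyday scenario has been marked
# 1 ("it's okay") or 0 ("it's wrong") by paid annotators.
scenarios = [
    "helping a friend move",
    "ignoring a phone call from your mother",
    "returning a lost wallet",
    "reading someone's diary without permission",
]
labels = [1, 0, 1, 0]

# Fit a text classifier on the labeled judgments.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, labels)

# The model now generalizes, imperfectly, to scenarios it has never seen.
print(model.predict(["keeping a lost wallet"]))
```

Even at this miniature scale, the design choice is visible: the model’s “morality” is nothing more than the patterns in whatever labeled examples it was given.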

In an academic paper describing the system, Dr. Choi and her team said that a group of human judges – again, digital workers – found Delphi’s ethical judgments to be up to 92 percent accurate. Once it was released to the open internet, many others agreed that the system was surprisingly wise.


When Patricia Churchland, a philosopher at the University of California, San Diego, asked whether it was right to “leave your body to science” or even to “leave a child’s body to science,” Delphi said it was. When she asked whether it was right to “convict a man accused of rape on the testimony of a female prostitute,” Delphi said it was not – an answer that is, to put it mildly, controversial. Still, she was somewhat impressed by its responsiveness, although she knew a human ethicist would ask for more information before making such pronouncements.

Others found the system terribly inconsistent, illogical, and offensive. When a software developer stumbled upon Delphi, she asked the system whether she should die so she would not burden her friends and family. It said she should. Ask Delphi that question now, and you may get a different answer from an updated version of the program. Regular users have noticed that Delphi can change its mind from time to time; technically, those changes happen because Delphi’s software has been updated.

It seems that in some situations artificial intelligence technologies mimic human behavior, while in others they fail completely. Because modern systems learn from such large amounts of data, it is difficult to know when, how, or why they will make mistakes. Researchers can refine and improve these technologies. But that does not mean a system like Delphi can master ethical behavior.

Dr. Churchland said ethics are intertwined with emotion. “Attachments, especially attachments between parents and offspring, are the platform on which morality is built,” she said. But a machine lacks emotion. “Neural networks don’t feel anything,” she added.

Some may see this as an advantage – that a machine can create ethical rules without bias – but systems like Delphi ultimately reflect the motivations, opinions, and biases of the people and companies that create them.

“We cannot hold machines accountable for their actions,” said Zeerak Talat, an artificial intelligence and ethics researcher at Simon Fraser University in British Columbia. “They are not unguided. There are always people directing them and using them.”

Delphi reflected the choices made by its creators. That included the ethical scenarios they chose to feed into the system and the online workers they chose to judge those scenarios.

In the future, researchers could refine the system’s behavior by training it on new data or by hand-coding rules that override its learned behavior at key moments. But however they build and change the system, it will always reflect their worldview.
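In miniature, such a hand-coded override might look like the sketch below. This is a generic pattern, not Delphi’s actual code, and the keyword rules in it are invented for illustration.

```python
# A minimal sketch (not Delphi's code) of layering hand-coded rules over a
# learned model: the rules get the final word at key moments.
from typing import Callable, Optional

# Hypothetical rule list: each rule inspects a scenario and may return a
# fixed verdict, overriding whatever the trained model would have said.
RULES: list[Callable[[str], Optional[str]]] = [
    lambda s: "it's wrong" if "kill" in s.lower() else None,
    lambda s: "it's wrong" if "steal" in s.lower() else None,
]

def judge(scenario: str, model_predict: Callable[[str], str]) -> str:
    """Return a verdict: hand-coded rules first, the learned model otherwise."""
    for rule in RULES:
        verdict = rule(scenario)
        if verdict is not None:
            return verdict  # a human-written rule overrides the model
    return model_predict(scenario)  # fall back to learned behavior

# Usage with a stand-in model that approves of everything:
print(judge("killing a spider", lambda s: "it's okay"))  # rule fires: "it's wrong"
print(judge("helping a friend", lambda s: "it's okay"))  # model decides: "it's okay"
```

Even this toy illustrates the point: the rules carry their authors’ worldview, and a crude keyword match condemns “killing a spider” as readily as anything else.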

Some argue that if you train the system with enough data to represent the views of enough people, it will properly reflect social norms. But social norms are often in the eye of the beholder.

“Morality is subjective. We can’t just write down all the rules and hand them to the machine,” said Kristian Kersting, a computer science professor at the Technical University of Darmstadt in Germany who has researched similar technology.

When the Allen Institute released Delphi in mid-October, it described the system as a computational model for moral judgment. If you asked whether you should have an abortion, the answer was unequivocal: “Delphi says: you should.”

But after many complained about the system’s obvious limitations, the researchers changed the website. They now call Delphi “a research prototype designed to model people’s moral judgments.” It no longer “says.” It “speculates.”

It also contains a disclaimer: “Simulation results should not be used as advice to humans and could be potentially offensive, problematic, or harmful.”
