OpenAI funds research into ‘ethics of artificial intelligence’

OpenAI funds academic research into algorithms that can predict moral judgments in humans.

In a filing with the IRS, OpenAI Inc., the nonprofit affiliated with OpenAI, disclosed that it had awarded a grant to Duke University researchers for a project titled “Research AI Morality.” Contacted for comment, an OpenAI spokesperson pointed to a press release indicating that the award is part of a larger three-year, $1 million grant to Duke University professors studying “making moral AI.”

Little is known about the “ethics” research OpenAI is funding, other than that the grant expires in 2025. The study’s lead researcher, Walter Sinnott-Armstrong, a professor of practical ethics at Duke University, told TechCrunch via email that he “won’t be able to talk” about the work.

Sinnott-Armstrong and the project’s co-investigator, Jana Borg, have produced several studies, and a book, on AI’s potential to act as a “moral GPS” that helps humans make better judgments. As part of larger teams, they have created a “morally-aligned” algorithm to help decide who receives kidney donations, and studied the scenarios in which people would prefer that AI make ethical decisions.

According to the press release, the goal of the OpenAI-funded work is to train algorithms to “predict human moral judgments” in scenarios involving conflicts “among ethically relevant features in medicine, law, and business.”
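The press release doesn’t say how such a system would be built, but one straightforward, and purely illustrative, framing is supervised text classification: gather scenarios labeled with human judgments and train a model to predict the label for new scenarios. The minimal sketch below uses an invented toy dataset and an off-the-shelf scikit-learn pipeline; nothing in it is drawn from the Duke project.

```python
# Hypothetical sketch: moral judgment prediction as supervised text
# classification. The scenarios and labels below are invented for
# illustration; nothing here reflects the actual Duke/OpenAI project.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented dataset: scenario text -> majority human judgment.
scenarios = [
    "A doctor lies to a patient to spare their feelings.",
    "A lawyer reports a client's plan to commit fraud.",
    "A manager hides safety data to protect quarterly profits.",
    "A nurse breaks protocol to save a patient's life.",
]
judgments = ["wrong", "acceptable", "wrong", "acceptable"]

# Bag-of-words features + logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, judgments)

# Predict a judgment for an unseen scenario.
print(model.predict(["An accountant backdates a contract to win a deal."]))
```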

But it is far from clear that a concept as nuanced as ethics is within the reach of today’s technology.

In 2021, the nonprofit Allen Institute for Artificial Intelligence built a tool called Ask Delphi that was meant to give ethically sound recommendations. It judged basic moral dilemmas well enough; it knew, for example, that cheating on an exam was wrong. But slightly rewording and rephrasing questions was enough to get Delphi to approve of almost anything, including smothering infants.
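That fragility is easy to probe programmatically. The sketch below asks the same moral question in several phrasings and flags disagreements; the judge() function is a hypothetical stand-in for a system like Delphi, stubbed out here for illustration rather than calling any real service.

```python
# Hypothetical robustness probe: ask the same moral question in several
# phrasings and flag inconsistent verdicts. `judge` is a stand-in for a
# real system such as Delphi; here it is stubbed out for illustration.

def judge(question: str) -> str:
    """Stub for a moral-judgment model; returns 'wrong' or 'okay'."""
    # A real implementation would call the model under test.
    return "wrong" if "cheat" in question.lower() else "okay"

paraphrases = [
    "Is it wrong to cheat on an exam?",
    "Is it acceptable to copy answers during a test?",
    "Should a student ever use hidden notes in an exam?",
]

verdicts = {q: judge(q) for q in paraphrases}
for question, verdict in verdicts.items():
    print(f"{verdict:>6}: {question}")

# If the verdicts disagree, the model's "ethics" is tracking surface
# wording rather than the underlying dilemma.
if len(set(verdicts.values())) > 1:
    print("Inconsistent judgments across paraphrases.")
```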

The reason has to do with how modern AI systems work.

Machine learning models are statistical machines. Trained on lots of examples from around the web, they learn the patterns in those examples to make predictions, such as that the phrase “to whom” often precedes “it may concern.”
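A toy bigram model makes that statistical view concrete: it counts which word follows which in its training text and then “predicts” the most frequent successor. The corpus below is invented, and real models operate on tokens at vastly larger scale, but the principle, prediction from co-occurrence statistics, is the same.

```python
# Toy bigram model: count word-successor frequencies, then predict the
# most common next word. Real language models learn far richer patterns,
# but the underlying idea -- prediction from statistics -- is the same.
from collections import Counter, defaultdict

corpus = (
    "to whom it may concern . "
    "to whom it may concern . "
    "to whom do I owe this honor ."
)

counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor of `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict_next("may"))   # -> "concern"
print(predict_next("whom"))  # -> "it"
```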

AI has no appreciation of ethical concepts, nor any grasp of the reasoning and emotion that go into ethical decision-making. That is why AI tends to parrot the values of Western, educated, industrialized countries: articles endorsing those views dominate the web, and thus AI training data.

Unsurprisingly, many people’s values are not expressed in the answers AI provides, especially if those people do not contribute to AI training sets by posting online. And AI internalizes a range of biases beyond a Western slant: Delphi said that being straight is more “morally acceptable” than being gay.

The challenge facing OpenAI – and the researchers it supports – is made all the more difficult by the inherent subjectivity of ethics. Philosophers have debated the merits of different ethical theories for thousands of years, and there is no universally applicable framework in sight.

Claude favors Kantianism (i.e., a focus on absolute ethical rules), while ChatGPT leans ever so slightly toward utilitarianism (prioritizing the greatest good for the greatest number of people). Is one superior to the other? It depends on who you ask.

An algorithm to predict humans’ moral judgments would have to take all of this into account. That is a very high bar to clear, assuming such an algorithm is possible in the first place.
