OpenAI Funds Research into AI Morality

OpenAI Inc. has awarded a grant to Duke University researchers for a project called “Research AI Morality,” according to a filing with the IRS.

We contacted an OpenAI spokesperson for comment, and they pointed to a press release stating the award is part of a larger, three-year, $1 million grant to Duke professors researching “making moral AI.”

OpenAI is providing financial support for academic research that aims to develop algorithms that can anticipate the moral judgments of humans.

Aside from the fact that the grant concludes in 2025, little information is available about the “morality” research OpenAI is funding. Walter Sinnott-Armstrong, a professor of practical ethics at Duke and the study’s principal investigator, told TechCrunch via email that he “will not be able to talk” about the work.

Sinnott-Armstrong and the project’s co-investigator, Jana Borg, have produced several studies, and a book, on AI’s potential to serve as a “moral GPS” that helps humans make better-informed decisions. As part of larger teams, they have developed a “morally aligned” algorithm to help decide who receives kidney donations, and they have studied the scenarios in which people would prefer that AI make moral decisions.

The objective of the OpenAI-funded research, as stated in the press release, is to train algorithms to “predict human moral judgments” in situations that involve conflicts between “morally relevant features in medicine, law, and business.”
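The press release does not say how such prediction would be implemented. One common framing, offered here purely as an illustration and not as the Duke team’s method, is supervised text classification: scenarios labeled with human judgments, and a model trained to reproduce those labels. Below is a minimal sketch in Python, where the scenarios, labels, and use of scikit-learn are all assumptions made for demonstration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: scenarios paired with a majority human judgment.
# Real research would rely on far larger, carefully curated datasets.
scenarios = [
    "a doctor lies to a patient about a terminal diagnosis",
    "a company sells user data without consent",
    "a nurse breaks protocol to save a patient's life",
    "a lawyer reports a client's planned fraud to authorities",
]
judgments = ["wrong", "wrong", "acceptable", "acceptable"]

# Train a simple classifier that maps scenario text to a predicted judgment.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, judgments)

# Ask for a judgment on an unseen scenario.
print(model.predict(["a firm hides safety defects from regulators"]))
```

A model like this can only echo statistical regularities in its labels, which is precisely the limitation discussed below.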

However, it is far from clear that today’s technology can handle a concept as nuanced as morality.

In 2021, the nonprofit Allen Institute for AI built a tool called Ask Delphi that was meant to give ethically sound recommendations. It could judge basic moral dilemmas; the system “knew,” for example, that cheating on an exam was wrong. But merely rephrasing and rewording questions was enough to get Delphi to approve of virtually anything, including the smothering of infants.

The explanation lies in how modern AI systems work.

Machine learning models are statistical machines. Trained on a vast number of examples from across the web, they learn the patterns in those examples and use them to make predictions, such as that the phrase “to whom” frequently precedes “it may concern.”
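To make that concrete, here is a minimal sketch in Python of the same statistical principle at its simplest: a bigram model that counts which word tends to follow which in example text and predicts accordingly. The toy corpus is an invented stand-in for web-scale training data:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale examples real models train on.
corpus = (
    "to whom it may concern "
    "to whom it may concern "
    "to whom do I owe the pleasure"
).split()

# Count how often each word follows each preceding word (bigram statistics).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# Starting from "whom", the model reproduces the most common pattern.
word = "whom"
for _ in range(3):
    word = predict_next(word)
    print(word, end=" ")  # prints: it may concern
```

Modern language models are vastly more sophisticated, but the underlying move is the same: predict what usually comes next, with no notion of why.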

AI has no grasp of ethical concepts, nor of the reasoning and emotion that are integral to moral decision-making. That is why AI tends to echo the values of Western, educated, industrialized nations: articles endorsing those perspectives dominate the web, and thus AI’s training data.

It is unsurprising, then, that AI’s answers fail to reflect the values of a great many people, particularly those who do not contribute to training sets by posting online. AI also internalizes biases beyond a Western slant: Delphi asserted that being straight is more “morally acceptable” than being gay.

The inherent subjectivity of morality further complicates the challenge that OpenAI and the researchers it supports are facing. For thousands of years, philosophers have been debating the merits of a variety of ethical theories, and there is no universally applicable framework in sight.

Claude favors Kantianism, which emphasizes absolute moral rules, while ChatGPT leans slightly utilitarian, prioritizing the greatest good for the greatest number. Is one superior to the other? The answer depends on whom you ask.

All of this must be accounted for by any algorithm that aims to forecast human moral judgments. That is an exceptionally high bar to clear, assuming such an algorithm is possible in the first place.
