
Meta’s New AI Council Criticized for All-White Male Team

Meta’s new AI council is made up entirely of white men, leaving women and people of color out of leadership roles despite their qualifications.

On Wednesday, Meta revealed that it has established an AI advisory council composed solely of white men. What else could we have expected? Women and people of color have long been marginalized and overlooked in the field of artificial intelligence despite their qualifications and significant contributions to its development, and they have been vocal about their experiences for decades.

Meta did not immediately respond to our request for comment on the diversity of the advisory board.

The Oversight Board and Meta’s actual board of directors are more diverse in gender and race than this new advisory board. The AI board was not chosen by shareholders, nor does it have a fiduciary duty. The board will provide “insights and recommendations on technological advancements, innovation, and strategic growth opportunities,” Meta told Bloomberg, and will meet “periodically.”

It is telling that the AI advisory council is made up exclusively of businesspeople and entrepreneurs, with no ethicists or anyone with an academic or deep research background. Given the enormous number of products that current and former executives from Stripe, Shopify, and Microsoft have brought to market, one could argue they are well-positioned to manage Meta’s AI product pipeline; yet it has been repeatedly demonstrated that AI is not like other products. It is a risky endeavor, and getting it wrong can have far-reaching consequences, especially for disadvantaged populations.

Women in tech

Sarah Myers West, managing director of the AI Now Institute, a nonprofit that researches the social consequences of AI, said in a recent interview that it is imperative to “critically examine” the organizations creating AI to “make sure the public’s needs [are] served.”

“This is error-prone technology, and we know from independent research that these errors are not distributed equally; they disproportionately harm communities that have long borne the brunt of discrimination,” West said. “The bar ought to be set much, much higher.”

Women are significantly more likely than men to encounter the dark side of AI. Sensity AI found in 2019 that nonconsensual, sexually explicit videos made up 96% of AI deepfake videos on the internet. Generative AI has since become far more widespread, and women continue to be the targets of this abusive behavior.

In a well-publicized incident in January, nonconsensual, pornographic deepfakes of Taylor Swift went viral on X; one of the most popular posts received 45 million views and hundreds of thousands of likes. Social media platforms like X have historically failed to safeguard women from such abuse, and it was only because Swift is one of the most powerful women in the world that X went as far as banning search terms like “Taylor Swift AI” and “Taylor Swift deepfake.”

If you find yourself in a similar situation but aren’t a worldwide pop sensation, you may not have much luck. There have been numerous reports of middle and high school students creating explicit deepfakes of their classmates. And although this technology has been around for a while, it has never been more accessible: anyone can download apps that promise to “undress” photos of women or swap their faces into pornography, no technical skill required. In fact, according to an investigation by NBC’s Kat Tenbarge, Facebook and Instagram ran advertisements for an app called Perky AI, which marketed itself as a tool for creating obscene images.

Two of the advertisements, which reportedly escaped Meta’s notice until Tenbarge alerted the company to the problem, featured blurred images of celebrities Sabrina Carpenter and Jenna Ortega and invited users to have the app remove their clothing. The ads used an old photo of Ortega taken when she was only sixteen.

Letting Perky AI advertise was not an isolated lapse. Meta’s Oversight Board has opened inquiries into the company’s handling of AI-generated sexually explicit content.

Incorporating the perspectives of women and people of color into the development of artificial intelligence products is essential. These disadvantaged populations have long been shut out of the creation of world-changing research and technologies, with devastating outcomes.

A straightforward illustration: women were excluded from clinical studies until the 1970s, which means entire fields of research developed without considering their effects on women. Black people in particular experience the adverse effects of technology created without their needs in mind. A 2019 Georgia Institute of Technology study found that self-driving cars are more likely to hit Black pedestrians because their sensors may have a harder time detecting darker skin.

Algorithms trained on biased data will simply repeat the biases humans have taught them. AI systems are already sustaining and amplifying racial prejudice in employment, housing, and the criminal justice system. As Axios has pointed out, English is AI’s native tongue: voice assistants struggle to understand a variety of accents, and the writing of non-native English speakers is frequently flagged as AI-generated. Facial recognition software also disproportionately flags Black individuals as potential matches for criminal suspects.

Not enough leaders seem to be grappling with the way AI, as currently developed, reproduces the same power structures of class, race, gender, and Eurocentrism that exist elsewhere; instead, it is reinforcing them. Generative AI, the current hottest AI technology, may make things worse rather than better, but founders, investors, and tech executives are too caught up in breaking ground and moving fast to notice. A McKinsey analysis claims that AI could automate roughly half of all jobs that don’t require a four-year degree and pay more than $42,000 annually, jobs in which minorities are disproportionately represented.

It is worth asking how, in this race to use AI to save the globe, an all-white, all-male council at one of the world’s most prominent tech companies could ever advise on products for everyone when only a single, narrow demographic is represented. Building truly universal technology will take a tremendous amount of work. The layers required to create safe and inclusive AI, from research to understanding at an intersectional societal level, are so complex that this advisory board is unlikely to help Meta get it right. And where Meta falls short, another company may step in.
