
Technology

Meta AI is obsessed with turbans while generating images of Indian men


Bias in AI image generators is a well-studied and well-documented phenomenon, yet consumer tools continue to exhibit glaring cultural biases. The latest offender in this area is Meta’s AI chatbot, which for some reason really wants to add turbans to every image of an Indian man.

Earlier this month, the company rolled out Meta AI in more than a dozen countries via WhatsApp, Instagram, Facebook and Messenger. However, it has rolled out Meta AI only to select users in India, one of its largest markets globally.

TechCrunch looks at various culture-specific queries as part of our AI testing process, and we found, for example, that Meta was blocking election-related queries in India because of the country’s ongoing general election. But Imagine, Meta AI’s new image generator, among other biases, also displayed a particular predisposition toward generating Indian men wearing turbans.


We tested different prompts and generated more than 50 images to examine various scenarios, all of which are included here except for a couple (like the “German driver”), to see how the system represented different cultures. There is no scientific method behind the generation, and we did not take into account inaccuracies in the representation of objects or scenes beyond the cultural lens.

Many men in India do wear turbans, but the proportion is nowhere near as high as the Meta AI tool would suggest. In India’s capital, Delhi, you would see at most one in 15 men wearing a turban. Yet among the images generated by Meta’s AI, roughly three to four out of five images of Indian men showed them wearing one.

We began with the prompt “an Indian walking in the street,” and all of the images showed men wearing turbans.

[Image gallery]


We then tried generating images with prompts such as “an Indian,” “an Indian playing chess,” “an Indian cooking,” and “an Indian swimming.” Meta AI generated only one image of a man without a turban.

[Image gallery]

Even with non-gendered prompts, Meta AI did not show much diversity in terms of gender or cultural differences. We tried prompts for a range of professions and backgrounds, including an architect, a politician, a badminton player, an archer, a writer, a painter, a doctor, a teacher, a balloon seller and a sculptor.


[Image gallery]

As you can see, despite the variety of settings and clothing, all of the men were generated wearing turbans. Again, while turbans are common, they are not ubiquitous across every profession or region, yet Meta AI strangely treats them as if they were.

We generated images of an Indian photographer, and most of them used an outdated camera, except for one image where a monkey somehow also had a DSLR.

[Image gallery]


We also generated images of an Indian driver, and until we added the word “posh,” the image generation algorithm showed signs of class bias.

[Image gallery]

We also tried generating two images with similar prompts. Here are some examples: an Indian programmer in an office.


[Image gallery]

An Indian man in a field operating a tractor.

Two Indian men sitting next to each other:


[Image gallery]

Additionally, we tried generating a collage of images with prompts such as an Indian man with different hairstyles. This seemed to provide the diversity we expected.

[Image gallery]

Meta AI’s Imagine also has a stubborn habit of generating one kind of image for similar prompts. For example, it consistently generated an image of an old-style Indian house with vivid colors, wooden columns and stylized roofs. A quick Google image search will tell you this is not what most Indian homes look like.


[Image gallery]

The next prompt we tried was “an Indian content creator,” which repeatedly generated images of a female creator. In the gallery below, we have included images showing the content creator on a beach, a hill and a mountain, and at a zoo, a restaurant and a shoe store.

[Image gallery]

As with any image generator, the biases we see here are likely the result of inadequate training data followed by an inadequate testing process. While it is impossible to test for every possible outcome, common stereotypes should be easy to spot. Meta AI seemingly picks one kind of representation for a given prompt, indicating a lack of diverse representation in the dataset, at least for India.


In response to questions TechCrunch sent to Meta about training data and bias, the company said it is working to improve its generative AI technology, but did not provide much detail about the process.

“It’s a new technology and may not always deliver the expected response, which is the same for all generative AI systems. Since launch, we have continuously released updates and improvements to our models and continue to work to improve them,” a spokesperson said in a statement.

Meta AI’s biggest draw is that it is free and easily available across many platforms. That means millions of people from different cultures will be using it in different ways. While companies like Meta are always working to improve image generation models in terms of how accurately they render objects and people, it is also important that they work on these tools to stop them from leaning on stereotypes.

Meta will likely want creators and users to use this tool to publish content on its platforms. However, if generative biases persist, they also play a role in confirming or exacerbating biases in users and viewers. India is a diverse country with many intersections of culture, caste, religion, region and language. Companies working on AI tools will need to do better at representing different people.


If you have found AI models producing unusual or biased output, you can reach me by email at im@ivanmehta.com or via this link on Signal.

This article was originally published on techcrunch.com

Technology

Microsoft’s Satya Nadella chooses chatbots over podcasts


Satya Nadella at Microsoft Ignite 2023

While Microsoft CEO Satya Nadella says he likes podcasts, he may not actually listen to them anymore.

That tidbit comes toward the end of a longer Bloomberg profile of Nadella focusing on Microsoft’s artificial intelligence strategy and its complicated relationship with OpenAI. To illustrate how much he uses Copilot, the company’s AI assistant, in his daily life, Nadella said that instead of listening to podcasts, he now sends the transcript to Copilot and then talks to Copilot about the content while driving to the office.

In addition, Nadella, who jokingly described his job as “email typist,” said he relies on at least 10 custom agents built in Copilot Studio to summarize emails and messages, prepare for meetings and perform other tasks around the office.


It seems AI is already transforming Microsoft in more significant ways, too, with programmers reportedly hit hardest in the company’s latest layoffs, which came shortly after Nadella said that as much as 30% of the company’s code is now written by AI.


Technology

OpenAI’s planned data center in Abu Dhabi would be bigger than Monaco


Sam Altman, CEO of OpenAI

OpenAI is poised to help develop a staggering 5-gigawatt data center campus in Abu Dhabi, positioning the company as the primary anchor tenant in what could become one of the largest AI infrastructure projects in the world, according to a new Bloomberg report.

The facility would reportedly span a massive 10 square miles and consume power equivalent to five nuclear reactors, dwarfing any existing AI infrastructure announced by OpenAI or its competitors. (OpenAI has not yet responded to TechCrunch’s request for comment, but for perspective, that footprint is larger than Monaco.)

The UAE project, developed in partnership with G42, a conglomerate headquartered in Abu Dhabi, is part of OpenAI’s ambitious Stargate project, a joint venture announced in January that could see massive data centers around the globe powering the development of AI.


While the first Stargate campus in the United States, already underway in Abilene, Texas, is expected to reach 1.2 gigawatts, its Middle East counterpart will be more than four times that size.

The project comes amid broader AI deals between the US and the UAE that have been years in the making and have rankled some legislators.

OpenAI’s ties to the UAE date back to a 2023 partnership with G42 aimed at driving AI adoption in the Middle East. Speaking earlier in Abu Dhabi, OpenAI CEO Sam Altman praised the UAE, saying it “has been talking about AI since before it was cool.”

As with much of the AI world, these relationships are … complicated. Founded in 2018, G42 is chaired by Sheikh Tahnoon bin Zayed Al Nahyan, the UAE’s national security adviser and younger brother of the country’s ruler. OpenAI’s embrace of the company raised concerns in late 2023 among American officials, who feared G42 could give the Chinese government access to advanced American technology.


Those fears centered on G42’s “active relationships” with blacklisted entities, including Huawei and the Beijing Genomics Institute, as well as with individuals linked to Chinese intelligence efforts.

Under pressure from American lawmakers, G42’s CEO told Bloomberg in early 2024 that the company had changed its strategy, saying: “All our China investments that were previously made have been divested. Because of that, of course, we no longer need any physical presence in China.”

Shortly afterward, Microsoft, OpenAI’s major shareholder with its own broader interests in the region, announced a $1.5 billion investment in G42, with its president Brad Smith joining G42’s board.


Technology

Redpoint raises $650 million three years after its last big early-stage fund


Redpoint Ventures, a San Francisco-based firm that is about a quarter of a century old, has raised $650 million for a new early-stage fund, according to a regulatory filing.

The new Redpoint fund matches the size of its previous fund, which was raised slightly less than three years ago. In a market where many venture firms are cutting their capital allocations, this consistency may signal that limited partners are relatively pleased with its results.

The firm’s early-stage strategy is led by four managing partners: Alex Bard (pictured above), Satish Dharmaraj, Annie Kadavy and Eric Brescia, who joined the firm in 2021 after serving as GitHub’s chief operating officer for nearly three years.


The early-stage team’s recent notable investments include AI coding startup Poolside, which was founded by former Redpoint partner and GitHub CTO Jason Warner, a distributed SQL database developer, and procurement management platform Levelpath.

The multi-stage firm also runs a growth strategy led by partners Logan Bartlett, Jacob Effron, Elliot Geidt and Scott Raney. Last year, Redpoint raised its fifth growth fund at $740 million, a small increase over the $725 million fund closed three years earlier.

Redpoint’s recent exits include Next Insurance, which was sold for $2.6 billion in March; travel media startup Tastemade, which was acquired by Wonder for $90 million; and IBM’s $6.4 billion acquisition of HashiCorp.

Redpoint did not respond to a request for comment.
