
Technology

Study Reveals Racial Bias in Educational Ads on Facebook


A 2024 research paper suggests that Facebook’s advertising algorithm has disproportionately targeted for-profit college ads to Black users.

Meta, the parent company of both Facebook and Instagram, has not explained why its billions of users see certain posts that others do not. However, a group of academics from Princeton and the University of Southern California took matters into their own hands.

The group bought ads from Facebook and tracked their performance among real Facebook users, revealing “evidence of racial discrimination in Meta’s algorithmic delivery of ads offering educational opportunities, raising legal and ethical concerns.”

The study focused on for-profit colleges such as DeVry and Grand Canyon University, particularly because both schools appear on the Department of Education’s list of institutions fined or sued for advertising fraud.

According to the researchers, for-profit colleges have a “long, demonstrable history of defrauding prospective students” by targeting students of color through predatory marketing “while providing poor academic performance and lower career prospects” compared with other institutions.

To conduct the study, the group purchased pairs of linked advertisements: one campaign would target a public institution such as Colorado State University, while the other would focus on a for-profit school such as Strayer University, which, according to the report, was not involved in the project.

While advertisers can customize Facebook campaigns through a variety of targeting options, such as age and location, race is no longer an option that can be selected when setting up an ad on the social network. But the researchers found a workaround by using North Carolina voter registration data, which includes each person’s race.

Using this strategy, the researchers were able to construct a sample audience that was 50% Black and 50% white. Black voters came from one region of North Carolina, and white voters came from another part of the state.

Using Facebook’s “custom audiences” feature, the researchers were able to upload a list of the specific people to whom the ads should be targeted. While Facebook did not reveal the race of the users who viewed the ads, it did report the location where each ad was displayed.

“Whenever our ad is shown in Raleigh, we can infer that it was shown to a black person, and when it is shown in Charlotte, we can infer that it was shown to a white person,” the paper said.

If the algorithm in question were truly unbiased, it would show each school’s ads to Black and white users in equal proportion. However, the experiment revealed bias: Facebook’s algorithm allegedly “disproportionately showed black users ads for colleges like DeVry and Grand Canyon University.”
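As a rough illustration of the inference and comparison described above (a hypothetical sketch with invented data and field names, not the researchers’ actual code), the snippet below maps each delivery city to the inferred race of the viewer, then compares the share of for-profit ad impressions between the two groups with a simple two-proportion z-test:

```python
# Hypothetical sketch of the study's inference step, using made-up impression data.
from math import sqrt

# City where an ad was delivered -> inferred race, per the study's audience design.
CITY_TO_RACE = {"Raleigh": "Black", "Charlotte": "white"}

# Invented delivery records: (city_where_shown, type_of_school_advertised)
impressions = [
    ("Raleigh", "for_profit"), ("Raleigh", "for_profit"), ("Raleigh", "public"),
    ("Charlotte", "public"), ("Charlotte", "public"), ("Charlotte", "for_profit"),
    # ...the real experiment involved far more records
]

def for_profit_share(records, race):
    """Return (share of a group's impressions that were for-profit ads, group size)."""
    group = [school for city, school in records if CITY_TO_RACE[city] == race]
    return sum(s == "for_profit" for s in group) / len(group), len(group)

p_b, n_b = for_profit_share(impressions, "Black")
p_w, n_w = for_profit_share(impressions, "white")

# Two-proportion z-test: under an unbiased delivery algorithm, z should sit near 0.
p_pool = (p_b * n_b + p_w * n_w) / (n_b + n_w)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_b + 1 / n_w))
z = (p_b - p_w) / se
print(f"for-profit share: Black={p_b:.2f}, white={p_w:.2f}, z={z:.2f}")
```

With real impression counts in the tens of thousands, even a modest gap between the two shares produces a large z-score, which is the kind of disparity the researchers reported.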

Conversely, more white users saw ads for public universities.

“Ensuring fairness in advertising is an industry-wide challenge, which is why we are working with civil rights organizations, academics and regulators to ensure fairness in our advertising system,” said Meta spokesman Daniel Roberts.

“Our advertising standards do not allow advertisers to display ads that discriminate against individuals or groups of people based on personal characteristics such as race, so we are actively working on technology to make additional progress in this area.”

In 2016, a ProPublica report revealed that Facebook allowed advertisers to “explicitly exclude users from ad campaigns based on their race.”

The company has since removed the options that allowed marketers to target users by race. However, even if the for-profit programs mentioned above improved their marketing efforts and “aimed for racially balanced ad targeting,” the team of researchers from Princeton and USC found that “Meta’s algorithms would reproduce historical racial disparities in who sees ads, and do it without the advertisers’ knowledge.”


This article was originally published on www.blackenterprise.com

Technology

Fei-Fei Li chooses Google Cloud, where she once led AI efforts, as the primary compute provider for World Labs

Cloud service providers are chasing AI unicorns, and the latest is Fei-Fei Li’s World Labs. The startup just chose Google Cloud as its primary compute provider for training AI models, a move that could be worth hundreds of millions of dollars. But the company says Li’s former tenure as Google Cloud’s chief AI scientist was irrelevant to the decision.

During its Google Cloud Startup Summit on Tuesday, Google announced that World Labs will devote a large portion of its funds to renting GPU servers on Google Cloud Platform, ultimately to train “spatially intelligent” AI models.

A handful of well-funded startups building foundational AI models are in high demand in the cloud services world. The largest deals include OpenAI, which exclusively trains and runs its AI models on Microsoft Azure, and Anthropic, which uses AWS and Google Cloud. These companies often pay millions of dollars for computing services, and someday they may need even more as they scale up their AI models. That makes them valuable customers for Google, Microsoft, and AWS to build relationships with from the start.

World Labs is certainly building unique, multimodal AI models with significant computational needs. The startup just raised $230 million at a valuation of over $1 billion, in a deal led by a16z, to build world AI models. Google Cloud’s general manager of startups and AI, James Lee, tells TechCrunch that World Labs’ AI models will one day be able to process, generate and interact with video and geospatial data. World Labs calls these AI models “spatial intelligence.”

Li has deep ties to Google Cloud, having led the company’s AI efforts until 2018. However, Google denies that the deal is a result of that relationship, and it rejects the idea that cloud services are just a commodity. Instead, Lee said the bigger factors are the services on offer, such as a high-performance toolkit for scaling AI workloads and a large supply of AI chips.

“Fei-Fei is obviously a friend of GCP,” Lee said in an interview. “GCP wasn’t the only option they were considering. But for all the reasons we talked about – our AI-optimized infrastructure and ability to meet their scalability needs – they ultimately came to us.”

Google Cloud offers AI startups a choice between its proprietary AI chips, tensor processing units (TPUs), and Nvidia GPUs, which Google buys and which are in more limited supply. Google Cloud is trying to persuade more startups to train AI models on TPUs, largely to reduce its dependence on Nvidia. All cloud providers today are constrained by the shortage of Nvidia GPUs, which is why many are building their own AI chips to meet demand. Google Cloud says some startups train and run inference exclusively on TPUs, but GPUs remain the industry’s favorite chips for AI training.

As part of this agreement, World Labs has chosen to train its AI models on GPUs. However, Google Cloud did not say what prompted that decision.

“We had been working with Fei-Fei and her product team, and at this point in the product roadmap it made more sense for them to work with us on the GPU platform,” Lee said in an interview. “But that doesn’t necessarily mean it’s a permanent decision… Sometimes (startups) move to other platforms like TPU.”

Lee did not reveal how large World Labs’ GPU cluster is, but cloud providers often dedicate huge supercomputers to startups training AI models. Google Cloud promised another startup training foundational AI models, Magic, a cluster with “tens of thousands of Blackwell GPUs,” each with more power than a high-end gaming PC.

These clusters are easier to promise than to deliver. According to reports, Microsoft, a competitor to Google Cloud, is struggling to meet OpenAI’s enormous computational demands, forcing the startup to turn to other sources of computing power.

World Labs’ contract with Google Cloud is non-exclusive, meaning the startup can still strike deals with other cloud providers. Google Cloud, however, says it expects to host most of the startup’s workloads.

This article was originally published on techcrunch.com

Technology

The undeniable connection between politics and technology


Written by April Walker

Politics and Technology: An Undeniable Union

In the modern era, the intersection of politics and technology has become an undeniable union, shaping the way societies function and the course of political processes. Rapid advances in technology have not only transformed communication and the dissemination of information, but also redefined political engagement, campaigning and governance. Given the political climate we face today, combined with the field of artificial intelligence and the countless communication platforms at our disposal, let’s explore the profound impact of technology on politics, focusing on the digitization of grassroots movements, the threats posed by hackers and deepfakes, the contest between social media and traditional media, and the future of post-election politics.

Grassroots has gone digital

Historically, grassroots movements have relied on face-to-face interactions, community meetings, and physical rallies to mobilize support and drive change. The advent of digital technology, however, has revolutionized these movements, enabling them to reach wider audiences with unprecedented speed and efficiency. Social media platforms, email campaigns and online petitions have become powerful tools for local organizers. These digital tools enable the rapid dissemination of information, real-time updates, and the ability to mobilize supporters across geographic boundaries. One example: the 2024 presidential election saw unprecedented use of tools like Zoom to unite communities in support of United States Vice President and presidential candidate Kamala Harris and to raise multimillion-dollar donations for her campaign. From Zoom calls that began with Black women to those organized by “white dudes” and everything in between, we have witnessed how, in a very short time, these communities not only raised enormous sums of money but also helped amplify her message and consistently refute the disinformation and lies spread by her opponent.

This is not the first time the power of social media to organize protests and gather international support has been demonstrated; consider the Arab Spring of 2010–2011. Platforms such as Twitter (now known as X) and Facebook played a key role in coordinating demonstrations and sharing real-time updates, which ultimately contributed to significant political change in the region. Similarly, contemporary movements such as Black Lives Matter have harnessed the power of digital technology to amplify their message, organize protests, and raise awareness on a global scale.

Hackers and Deepfakes

While technology has empowered political movements, it has also introduced new threats to the integrity of political processes. Hackers and deepfakes pose two significant challenges in this regard. Cyberattacks on political parties, government institutions and electoral infrastructure are becoming more common, threatening the security and integrity of elections. In 2016, the Russian hack of the Democratic National Committee (DNC) highlighted the vulnerability of political organizations to cyber threats. Our current electoral process faces an even more direct and immediate threat. Advances in artificial intelligence, and the sheer proliferation of its use, have created an increasingly sophisticated network of “bad actors,” especially from other countries, who are using the technology to try to influence the outcome of the US presidential election.

Deepfakes, manipulated videos or audio recordings that appear authentic, present another disturbing challenge. These sophisticated fakes can spread disinformation, discredit political opponents and manipulate public opinion. Just look at the recent AI-generated images of music superstar Taylor Swift that falsely portrayed her as endorsing Republican presidential candidate Donald Trump when, in reality, she has publicly expressed her support for Kamala Harris. There is growing concern that deepfakes could undermine trust in political leaders and institutions. As technology continues to advance, the ability to detect and address these threats becomes crucial to maintaining the integrity of democratic processes.

Social media versus traditional media

The rise of social media has fundamentally changed the landscape of political communication. Traditional media such as newspapers, television and radio once held a monopoly on the dissemination of information. Social media platforms, however, have democratized the flow of information, enabling individuals to share news, commentary and updates instantly. This change has both positive and negative implications for politics.

On the positive side, social media allows politicians to communicate directly with the public, promoting transparency and consistent engagement. Politicians can use platforms like Instagram, TikTok, LinkedIn, Facebook and others to share their views, respond to voters and mobilize support, and in many cases reach specific demographics that are critical to closing gaps in areas requiring coordinated focus and attention. However, the unregulated nature of social media also enables the spread of disinformation, fake news and “echo chambers” in which individuals are exposed only to information that reinforces their existing beliefs. If you wonder whether that is true, take a look at the very different messages you read on platforms supporting each political party; it is striking how starkly these narratives can diverge.

With unwavering editorial standards and robust fact-checking processes, traditional media continues to play a key role in providing trusted information. However, it struggles to compete with the speed and global reach of social media. The coexistence of these two types of media creates a complex information ecosystem that demands critical thinking, intentional skepticism and media literacy from society.

What’s next after the elections?

As we approach the upcoming Presidential Election Day, and indeed even after that day, attention will shift to managing and implementing campaign promises. Technology continues to play a key role at this stage, with governments using digital tools to ensure effective administration, transparency and citizen engagement.

In my opinion, the post-election period and the policies that follow will have to confront the challenges posed by disinformation and cyber threats. Governments and organizations must invest comprehensively in cybersecurity measures, digital literacy programs and regulatory frameworks to protect the integrity of political processes. As technology evolves at an extremely rapid pace, the future of politics will likely see its influence continue to deepen, with an emphasis on balancing its benefits against the need to protect democratic values, institutions and, most importantly, public trust. That said, we cannot be afraid of technology; we must embrace it, because innovations like artificial intelligence and generative AI are creating competitive opportunities for our country that are unimaginably powerful and have the potential to positively change the course of humanity. During the Democratic National Convention, Vice President Kamala Harris noted in her acceptance speech: “I will make sure that we lead the world into the future on space and artificial intelligence, that America, not China, wins the competition for the 21st century, and that we strengthen, not abdicate, our global leadership.”




This article was originally published on www.blackenterprise.com

Technology

Alaska Airlines venture capital lab creates its first startup: Odysee

Odysee CEO Steve Casley sees dollar signs in data. More specifically, in AI-powered software that can analyze vast amounts of data to help commercial airlines profit from complex flight schedules.

That’s exactly what Odysee, the first startup born from the aviation-focused venture lab created by Alaska Airlines and UP.Labs, is doing. The two companies launched the venture lab last year to create startups designed to solve specific problems in air travel, such as guest experience, operational efficiency, aircraft maintenance, routing and revenue management. Odysee said it raised $5 million in a pre-seed round led by UP.Partners, a Los Angeles-based VC firm affiliated with UP.Labs. Alaska Star Ventures, which launched in October 2021, has invested $15 million in UP.Partners’ inaugural early-stage fund.

According to Casley, Alaska Airlines CEO Ben Minicucci flagged the scheduling problem early on. And no wonder. While there is software available to analyze flight data and plan schedules, Casley says it lacks the kind of real-time data, and, most importantly, the revenue forecasts, that Odysee produces.

“You need tools to make better decisions, because typically in airlines, schedule changes are made by experienced planners who do it intuitively,” Casley said in a recent interview. “I wouldn’t say it’s seat of the pants, because a lot of the time they’re going to be right, since they’ve seen both good and bad changes. But they never actually had the data to support their decisions.”

According to the company, the data-driven software can run hundreds of simulations in seconds to quickly determine how schedule changes could affect revenue, profits and reliability.

“There are other optimizers, but to my knowledge none of these models or any optimization company offers revenue forecasts,” Casley said.

The machine learning model built by Odysee incorporates roughly 42 attributes, covering everything from departure time and day of week to traffic volumes on a given route and competitors’ schedules. In early simulations, the startup found it could save Alaska hundreds of thousands of dollars with just one schedule change.
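To make the idea concrete, here is a minimal, purely illustrative sketch of how a schedule change might be scored across many simulated demand scenarios. The attribute names, demand model and dollar figures below are invented assumptions, not Odysee’s actual model or results:

```python
# Hypothetical sketch: score a schedule change across simulated demand scenarios.
# Attribute names and revenue math are illustrative, not Odysee's actual model.
import random
from dataclasses import dataclass

@dataclass
class ScheduleOption:
    departure_hour: int  # one of the dozens of attributes a real model might use
    avg_fare: float      # expected fare on the route
    seats: int           # aircraft capacity

def simulate_revenue(option: ScheduleOption, n_sims: int = 1000) -> float:
    """Average simulated revenue per flight for a schedule option under random demand."""
    total = 0.0
    for _ in range(n_sims):
        # Toy demand model: midday departures draw slightly more passengers.
        base_demand = random.gauss(mu=140, sigma=25)
        midday_boost = 15 if 10 <= option.departure_hour <= 14 else 0
        passengers = max(0, min(option.seats, int(base_demand + midday_boost)))
        total += passengers * option.avg_fare
    return total / n_sims

current = ScheduleOption(departure_hour=6, avg_fare=220.0, seats=178)
proposed = ScheduleOption(departure_hour=11, avg_fare=220.0, seats=178)

delta = simulate_revenue(proposed) - simulate_revenue(current)
print(f"Estimated revenue impact of the change: ${delta:,.0f} per flight")
```

A production system would replace the toy demand model with forecasts driven by the kinds of attributes described above, but the basic structure (simulate many scenarios, average the revenue, compare options) is the same.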

Odysee is currently conducting user acceptance testing with Alaska Airlines. Once that is complete, Alaska will begin a trial of the software, which Casley said will start in late October.

That’s a short timeframe, considering UP.Labs and Alaska Airlines established the aviation lab only a year ago. A fast path to commercial products is one of UP.Labs’ main selling points. UP.Labs, which launched in 2022, is structured as a venture lab with a new type of financial investment vehicle. The company partners with large corporations such as Porsche, Alaska Airlines, and most recently JB Hunt to launch startups with new business models aimed at solving each industry’s biggest problems. Each partnership is slated to create six startups over three years.

Under the UP.Labs structure, these startups are not created solely to serve the corporate partner, in this case Alaska Airlines. Rather, they operate independently as commercial enterprises from the outset, ultimately generating revenue by selling products or services across the industry.

This article was originally published on techcrunch.com