
Technology

The undeniable connection between politics and technology



Written by April Walker

Politics and Technology: An Undeniable Union

In the modern era, the intersection of politics and technology has become an undeniable union, shaping the way societies function and the course of political processes. Rapid advances in technology have not only transformed communication and the dissemination of information but also redefined political engagement, campaigning, and governance. Given the political climate we face today, combined with the rise of artificial intelligence and the countless communication platforms at our disposal, let's explore the profound impact of technology on politics, focusing on the digitization of grassroots movements, the threats posed by hackers and deepfakes, the contest between social media and traditional media, and the future of post-election politics.


Grassroots organizing has gone digital

Historically, grassroots movements have relied on face-to-face interactions, community meetings, and physical rallies to mobilize support and drive change. The advent of digital technology, however, has revolutionized these movements, enabling them to reach wider audiences with unprecedented speed and efficiency. Social media platforms, email campaigns, and online petitions have become powerful tools for local organizers. These digital tools enable the rapid dissemination of information, real-time updates, and the ability to mobilize supporters across geographic boundaries. In the 2024 presidential election, for example, tools like Zoom were used on an unprecedented scale to unite communities in support of United States Vice President and presidential candidate Kamala Harris and to raise multimillion-dollar donations for her campaign. The Zoom calls, which began with Black women and grew to include "white dudes" and everything in between, proved a powerful fundraising forum: in a very short time, these communities not only raised an enormous sum of money but also amplified her message and consistently refuted the disinformation and lies spread by her opponent.

This is not the first time the power of social media to organize protests and gather international support has been demonstrated; consider the Arab Spring of 2010–2011. Platforms such as Twitter (now known as X) and Facebook played a key role in coordinating demonstrations and sharing real-time updates, which ultimately contributed to significant political changes in the region. Similarly, contemporary movements such as Black Lives Matter have harnessed digital technology to amplify their message, organize protests, and raise awareness on a global scale.

Hackers and Deepfakes


While technology has empowered political movements, it has also introduced new threats to the integrity of political processes. Hackers and deepfakes pose two significant challenges in this regard. Cyberattacks on political parties, government institutions, and electoral infrastructure are becoming more common, threatening the security and integrity of elections. In 2016, the Russian hack of the Democratic National Committee (DNC) highlighted the vulnerability of political organizations to cyber threats. Our current electoral process faces an even more direct and immediate threat: advances in artificial intelligence and the sheer proliferation of its use have created an increasingly sophisticated and complex network of "bad actors," especially from other countries, who are using the technology to try to influence the outcome of the US presidential election.

Deepfakes, manipulated videos or audio recordings that appear authentic, present another disturbing challenge. These sophisticated falsehoods can spread disinformation, discredit political opponents, and manipulate public opinion. Just look at the recent AI-generated photos of music superstar Taylor Swift that falsely portrayed her as endorsing Republican presidential candidate Donald Trump when, in reality, she publicly expressed her support for Kamala Harris. There is growing concern that deepfakes could undermine trust in political leaders and institutions. As technology continues to advance, the ability to detect and address these threats becomes crucial to maintaining the integrity of democratic processes.

Social media versus traditional media

The rise of social media has fundamentally changed the landscape of political communication. Traditional media such as newspapers, television, and radio used to hold a monopoly on the dissemination of information. Social media platforms, however, have democratized the flow of information, enabling individuals to share news, opinions, and updates instantly. This shift has both positive and negative political implications.


On the positive side, social media allows politicians to communicate directly with the public, promoting transparency and consistent engagement. Politicians can use platforms like Instagram, TikTok, LinkedIn, Facebook, and others to share their views, respond to voters, mobilize support and, in many cases, market to specific demographics that are critical to closing gaps in areas requiring coordinated focus and attention. However, the unregulated nature of social media also enables the spread of disinformation, fake news, and "echo chambers" in which individuals are only exposed to information that reinforces their existing beliefs. If you doubt this, take a look at the various messages on platforms supporting each political party; it is striking how starkly different the narratives can be.

With unwavering editorial standards and robust fact-checking processes, traditional media continues to play a key role in providing trusted information. However, it struggles to compete with the speed and global reach of social media. The coexistence of these two types of media creates a complex information ecosystem that demands critical thinking, intentional skepticism, and media literacy from society.

What’s next after the elections?

As we approach the upcoming Presidential Election Day, and indeed even after that day, attention will shift to managing and implementing campaign promises. Technology continues to play a key role at this stage, with governments using digital tools to ensure effective administration, transparency, and citizen engagement.


In my opinion, the post-election period and current policies will have to confront the challenges posed by disinformation and cyber threats. Governments and organizations must invest comprehensively in cybersecurity measures, digital literacy programs, and regulatory frameworks to protect the integrity of political processes. As technology evolves at an extremely rapid pace, the future of politics will likely see its influence continue to deepen, with an emphasis on balancing its benefits against the need to protect democratic values, institutions and, most importantly, public trust. That said, we cannot be afraid of technology; we must embrace it, because innovations like artificial intelligence and generative AI are creating competitive opportunities for our country that are unimaginably powerful and have the potential to positively change the course of humanity. During the Democratic National Convention, Vice President Kamala Harris noted in her acceptance speech: "I will make sure that we lead the world into the future on space and artificial intelligence; that America, not China, wins the competition for the 21st century; and that we strengthen, not abdicate, our global leadership."




This article was originally published on: www.blackenterprise.com

Technology

Anysphere, maker of Cursor, reportedly raises $900 million at a $9 billion valuation


AI robot face and programming code on a black background.

Anysphere, maker of the AI-powered coding tool Cursor, has attracted $900 million in a new financing round led by Thrive Capital, the Financial Times reported, citing anonymous sources familiar with the deal.

The report said that Andreessen Horowitz (a16z) and Accel are also participating in the round, which values the company at about $9 billion.

Cursor previously raised $105 million from Thrive and a16z at a $2.5 billion valuation, as TechCrunch reported in December. Thrive also led that round, with a16z participating as well. According to Crunchbase data, the startup has raised over $173 million to date.


Investors including Index Ventures and Benchmark have reportedly tried to back the company, but it seems existing investors do not want to miss the opportunity to keep supporting it.

Other AI-powered coding startups are also attracting investor interest. TechCrunch reported in February that Windsurf, a rival to Anysphere, was in talks to raise funds at a $3 billion valuation. OpenAI, an investor in Anysphere, is reportedly trying to acquire Windsurf for about the same amount.


This article was originally published on: techcrunch.com


Technology

Temu stops shipping products from China to the US


Shein and Temu icons are seen displayed on a phone screen in this illustration photo

The Chinese retailer Temu has changed its strategy in the face of American tariffs.

Through an executive order, President Donald Trump ended the so-called de minimis exemption, which allowed goods worth $800 or less to enter the country without tariffs. He also raised tariffs on Chinese goods by over 100%, forcing Chinese companies such as Temu and Shein, as well as American giants such as Amazon, to adapt their plans and raise prices.

CNBC reports that Temu has been affected as well, with American buyers seeing "import fees" of 130% to 150% added to their bills. Now, however, the company is no longer shipping goods directly from China to the United States. Instead, it only displays listings for products available in American warehouses, while goods shipped from China are marked as out of stock.


"Temu has been actively recruiting US sellers to join the platform," a company spokesperson said. "The move is to help local sellers reach more customers and grow their businesses."


This article was originally published on: techcrunch.com

Technology

One of Google's recent AI models scores worse on safety


The Google Gemini generative AI logo on a smartphone.

A recently released Google AI model scores worse on certain safety tests than its predecessor, according to the company's internal benchmarking.

In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, "text-to-text safety" and "image-to-text safety," Gemini 2.5 Flash regresses by 4.1% and 9.6%, respectively.

Text-to-text safety measures how often a model violates Google's guidelines given a prompt, while image-to-text safety assesses how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.
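
As a rough illustration of how an automated safety benchmark of this kind can be structured (a minimal sketch under assumptions, not Google's actual evaluation harness; `model_respond` and `judge_violates_policy` are hypothetical placeholders for a model API call and an automated policy classifier), the idea is to run a fixed prompt set through each model version, let an automated judge flag policy-violating responses, and compare violation rates:

```python
# Minimal sketch of an automated safety benchmark. The callables passed in are
# hypothetical placeholders, not real Google or Gemini APIs.
from typing import Callable, List

def violation_rate(prompts: List[str],
                   model_respond: Callable[[str], str],
                   judge_violates_policy: Callable[[str, str], bool]) -> float:
    """Fraction of prompts whose responses the automated judge flags as policy-violating."""
    flagged = sum(
        1 for prompt in prompts
        if judge_violates_policy(prompt, model_respond(prompt))
    )
    return flagged / len(prompts) if prompts else 0.0

def safety_regression(prompts: List[str],
                      old_model: Callable[[str], str],
                      new_model: Callable[[str], str],
                      judge: Callable[[str, str], bool]) -> float:
    """Positive value means the newer model violates policy more often than the older one."""
    return violation_rate(prompts, new_model, judge) - violation_rate(prompts, old_model, judge)
```

Under this reading, a reported regression of, say, 4.1% would mean the newer model's violation rate came out that much higher than its predecessor's on the same prompt set.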


In an email, a Google spokesperson confirmed that Gemini 2.5 Flash "performs worse on text-to-text and image-to-text safety."

These surprising benchmark results come as AI companies move to make their models more permissive; in other words, less likely to refuse to respond to controversial or sensitive subjects. For its latest crop of Llama models, Meta said it tuned the models not to endorse "some views over others" and to answer more "debated" political prompts. OpenAI said earlier this year that it would adjust future models so that they do not take an editorial stance and offer multiple perspectives on controversial topics.

Sometimes these efforts have backfired. TechCrunch reported on Monday that the default model powering OpenAI's ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a "bug."

According to Google's technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims the regression can be attributed in part to false positives, but it also admits that Gemini 2.5 Flash sometimes generates "violative content" when explicitly asked.


"Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations," the report reads.

Scores on SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch's testing of the model via the AI platform OpenRouter found that it will readily write essays in support of replacing human judges with AI, weakening due process protections in the US, and implementing widespread government surveillance programs.

Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google provides in its technical report show the need for greater transparency in model testing.

"There is a trade-off between instruction-following and policy-following, because some users may ask for content that would violate policies," Woodside told TechCrunch. "In this case, Google's latest Flash model complies with instructions more while also violating policies more. Google does not provide much detail on the specific cases in which policies were violated, although it says they are not severe. Without knowing more, it is hard for independent analysts to know whether there is a problem."


Google has previously come under fire for its model safety reporting practices.

The company took weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report was finally published, it initially omitted key details of the safety tests.

On Monday, Google published a more detailed report with additional safety information.

This article was originally published on: techcrunch.com