
Technology

X users treating Grok like a fact-checker spark concerns over misinformation


Fact-checking using AI

Some users on Elon Musk's X are turning to xAI's AI bot Grok for fact-checking, raising concerns among human fact-checkers that this could fuel misinformation.

Earlier this month, X enabled users to call on xAI's Grok and ask it questions about different things. The move was similar to Perplexity, which has been running an automated account on X to offer a similar experience.

Shortly after xAI created Grok's automated account on X, users started experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target particular political beliefs.


Fact-checkers are concerned about using Grok, or any other AI assistant of this sort, in this way because the bots can frame their answers to sound convincing even when they are not factually correct. Instances of Grok spreading fake news and misinformation have been seen in the past.

In August last year, five state secretaries urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the US elections.

Other chatbots, including OpenAI's ChatGPT and Google's Gemini, were also seen generating inaccurate information about the election last year. Separately, disinformation researchers found in 2023 that AI chatbots, including ChatGPT, could easily be used to produce convincing text with misleading narratives.

“AI assistants, like Grok, are really good at using natural language and give answers that sound like something a person said. In that way, the AI products have this claim on naturalness and authentic-sounding responses, even when they are potentially very wrong. That would be the danger here,” Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter, told TechCrunch.

Grok was asked by a user to fact-check claims made by another user

Unlike AI assistants, human fact-checkers use multiple credible sources to verify information. They also take full accountability for their findings, with their names and organizations attached to ensure credibility.

Pratik Sinha, co-founder of India's non-profit fact-checking website Alt News, said that although Grok currently appears to have convincing answers, it is only as good as the data it is supplied with.

“Who is going to decide what data it gets supplied with, and that is where government interference, etc., will come into the picture,” he noted.

“There is no transparency. Anything which lacks transparency will cause harm, because anything that lacks transparency can be molded in any which way.”


“Could be misused – to spread misinformation”

In one of the responses posted earlier this week, Grok's account on X acknowledged that it “could be misused – to spread misinformation and violate privacy.”

However, the automated account does not show any disclaimers to users when they receive its answers, leaving them misinformed if it has, for instance, hallucinated the answer, a potential downside of AI.

Grok's response on whether it can spread misinformation (translated from Hinglish)

“It may make up information to provide a response,” Anushka Jain, a research associate at a Goa-based multidisciplinary research collective, told TechCrunch.

There is also the question of how much Grok uses posts on X as training data, and what quality control measures it uses to fact-check such posts. Last summer, X pushed out a change that appeared to allow Grok to consume X user data by default.

The second concerning aspect of AI assistants like Grok being accessible through social media platforms is that they deliver information in public, unlike ChatGPT or other chatbots that are used privately.


Even if a user is well aware that the information they receive from the assistant could be misleading or not entirely correct, others on the platform may still believe it.

This could cause serious social harm. Instances of that were seen earlier in India, when misinformation circulating over WhatsApp led to mob lynchings. However, those severe incidents occurred before the arrival of GenAI, which has made generating synthetic content even easier and made it appear more realistic.

“If you see a lot of these answers, you're going to say, hey, well, most of them are right, and that may be so, but there are going to be some that are wrong. And how many? It's not a small fraction. Some research studies have shown that AI models are subject to 20% error rates... and when it goes wrong, it can go really wrong, with real-world consequences,” IFCN's Holan said.

AI vs. real fact-checkers

While AI companies, including xAI, are refining their AI models to communicate more like humans, they still are not – and cannot – replace humans.


Over the past few months, tech companies have been exploring ways to reduce reliance on human fact-checkers. Platforms including X and Meta have started embracing a new approach to fact-checking through so-called Community Notes.

Naturally, such changes also cause concern among fact-checkers.

Sinha of Alt News optimistically believes that people will learn to differentiate between machines and human fact-checkers, and will value human accuracy more.

“We're going to see the pendulum swing back toward more fact-checking,” IFCN's Holan said.


However, she noted that in the meantime, fact-checkers will likely have more work to do as AI-generated information spreads swiftly.

“A lot of this issue depends on whether you really care about what is actually true or not, or whether you are merely looking for the veneer of something that sounds and feels true without actually being true. Because that is what AI assistance will get you,” she said.

X and xAI did not respond to our request for comment.


This article was originally published on: techcrunch.com

Technology

How Musk manages his growing family: WSJ


Elon Musk says it is his duty to “make new humans.” Now a WSJ investigation suggests that he may have fathered more than the 14 children publicly known, with sources claiming the real number could be much higher. The report also describes how Musk keeps these details under wraps.

In the middle of it all, according to the report, is longtime fixer Jared Birchall, who runs Musk's family office but also handles the logistics of Musk's growing family, including drawing up hush agreements and serving as a sounding board for the mothers of some of the children.

For example, Musk reportedly asked conservative influencer Ashley St. Clair to sign a restrictive agreement after she gave birth to their son last fall. The deal: $15 million plus an additional $100,000 per month until the child turns 21, in exchange for her silence. She refused; he reportedly says the offer gets worse with each perceived betrayal. (She told the Journal that Musk's team sent her only $20,000 after the publication reached out to Musk for comment on its article.)


As for Birchall, who is also CEO of Musk's brain-implant venture Neuralink and a partner at Musk's AI venture xAI, managing Musk's private life may simply be a third full-time job. According to the Journal, in one two-hour conversation with St. Clair, Birchall told her that going the “legal route” with Musk “always, always leads to a worse outcome for that woman than otherwise.”

This article was originally published on: techcrunch.com

Technology

Lime scooter and e-bike batteries will be recycled by Redwood Materials


Shared micromobility company Lime has reached an agreement to send batteries used in its scooters and e-bikes to Redwood Materials, which extracts and recycles critical minerals such as lithium, cobalt, nickel, and copper.

The agreement, announced Monday, makes Redwood Materials the sole battery recycling partner for Lime's shared scooters and e-bikes located in cities in the United States, Germany, and the Netherlands. The deal does not cover every region where Lime operates, a list spanning cities across Europe, Asia, and Australia.

Lime has had other recycling partnerships in the past, notably with Sprout through its suppliers. However, this is the first time the shared micromobility company has a direct relationship with a battery recycler in North America that will directly process the material for recovery and return it to the supply chain.


Redwood Materials, the Carson City, Nevada startup founded by former Tesla CTO JB Straubel, will recover battery materials once the batteries can no longer be used. After recovery and recycling, the materials can be reintroduced into the battery manufacturing process. This closed-loop production system, which can reduce the demand for mining and refining minerals, is at the center of Redwood Materials' business.

The effort also aligns with Lime's own sustainability goals. Lime aims to decarbonize its operations by 2030. The company has made progress, reducing its Scope 1, 2, and 3 emissions by 59.5% in the five years since its 2019 baseline year. Lime plans to report its 2024 carbon emissions results in May.

“This collaboration marks meaningful progress in establishing a more circular supply chain, helping ensure our batteries are not only responsibly recycled at the end of their lives, but that their materials are returned to the battery supply chain,” said Andrew Savage, vice president for sustainability at Lime.

Lime also has partnerships with Gomi in the UK and Voltr in France and other European countries to collect battery cells for “second life” applications, including in consumer electronics such as portable speakers and battery packs.


Redwood Materials has agreements with other micromobility companies, including Lyft and Rad Power Bikes, to recycle bike and scooter batteries. Redwood, which has raised over $2 billion in private funding, announced earlier this month that it has opened a research and development center in San Francisco.


This article was originally published on: techcrunch.com

Technology

The Legal Defense Fund withdraws from Meta's civil rights advisory group over DEI rollback




On April 11, the Legal Defense Fund announced that it was leaving Meta's external civil rights advisory council, citing concerns over the changes the tech company made in January to its diversity, equity, inclusion, and accessibility policies.

According to LDF, those changes, which some perceived as Meta's capitulation to the incoming Trump administration, contributed to its decision to leave the tech company's advisory council.

In January, LDF, along with several other civil rights organizations that were part of the board, sent a letter to Meta CEO Mark Zuckerberg outlining their concerns about how the changes would negatively affect users.


“We are shocked and disappointed that Meta did not consult with this group or its members, given these significant changes to its content policy. Failing to consult even its own advisory group of external civil rights experts shows a cynical disregard for its diverse user base and undermines Meta's stated commitment to the free speech to which it claims to be ‘returning.’”

They closed the letter hoping that Meta would champion the ideals of free speech: “If Meta truly wants to champion free speech, it must commit to free speech across all of its services. As an external civil rights advisory group, we offer our advice and expertise in creating a better path.”

Those fears only grew in the following months, culminating in another letter, this one from LDF director Todd A. Cox, indicating that the organization was withdrawing its membership from Meta's civil rights advisory council.

“I am deeply disturbed and disappointed by Meta's January 7, 2025 announcement of irresponsible changes to the content moderation policies on its platforms, which pose a serious risk to the health and safety of Black communities and risk destabilizing our republic,” Cox wrote.


He continued: “For nearly a decade, the NAACP Legal Defense and Educational Fund, Inc. (LDF) has invested significant time and resources working with Meta as part of an informal committee advising the company on civil rights matters. However, Meta implemented these content moderation policy changes without consulting this group, and many of the changes run directly counter to guidance from LDF and its partners. LDF can no longer participate in the civil rights advisory committee.”

In a separate but related letter, LDF pointedly reminded Meta of its actual obligations under the Civil Rights Act of 1964 and other laws concerning workplace discrimination, in contrast to the Trump administration's false claims that diversity, equity, and inclusion initiatives discriminate against white Americans.

“While Meta has changed its policies, its obligations under federal civil rights laws remain unchanged. Title VII of the Civil Rights Act of 1964 and other civil rights laws prohibit discrimination in the workplace, including disparate treatment, workplace policies that have unfair disproportionate effects, and hostile work environments. This remains true with respect to diversity, equity, inclusion, and accessibility programs.”

In the LDF press release announcing both letters, Cox called attention to Meta's contribution to growing violence and division in the country's social climate.


“LDF worked hard and in good faith with Meta's leadership and its civil rights advisory group to ensure that the company's workforce reflects the values and racial composition of the United States, and to elevate the safety priorities of the many diverse communities that use Meta's platforms,” said Cox. “We cannot now in good conscience support a company that knowingly takes steps to implement policy changes that fuel further division and violence in the United States. We call on Meta to reverse course on these dangerous changes.”


This article was originally published on: www.blackenterprise.com
