
Technology

Even the “Godmother of AI” has no idea what AGI is


Are you confused about artificial general intelligence, or AGI? It’s the thing OpenAI is obsessed with ultimately creating in a way that “benefits all of humanity.” The goal may be worth taking seriously, since the company just raised $6.6 billion to get closer to achieving it.

But if you’re still wondering what the heck AGI is, you’re not alone.

During a wide-ranging discussion at the Credo AI Leadership Summit on Thursday, Fei-Fei Li, the world-renowned researcher often called the “godmother of artificial intelligence,” said that even she doesn’t know what AGI is. At other points, Li discussed her role in the birth of modern AI, how society should protect itself against advanced AI models, and why she thinks her new unicorn startup, World Labs, will change everything.


But when asked what she makes of the “AI singularity,” Li was as confused as the rest of us.

“I come from academic AI and was educated in more rigorous and evidence-based methods, so I don’t really know what all these words mean,” Li told a packed room in San Francisco, next to a big window overlooking the Golden Gate Bridge. “Frankly, I don’t even know what AGI means. People say you know it when you see it; I guess I haven’t seen it. The truth is, I don’t spend much time thinking about these words, because I think there are so many more important things to do…”

If anyone should know what AGI is, it’s probably Fei-Fei Li. In 2006, she created ImageNet, the world’s first large AI training and benchmarking dataset, which was instrumental in catalyzing the current AI boom. From 2017 to 2018, she served as Chief Scientist of AI/ML at Google Cloud. Today, Li leads Stanford’s Human-Centered AI Institute (HAI), and her startup World Labs is building large world models. (The term is almost as confusing as AGI, if you ask me.)

OpenAI CEO Sam Altman took a stab at defining AGI in a New Yorker profile last year. Altman described AGI as “the equivalent of a median human that you could hire as a co-worker.”


Meanwhile, OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.”

Apparently, neither definition was good enough for a $157 billion company, so OpenAI created five levels it uses internally to gauge its progress toward AGI. The first level is chatbots (like ChatGPT), then reasoners (apparently OpenAI o1 was this level), agents (supposedly coming next), innovators (AI that can help invent things), and the last level, organizations (AI that can do the work of an entire organization).

Still confused? So am I, and so is Li. Also, that last level sounds like a lot more than the average co-worker could do.

Earlier in the conversation, Li said she has been fascinated by the idea of intelligence since a young age. That led her to study AI long before it was lucrative. In the early 2000s, Li says, she and a few others were quietly laying the groundwork for the field.


“In 2012, my ImageNet combined with AlexNet and GPUs; many people call that the birth of modern AI. It was driven by three key ingredients: big data, neural networks, and modern GPU computing. And once that moment hit, I think life was never the same for the whole field of AI, or for our world.”

When asked about SB 1047, California’s controversial AI bill, Li spoke carefully so as not to rekindle a controversy that Governor Newsom had just put to rest by vetoing the bill last week. (We recently spoke with the author of SB 1047, and he was more eager to reopen his argument with Li.)

“Some of you may know that I have been vocal about my concerns about this bill [SB 1047], which was vetoed, but right now I am thinking deeply, and with a lot of excitement, about looking forward,” Li said. “I was very honored, truly honored, that Governor Newsom invited me to participate in the next steps after SB 1047.”

California’s governor recently tapped Li, along with other AI experts, to form a task force to help the state develop guardrails for deploying AI. Li said she will take an evidence-based approach in this role and will do her best to advocate for academic research and funding. She also wants to make sure California doesn’t punish technologists, however.


“We need to really look at the potential impact on people and our communities, rather than putting the blame on the technology itself… It wouldn’t make sense for us to penalize a car engineer, say at Ford or GM, if a car is misused intentionally or unintentionally and harms a person. Just punishing the car engineer won’t make cars safer. We need to keep innovating on safety measures, but also improve the regulatory framework, whether it’s seat belts or speed limits, and the same goes for AI.”

That’s one of the better arguments I’ve heard against SB 1047, which would have penalized tech companies for unsafe AI models.

While Li advises California on AI regulation, she is also running her startup, World Labs, in San Francisco. It’s Li’s first time founding a startup, and she is one of the few women leading a cutting-edge AI lab.

“We are far from a truly diverse AI ecosystem,” Li said. “I believe that diverse human intelligence will lead to diverse artificial intelligence, and will simply give us better technology.”


In the next few years, she’s excited to bring “spatial intelligence” closer to reality. Li argues that human language, on which today’s large language models are built, probably took a million years to develop, whereas vision and perception likely took 540 million years. That means creating large world models is a far more complicated task.

“It’s not just about making computers see, but really understanding the whole 3D world, which I call spatial intelligence,” Li said. “We don’t just see to name things… We really see to do things, to navigate the world, to interact with each other, and closing that gap between seeing and doing requires spatial knowledge. As a technologist, I’m very excited about that.”

This article was originally published on techcrunch.com.

Temu stops shipping products directly from China to the US


Shein and Temu icons are seen displayed on a phone screen in this illustration photo

The Chinese retailer has changed its strategy in the face of American tariffs.

Via executive order, President Donald Trump ended the so-called de minimis rule, which allowed goods worth $800 or less to enter the country without tariffs. He also raised tariffs on Chinese goods by over 100%, forcing Chinese companies like Temu and Shein, as well as American giants like Amazon, to adjust their plans and raise prices.

Temu’s prices have also been affected, CNBC reports, with American shoppers seeing “import charges” of 130% to 150% added to their bills. Now, however, the company is no longer shipping goods directly from China to the United States. Instead, it only displays listings for products available in American warehouses, while goods shipped from China are listed as out of stock.
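To get a feel for what those surcharges mean at checkout, here is a back-of-the-envelope sketch. The exact fee formula isn’t described in the reporting, so treating the charge as a simple percentage added to the item price is an assumption for illustration only:

```python
def total_with_import_charge(item_price: float, charge_rate: float) -> float:
    """Add a percentage-based import charge to an item's price.

    charge_rate is a fraction: 1.30 means a 130% charge.
    (Illustrative only; the real fee calculation may differ.)
    """
    return round(item_price * (1 + charge_rate), 2)

# A $20 item under the 130%-150% charges CNBC reported:
print(total_with_import_charge(20.00, 1.30))  # 46.0
print(total_with_import_charge(20.00, 1.50))  # 50.0
```

In other words, a charge in that range more than doubles the sticker price, which helps explain why sellers are shifting to US warehouses.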


The company is actively recruiting American sellers to join the platform, a spokesperson said. “The move is designed to help local sellers reach more customers and grow their businesses.”



One of Google’s latest AI models scores worse on safety


The Google Gemini generative AI logo on a smartphone.

A recently released Google AI model scores worse on certain safety tests than its predecessor, according to the company’s internal benchmarking.

In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses 4.1% and 9.6%, respectively.

Text-to-text safety measures how often a model violates Google’s guidelines given a prompt, while image-to-text safety assesses how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.


In an emailed statement, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”

These surprising benchmark results come as AI companies move to make their models more permissive, in other words, less likely to refuse to respond to controversial or sensitive subjects. For its latest crop of Llama models, Meta said it tuned the models not to endorse “some views over others” and to reply to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to avoid taking an editorial stance and to offer multiple perspectives on controversial topics.

Sometimes, those efforts have backfired. TechCrunch reported on Monday that the default model powering OpenAI’s ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a “bug.”

According to Google’s technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims the regressions can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked.


“Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations,” the report reads.

Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch’s testing of the model via the AI platform OpenRouter found that it will readily write essays in support of replacing human judges with AI, weakening due process protections in the U.S., and implementing widespread warrantless government surveillance programs.

Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google gives in its technical report demonstrate the need for more transparency in model testing.

“There’s a trade-off between instruction-following and policy compliance, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google’s latest Flash model complies with instructions more while also violating policies more. Google doesn’t provide many details on the specific cases in which policies were violated, although it says they are not severe. Without knowing more, it’s hard for independent analysts to know whether there’s a problem.”


Google has come under fire before for its model safety reporting practices.

It took the company weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report was finally published, it initially omitted key safety testing details.

On Monday, Google released a more detailed report with additional safety information.


Aurora launches commercial self-driving truck service in Texas


Autonomous vehicle technology startup Aurora Innovation says it has successfully launched a commercial self-driving truck service in Texas, making it the first company to deploy driverless, heavy-duty trucks for commercial use on public roads in the U.S.

The launch comes after a delay: in October, Aurora pushed its planned 2024 debut to April 2025. The debut also comes five months after rival Kodiak Robotics delivered its first driverless trucks to a customer for commercial operations in off-road field environments.

Aurora says it began hauling freight this week between Dallas and Houston with launch customers Hirschbach Motor Lines and Uber Freight, and that it has completed 1,200 driverless miles so far. The company plans to expand to El Paso and Phoenix by the end of 2025.


TechCrunch has reached out for more details about the launch: for instance, how many vehicles Aurora has deployed, and whether the system has had to execute a pullover maneuver or needed remote human assistance.

Aurora’s commercial launch comes at a tricky time. Self-driving trucks have long been pitched as a solution to labor shortages in long-haul trucking and to an expected rise in freight shipping. Trump’s tariffs have changed that outlook, at least in the short term. According to an April report from ACT Research, an analyst firm covering the commercial vehicle industry, freight is expected to decline in the U.S. this year amid drops in volumes and consumer spending.

Aurora will report its first-quarter results next week, which is when it may share how it expects the current trade war to affect its future business. TechCrunch has reached out to learn more about how tariffs are affecting Aurora’s operations.

For now, Aurora will likely focus on further proving out its driverless safety case and working with state and federal lawmakers to adopt favorable policies that help it grow.


At the start of 2025, Aurora filed a lawsuit against federal regulators after a court declined to grant it an exemption from a safety requirement to place warning triangles on the road when a truck has to stop on the highway, something that is difficult to do when there is no driver in the vehicle. To stay compliant with the rule while still deploying driverless trucks, Aurora will likely have a human-driven chase car follow its trucks while they operate.

