
UK-based Wayve secures strategic investment from Uber to further develop autonomous driving technology


Uber is making a strategic investment in Wayve as an extension of the British startup’s previously announced $1.05 billion Series C funding round. The partnership will also see both companies work with automakers to integrate Wayve’s AI into consumer vehicles that could eventually operate on the ride-hailing giant’s platform.

The tie-up comes a week after Uber announced that Cruise’s robotaxis would join the Uber app in 2025. It is the latest in a series of self-driving technology partnerships that Uber has secured over the past few years.

Details about Uber’s partnership with Wayve are scarce, but the startup has made a splash since its founding in Cambridge in 2017. Over the past two years, Wayve has raised more than $1.3 billion from investors including SoftBank Group, Nvidia and Microsoft.


The startup is developing a self-learning, rather than rule-based, autonomous driving system, similar to Tesla’s approach. Like Tesla, Wayve doesn’t rely on lidar sensors; it uses cameras and radar to help its AI perceive the world around it. Unlike Tesla, Wayve is building its AI so that other automakers can equip consumer vehicles with Level 2+ advanced driver assistance systems, as well as Level 3 and Level 4 automated driving features.

SAE defines Level 3 and Level 4 autonomous driving systems as those that can operate autonomously under certain conditions. A driver still needs to be ready to take control of a Level 3 system, but not a Level 4 system. Wayve is currently still testing its L2+ technology in Jaguar I-Paces and Ford E-Transits with safety drivers behind the wheel, and has not yet begun testing L3 and L4, according to a Wayve spokesperson.

Wayve didn’t provide further details on the nature of its agreement with Uber. In a statement, the company said the partnership “envisages future Wayve-powered autonomous vehicles being available on the Uber network.”

Neither Wayve nor Uber has provided a timeline for when Wayve-powered vehicles will be included in Uber’s app. They haven’t disclosed whether the vehicles will be fully autonomous or equipped only with advanced driver-assist technology, and they haven’t said how much Uber is investing.


In a statement, Alex Kendall, CEO and co-founder of Wayve, said the partnership will help “significantly increase the AI learning capacity of our fleet, ensuring our AV technology is safe and ready for global deployment across the Uber network.”

Kendall also noted that Wayve and Uber will jointly “work with automotive OEMs to bring autonomous driving technologies to consumers faster.”

Uber CEO Dara Khosrowshahi said in a statement that Wayve’s approach to AI “holds enormous promise” as the company strives for “a world where modern vehicles are shared, electric, and autonomous.”

“We are excited to have Wayve as our partner to help us build Uber into the premier network for autonomous vehicles,” Khosrowshahi said.


In recent weeks, Uber has positioned itself as an ideal partner for self-driving startups looking to enter the market. Waymo robotaxis joined Uber’s platform in Phoenix last year. Uber has also partnered with companies that are building autonomous sidewalk delivery robots, including Serve Robotics, Cartken and Coco, as well as autonomous freight startups like Waabi and Aurora, which aim to bring autonomous driving capabilities to Uber Eats and Uber Freight.

This article was originally published on techcrunch.com

After Klarna, Zoom’s CEO also uses an AI avatar on quarterly earnings call


Zoom CEO Eric Yuan

CEOs are now so immersed in artificial intelligence that they are sending their avatars to handle quarterly earnings calls instead, at least in part.

After Klarna’s CEO used an AI avatar on an investor call earlier this week, Zoom CEO Eric Yuan followed suit, also using his avatar for initial comments. Yuan deployed his custom avatar via Zoom Clips, the company’s asynchronous video tool.

“I am proud to be one of the first CEOs to use an avatar in an earnings call,” he said, or rather, his avatar did. “This is just one example of how Zoom is pushing the boundaries of communication and collaboration. At the same time, we know trust and security are essential. We take AI-generated content seriously and are building strong safeguards to prevent misuse, protect user identity, and ensure avatars are used responsibly.”


Yuan has long been in favor of using avatars in meetings and has previously said the company aims to create digital twins of users. He isn’t alone in this vision; the CEO of an AI-powered transcription company is apparently training his own avatar to share the load.

Meanwhile, Zoom said it is making its custom avatar feature available to all users this week.


This article was originally published on techcrunch.com

OpenAI’s next big product won’t be a wearable: Report


Sam Altman speaks onstage during The New York Times Dealbook Summit 2024.

OpenAI pushed generative artificial intelligence into the public consciousness. Now it may be developing a very different kind of AI device.

According to a WSJ report, OpenAI CEO Sam Altman told employees on Wednesday that the company’s next big product won’t be a wearable. Instead, it will be a compact, screen-free device, fully aware of the user’s surroundings. Small enough to sit on a desk or fit in a pocket, Altman described it both as a “third device” alongside the MacBook Pro and iPhone, and as an “AI companion” integrated into everyday life.

The preview came after OpenAI announced it was acquiring io, a startup founded last year by former Apple designer Jony Ive, in an all-equity deal valued at $6.5 billion. Ive will take on a key creative and design role at OpenAI.


Altman reportedly told employees that the acquisition could ultimately add $1 trillion in value to the company, as it aims for a new category of device unlike the wearables or glasses that have tripped up other outfits.

Altman also reportedly emphasized to staff that secrecy would be crucial to prevent competitors from copying the product before launch. As it turns out, a recording of his comments leaked to the Journal, raising questions about how much he can trust his team, and how much more he will be willing to reveal.


This article was originally published on techcrunch.com

Google’s latest Gemma AI model can run on phones


Google’s family of “open” AI models, Gemma, is growing.

At Google I/O 2025 on Tuesday, Google debuted Gemma 3n, a model designed to run “smoothly” on phones, laptops and tablets. Available in preview starting Tuesday, Gemma 3n can handle audio, text, images and video, according to Google.

Models efficient enough to run offline, without the need for cloud compute, have gained popularity in the AI community in recent years. Not only are they cheaper to use than large models, but they also preserve privacy, eliminating the need to send data to a remote data center.


During a keynote at I/O, Gemma product manager Gus Martins said that Gemma 3n can run on devices with less than 2GB of RAM. “Gemma 3n shares the same architecture as Gemini Nano, and is also engineered for incredible performance,” he added.

In addition to Gemma 3n, Google is releasing MedGemma through its Health AI Developer Foundations program. According to Google, MedGemma is its most capable open model for analyzing health-related text and images.

“MedGemma (is) our (…) collection of open models for multimodal (health) text and image understanding,” said Martins. “MedGemma works great across a range of imaging and text applications, so that developers (…) can adapt the models for their own health apps.”

Also on the horizon is SignGemma, an open model for translating sign language into spoken-language text. Google claims that SignGemma will allow developers to create new apps and integrations for deaf and hard-of-hearing users.


“SignGemma is a new family of models trained to translate sign language into spoken-language text, and it’s best at American Sign Language and English,” said Martins. “It’s the most capable sign language understanding model ever, and we’re looking forward to you, developers and deaf and hard-of-hearing communities, taking this foundation and building with it.”

It is worth noting that Gemma has been criticized for its custom, non-standard license terms, which some developers say make using the models commercially a risky proposition. That hasn’t discouraged developers from downloading Gemma models tens of millions of times, however.



This article was originally published on techcrunch.com