Technology

Meta’s AI chief says world models are the key to ‘human-level artificial intelligence’, but it may be 10 years away

Do today’s artificial intelligence models really remember, think, plan, and reason like a human brain would? Some AI labs would have you believe they do, but according to Meta’s Chief AI Scientist Yann LeCun, the answer is no. He thinks we could get there in a decade or so, though, by pursuing a new approach he calls the “world model.”

Earlier this year, OpenAI released a new feature it calls “memory” that allows ChatGPT to “remember” your conversations. The startup’s latest generation of models, o1, displays the word “thinking” while generating output, and OpenAI says the same models are capable of “complex reasoning.”

All of this suggests that we are close to AGI. However, during a recent talk at the Hudson Forum, LeCun pushed back against AI optimists such as xAI founder Elon Musk and Google DeepMind co-founder Shane Legg, who suggest that human-level artificial intelligence is just around the corner.

“We need machines that understand the world; (machines) that can remember things, that have intuition, that have common sense, things that can reason and plan at the same level as humans,” LeCun said during the talk. “Despite what you have heard from the most enthusiastic people, current AI systems are not capable of this.”

LeCun argues that today’s large language models, such as those powering ChatGPT and Meta AI, are a far cry from “human-level artificial intelligence.” He later said humanity could be “years, if not decades” away from achieving that goal. (That doesn’t stop his boss, Mark Zuckerberg, from asking him when AGI will arrive.)

The reason is simple: these LLMs work by predicting the next token (often a few letters or a short word), and today’s image/video models predict the next pixel. In other words, language models are one-dimensional predictors and AI image/video models are two-dimensional predictors. These models have become quite good at predicting in their respective dimensions, but they don’t really understand the three-dimensional world.
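To make “one-dimensional prediction” concrete, here is a toy next-token predictor in Python. It merely counts which word follows which in a corpus, then greedily emits the most frequent successor. Real LLMs use neural networks over huge vocabularies, but the autoregressive loop has the same shape: predict the next token, append it, repeat. The corpus and function names here are purely illustrative.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each token follows another: a toy one-dimensional predictor."""
    counts = defaultdict(Counter)
    tokens = text.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Greedily return the most likely next token, as an LLM does at each step."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

def generate(counts, token, n=4):
    """Autoregressive loop: predict, append, repeat."""
    out = [token]
    for _ in range(n):
        token = predict_next(counts, token)
        if token is None:
            break
        out.append(token)
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate"
model = train_bigram(corpus)
print(generate(model, "the", 3))  # "the cat sat on"
```

Note how the model never represents the room, the cat, or the mat; it only captures which symbols tend to follow which, which is LeCun’s point about the limits of sequence prediction.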

For this reason, modern artificial intelligence systems are unable to perform simple tasks that most humans can. LeCun notes that people learn to clear the dinner table by age 10 and to drive a car by 17 – and they learn each in a matter of hours. Yet even the world’s most advanced AI systems today, trained on thousands or millions of hours of data, cannot operate reliably in the physical world.

To handle more complex tasks, LeCun suggests we need to build three-dimensional models that can perceive the world around us, centered on a new type of AI architecture: world models.

“A world model is a mental model of how the world behaves,” he explained. “You can imagine a sequence of actions you might take, and your world model will allow you to predict what effect that sequence of actions will have on the world.”

Consider the “world model” in your own head. For example, imagine looking at a messy bedroom and wanting to clean it. You can imagine how picking up all your clothes and putting them away would do the trick. You don’t have to try multiple methods or learn how to clean a room first. Your brain observes the three-dimensional space and creates an action plan that helps you achieve your goal on the first try. That action plan is the secret sauce that AI world models promise.

Part of the benefit is that world models can take in far more data than LLMs. That also makes them computationally intensive, which is why cloud providers are racing to partner with AI companies.

World models are a big concept that several AI labs are currently chasing, and the term is quickly becoming another buzzword attracting venture capital. A group of esteemed AI researchers, including Fei-Fei Li and Justin Johnson, just raised $230 million for their startup, World Labs. The “Godmother of AI” and her team are also confident that world models will unlock much smarter AI systems. OpenAI also describes its unreleased Sora video generator as a world model, but doesn’t go into details.

LeCun outlined the idea of using world models to create human-level AI in a 2022 paper on “objective-driven AI,” though he notes the concept is over 60 years old. In short, a basic representation of the world (for example, video of a dirty room) and memory are fed into the world model. The world model then predicts what the world will look like based on that information. You then give the world model objectives, including a changed state of the world you want to achieve (say, a clean room), as well as guardrails to ensure the model doesn’t harm people while achieving the objective (don’t kill me in the middle of cleaning the room, please). The world model then finds a sequence of actions to achieve these objectives.
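The loop described above – feed a world state into the model, predict the effect of each candidate action, and search for a sequence that reaches the objective without violating guardrails – can be sketched as a toy planner. This is a minimal illustration of the idea, not Meta’s actual architecture; the room-cleaning domain and all function names are invented for the example.

```python
from collections import deque

def world_model(state, action):
    """Predict the next world state given an action (here: picking up one item)."""
    return frozenset(item for item in state if item != action)

def violates_guardrail(action):
    """Guardrails filter out actions the planner must never take."""
    return action == "knock over lamp"

def plan(state, goal, actions, max_depth=5):
    """Breadth-first search for an action sequence the world model
    predicts will transform the current state into the goal state."""
    queue = deque([(state, [])])
    seen = {state}
    while queue:
        current, sequence = queue.popleft()
        if current == goal:
            return sequence
        if len(sequence) >= max_depth:
            continue
        for action in actions:
            if violates_guardrail(action):
                continue
            predicted = world_model(current, action)  # imagine the outcome
            if predicted not in seen:
                seen.add(predicted)
                queue.append((predicted, sequence + [action]))
    return None  # no safe plan found

dirty_room = frozenset({"sock", "shirt"})
clean_room = frozenset()
print(plan(dirty_room, clean_room, ["sock", "shirt", "knock over lamp"]))
# ['sock', 'shirt']
```

The key design point is that the planner never acts in the world while searching: it uses the world model to imagine outcomes, exactly the property LeCun credits for humans cleaning a room right on the first try.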

According to LeCun, Meta’s long-term research lab, FAIR (Fundamental AI Research), is actively working on building objective-driven AI and world models. FAIR used to work on AI for Meta’s upcoming products, but LeCun says the lab has in recent years shifted to focus exclusively on long-term AI research. LeCun says FAIR isn’t even using LLMs these days.

World models are an intriguing idea, but LeCun says we haven’t made much progress on making these systems a reality. There are many very hard problems between where we are today and that vision, and he says it’s certainly more complicated than we think.

“It will be years, if not a decade, before we can get everything up and running here,” LeCun said. “Mark Zuckerberg keeps asking me how long it will take.”

This article was originally published on : techcrunch.com

Technology

US medical device giant Artivion says hackers stole files during a cybersecurity incident

Artivion, a medical device company that makes implantable tissue for heart and vascular transplants, says its services have been “disrupted” due to a cybersecurity incident.

In an 8-K filing with the SEC on Monday, Georgia-based Artivion, formerly CryoLife, said it became aware of a “cybersecurity incident” that involved the “compromise and encryption” of data on November 21. This suggests the company was hit by ransomware, though Artivion has not yet confirmed the nature of the incident and did not immediately respond to TechCrunch’s questions. No major ransomware group has yet claimed responsibility for the attack.

Artivion said it took some systems offline in response to the cyberattack, which the company said caused “disruptions to certain ordering and shipping processes.”

Artivion, which reported third-quarter revenue of $95.8 million, said it does not expect the incident to have a material impact on the company’s finances.


Technology

It’s a Raspberry Pi 5 in a keyboard and it’s called Raspberry Pi 500

Single-board computer maker Raspberry Pi is updating its cute little keyboard computer with better specs. Called the Raspberry Pi 500, this successor to the Raspberry Pi 400 is just as powerful as the current Raspberry Pi flagship, the Raspberry Pi 5. It is available for purchase now from Raspberry Pi resellers.

The Raspberry Pi 500 is the easiest way to get started with the Raspberry Pi, as it’s not as intimidating as the Raspberry Pi 5. When you look at the Raspberry Pi 500, you don’t see any chipsets or PCBs (printed circuit boards). The Raspberry Pi is completely hidden inside a familiar form factor: a keyboard.

The idea with the Raspberry Pi 500 is that you can plug in a mouse and a display and you’re ready to go. If, for example, you have a relative who uses a woefully outdated computer running an old version of Windows, the Raspberry Pi 500 can easily replace the old PC tower for most computing tasks.

More importantly, this device brings the Raspberry Pi back to its roots. Raspberry Pi computers were originally intended for educational use. Over time, tech enthusiasts and industrial customers began using the single-board computers everywhere. (For example, if you’ve ever been to London Heathrow Airport, all of the departure and arrival boards are powered by Raspberry Pi.)

The Raspberry Pi 500 draws on the roots of the Raspberry Pi Foundation, a non-profit organization. It’s the perfect first computer for school. In some ways, it’s much better than a Chromebook or iPad because it’s cheap and highly customizable, which encourages creative thinking.

The Raspberry Pi 500 comes with a 32GB SD card pre-loaded with Raspberry Pi OS, a Debian-based Linux distribution. It costs $90, a slight ($20) price increase over the Raspberry Pi 400.

Only UK and US keyboard variants will be available at launch, but versions with French, German, Italian, Japanese, Nordic, and Spanish keyboard layouts will follow soon. And if you’re looking for a bundle that includes everything you need, Raspberry Pi also offers a $120 desktop kit that includes the Raspberry Pi 500, a mouse, a 27W USB-C power adapter, and a micro-HDMI to HDMI cable.

In other news, Raspberry Pi has announced another new product: the Raspberry Pi Monitor, a 15.6-inch 1080p display priced at $100. Since there are plenty of 1080p portable monitors on the market, this launch isn’t as noteworthy as the Pi 500. But for die-hard Pi fans, there is now a Raspberry Pi-branded monitor option as well.

Image credits: Raspberry Pi


Technology

Apple Vision Pro may add support for PlayStation VR controllers

Vision Pro headset

Apple wants to make its Vision Pro mixed reality device more attractive to gamers and game developers, according to a new report from Bloomberg’s Mark Gurman.

The Vision Pro has been positioned more as a productivity and media consumption device than one aimed at gamers, due in part to its reliance on gaze and hand controls rather than a separate controller.

However, Apple may need gamers if it wants to expand the Vision Pro’s audience, especially since Gurman reports that fewer than half a million units have been sold so far. As such, the company has reportedly been in talks with Sony about adding support for PlayStation VR2 hand controllers, and has also talked to developers about whether they would support the controllers in their games.

With support for more precise controllers, Apple could also make other kinds of software viable on the Vision Pro, such as Final Cut Pro or Adobe Photoshop.
