Technology

Viggle Creates Controllable AI Characters for Memes and Idea Visualization

You might not be aware of Viggle AI, but you’ve probably seen the viral memes it has created. The Canadian AI startup is responsible for dozens of remix videos of rapper Lil Yachty bouncing around on stage at a summer music festival. In one of the videos, Lil Yachty is replaced by Joaquin Phoenix as the Joker. In another, Jesus appears to be hyping up the crowd. Users have created countless versions of this video, but one AI startup fueled the memes. And Viggle’s CEO says YouTube videos are fueling his AI models.

Viggle trained a 3D video foundation model, JST-1, to achieve a “true understanding of physics,” the company says in its press release. Viggle CEO Hang Chu says a key difference between Viggle and other AI video models is that Viggle lets users specify the movements they want their characters to make. Other AI video models often create unrealistic character movements that don’t obey the laws of physics, but Chu says Viggle’s models are different.

“We’re basically building a new type of graphics engine, but only using neural networks,” Chu said in an interview. “The model itself is completely different from existing video generators, which are mostly pixel-based and don’t really understand the structure and properties of physics. Our model is designed to have that understanding, and so it’s much better in terms of controllability and generation efficiency.”

For example, to create a video of the Joker as Lil Yachty, users simply upload the original video (Lil Yachty dancing on stage) and an image of the character (the Joker) that should perform the movements. Alternatively, users can upload images of a character along with text prompts describing how to animate them. As a third option, Viggle lets users create animated characters from scratch using just text prompts.

But memes only make up a small percentage of Viggle’s usage; Chu says the model has gained widespread adoption as a visualization tool for creators. The videos are far from perfect—they’re shaky, and the faces are expressionless—but Chu says the tool has proven effective for filmmakers, animators, and video game designers who want to turn their ideas into something visual. Right now, Viggle’s models only create characters, but Chu hopes they’ll eventually enable more complex videos.

Viggle currently offers a free, limited version of its AI model on Discord and its web app. The company also offers a $9.99 subscription for more storage and provides some creators with special access through its Creator Program. The CEO says Viggle is in talks with film and video game studios about licensing the technology, but he also sees it being adopted by independent animators and content creators.

On Monday, Viggle announced that it has raised $19 million in a Series A round led by Andreessen Horowitz, with participation from Two Small Fish. The startup says the round will help Viggle scale, speed up product development, and expand its team. Viggle tells TechCrunch that it has partnered with Google Cloud, among other cloud providers, to train and run its AI models. Such partnerships with Google Cloud often include access to GPU and TPU clusters, but typically don’t include YouTube videos on which to train AI models.

Training data

During a TechCrunch interview with Chu, we asked what data Viggle’s AI video models were trained on.

“Until now, we have relied on publicly available data,” Chu said, echoing a similar line OpenAI CTO Mira Murati gave when asked about Sora’s training data.

When asked if Viggle’s training dataset included YouTube videos, Chu replied bluntly, “Yes.”

That might be an issue. In April, YouTube CEO Neal Mohan told Bloomberg that using YouTube videos to train an AI text-to-video generator would be a “clear violation” of the platform’s terms of use. The comments focused on OpenAI’s potential use of YouTube videos to train Sora.

Mohan explained that Google, which owns YouTube, may have agreements with some creators to use their videos in training datasets for Google DeepMind’s Gemini. However, scraping videos from the platform without the company’s prior consent is not permitted, according to Mohan and YouTube’s terms of service.

Following TechCrunch’s interview with Viggle’s CEO, a Viggle spokesperson emailed to retract Chu’s statement, telling TechCrunch that the CEO “spoke too soon about whether Viggle uses YouTube data for training. In fact, Hang/Viggle is not in a position to share the details of its training data.”

However, we noted that Chu had already gone on the record and asked for a clear statement on the matter. A Viggle spokesperson confirmed in a follow-up response that the AI startup trains on YouTube videos:

Viggle uses various public sources, including YouTube, to generate AI content. Our training data has been carefully curated and refined, ensuring compliance with all terms of service throughout the process. We prioritize maintaining strong relationships with platforms like YouTube and are committed to adhering to their terms, avoiding mass downloads and any other activities that may involve unauthorized video downloads.

This approach to compliance seems to contradict Mohan’s comments in April that YouTube’s video corpus is not a public source. We reached out to YouTube and Google spokespeople but have yet to hear back.

The startup joins others in the gray area of using YouTube as training data. Multiple AI model developers—including OpenAI, Nvidia, Apple, and Anthropic—have reportedly used video transcripts or YouTube clips for training purposes. It’s a dirty secret in Silicon Valley that isn’t really a secret: everyone is probably doing it. What’s rare is saying it out loud.

This article was originally published on : techcrunch.com

Technology

US medical device giant Artivion says hackers stole files during a cybersecurity incident


Artivion, a medical device company that produces implantable tissue for heart and vascular transplants, says its services have been “disrupted” due to a cybersecurity incident.

In an 8-K filing with the SEC on Monday, Georgia-based Artivion, formerly CryoLife, said it became aware on November 21 of a “cybersecurity incident” that involved the “compromise and encryption” of data. This suggests the company was hit by ransomware, but Artivion has not yet confirmed the nature of the incident and did not immediately respond to TechCrunch’s questions. No major ransomware group has yet claimed responsibility for the attack.

Artivion said it took some systems offline in response to the cyberattack, which it said caused “disruptions to certain ordering and shipping processes.”

Artivion, which reported third-quarter revenue of $95.8 million, said it did not expect the incident to have a material impact on the company’s finances.

This article was originally published on : techcrunch.com

Technology

It’s a Raspberry Pi 5 in a keyboard and it’s called Raspberry Pi 500


Single-board computer maker Raspberry Pi is updating its keyboard-computer device with better specs. Named the Raspberry Pi 500, this successor to the Raspberry Pi 400 is just as powerful as the current Raspberry Pi flagship, the Raspberry Pi 5. It is available for purchase now from Raspberry Pi resellers.

The Raspberry Pi 500 is the easiest way to get started with the Raspberry Pi because it’s not as intimidating as the Raspberry Pi 5. When you look at the Raspberry Pi 500, you don’t see any chipsets or PCBs (printed circuit boards). The Raspberry Pi is completely hidden inside a familiar housing: the keyboard.

The idea behind the Raspberry Pi 500 is that you can connect a mouse and a display and you’re ready to go. If, for instance, you have a relative who uses a very outdated computer with an old version of Windows, the Raspberry Pi 500 can easily replace the old PC tower for most computing tasks.

More importantly, this device brings us back to the roots of the Raspberry Pi. Raspberry Pi computers were originally intended for educational applications. Over time, technology enthusiasts and industrial customers began using the single-board computers everywhere. (For example, if you’ve ever been to London Heathrow Airport, the departures and arrivals boards are powered by Raspberry Pi.)

The Raspberry Pi 500 draws inspiration from the roots of the Raspberry Pi Foundation, a non-profit organization. It’s the perfect first computer for school. In some ways, it’s much better than a Chromebook or iPad because it’s cheap and highly customizable, which encourages creative thinking.

The Raspberry Pi 500 comes with a 32GB SD card pre-loaded with Raspberry Pi OS, a Debian-based Linux distribution. It costs $90, a slight ($20) price increase over the Raspberry Pi 400.

Only UK and US keyboard variants will be available at launch, but versions with French, German, Italian, Japanese, Nordic, and Spanish keyboard layouts will follow soon. And if you’re looking for a bundle that includes everything you need, Raspberry Pi also offers a $120 desktop kit that includes the Raspberry Pi 500, a mouse, a 27W USB-C power adapter, and a micro-HDMI to HDMI cable.

In other news, Raspberry Pi has announced another new product: the Raspberry Pi Monitor, a 15.6-inch 1080p display priced at $100. Since there are numerous 1080p portable monitors on the market, this launch isn’t as noteworthy as the Pi 500. However, die-hard Pi fans now have a Raspberry Pi-branded monitor option as well.

Image credits: Raspberry Pi

This article was originally published on : techcrunch.com

Technology

Apple Vision Pro may add support for PlayStation VR controllers


Vision Pro headset

Apple wants to make its Vision Pro mixed reality device more attractive to gamers and game developers, according to a new report from Bloomberg’s Mark Gurman.

The Vision Pro has been presented more as a productivity and media consumption device than a tool aimed at gamers, due in part to its reliance on eye and hand controls rather than a separate controller.

However, Apple may need gamers if it wants to expand the Vision Pro’s audience, especially since Gurman reports that fewer than half a million units have been sold so far. As such, the company has reportedly been in talks with Sony about adding support for PlayStation VR2 hand controllers, and has also asked developers whether they would support the controllers in their games.

With more precise control on offer, Apple could also make other types of software available on the Vision Pro, such as Final Cut Pro or Adobe Photoshop.

This article was originally published on : techcrunch.com