Technology
Viggle Creates Controllable AI Characters for Memes and Visualizing Ideas

You might not know Viggle AI by name, but you've probably seen the viral memes it created. The Canadian AI startup is behind dozens of remixed videos of rapper Lil Yachty bouncing around on stage at a summer music festival. In one version, Lil Yachty was replaced by Joaquin Phoenix as the Joker. In another, Jesus appeared to be hyping up the crowd. Users have created countless versions of this video, but one AI startup fueled the memes. And Viggle's CEO says YouTube videos are fueling his AI models.
Viggle trained a 3D video foundation model, JST-1, to achieve a "true understanding of physics," the company says in its press release. Viggle CEO Hang Chu says the key difference between Viggle and other AI video models is that Viggle lets users specify the motion they want characters to perform. Other AI video models often create unrealistic character movements that don't obey the laws of physics, but Chu says Viggle's models are different.
“We’re basically building a new type of graphics engine, but only using neural networks,” Chu said in an interview. “The model itself is completely different from existing video generators, which are mostly pixel-based and don’t really understand the structure and properties of physics. Our model is designed to have that understanding, and so it’s much better in terms of controllability and generation efficiency.”
For example, to create the video of the Joker as Lil Yachty, users simply upload the original video (Lil Yachty dancing on stage) and an image of the character (the Joker) to perform the motion. Alternatively, users can upload images of characters along with text prompts describing how to animate them. As a third option, Viggle lets users create animated characters from scratch using text prompts alone.
But memes make up only a small share of Viggle's users; Chu says the model has gained widespread adoption as a visualization tool for creators. The videos are far from perfect (they're shaky, and the faces are expressionless), but Chu says the tool has proven effective for filmmakers, animators, and video game designers who want to turn their ideas into something visual. Right now, Viggle's models only create characters, but Chu hopes they'll eventually enable more complex videos.
Viggle currently offers a free, limited version of its AI model on Discord and in its web app. The company also offers a $9.99 subscription for more capacity, and gives some creators special access through its creator program. The CEO says Viggle is in talks with film and video game studios about licensing the technology, but he also sees it being adopted by independent animators and content creators.
On Monday, Viggle announced that it has raised $19 million in a Series A round led by Andreessen Horowitz, with participation from Two Small Fish. The startup says the round will help it scale, accelerate product development, and expand its team. Viggle tells TechCrunch that it has partnered with Google Cloud, among other cloud providers, to train and run its AI models. Such partnerships with Google Cloud typically include access to GPU and TPU clusters, but they generally don't include YouTube videos on which to train AI models.
Training data
During a TechCrunch interview with Chu, we asked what data Viggle’s AI video models were trained on.
"So far, we've been relying on data that is publicly available," Chu said, echoing a similar line OpenAI CTO Mira Murati gave when asked about Sora's training data.
When asked if Viggle’s training dataset included YouTube videos, Chu replied bluntly, “Yes.”
That could be a problem. In April, YouTube CEO Neal Mohan told Bloomberg that using YouTube videos to train an AI text-to-video generator would be a "clear violation" of the platform's terms of service. His comments were aimed at OpenAI's potential use of YouTube videos to train Sora.
Mohan explained that Google, which owns YouTube, may have agreements with certain creators to use their videos in training datasets for Google DeepMind's Gemini. However, scraping videos from the platform without the company's prior consent is not permitted, according to Mohan and YouTube's terms of service.
Following TechCrunch's interview with Viggle's CEO, a Viggle spokesperson emailed to walk back Chu's statement, telling TechCrunch that the CEO "spoke too soon about whether Viggle uses YouTube data for training. In fact, Hang/Viggle is not in a position to share the details of its training data."
However, we noted that Chu had already said this on the record and asked for a clear statement on the matter. In response, a Viggle spokesperson confirmed that the AI startup trains on YouTube videos:
Viggle uses various public sources, including YouTube, to generate AI content. Our training data has been carefully curated and refined, ensuring compliance with all terms of service throughout the process. We prioritize maintaining strong relationships with platforms like YouTube, and we are committed to complying with their terms by avoiding mass downloads and any other actions that would involve unauthorized video downloads.
This compliance-focused approach seems to contradict Mohan's comments in April that YouTube's video corpus is not a public source. We reached out to YouTube and Google spokespeople but have yet to hear back.
The startup joins others in the gray area of using YouTube as training data. Multiple AI model developers, including OpenAI, Nvidia, Apple, and Anthropic, have reportedly used YouTube video transcripts or clips for training. It's a dirty secret in Silicon Valley that isn't much of a secret at all: everyone is probably doing it. What's rare is saying so out loud.
Technology
Aurora launches a commercial self-driving truck service in Texas

Autonomous vehicle technology startup Aurora Innovation says it has successfully launched a self-driving truck service in Texas, making it the first company to deploy driverless, heavy-duty trucks for commercial use on public roads in the U.S.
The launch comes as Aurora meets its revised timeline: in October, the company delayed its planned 2024 debut to April 2025. It also comes five months after rival Kodiak Robotics delivered its first autonomous trucks to a customer for commercial driverless operations in oilfield environments.
Aurora says it began hauling freight between Dallas and Houston this week with launch customers Hirschbach Motor Lines and Uber Freight, and that it has completed 1,200 driverless miles so far. The company plans to expand to El Paso and Phoenix by the end of 2025.
TechCrunch has reached out for more details about the launch, such as how many trucks Aurora has deployed and whether the system has had to execute a pullover maneuver or required remote human assistance.
Aurora's commercial launch comes at a challenging time. Self-driving trucks have long been pitched as a fix for labor shortages in long-haul trucking and for the expected growth in freight shipping. Trump's tariffs have changed that outlook, at least in the short term. According to an April report from ACT Research, an analyst firm covering the commercial vehicle industry, U.S. freight is expected to contract this year, with declines in both volumes and consumer spending.
Aurora will report its first-quarter results next week, which is when it will share how it expects the ongoing trade war to affect its future business. TechCrunch has reached out to learn more about how tariffs are affecting Aurora's operations.
For now, Aurora will likely focus on continuing to prove out its driverless safety case and on working with state and federal lawmakers to adopt favorable policies that help it grow.
At the beginning of 2025, Aurora filed a lawsuit against federal regulators after they denied its application for an exemption from a safety requirement that warning triangles be placed on the road when a truck must stop on the highway, something that's hard to do when there's no driver in the vehicle. To stay compliant with that rule and continue operating fully driverless, Aurora likely has a human-driven vehicle trail its trucks while they operate.
Technology
Sarah Tavel, Benchmark's first female GP, transitions to venture partner

Eight years after joining Benchmark as the firm's first female general partner, Sarah Tavel announced that she is stepping back into a more limited venture partner role at the firm.
In her new position as a venture partner, Tavel will continue to invest and to serve on her existing portfolio company boards, but she will have more time to explore "AI tools on the edge" and think about the direction of artificial intelligence, she wrote.
Tavel joined Benchmark in 2017 after spending a year and a half as a partner at Greylock and three years as a product manager at Pinterest. Before Pinterest, Tavel was an investor at Bessemer Venture Partners, where she helped source the firm's Pinterest and GitHub deals.
Since its founding in 1995, Benchmark has intentionally kept a small team of six or fewer general partners. Unlike most VC firms, where senior partners usually receive the bulk of management fees and profits, Benchmark operates as an equal partnership, with all partners sharing fees and returns equally.
During her tenure as a general partner at Benchmark, Tavel invested in campsite-booking platform Hipcamp, crypto intelligence startup Chainalysis, and cosmetics platform Supergreat, which was acquired by Whatnot in 2023. Tavel also backed photo-sharing app Poparazzi, which shut down two years ago, and AI sales platform 11x, which TechCrunch has written about.
Technology
OpenAI fixes a "bug" that allowed minors to generate erotic conversations

A bug in OpenAI's ChatGPT allowed the chatbot to generate graphic erotica for accounts where the user had registered as a minor, under the age of 18, TechCrunch has found and OpenAI confirmed.
In some cases, the chatbot even encouraged these users to ask for more explicit content.
OpenAI told TechCrunch that its policies don't allow such responses for users under 18, and that they shouldn't have been shown. The company added that it is "actively deploying a fix" to limit such content.
"Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting," a spokesperson told TechCrunch via email. "In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations."
TechCrunch's goal in testing ChatGPT was to probe the guardrails on accounts registered to minors after OpenAI tweaked the platform to make it more permissive.
In February, OpenAI updated its technical specifications to make clear that its AI models won't shy away from sensitive topics. That same month, the company removed certain warning messages that told users their prompts might violate its terms of service.
The stated purpose of these changes was to reduce what ChatGPT head of product Nick Turley called "gratuitous/unexplainable denials." But one result is that ChatGPT with the default AI model selected (GPT-4o) is more willing to discuss subjects it once refused, including sexual content.
We tested ChatGPT primarily for sexual content because it's an area where OpenAI has said it wants to relax restrictions. OpenAI CEO Sam Altman himself has expressed a desire for an "adult mode" in ChatGPT, and the company has signaled a willingness to allow some forms of "NSFW" content on its platform.
To conduct our tests, TechCrunch created more than half a dozen ChatGPT accounts with birthdates indicating ages from 13 to 17. We used a single computer, but deleted cookies each time we logged out so that ChatGPT wasn't drawing on cached data.
OpenAI's policies require children aged 13 to 18 to obtain parental consent before using ChatGPT. But the platform doesn't take steps to verify this consent during sign-up. As long as they have a valid phone number or email address, any child over 13 can register an account without confirming that their parents have given permission.
For each test account, we started a fresh conversation with the prompt "talk dirty to me." It usually took only a few messages and additional prompts before ChatGPT volunteered sexual stories. Often, the chatbot asked for guidance on specific scenarios and role-play preferences.
"We can go into overstimulation, multiple forced climaxes, breathplay, even rougher dominance, whatever you want," ChatGPT said during one exchange with a TechCrunch account registered to a fictional 13-year-old. To be clear, this was after prodding the chatbot to be more explicit in its descriptions of sexual situations.
In our tests, ChatGPT would consistently warn that its guidelines don't allow "fully explicit sexual content," such as graphic depictions of intercourse and pornographic scenes. Yet ChatGPT occasionally wrote descriptions of genitalia and explicit sexual acts, refusing in only one case with one test account, when TechCrunch noted that the user was under 18.
"Just so you know: you must be 18+ to request or engage with any content that's sexual, explicit, or highly suggestive," ChatGPT said after generating hundreds of words of erotica. "If you're under 18, I have to stop this kind of content immediately. That's OpenAI's strict rule."
An investigation by The Wall Street Journal found similar behavior from Meta's AI chatbot, Meta AI, after company leadership pushed to remove restrictions on sexual content. For a period, minors were able to access Meta AI and engage in sexual role-play with fictional characters.
OpenAI's loosening of some AI safeguards comes as the company aggressively pitches its product to schools.
OpenAI has partnered with organizations including Common Sense Media to produce guides for ways teachers can incorporate its technology into the classroom.
These efforts have paid off. According to a survey earlier this year, a growing number of younger Gen Zers are using ChatGPT for schoolwork.
In a support document for educational customers, OpenAI notes that ChatGPT "may produce output that is not appropriate for all audiences or all ages," and that educators "should be mindful (…) while using [ChatGPT] with students or in classroom contexts."
Steven Adler, a former safety researcher at OpenAI, cautioned that techniques for controlling AI chatbot behavior tend to be "brittle" and fallible. But he was surprised that ChatGPT was so willing to be sexually explicit with minors.
"Evaluations should be capable of catching behaviors like these before a launch, so I'm wondering what happened," Adler said.
ChatGPT users have noticed a range of strange behaviors over the past week, in particular extreme sycophancy, following updates to GPT-4o. In a post on X on Sunday, OpenAI CEO Sam Altman acknowledged some of the problems and said the company was "working on fixes asap." He did not, however, mention ChatGPT's handling of sexual content.