Technology

Viggle Creates Controllable AI Characters for Memes and Idea Visualization

You might not know Viggle AI, but you’ve probably seen the viral memes it created. The Canadian AI startup is behind dozens of remix videos of rapper Lil Yachty bouncing around on stage at a summer music festival. In one of the videos, Lil Yachty is replaced by Joaquin Phoenix as the Joker. In another, Jesus appears to be whipping up the crowd. Users have created countless versions of this video, but one AI startup fueled the memes. And Viggle’s CEO says YouTube videos are fueling his AI models.

Viggle trained a 3D video foundation model, JST-1, to achieve a “true understanding of physics,” the company says in its press release. Viggle CEO Hang Chu says a key difference between Viggle and other AI video models is that Viggle lets users specify the movements they want their characters to make. Other AI video models often produce unrealistic character movements that don’t obey the laws of physics, but Chu says Viggle’s models are different.

“We’re basically building a new type of graphics engine, but only using neural networks,” Chu said in an interview. “The model itself is completely different from existing video generators, which are mostly pixel-based and don’t really understand the structure and properties of physics. Our model is designed to have that understanding, and so it’s much better in terms of controllability and generation efficiency.”

For example, to create the video of the Joker as Lil Yachty, users simply upload the original video (Lil Yachty dancing on stage) and an image of the character (the Joker) who should perform the motion. Alternatively, users can upload images of a character along with text prompts describing how to animate them. As a third option, Viggle lets users create animated characters from scratch using text prompts alone.

But memes make up only a small share of Viggle’s use; Chu says the model has gained widespread adoption as a visualization tool for creators. The videos are far from perfect (they’re shaky, and the faces are expressionless), but Chu says the tool has proven effective for filmmakers, animators, and video game designers turning their ideas into something visual. Right now, Viggle’s models only create characters, but Chu hopes they’ll eventually enable more complex videos.

Viggle currently offers a free, limited version of its AI model on Discord and its web app. The company also offers a $9.99 subscription for more storage and provides some creators with special access through its Creator Program. The CEO says Viggle is in talks with film and video game studios about licensing the technology, but he also sees it being adopted by independent animators and content creators.

On Monday, Viggle announced that it has raised $19 million in a Series A round led by Andreessen Horowitz, with participation from Two Small Fish. The startup says the round will help Viggle scale, speed up product development, and expand its team. Viggle tells TechCrunch that it has partnered with Google Cloud, among other cloud providers, to train and run its AI models. Such partnerships typically include access to GPU and TPU clusters, but they don’t include YouTube videos on which to train AI models.

Training data

During a TechCrunch interview with Chu, we asked what data Viggle’s AI video models were trained on.

“Until now, we have relied on publicly available data,” Chu said, echoing a similar line to the one OpenAI CTO Mira Murati gave when asked about Sora’s training data.

When asked if Viggle’s training dataset included YouTube videos, Chu replied bluntly, “Yes.”

That could be a problem. In April, YouTube CEO Neal Mohan told Bloomberg that using YouTube videos to train an AI text-to-video generator would be a “clear violation” of the platform’s terms of use. The comments were aimed at OpenAI’s potential use of YouTube videos to train Sora.

Mohan explained that Google, which owns YouTube, may have agreements with some creators to use their videos in training datasets for Google DeepMind’s Gemini. However, scraping videos from the platform without the company’s prior consent is not permitted, according to Mohan and YouTube’s Terms of Service.

Following TechCrunch’s interview with Viggle’s CEO, a Viggle spokesperson emailed to retract Chu’s statement, telling TechCrunch that the CEO “spoke too soon about whether Viggle uses YouTube data for training. In fact, Hang/Viggle is not in a position to share the details of its training data.”

However, we noted that Chu had already gone on the record and asked for a clear statement on the matter. In response, a Viggle spokesperson confirmed that the AI startup trains on YouTube videos:

Viggle uses various public sources, including YouTube, to generate AI content. Our training data has been carefully curated and refined, ensuring compliance with all terms of service throughout the process. We prioritize maintaining strong relationships with platforms like YouTube and are committed to adhering to their terms, avoiding mass downloads and any other activities that would involve unauthorized video downloads.

This approach to compliance seems to contradict Mohan’s comments from April that YouTube’s video corpus is not a public source. We reached out to YouTube and Google spokespeople but have yet to hear back.

The startup joins others in the gray area of using YouTube as training data. Multiple AI model developers, including OpenAI, Nvidia, Apple, and Anthropic, have reportedly used video transcripts or YouTube clips to train their models. It’s a not-so-secret dirty secret in Silicon Valley: everyone is probably doing it. What’s rare is saying it out loud.

This article was originally published on techcrunch.com.

Technology

Introducing the Next Wave of Startup Battlefield Judges at TechCrunch Disrupt 2024


Startup Battlefield 200 is the highlight of every Disrupt, and we can’t wait to find out which of the thousands of startups that applied will have the chance to pitch to top venture capitalists at TechCrunch Disrupt 2024. Join us at Moscone West in San Francisco October 28–30 for an epic showdown where founders will have the chance to make a major impact.

Get insight into what the judges are looking for in a winning company as they provide detailed feedback on the evaluation criteria. Don’t miss the opportunity to learn from their expert insights and discover the key characteristics that lead to startup success, only at Disrupt 2024.

We’re excited to introduce our next group of investors who will evaluate startups and dive into each pitch in an in-depth and insightful Q&A session. Stay tuned for more big names coming soon!

Alice Brooks, Partner, Khosla Ventures

Alice is a partner at Khosla Ventures, where she focuses on sustainability, food, agriculture, and manufacturing/supply chain. She has worked with multiple startups in robotics, IoT, retail, consumer goods, and STEM education, and has led mechanical, electrical, and application development teams in the US and Asia. She also founded and managed manufacturing operations in factories in China and Taiwan. Prior to KV, Alice was the founder and CEO of Roominate, a STEM education company that helps girls learn engineering concepts through play.

Mark Crane, Partner, General Catalyst

Mark Crane is a partner at General Catalyst, a venture capital firm that works with founders from seed to endurance stage to help them build companies that can stand the test of time. He focuses on sourcing and investing in later-stage opportunities such as AuthZed, Bugcrowd, Resilience, and TravelPerk. Prior to joining General Catalyst, Mark was a vice president at Cove Hill Partners in Massachusetts. Before that, he was a senior associate at JMI Equity and an associate at North Bridge Growth Equity.

Sofia Dolfe, Partner, Index Ventures

Sofia partners with founders who use their unique perspective and personal understanding of a problem to build companies that drive behavioral change, create powerful network effects, and transform entire industries, from grocery and e-commerce to financial services and healthcare. Sofia is also one of Index Ventures’ gaming leads, working with some of the best gaming companies in Europe as they create a new generation of iconic gaming titles. She spends most of her time in the Nordics but works with entrepreneurs across the continent.

Christine Esserman, Partner, Accel

Christine Esserman joined Accel in 2017 and focuses on software, internet, and mobile technology companies. Since joining Accel, Christine has helped lead Accel’s investments in Blackpoint Cyber, Linear, Merge, ThreeFlow, Bumble, Remote, Dovetail, Ethos, Guru, and Headway. Prior to joining Accel, Christine worked in product and operations roles at multiple startups. A Bay Area native, Christine graduated from the Wharton School at the University of Pennsylvania with a degree in Finance and Operations.

Haomiao Huang, Founding Partner, Matter Venture Partners

Haomiao Huang of Matter Venture Partners is a robotics researcher turned founder turned investor. He is especially passionate about companies that bring digital innovation to physical-economy businesses, with a focus on sectors such as logistics, manufacturing, and transportation, and on advanced technologies such as robotics and AI. Haomiao spent four years investing in hard tech with Wen Hsieh at Kleiner Perkins. He previously founded the smart home security startup Kuna, built autonomous cars at Caltech and, as part of his PhD research at Stanford, pioneered the aerodynamics and control of multi-rotor unmanned aerial vehicles. Kuna was part of the Y Combinator Winter 2014 cohort.

Don’t miss it!

The Startup Battlefield winner, who will walk away with a $100,000 cash prize, will be announced at Disrupt 2024, the epicenter of startups. Join 10,000 attendees to witness this breakthrough moment and see the next wave of tech innovation.

Register here and secure your spot to witness this epic battle of startups.

This article was originally published on techcrunch.com.

Technology

India Considers Easing Market Share Caps for UPI Payments Operators


PhonePe UPI being used to accept payments at a roadside sunglasses stall.

The regulator that oversees India’s popular UPI payments rail is considering relaxing a proposed market share cap for operators like Google Pay, PhonePe, and Paytm as it grapples with enforcing the restrictions, two people familiar with the matter told TechCrunch.

The National Payments Corporation of India (NPCI), which is regulated by the Indian central bank, is considering raising the market share that UPI operators can hold to more than 40%, the people said, requesting anonymity because the information is confidential. The regulator had earlier proposed a 30% market share limit to encourage competition in the space.

UPI has become the most popular way to send and receive money in India, with the system processing over 12 billion transactions a month. Walmart-backed PhonePe has about 48% market share by volume and 50% by value, while Google Pay has a 37.3% share by volume.

Once an industry heavyweight, Paytm has seen its market share fall to 7.2% from 11% late last year amid regulatory challenges.
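Taken together, those figures show why the cap level matters so much. As a quick illustrative check (not a calculation NPCI publishes), comparing each operator’s reported share by volume against the 30% and 40% thresholds:

```python
# Market shares of UPI transactions by volume, as reported above.
shares = {"PhonePe": 48.0, "Google Pay": 37.3, "Paytm": 7.2}

def over_cap(shares: dict[str, float], cap: float) -> list[str]:
    """Return operators whose share exceeds the cap, largest first."""
    return sorted((name for name, share in shares.items() if share > cap),
                  key=lambda name: -shares[name])

print(over_cap(shares, 30.0))  # ['PhonePe', 'Google Pay']
print(over_cap(shares, 40.0))  # ['PhonePe']
```

Under the original 30% proposal, both PhonePe and Google Pay would have to shed share; a 40% cap would leave only PhonePe above the line.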

According to several industry executives, raising the market share limit is likely to be a controversial move, as many UPI providers were counting on regulatory action to curb the dominance of PhonePe and Google Pay.

NPCI, which has previously declined to comment on the market share caps, didn’t respond to a request for comment on Thursday.

The regulator originally planned to implement the market share caps in January 2021 but extended the deadline to January 1, 2025. It has struggled to find a workable way to enforce the proposed caps.

The stakes are high, especially for PhonePe, India’s most valuable fintech startup, valued at $12 billion.

Sameer Nigam, co-founder and CEO of PhonePe, said last month that the startup cannot go public “if there is uncertainty on regulatory issues.”

“If you buy a share at Rs 100 and value it assuming we have 48-49% market share, there is uncertainty whether it will come down to 30% and when,” Nigam told a fintech conference last month. “We are reaching out to them (the regulator) whether they can find another way to at least address any concerns they have or tell us what the list of concerns is,” he added.

This article was originally published on techcrunch.com.

Technology

Bluesky addresses trust and safety concerns around abuse, spam, and more


Bluesky butterfly logo and Jay Graber

Social media startup Bluesky, which is building a decentralized alternative to X (formerly Twitter), provided an update Wednesday on how it’s approaching various trust and safety issues on its platform. The company is in various stages of developing and piloting a range of initiatives focused on dealing with bad actors, harassment, spam, fake accounts, video safety, and more.

To address malicious users or those who harass others, Bluesky says it’s developing new tools that will be able to detect when multiple new accounts are created and managed by the same person. This could help curb harassment when a bad actor creates several different personas to attack their victims.

Another new experiment will help detect “rude” replies and surface them to server moderators. Like Mastodon, Bluesky will support a network where self-hosters and other developers can run their own servers that connect to Bluesky’s server and others on the network. This federation capability is still in early access. In the long term, server moderators will be able to decide how they want to deal with people who post rude replies. In the meantime, Bluesky will eventually reduce the visibility of these replies in its app. Repeated rude labels on content will also lead to account-level labels and suspensions, the company says.
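Bluesky hasn’t published how this escalation works. As a purely illustrative sketch (the thresholds and label names here are invented, not Bluesky’s), a policy where repeated rude content labels escalate to account-level action could be as simple as a threshold check:

```python
from collections import Counter

# Hypothetical thresholds; Bluesky has not published its actual values.
ACCOUNT_LABEL_THRESHOLD = 3  # rude content labels before the account itself is labeled
SUSPENSION_THRESHOLD = 6     # rude content labels before suspension is considered

def escalation_for(content_labels: list[str]) -> str:
    """Map accumulated per-post labels to an account-level action."""
    rude_count = Counter(content_labels)["rude"]  # missing key counts as 0
    if rude_count >= SUSPENSION_THRESHOLD:
        return "suspend"
    if rude_count >= ACCOUNT_LABEL_THRESHOLD:
        return "label-account"
    return "none"

print(escalation_for(["rude", "spam", "rude", "rude"]))  # label-account
```

The point of the sketch is only the shape of the policy: per-post labels accumulate, and the account crosses visibility and suspension lines as they do.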

To curb the use of lists to harass others, Bluesky will remove individual users from a list if they block the list’s creator. Similar functionality was recently introduced to Starter Packs, a type of shared list that helps new users find people to follow on the platform (check out the TechCrunch Starter Pack).

Bluesky will also scan for lists with offensive names or descriptions to limit the potential for harassment via adding people to a public list with a toxic or offensive name or description. Lists that violate Bluesky’s Community Guidelines will be hidden from the app until the list owner makes changes that align with Bluesky’s policies. Users who continue to create offensive lists will also face further action, though the company didn’t share details, adding that lists are still an area of active discussion and development.

In the coming months, Bluesky also intends to move moderation reporting into its app, using notifications rather than relying on email reports.

To combat spam and other fake accounts, Bluesky is launching a pilot that will try to automatically detect when an account is fake, scamming, or spamming users. Paired with moderation, the goal is to be able to take action on accounts within “seconds of receiving a report,” the company said.

One of the more interesting developments is how Bluesky will comply with local laws while still allowing free speech. It will use geotags that allow it to hide certain content from users in a particular area in order to comply with local legal requirements.

“This allows Bluesky’s moderation service to maintain flexibility in creating spaces for free expression while also ensuring legal compliance so that Bluesky can continue to operate as a service in these geographic regions,” the corporate shared in a blog post. “This feature will be rolled out on a country-by-country basis, and we will endeavor to inform users of the source of legal requests when legally possible.”
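Bluesky describes the mechanism only at this high level. A minimal sketch of geography-scoped moderation labels, under an assumed and simplified data model (the field names and label strings are hypothetical, not Bluesky’s API), might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    # Hypothetical shape: each moderation label maps to the set of
    # country codes where the legal requirement applies ("*" = global).
    geo_labels: dict[str, set[str]] = field(default_factory=dict)

def visible_in(post: Post, country: str) -> bool:
    """Hide the post only for viewers in a country a label targets."""
    return not any(country in countries or "*" in countries
                   for countries in post.geo_labels.values())

post = Post("example", geo_labels={"legal-takedown": {"DE"}})
print(visible_in(post, "US"))  # True: visible outside the affected region
print(visible_in(post, "DE"))  # False: hidden where the law applies
```

The design choice this illustrates is the one the blog post describes: content stays up globally, and takedowns are scoped to the jurisdictions that legally require them rather than applied network-wide.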

To address potential trust and safety issues with the recently added video feature, the team is adding tools such as the ability to turn off autoplay, making sure videos are labeled, and providing the ability to report videos. The team is still evaluating what else may need to be added, which will be prioritized based on user feedback.

When it comes to abuse, the company says its general framework asks “how often something happens versus how harmful it is.” The company focuses on addressing high-impact, high-frequency issues, as well as “tracking edge cases that could result in significant harm to a few users.” The latter, while only affecting a small number of people, causes enough “ongoing harm” that Bluesky will take action to prevent the abuse, it says.

User concerns can be raised via moderation reports, emails, and mentions of the @safety.bsky.app account.

This article was originally published on techcrunch.com.