Technology
The creators of a short video using Sora technology explain the strengths and limitations of AI-generated videos

OpenAI’s video generation tool Sora surprised the AI community in February with smooth, realistic video that appears to be well ahead of the competition. But the carefully choreographed debut left out a lot of details, details that have since been filled in by a filmmaker given early access to create a short film starring Sora.
Shy Kids is a Toronto-based digital production team that was selected by OpenAI as one of a handful to produce short films, essentially for OpenAI promotional purposes, though they were given considerable creative freedom in creating “Air Head.” In an interview with visual effects news outlet fxguide, post-production artist Patrick Cederberg described “actually using Sora” as part of his work.
Perhaps the most significant takeaway for most is simply this: while OpenAI’s post highlighting the shorts lets the reader assume they emerged more or less fully formed from Sora, the reality is that these were professional productions, complete with robust storyboarding, editing, color correction, and post work like rotoscoping and visual effects. Just as Apple says “shot on iPhone” but doesn’t show the studio setup, professional lighting, and color work after the fact, the Sora post only talks about what it lets people do, not how they actually did it.
Cederberg’s interview is interesting and quite non-technical, so if you’re at all curious, head over to fxguide and read it. But here are a few interesting notes about using Sora that tell us that, impressive as it is, the model is perhaps less of a giant leap forward than we thought.
Control is still the most desirable and most elusive thing at this point. … The closest we could get was just being hyper-descriptive in our prompts. Explaining wardrobe for characters, as well as the type of balloon, was our way of ensuring consistency, because shot to shot / generation to generation, there isn’t yet a feature set in place for full control over consistency.
In other words, things that are simple in traditional filmmaking, like choosing the color of a character’s clothing, require elaborate workarounds and checks in a generative system, because each shot is created independently of the others. That could change, of course, but it is certainly far more labor-intensive at the moment.
Sora’s outputs also had to be watched for unwanted elements: Cederberg described how the model routinely generated a face on the balloon the main character has for a head, or a string hanging down the front. These had to be removed in post, another time-consuming process, if a prompt couldn’t be crafted to exclude them.
Precise timing and movements of characters or the camera aren’t really possible: “There’s a little bit of temporal control about where these different actions are happening in a given generation, but it’s not precise… it’s kind of a shot in the dark,” Cederberg said.
For example, timing a gesture like a wave is a very approximate, suggestion-driven process, unlike manual animation. And a shot that looks like a pan upward along a character’s body may or may not reflect what the filmmaker wanted, so in this case the team rendered the shot in vertical orientation and cropped it in post. The generated clips also often played in slow motion for no particular reason.
An example of a shot as it came out of Sora and how it ended up in the short. Image credits: Shy Kids
In fact, the model’s handling of everyday film language like “panning right” and “tracking shot” was generally inconsistent, which the team found quite surprising, Cederberg said.
“The researchers weren’t really thinking like filmmakers before they approached artists to play with the tool,” he said.
As a result, the team performed hundreds of generations, each 10 to 20 seconds long, and ended up using only a handful. Cederberg estimated the ratio at 300:1, but of course we would probably all be surprised at the ratio on an ordinary shoot.
The team actually made a little behind-the-scenes video explaining some of the issues they ran into, if you’re curious. As with much AI-adjacent content, the comments are quite critical of the whole endeavor, though not as vituperative as the AI-assisted ad we saw pilloried recently.
One last interesting wrinkle concerns copyright: if you ask Sora to give you a “Star Wars” clip, it will refuse. And if you try to get around it with “robed man with a laser sword on a retro-futuristic spaceship,” it will also refuse, because by some mechanism it recognizes what you are trying to do. It also refused to do an “Aronofsky-type shot” or a “Hitchcock zoom.”
On one hand, that makes perfect sense. But it does prompt the question: if Sora knows what these are, does that mean the model was trained on that content, the better to recognize that it is infringing? OpenAI, which keeps its training data close to the vest, to the point of absurdity, as in CTO Mira Murati’s interview with Joanna Stern, will almost certainly never tell us.
As for Sora and its use in filmmaking, it is clearly a powerful and useful tool in its place, but its place is not “creating films out of whole cloth.” Yet. As another villain famously said, “that comes later.”
One of Google’s latest AI models scores worse on safety

A recently released Google AI model scores worse on certain safety tests than its predecessor, according to the company’s internal benchmarking.
In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses 4.1% and 9.6%, respectively.
Text-to-text safety measures how often a model violates Google’s guidelines given a prompt, while image-to-text safety assesses how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.
In an emailed statement, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”
These surprising benchmark results come as AI companies push to make their models more permissive, in other words, less likely to refuse to respond to controversial or sensitive subjects. With its latest crop of Llama models, Meta said it tuned the models not to endorse “some views over others” and to reply to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to avoid taking an editorial stance and to offer multiple perspectives on contentious topics.
Sometimes those permissiveness efforts have backfired. TechCrunch reported Monday that OpenAI’s default ChatGPT model allowed minors to generate erotic conversations. OpenAI blamed the behavior on a “bug.”
According to Google’s technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims the regressions can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked.
“Naturally, there is tension between (instruction following) on sensitive topics and safety policy violations, which is reflected across our evaluations,” reads the report.
Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch’s testing of the model via the AI platform OpenRouter found that it will uncomplainingly write essays in support of replacing human judges with AI, weakening due process protections in the U.S., and implementing widespread warrantless government surveillance programs.
Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google gives in its technical report demonstrate the need for more transparency in model testing.
“There’s a trade-off between instruction-following and policy-following, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google’s latest Flash model complies with instructions more while also violating policies more. Google doesn’t provide much detail on the specific cases in which policies were violated, although it says they are not severe. Without knowing more, it’s hard for independent analysts to know whether there’s a problem.”
Google has come under fire before for its model safety reporting practices.
It took the company weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report was finally published, it initially omitted key safety testing details.
On Monday, Google released a more detailed report with additional safety information.
Aurora launches commercial self-driving truck service in Texas

Self-driving vehicle technology startup Aurora Innovation says it has successfully launched a commercial self-driving truck service in Texas, making it the first company to deploy driverless, heavy-duty trucks for commercial use on public roads in the U.S.
The launch comes as Aurora hits its deadline: in October, the company pushed its planned 2024 debut to April 2025. It also comes five months after rival Kodiak Robotics delivered its first driverless trucks to a customer for commercial operations in off-road environments.
Aurora says it began hauling freight this week between Dallas and Houston with launch customers Hirschbach Motor Lines and Uber Freight, and that it has completed 1,200 driverless miles so far. The company plans to expand to El Paso and Phoenix by the end of 2025.
TechCrunch has reached out for more details about the launch, for instance how many vehicles Aurora has deployed and whether the system has had to execute a pullover maneuver or required remote human assistance.
Aurora’s commercial launch comes at a challenging time. Self-driving trucks have long been pitched on the need for their technology due to labor shortages in long-haul trucking and an expected rise in freight shipping. Trump’s tariffs have changed that outlook, at least in the short term. According to an April report from ACT Research, an analyst firm covering the commercial vehicle industry, freight is expected to decline this year in the U.S. amid drops in volume and consumer spending.
Aurora will report its first-quarter results next week, which is when it will share how it expects the current trade war to affect its future business. TechCrunch has reached out to learn more about how the tariffs are affecting Aurora’s operations.
For now, Aurora will likely focus on continuing to prove out its driverless safety case and on working with state and federal lawmakers to adopt favorable policies that help it grow.
At the start of 2025, Aurora filed a lawsuit against federal regulators after its petition was denied for an exemption from a safety requirement to place warning triangles on the road when a truck has to stop on the highway, something that is difficult to do when there is no driver in the vehicle. To maintain compliance with the rule and continue to operate fully driverless, Aurora likely has a human-driven vehicle trailing its trucks while they operate.
Sarah Tavel, Benchmark’s first female GP, transitions to venture partner

Eight years after joining Benchmark as the firm’s first female general partner, Sarah Tavel has announced that she is moving to a more limited role at the venture firm.
In her new position as a venture partner, Tavel will continue to invest and serve on her existing portfolio companies’ boards, but she will have more time to explore “AI tools on the edge” and think about where artificial intelligence is heading, she wrote.
Tavel joined Benchmark in 2017 after spending a year and a half as a partner at Greylock and three years as a product manager at Pinterest. Before Pinterest, Tavel was an investor at Bessemer Venture Partners, where she helped source the firm’s investments in Pinterest and GitHub.
Since its founding in 1995, Benchmark has deliberately maintained a small team of six or fewer general partners. Unlike most VC firms, where senior partners typically receive the majority of management fees and profits, Benchmark operates as an equal partnership, with all partners sharing fees and returns equally.
During her tenure as a Benchmark general partner, Tavel invested in campsite marketplace Hipcamp, blockchain intelligence startup Chainalysis, and cosmetics platform Supergreat, which was acquired by Whatnot in 2023. Tavel also backed photo-sharing app Paparazzi, which shut down two years ago, and the AI sales platform 11x, which TechCrunch has covered.