
At the AI Film Festival, humanity triumphed over technology

In the third episode of “Creative Dialogues,” an interview series produced by the film division of the generative AI startup Runway, multimedia artist Claire Hentschker expresses her fear that AI will commodify the artistic process to the point where art becomes homogenized, regressing into a kind of derivative sameness.

“Are you getting a narrower and narrower average of existing things?” she asks. “And then – as you average it – everything will just be a blur?”

These are the questions I asked myself Wednesday during a screening of the top 10 finalists of Runway’s second annual AI Film Festival, which are available to watch on Runway’s website as of this morning.

This year, Runway held two premieres, one in Los Angeles and one in New York. I attended the New York screening at the Metrograph, a theater known for its arthouse and avant-garde programming.

“Pounamu” – the story of a young bird exploring the wider world.
Image credits: Samuel Schrag

I’m happy to report that AI isn’t taking over filmmaking… at least not yet. Rather, the director’s trained eye, the human touch, makes a clear difference in how effective an “AI film” is.

All the films submitted to the festival featured AI in some form, including AI-generated backgrounds and animations, synthetic narratives, and bullet time-style visual effects. None of it appeared to be at the level of what cutting-edge tools like OpenAI’s Sora can produce, but that was to be expected, given that the majority of submissions were finalized earlier in the year.

Indeed, it was obvious, sometimes painfully so, which parts of the films were the product of an AI model rather than an actor, cinematographer or animator. Even otherwise strong scripts were let down at times by unconvincing generative AI effects.

Take, for instance, “Dear Mom” by Johans Saldana Guadalupe and Katie Luo, which, in the filmmakers’ own words, tells the story of a daughter’s loving relationship with her mother. It’s a tearjerker. But a Los Angeles freeway scene, with all the trademark weirdness of AI-generated video (e.g., warped cars, strange physics), broke the spell for me.

A scene from the movie “Dear Mom”.
Image credits: Johans Saldana Guadalupe and Katie Luo

The limitations of today’s AI tools seemed to hold back some of the films.

As my colleague Devin Coldewey recently wrote, control over generative models, especially those that generate video, remains elusive. Things that are simple in traditional filmmaking, such as choosing the color of a character’s clothing, require workarounds because each shot is generated independently of the others. Sometimes even the workarounds don’t help.

The resulting incoherence was on display at the festival, where several films amounted to little more than loosely connected vignettes, tied together by narration and a soundtrack. Carlo De Togni and Elena Sparacino’s “L’éveil à la création” showed how dull this formula can be, with slideshow-like transitions that would make for a better interactive storybook than a film.

Léo Cannone’s “Where do grandmothers go when they get lost?” also falls into the vignette category, but it still triumphs thanks to its earnest script (a child describing what happens to grandmothers after they die) and an exceptionally strong performance by its child star. The rest of the audience seemed to agree; the film received one of the more spirited ovations of the evening.

Giant grannies imagined by artificial intelligence.
Image credits: Leo Cannone

For me, that sums up the festival in a nutshell. Human input, rather than AI, most often made the difference. The emotion in a child actor’s voice? It resonates with you. AI-generated backgrounds? Less so.

That was certainly true of the festival’s Grand Prix winner, “Get Me Out,” which documents a Japanese man’s struggle to recover from the psychological effects of immigrating to the U.S. as a child. Filmmaker Daniel Antebi depicts the man’s panic attacks with AI-generated imagery, imagery that ultimately proved less effective than the film’s photography. The film ends with a shot of the man walking up a bridge as the streetlights along the walkway flash on one after another. It’s haunting, and beautiful, and it surely took ages to capture.

In “Get Me Out,” a man struggles with his emotions – literally.
Image credits: Daniel Antebi

It’s very possible that generative AI will someday be able to recreate scenes like that one. Perhaps cinematography will eventually be replaced by prompting, a casualty of the ever-growing datasets (albeit ones with a troubling copyright status) on which startups like Runway and OpenAI train their video-generating models.

But that day is not today.

As the screening ended and the award winners made their way to the front of the theater to have their photos taken, I couldn’t help but notice the cameraman standing in the corner, documenting the whole event. Perhaps AI will never replace some things after all, such as the humanity that we humans so deeply crave.

This article was originally published on techcrunch.com.
