
Can you hear me now? AI-coustics fights noisy audio with generative AI


Noisy recordings of interviews and speeches are a sound engineer's nightmare. But one German startup hopes to solve the problem with a unique technical approach that uses generative AI to improve the clarity of voices in video.

Today, AI-coustics came out of stealth with €1.9 million in funding. According to co-founder and CEO Fabian Seipel, AI-coustics' technology goes beyond standard noise suppression and works across any device and speaker.

“Our core mission is to ensure that every digital interaction, whether on a conference call, a consumer device, or a regular video on social media, is as clear as a professional studio broadcast,” Seipel told TechCrunch in an interview.

Seipel, an audio engineer by training, founded AI-coustics in 2021 together with Corvin Jaedicke, a lecturer in machine learning at the Technical University of Berlin. Seipel and Jaedicke met while studying audio technology at TU Berlin, where they often encountered poor sound quality in the online courses and tutorials they had to take.

“We are driven by a personal mission to address the pervasive challenge of poor audio quality in digital communications,” said Seipel. “Although my hearing is somewhat impaired by music production in my early 20s, I have always struggled with online content and lectures, which led us to work primarily on speech quality and speech intelligibility.”

The market for software that uses AI to suppress noise and enhance voices is already quite robust. AI-coustics' rivals include Insoundz, which uses generative AI to enhance streamed and pre-recorded speech clips, and Veed.io, a video editing suite with tools to remove background noise from clips.

But Seipel says AI-coustics takes a unique approach to developing the AI mechanisms that actually do the noise-reduction work.

The startup uses a model trained on speech samples recorded in its studio in Berlin, AI-coustics' hometown. People are paid to record the samples – Seipel wouldn't say how many – which are then added to a data set used to train AI-coustics' noise-reduction model.

“We have developed a unique approach to simulating audio artifacts and issues – e.g. noise, reverberation, compression, band-limited microphones, distortion, clipping, etc. – during the training process,” Seipel said.
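Seipel didn't go into implementation detail, but the degradation pipeline he describes is a common data-augmentation pattern in speech enhancement: take clean studio recordings and synthetically corrupt them so a model can learn to map damaged audio back to the clean original. The Python sketch below is a minimal illustration of that idea, assuming only NumPy; the function names and parameter ranges are hypothetical and are not AI-coustics' actual training code.

```python
import numpy as np

def add_noise(clean: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix white noise into a clean waveform at a target signal-to-noise ratio."""
    noise = np.random.randn(len(clean))
    signal_power = np.mean(clean ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise *= np.sqrt(noise_power / np.mean(noise ** 2))
    return clean + noise

def hard_clip(waveform: np.ndarray, threshold: float) -> np.ndarray:
    """Simulate an overloaded microphone by clipping peaks above a threshold."""
    return np.clip(waveform, -threshold, threshold)

def band_limit(waveform: np.ndarray, kernel_size: int) -> np.ndarray:
    """Crudely simulate a band-limited microphone with a moving-average low-pass filter."""
    kernel = np.ones(kernel_size) / kernel_size
    return np.convolve(waveform, kernel, mode="same")

def degrade(clean: np.ndarray) -> np.ndarray:
    """Apply a random chain of artifacts, yielding one half of a (degraded, clean) training pair."""
    degraded = add_noise(clean, snr_db=np.random.uniform(0, 20))
    if np.random.rand() < 0.5:
        degraded = hard_clip(degraded, threshold=np.random.uniform(0.3, 0.8))
    if np.random.rand() < 0.5:
        degraded = band_limit(degraded, kernel_size=np.random.randint(3, 15))
    return degraded
```

A model trained on enough of these pairs learns to invert the corruption, which is what lets it clean up recordings it has never heard before.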

Some contributors may take issue with AI-coustics' one-time compensation scheme, given that the model the startup is training could prove quite lucrative in the long run. (There is a healthy debate about whether the creators of training data for AI models deserve to be compensated for their contributions.) But perhaps the bigger and more immediate problem is bias.

It is well established that speech recognition algorithms can develop biases – biases that ultimately harm users. A study published in The Proceedings of the National Academy of Sciences found that speech recognition systems from leading firms were twice as likely to incorrectly transcribe audio from Black speakers as from white speakers.

To combat this, Seipel says AI-coustics focuses on recruiting "diverse" speech-sample contributors. He added: "Size and diversity are key to eliminating bias and ensuring the technology works across languages, speaker identities, ages, accents and genders."

It wasn't the most scientific test, but I submitted three video clips – an interview with an 18th-century farmer, a car driving demonstration, and a protest related to the Israeli-Palestinian conflict – to the AI-coustics platform to see how well it handled each of them. AI-coustics did indeed deliver on its promise of improved clarity; to my ears, the processed clips had significantly less ambient noise drowning out the speakers.

Here's the clip of the 18th-century farmer before processing:


And after:

Seipel sees AI-coustics' technology being used to enhance both real-time and recorded speech, and perhaps even being built into devices such as soundbars, smartphones and headphones to automatically boost voice clarity. Currently, AI-coustics offers a web application and an API for audio and video post-processing, as well as an SDK that lets the AI-coustics platform be integrated into existing workflows, applications and hardware.
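The article doesn't document that API, so the endpoint, authentication scheme and response handling in the sketch below are assumptions made purely for illustration – roughly what a post-processing call to such a service might look like, not AI-coustics' actual interface.

```python
import requests

# Hypothetical endpoint and key; not AI-coustics' documented API.
API_URL = "https://api.example.com/v1/enhance"
API_KEY = "YOUR_API_KEY"

def enhance_audio(input_path: str, output_path: str) -> None:
    """Upload a noisy recording and save the enhanced audio returned by the (hypothetical) service."""
    with open(input_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},
        )
    response.raise_for_status()
    with open(output_path, "wb") as out:
        out.write(response.content)

enhance_audio("interview_raw.wav", "interview_enhanced.wav")
```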

Seipel says AI-coustics – which makes money through a mix of subscriptions, on-demand pricing and licensing – currently has five enterprise customers and 20,000 users (though not all of them are paying). The plan for the next few months includes expanding the company's four-person team and refining its core speech-enhancement model.

“Prior to our initial investment, AI-coustics was operating quite leanly and at a low burn rate to weather the headwinds in the VC investment market,” Seipel said. “AI-coustics now has a significant network of investors and mentors in Germany and the UK who provide advice. A strong technology base and the ability to serve different markets with the same database and core technology gives the company flexibility and the ability to change less.”

When asked whether audio mastering technologies such as AI-coustics' could steal jobs, as some experts fear, Seipel pointed to the potential of AI to speed up time-consuming tasks that currently fall to audio engineers.

“A content creation studio or broadcast manager can save time and money by automating parts of the audio production process using artificial intelligence while maintaining the highest speech quality,” he said. “Speech quality and intelligibility remain a vexing issue for nearly every consumer or professional device, as well as in content creation and consumption. Any application that records, processes or transmits speech can potentially benefit from our technology.”

The financing came in the form of an equity and debt tranche from Connect Ventures, Inovia Capital, FOV Ventures and Ableton CFO Jan Bohl.

This article was originally published on techcrunch.com


TikTok will automatically tag AI-generated content created on platforms such as DALL·E 3


A laptop keyboard and TikTok logo displayed on a phone screen are seen in this multiple exposure illustration photo taken in Poland on March 17, 2024. (Photo by Jakub Porzycki/NurPhoto via Getty Images)

TikTok announced Thursday that it is beginning to automatically label AI-generated content that was created on other platforms. With this change, if a creator posts content made with a service like OpenAI's DALL·E 3 to TikTok, it will automatically carry an "AI-generated" label informing viewers that it was created with AI.

The social video platform does this by implementing Content Credentials, a technology developed by the Coalition for Content Provenance and Authenticity (C2PA), co-founded by Microsoft and Adobe. Content credentials attach specific metadata to content, which TikTok can then use to immediately recognize and mark AI-generated content.

As a result, TikTok will begin automatically labeling AI-generated content that is uploaded to the platform with Content Credentials attached. The change goes live on Thursday and will roll out to all users worldwide in the coming weeks.
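To make the mechanism concrete: a Content Credentials manifest carries signed assertions about how a piece of media was produced, and a platform can inspect those assertions to decide whether to attach an "AI-generated" label. The Python sketch below is a heavily simplified, conceptual illustration; real C2PA manifests are signed binary structures read with dedicated tooling, and the field names here are illustrative rather than TikTok's implementation or an official C2PA SDK call.

```python
from typing import Any

def is_ai_generated(manifest: dict[str, Any]) -> bool:
    """Return True if a (simplified) Content Credentials manifest declares AI generation."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            # IPTC's "trainedAlgorithmicMedia" digital source type denotes generative-AI output.
            if action.get("digitalSourceType", "").endswith("trainedAlgorithmicMedia"):
                return True
    return False

# A toy manifest of the kind an image generator might attach (heavily simplified).
manifest = {
    "claim_generator": "SomeImageGenerator/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

print(is_ai_generated(manifest))  # True -> the platform would attach an "AI-generated" label
```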

While TikTok already tags content created using TikTok's own AI effects, it will now also tag content created on other platforms that have implemented Content Credentials, such as OpenAI's DALL·E 3 and Microsoft's Bing Image Creator. Microsoft, Adobe and OpenAI already use Content Credentials, and Google has pledged to support them as well.

Image credits: TikTok

While TikTok already requires creators to disclose when they post content that was created or enhanced with AI, the company told TechCrunch that it sees the new change as an additional way to ensure AI-generated content is labeled, while also taking some of the burden off creators.

In the coming months, TikTok will also begin attaching Content Credentials to AI-generated content created on the platform with TikTok's AI effects. The Content Credentials metadata will include details about where and how the AI-generated content was created or edited, and it will remain attached to the content when it is downloaded. Other platforms that support Content Credentials will then be able to automatically label that content as AI-generated.

So while TikTok has committed to labeling AI content on its own service, it is also trying to help ensure that AI content created on TikTok is properly labeled when it is published on another platform.

“AI-generated content is an incredible way to be creative, but transparency for viewers is crucial,” Adam Presser, director of operations and trust and safety at TikTok, said in a press release. “By collaborating with others to flag content across platforms, we make it easier for creators to responsibly view AI-generated content while continuing to deter harmful or misleading AIGC, which is prohibited on TikTok.”

TikTok touts that it is the first video-sharing platform to implement Content Credentials. It is worth noting that Meta announced back in February that it plans to use the C2PA standard to add provenance information to content.

In Thursday's announcement, TikTok said it is committed to combating the use of deceptive AI in elections and that its policies firmly prohibit misleading AI-generated content – whether it is labeled or not.

This article was originally published on techcrunch.com

Serena Williams’ Husband Spends Cinco De Mayo With Early Reddit Investor Snoop Dogg, ‘Never Forget These Days’


Serena Williams, Snoop Dogg, Alexis Ohanian


Serena Williams' tech entrepreneur husband, Alexis Ohanian, spent the Cinco De Mayo holiday with one of his wife's hometown heroes, who was also one of the early investors in his Reddit platform.

On Sunday, May 5, Ohanian was out celebrating a "beautiful day" in Los Angeles with one of the city's most prominent figures. In a photo shared on Twitter, the Reddit co-founder praised Snoop Dogg's early support for his billion-dollar company.

"Beautiful day in Los Angeles. I appreciate this city very much. Have to meet lots of friends," wrote Ohanian.

"Some of you might know that @SnoopDogg was one of our first investors in @reddit during our spin-off and pivot. Few people wanted to invest in us back then. Snoop did. And you never forget those days."

Launched in 2005, Reddit serves as an online community where people from all over the world can connect and converse about almost any topic. Ohanian co-founded the company with Steve Huffman while they were both attending the University of Virginia. The company began to grow after joining forces with Aaron Swartz of Infogami, who rewrote Reddit's codebase.

Just a year after launching, Ohanian and Huffman sold Reddit to Condé Nast for a reported $10 million to $20 million. As user numbers and valuations continued to grow, investment flowed in. By 2014, when Reddit was valued at a reported $500 million, Snoop Dogg had joined a group of investors in a $50 million funding round for the link-sharing and discussion site.

It was the investment group's organizer who approached Snoop because of his celebrity status and his recognition on the site's "Ask Me Anything" boards. Snoop was also high on Reddit's wish list of investors it wanted to bring into the company.

"He's like a Reddit superstar," Sam Altman, CEO and co-founder of OpenAI, said of Snoop Dogg. "The site loves him. Snoop was high on the list. We asked him to invest and he agreed."

The "Gin and Juice" rapper doesn't often discuss his early investment in Reddit, but he has previously shared how he got in.

"I got in. You understand me? I didn't want to tell everybody, but I'm a part of Reddit too," Snoop Dogg told Ohanian on GGN News in 2017.

Reddit is currently valued at $10.68 billion, with a share price of $67.67. This is a drastic jump from the $5 billion valuation in 2021.


This article was originally published on www.blackenterprise.com

Apple’s ‘Crush’ ad is disgusting


You can generally rely on Apple for clever and well-produced ads, but it missed the mark with its latest, which shows a tower of creative tools and analog objects literally crushed into the shape of an iPad. Many people, including me, had a negative, visceral reaction to it, and we should talk about why.

It isn't simply that we're watching things get destroyed. There are countless video channels dedicated to crushing, burning, exploding and generally destroying everyday objects. And we all know this sort of thing happens daily at transfer stations and recycling centers. So that's not it.

And it isn't that the objects themselves are so valuable. Sure, a piano is worth something. But we see them blown up in action movies all the time and we don't feel bad about it. I love pianos, but that doesn't mean the world can't do without a few old, unplayed ones. Same with the rest: it's mostly junk you could buy on Craigslist for a few bucks or pick up at the dump for free. (Maybe not the mixing station.)

The problem isn't with the video itself, which, to the credit of the people who shot and edited it, is actually quite well made. The problem isn't the medium but the message.

We all understand the ad's intended point: you can do it all on an iPad. Great. Of course, we could do all of it on the last iPad too, but this one is thinner (which nobody asked for, by the way; now the old cases don't fit) and better by some nominal percentage.

But what we all understand, apparently unlike Apple's advertising executives, is that the things being crushed here represent the material, the tangible, the real. And the real has value. Value that Apple clearly believes it can crush into yet another black mirror.

That belief is disgusting to me. And apparently to a lot of others as well.

The destruction of a piano in a music video or a MythBusters episode is, in its way, an act of creation. Even destroying a piano (or monitor, paint can, or drum kit) for no reason at all is at worst wasteful!

But Apple destroys these things to tell you that you don't need them – all you need is the company's thin little device, which can do all that and more, without annoying things like strings, keys, buttons, brushes or mixing stations.

We are all dealing with the consequences of media's mass shift toward digital and always-online technology. In some ways it's genuinely good! I think technology has played an enormous part in that.

But from another, equally valid point of view, the digital transformation feels harmful and forced – a technotopian, billionaire-approved vision of a future in which every child has an artificial intelligence best friend and can learn to play a virtual guitar on a cold glass screen.

Does your kid like music? They don't need a harp; throw it in the trash. An iPad is enough. Do they like painting? Here's an Apple Pencil, as good as pens, watercolors, oils! Books? Don't make us laugh! Crush them. Paper is worthless; use another screen. In fact, why not read in Apple Vision Pro, on even faker paper?

Apple seems to forget that it is the things of the real world – the very things Apple just crushed – that give the fake versions of those things their value in the first place.

A virtual guitar is not going to replace a real guitar; that's like thinking a book can replace its author.

That doesn't mean we can't value both for different reasons. But Apple's ad sends the message that the future it wants doesn't include bottles of paint, knobs to turn, sculptures, physical instruments or paper books. Of course, this is the future it has been building for us for years; it just hasn't put it so bluntly before.

When someone tells you who they are, believe them. Apple is being very clear about what it is and what it thinks the future should be. If that future doesn't disgust you, you're welcome to it.

This article was originally published on techcrunch.com