
Technology

The “Mozart of mathematics” doesn’t worry about artificial intelligence replacing math geeks – ever


Terence Tao, a UCLA professor considered “the world’s greatest living mathematician,” last month compared ChatGPT’s o1 reasoning model to a “mediocre, but not completely incompetent” graduate student, one who could answer a complex analysis problem accurately, “with a lot of hints and prodding.”

Artificial intelligence may never surpass its human teachers, he now believes, The Atlantic reports. “One of the key differences (currently) between graduate students and AI is that graduate students learn. You tell the AI that its approach is not working, it apologizes, it will perhaps correct its course temporarily, but sometimes it just reverts to what it tried before.”

The good news for math geniuses, Tao adds, is that AI and mathematicians will likely always be collaborators: rather than replacing them, AI will let them explore previously unattainable, large-scale problems. Of the future, Tao says: “You might have a project and ask, ‘What if I try this approach?’ Instead of spending hours trying to make it work, you direct a GPT to do it for you.”

This article was originally published on techcrunch.com.

Technology

Ashton Kutcher, Effie Epstein and Guy Oseary will appear at TechCrunch Disrupt 2024



Last year, Sound Ventures, the nine-year-old Beverly Hills, California-based venture capital firm led by general partners Ashton Kutcher, Guy Oseary and Effie Epstein, announced a new $265 million artificial intelligence fund that would bet on large language model companies, including OpenAI, Anthropic and Hugging Face.

In fact, the plan was to invest in just six companies, a risky but potentially lucrative bet on the team’s vision of a future in which, given the technical talent required and the capital needed to cover compute costs, the biggest winners in AI would be few in number but massive in scale.

Since then, Sound has raised more money for the same fund. It also appears to have stuck to its mission, judging by the few deals it has publicly disclosed in 2024, and we’re excited to host all three partners at TechCrunch Disrupt 2024 to discuss their strategy, along with the trends they’re tracking right now.

Kutcher has arguably moved between the worlds of acting and investing more gracefully than anyone else in Hollywood, launching his investing career in 2010 with A-Grade Investments, founded with talent manager and business partner Guy Oseary, whose early bets on Airbnb and Uber helped cement their reputations for striking the right deal at the right time.

The two, who founded Sound Ventures in 2014, brought Epstein on board a few years later to round out their skill sets. Epstein previously led global strategy at Marsh & McLennan’s subsidiary Marsh; she also served as vice president of planning and head of investor relations at iHeartMedia and earlier led business development at Clear.

Epstein also previously worked in investment banking in the energy sector, bringing a wealth of experience and contacts to the firm.

You definitely won’t want to miss this fireside chat on the Disrupt Stage, where you’ll be among the 10,000 tech leaders, startups and VCs attending Disrupt 2024. Register today to lock in your ticket price before it goes up.

This article was originally published on techcrunch.com.

Technology

OpenAI secured billions more, but there was capital left over for other startups


This week once again brought news about AI funding, along with some warnings: certain categories and stages are showing signs of overheating. Luckily, we also spotted some cool startups – literally.

The most interesting startup stories of the week


It may be hard to believe, but OpenAI is still a startup, hence its repeated first-place finish. There were other interesting stories this week, nevertheless.

Billions for AI: OpenAI raised $6.6 billion at a post-money valuation of $157 billion, and also secured a $4 billion revolving credit facility and launched a new interface. The company reportedly asked investors not to back rivals such as Anthropic and xAI, but OpenAI has not confirmed this. Meanwhile, Anthropic has hired OpenAI co-founder Durk Kingma in a remote role.

Attack of the clones: Y Combinator faced criticism for backing AI code editor PearAI, whose CEO apologized for cloning another YC-backed open-source project without proper attribution and with a “garbled” license.

Shopping, broadcast live: Live shopping app Whatnot says its annual gross merchandise volume (GMV) has surpassed $2 billion this year, meaning there’s still hope for live commerce in the US.

The most interesting fundraises this week

Series Entertainment CEO Pany Haritatos
Image credits: Series Entertainment

Some companies prefer to raise funds in stealth; others even work underwater.

Deep end: AI coding startup Poolside raised a $500 million Series B led by Bain Capital Ventures, with participation from eBay and Nvidia. The round will enable Poolside to bring 10,000 Nvidia GPUs online to train future models, CEO Jason Warner said.

Cool water: Barcelona-based immersion cooling startup Submer raised $55.5 million to attract more customers to its solution, which is already used by hyperscalers, telecom companies and other large corporations.

11x meets a16z: TechCrunch has learned that 11x.ai, a startup that builds AI-powered sales bots, has secured a Series B round of roughly $50 million led by Andreessen Horowitz.

Stealth financing: Cloud backup startup Eon has come out of stealth to reveal that it has already reached a post-money valuation of $750 million after raising three rounds of funding, including a $77 million Series B.

More stealth financing: Series, an AI-powered generative game development platform, has quietly raised a $28 million Series A round from Netflix, Dell, a16z and others.

The most interesting VC and funding news this week


Pruning season: Veteran venture capital firm CRV returned $275 million of its $500 million late-stage Select fund to investors, citing the overvaluation of mature startups. It follows a similar move by India’s Peak XV, which reduced its fund size and fees amid signs of overheating.

Launching: Former Y Combinator Continuity head and Twitter COO Ali Rowghani is launching Maxq, a new venture capital firm with a $250 million debut fund.

Bullish on NY: Index Ventures is looking for another New York investor and plans to add three or four new people to its local team over the next 12 months, partner Shardul Shah told TechCrunch.

Last but not least


Speaking with TechCrunch Global Managing Editor Matt Rosoff ahead of this year’s Startup Battlefield 200 at TechCrunch Disrupt, New York tech investor and serial entrepreneur Kevin Ryan shared his thoughts on when, and whether, founders should sell their company. His take: more of them should.

This article was originally published on techcrunch.com.

Technology

Meta’s Movie Gen model produces realistic video with sound, so we can finally have infinite Moo Deng


No one really knows yet what generative video models are good for, but that hasn’t stopped companies like Runway, OpenAI and Meta from investing millions in their development. Meta’s latest is called Movie Gen, and true to its name, it turns text prompts into relatively realistic video with sound… but luckily no voice yet. And wisely, the company is not releasing it publicly.

Movie Gen is actually a collection (or “cast,” as Meta calls it) of foundation models, the largest of which is the text-to-video piece. Meta claims it outperforms the likes of Runway Gen3, the latest LumaLabs release and Kling 1.5, although as always, this sort of comparison shows more that they are playing the same game than that Movie Gen wins. The specifics can be found in the paper Meta released describing all the components.

Audio is generated to match the content of the video, adding, for example, engine noises that correspond to the car’s movements, the rush of a waterfall in the background, or a crack of thunder halfway through the video when it’s called for. It will even add music if that seems relevant.

The model was trained on “a combination of licensed and publicly available datasets” that the company called “proprietary/commercially sensitive,” providing no further details. We can only guess that this means plenty of Instagram and Facebook videos, plus some partner material and a lot more that is not adequately protected from scrapers, i.e. “publicly available.”

However, Meta is clearly aiming here not just to claim the “state-of-the-art” crown for a month or two, but at a practical, soup-to-nuts approach in which a very simple, natural-language prompt can be turned into a solid end product. Things like “imagine me as a baker baking a shiny hippopotamus-shaped cake during a storm.”

For example, one of the sticking points with these video generators is how difficult they tend to be to edit. If you request a video of a person crossing the street, then realize you want them walking from right to left rather than left to right, there is a good chance the entire shot will look different when you repeat the prompt with that additional instruction. Meta adds a simple, text-based editing method where you can just say “change the background to a busy intersection” or “change her clothes to a red dress” and it will attempt to make that change, and only that change.


Camera movements are also generally understood, with things like “tracking shot” and “pan left” taken into account when generating the video. It’s still quite clunky compared with real camera controls, but it’s much better than nothing.

The model’s limitations are a little strange. It generates video 768 pixels wide, a dimension familiar to most from the famous but outdated 1024×768 resolution, but which is also three times 256, so it plays well with other HD formats. The Movie Gen system upscales this to 1080p, which is the source of the claim that it produces that resolution. That isn’t quite true, but we’ll give them a pass because upscaling is surprisingly effective.

Oddly enough, it generates up to 16 seconds of video… at 16 frames per second, a frame rate that no one in history has ever wanted or asked for. You can, however, also do 10 seconds of video at 24 frames per second. Lead with that! (Presumably the model works with a fixed budget of roughly 256 frames: 16 seconds at 16 fps is 256 frames, while 10 seconds at 24 fps is 240.)

As for why it doesn’t do voice… well, there are probably two reasons. First, it’s extremely hard. Generating speech is easy now, but matching it to lip movements, and those lips to faces, is a far more complicated proposition. I don’t blame them for leaving it until later, since it would be a minute-one failure case. Someone could say “generate a clown delivering the Gettysburg Address while riding around on a tiny bicycle” – nightmare fuel primed to go viral.

The second reason is probably political: releasing what amounts to a deepfake generator a month before a major election is… not the best look. A practical preventive step is to slightly limit its capabilities, so that malicious actors would have to put in real work to misuse it. You certainly could combine this generative model with a speech generator and an open lip-sync model, but you can’t just generate a candidate making wild claims.

“Movie Gen is currently purely an AI research concept, and even at this early stage, security is a top priority, as it is with all of our generative AI technologies,” a Meta representative said in response to TechCrunch’s questions.

Unlike, say, the Llama series of large language models, Movie Gen will not be publicly available. You can replicate its techniques to some extent by following the research paper, but the code will not be published, apart from the “baseline evaluation prompt dataset,” a record of the prompts used to generate the test videos.

This article was originally published on techcrunch.com.