Will people really pay $200 a month for a new OpenAI chatbot?

On Thursday, OpenAI released a $200-a-month chatbot, and the AI community wasn't quite sure what to make of it.

The company’s new ChatGPT Pro plan gives you access to “o1 pro mode,” which OpenAI says “uses more processing power to get the best answers to your toughest questions.” An improved version of OpenAI’s o1 reasoning model, o1 pro mode, should answer questions related to science, math and coding in a more “robust” and “comprehensive” way, OpenAI says.

Almost immediately, people began asking it to draw unicorns:

And design a “crab-based” computer:

And wax poetic about the meaning of life:

But many people on X didn't seem convinced that o1 pro mode's answers were worth $200.

“Has OpenAI provided any specific examples of prompts that fail in regular o1 but succeed in o1-pro?” asked British computer scientist Simon Willison. “I want to see a single specific example that shows its advantage.”

It's a fair question; after all, this is the most expensive chatbot subscription on the market. The service comes with other benefits, such as the removal of rate limits and unlimited access to OpenAI's other models. But $2,400 a year is nothing to sneeze at, and the value proposition of o1 pro mode specifically remains unclear.

It didn't take long to find failure cases. O1 pro mode struggles with Sudoku, and it's tripped up by an optical illusion joke that's obvious to any human.

OpenAI's own internal benchmarks show that o1 pro mode performs only slightly better than the standard o1 on coding and math problems:

Image credits: OpenAI

OpenAI also ran a "stricter" version of the same evaluations to demonstrate o1 pro mode's consistency: a model was only counted as having solved a question if it answered correctly in four out of four attempts. But even in these tests the improvement wasn't dramatic:

Image credits: OpenAI
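To make that scoring rule concrete, here's a minimal sketch (my own illustration, not OpenAI's actual evaluation code) of how a strict "four out of four" consistency score could be computed:

```python
# Illustrative only: score a model under a strict "4 of 4" rule, where a
# question counts as solved only if all four sampled attempts are correct.

def strict_pass_rate(attempts_per_question: list[list[bool]]) -> float:
    """attempts_per_question[i] holds the graded correctness of each of the
    four attempts at question i; a question is solved only if all are True."""
    solved = sum(
        1 for attempts in attempts_per_question
        if len(attempts) == 4 and all(attempts)
    )
    return solved / len(attempts_per_question)

# Hypothetical example: three questions, four graded attempts each.
results = [
    [True, True, True, True],    # solved under the strict rule
    [True, True, False, True],   # one miss, so not counted
    [True, True, True, True],    # solved
]
print(f"strict 4/4 score: {strict_pass_rate(results):.2f}")  # 0.67
```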

OpenAI CEO Sam Altman, who once wrote that OpenAI is on a path "toward intelligence too cheap to meter," was forced to explain multiple times on Thursday that ChatGPT Pro isn't for most people.

"Most users will be very happy with o1 in (ChatGPT) Plus!" he wrote on X. "Almost everyone will be best served by our free tier or Plus tier."

So who is this for? Are there really people willing to pay $200 a month to ask toy questions like "Write a 3-paragraph essay about strawberries without using the letter 'e'" or "Solve this Math Olympiad problem"? Will they happily part with their hard-earned money, with little guarantee that the standard o1 couldn't answer the same questions satisfactorily?

I asked Ameet Talwalkar, an associate professor of machine learning at Carnegie Mellon and a venture partner at Amplify Partners, for his opinion. "I think it's a big risk to raise the price tenfold," he told TechCrunch by email. "I think in just a few weeks we'll have a much better sense of the appetite for this functionality."

UCLA computer scientist Guy Van den Broeck was more candid in his assessment. “I don’t know if this price makes sense,” he told TechCrunch, “and whether expensive reasoning models will be the norm.”

The generous view is that this is a marketing misfire. Describing o1 pro mode as best at solving the "toughest problems" doesn't tell potential customers much. Neither do vague statements about how the model can "think longer" and demonstrate "intelligence." As Willison notes, without concrete examples of the supposedly improved capabilities, it's hard to justify paying more at all, let alone ten times the price.

My best guess is that the target market is experts in specialized fields. OpenAI says it plans to grant a handful of medical researchers at "leading institutions" free access to ChatGPT Pro, which will include o1 pro mode. Mistakes matter enormously in healthcare, and, as OpenAI's former research chief Bob McGrew noted on X, better reliability is probably o1 pro mode's main unlock.

McGrew also mused that o1 pro mode is an example of what he calls "intelligence overhang": users (and perhaps model developers) don't know how to extract value from the "extra intelligence" because of the basic limitations of a simple, text-based interface. As with OpenAI's other models, the only way to interact with o1 pro mode is through ChatGPT, and, according to McGrew, ChatGPT isn't perfect.

However, it’s also true that $200 sets high expectations. Judging by its early reception on social media, ChatGPT Pro is not exactly a hit.


This article was originally published on techcrunch.com
