
Technology

Mark Zuckerberg claims that Threads has 150 million monthly active users

Threads, Meta’s rival to Twitter/X, is growing at a steady pace. The social network now has more than 150 million monthly active users, up from 130 million in February, Mark Zuckerberg said during the company’s first-quarter 2024 earnings call.

Since its last quarterly earnings call, Threads has taken concrete steps toward integration with ActivityPub, the decentralized protocol that powers networks like Mastodon. In March, the company allowed U.S. users 18 and older to connect their accounts to the fediverse so their posts can appear on other servers.

The company also plans to make its API available to a wide range of developers by June, letting them build experiences on the social network. However, it remains unclear whether Threads will allow developers to create full-fledged third-party clients.

Last week, Meta launched its AI chatbot on Facebook, Messenger, WhatsApp, and Instagram. Threads was a notable exclusion from that list, likely because it lacks native DM functionality.

On Wednesday, Threads also rolled out a test of a feature that automatically archives users’ posts after a specified time has passed. Users can also archive or unarchive individual posts and make archived posts public again.

Threads is roughly nine months old, and Meta continues to grow its audience. But it’s not exactly a substitute for X: Instagram head Adam Mosseri said in October that Threads would not “amplify news on the platform.” Still, the Meta social network continues to gain popularity. Earlier this week, Business Insider reported that Threads now has more daily active users than X in the U.S., according to app analytics firm Apptopia.

This article was originally published on techcrunch.com.

Technology

Why RAG won’t solve generative AI’s hallucination problem

Hallucinations – the lies generative AI models tell, essentially – are a big problem for businesses looking to integrate the technology into their operations.

Because models have no real intelligence and are simply predicting words, images, speech, music, and other data according to a private schema, they sometimes get it wrong. Badly wrong. In a recent piece in The Wall Street Journal, a source recounts an instance where Microsoft’s generative AI invented meeting attendees and implied that conference calls covered subjects that weren’t actually discussed on the call.

As I wrote a while ago, hallucinations may be an unsolvable problem with today’s transformer-based model architectures. But a number of generative AI vendors suggest that they can be more or less eliminated through a technical approach called retrieval-augmented generation (RAG).

Here’s how one vendor, Squirro, pitches it:

At the core of the offering is the concept of Retrieval Augmented LLMs or Retrieval Augmented Generation (RAG) embedded in the solution… (our generative AI) is unique in its promise of zero hallucinations. Every piece of information it generates is traceable to a source, ensuring credibility.

Here’s a similar pitch from SiftHub:

Using RAG technology and fine-tuned large language models with industry-specific knowledge training, SiftHub allows companies to generate personalized responses with zero hallucinations. This guarantees increased transparency and decreased risk and inspires absolute trust to use AI for all their needs.

RAG was pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and lead author of the 2020 paper that coined the term. Applied to a model, RAG retrieves documents possibly relevant to a question – for example, a Wikipedia page about the Super Bowl – using what’s essentially a keyword search, and then asks the model to generate an answer given this extra context.
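The retrieve-then-generate flow described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual pipeline: the corpus, the overlap-count scoring, and the prompt format are all assumptions made for the example.

```python
# Minimal sketch of the RAG flow: keyword-based retrieval, then prompting
# a model with the retrieved context. (The model call itself is omitted;
# build_prompt returns the augmented prompt the model would receive.)

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Rank documents by how many query keywords appear in their text."""
    keywords = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(keywords & set(item[1].lower().split())),
        reverse=True,
    )
    return [title for title, _ in scored[:k]]

def build_prompt(query: str, corpus: dict[str, str], k: int = 1) -> str:
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(corpus[title] for title in retrieve(query, corpus, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = {
    "Super Bowl LVIII": "the kansas city chiefs won the super bowl last year",
    "Mastodon": "mastodon is a decentralized social network",
}
print(retrieve("who won the super bowl last year", corpus))
# ['Super Bowl LVIII'] -- the doc sharing the most keywords with the query
```

Note how well this works precisely because the query and the answering document share keywords (“super,” “bowl,” “last,” “year”) – the failure modes discussed below arise when they don’t.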

“When you’re interacting with a generative AI model like ChatGPT or Llama and you ask a question, the default is for the model to answer from its ‘parametric memory’ – i.e., from the knowledge stored in its parameters as a result of training on massive data from the web,” explained David Wadden, a research scientist at AI2, the AI research arm of the nonprofit Allen Institute. “But, just as you’re likely to give more accurate answers if you have a reference in front of you (e.g., a book or a file), the same is true in some cases for models.”

RAG is undeniably useful – it lets you attribute things a model generates to retrieved documents to verify their factuality (with the added benefit of avoiding potentially copyright-infringing regurgitation). RAG also lets companies that don’t want their documents used to train a model – say, companies in highly regulated industries such as healthcare and law – allow models to draw on those documents in a more secure and temporary way.

But RAG certainly can’t stop a model from hallucinating. And it has limitations that many vendors gloss over.

Wadden says RAG works best in “knowledge-intensive” scenarios where a user wants to use a model to address an “information need” – for example, to find out who won the Super Bowl last year. In these scenarios, the document that answers the question is likely to contain many of the same keywords as the question (e.g., “Super Bowl,” “last year”), making it relatively easy to find via keyword search.

Things get trickier with reasoning-intensive tasks such as coding and math, where it’s harder to specify in a keyword-based query the concepts needed to answer the question, much less identify which documents might be relevant.

Even with basic questions, models can get “distracted” by irrelevant content in the documents, particularly long documents where the answer isn’t obvious. Or they can, for reasons as yet unknown, simply ignore the contents of retrieved documents and rely instead on their parametric memory.

RAG is also expensive in terms of the hardware needed to apply it at scale.

That’s because retrieved documents, whether from the web, an internal database, or somewhere else, have to be stored in memory – at least temporarily – so the model can refer back to them. Another expenditure is compute for the increased context a model has to process before generating its response. For a technology already notorious for the amount of compute and electricity it requires for even basic operations, this amounts to a serious consideration.

That’s not to suggest RAG can’t be improved. Wadden noted many ongoing efforts to train models to make better use of RAG-retrieved documents.

Some of these efforts involve models that can “decide” when to make use of the documents, or models that can choose not to perform retrieval in the first place if they deem it unnecessary. Others focus on ways to more efficiently index massive datasets of documents, and on improving search through better representations of documents – representations that go beyond keywords.
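The “decide when to retrieve” idea mentioned above can be illustrated with a toy gate. The confidence heuristic here (a hand-picked set of topics the model is assumed to know well) is a stand-in for a real learned signal, purely for illustration:

```python
# Illustrative sketch of adaptive retrieval: skip the retrieval step when a
# query falls entirely within topics the model is assumed to answer reliably
# from parametric memory. A real system would use a learned confidence signal,
# not a hand-written topic list.

def should_retrieve(query: str, confident_topics: set[str]) -> bool:
    """Return True if retrieval is needed, i.e. no known topic is mentioned."""
    words = set(query.lower().split())
    return not (words & confident_topics)

confident_topics = {"arithmetic", "grammar"}
print(should_retrieve("explain basic grammar rules", confident_topics))
# False -- a known topic, so skip the costly retrieval step
print(should_retrieve("who won the super bowl last year", confident_topics))
# True -- outside the model's confident topics, so retrieve
```

Skipping retrieval when it isn’t needed directly addresses the cost concern above: every avoided retrieval saves both the document memory and the extra context the model would otherwise have to process.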

“We’re pretty good at retrieving documents based on keywords, but not so good at retrieving documents based on more abstract concepts, like a proof technique needed to solve a math problem,” Wadden said. “Research is needed to build document representations and search techniques that can identify relevant documents for more abstract generation tasks. I think this is mostly an open question at this point.”

So RAG can help reduce a model’s hallucinations – but it’s not the answer to all of AI’s hallucinatory problems. Beware of any vendor that tries to claim otherwise.

This article was originally published on techcrunch.com.

Technology

Luminar lays off 20% of staff and outsources lidar production

Lidar maker Luminar is reducing its workforce by about 20% and will lean more on its contract manufacturing partner as part of a restructuring that will shift the company to a more “asset-light” business model as it seeks to scale production.

The cuts will affect roughly 140 employees and take effect immediately. Luminar is also cutting ties with “most” of its contract workers.

“Today we stand at the intersection of two realities: the core of our business has never been stronger in technology, products, industrialization and commercialization; at the same time, the capital markets’ perception of our company has never been more challenging,” billionaire founder and CEO Austin Russell said in a letter posted on the Luminar website. “The business model and cost structure that enabled us to achieve this leadership position no longer meet the needs of the company.”

Russell wrote in the letter that the restructuring will allow Luminar to bring products to market faster, “drastically reduce” costs, and put the company on an improved path to profitability. The company said in a regulatory filing that the changes should reduce operating costs “by $50 million to $65 million annually.” The company is also shrinking its global footprint “by subleasing some or all of certain facilities.”

Luminar will continue to operate its Florida facility, which is used for development, testing, and R&D purposes, according to spokesperson Milin Mehta.

In April, Luminar announced that it had started shipping production lidar sensors to Volvo for the automaker’s EX90 luxury SUV. It also announced plans to deepen its relationship with Taiwan-based contract manufacturer TPK Holding. TPK “has committed to an exclusive partnership with Luminar,” Russell wrote in his letter.

This article was originally published on techcrunch.com.

Technology

Iconiq raises $5.15 billion for seventh flagship fund

SEC filings show that Iconiq Capital has raised $5.15 billion across two funds affiliated with its seventh family of growth funds.

The firm, which started in 2011 as a family office managing capital for some of the most prominent and wealthy figures in the technology industry, including Mark Zuckerberg and Jack Dorsey, originally targeted $5.75 billion, The Wall Street Journal reported in March 2022. It’s unclear whether the firm is still raising capital toward that target.

Iconiq didn’t immediately respond to a request for comment.

The fund size represents a significant jump over Iconiq Fund VI’s $3.75 billion target.

Iconiq’s latest fundraise is impressive considering that many other growth investors have fallen short of their targets of late. Most notably, Tiger Global closed its latest venture capital fund at $2.2 billion, the firm’s smallest fund since 2014, Bloomberg reported. Tiger initially planned to raise $6 billion – less than half of its predecessor’s $12.7 billion total, which the firm closed in March 2022.

The two giant funds are not in exactly the same situation, though. Tiger Global was widely criticized for deploying capital too rapidly at exorbitant prices during the tech boom of 2020 and 2021 (though it has always denied the notion that it overpaid). And unlike Tiger Global, which has been actively selling stakes to secure liquidity, Iconiq is buying secondary positions, according to two sources.

Iconiq’s substantial fundraise likely means its backers are relatively happy with the firm’s investment strategy.

According to PitchBook data, Iconiq has notched dozens of exits from its portfolio in recent years, including the IPOs of Snowflake, Airbnb, GitLab, and HashiCorp. In 2023, Iconiq invested $1.1 billion across 22 companies, it says, and its portfolio includes startups like Wire, Canva, Ramp, ServiceTitan, Writer, and Pigment.

Fund VII-B raised $3.95 billion from 291 investors, while Fund VII closed at $1.26 billion from 462 backers, according to regulatory filings.

The seventh Iconiq vehicle will invest in 20 to 25 technology companies, according to an Insider report based on a New Mexico Investment Board meeting held in March 2022.

This article was originally published on techcrunch.com.