Technology
Why RAG won’t solve generative AI’s hallucination problem

Hallucinations – essentially the lies that generative artificial intelligence models tell – pose an enormous problem for firms seeking to integrate the technology into their operations.
Because models have no real intelligence – they simply predict words, images, speech, music, and other data according to an internal schema – they often get it wrong. Badly wrong. In a recent piece in The Wall Street Journal, a source recounts a case in which Microsoft’s generative AI invented meeting attendees and implied that conference calls covered topics that weren’t actually discussed on the call.
As I wrote a while ago, hallucinations may be an unsolvable problem with today’s transformer-based model architectures. But many generative AI vendors suggest they can be more or less eliminated through a technical approach called retrieval augmented generation (RAG).
Here’s how one vendor, Squirro, pitches it:
At the core of the offering is the concept of Retrieval Augmented LLMs or Retrieval Augmented Generation (RAG) embedded in the solution… (our generative AI) is unique in its promise of zero hallucinations. Every piece of information it generates is traceable to its source, ensuring credibility.
Here’s a similar pitch from SiftHub:
Using RAG technology and fine-tuned large language models with industry-specific knowledge training, SiftHub allows companies to generate personalized responses with zero hallucinations. This guarantees increased transparency and reduced risk and inspires absolute confidence in using AI for all their needs.
RAG was pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and lead author of the 2020 paper that coined the term. Applied to a model, RAG retrieves documents that may be relevant to a given question – for example, the Wikipedia page for the Super Bowl – using what is essentially a keyword search, and then asks the model to generate an answer given this additional context.
“When you interact with a generative AI model like ChatGPT or Llama and ask a question, by default the model answers from its ‘parametric memory’ – that is, the knowledge stored in its parameters as a result of training on massive data from the web,” explained David Wadden, a research scientist at AI2, the AI-focused research division of the nonprofit Allen Institute. “But, just as you’re likely to give more accurate answers if you have a reference – a book or a file, say – in front of you, the same is true for some models.”
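To make that loop concrete, here is a minimal sketch of the retrieve-then-generate pattern described above. The documents, the question, and the scoring function are illustrative placeholders, and the assembled prompt would then be sent to whichever LLM you use – this is not any particular vendor’s implementation.

```python
# Minimal RAG sketch: score documents by keyword overlap with the question,
# then prepend the best matches to the prompt given to the model.
# All data and the final generation step are illustrative placeholders.

def keyword_score(question: str, document: str) -> int:
    """Count how many words from the question also appear in the document."""
    question_words = set(question.lower().split())
    document_words = set(document.lower().split())
    return len(question_words & document_words)

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents with the highest keyword overlap."""
    ranked = sorted(documents, key=lambda d: keyword_score(question, d), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, context_docs: list[str]) -> str:
    """Ask the model to answer using only the retrieved context."""
    context = "\n\n".join(context_docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    docs = [
        "The Kansas City Chiefs won the Super Bowl last year.",
        "The Super Bowl is the championship game of the NFL.",
        "Patrick Lewis was lead author of the 2020 RAG paper.",
    ]
    question = "Who won the Super Bowl last year?"
    prompt = build_prompt(question, retrieve(question, docs))
    print(prompt)  # this prompt would then be passed to the LLM of your choice
```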
RAG is undeniably useful – it lets you trace what a model generates back to the retrieved documents to verify its factuality (with the added benefit of avoiding potentially copyright-infringing regurgitation). RAG also lets companies that don’t want their documents used to train a model – say, companies in highly regulated industries such as healthcare and law – allow models to draw on those documents in a more secure and temporary way.
But RAG certainly can’t stop a model from hallucinating. And it has limitations that many vendors gloss over.
Wadden says RAG works best in “knowledge-intensive” scenarios where a user wants to use a model to address an “information need” – for example, to find out who won the Super Bowl last year. In these scenarios, the document that answers the question is likely to contain many of the same keywords as the question (e.g., “Super Bowl,” “last year”), making it relatively easy to find via keyword search.
Things get trickier with reasoning-intensive tasks such as coding and math, where it’s harder to specify in a keyword-based query the concepts needed to answer a request, much less identify which documents might be relevant.
Even for basic questions, models can get “distracted” by irrelevant content in the documents, especially long documents where the answer isn’t obvious. Or they can – for reasons as yet unknown – simply ignore the contents of retrieved documents and rely instead on their parametric memory.
RAG is also expensive in terms of the hardware needed to apply it at scale.
That’s because retrieved documents, whether from the web, an internal database, or somewhere else, have to be stored in memory – at least temporarily – so the model can refer back to them. Another expense is the compute needed to process the expanded context before the model generates its response. For a technology already notorious for the amount of compute and electricity it requires even for basic operations, this is a serious consideration.
That’s not to say RAG can’t be improved. Wadden noted many ongoing efforts to train models to make better use of RAG-retrieved documents.
Some of these efforts involve models that can “decide” when to make use of the documents, or models that can choose not to perform retrieval in the first place if they deem it unnecessary. Others focus on ways to index massive document datasets more efficiently and on improving search through better representations of documents – representations that go beyond keywords.
“We’re pretty good at retrieving documents based on keyword matches, but not so good at retrieving documents based on more abstract concepts, like a proof technique needed to solve a math problem,” Wadden said. “Research is needed to build document representations and search techniques that can identify relevant documents for more abstract generation tasks. I think this is mostly an open question at this point.”
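One common direction for representations that go beyond keywords is dense retrieval: ranking documents by the similarity of learned embedding vectors rather than word overlap. The sketch below only illustrates the shape of that approach – the `embed` function is a toy stand-in for a real sentence-embedding model, and the documents and query are invented.

```python
# Dense-retrieval sketch: rank documents by cosine similarity of vector
# representations instead of keyword overlap. The embedding here is a toy
# placeholder; a real system would use a learned sentence-embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy placeholder embedding: hash words into a fixed-size vector so the
    code runs. In practice this would call an embedding model."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity with a small epsilon to avoid division by zero."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def dense_retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the documents whose embeddings are closest to the query embedding."""
    query_vec = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(query_vec, embed(d)), reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    docs = [
        "Proof by induction shows a statement holds for all natural numbers.",
        "The Super Bowl is the championship game of the NFL.",
    ]
    print(dense_retrieve("Which technique proves a claim for every integer n?", docs))
```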
So while RAG can help reduce a model’s hallucinations, it’s not the answer to all of AI’s hallucinatory problems. Beware of any vendor that tries to claim otherwise.
Technology
Microsoft CEO Satya Nadella chooses chatbots over podcasts

Microsoft CEO Satya Nadella says he likes podcasts, but he may not actually listen to them anymore.
That tidbit comes toward the end of a longer Bloomberg profile of Nadella focused on Microsoft’s AI strategy and its complicated relationship with OpenAI. To illustrate how much he uses Microsoft’s Copilot AI assistant in his daily life, Nadella said that instead of listening to podcasts, he now sends the transcript to Copilot and then talks with Copilot about the content during his drive to the office.
In addition, Nadella – who has joked that his job mostly amounts to processing email – said he relies on at least 10 custom agents built in Copilot Studio to summarize emails and messages, prepare him for meetings, and handle other tasks around the office.
It also sounds like AI may already be transforming Microsoft in more significant ways: programmers were reportedly hit hardest in the company’s latest round of layoffs, which came shortly after Nadella said that as much as 30% of the company’s code is now written by AI.
Technology
OpenAI’s planned Abu Dhabi data center would be bigger than Monaco

OpenAI is poised to help develop a staggering 5-gigawatt data center campus in Abu Dhabi, positioning the company as a primary anchor tenant in what could become one of the world’s largest AI infrastructure projects, according to a new Bloomberg report.
The facility would reportedly span a massive 10 square miles and consume power equivalent to five nuclear reactors, dwarfing any existing AI infrastructure announced by OpenAI or its competitors. (OpenAI has not yet responded to TechCrunch’s request for comment, but for perspective, the site would be larger than Monaco.)
The UAE project, developed in partnership with G42, a conglomerate headquartered in Abu Dhabi, is part of OpenAI’s ambitious Stargate project, a joint venture announced in January that could see massive data centers built around the world to power AI development.
While OpenAI’s first Stargate campus in the United States – already under way in Abilene, Texas – is expected to reach 1.2 gigawatts, its Middle East counterpart will be more than four times that size.
The project comes amid a broader wave of AI deals between the U.S. and the UAE that have been years in the making – and that have unsettled some lawmakers.
OpenAI’s ties to the UAE date back to a 2023 partnership with G42 aimed at driving AI adoption in the Middle East. Speaking earlier in Abu Dhabi, OpenAI CEO Sam Altman praised the UAE, saying it had been “talking about AI since before it was cool.”
As with much of the AI world, those relationships are… complicated. Founded in 2018, G42 is chaired by Sheikh Tahnoun bin Zayed Al Nahyan, the UAE’s national security adviser and the younger brother of the country’s ruler. OpenAI’s embrace of G42 raised concerns in late 2023 among U.S. officials, who feared that G42 could give the Chinese government access to advanced American technology.
Those fears centered on G42’s “active relationships” with blacklisted entities, including Huawei and the Beijing Genomics Institute, as well as with individuals linked to Chinese intelligence efforts.
After pressure from U.S. lawmakers, G42’s CEO told Bloomberg in early 2024 that the company had changed its strategy, saying: “All of our previous investments in China have already been divested. Because of that, of course, we no longer need any physical presence in China.”
Shortly afterward, Microsoft – OpenAI’s principal shareholder, with its own broader interests in the region – announced a $1.5 billion investment in G42, with its president Brad Smith joining G42’s board.
Technology
Redpoint raises $650 million three years after its last big early-stage fund

Redpoint Ventures, a San Francisco-based firm that is about a quarter of a century old, has raised a $650 million early-stage fund, according to a regulatory filing.
The new Redpoint fund matches the size of its previous one, which was raised slightly less than three years ago. In a market where many venture firms are cutting their capital allocations, that consistency may indicate that limited partners are relatively happy with its results.
The firm’s early-stage strategy is run by four managing partners: Alex Bard (pictured above), Satish Dharmaraj, Annie Kadavy, and Erica Brescia, who joined the firm in 2021 after serving as GitHub’s chief operating officer for nearly three years.
The early-stage team’s recent notable investments include AI coding startup Poolside, founded by former Redpoint partner and GitHub CTO Jason Warner; a developer of distributed SQL databases; and procurement management platform Levelpath.
The multi-stage firm also runs a growth strategy led by partners Logan Bartlett, Jacob Effron, Elliot Geidt, and Scott Raney. Last year, Redpoint raised its fifth growth fund at $740 million, a slight increase over the $725 million fund it closed three years earlier.
Redpoint’s recent exits include Next Insurance, which was sold for $2.6 billion in March; travel media startup Tastemade, acquired by Wonder for $90 million; and HashiCorp’s $6.4 billion acquisition by IBM.
Redpoint did not respond to a request for comment.