Technology
Study of ChatGPT citations makes for grim reading for publishers

As more publishers cut content licensing deals with ChatGPT developer OpenAI, a study published this week by the Tow Center for Digital Journalism – examining how the AI chatbot produces citations (i.e. sources) for publishers’ content – makes for interesting, or, well, concerning reading.
In short, the findings suggest that publishers remain at the mercy of a generative AI tool’s tendency to invent information or otherwise misrepresent it, whether or not they allow OpenAI to crawl their content.
The study, conducted at the Columbia Journalism School, examined citations produced by ChatGPT after it was asked to identify the source of sample quotations taken from a mix of publishers – some of which had struck deals with OpenAI and some of which had not.
“We selected citations that, when pasted into Google or Bing, resulted in the source article being in the top three results and assessed whether the new OpenAI search tool would correctly identify the article that was the source of each citation,” Tow researchers Klaudia Jaźwińska and Aisvarya Chandrasekar wrote in a blog post explaining their approach and summarizing their findings.
“What we found did not make news publishers feel optimistic,” they continued. “While OpenAI emphasizes its ability to provide users with ‘timely responses that include links to appropriate online sources,’ the company makes no explicit commitment to ensuring the accuracy of those citations. This is a notable omission for publishers who expect their content to be referenced and represented accurately.”
“Our tests showed that no publisher – regardless of its level of affiliation with OpenAI – was spared inaccurate representations of its content in ChatGPT,” they added.
Unreliable source
The researchers say they found “numerous” cases where ChatGPT cited publishers’ content inaccurately, also finding what amounts to a spectrum of accuracy in the responses. So while they found “a few” completely correct citations (i.e. ChatGPT accurately returned the publisher, date, and URL of the quote shared with it), there were “many” citations that were completely wrong, and “some” that fell somewhere in between.
In short, ChatGPT’s citations appear to be an unreliable mixed bag. The researchers also found very few cases where the chatbot did not project complete confidence in its (erroneous) answers.
Some of the quotations came from publishers that have actively blocked OpenAI’s search crawlers. In those cases, the researchers say they expected problems producing correct citations. But they found this scenario raised another issue, as the bot “rarely” admitted it was unable to provide an answer. Instead, it fell back on confabulation to generate some sourcing (albeit incorrect sourcing).
“In total, ChatGPT returned partially or completely incorrect responses in 153 cases, although it only acknowledged an inability to accurately answer a query seven times,” the researchers said. “Only in those seven instances did the chatbot use qualifying words and phrases such as ‘it seems’, ‘it’s possible’ or ‘perhaps’, or statements such as ‘I could not locate the exact article.’”
They compare this unhappy situation with a standard internet search, where search engines like Google or Bing typically either locate the exact quote and point the user to the website where they found it, or state that they found no results with an exact match.
“ChatGPT’s lack of transparency about its confidence in a response may make it difficult for users to assess the validity of a claim and understand which parts of the response they can and cannot trust,” they argue.
They suggest publishers may also face reputational risk from incorrect citations, on top of the commercial risk of readers being pointed elsewhere.
Decontextualized data
The study also highlights another issue: it suggests ChatGPT could essentially be rewarding plagiarism. The researchers cite a case in which ChatGPT erroneously cited a website that had plagiarized a piece of “deeply reported” New York Times journalism – i.e., by copying and pasting the text without attribution – as the source of the NYT story, speculating that the bot may have generated this false answer to fill an information gap caused by its inability to crawl the NYT’s website.
“This raises serious questions about OpenAI’s ability to filter and check the quality and authenticity of data sources, especially for unlicensed or plagiarized content,” they suggest.
An additional finding likely to concern publishers that have signed deals with OpenAI is that ChatGPT’s citations were not always reliable in their cases either – so letting OpenAI’s crawlers in does not appear to guarantee accuracy.
The researchers argue that the fundamental issue is that OpenAI’s technology treats journalism “as context-free content,” with apparently little regard for the circumstances in which it was originally produced.
Another issue flagged in the study is the variation in ChatGPT’s responses. The researchers tested asking the bot the same query multiple times and found it “usually returned a different answer each time.” While that is typical of GenAI tools in general, in a citation context such inconsistency is obviously suboptimal if accuracy is what you care about.
While Tow’s study was conducted on a small scale – the researchers acknowledge that “more rigorous” testing is required – it’s nevertheless noteworthy given the high-level deals that major publishers are busy making with OpenAI.
If media businesses had hoped these arrangements would lead to special treatment for their content compared with competitors, at least in terms of accurate sourcing, this study suggests OpenAI has yet to deliver any such consistency.
Meanwhile, for publishers that don’t have licensing deals but also haven’t blocked OpenAI’s bots outright – perhaps in the hope of at least picking up some traffic when ChatGPT returns content about their stories – the study makes bleak reading too, since citations may not be accurate in their cases either.
In other words, there is no guaranteed “visibility” for publishers in OpenAI’s search engine even when they do allow its crawlers in.
Nor does blocking crawlers entirely mean publishers can shield themselves from reputational risk by keeping any mention of their articles out of ChatGPT. The study found the bot still incorrectly attributed articles to the New York Times, for example, despite the ongoing lawsuit.
“Little meaningful agency”
The researchers conclude that, as it stands, publishers have “little meaningful agency” over what happens to and with their content once ChatGPT gets hold of it (whether directly or, well, indirectly).
OpenAI responded to the findings in a blog post, accusing the researchers of conducting an “unusual test of our product.”
“We support publishers and creators by helping ChatGPT’s 250 million weekly users discover high-quality content through summaries, citations, clear links and attributions,” OpenAI also said, adding: “We have worked with partners to improve the accuracy of in-line citations and respect publisher preferences, including enabling how they appear in search by managing OAI-SearchBot in their robots.txt file. We will continue to improve search results.”
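For context, the robots.txt mechanism OpenAI refers to is the standard way publishers express crawler preferences. A minimal sketch of what such a file might contain, assuming a hypothetical publisher that wants its articles to remain discoverable in ChatGPT search while opting out of model training (OAI-SearchBot and GPTBot are OpenAI’s documented crawler user agents):

# Allow OpenAI’s search crawler so articles can surface in ChatGPT search
User-agent: OAI-SearchBot
Allow: /

# Block OpenAI’s model-training crawler
User-agent: GPTBot
Disallow: /

As the Tow findings suggest, though, these directives only govern crawling; they do not determine how accurately ChatGPT ultimately cites the content it surfaces.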
Technology
The Legal Defense Fund withdraws from Meta civil rights advisory group over DEI rollback

On April 11, the Legal Defense Fund announced it was leaving Meta’s external civil rights advisory council over concerns about the changes the technology company made in January to its diversity, equity, inclusion, and accessibility policies.
Those changes, which some perceived as Meta capitulating to the incoming Trump administration, contributed to the organization’s decision to leave the technology company’s advisory council.
In January, LDF, along with several other civil rights organizations that were part of the council, sent a letter to Meta CEO Mark Zuckerberg outlining their concerns about how the changes would negatively affect users.
“We are shocked and disappointed that Meta did not consult with this group or its members about these significant changes to its content policy. Failing to consult even its own advisory group of external civil rights experts shows a cynical disregard for its diverse user base and undermines Meta’s commitment to the free speech principles it claims to be ‘returning’ to.”
They closed the letter by expressing hope that Meta would uphold the ideals of free speech: “If Meta truly wants to champion free speech, it must commit to free speech for everyone who uses its services. As an external civil rights advisory group, we offer our advice and expertise in creating a better path.”
Those concerns only grew in the months that followed, culminating in another letter, this one from LDF director Todd A. Cox, indicating that the organization was withdrawing its membership from Meta’s civil rights advisory council.
“I am deeply disturbed and disappointed by Meta’s January 7, 2025 announcement of irresponsible changes to content moderation policies across its platforms, which pose a serious risk to the health and safety of Black communities and risk destabilizing our republic,” Cox wrote.
He continued: “For nearly a decade, the NAACP Legal Defense and Educational Fund, Inc. (LDF) has invested substantial time and resources working with Meta as part of an informal committee advising the company on civil rights matters. However, Meta made these content moderation policy changes without consulting this group, and many of the changes directly contradict guidance from LDF and its partners. LDF can therefore no longer participate in Meta’s civil rights advisory committee.”
In a separate but related letter, LDF pointedly reminded Meta of its ongoing obligations under the Civil Rights Act of 1964 and other laws governing workplace discrimination, in contrast to the Trump administration’s false claims that diversity, equity, and inclusion initiatives discriminate against white Americans.
“While Meta has changed its policies, its obligations under federal civil rights laws remain unchanged. Title VII of the Civil Rights Act of 1964 and other civil rights laws prohibit discrimination in the workplace, including disparate treatment, workplace policies that have unjustified disparate impacts, and hostile work environments – including with respect to inclusion and access programs.”
In the LDF press release announcing both letters, Cox called attention to Meta’s contribution to growing violence and division in the country’s social climate.
“LDF worked hard and in good faith with Meta’s leadership and its civil rights advisory group to ensure that the company’s workforce reflects the values and racial makeup of the United States, and to make the safety of the many different communities that use Meta’s platforms a higher priority,” said Cox. “We cannot in good conscience support a company that knowingly takes steps to introduce policy changes that fuel further division and violence in the United States. We call on Meta to reverse course on these dangerous changes.”
Technology
Young, talented and Black Yale students raise $3 million for a new app

Nathaneo Johnson and Sean Hargrow, juniors at Yale University, raised $3 million in just 14 days to fund their startup Series, an AI-powered social app designed to foster meaningful connections and challenge platforms like LinkedIn and Instagram.
The duo, who co-host a podcast in which they interview founders, created the app after recognizing a gap in the way digital platforms help people connect. Series focuses on facilitating authentic introductions rather than accumulating likes, followers, or engagement metrics.
“Social media is great for broadcasting, but it does not necessarily help you meet the right people at the right time,” Johnson said in an interview with Entrepreneur magazine.
Series connects users through AI “friends” that communicate via iMessage and help broker introductions. Users state specific needs – whether they are looking for co-founders, mentors, collaborators, or investors – and the AI facilitates introductions based on mutual value. The concept draws comparisons to LinkedIn, but with a more personal experience.
“You post photos on Instagram, post videos on TikTok and post work updates on LinkedIn … and that’s where you have this micro-influencer band,” Johnson added.
The app aims to avoid the superficiality of typical social platforms. Hargrow emphasized that while aesthetics often dominate Instagram and viral content drives TikTok, Series is about intentional, deliberate connections.
“We are not trying to replace relationships in the real world – we are going to make it easier for people to find the right relationships,” said Hargrow.
Parable led the pre-seed funding round, which included participation from Pear VC, DGB VC, 47th Street, Radicle Impact, Uncommon Projects, and several notable angel investors, including Reddit CEO Steve Huffman and GPTZero founder Edward Tian. Johnson described one investor meeting as a “million-dollar dinner,” reflecting how their pitch resonated with early supporters.
Though they are not business majors, Johnson and Hargrow built an entrepreneurial foundation through their podcast, on which they interview founders and C-suite leaders about lesser-known aspects of building a company, such as accounting, business law, and team formation.
Since Series launched, more than 32,000 messages have been exchanged between “friends” during its test phases. The app is initially targeting the entrepreneur market, but the founders hope to expand into finance, dating, education, and health – ultimately aiming to build the most accessible warm network in the world.
Technology
Used Tesla listings surged in March

A growing number of Tesla owners are putting their used vehicles up for sale as consumers react to Elon Musk’s political activities and the global protests they have sparked.
In March, the number of used Tesla vehicles listed for sale on autotrader.com rose sharply, Sherwood News reported, citing data from Autotrader’s parent company Cox Automotive. The numbers were particularly high in the last week of March, when more than 13,000 used Teslas were listed on average. That was not just a record – it was a 67% increase over the same week a year earlier.
At the same time, sales of new Tesla vehicles have slowed even as EV sales from other brands increase. In the first quarter of 2025, nearly 300,000 new EVs were sold in the US, according to the latest Kelley Blue Book report, an increase of 10.6% year on year. Meanwhile, Tesla’s sales fell nearly 9% in the first quarter compared with the same period in 2024.
Automakers such as GM and Hyundai still trail Tesla, but they are seeing strong growth. GM’s brands, for example, sold more than 30,000 EVs in the first quarter, nearly double the volume of a year ago, according to Kelley Blue Book.