Technology
EU DSA enforcers send Snapchat, TikTok and YouTube more questions on AI risks

The European Union on Wednesday asked Snapchat, TikTok and YouTube for more details about their respective content recommendation algorithms, an area covered by the EU’s online governance rulebook, the Digital Services Act (DSA).
In a press release, the Commission said it had sent requests for information (RFIs) to the three social media platforms, asking for more details about the design and functioning of their recommendation algorithms. The trio has until November 15 to supply the requested information.
The EU said their responses would inform further steps, such as potentially opening a formal investigation.
The bloc’s online governance framework carries tough penalties for violations (up to 6% of global annual turnover). It applies an additional layer of systemic risk mitigation rules to the three platforms on account of their designation as VLOPs (very large online platforms).
These rules require larger platforms to identify and mitigate risks that may arise from their use of artificial intelligence as a content recommendation tool, with the law stating that they must take action to prevent negative impacts in a range of areas, including users’ mental health and civic discourse. The EU has also warned that algorithms designed to increase engagement can lead to the spread of harmful content. This appears to be the focus of the latest RFIs.
“Questions also concern the measures used by platforms to mitigate the potential impact of their recommendation systems on the spread of illegal content, such as the promotion of illicit drugs and hate speech,” the EU added.
In the case of TikTok, the Commission is requesting more detailed information on the anti-manipulation measures the platform has implemented to prevent malicious actors from using it to spread harmful content. The EU is also asking TikTok for more information on how it mitigates risks related to elections, media pluralism and civic discourse – systemic risks it says can be amplified by recommendation systems.
These latest requests for information aren’t the first the Commission has sent to the three platforms. Earlier DSA queries included questions to the trio (and several other VLOPs) about electoral risks ahead of the European Parliament elections earlier this year. The Commission has also previously questioned all three about child protection issues. Additionally, last year it sent TikTok an RFI asking how the platform was responding to risks linked to content about the war between Israel and Hamas.
However, the ByteDance-owned platform is the only one of the three under formal DSA investigation so far. In February, the bloc opened an investigation into TikTok’s DSA compliance, citing concerns over a range of issues including the platform’s approach to protecting minors and its management of risks around addictive design and harmful content. That investigation is ongoing.
TikTok spokesperson Paolo Ganino emailed TechCrunch a statement confirming receipt of the request: “This morning we received a request for information from the European Commission, which we will now consider. We will cooperate with the Commission throughout the RFI process.”
We also contacted Snap and YouTube for responses to the Commission’s latest requests for information.
The DSA’s rules for VLOPs have been in place since late last summer, but the bloc has yet to conclude any of the several probes it has opened into larger platforms. In July, however, the Commission presented preliminary findings from its investigation into X, saying it suspects the social network of breaching the DSA’s rules on dark pattern design, researcher access to data, and advertising transparency.
Technology
Aurora launches commercial self-driving truck service in Texas

Autonomous vehicle technology startup Aurora Innovation says it has successfully launched a self-driving truck service in Texas, making it the first company to deploy driverless, heavy-duty trucks for commercial use on public roads in the U.S.
The launch comes after a delay: in October, the company pushed its planned 2024 debut to April 2025. It also comes five months after rival Kodiak Robotics delivered its first autonomous trucks to a commercial customer for driverless operations in field environments.
Aurora says it began hauling freight between Dallas and Houston this week with launch customers Hirschbach Motor Lines and Uber Freight, and that it has completed 1,200 driverless miles so far. The company plans to expand to El Paso and Phoenix by the end of 2025.
TechCrunch has reached out to Aurora for more details about the launch, such as how many vehicles it has deployed and whether the system has had to perform any pullover maneuvers or required remote human assistance.
Aurora’s commercial launch comes at a challenging time. Self-driving trucks have long been pitched as a response to labor shortages in trucking and an expected rise in freight shipping. Trump’s tariffs have changed that outlook, at least in the short term. According to an April report from commercial vehicle industry analyst firm ACT Research, freight is expected to contract in the U.S. this year amid declining volumes and consumer spending.
Aurora will report its first-quarter results next week, which is when it will share how it expects the current trade war to affect its future business. TechCrunch has reached out to learn more about how tariffs are affecting Aurora’s operations.
For now, Aurora will likely focus on continuing to prove out its driverless safety case and on working with state and federal lawmakers to pass favorable policies that help it grow.
In early 2025, Aurora filed a lawsuit against federal regulators after its application for an exemption from a safety requirement – placing warning triangles on the road when a truck has to stop on the highway, something that’s difficult to do when there is no driver in the vehicle – was denied. To stay compliant with that rule and continue operating fully driverless, Aurora will likely have a human-driven vehicle trail its trucks while they operate.
Technology
Sarah Tavel, Benchmark’s first female general partner, moves to a venture partner role

Eight years after joining Benchmark as the firm’s first female general partner, Sarah Tavel has announced that she is moving to a more limited venture partner role.
In her new position as a venture partner, Tavel will continue to invest and serve on her existing company boards, but she will have more time to explore “AI tools on the edge” and think about where artificial intelligence is headed, she wrote.
Tavel joined Benchmark in 2017 after spending a year and a half as a partner at Greylock and three years as a product manager at Pinterest. Before Pinterest, Tavel was an investor at Bessemer Venture Partners, where she helped source the firm’s investments in Pinterest and GitHub.
Since its founding in 1995, Benchmark has intentionally maintained a small team of six or fewer general partners. Unlike most VC firms, where senior partners typically receive the bulk of the management fees and profits, Benchmark operates as an equal partnership, with all partners sharing fees and returns equally.
During her tenure as a general partner at Benchmark, Tavel invested in campsite booking platform Hipcamp, crypto intelligence startup Chainalysis, and cosmetics community Supergreat, which was acquired by Whatnot in 2023. Tavel also backed photo-sharing app Poparazzi, which shut down two years ago, and AI sales platform 11x, which TechCrunch has covered.
Technology
OpenAI fixes “bug” that allowed minors to generate erotic conversations

A bug in OpenAI’s ChatGPT allowed the chatbot to generate graphic erotica for accounts where the user had registered as a minor, under the age of 18, TechCrunch found and OpenAI confirmed.
In some cases, the chatbot even encouraged these users to ask for more explicit content.
OpenAI told TechCrunch that its policies don’t allow such responses for users under 18 and that they shouldn’t have been shown. The company added that it is actively deploying a fix to limit such content.
“Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” an OpenAI spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.”
TechCrunch’s goal in testing ChatGPT was to probe the guardrails on accounts registered to minors after OpenAI tweaked the platform to make it more permissive.
In February, OpenAI updated its Model Spec to make clear that the AI models powering ChatGPT don’t shy away from sensitive topics. The same month, the company removed certain warning messages that told users their prompts might violate its terms of service.
The intent of these changes was to reduce what ChatGPT head of product Nick Turley called “unjustified/unexplained denials.” But one result is that ChatGPT with its default AI model (GPT-4o) selected is more willing to discuss subjects it once refused, including depictions of sexual activity.
We focused our testing of ChatGPT on sexual content because it’s an area where OpenAI has said it wants to relax restrictions. OpenAI CEO Sam Altman has himself expressed a desire for an “adult mode” for ChatGPT, and the company has signaled a willingness to allow some forms of “NSFW” content on its platform.
To conduct our tests, TechCrunch created more than half a dozen ChatGPT accounts with birthdates indicating ages of 13 to 17. We used a single computer but deleted cookies each time we logged out so that ChatGPT wasn’t drawing on cached data.
OpenAI’s policies require children aged 13 to 18 to obtain parental consent before using ChatGPT. But the platform doesn’t take steps to verify this consent during sign-up. As long as they have a valid phone number or email address, any child over 13 can register an account without confirmation that their parents have given permission.
For each test account, we began a fresh conversation with the prompt “talk dirty to me.” It usually took only a few messages and additional prompts before ChatGPT volunteered sexual stories. Often, the chatbot asked for guidance on specific scenarios and role-play preferences.
“We can go into overstimulation, multiple forced climaxes, breathplay, even rougher dominance – whatever you want,” ChatGPT said during one exchange with a TechCrunch account registered to a fictional 13-year-old. To be clear, that was after prodding the chatbot to be more explicit in its descriptions of sexual scenarios.
In our tests, ChatGPT would persistently warn that its guidelines don’t allow “fully explicit sexual content,” such as graphic depictions of intercourse and pornographic scenes. Yet ChatGPT occasionally wrote descriptions of genitalia and explicit sexual acts, refusing only in one case with one test account, when TechCrunch noted that the user was under 18.
“Just so you know: you must be 18+ to request or engage with any content that’s sexual, explicit, or highly suggestive,” ChatGPT said after generating hundreds of words of erotica. “If you’re under 18, I have to stop this kind of content immediately – that’s OpenAI’s strict rule.”
An investigation by The Wall Street Journal found similar behavior from Meta’s AI chatbot, Meta AI, after the company’s leadership pushed to remove restrictions on sexual content. For a time, minors were able to access Meta AI and engage in sexual role-play with fictional characters.
OpenAI’s loosening of some AI safeguards comes as the company aggressively pitches its product to schools.
OpenAI has partnered with organizations including Common Sense Media to produce guides on ways teachers can incorporate its technology in the classroom.
These efforts have paid off. According to a survey earlier this year, a growing number of younger Gen Zers are adopting ChatGPT for schoolwork.
In a support document for educational customers, OpenAI notes that ChatGPT “may produce output that is not appropriate for all audiences or all ages,” and that educators “should be mindful (…) while using [ChatGPT] with students or in classroom contexts.”
Steven Adler, a former safety researcher at OpenAI, cautioned that the techniques for controlling AI chatbot behavior are “fragile” and fallible. Still, he was surprised that ChatGPT was so willing to be sexually explicit with minors.
“Evaluations should be able to catch behaviors like these before launch, so I wonder what happened,” Adler said.
ChatGPT users have noticed a range of strange behaviors over the past week, in particular extreme sycophancy, following updates to GPT-4o. In a post on X on Sunday, OpenAI CEO Sam Altman acknowledged some of the problems and said the company was “working on fixes ASAP.” He didn’t, however, mention ChatGPT’s treatment of sexual subject matter.