
Technology

Cruise receives a $1.5 million fine for concealing details of a pedestrian accident from a safety authority


Cruise, the autonomous vehicle subsidiary of General Motors, must pay a $1.5 million penalty to the National Highway Traffic Safety Administration after preliminary reports to the safety regulator about last year’s pedestrian crash omitted the fact that the company’s robotaxi dragged a woman 20 feet.

The penalty is part of a consent order announced by the regulator on Monday. The order, which was jointly agreed to by the company and NHTSA, also requires Cruise to submit a “corrective action plan” that outlines the changes it has made to better comply with the regulator’s rules.

“It is critical for companies developing automated driving systems to prioritize safety and transparency from the outset,” NHTSA Deputy Administrator Sophie Shulman said in a statement.

Cruise will also need to submit safety reports to the regulator every 90 days for the next two years, including a report detailing any software updates and a report detailing how its robotaxi fleet complies with road traffic regulations. NHTSA has the option to extend the consent order for an additional year.

Steve Kenner, Cruise’s chief safety officer, said in a statement that the consent order represents a “step forward in a new chapter” for the company and a “firm commitment to greater transparency in our interactions with our regulators.”

The consent order comes almost a year after the infamous San Francisco crash. The pedestrian was first hit by a human-driven vehicle and then thrown into the path of the Cruise robotaxi. Although the Cruise AV braked, it still struck the pedestrian and stopped. The robotaxi then pulled over to the side of the road, dragging the pedestrian along with it.

Cruise and other AV companies are required to submit a series of reports to NHTSA whenever one of their vehicles is involved in a crash. According to NHTSA, the first report Cruise sent the day after the crash didn’t include any information about the woman being dragged. The regulator said the second report, which was due within 10 days of the crash, also omitted this information. It wasn’t until Cruise’s third report, filed a month after the crash, that NHTSA got the full picture.

At the time, the California Department of Motor Vehicles accused Cruise of withholding footage of the robotaxi dragging the pedestrian, which was the basis on which the DMV suspended Cruise’s operating permits.

In Monday’s consent order, NHTSA said Cruise “was aware of the post-crash behavior of the Cruise vehicle” when it filed the first two reports but “omitted this material information from the reports.”

Over the past year, Cruise has undergone a makeover: it has new management and fewer employees, and it is slowly getting its robotaxis back on the road for testing in multiple locations. In June, it paid a fine to the California Public Utilities Commission, and earlier this month it announced it was beginning to bring some AVs back to the Bay Area, though human-operated and only in Mountain View and Sunnyvale.

This article was originally published on : techcrunch.com

Technology

The Pinnit app for Android allows you to search your notification history


The Android notification drawer can be both useful and distracting because of the sheer volume of notifications you receive. You can manage notifications for individual apps through settings, or configure your system so that you don’t miss important updates. An app called Pinnit helps you view missed notifications with its notification history feature and also lets you create custom notifications.

The app was created by Indian Android developer Sasikanth Miriyampalli and Brazilian designer Eduardo Pratti. It first launched in 2020 and lets users pin their tasks to the notification bar so that they don’t forget about them.

Now the developers have come up with a redesigned version of Pinnit that makes notification history one of the app’s primary features. It lets you view all dismissed notifications and even search through them. Search also lets you sort notifications from newest to oldest, and you can filter out certain apps’ notifications for a cleaner view. You can also browse notifications by date using a date picker.

“The initial plan for the app was always to support notification history, but at the time we weren’t able to build it. However, we built a new version of the app from scratch to include this feature along with a dynamic theme and new user interface,” Miriyampalli told TechCrunch in an email.

Image credits: Pinnit

The vision behind the app was that, amid a sea of notifications, people often miss important reminders and tasks.

Pinnit lets users turn an existing notification they don’t want to forget into a persistent one, without losing the original content. The app also lets you create a custom notification with a title and description, just like a reminder.

Image credits: Pinnit

You can use this feature to pin a missed call or an email you want to reply to later. You can also pin a notification for an ongoing sports match to track the score with one tap. While custom notifications currently act mostly as to-dos, the developers want to do more with them in the future.

Miriyampalli said that in the coming months the developers plan to add new features such as location-based reminders, word-based notification filtering in privacy settings, app widgets, and support for large-screen devices.

Pinnit costs a one-time fee of $1.99 after a 14-day trial period.


This article was originally published on : techcrunch.com

Technology

Here’s what’s illegal under California’s 18 (and counting) new AI laws


In September, California Governor Gavin Newsom weighed 38 AI-related bills, including the highly controversial SB 1047, that the state Legislature sent to his desk for final approval. On Sunday he vetoed SB 1047, marking the end of the road for California’s contentious bill meant to prevent AI-related disasters, but he has signed more than a dozen other AI-related bills into law this month. These bills seek to address the most pressing issues surrounding AI: everything from AI risks to fake nudes created by AI image generators to Hollywood studios creating AI clones of deceased performers.

“California, home to most of the world’s leading artificial intelligence companies, is working to leverage these revolutionary technologies to help address pressing challenges while examining their risks,” Governor Newsom’s office said in a press release.

So far, Governor Newsom has signed 18 AI-related bills, some of which are among the most far-reaching U.S. laws on generative AI to date. Here’s what they do.

AI Risk

On Sunday, Governor Newsom signed SB 896 into law, which requires the California Office of Emergency Services to conduct risk analyses of potential threats posed by generative AI. CalOES will work with frontier model companies such as OpenAI and Anthropic to analyze the potential threats AI poses to critical state infrastructure, as well as threats that could lead to mass casualty events.

Training data

Another bill Newsom signed this month requires generative AI providers to disclose the data used to train their AI systems in documentation posted on their websites. AB 2013, which takes effect in 2026, requires AI providers to publish the sources of their datasets, a description of how the data is used, the number of data points in each set, whether copyrighted or licensed data is included, and the time period during which the data was collected, among other criteria.

Privacy and artificial intelligence systems

Newsom also signed AB 1008 on Sunday, which clarifies that California’s privacy laws also cover generative AI systems. This means that if an AI system like ChatGPT reveals someone’s personal information (name, address, biometrics), California’s privacy laws will limit how companies can use and profit from that data.

Education

Newsom signed AB 2876 this month, which requires the California State Board of Education to consider “artificial intelligence skills” in math, science, and history curricula and instructional materials. This means California schools could start teaching students the basics of how AI works, as well as the limitations, impacts, and ethical considerations of using the technology.

Another new law, SB 1288, requires California superintendents to create working groups to explore how AI is being used in public school education.

Definition of artificial intelligence

This month, Newsom signed a law establishing a uniform definition of artificial intelligence in California law. AB 2885 defines artificial intelligence as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”

Healthcare

Another law signed in September, AB 3030, requires healthcare providers to disclose when they use generative AI to communicate with a patient, particularly when those communications contain the patient’s clinical information.

Meanwhile, Newsom recently signed SB 1120, which puts limits on how healthcare providers and health insurers can automate their services. The law ensures that licensed physicians oversee the use of AI tools in these settings.

AI robocalls

Last Friday, Governor Newsom signed a bill requiring robocallers to disclose whether they use AI-generated voices. AB 2905 is meant to prevent a repeat of the fake robocall imitating Joe Biden’s voice that misled many New Hampshire voters earlier this year.

Deepfake pornography

On Sunday, Newsom signed AB 1831 into law, which expands the scope of existing child pornography statutes to include content generated by AI systems.

Last week, Newsom signed two bills targeting the creation and spread of deepfake nudes. SB 926 criminalizes the practice, making it illegal to blackmail someone with AI-generated nude images that resemble them.

SB 981, which also took effect on Thursday, requires social media platforms to establish channels where users can report deepfake nudes that resemble them. The content must then be temporarily blocked while the platform investigates, and permanently removed if confirmed.

Watermarks

Also on Thursday, Newsom signed a bill to help the public identify AI-generated content. SB 942 requires widely used generative AI systems to disclose in their content’s provenance data that the content was generated by AI. For example, all images created by OpenAI’s Dall-E now carry a small tag in their metadata indicating that they were AI-generated.

Many AI companies already do this, and there are several free tools that can help people read this provenance data and detect AI-generated content.

Election deepfakes

Earlier this week, California’s governor signed three bills aimed at combating AI deepfakes that could influence elections.

One of California’s new laws, AB 2655, requires large online platforms like Facebook and X to remove or label election-related AI deepfakes and to create channels for reporting such content. Candidates and elected officials can seek injunctive relief if a large online platform fails to comply.

Another law, AB 2839, targets social media users who post or repost AI deepfakes that could deceive voters about upcoming elections. The law went into effect immediately on Tuesday, and Newsom suggested that Elon Musk could be at risk of violating it.

Under another new California law, AB 2355, AI-generated political advertisements now require outright disclosure. This means Trump may not be able to post AI deepfakes of Taylor Swift endorsing him on Truth Social (she endorsed Kamala Harris). The FCC has proposed a similar disclosure requirement at the national level and has already ruled that robocalls using AI-generated voices are illegal.

Actors and artificial intelligence

The two bills Newsom signed into law earlier this month, both pushed by SAG-AFTRA, the nation’s largest film and television actors’ union, create new standards for California’s media industry. AB 2602 requires studios to obtain permission from an actor before creating an AI-generated replica of their voice or likeness.

Meanwhile, AB 1836 prohibits studios from creating digital replicas of deceased performers without the consent of their estates (the recent “Alien” and “Star Wars” films, among others, used legally approved replicas).

SB 1047 is vetoed

Governor Newsom still has several AI-related bills to decide on before the end of September. SB 1047, however, isn’t one of them: the bill was vetoed on Sunday.

In a letter explaining his decision, Newsom said SB 1047 focused too narrowly on large AI systems and could “give the public a false sense of security.” The governor noted that smaller AI models could be just as dangerous as the models targeted by SB 1047 and said a more flexible regulatory approach is needed.

During a conversation with Salesforce CEO Marc Benioff earlier this month at the Dreamforce 2024 conference, Newsom tipped his hand on SB 1047 and how he thinks about regulating the AI industry more broadly.

“There is one bill that is kind of too big in terms of public discourse and awareness; that’s SB 1047,” Newsom said on stage this month. “What are the demonstrable risks associated with AI and what are the hypothetical risks? I can’t solve everything. What can we solve? That’s why we’re taking this approach across the spectrum on this issue.”

Check back on this article for updates on which AI bills California’s governor does and doesn’t sign into law.

This article was originally published on : techcrunch.com

Technology

YouTube blocks videos of Adele, Green Day, Bob Dylan and others in dispute with SESAC


As of Saturday, many YouTube videos containing music by artists such as Adele, Green Day, Bob Dylan, Nirvana, and R.E.M. have been blocked from streaming in the United States.

For example, if you try to play Dylan’s “Like a Rolling Stone” (whether the classic album recording or a live performance), you’ll instead see the message: “This video contains content from SESAC. It is not available in your country.” Sometimes you may even sit through a pre-roll ad before receiving the message.

However, not all videos featuring these artists are blocked; it’s unclear whether the playable videos are excluded from the current dispute or have simply been missed.

In statements to the press and on social media, YouTube blames the situation on failed negotiations with SESAC, a performing rights organization that says it represents more than 35,000 artists and music publishers.

“Unfortunately, despite our best efforts, we were unable to reach a fair settlement before it expired,” YouTube said. “We take copyright very seriously and content represented by SESAC is no longer available on YouTube in the US. We are in active discussions with SESAC and hope to reach a new agreement as soon as possible.”

The situation is reminiscent of the dispute between Universal Music Group and TikTok earlier this year, which saw UMG pull songs by artists including Taylor Swift, Billie Eilish, and Ariana Grande from the short-form video platform while it negotiated royalties.

Unlike UMG, SESAC is not a record label but rather an organization that collects royalties for songwriters and publishers, similar to ASCAP and BMI. Beyond the artists mentioned above, it also represents Burna Boy, George Clinton, Kenny Rogers, Kings of Leon, and many others.

This article was originally published on : techcrunch.com