Technology

How the theft of 40 million UK voter registers could have been completely prevented

The cyberattack on the British Electoral Commission, which led to the breach of 40 million people’s electoral records, could have been entirely prevented if the organisation had put basic security measures in place, according to a damning report published this week by the UK’s data protection watchdog.

The report, published Monday by the UK Information Commissioner’s Office, blamed the Electoral Commission, which holds copies of the UK’s electoral register, for a series of security lapses that led to the mass theft of voter data beginning in August 2021.

The Electoral Commission only discovered the breach of its systems more than a year later, in October 2022, and it took until August 2023 to publicly disclose the year-long data breach.

The commission said at the time of its public disclosure that hackers had breached its email servers and stolen, amongst other things, copies of the UK electoral registers. These registers hold details about voters who registered between 2014 and 2022, and include names, postal addresses, phone numbers and non-public voter information.

The UK government later attributed the intrusion to China, with high-ranking officials warning that the stolen data could be used for “wide-scale espionage and international repression of perceived dissidents and critics in the UK.” China has denied any involvement in the breach.

On Monday, the ICO issued a formal reprimand to the Electoral Commission for breaching UK data protection law, saying: “Had the Electoral Commission taken basic steps to protect its systems, such as effective security patching and password management, it is highly likely that this data breach would not have occurred.”

The Electoral Commission, for its part, admitted in a short statement after the report was published that “sufficient protective measures were not implemented to prevent a cyberattack on the Commission”.

Until the ICO report was released, it was not clear what exactly led to the exposure of tens of millions of British voters’ data, or what could have been done differently.

We now know that the ICO blamed the Commission for failing to patch known software vulnerabilities on its email server, which was the initial point of entry for the hackers who made off with reams of voter data. The report also confirms a detail reported by TechCrunch in 2023: that the Commission’s email ran on a self-hosted Microsoft Exchange server.

In its report, the ICO confirmed that at least two groups of malicious hackers breached the Commission’s self-hosted Exchange server during 2021 and 2022 by exploiting a chain of three vulnerabilities, collectively known as ProxyShell, which allowed the hackers to break in, take control, and plant malicious code on the server.

Microsoft released patches for ProxyShell several months earlier, in April and May 2021, but the Commission didn’t install them.
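Detecting exposure to a known, already-patched flaw like ProxyShell is largely mechanical: compare the server’s reported build number against the first build that shipped the fixes. The sketch below is purely illustrative and is not the Commission’s or Microsoft’s actual tooling; the minimum “patched” build numbers are assumptions for the example, not an authoritative patch matrix.

```python
# Illustrative patch-level check for the ProxyShell chain
# (CVE-2021-34473, CVE-2021-34523, CVE-2021-31207), fixed in
# Microsoft's April/May 2021 Exchange updates. Build thresholds
# below are assumed for illustration only.

PATCHED_MIN_BUILDS = {
    "2016": (15, 1, 2242, 4),   # assumed minimum patched Exchange 2016 build
    "2019": (15, 2, 858, 15),   # assumed minimum patched Exchange 2019 build
}

def parse_build(version: str) -> tuple:
    """Turn a dotted build string like '15.2.792.3' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_proxyshell_exposed(exchange_year: str, build: str) -> bool:
    """True if the reported build predates the assumed patched minimum."""
    return parse_build(build) < PATCHED_MIN_BUILDS[exchange_year]

# A server still running an early-2021 build gets flagged:
print(is_proxyshell_exposed("2019", "15.2.792.3"))   # True: predates the patch
print(is_proxyshell_exposed("2019", "15.2.858.15"))  # False: at the patched build
```

In other words, an organisation with even a rudimentary patch-audit loop could have flagged the unpatched server long before the October 2022 discovery.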

By August 2021, the U.S. cybersecurity agency CISA began sounding the alarm that malicious attackers were actively exploiting ProxyShell, by which point any organization with an effective security patching process had already deployed the patches months earlier and was protected. The Electoral Commission was not one of those organizations.

“The Electoral Commission did not have an adequate patching system in place at the time of the incident,” the ICO report reads, describing such patching as “a fundamental measure.”

Among other significant security failings uncovered during the ICO’s investigation, the Electoral Commission had allowed passwords that were “highly susceptible to guessing,” and it also confirmed it was “aware” that parts of its infrastructure were out of date.
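Screening out guessable passwords is similarly routine. A minimal sketch of the kind of check the ICO’s finding implies, using a tiny illustrative sample rather than a real deny-list:

```python
# Reject passwords that are short or appear on a list of commonly
# guessed choices. COMMON_PASSWORDS here is a tiny illustrative
# sample, not a production deny-list (real ones hold millions of
# entries from breach corpora).

COMMON_PASSWORDS = {"password", "password1", "letmein", "qwerty", "123456"}

def is_guessable(password: str) -> bool:
    """Flag passwords that are short or on the common-password list."""
    return len(password) < 12 or password.lower() in COMMON_PASSWORDS

print(is_guessable("Password1"))                     # True: short and on the list
print(is_guessable("correct horse battery staple"))  # False
```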

ICO Deputy Commissioner Stephen Bonner said in a statement on the report and reprimand: “Had the Electoral Commission taken basic steps to protect its systems, such as effective security patching and password management, it is highly likely that this data breach would not have occurred.”

Why didn’t the ICO fine the Electoral Commission?

A completely avoidable cyberattack that exposed the personal data of 40 million British voters might seem like a serious enough breach to warrant a fine rather than a reprimand, yet a public reprimand for lax security is all the ICO issued.

In the past, public sector bodies have been fined for violating data protection rules. But in June 2022, under the previous Conservative government, the ICO announced it would trial a revised approach to enforcement against public bodies.

The regulator said the policy change meant public bodies were unlikely to see large fines for breaches over the subsequent two years, though the ICO suggested incidents would continue to be investigated closely. However, the sector was told to expect increased use of reprimands and other enforcement powers, rather than fines.

In an open letter explaining the move at the time, Information Commissioner John Edwards wrote: “I am not convinced that large fines alone are such an effective deterrent in the public sector. They do not affect shareholders or individual directors in the same way as in the private sector, but come directly from the budget for providing services. The impact of a public sector fine often falls on the victims of a breach, in the form of reduced budgets for key services, rather than on the perpetrators. In effect, those affected are punished twice.”

At first glance, it might seem the Electoral Commission was lucky that its breach fell within the ICO’s two-year trial of a more lenient approach to public sector enforcement.

Alongside the ICO’s announcement that it would be trialling fewer sanctions for public sector data breaches, Edwards said the regulator would adopt a more proactive workflow, engaging with senior management at public bodies to raise standards and ensure compliance with data protection law across government through a harm-prevention approach.

But when Edwards revealed the plan to trial this mix of softer enforcement and proactive outreach, he acknowledged it would take effort on both sides, writing: “We can’t do this alone. There has to be accountability on both sides to deliver these improvements.”

The Electoral Commission’s breach therefore raises wider questions about the success of the ICO’s trial, including whether public sector bodies have kept up their end of the bargain to justify more lenient enforcement.

It certainly doesn’t appear that the Electoral Commission was suitably proactive in assessing breach risks in the early months of the ICO’s trial (that is, before it discovered the hack in October 2022). The ICO’s reprimand, which describes the Commission’s failure to patch a known software vulnerability as a lapse in a “fundamental measure”, for instance, sounds like the very definition of the preventable data breach the regulator said it wanted to tackle through its public sector policy change.

In this case, however, the ICO says it did not apply its more lenient public sector enforcement policy.

In response to questions about why the Electoral Commission was not fined, ICO spokesperson Lucy Milburn told TechCrunch: “Following a thorough investigation, a fine was not considered in this case. Despite the number of people affected, the personal data was limited primarily to the names and addresses contained in the Electoral Register. Our investigation did not find any evidence of misuse of personal data or that this breach caused any direct harm.”

“The Electoral Commission has already taken the necessary steps we expect to see to improve security in the wake of this incident, including implementing an infrastructure modernisation plan, as well as reviewing password policies and multi-factor authentication for all users,” the spokesperson added.

The regulator says there was no fine because no data was misused, or rather because the ICO found no evidence of misuse. The mere exposure of 40 million voters’ information did not meet the ICO’s bar for a fine.

One wonders how far the regulator’s investigation went toward identifying how voter information might have been misused.

Returning to the ICO’s public sector enforcement trial: at the end of June, as the experiment approached its two-year mark, the regulator issued a statement saying it would review the policy before deciding in the autumn on the future of its sectoral approach.

Whether this policy will continue, or whether there will be a shift toward fewer reprimands and bigger fines for public sector data breaches, remains to be seen. Regardless, the Electoral Commission case shows the ICO is reluctant to sanction the public sector unless the exposure of people’s data can be linked to demonstrable harm.

It isn’t clear how a regulatory approach that isn’t designed to deter will help raise data protection standards across government.

This article was originally published on : techcrunch.com

Technology

Bluesky addresses trust and safety concerns around abuse, spam and more


Social media startup Bluesky, which is building a decentralized alternative to X (formerly Twitter), provided an update Wednesday on how it’s approaching various trust and safety issues on its platform. The company is in various stages of developing and piloting a range of initiatives focused on dealing with bad actors, harassment, spam, fake accounts, video safety and more.

To address malicious users or those who harass others, Bluesky says it’s developing new tooling that will be able to detect when multiple new accounts are created and managed by the same person. This could help curb harassment where a bad actor creates several different personas to target their victims.
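One common way to build that sort of detection is to cluster new accounts on shared signup signals. The sketch below is purely illustrative and assumes a hypothetical hashed device-fingerprint field; it is not Bluesky’s actual approach or schema.

```python
# Illustrative multi-account detection: group account handles that
# share the same (hypothetical) device fingerprint recorded at signup.
from collections import defaultdict

def find_linked_accounts(accounts: list) -> list:
    """Return clusters of handles that share a device fingerprint."""
    by_fingerprint = defaultdict(list)
    for acct in accounts:
        by_fingerprint[acct["fingerprint"]].append(acct["handle"])
    # Only clusters with more than one account look suspicious.
    return [handles for handles in by_fingerprint.values() if len(handles) > 1]

accounts = [
    {"handle": "alice", "fingerprint": "fp1"},
    {"handle": "troll1", "fingerprint": "fp9"},
    {"handle": "troll2", "fingerprint": "fp9"},
]
print(find_linked_accounts(accounts))  # [['troll1', 'troll2']]
```

A real system would weigh several weaker signals together rather than trusting any single one, since fingerprints can collide on shared devices.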

Another new experiment will help detect “rude” replies and surface them to server moderators. Like Mastodon, Bluesky will support a network where self-hosters and other developers can run their own servers that connect to Bluesky’s server and others on the network. This federation capability is still in early access. In the longer term, server moderators will be able to decide how they want to deal with people who post rude replies. In the meantime, Bluesky will eventually reduce the visibility of these replies in its app. Repeated rude labels on content will also lead to account-level labels and suspensions, it says.

To curb the use of lists to harass others, Bluesky will remove individual users from a list if they block the list’s creator. Similar functionality was recently introduced to Starter Packs, a type of shared list that helps new users find people to follow on the platform (check out TechCrunch’s Starter Pack).

Bluesky will also scan for lists with offensive names or descriptions, to cut down on the potential for harassing others by adding them to a public list with a toxic or offensive name or description. Lists that violate Bluesky’s Community Guidelines will be hidden from the app until the list owner makes changes that align with Bluesky’s policies. Users who continue to create offensive lists will also face further action, though the company didn’t share details, adding that lists are still an area of active discussion and development.

In the coming months, Bluesky also intends to move to handling moderation reports through its app, using notifications rather than relying on email reports.

To combat spam and other fake accounts, Bluesky is launching a pilot that will attempt to automatically detect when an account is fake, scamming, or spamming users. Paired with moderation, the goal is to be able to take action on accounts within “seconds of receiving a report,” the company said.

One of the more interesting developments is how Bluesky will comply with local laws while still allowing free speech. It will use geotags that allow it to hide some content from users in a particular area to comply with the law.

“This allows Bluesky’s moderation service to maintain flexibility in creating spaces for free expression while also ensuring legal compliance so that Bluesky can continue to operate as a service in these geographic regions,” the corporate shared in a blog post. “This feature will be rolled out on a country-by-country basis, and we will endeavor to inform users of the source of legal requests when legally possible.”
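Mechanically, geography-scoped moderation like this can be modeled as a per-post list of regions where the post must be hidden, filtered against the viewer’s location at read time. The sketch below is an illustrative model only; the field names are hypothetical, not Bluesky’s actual label schema.

```python
# Illustrative geo-label filtering: each post carries the set of
# country codes where it must be hidden, and the client filters the
# feed against the viewer's country.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    hidden_in: set = field(default_factory=set)  # ISO country codes

def visible_posts(posts: list, viewer_country: str) -> list:
    """Return only the posts not geo-restricted for this viewer."""
    return [p for p in posts if viewer_country not in p.hidden_in]

feed = [Post("hello world"), Post("restricted item", hidden_in={"DE"})]
print([p.text for p in visible_posts(feed, "DE")])  # ['hello world']
print([p.text for p in visible_posts(feed, "US")])  # ['hello world', 'restricted item']
```

The design choice here is that the restriction travels with the content as a label rather than being deleted at the source, which is what lets the same post remain visible everywhere the legal demand doesn’t apply.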

To address potential trust and safety issues with the recently added video feature, the team is adding features like the ability to disable autoplay, making sure videos are labeled, and providing the ability to report videos. It’s still evaluating what else may need to be added, which will be prioritized based on user feedback.

When it comes to abuse, the company says its general framework is “a question of how often something happens versus how harmful it is.” The company focuses on addressing high-impact, high-frequency issues, as well as “tracking edge cases that could result in significant harm to a few users.” The latter, while only affecting a small number of people, causes enough “ongoing harm” that Bluesky will take action to prevent the abuse, it says.

Users can raise concerns via moderation reports, emails, and mentions of the @safety.bsky.app account.

This article was originally published on : techcrunch.com

Technology

Apple AirPods Now With FDA-Approved Hearing Aid Feature


The newest AirPods are part of a growing group of hearing aids available over the counter.


Apple’s latest AirPods could help those with hearing impairments. The tech company’s software update has been approved by the FDA for use as a hearing aid.

The FDA approved Apple’s hearing aid feature on September 12. The free update, available on the AirPods Pro 2, will amplify sounds for the hearing impaired. However, the feature is only available to adults 18 and older with an iPhone or iPad compatible with iOS 18.

“Today’s approval of over-the-counter hearing aid software for a commonly used consumer audio product is another step that will increase the availability, affordability, and acceptability of hearing support for adults with mild to moderate hearing loss,” said Dr. Michelle Tarver, acting director of the FDA’s Center for Devices and Radiological Health, in a press release.

The FDA cleared the feature after a clinical trial with 118 participants, whose results showed that users “achieved similar perceived benefits to those who received a professional fit on the same device.” Apple announced the new feature just days before the agency’s approval.

“Hearing health is an essential part of our overall well-being, yet it is often overlooked — in fact, according to Apple’s Hearing Study, as many as 75 percent of people diagnosed with hearing loss go untreated,” said Sumbul Desai, MD, vice president of Health at Apple, in a press release. “We’re excited to deliver breakthrough software features in AirPods Pro that put users’ hearing health first, offering new ways to test and get help for hearing loss.”

What’s more, Apple intends its new AirPods to offer a “world-first” hearing health experience. Noting that 1.5 billion people live with hearing loss, the company says the device also aims to prevent and detect hearing problems.

“Your AirPods Pro will transform into your own personalized hearing aid, amplifying the specific sounds you need in real time, such as parts of speech or elements of your environment,” Desai added in a video announcing the event.

The latest AirPods are part of a growing number of over-the-counter (OTC) hearing aids. These are not only more accessible but also significantly cheaper than prescription medical devices. While they are designed for people with mild to moderate hearing loss, they can initially help those with limited hearing.

AirPods Pro 2 is available now for $249.


This article was originally published on : www.blackenterprise.com

Technology

LinkedIn collected user data for training purposes before updating its terms of service


LinkedIn could have trained AI models on user data without updating its terms.

LinkedIn users in the United States — but not in the EU, EEA, or Switzerland, likely due to data privacy rules in those regions — have an opt-out toggle on the settings screen disclosing that LinkedIn collects personal data to train “AI models to create content.” The toggle isn’t new. But, as first reported by 404 Media, LinkedIn didn’t initially update its privacy policy to reflect this data use.

The terms of service have since been updated, but ordinarily that happens well before a big change such as using user data for a new purpose like this. The idea is that it gives users the option to make changes to their account, or leave the platform, if they don’t like the changes. That doesn’t appear to have happened this time.

So what models is LinkedIn training? Its own, the company says in a Q&A, including models for writing suggestions and post recommendations. But LinkedIn also says that generative AI models on its platform may be trained by a “third-party vendor,” such as its corporate parent Microsoft.

“As with most features on LinkedIn, when you use our platform, we collect and use (or process) data about your use of the platform, including personal data,” the Q&A reads. “This may include your use of generative AI (AI models used to create content) or other AI features, your posts and articles, how often you use LinkedIn, your language preferences, and any feedback you may have provided to our teams. We use this data, in accordance with our privacy policy, to improve or develop the LinkedIn Services.”

LinkedIn previously told TechCrunch that it uses “privacy-enhancing techniques, including redaction and removal of information, to limit personally identifiable information contained in datasets used to train generative AI.”

To opt out of LinkedIn’s data collection, head to the “Data Privacy” section of the LinkedIn settings menu on desktop, click “Data to improve Generative AI,” and then toggle off “Use my data to train AI models to create content.” You can also attempt a more comprehensive opt-out via this form, but LinkedIn notes that opting out will not affect training that has already taken place.

The nonprofit Open Rights Group (ORG) has asked the Information Commissioner’s Office (ICO), the UK’s independent regulator for data protection law, to investigate LinkedIn and other social networks that train on user data by default. Earlier this week, Meta announced it was resuming plans to collect user data for AI training after working with the ICO to simplify the opt-out process.

“LinkedIn is the latest social media company to process our data without asking for our consent,” Mariano delli Santi, a lawyer and policy officer at ORG, said in a press release. “The opt-out model once again proves to be completely inadequate to protect our rights: society cannot be expected to monitor and prosecute every internet company that decides to use our data to train AI. Opt-in consent is not only legally required, but also common sense.”

The Irish Data Protection Commission (DPC), the supervisory authority responsible for monitoring compliance with the GDPR, the EU’s overarching privacy framework, told TechCrunch that LinkedIn had informed it last week that clarifications to its global privacy policy would be published today.

“LinkedIn has informed us that the policy will include an opt-out setting for members who do not want their data used to train AI models that generate content,” a DPC spokesperson said. “This opt-out is not available to EU/EEA members, as LinkedIn does not currently use EU/EEA member data to train or tune these models.”

TechCrunch has reached out to LinkedIn for comment. We will update this article if we hear back.

The need for more data to train generative AI models has led more platforms to reuse or otherwise repurpose their vast troves of user-generated content. Some have even moved to monetize that content — Tumblr owner Automattic, Photobucket, Reddit, and Stack Overflow are among the networks licensing data to AI model developers.

Not all of them have made opting out easy. When Stack Overflow announced it would begin licensing content, several users deleted their posts in protest — only to see those posts restored and their accounts suspended.

This article was originally published on : techcrunch.com