
Technology

How the theft of 40 million UK voter registers could have been completely prevented

The cyberattack on the British Electoral Commission, which led to the breach of 40 million people’s electoral records, could have been entirely prevented if the organisation had put basic security measures in place, according to a damning report published this week by the UK’s data protection watchdog.

The report, published by the UK Information Commissioner’s Office on Monday, blamed the Electoral Commission, which holds copies of the UK’s electoral register, for a series of security lapses that led to the mass theft of voter data beginning in August 2021.

The Electoral Commission only discovered the breach of its systems more than a year later, in October 2022, and it took until August 2023 to publicly disclose the year-long data breach.

The commission said at the time of its public disclosure that hackers had breached its email servers and stolen, among other things, copies of the UK electoral registers. These registers hold details on voters who registered between 2014 and 2022, including names, postal addresses, phone numbers and non-public voter information.

The UK government later attributed the intrusion to China, with senior officials warning that the stolen data could be used for “wide-scale espionage and international repression of perceived dissidents and critics in the UK.” China has denied any involvement in the breach.

On Monday, the ICO issued a formal warning to the Electoral Commission for breaching UK data protection law, adding: “Had the Electoral Commission taken basic steps to protect its systems, such as effective security patching and password management, it is highly likely that this data breach would not have occurred.”

The Electoral Commission, for its part, admitted in a short statement published after the report that “sufficient protective measures were not implemented to prevent a cyberattack on the Commission”.

Until the ICO’s report was released, it was not clear what exactly led to the exposure of tens of millions of British voters’ data, or what could have been done differently.

We now know that the ICO blames the Commission for failing to patch known software vulnerabilities on its email server, which was the initial point of entry for the hackers who made off with reams of voter data. The report also confirms a detail TechCrunch reported in 2023: that the Commission’s email ran on a self-hosted Microsoft Exchange server.

In its report, the ICO confirmed that at least two groups of malicious hackers broke into the Commission’s self-hosted Exchange server during 2021 and 2022 by exploiting a chain of three vulnerabilities, collectively known as ProxyShell, which allowed the hackers to break in, take control, and plant malicious code on the server.

Microsoft released patches for ProxyShell several months earlier, in April and May 2021, but the Commission didn’t install them.

By August 2021, the U.S. cybersecurity agency CISA was sounding the alarm that malicious attackers were actively exploiting ProxyShell. By that point, any organization with an effective security patching process had deployed the fixes months earlier and was already protected. The Electoral Commission was not one of those organizations.
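To make concrete what an “effective security patching process” looks like in practice, here is a minimal sketch of an automated check, assuming an asset inventory can report each Exchange server’s build string; the “patched minimum” build numbers are illustrative placeholders rather than Microsoft’s authoritative figures for the April/May 2021 security updates.

```python
# Minimal sketch: flag on-premises Exchange servers whose build predates the
# ProxyShell fixes. Assumes an inventory/monitoring system supplies the build
# string; the minimum builds below are illustrative placeholders, not
# Microsoft's authoritative numbers.

PATCHED_MINIMUMS = {
    "Exchange 2013": "15.0.1497.15",  # placeholder build
    "Exchange 2016": "15.1.2242.8",   # placeholder build
    "Exchange 2019": "15.2.858.10",   # placeholder build
}

def parse_build(build: str) -> tuple[int, ...]:
    """Turn a dotted build string like '15.1.2176.2' into a comparable tuple."""
    return tuple(int(part) for part in build.split("."))

def needs_proxyshell_patch(release_line: str, reported_build: str) -> bool:
    """Return True if the reported build is older than the patched minimum."""
    minimum = PATCHED_MINIMUMS.get(release_line)
    if minimum is None:
        raise ValueError(f"unknown Exchange release line: {release_line}")
    return parse_build(reported_build) < parse_build(minimum)

if __name__ == "__main__":
    # Example: a build string pulled from the inventory.
    print(needs_proxyshell_patch("Exchange 2016", "15.1.2176.2"))  # True -> unpatched
```

Running a check like this on a schedule, and treating any unpatched result as an incident, is the sort of routine hygiene the ICO describes as a fundamental measure.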

“The Electoral Commission did not have an adequate patching system in place at the time of the incident,” the ICO report reads. “This failing relates to a fundamental measure.”

Among other significant security failings uncovered by the ICO’s investigation, the Electoral Commission had allowed passwords that were “highly susceptible to guessing”, and it also confirmed it was “aware” that parts of its infrastructure were out of date.
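As a rough illustration of the “basic password management” the ICO refers to, the sketch below rejects candidates that are too short or that sit on a deny-list of commonly guessed passwords; the deny-list entries and the 12-character minimum are assumptions for illustration, not the Commission’s actual policy.

```python
# Minimal sketch of a basic password policy check: enforce a minimum length
# and reject entries from a deny-list of commonly guessed passwords. The
# specific rules here are illustrative assumptions only.

COMMON_PASSWORDS = {"password", "password1", "qwerty123", "letmein", "welcome1"}
MIN_LENGTH = 12

def password_acceptable(candidate: str) -> bool:
    """Return True only if the candidate passes both basic policy checks."""
    if len(candidate) < MIN_LENGTH:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    return True

if __name__ == "__main__":
    for pw in ("Summer2021", "correct-horse-battery-staple"):
        print(pw, "->", "accepted" if password_acceptable(pw) else "rejected")
```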

ICO Deputy Commissioner Stephen Bonner said in a statement accompanying the report and reprimand: “Had the Electoral Commission taken basic steps to protect its systems, such as effective security patching and password management, it is highly likely that this data breach would not have occurred.”

Why didn’t the ICO fine the Electoral Commission?

A completely avoidable cyberattack that exposed the personal data of 40 million British voters might seem like a serious enough breach to warrant a fine for the Electoral Commission rather than a reprimand, but the ICO issued only a public reprimand for its lax security.

In the past, public sector bodies have been hit with fines for violating data protection rules. But in June 2022, under the previous Conservative government, the ICO announced it would trial a revised approach to enforcement against public bodies.

The regulator said the policy change meant public bodies were unlikely to see large fines for breaches over the subsequent two years, although the ICO said incidents would continue to be closely investigated. However, the sector was told to expect increased use of reprimands and other enforcement powers, rather than fines.

In an open letter explaining the move at the time, Information Commissioner John Edwards wrote: “I am not convinced that large fines alone are such an effective deterrent in the public sector. They do not affect shareholders or individual directors in the same way as in the private sector, but come directly from the budget for providing services. The impact of a public sector fine often falls on the victims of a breach, in the form of reduced budgets for key services, rather than on the perpetrators. In effect, those affected are punished twice.”

At first glance, it might seem that the Electoral Commission was lucky its breach came to light during the ICO’s two-year trial of a more lenient approach to public sector enforcement.

Alongside the announcement that it would trial fewer sanctions for public sector data breaches, Edwards said the regulator would adopt a more proactive approach, engaging with senior management at public bodies to raise standards and ensure compliance with data protection law across government through a harm-prevention approach.

But when Edwards unveiled the plan to test a combination of softer enforcement and proactive outreach, he acknowledged it would take effort from both sides, writing: “We can’t do this alone. There has to be accountability on both sides to deliver these improvements.”

The Electoral Commission’s breach could therefore raise wider questions about the success of the ICO’s approach, including whether public sector bodies have kept up their end of the bargain to justify the more lenient enforcement.

It certainly does not appear that the Electoral Commission was suitably proactive in assessing its breach risk during the early months of the ICO’s trial – that is, before it discovered the hack in October 2022. The ICO’s reprimand, which describes the Commission’s failure to patch a known software vulnerability as a failing in a “fundamental measure”, looks like the very definition of the kind of preventable data breach the regulator said it wanted to tackle through its change in public sector policy.

In this case, however, the ICO says it did not apply its more lenient public sector enforcement policy.

In response to questions about why the Electoral Commission was not fined, ICO spokeswoman Lucy Milburn told TechCrunch: “Following a thorough investigation, a fine was not considered in this case. Despite the number of people affected, the personal data was limited primarily to the names and addresses contained in the Electoral Register. Our investigation did not find any evidence of misuse of personal data or that this breach caused any direct harm.”

“The Electoral Commission has already taken the necessary steps we expect to see to improve security in the wake of this incident, including implementing an infrastructure modernisation plan, as well as reviewing password policies and multi-factor authentication for all users,” the spokesperson added.

The regulator says there was no fine because no data was misused – or rather, because the ICO found no evidence of misuse. The mere exposure of 40 million voters’ information did not, by itself, meet the ICO’s bar for a penalty.

One wonders how far the regulator’s investigation went in trying to establish how the voter information might have been misused.

Returning to the ICO’s public sector enforcement trial: at the end of June, as the experiment approached its two-year mark, the regulator issued a statement saying it would review the policy before deciding in the autumn on the future of its sectoral approach.

Whether the policy continues, or whether there is a shift back towards fewer reprimands and bigger fines for public sector data breaches, remains to be seen. Regardless, the Electoral Commission case shows the ICO is reluctant to sanction the public sector unless the exposure of people’s data can be tied to demonstrable harm.

It is not clear how a regulatory approach with no intended deterrent effect will help raise data protection standards across government.

This article was originally published on techcrunch.com

Technology

SpaceX wants to test refueling spacecraft in space early next year

SpaceX will attempt to transfer propellant from one orbiting Starship to another as early as March next year, a technical milestone that will pave the way for an uncrewed demonstration landing of the spacecraft on the Moon, a NASA official said this week.

Much has been made of Starship’s potential to transform the commercial space industry, but NASA is also pinning its hopes on the vehicle to return humans to the Moon under the Artemis program. The space agency has awarded the company a $4.05 billion contract for two crewed Starship vehicles, with the upper stage (also called Starship) landing astronauts on the lunar surface for the first time since the Apollo era. A crewed landing is currently scheduled for September 2026.

Kent Chojnacki, deputy program manager for NASA’s Human Landing System (HLS) program, provided more details on how closely the agency is working with SpaceX on this critical mission in an interview with Spaceflight Now. It will come as no surprise that NASA is paying close attention to the Starship test campaign, which has seen five launches so far.

SpaceX made history in its latest test on Oct. 13, when it caught the Super Heavy rocket booster in mid-air for the first time using the “chopstick” arms attached to the launch tower.

“Every time it comes to (launch), we learn a lot,” Chojnacki said.

Chojnacki’s work history includes numerous roles in the Space Launch System (SLS) program, which oversees development of the massive rocket of the same name, built by a handful of traditional aerospace contractors. The first SLS rocket launched the Artemis I mission in November 2022, and future rockets will launch additional missions under the Artemis program. However, no part of the rocket is reusable, which is why NASA spends more than $2 billion on each launch vehicle.

The first contracts under the SLS program were awarded over ten years ago under a so-called cost-plus model, meaning NASA pays a base amount plus expenses. (This type of contract has been heavily criticized for encouraging long development schedules and high costs.) In contrast, the HLS contracts are “fixed price” – SpaceX receives a set payment of $2.99 billion, provided certain milestones are met.

Chojnacki said NASA has taken very different approaches to the HLS and SLS programs, even outside of the contracting model.

“SLS was a very traditional NASA program. NASA defined a very stringent set of requirements and dictated the fuel supplies, dictated everything to the various elements. They flowed downwards. These were cost-plus programs where aerospace companies responded and we worked in a very traditional way,” he said. “Moving to HLS, we have a lot of moving parts at once. Currently, SpaceX’s first landing contract includes 27 system requirements. Twenty-seven, and we tried to be as relaxed as possible.”

Under the contract, SpaceX must pass mandatory design reviews, but it can also propose additional milestones tied to payments. One of the requirements placed on SpaceX is a demonstration of ship-to-ship propellant transfer. Those tests are scheduled to begin around March 2025 and conclude in the summer, Chojnacki said.

“This will be the first time this has been demonstrated at this scale, so it’s a huge building block. And once you do that, you’ve really opened the door to moving huge amounts of cargo and payload beyond the Earth. If you manage to have a spacecraft loaded with propellant, that will be the next step towards the uncrewed demonstration.”

Beyond testing, Starship’s next major review will be the Critical Design Review (CDR) in summer 2025, when NASA will certify that the company has met all 27 system requirements. Chojnacki said NASA astronauts also meet with SpaceX once a month to provide input on the interior of Starship. The company is building mock-ups of the crew cabin, including the sleeping quarters and laboratory, in Boca Chica. NASA expects to receive a design update this month, ahead of next year’s CDR.

That’s not the only place NASA has shared its input: it has also provided feedback on some aspects of the rocket’s design, such as the vehicle’s cryogenic components, and has performed tests on thermal plates that help keep cryogenic propellants cold.

If all goes according to plan, SpaceX will send astronauts to the Moon in September 2026.

“It’s definitely a date we’re working towards. We have no known roadblocks. We have some things that need to be demonstrated for the first time, and we have a plan for how to demonstrate them,” Chojnacki said.

This article was originally published on techcrunch.com

Technology

Microsoft and A16Z are putting aside their differences and joining hands in protest against artificial intelligence regulations


The two biggest forces in two deeply intertwined tech ecosystems – large incumbents and startups – have taken a break from counting their money to jointly demand that the government stop even considering regulations that might affect their financial interests, or, as they prefer to call it, innovation.

“Our two companies may not agree on everything, but it’s not about our differences,” write these people of very different perspectives and interests: A16Z founders and general partners Marc Andreessen and Ben Horowitz, and Microsoft CEO Satya Nadella and president and chief legal officer Brad Smith. A truly cross-sectional gathering, representing both big business and big money.

But it’s the little guys they are supposedly looking out for. That is, all the companies that might be affected by the latest attempted abuse of regulatory power: SB 1047.

Imagine being fined for improperly disclosing an open model! A16Z general partner Anjney Midha called it a “regressive tax” on startups and “blatant regulatory capture” by the Big Tech companies that, unlike Midha and his impoverished colleagues, could afford the lawyers needed to comply.

Except that was all disinformation spread by Andreessen Horowitz and the other moneyed interests that actually stood to be affected as backers of billion-dollar enterprises. In fact, small models and startups would have been only marginally affected, because the proposed law specifically protected them.

It’s odd that the very kind of targeted carve-out for “Little Tech” that Horowitz and Andreessen routinely advocate was distorted and minimized by the lobbying campaign they and others waged against SB 1047. (The bill’s sponsor, California State Senator Scott Wiener, talked about this whole affair recently at Disrupt.)

The bill had its problems, but its opponents greatly exaggerated the cost of compliance and never meaningfully substantiated claims that it would chill or burden startups.

It’s part of a long-established pattern in which Big Tech – to which, despite their posturing, Andreessen and Horowitz are closely tied – operates at the state level, where it can win (as with SB 1047), while calling for federal solutions it knows will never come, or which will have no teeth thanks to partisan bickering and congressional ineptitude on technical issues.

This joint statement of “political opportunity” is the second part of the game: having torpedoed SB 1047, they can say they did so only in order to support federal policy. Never mind that we’re still waiting for the federal privacy law tech companies have pushed for over a decade while fighting state laws.

And what policies do they support? “A different responsible market approach,” in other words: hands off our money, Uncle Sam.

Regulation should be based on a “science-based and standards-based approach, recognizing regulatory frameworks that focus on the use and misuse of technology” and should “focus on the risk of bad actors exploiting artificial intelligence.” What this means is: no proactive regulation, only reactive penalties when criminals use unregulated products for criminal purposes. This approach worked great in the whole FTX situation, so I can see why they support it.

“Regulation should only be implemented if the benefits outweigh the costs.” It would take thousands of words to explain all the ways this idea, expressed in this context, is funny. But basically, they are suggesting that the fox be brought onto the henhouse planning committee.

Regulators should “allow developers and startups the flexibility to choose AI models to use wherever they build solutions, and not tilt the playing field in favor of any one platform.” The implication is that there is some plan afoot to require permission to use one model or another. Since that is not the case, this is a straw man.

Here is a lengthy passage that I have to quote in full:

The right to learn: Copyright aims to promote the progress of science and the useful arts by extending protection to publishers and authors to encourage them to make new works and knowledge available to the public, but not at the expense of society’s right to learn from those works. Copyright law should not be co-opted to imply that machines should be prevented from using data – the foundation of AI – to learn in the same way as humans. Unprotected knowledge and facts, whether or not contained in protected subject matter, should remain free and accessible.

To be clear, what is being asserted here is that software operated by billion-dollar corporations has the “right” to access any data because it should be able to learn from it “in the same way as humans.”

First of all, no. These systems are not like people; they produce data that mimics the human activity in their training data. They are complex statistical projection programs with a natural-language interface. They have no more “right” to any document or fact than Excel does.

Second, the idea that “facts” – by which they mean “intellectual property” – are the only thing these systems are interested in, and that some kind of fact-hoarding cabal is working to prevent them, is a manufactured narrative we have seen before. Perplexity made the “facts belong to everyone” argument in its public response to a lawsuit alleging systematic content theft, and its CEO, Aravind Srinivas, repeated the fallacy to me onstage at Disrupt, as if they were being sued for knowing trivia like the distance from the Earth to the Moon.

While this is not the place to fully unpack this particular straw-man argument, let me simply point out that while facts are indeed free agents, there are real costs to creating them – say, through original reporting and scientific research. That is why the copyright and patent systems exist: not to prevent the wide sharing and use of intellectual property, but to encourage its creation by ensuring that it can be assigned real value.

Copyright law is far from perfect, and it is probably abused as often as it is properly used. But it is not being “co-opted to suggest that machines should be prevented from using data” – it is being applied to ensure that bad actors do not bypass the systems of value we have built around intellectual property.

What is being asked for here is fairly clear: let the systems we own, operate and profit from freely use the valuable work of others without compensation. To be fair, that part is “in the same way as humans,” because it is humans who design, run and deploy these systems, and those humans don’t want to pay for anything they don’t have to – and they don’t want regulations to change that.

There are many other recommendations in this little policy document, no doubt covered in greater detail in the versions sent directly to lawmakers and regulators through official lobbying channels.

Some of the ideas are undeniably good, if a little self-serving: “fund digital literacy programs that help people understand how to use artificial intelligence tools to create and access information.” Good! Of course, the authors are heavily invested in those tools. Support “Open Data Commons – collections of accessible data managed in the public interest.” Great! “Examine procurement practices to enable more startups to sell technology to the government.” Excellent!

But these more general, positive recommendations are the kind of thing the industry asks for every year: invest in public resources and speed up government processes. These palatable but inconsequential suggestions are merely vehicles for the more important ones described above.

Ben Horowitz, Brad Smith, Marc Andreessen and Satya Nadella want the government to step back from regulating this lucrative new development, let industry decide which regulations are worth the trade-off, and invalidate copyright in a way that more or less acts as a blanket pardon for the illegal or unethical practices that many believe enabled the rapid rise of AI. Those are the principles that matter to them, whether or not kids end up digitally literate.

This article was originally published on techcrunch.com

Technology

Nigerian technology company Moniepoint secures $110 million investment

Moniepoint, a Nigerian fintech company that serves millions of entrepreneurs across Africa, has secured $110 million in financing to expand its business. The funding came from four entities: London-based Development Partners International and Lightrock, which had previously invested in the company, and new investors Google’s Africa Investment Fund and Verod Capital.

Moniepoint – formerly known as TeamApt – has been named Africa’s fastest-growing fintech company by the Financial Times for two consecutive years.

The company provides a variety of financial services, including bank accounts, loans and business management tools. It processes more than 800 million transactions, worth over $17 billion, every month.

Moniepoint co-founder Tosin Eniolorunda says the new funding will help the company improve its customer experience.

“Our mission is to help our customers solve their challenges by making our platform more innovative, transparent and secure. Proceeds from this fundraise will accelerate our efforts towards financial inclusion and supporting Africa’s entrepreneurial potential. I want to sincerely thank the entire Moniepoint team for making this achievement possible,” Eniolorunda said in a statement shared with Afrotech.

Adefolarin Ogunsanya, partner at Development Partners International, also expressed his excitement at contributing to Moniepoint’s growth.

“We are delighted to lead this investment round in Moniepoint, one of the most exciting and fastest-growing companies in Africa. As a profitable company led by an excellent management team with a clear strategic vision, Moniepoint is well positioned to continue its impressive growth trajectory while driving financial inclusion for underserved businesses and individuals across Africa,” Afrotech reports.

Moniepoint officials say they will continue to prioritize Nigeria, but the company plans to expand into other African countries. It is currently assessing those markets to evaluate how effectively it can operate in them and meet customer needs.

The successful fintech company has previously received support from QED Investors, British International Investment (BII) and Endeavor Catalyst. Since its founding in 2015, the company has raised over $180 million.


This article was originally published on www.blackenterprise.com