
Technology

Former NSA hacker and former Apple researcher launches startup to protect Apple devices


Two veteran security experts are launching a startup that aims to help other cybersecurity product makers improve their protection of Apple devices.

Their startup is called DoubleYou, a name taken from the initials of its co-founder, Patrick Wardle, who worked at the U.S. National Security Agency from 2006 to 2008. Wardle then spent years as an offensive security researcher before switching to independently researching the defensive security of Apple’s macOS. Since 2015, Wardle has developed free and open source macOS security tools under the umbrella of his Objective-See Foundation, which also organizes the Apple-centric Objective by the Sea conference.

The startup’s other co-founder is Mikhail Sosonkin, who was also an offensive cybersecurity researcher for years before working at Apple from 2019 to 2021. Wardle, who described himself as the “mad scientist in the lab,” said Sosonkin was “the right partner” he needed to turn his ideas into reality.

“Mike may not be making waves, but he’s an amazing software engineer,” Wardle said.

The idea behind DoubleYou is that, compared to Windows, there are still relatively few good security products for macOS and iPhones. That’s a problem because Macs have become an increasingly popular choice for businesses worldwide, which means malicious hackers are increasingly targeting Apple computers too. Wardle and Sosonkin said there aren’t many skilled macOS and iOS security researchers, which means companies struggle to develop their products.

Wardle and Sosonkin’s idea is to take a page out of the playbook of the hackers who focus on attacking systems and apply it to defense. Several offensive cybersecurity companies offer modular products capable of delivering a full exploit chain, or just a single component of it. The DoubleYou team wants to do exactly that, but with defensive tools.

“Instead of building an entire product from scratch, for example, we really took a step back and said, ‘hey, how do adversaries do this?’” Wardle said in an interview with TechCrunch. “Can we basically take the same model of basically democratizing security, but from a defensive standpoint where we develop individual capabilities that we can then license and have other companies integrate into their security products?”

Wardle and Sosonkin believe they can.

And while the co-founders haven’t finalized the full list of modules they want to offer, they said their product will certainly include a core offering that analyzes every new process to detect and block untrusted code (which on macOS means code that isn’t “notarized” by Apple), and that monitors and blocks anomalous DNS network traffic, which can expose malware when it connects to domains known to be associated with hacking groups. Wardle said that, at least for now, these tools will be primarily for macOS.
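To illustrate the DNS-monitoring idea in the simplest possible terms, here is a toy sketch of blocklist-based DNS query checking. The domain names and blocklist are hypothetical examples, and this is not DoubleYou’s actual code or design; real products hook DNS resolution at the system level and use curated threat intelligence feeds.

```python
# Illustrative sketch only: a toy version of blocklist-based DNS monitoring.
# The domains and blocklist below are made-up examples, not real threat data.

KNOWN_BAD_DOMAINS = {"c2.example-apt.net", "beacon.bad-actor.io"}

def is_suspicious_query(domain: str, blocklist=KNOWN_BAD_DOMAINS) -> bool:
    """Flag a DNS query if the domain, or any parent domain, is on the blocklist."""
    domain = domain.lower().rstrip(".")
    labels = domain.split(".")
    # Check the full domain and each parent, e.g. a.b.c -> a.b.c, b.c
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in blocklist:
            return True
    return False

if __name__ == "__main__":
    print(is_suspicious_query("update.c2.example-apt.net"))  # True
    print(is_suspicious_query("apple.com"))                  # False
```

A real module would sit on the system’s DNS path (for example, via a network extension) rather than being called per lookup, but the matching logic is the same shape.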

The founders also want to develop tools to monitor software that attempts to become persistent (a hallmark of malware), to detect cryptocurrency miners and ransomware based on their behavior, and to detect when software tries to gain access to the webcam and microphone.

Sosonkin described it as an “off-the-shelf, catalog-based approach” in which each customer can pick and choose the components they want to use in their product. Wardle described it as being a supplier of car parts rather than a maker of the whole car. This approach, Wardle added, is similar to the one he took when developing the various Objective-See tools, such as OverSight, which monitors microphone and webcam usage, and KnockKnock, which monitors whether applications try to persist.

“We don’t have to use new technology to make it work. We need to actually take the tools that are available and put them in the right place,” Sosonkin said.

Wardle and Sosonkin’s plan is not to take any outside investment, at least for now. The co-founders said they want to remain independent and avoid some of the pitfalls of outside investment, namely the pressure to scale too much and too quickly, which would let them focus on developing their technology.

“Maybe in some ways we are like stupid idealists,” Sosonkin said. “We just want to catch some malware. Hopefully we can make some money along the way.”

This article was originally published on: techcrunch.com

Technology

Google lays off workers, Tesla cans its Supercharger team, and UnitedHealthcare reveals security lapses


Welcome to Week in Review (WiR), TechCrunch’s regular newsletter recapping the week in tech. This edition is a bit bittersweet for me: it will be my last, at least for a while. I’ll soon be shifting my focus to a new AI newsletter that I’m very excited about. Stay tuned for more information!

Now, let’s get to the news: This week, Google laid off employees from its Flutter, Dart, and Python teams just weeks before its annual I/O developer conference. In total, 200 people from Google’s “core” teams were laid off, including people working on application platforms and other engineering roles.

Elsewhere, Tesla CEO Elon Musk has gutted the company’s team responsible for overseeing its Supercharger station network in a new round of layoffs, despite recently winning over major automakers like Ford and General Motors. The cuts are so sweeping that Musk suggested in an email that they may force Tesla to slow the rollout of its Supercharger network.

UnitedHealthcare CEO Andrew Witty told a House subcommittee that the ransomware gang that breached U.S. health tech giant Change Healthcare, a UnitedHealthcare subsidiary, used a set of stolen credentials to access Change Healthcare systems that weren’t protected by multi-factor authentication. Last week, UnitedHealthcare reported that hackers had stolen health data on “a significant portion of people in America.”

Lots else happened. We recap it all in this edition of WiR; but first, a reminder to sign up to receive the WiR newsletter in your inbox every Saturday.

News

Hallucinations, hallucinations: OpenAI is facing another privacy complaint in the EU. This one, filed by the privacy rights nonprofit noyb on behalf of an individual complainant, targets the inability of its AI-powered chatbot ChatGPT to correct misinformation it generates about individuals.

Just get out… of Sam’s Club: Sam’s Club customers who pay at the register or through the Scan & Go mobile app can now exit the store without having their purchases double-checked. The technology, unveiled at January’s Consumer Electronics Show, has already been rolled out to 20% of Sam’s Club locations.

TikTok bypasses Apple’s rules: TikTok is presenting some users with a link to a website where they can buy the coins used to tip digital creators on the platform. Normally, those coins have to be bought via in-app purchase, which entails a 30% commission paid to Apple, suggesting that TikTok may be trying to circumvent Apple’s App Store rules.

NIST’s GenAI platform: The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests technology for the U.S. government, companies and the broader public, has launched NIST GenAI, a new program to assess generative AI technologies, including text- and image-generating AI.

Getir bows out: Getir, the instant grocery delivery giant, has withdrawn from the U.S., U.K. and Europe to focus on Turkey, its home market. The company, once valued at nearly $12 billion, said the move would affect thousands of gig workers and full-time employees.

Analysis

On Techstars’ cold war: Dom’s stellar reporting pulls back the curtain on a year of financial losses and staff cuts at startup accelerator Techstars, whose CEO, Maëlle Gavet, has been a controversial force for change.

AI-powered coding: Yours truly takes a look at Copilot Workspace, which is something of an evolution of GitHub’s AI-powered coding assistant Copilot into a more general tool, building on recently introduced features like Copilot Chat, which lets developers ask questions about their code in natural language.

Autonomous car racing: Tim Stevens writes about a racing event in Abu Dhabi in which an autonomous car faced off against a Formula 1 driver.

This article was originally published on: techcrunch.com

Technology

Why RAG won’t solve generative AI’s hallucination problem


Hallucinations, essentially the lies that generative AI models tell, are a big problem for businesses looking to integrate the technology into their operations.

Because models have no real intelligence and are simply predicting words, images, speech, music and other data according to a private schema, they sometimes get it wrong. Badly wrong. In a recent piece in The Wall Street Journal, a source recounts an instance in which Microsoft’s generative AI invented meeting attendees and implied that conference calls were about subjects that weren’t actually discussed on the call.

As I wrote a while back, hallucinations may be an unsolvable problem with today’s transformer-based model architectures. But a number of generative AI vendors suggest that they can be more or less eliminated through a technical approach called retrieval augmented generation (RAG).

Here’s how one vendor, Squirro, pitches it:

At the core of the offering is the concept of Retrieval Augmented LLMs or Retrieval Augmented Generation (RAG) embedded in the solution… (our generative AI) is unique in its promise of zero hallucinations. Every piece of information it generates is traceable to a source, ensuring credibility.

And here’s a similar pitch from SiftHub:

Using RAG technology and fine-tuned large language models with industry-specific knowledge training, SiftHub allows companies to generate personalized responses with zero hallucinations. This guarantees increased transparency and reduced risk, and inspires absolute trust to use AI for all their needs.

RAG was pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and lead author of the 2020 paper that coined the term. Applied to a model, RAG retrieves documents possibly relevant to a question (for example, a Wikipedia page about the Super Bowl) using what’s essentially a keyword search, and then asks the model to generate an answer given this additional context.
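The retrieve-then-generate loop can be sketched in a few lines. This is a deliberately minimal toy, not Lewis et al.’s actual system: the two-document corpus, the keyword-overlap scoring, and the prompt template are all illustrative stand-ins for the real indexes, embedding-based retrievers, and model calls used in production.

```python
# Minimal RAG sketch: keyword-overlap retrieval plus prompt augmentation.
# The corpus, scoring, and prompt format are toy examples for illustration.
import re

CORPUS = {
    "super_bowl": "The Kansas City Chiefs won Super Bowl LVIII in February 2024.",
    "rag": "Retrieval augmented generation grounds model answers in documents.",
}

def tokenize(text: str) -> set[str]:
    """Lowercase and split into alphanumeric word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, corpus=CORPUS) -> str:
    """Return the document sharing the most keywords with the question."""
    q = tokenize(question)
    return max(corpus.values(), key=lambda doc: len(q & tokenize(doc)))

def build_prompt(question: str) -> str:
    """Prepend the retrieved context so the model answers from it, as RAG does."""
    context = retrieve(question)
    return f"Context: {context}\n\nAnswer using only the context above.\nQuestion: {question}"
```

For a question like “Who won the Super Bowl last year?”, the overlap on “won,” “Super,” and “Bowl” pulls in the right document, which is exactly the “knowledge-intensive” case where keyword retrieval works well.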

“When you interact with a generative AI model like ChatGPT or Llama and ask a question, by default the model responds based on its ‘parametric memory’ – i.e. knowledge stored in its parameters as a result of training on massive data from the Internet,” explained David Wadden, a research scientist at AI2, the artificial intelligence research arm of the nonprofit Allen Institute. “But just as you are likely to give more accurate answers if you have a source of information in front of you (e.g. a book or file), the same is true for some models.”

RAG is undeniably useful: it allows you to attribute things a model generates to retrieved documents to verify their factuality (with the added benefit of avoiding potentially copyright-infringing regurgitation). RAG also lets companies that don’t want their documents used for model training, say, companies in highly regulated industries such as healthcare and law, allow models to draw on those documents in a more secure and temporary way.

But what RAG certainly can’t do is stop a model from hallucinating. And it has limitations that many vendors gloss over.

Wadden says RAG is most effective in “knowledge-intensive” scenarios where a user wants to use a model to address an “information need,” for example, to find out who won the Super Bowl last year. In these scenarios, the document that answers the question is likely to contain many of the same keywords as the question (e.g., “Super Bowl,” “last year”), making it relatively easy to find via keyword search.

Things get trickier with reasoning-intensive tasks such as coding and math, where it’s harder to specify in a keyword-based search query the concepts needed to answer a question, much less identify which documents might be relevant.

Even with basic questions, models can get “distracted” by irrelevant content in documents, particularly long documents where the answer isn’t obvious. Or they can, for reasons as yet unknown, simply ignore the contents of retrieved documents and rely instead on their parametric memory.

RAG is also expensive in terms of the hardware needed to apply it at scale.

That’s because retrieved documents, whether from the web, an internal database or somewhere else, have to be stored in memory, at least temporarily, so the model can refer back to them. Another expense is the compute for the increased context a model has to process before generating its response. For a technology already notorious for the amount of compute and electricity it requires even for basic operations, this amounts to a serious consideration.

That’s not to suggest RAG can’t be improved. Wadden noted many ongoing efforts to train models to make better use of RAG-retrieved documents.

Some of these efforts involve models that can “decide” when to make use of the documents, or models that can choose not to perform retrieval in the first place if they deem it unnecessary. Others focus on ways to more efficiently index massive datasets of documents, and on improving search through better representations of documents, representations that go beyond keywords.

“We’re pretty good at retrieving documents based on keywords, but we’re not very good at retrieving documents based on more abstract concepts, like a proof technique needed to solve a math problem,” Wadden said. “Research is needed to build document representations and search techniques that can identify relevant documents for more abstract generation tasks. I think this is mostly an open question at this point.”

So RAG can help reduce a model’s hallucinations, but it’s not the answer to all of AI’s hallucinatory problems. Beware of any vendor that tries to claim otherwise.

This article was originally published on: techcrunch.com

Technology

Luminar lays off 20% of staff and outsources lidar production


Lidar maker Luminar is cutting 20% of its workforce and will lean more heavily on its contract manufacturing partner as part of a restructuring that will shift the company to an “asset light” business model as it seeks to scale production.

The cuts will affect roughly 140 employees and take effect immediately. Luminar is also cutting ties with “most” of its contract workers.

“Today we stand at the intersection of two realities: the core of our business has never been stronger in technology, products, industrialization and commercialization; at the same time, the capital markets’ perception of our company has never been more challenging,” billionaire founder and CEO Austin Russell said in a letter posted on Luminar’s website. “The business model and cost structure that enabled us to achieve this leadership position no longer meet the needs of the company.”

Russell wrote in the letter that the restructuring will allow Luminar to get products to market faster, “drastically reduce” costs and deliver improved profitability. The company said in a regulatory filing that the changes would reduce operating costs “by $50 million to $65 million annually.” The company is also shrinking its global footprint “by subleasing some or all of certain facilities.”

Luminar will continue to operate its Florida facility, which is used for development, testing and R&D purposes, according to spokesperson Milin Mehta.

In April, Luminar announced that it had begun shipping volume production lidar sensors to Volvo for installation in the automaker’s EX90 luxury SUV. It also announced plans to deepen its relationship with Taiwan-based contract manufacturer TPK Holding. TPK “has committed to an exclusive partnership with Luminar,” Russell wrote in his letter.

This article was originally published on: techcrunch.com