
Technology

CTGT aims to make AI models safer


Growing up as an immigrant, Cyril Gorlla taught himself how to code and practiced as if he were possessed.

“At age 11, I completed a coding course at my mother’s college amid periodic utility disconnections at home,” he told TechCrunch.

In high school, Gorlla learned about artificial intelligence and became so obsessed with the idea of training his own AI models that he took his laptop apart to improve its internal cooling. This tinkering led Gorlla to an internship at Intel during his sophomore year of college, where he researched the optimization and interpretability of AI models.

Gorlla’s college years coincided with the AI boom, during which companies like OpenAI raised billions of dollars for AI technology. Gorlla believed that AI had the potential to transform entire industries, but he also felt that safety work was taking a backseat to shiny new products.

“I felt there needed to be a fundamental change in the way we understand and train artificial intelligence,” he said. “Lack of certainty and trust in model outputs poses a significant barrier to adoption in industries such as healthcare and finance, where AI can make the most difference.”

So, together with Trevor Tuttle, whom he met during his undergraduate studies, Gorlla left his graduate program to found CTGT, a company that helps organizations deploy AI more thoughtfully. CTGT presented today at TechCrunch Disrupt 2024 as part of the Startup Battlefield competition.

“My parents think I go to school,” he said. “It might be a shock for them to read this.”

CTGT works with companies to identify biased outputs and model hallucinations and tries to address their root causes.

It isn’t possible to completely eliminate errors from a model. However, Gorlla says CTGT’s audit approach can help companies mitigate them.

“We reveal the model’s internal understanding of concepts,” he explained. “While a model that tells the user to add glue to a recipe may seem funny, the reaction of recommending a competitor when a customer asks for a product comparison is not so trivial. Providing a patient with outdated information from a clinical trial or a credit decision made on the basis of hallucinations is unacceptable.”

A recent poll from Cnvrg found that reliability is a top concern for enterprises deploying AI applications. In a separate survey from risk management software provider Riskonnect, more than half of executives said they were concerned about employees making decisions based on inaccurate information from AI tools.

The idea of a dedicated platform for assessing the decision-making of AI models isn’t new. TruEra and Patronus AI are among the startups developing tools for interpreting model behavior, as are Google and Microsoft.

Gorlla, however, argues that CTGT’s techniques are more efficient, in part because they don’t depend on training “evaluative” AI models to monitor models in production.

“Our mathematically guaranteed interpretability is different from current state-of-the-art methods, which are inefficient and require training hundreds of other models to gain model insight,” he said. “As companies become increasingly aware of computing costs and enterprise AI moves from demos to delivering real value, our value proposition is significant: we give companies the ability to rigorously test the safety of advanced AI without having to train or evaluate additional models.”

To address potential customers’ concerns about data breaches, CTGT offers an on-premises option in addition to its managed plan. It charges the same annual fee for both.

“We do not have access to customer data, which gives them full control over how and where it is used,” Gorlla said.

CTGT, a graduate of the Character Labs accelerator, is backed by former GV partners Jake Knapp and John Zeratsky (co-founders of Character VC), Mark Cuban, and Zapier co-founder Mike Knoop.

“Artificial intelligence that cannot explain its reasoning is not intelligent enough in many areas where complex rules and requirements apply,” Cuban said in a press release. “I invested in CTGT because it solves this problem. More importantly, we are seeing results in our own use of AI.”

And although CTGT is in its early stages, it has several clients, including three unnamed Fortune 10 brands. Gorlla says CTGT worked with one of those companies to minimize bias in its facial recognition algorithm.

“We identified a flaw in the model, which was focusing too much on hair and clothing to make its predictions,” he said. “Our platform gave practitioners immediate insight, without the guesswork and wasted time of traditional interpretability methods.”

In the coming months, CTGT will focus on building out its engineering team (currently just Gorlla and Tuttle) and improving its platform.

If CTGT manages to gain a foothold in the burgeoning market for AI interpretability, it could be lucrative indeed. Analyst firm Markets and Markets projects that “explainable AI” could be worth $16.2 billion as a sector by 2028.

“Model sizes are far outpacing Moore’s Law and advances in AI training chips,” Gorlla said. “This means we need to focus on a fundamental understanding of AI to deal with both the inefficiencies and the increasingly complex nature of model decisions.”

This article was originally published on techcrunch.com.

Technology

The Rise and Fall of the “Scattered Spider” Hackers.

A CrowdStrike action figure representing the Scattered Spider cybercriminal group, seen at the Black Hat cybersecurity conference in August 2024.

After more than two years of evading capture following a hacking spree that targeted some of the world’s largest technology companies, U.S. authorities say they have finally caught at least some of the hackers responsible.

In August 2022, security researchers went public with a warning that a group of hackers had targeted more than 130 organizations in a sophisticated phishing campaign that stole the credentials of nearly 10,000 employees. The hackers specifically targeted companies that use Okta, a single sign-on provider that thousands of companies around the world use to let their employees log in from home.

Because of its focus on Okta, the hacker group was dubbed “0ktapus.” By now, the group has hacked Caesars Entertainment, Coinbase, DoorDash, Mailchimp, Riot Games, Twilio (twice), and dozens more.

The hackers’ most notable and severe cyberattack, in terms of downtime and impact, was the September 2023 breach of MGM Resorts, which reportedly cost the casino and hotel giant at least $100 million. In that case, the hackers collaborated with the Russian-speaking ransomware gang ALPHV and demanded a ransom from MGM to recover its files. The breach was so disruptive that MGM-owned casinos struggled to deliver services for several days.

Over the past two years, as law enforcement has closed in on the hackers, people in the cybersecurity industry have been trying to work out exactly how to classify them and whether to place them in one group or another.

The techniques the hackers use, such as social engineering, email and SMS phishing, and SIM swapping, are common and widespread. Some individual hackers were part of several groups responsible for various data breaches. These circumstances make it difficult to understand exactly who belongs to which group. Cybersecurity giant CrowdStrike has dubbed this hacker group “Scattered Spider,” and researchers believe it has some overlap with 0ktapus.

The group was so active and successful that the U.S. cybersecurity agency CISA and the FBI issued an advisory in late 2023 with detailed information about the group’s activities and techniques, in an attempt to help organizations prepare for and defend against anticipated attacks.

Scattered Spider is a “cybercriminal group targeting large companies and their IT helpdesks,” CISA said in its advisory. The agency warned that the group “typically engaged in data theft for extortion purposes” and noted its known ties to ransomware gangs.

One thing that is relatively certain is that the hackers mostly speak English and are generally believed to be teenagers or in their early 20s; they are sometimes called “advanced persistent teenagers.”

“A disproportionate number of minors are involved, and this is because the group deliberately recruits minors due to the lenient legal environment in which these minors live, and they know that nothing will happen to them if the police catch the child,” Allison Nixon, director of research at Unit 221B, told TechCrunch at the time.

Over the past two years, some members of 0ktapus and Scattered Spider have been linked to a similarly nebulous group of cybercriminals known as “the Com.” People within this broader cybercriminal community committed crimes that leaked into the real world. Some of them are responsible for acts of violence, such as robberies and burglaries; “bricking,” or hiring thugs to throw bricks at someone’s house or apartment; and “swatting,” when someone tricks authorities into believing that a violent crime has occurred, prompting the intervention of an armed police unit. Although it began as a prank, swatting has had fatal consequences.

After two years of hacking, authorities are finally starting to identify and prosecute Scattered Spider members.

In July, British police confirmed the arrest of a 17-year-old in connection with the MGM breach.

In November, the U.S. Department of Justice announced it had indicted five hackers: Ahmed Hossam Eldin Elbadawy, 23, of College Station, Texas; Noah Michael Urban, 20, of Palm Coast, Florida, who was arrested in January; Evans Onyeaka Osiebo, 20, of Dallas, Texas; Joel Martin Evans, 25, of Jacksonville, North Carolina; and Tyler Robert Buchanan, 22, of the United Kingdom, who was arrested in June in Spain.

This article was originally published on techcrunch.com.

Technology

OpenAI accidentally deleted potential evidence in NY Times copyright lawsuit (update)


OpenAI logo with spiraling pastel colors (Image Credits: Bryce Durbin / TechCrunch)

Lawyers for The New York Times and Daily News, which are suing OpenAI for allegedly copying their work to train AI models without permission, say OpenAI engineers accidentally deleted potentially relevant data.

Earlier this fall, OpenAI agreed to provide two virtual machines so that counsel for The Times and Daily News could search for their copyrighted content in its AI training sets. (Virtual machines are software-based computers that exist within another computer’s operating system and are often used for testing, backing up data, and running applications.) In a letter, lawyers for the publishers say they and the experts they hired have spent more than 150 hours since November 1 combing through OpenAI’s training data.

However, on November 14, OpenAI engineers deleted all of the publishers’ search data stored on one of the virtual machines, according to that letter, which was filed late Wednesday in the U.S. District Court for the Southern District of New York.

OpenAI tried to recover the data, and was mostly successful. However, because the folder structure and file names were “irretrievably” lost, the recovered data “cannot be used to determine where the news plaintiffs’ copied articles were used to build the (OpenAI) models,” the letter says.

“The news plaintiffs were forced to recreate their work from scratch, using significant person-hours and computer processing time,” lawyers for The Times and Daily News wrote. “The news plaintiffs learned only yesterday that the recovered data is unusable, and that an entire week’s worth of experts’ and lawyers’ work must be redone, which is why this supplemental letter is being filed today.”

The plaintiffs’ attorneys note that they have no reason to believe the deletion was intentional. However, they say the incident highlights that OpenAI “is in the best position to search its own datasets” for potentially infringing content using its own tools.

An OpenAI spokesperson declined to comment.

However, late Friday, November 22, OpenAI’s counsel filed a response to the Wednesday letter from attorneys for The Times and Daily News. In it, OpenAI’s lawyers unequivocally denied that OpenAI had deleted any evidence, and instead suggested that the plaintiffs were to blame for a system misconfiguration that led to the technical problem.

“Plaintiffs requested that one of several machines provided by OpenAI be reconfigured to search training datasets,” OpenAI’s attorney wrote. “Implementation of plaintiffs’ requested change, however, resulted in the deletion of the folder structure and certain file names from one hard drive – a drive that was intended to serve as a temporary cache… In any event, there is no reason to believe that any files were actually lost.”

In this case and others, OpenAI maintains that training models on publicly available data, including articles from The Times and Daily News, is permissible. In other words, by creating models like GPT-4o, which “learn” from billions of examples of e-books, essays, and other materials to generate human-sounding text, OpenAI believes it doesn’t need to license or otherwise pay for those examples, even if it makes money from those models.

That said, OpenAI has signed licensing agreements with a growing number of publishers, including the Associated Press, Business Insider owner Axel Springer, the Financial Times, People parent company Dotdash Meredith, and News Corp. OpenAI has declined to make the terms of these deals public, but one of its content partners, Dotdash, is reportedly paid at least $16 million per year.

OpenAI has neither confirmed nor denied that it trained its AI systems on any specific copyrighted works without permission.

This article was originally published on techcrunch.com.

Technology

Sequoia marks up its 2020 fund by 25%


Sequoia says no exits, no problem.

The value of the Silicon Valley venture capital giant’s Sequoia Capital US Venture XVII fund increased by 24.6% in the year ending in June, according to PitchBook, which analyzed data from the University of California Regents fund.

Sequoia’s markup is notable because the fund hasn’t had any exits yet. It’s a positive development for the 2020 fund vintage, given that after the frothy valuations of 2020 and 2021, funds from those years are not expected to perform well for any VC. The markup is likely due to high AI valuations, which have given VCs a sense of an economic recovery that has yet to bear fruit in other sectors. Sequoia is an investor in high-growth AI companies including OpenAI, Glean, and Harvey.

Sequoia has raised over $800 million for Fund XVII, which closed in 2022.

This article was originally published on techcrunch.com.