Technology

Artificial intelligence could create more work for staff, study finds

Humans outperformed AI at a range of business tasks in the workplace.


Artificial intelligence could create more work for staff, according to a new study by the Australian Securities and Investments Commission (ASIC) and Amazon, conducted after researchers used Meta's open-source Llama2-70B model to perform business tasks in the workplace.

In the study, artificial intelligence (AI) programs and a group of 10 people were each asked to summarize submissions, focusing on mentions of ASIC, recommendations, and regulatory references. The tool was also asked to include both context and page references. Both sets of responses were then blindly scored by reviewers for coherence, length, and identification of ASIC references as well as regulatory references.

The AI programs fared poorly compared with their human counterparts, scoring 47% versus the humans' 81%. According to the team that conducted the study, the AI performed particularly poorly when it came to finding references to ASIC in the documents it was supposed to use to create its summaries.

“Finding references in larger documents is notoriously difficult for LLM due to context window limitations and embedding strategies,” the team wrote. “Page references are not traditionally stored in embedding models because the content of PDF documents is retrieved as plain text. To achieve greater accuracy in this problem, significant progress has been made by splitting documents into pages and treating pages as fragments with associated metadata.”
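The page-as-chunk approach the team describes can be sketched as follows. This is a minimal illustration with invented helper names and made-up sample pages, not ASIC's actual pipeline: each page becomes its own chunk carrying a page-number metadata field, so retrieved passages can be cited with page references that plain-text embedding would otherwise lose.

```python
def chunk_by_page(pages):
    """Turn a list of per-page texts into chunks, each tagged with its page number."""
    chunks = []
    for page_number, text in enumerate(pages, start=1):
        chunks.append({
            "text": text.strip(),
            "metadata": {"page": page_number},  # preserved alongside the content
        })
    return chunks

def find_pages(chunks, term):
    """Return the page numbers whose chunk text mentions `term` (case-insensitive)."""
    return [c["metadata"]["page"]
            for c in chunks
            if term.lower() in c["text"].lower()]

# Hypothetical page texts standing in for an extracted PDF submission.
pages = [
    "ASIC issued Regulatory Guide 271 covering complaints handling.",
    "Submissions argued the guide imposes excessive reporting burdens.",
    "ASIC recommended a phased transition period for small firms.",
]
chunks = chunk_by_page(pages)
print(find_pages(chunks, "ASIC"))  # → [1, 3]
```

Because the page number travels with each chunk as metadata, a summary built from retrieved chunks can cite "p. 1, p. 3" directly instead of guessing at locations in a flattened plain-text dump.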

According to the study, reviewers often needed to “refer to the source material to confirm details of the AI summary” and “generally agreed that the AI results could potentially generate more work if they were used (in their current state), either because of the need to fact-check the results or because the source material better presented the information.”

While this study has limited application beyond its narrow scope, it does suggest that the expected benefits of AI may be offset by the need for humans to do more work to meet the specific demands of using AI in the workplace.

Several reports indicate that Black people and artificial intelligence are often at odds, but a group of researchers from Stanford University offered a different perspective in feedback provided to the Congressional Black Caucus in a March 2024 white paper.

According to the researchers, “While AI has the potential to exacerbate racial inequality, it can also benefit Black communities. If implemented carefully, AI has the power to improve access to healthcare and education, as well as create new economic opportunities. For example, AI can help doctors make more accurate diagnoses and provide personalized treatment plans, especially in underserved communities where access to healthcare is limited.”


This article was originally published on : www.blackenterprise.com

Technology

‘Wolfs’ sequel canceled because director ‘no longer trusted’ Apple

It may be hard to remember, but George Clooney and Brad Pitt starred together in the movie “Wolfs,” which Apple released just two months ago.

On Friday, the film’s writer and director Jon Watts said the sequel is no longer happening. In an interview with Deadline, he explained that he “no longer trusts (Apple) as a creative partner.”

According to reports, the company has been scaling back its film strategy. For example, “Wolfs” was supposed to have a big theatrical release, but instead it played in a limited number of theaters for just a week before landing on Apple TV+.

Watts, who also created the new Star Wars series “Skeleton Crew,” said Apple’s change “came as a complete surprise and was made without any explanation or discussion.”

“I was completely shocked and asked them not to announce that I was writing a sequel,” Watts said. “They ignored my request and announced it in their press release anyway, apparently to put a positive spin on their streaming pivot.”

As a result, Watts said he “quietly refunded the money they gave me to continue” and canceled the project.

This article was originally published on : techcrunch.com

Technology

The Rise and Fall of the “Scattered Spider” Hackers.

An action figure created by CrowdStrike representing the Scattered Spider cybercriminal group, seen at the Black Hat cybersecurity conference in August 2024.

After more than two years of evading capture following a hacking spree that targeted some of the world’s largest technology companies, U.S. authorities say they have finally caught at least some of the hackers responsible.

In August 2022, security researchers went public with a warning that a group of hackers had targeted more than 130 organizations in a sophisticated phishing campaign that stole the credentials of nearly 10,000 employees. The hackers specifically targeted companies that use Okta, a single sign-on provider that thousands of companies around the world rely on to let their employees log in from home.

Because of its focus on Okta, the hacker group was dubbed “0ktapus.” Since then, the group has hacked Caesars Entertainment, Coinbase, DoorDash, Mailchimp, Riot Games, Twilio (twice), and dozens more.

The hackers’ most notable and severe cyberattack, in terms of downtime and impact, was the September 2023 breach of MGM Resorts, which reportedly cost the casino and hotel giant at least $100 million. In that case, the hackers collaborated with the Russian-speaking ransomware gang ALPHV and demanded a ransom from MGM to let the company recover its files. The break-in was such a nuisance that MGM-owned casinos had trouble delivering services for several days.

Over the past two years, as law enforcement has closed in on the hackers, people in the cybersecurity industry have been trying to work out exactly how to classify them and whether to place particular hackers in one group or another.

The techniques used by the hackers, such as social engineering, email and SMS phishing, and SIM swapping, are common and widespread. Some of the individual hackers were part of several groups responsible for various data breaches. These circumstances make it difficult to understand exactly who belongs to which group. Cybersecurity giant CrowdStrike has dubbed this hacker group “Scattered Spider,” and researchers believe it has some overlap with 0ktapus.

The group was so active and successful that the U.S. cybersecurity agency CISA and the FBI issued an advisory in late 2023 with detailed information about the group’s activities and techniques, in an attempt to help organizations prepare for and defend against anticipated attacks.

Scattered Spider is a “cybercriminal group targeting large companies and their IT helpdesks,” CISA said in its advisory. The agency warned that the group “typically engaged in data theft for extortion purposes” and noted its known ties to ransomware gangs.

One thing that is relatively certain is that the hackers mostly speak English and are generally believed to be teenagers or in their early 20s; they are sometimes called “advanced, persistent teenagers.”

“A disproportionate number of minors are involved, and this is because the group deliberately recruits minors due to the lenient legal environment in which these minors live, and they know that nothing will happen to them if the police catch the child,” Allison Nixon, director of research at Unit 221B, told TechCrunch at the time.

Over the past two years, some members of 0ktapus and Scattered Spider have been linked to a similarly nebulous group of cybercriminals known as “the Com.” People within this broader cybercriminal community have committed crimes that spilled into the real world. Some are responsible for acts of violence such as robberies and burglaries; “bricking,” or hiring thugs to throw bricks at someone’s house or apartment; and “swatting,” in which someone tricks authorities into believing a violent crime has occurred, prompting the intervention of an armed police unit. Though it began as a joke, swatting has had fatal consequences.

After two years of hacking, authorities are finally starting to identify and prosecute Scattered Spider members.

In July, British police confirmed the arrest of a 17-year-old in connection with the MGM breach.

In November, the U.S. Department of Justice announced it had indicted five hackers: Ahmed Hossam Eldin Elbadawy, 23, of College Station, Texas; Noah Michael Urban, 20, of Palm Coast, Florida, who was arrested in January; Evans Onyeaka Osiebo, 20, of Dallas, Texas; Joel Martin Evans, 25, of Jacksonville, North Carolina; and Tyler Robert Buchanan, 22, of the United Kingdom, who was arrested in Spain in June.

This article was originally published on : techcrunch.com

Technology

OpenAI accidentally deleted potential evidence in NY Times copyright lawsuit (update)

OpenAI logo with spiraling pastel colors (Image Credits: Bryce Durbin / TechCrunch)

Lawyers for The New York Times and Daily News, who are suing OpenAI for allegedly copying their work to train artificial intelligence models without permission, say OpenAI engineers accidentally deleted potentially relevant data.

Earlier this fall, OpenAI agreed to provide two virtual machines so that counsel for The Times and Daily News could search for their copyrighted content in its AI training sets. (Virtual machines are software-based computers that exist within another computer’s operating system and are often used for testing, backing up data, and running applications.) In a letter, lawyers for the publishers say they and the experts they hired have spent more than 150 hours since November 1 combing through OpenAI’s training data.

However, on November 14, OpenAI engineers deleted all of the publishers’ search data stored on one of the virtual machines, according to the aforementioned letter, which was filed late Wednesday in the U.S. District Court for the Southern District of New York.

OpenAI tried to recover the data, and was mostly successful. However, because the folder structure and file names were “irretrievably” lost, the recovered data “cannot be used to determine where the news plaintiffs’ copied articles were used to build (OpenAI’s) models,” the letter says.

“The news plaintiffs were forced to recreate their work from scratch, using significant person-hours and computer processing time,” lawyers for The Times and the Daily News wrote. “The news plaintiffs learned only yesterday that the recovered data is unusable and that an entire week’s worth of its experts’ and lawyers’ work must be redone, which is why this supplemental letter is being filed today.”

The plaintiffs’ attorneys say they have no reason to believe the deletion was intentional. However, they say the incident underscores that OpenAI “is in the best position to search its own datasets” for potentially infringing content using its own tools.

An OpenAI spokesperson declined to comment.

However, late Friday, November 22, OpenAI’s counsel filed a response to the letter that attorneys for The Times and Daily News had sent Wednesday. In their response, OpenAI’s lawyers unequivocally denied that OpenAI had deleted any evidence, and instead suggested that the plaintiffs were to blame for a system misconfiguration that led to the technical problem.

“Plaintiffs requested that one of several machines provided by OpenAI be reconfigured to search training datasets,” OpenAI’s attorney wrote. “Implementation of plaintiffs’ requested change, however, resulted in the deletion of the folder structure and certain file names from one hard drive – a drive that was intended to serve as a temporary cache… In any event, there is no reason to believe that any files were actually lost.”

In this case and others, OpenAI maintains that training models using publicly available data, including articles from The Times and Daily News, is permissible. In other words, in creating models like GPT-4o, which “learn” from billions of examples of e-books, essays, and other materials to generate human-sounding text, OpenAI believes it is not required to license or otherwise pay for those examples, even when it makes money from the models.

That said, OpenAI has signed licensing agreements with a growing number of publishers, including the Associated Press, Business Insider owner Axel Springer, the Financial Times, People parent company Dotdash Meredith, and News Corp. OpenAI has declined to make the terms of these deals public, but one of its content partners, Dotdash, reportedly earns at least $16 million a year.

OpenAI has not confirmed or denied that it has trained its AI systems on any copyrighted works without permission.

This article was originally published on : techcrunch.com