Technology

Jamie Dimon believes that artificial intelligence can help people work less and live longer

Jamie Dimon, JPMorgan Chase

JPMorgan Chase CEO Jamie Dimon says he is optimistic that artificial intelligence will improve the work-life balance of American workers over the long term, in comments he made during a recent interview.

Dimon acknowledged that artificial intelligence will replace some Americans’ jobs, and made the ambitious claim that, thanks to technological progress, humans could soon live beyond 100 years of age.

“People must take a deep breath. Technology has always replaced jobs. Your children will live to be 100 and, thanks to technology, they won’t get cancer, and they will literally probably work three and a half days a week,” Dimon said.

Dimon’s forecasts are ambitious: on average, Americans work about 37 hours per week, still essentially in line with the standard 40-hour, five-day work week pioneered in America in the 1920s by Ford Motor Co.

While JPMorgan Chase did create a five-year, $350 million reskilling initiative in 2019 to help prepare workers for an economy more dependent on AI and technology, the company’s workforce, per its own figures, is 44% white, 21% Latino, 19% Asian and 14% Black.

As a 2022 CDC analysis noted, it isn’t easy to predict how radical workforce changes driven by technological advances will be, because there are too many variables to say with any certainty which jobs and sectors will be affected, and how.

In April, MIT economist David Autor was the lead author of a study that found that since at least 1980, technological advances haven’t created more jobs than they’ve eliminated, though with the caveat that some types of work have only been transformed, not completely eliminated.

As Autor said: “Artificial intelligence is really different. It can replace some high-skilled specialist knowledge, but it can complement decision-making tasks. I think we’re living in an era where we have this new tool and we don’t know what it’s good for. New technologies have strengths and weaknesses, and recognizing them takes time. GPS was invented for military purposes, and it took a long time before it appeared on smartphones.”

Autor continued: “The missing link has been documenting and quantifying the extent to which technology improves the quality of human work. All previous measures simply showed automation and its impact on the displacement of workers. We were amazed that we could identify, classify, and quantify gains. That in itself is quite fundamental to me.”

According to Autor, augmentation means a fundamental restructuring of the way work is performed, while automation essentially replaces the worker.

“You can think of automation as a machine that takes a job’s inputs and does it for the worker,” Autor explained. “We see augmentation as technology that increases the range of things people can do, the quality of what they can do, or their productivity.”

Marc Morial, president of the National Urban League, warned in a 2019 article that automation poses a clear threat to Black Americans’ job prospects.

Morial referred to a McKinsey & Company report titled “The Future of Work in Black America,” which painted a bleak picture, especially for Black men. “African Americans are overrepresented in the jobs most likely to be lost, such as food service, retail, office support and factory work,” Morial wrote.

Morial continued: “African Americans are also underrepresented in the jobs where the risk of loss to AI is lowest. These include educators, health care workers, lawyers and agricultural workers.”

According to the McKinsey report, along with improving the outlook in the areas where Black people work and live, “the public and private sectors will need to implement targeted programs to increase awareness of the risks of automation among African-American workers. Additionally, both sectors will need to provide African Americans with opportunities to pursue higher education and the ability to move into higher-paying roles and occupations.”


This article was originally published on : www.blackenterprise.com
Technology

Anysphere, maker of Cursor, has reportedly raised $900 million at a $9 billion valuation

AI robot face and programming code on a black background.

Anysphere, maker of the AI-powered coding tool Cursor, has raised $900 million in a new funding round led by Thrive Capital, the Financial Times reported, citing anonymous sources familiar with the deal.

The report said that Andreessen Horowitz (a16z) and Accel are also participating in the round, which values the company at about $9 billion.

Cursor raised $105 million from Thrive and a16z at a $2.5 billion valuation, as TechCrunch reported in December. Thrive also led that round, with a16z participating as well. According to Crunchbase data, the startup has raised over $173 million to date.

Investors including Index Ventures and Benchmark have reportedly tried to back the company, but it appears existing investors don’t want to miss the opportunity to keep supporting it.

Other AI-powered coding startups are also attracting investor interest. TechCrunch reported in February that Windsurf, an Anysphere rival, was in talks to raise funds at a $3 billion valuation. OpenAI, an investor in Anysphere, was reportedly looking to acquire Windsurf for about the same amount.

This article was originally published on : techcrunch.com

Technology

Temu halts direct shipping of products from China to the US

Shein and Temu icons are seen displayed on a phone screen in this illustration photo

The Chinese retailer Temu has changed its strategy in the face of American tariffs.

Through an executive order, President Donald Trump ended the so-called de minimis rule, which allowed goods worth $800 or less to enter the country without tariffs. He also raised tariffs on Chinese goods by over 100%, forcing both Chinese firms such as Shein and American giants such as Amazon to adapt their plans and raise prices.

CNBC reports that Temu was also affected, with American buyers seeing “import fees” of 130% to 150% added to their bills. Now, however, the company is no longer shipping goods directly from China to the United States. Instead, it only displays offers for products available in American warehouses, while goods shipped from China are listed as out of stock.

“Temu has been actively recruiting U.S. sellers to join the platform,” a company spokesperson said. “The move is to help local sellers reach more customers and grow their businesses.”

This article was originally published on : techcrunch.com
Technology

One of Google’s latest AI models scores worse on safety

The Google Gemini generative AI logo on a smartphone.

Google’s recently released AI model performs worse on some safety tests than its predecessor, according to the company’s internal benchmarking.

In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses by 4.1% and 9.6%, respectively.

Text-to-text safety measures how often a model violates Google’s guidelines given a prompt, while image-to-text safety assesses how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.

In an email, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”

These surprising benchmark results come as AI companies move to make their models more permissive, in other words, less likely to refuse to answer controversial or sensitive prompts. Meta said it tuned its latest Llama models not to endorse “some views over others” and to respond to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to avoid taking an editorial stance and to offer multiple perspectives on controversial topics.

Sometimes those efforts backfire. TechCrunch reported on Monday that the default model powering OpenAI’s ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a “bug.”

According to Google’s technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims the regression can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked to.

“Naturally, there is tension between (following instructions) on sensitive topics and safety policy violations, which is reflected in our evaluations,” the report reads.

Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch’s testing of the model via the AI platform OpenRouter found that it will write essays in support of replacing human judges with AI, weakening due process protections in the US, and implementing widespread government surveillance programs.

Thomas Woodside, co-founder of the Secure AI Project, said the limited detail Google provides in its technical report shows the need for greater transparency in model testing.

“There’s a trade-off between instruction-following and policy-following, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google’s latest Flash model complies with instructions more while also violating policies more. Google doesn’t provide much detail on the specific cases in which policies were violated, although it says they are not severe. Without knowing more, it’s hard for independent analysts to tell whether there’s a problem.”

Google has come under fire before over its model safety reporting practices.

The company took weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report was finally published, it initially omitted key details of the safety tests.

On Monday, Google published a more detailed report with additional security information.

This article was originally published on : techcrunch.com