Technology
Microsoft’s Mustafa Suleyman says he loves Sam Altman and believes he is sincere about AI safety

In an interview Tuesday at the Aspen Ideas Festival, Mustafa Suleyman, CEO of Microsoft AI, made it clear that he admires OpenAI CEO Sam Altman.
CNBC’s Andrew Ross Sorkin asked what the plan would be for Microsoft’s vast AI future to not be so closely dependent on OpenAI, using the metaphor of winning a bicycle race. Suleyman dodged the question.
“I don’t buy the metaphor that there is a finish line. This is another false frame,” he said. “We have to stop seeing everything as a close race.”
He then toed Microsoft’s corporate line on his company’s deal with OpenAI, in which Microsoft has reportedly invested $10 billion through a mix of cash and cloud credits. The deal gives Microsoft a large share of OpenAI’s commercial business and allows it to embed OpenAI’s artificial intelligence models in Microsoft products and sell the technology to Microsoft’s cloud customers. Some reports indicate that Microsoft may even be entitled to a share of OpenAI’s profits.
“It’s true that we have fierce competition with them,” Suleyman said of OpenAI. “It’s an independent company. We don’t own or control them. We don’t even have board members. So they do their own thing. But we share a deep partnership. I’m a good friend of Sam’s, and I have a lot of respect for them, and trust and believe in what they’ve done. And that’s what it’s going to look like for many, many years,” Suleyman said.
It is important for Suleyman to profess this close-but-distant relationship. Microsoft’s investors and corporate customers appreciate the closeness. However, regulators have grown concerned. In April, the EU agreed that Microsoft’s investment was not an actual takeover. If that changes, regulatory involvement will likely change as well.
Suleyman says he trusts Altman on AI safety
In a way, Suleyman was the Sam Altman of AI before OpenAI. He has spent most of his career competing with OpenAI and is known for his ego.
Suleyman was a co-founder of DeepMind, an artificial intelligence pioneer, and sold it to Google in 2014. He was reportedly placed on administrative leave following allegations of employee mistreatment, as Bloomberg reported in 2019, and then moved on to other roles at Google before leaving the company in 2022 to join Greylock Partners as a venture partner. A few months later, he and Greylock’s Reid Hoffman, a Microsoft board member, launched Inflection AI to build their own LLM chatbot, among other goals.
Microsoft CEO Satya Nadella tried to hire Sam Altman last fall when OpenAI fired him, but the effort fell through when Altman was quickly reinstated. Then in March, Microsoft hired Suleyman and most of Inflection’s staff, leaving behind a skeleton company and a big payout. In his new role at Microsoft, Suleyman has been auditing OpenAI’s code, Semafor reported earlier this month. As one of OpenAI’s former major rivals, he can now dig deep into his rival’s crown jewels.
There is another wrinkle to all this. OpenAI was founded with the goal of conducting AI safety research to prevent an evil AI from one day destroying humanity. In 2023, while Suleyman was still a competitor to OpenAI, he published a book with researcher Michael Bhaskar titled “The Coming Wave: Technology, Power and the 21st Century’s Greatest Dilemma”. The book discusses the threats posed by artificial intelligence and ways to prevent them.
A group of former OpenAI employees signed a letter earlier this month outlining their concerns that OpenAI and other artificial intelligence companies are not taking safety seriously enough.
When asked about this, Suleyman again professed his love and trust for Altman, but also said that he wants both regulation and a slower pace.
“Maybe it’s because I’m British with European leanings, but I’m not afraid of regulation in the way that everyone seems to be,” he said, describing the finger-pointing by former employees as “healthy dialogue.” He added: “I think it’s a great thing that technologists, entrepreneurs and CEOs of companies like myself and Sam, who I love very much and think are amazing,” are talking about regulation. “He’s not cynical, he’s genuine. He really believes in it.”
But he also said, “Friction can be our friend here. These technologies have become so powerful, can be so intimate, can be so ubiquitous, that it is time to take stock.” If all this dialogue slows AI development by six to 18 months or more, he added, “it’s time well spent.”
Everything is very cozy between these players.
Suleyman wants cooperation with China, AI in classrooms
Suleyman also made interesting remarks on other issues. On the AI race with China:
“With all due respect to my good friends in DC and the military-industrial complex, if the default assumption is that this can only be a new Cold War, then that’s exactly what it will be, because it will become a self-fulfilling prophecy. They will be afraid that we are afraid that we will be hostile, so they must be hostile, and this will only escalate,” he said. “We need to find ways to work together, show them respect, while recognizing that we have a different set of values.”
On the other hand, he also said that China is “building its own technology ecosystem and spreading it around the globe. We really should pay close attention to this.”
When asked for his opinion on children using artificial intelligence for schoolwork, Suleyman, who noted that he has no children, shrugged it off. “I think we have to be a little careful about the shortcomings of any tool, you know. Just like when calculators came out, there was kind of an instinctive reaction of, oh no, everyone will be able to solve all the equation problems instantly, and that will make us dumber because we won’t be able to do mental arithmetic.”
He also predicts that there will soon be a time when AI acts as a teacher’s aide, perhaps speaking live in the classroom as AI’s verbal skills improve. “What would it look like if a great teacher or educator had a deep conversation with artificial intelligence that was live and in front of an audience?”
The lesson here is that if we want the people who create and profit from artificial intelligence to also be the ones who police the technology and protect humanity from its worst effects, we may be setting unrealistic expectations.
Technology
Anysphere, maker of Cursor, reportedly raises $900 million at a $9 billion valuation

Anysphere, maker of the AI-powered coding tool Cursor, has raised $900 million in a new funding round led by Thrive Capital, the Financial Times reported, citing anonymous sources familiar with the deal.
The report said that Andreessen Horowitz (a16z) and Accel are also participating in the round, which values the company at about $9 billion.
Cursor raised $105 million from Thrive and a16z at a $2.5 billion valuation, as TechCrunch reported in December. Thrive also led that round, with a16z participating. According to Crunchbase data, the startup has raised over $173 million to date.
Investors including Index Ventures and Benchmark have reportedly tried to back the company, but it appears existing investors do not want to miss the opportunity to back it themselves.
Other AI-powered coding startups are also attracting investor interest. TechCrunch reported in February that Windsurf, an Anysphere rival, was in talks to raise funding at a $3 billion valuation. OpenAI, an investor in Anysphere, is reportedly trying to acquire Windsurf for about the same amount.
This article was originally published on techcrunch.com
Technology
Temu stops shipping products directly from China to the US

Chinese retailer Temu has changed its strategy in the face of American tariffs.
Through an executive order, President Donald Trump ended the so-called de minimis rule, which allowed goods worth $800 or less to enter the country without tariffs. He also raised tariffs on Chinese goods by over 100%, forcing Chinese companies such as Temu and Shein, as well as American giants like Amazon, to adjust their plans and raise prices.
CNBC reports that Temu was affected as well, with American shoppers seeing “import charges” of 130% to 150% added to their bills. Now, however, the company is no longer shipping goods directly from China to the United States. Instead, it only displays listings for products available in American warehouses, while goods shipped from China are marked as out of stock.
“Temu has been actively recruiting U.S. sellers to join the platform,” a company spokesperson said. “The move is designed to help local sellers reach more customers and grow their businesses.”
Technology
One of Google’s recent AI models scores worse on safety

A recently released Google AI model performs worse on certain safety tests than its predecessor, according to the company’s internal benchmarking.
In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, "text-to-text safety" and "image-to-text safety," Gemini 2.5 Flash regresses 4.1% and 9.6%, respectively.
Text-to-text safety measures how often a model violates Google's guidelines given a prompt, while image-to-text safety assesses how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.
In an emailed statement, a Google spokesperson confirmed that Gemini 2.5 Flash "performs worse on text-to-text and image-to-text safety."
These surprising benchmark results come as AI companies move to make their models more permissive, that is, less likely to refuse to respond to controversial or sensitive prompts. For its latest crop of Llama models, Meta said it tuned the models not to endorse "some views over others" and to reply to more "debated" political prompts. OpenAI said earlier this year that it would tweak future models to avoid taking an editorial stance and to offer multiple perspectives on controversial topics.
Sometimes those efforts have backfired. TechCrunch reported Monday that the default model powering OpenAI's ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a "bug."
According to Google's technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims that the regression can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates "violative content" when explicitly asked.
"Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations," the report reads.
Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch's testing of the model via the AI platform OpenRouter found that it will readily write essays in support of replacing human judges with AI, weakening due process protections in the US, and implementing widespread government surveillance programs.
Thomas Woodside, co-founder of the Secure AI Project, said that the limited details Google provided in its technical report demonstrate the need for more transparency in model testing.
"There's a trade-off between instruction following and policy compliance, because some users may ask for content that would violate policies," Woodside told TechCrunch. "In this case, Google's latest Flash model complies with instructions more while also violating policies more. Google doesn't provide much detail on the specific cases in which policies were violated, although it says they are not severe. Without knowing more, it's hard for independent analysts to know whether there's a problem."
Google has come under fire before for its model safety reporting practices.
It took the company weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report was finally published, it initially omitted key safety testing details.
On Monday, Google published a more detailed report with additional safety information.