Technology
Apple is joining the race to find an AI icon that makes sense

This week was an exciting one for the AI community, as Apple joined Google, OpenAI, Anthropic, Meta, and others in the long-running competition to find an icon that even remotely suggests AI to users. And like everyone else, Apple threw down the gauntlet.
Apple Intelligence is represented by a circular shape composed of seven loops. Or is it a circle with a crooked infinity symbol inside? No, it's the new Siri from Apple Intelligence. Or possibly it's the new Siri when your phone glows around the edges? Yes.
The thing is, nobody knows what artificial intelligence looks like, or even what it should look like. It does everything and looks like nothing. But it needs to be represented in user interfaces so that people know they're interacting with a machine learning model and not just a plain search, submit, or whatever.
While approaches to symbolizing this supposedly all-seeing, all-knowing, all-acting intelligence vary, they've coalesced around the idea that an AI avatar must be non-threatening, abstract, relatively simple, and non-anthropomorphic. (They seem to have rejected my suggestion that these models speak at all times in rhyme.)
Early icons for artificial intelligence were sometimes little robots, wizard hats, or magic wands: novelties. But the implications of the former are inhumanity, rigidity, and limitation: robots are unknowing, impersonal, and restricted to predetermined, automated tasks. And magic wands and the like suggest the irrational, the inexplicable, the mysterious, which is perhaps fitting for an image generator or a creative soundboard, but not for the kind of factual, credible answers these firms want you to believe AI provides.
Corporate logo design is generally an odd mixture of strong vision, industrial necessity, and committee compromise. The effects of all those influences can be seen in the logos discussed here.
The strongest vision is, for better or worse, OpenAI's black dot. The cold, featureless hole into which you drop your query is a bit like a wishing well or Echo's cave.
The committee's best energy was directed toward Microsoft, whose Copilot logo is virtually indescribable.
But notice how four of the six (five of the seven, if you count Apple twice, and why shouldn't we) use nice, candy-colored palettes: colors that mean nothing but are cheerful and approachable, leaning toward the feminine (as such things are coded in design language) and even the childish. Soft gradients of pink, purple, and turquoise; pastels, not hard colors. Four are soft, boundless shapes; Anthropic and Google have sharp edges, but the former suggests an open book, while the latter is a pleasing, symmetrical star with friendly concavities. Some also animate in use, feeling alive and responsive (and attention-grabbing, so you can't ignore them; looking at you, Meta).
Generally speaking, the intended impression is one of friendliness, openness, and undefined potential, as opposed to qualities such as expertise, efficiency, decisiveness, or creativity.
Do you think I'm overanalyzing? How many pages do you think the design development documents for each of these logos ran to: more or fewer than 20? My money is on the former. Companies are obsessive about this stuff. (And yet somehow you still end up evoking a hate symbol or an inexplicably sexual vibe.)
But the point is not that corporate design teams do what they do; it's that nobody has managed to come up with a visual concept that clearly says "AI" to the user. At best, these colorful shapes convey a negative idea: that this interface is not an email client, not a search engine, not a notes app.
Email icons usually involve an envelope because (of course) that is what email is, both conceptually and practically. A more general "send message" action is indicated by a paper plane, suggesting a document in motion. Settings use a gear or wrench, suggesting tinkering with the engine or machinery. These concepts carry across languages and (to some extent) generations.
Not every icon can refer so clearly to its function, though. For example, how do you indicate "download" when the word varies from language to language? In French you télécharger a file, which makes sense but doesn't exactly evoke "taking something down." Still, we have settled on the arrow pointing downward, sometimes meeting a surface. Down. Load. The same goes for cloud computing: we've embraced the cloud even though it's mainly a marketing term meaning "a big data center somewhere." But what was the alternative? A little data center button?
AI is still new to consumers, who are being asked to use it instead of "other things," a very general category that AI providers are reluctant to define, because defining it would mean admitting there are some things AI can do and some things it can't. They aren't ready to admit that: the whole fiction rests on the premise that artificial intelligence can, in theory, do everything, and that achieving it is just a matter of engineering and compute.
In other words, to paraphrase Steinbeck: every AI considers itself a temporarily embarrassed AGI. (Or rather, its marketing department does, because the AI itself, as a pattern generator, doesn't consider anything at all.)
In the meantime, these firms still have to call it something and put a "face" on it, though it's telling, and refreshing, that nobody has actually chosen a face. But even here they're acting at the whim of consumers: consumers who dismiss GPT version numbers as an oddity and prefer to say ChatGPT; who never warmed to "Bard" but are happy to meet "Gemini"; who never wanted to Bing (and certainly not to talk to the thing) but don't mind having a Copilot.
Apple, for its part, has taken a shotgun approach: you ask Siri, which sends a request to Apple Intelligence (two different logos), which runs in your private cloud (unrelated to iCloud), and may even forward your request to ChatGPT (no logo allowed), and the clearest sign that the AI is listening to you is... colors swirling somewhere, or everywhere, on the screen.
Until artificial intelligence is defined a little better, we can expect the icons and logos that represent it to remain vague, non-threatening, abstract shapes. A colorful, ever-changing blob wouldn't take your job, right?
One of Google's latest AI models scores worse on safety

A recently released Google AI model scores worse on certain safety tests than its predecessor, according to the company's internal benchmarking.
In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, "text-to-text safety" and "image-to-text safety," Gemini 2.5 Flash regresses by 4.1% and 9.6%, respectively.
Text-to-text safety measures how often a model violates Google's guidelines given a prompt, while image-to-text safety assesses how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.
In an emailed statement, a Google spokesperson confirmed that Gemini 2.5 Flash "performs worse on text-to-text and image-to-text safety."
These surprising benchmark results come as AI companies move to make their models more permissive, in other words, less likely to refuse to respond to controversial or sensitive prompts. With its latest Llama models, Meta said it tuned them not to endorse "some views over others" and to reply to more "debated" political prompts. OpenAI said earlier this year that it would tweak future models to avoid taking an editorial stance and to offer multiple perspectives on controversial topics.
Sometimes those permissiveness efforts have backfired. TechCrunch reported on Monday that the default model powering OpenAI's ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a "bug."
According to Google's technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims the regression can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates "violative content" when explicitly asked.
"Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations," the report reads.
Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch's own testing of the model via the AI platform OpenRouter found that it will uncomplainingly write essays in support of replacing human judges with AI, weakening due process protections in the U.S., and implementing widespread warrantless government surveillance programs.
Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google provides in its technical report demonstrate the need for more transparency in model testing.
"There's a trade-off between instruction following and policy compliance, because some users may ask for content that would violate policies," Woodside told TechCrunch. "In this case, Google's latest Flash model complies with instructions more while also violating policies more. Google doesn't provide many details on the specific cases in which policies were violated, although it says they are not severe. Without knowing more, it's hard for independent analysts to know whether there's a problem."
Google has come under fire before for its model safety reporting practices.
The company took weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report was eventually published, it initially omitted key safety testing details.
On Monday, Google published a more detailed report with additional safety information.
Aurora launches a commercial self-driving truck service in Texas

Autonomous vehicle technology startup Aurora Innovation says it has successfully launched a self-driving truck service in Texas, making it the first company to deploy driverless, heavy-duty trucks for commercial use on public roads in the U.S.
The launch comes as Aurora meets its own deadline: in October, the company delayed its planned 2024 debut to April 2025. The debut also comes five months after rival Kodiak Robotics delivered its first autonomous trucks to a commercial customer for driverless operations in off-road field environments.
Aurora says it began hauling freight this week between Dallas and Houston with launch customers Hirschbach Motor Lines and Uber Freight, and that it has completed 1,200 driverless miles so far. The company plans to expand to El Paso and Phoenix by the end of 2025.
TechCrunch has reached out for more details about the launch, such as how many vehicles Aurora has deployed and whether the system has had to execute a pullover maneuver or required remote human assistance.
Aurora's commercial launch comes at a challenging time. Self-driving trucks have long been pitched as necessary because of labor shortages in long-haul trucking and the expected growth in freight shipping. Trump's tariffs have changed that outlook, at least in the short term. According to an April report from ACT Research, an analyst firm covering the commercial vehicle industry, freight demand in the U.S. is expected to fall this year amid declining volumes and consumer spending.
Aurora will report its first-quarter results next week, which is when it will share how it expects the current trade war to affect its future business. TechCrunch has reached out to learn more about how tariffs are affecting Aurora's operations.
For now, Aurora will likely focus on continuing to prove its driverless safety case and on working with state and federal lawmakers to adopt favorable policies that help it grow.
In early 2025, Aurora filed a lawsuit against federal regulators after they refused to grant it an exemption from a safety requirement that involves placing warning triangles on the road when a truck must stop on a highway, something that's difficult to do when there's no driver in the vehicle. To stay compliant with the rule and continue fully driverless operations, Aurora will likely have a human-driven car trail its trucks while they operate.
Sarah Tavel, Benchmark's first female GP, transitions to venture partner

Eight years after joining Benchmark as the firm's first female general partner, Sarah Tavel announced that she is moving to a more limited role at the venture firm.
In her new position as a venture partner, Tavel will continue to invest and serve on her existing company boards, but she will have more time to explore "AI tools on the edge" and to think about the direction of artificial intelligence, she wrote.
Tavel joined Benchmark in 2017 after spending a year and a half as a partner at Greylock and three years as a product manager at Pinterest. Before Pinterest, Tavel was an investor at Bessemer Venture Partners, where she helped source the firm's investments in Pinterest and GitHub.
Since its founding in 1995, Benchmark has intentionally maintained a small team of six or fewer general partners. Unlike most VC firms, where senior partners typically receive the bulk of management fees and profits, Benchmark operates as an equal partnership, with all partners sharing fees and returns equally.
During her tenure as a general partner at Benchmark, Tavel invested in the campsite-booking platform Hipcamp, the cryptocurrency intelligence startup Chainalysis, and the beauty platform Supergreat, which was acquired by Whatnot in 2023. Tavel also backed the photo-sharing app Poparazzi, which shut down two years ago, and the AI sales platform 11x, which TechCrunch has written about.