Technology
New “Protector” app offers riders an “Uber with guns”

The new app gives everyday people the ability to book armed security details for their travel.
A new app has entered the market that lets riders book rides with armed former military and law enforcement professionals who are ready to keep them safe.
Protector, a ride-hailing security app, is now available for iOS on the App Store, enabling users to book security details for their travels. Nikita Bier, the former product manager who created the Gas social app, revealed that he is an advisor to Protector and took to X to describe the app as “Uber with guns.”
“For the past few months, I’ve been advising Protector: a new app for ordering security details on demand,” Bier tweeted. “Or simply: Uber with guns.”
Currently available only in Los Angeles and New York City, the newly launched app lets users choose the number of “Protectors” they want for their ride. Riders can also pick from four outfits: Business Formal, a traditional suit; Business Casual, a lighter suit without a tie; Tactical Casual, a polo shirt with tactical pants and boots; or Operator, in which the armed professional shows up in a full SWAT-style outfit.
Users can also select up to three escort vehicles for a motorcade, like a government official or a celebrity might have. The service comes at a steep price: a single Cadillac Escalade ride with one Protector cost $1,000 for a pickup at KTLA’s Hollywood newsroom at 6 p.m. on a Saturday, according to the app.
That fee includes a minimum of five hours of protection, the shortest booking available. There is also an annual membership fee of $129.
In January, the app published a promotional video invoking the killing of UnitedHealthcare’s CEO to illustrate the value of the Protector service.
“Now we’re moving into a scenario in which, had a Protector been present, the crisis could have been avoided,” says the video’s narrator.
Since the announcement, Protector has become the 14th most popular app in the Travel category, ranking just behind the Frontier Airlines app but ahead of the Spirit Airlines app.
Technology
Claude: Everything you need to know about Anthropic’s AI

Anthropic, one of the world’s largest AI vendors, has a powerful family of generative AI models called Claude. These models can perform a wide range of tasks, from captioning images and writing emails to solving math problems and writing code.
With Anthropic’s model lineup growing so quickly, it can be hard to keep track of which Claude models do what. To help, we’ve put together a guide to Claude, which we’ll keep updated as new models and features arrive.
Claude models
Claude models are named after forms of literary art: Haiku, Sonnet and Opus. The latest are:
- Claude 3.5 Haiku, a lightweight model.
- Claude 3.7 Sonnet, a midsize hybrid reasoning model. This is currently Anthropic’s flagship AI model.
- Claude 3 Opus, a large model.
Counterintuitively, Claude 3 Opus, the largest and most expensive model Anthropic offers, is currently the least capable Claude model. That is likely to change when Anthropic releases an updated version of Opus.
Most recently, Anthropic released Claude 3.7 Sonnet, its most advanced model yet. This model differs from Claude 3.5 Haiku and Claude 3 Opus in that it is a hybrid reasoning model, able to give both real-time answers and longer, more considered answers to questions.
With Claude 3.7 Sonnet, users can choose whether to enable the model’s reasoning ability, which prompts it to “think” for a short or long period before responding.
When reasoning is enabled, Claude 3.7 Sonnet will spend anywhere from a few seconds to a few minutes in a “thinking” phase before answering. During this phase, the model breaks the user’s prompt into smaller parts and checks its answers.
Claude 3.7 Sonnet is Anthropic’s first AI model that can “reason,” a technique many AI labs have turned to as traditional methods of improving AI performance have begun to taper off.
Even with reasoning disabled, Claude 3.7 Sonnet remains one of the best-performing models in the tech industry.
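For developers calling the model through Anthropic’s API, this reasoning phase is exposed as “extended thinking.” Below is a minimal sketch using the Anthropic Python SDK; the model alias, thinking budget and example prompt are illustrative assumptions rather than values from this article.

```python
# Minimal sketch: toggling Claude 3.7 Sonnet's "thinking" phase via the API.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-latest",                     # illustrative model alias
    max_tokens=2048,                                      # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 1024},  # reasoning on, with a token budget
    messages=[{"role": "user", "content": "How many prime numbers are below 100?"}],
)

# With thinking enabled, the response interleaves "thinking" blocks
# (intermediate reasoning) with the final "text" answer.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "…")
    elif block.type == "text":
        print("[answer]", block.text)
```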
In November, Anthropic released an improved, and pricier, version of its lightweight AI model, Claude 3.5 Haiku. This model outperforms Anthropic’s Claude 3 Opus on several benchmarks, but it cannot analyze images the way Claude 3 Opus and Claude 3.7 Sonnet can.
All Claude models, which have a standard 200,000-token context window, can also follow multi-step instructions, use tools (for example, a stock ticker tracker) and produce structured output in formats like JSON.
The context window is the amount of data a model like Claude can analyze before generating new data, while tokens are subdivided bits of raw data (such as the syllables “fan,” “tas” and “tic” in the word “fantastic”). Two hundred thousand tokens is equivalent to about 150,000 words, or a 600-page novel.
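As a rough back-of-the-envelope check on those figures (a heuristic sketch, not Anthropic’s actual tokenizer), English text averages roughly 0.75 words per token:

```python
# Rough heuristic only: real token counts depend on the model's tokenizer.
def rough_token_estimate(word_count: int) -> int:
    """Approximate tokens from an English word count (~0.75 words per token)."""
    return round(word_count / 0.75)

print(rough_token_estimate(150_000))  # ~200,000 tokens, about one full Claude context window
```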
Unlike many flagship generative AI models, Anthropic’s cannot access the internet, which means they are not particularly good at answering questions about current events. They also cannot generate images, only simple line diagrams.
As for the key differences between the Claude models, Claude 3.7 Sonnet is faster than Claude 3 Opus and better understands nuanced and complex instructions. Haiku struggles with sophisticated prompts, but it is the fastest of the three models.
Claude model pricing
The Claude models are available through Anthropic’s API and managed platforms such as Amazon Bedrock and Google Cloud’s Vertex AI.
Here is Anthropic’s API pricing:
- Claude 3.5 Haiku costs 80 cents per million input tokens (roughly 750,000 words), or $4 per million output tokens.
- Claude 3.7 Sonnet costs $3 per million input tokens, or $15 per million output tokens.
- Claude 3 Opus costs $15 per million input tokens, or $75 per million output tokens.
Anthropic offers prompt caching and batching for additional savings.
Prompt caching lets developers store specific “prompt contexts” that can be reused across API calls to a model, while batching processes groups of low-priority (and therefore cheaper) model inference requests asynchronously.
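As an illustration of how prompt caching is wired up, here is a minimal sketch using Anthropic’s Python SDK; the model alias, file name and prompt text are placeholders, and the exact cache behavior (minimum cacheable length, time-to-live, discount rates) should be checked against Anthropic’s current documentation.

```python
# Minimal prompt-caching sketch: mark a large, reusable system prompt as
# cacheable so repeat calls sharing the same prefix are billed at the cheaper
# cached-input rate. Assumes ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

style_guide = open("style_guide.txt").read()  # hypothetical large, reusable context

response = client.messages.create(
    model="claude-3-7-sonnet-latest",         # illustrative model alias
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": style_guide,
            "cache_control": {"type": "ephemeral"},  # cache this prompt prefix
        }
    ],
    messages=[{"role": "user", "content": "Rewrite this paragraph in our house style: ..."}],
)
print(response.content[0].text)
```

Batching works differently: low-priority requests are submitted together and processed asynchronously at a discounted rate rather than one call at a time.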
Claude plans and applications
For individual users and companies that simply want to interact with the Claude models through apps for the web, Android and iOS, Anthropic offers a free Claude plan with rate limits and other usage restrictions.
Upgrading to one of the company’s subscriptions removes those limits and unlocks new functionality. The current plans are:
Claude Pro, which costs $20 per month, offers five times higher rate limits, priority access and previews of upcoming features.
The business-focused Team plan, which costs $30 per month, adds a dashboard for billing and user management, plus integrations with data repositories such as codebases and customer relationship management platforms (e.g. Salesforce). A toggle enables or disables citations for verifying AI-generated claims. (Like all models, Claude hallucinates from time to time.)
Both Pro and Team subscribers get Projects, a feature that grounds Claude’s outputs in knowledge bases, which can be style guides, interview transcripts and so on. These customers, along with free-tier users, can also use Artifacts, a workspace where users can edit and add to content such as code, apps, websites and other documents generated by Claude.
For customers who need even more, there is Claude Enterprise, which lets companies upload proprietary data to Claude so that it can analyze the information and answer questions about it. Claude Enterprise also comes with a larger context window (500,000 tokens), GitHub integration so engineering teams can sync their repositories with Claude, plus Projects and Artifacts.
Caution
As with all generative AI models, there are risks associated with using Claude.
The models occasionally make mistakes when summarizing or answering questions because of their tendency to hallucinate. They are also trained on public web data, some of which may be copyrighted or under restrictive licenses. Anthropic and many other AI vendors claim that the fair-use doctrine shields them from copyright claims, but that has not stopped data owners from filing lawsuits.
Anthropic offers policies to protect certain customers from courtroom battles arising from fair-use challenges. However, those policies do not resolve the ethical quandary of using models trained on data without permission.
Technology
A single default password exposes access to dozens of apartment buildings

A security researcher says a default password shipped in a widely used door access control system allows anyone to easily and remotely access door locks and elevator controls in dozens of buildings across the U.S. and Canada.
Hirsch, the company that now owns the Enterphone MESH door entry system, will not fix the vulnerability, saying the bug is by design and that customers should have followed the company’s setup instructions and changed the default password.
That leaves dozens of exposed residential and office buildings across North America that have not yet changed the access control system’s default password, or are unaware that they should, according to Eric Daigle, the researcher who found the exposed buildings.
Default passwords are not uncommon, nor are they necessarily a secret, in internet-connected devices; passwords shipped with products are usually meant to simplify login access for the customer and are often printed in the accompanying manual. But relying on the customer to change the default password to prevent future malicious access still qualifies as a security vulnerability in the product itself.
In the case of Hirsch’s door entry products, customers are not prompted or required to change the default password.
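For context on why this counts as a product flaw rather than just a customer error, a common mitigation is for the device itself to refuse normal operation until the factory credential has been replaced. The sketch below is purely illustrative of that pattern and is not based on Hirsch’s code; the default value and function names are hypothetical.

```python
# Illustrative first-login check (not Hirsch's implementation): block access to
# building controls while the credential is still the factory default.
# Real systems should use a proper password hashing scheme (e.g. salted KDFs);
# plain SHA-256 is used here only to keep the sketch short.
import hashlib
import secrets

FACTORY_DEFAULT_HASH = hashlib.sha256(b"installer-default").hexdigest()  # hypothetical default

def login(stored_hash: str, supplied_password: str) -> str:
    supplied_hash = hashlib.sha256(supplied_password.encode()).hexdigest()
    if not secrets.compare_digest(supplied_hash, stored_hash):
        return "denied"
    if secrets.compare_digest(stored_hash, FACTORY_DEFAULT_HASH):
        # The password from the installation manual still works: force a reset
        # instead of exposing door locks and elevator controls behind it.
        return "password-change-required"
    return "ok"
```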
As such, Daigle’s discovery was assigned an official vulnerability designation, CVE-2025-26793.
No fix planned
Default passwords have long been a problem for internet-connected devices, allowing malicious hackers to use the passwords to log in as if they were the legitimate owner and steal data, or to hijack the devices and harness their bandwidth to launch cyberattacks. In recent years, governments have sought to push technology manufacturers away from shipping products with insecure default passwords, given the security risk they pose.
In the case of the Hirsch door entry system, the bug is rated 10 out of 10 on the vulnerability severity scale, thanks to the ease with which anyone can exploit it. Practically speaking, exploiting the bug is as simple as taking the default password from the system’s installation guide on Hirsch’s website and entering it into the internet-facing login page of any affected building’s system.
In a blog post, Daigle said he found the vulnerability last year after spotting one of Hirsch’s Enterphone MESH door entry panels on a building in his hometown of Vancouver. Daigle used ZoomEye, a scanning site for internet-connected devices, to search for Enterphone MESH systems connected to the internet, and found 71 systems still relying on the default credentials.
Daigle said the default password grants access to MESH’s web-based back-end system, which building managers use to manage access to elevators, common areas, and office and apartment door locks. Each system displays the physical address of the building where the MESH system is installed, letting anyone know which building they have access to.
Daigle said someone could effectively break into any of the dozens of affected buildings within a few minutes without attracting attention.
TechCrunch intervened because Hirsch has no channel, such as a vulnerability disclosure page, for members of the public like Daigle to report security flaws to the company.
Mark Allen, Hirsch’s CEO, did not respond to TechCrunch’s request for comment, instead deferring to a senior Hirsch product manager, who told TechCrunch that the company’s use of default passwords is “outdated” (without saying how). The product manager said it is “equally concerning” that there are customers who “install systems and do not follow the manufacturer’s recommendations,” referring to Hirsch’s own installation instructions.
Hirsch would not commit to publicly disclosing details about the bug, but said it had contacted its customers about following the product’s manual.
Because Hirsch does not plan to fix the bug, some buildings, and their residents, will likely remain exposed. The bug shows how product development decisions made long ago can come back to have real-world implications years later.
Technology
Did xAI lie about Grok 3’s benchmarks?

Debates over AI benchmarks, and how AI labs report them, are spilling out into public view.
This week, an OpenAI employee accused Elon Musk’s AI company, xAI, of publishing misleading benchmark results for its latest AI model, Grok 3. One of xAI’s co-founders, Igor Babushkin, insisted that the company was in the right.
The truth lies somewhere in between.
In a post on xAI’s blog, the company published a graph showing Grok 3’s results on AIME 2025, a collection of challenging math questions from a recent invitational mathematics exam. Some experts have questioned AIME’s validity as an AI benchmark. Nevertheless, AIME 2025 and older versions of the test are widely used to probe a model’s math ability.
xAI’s graph showed two variants of Grok 3, Grok 3 Reasoning Beta and Grok 3 mini Reasoning, beating OpenAI’s best-performing available model, o3-mini-high, on AIME 2025. But OpenAI employees on X were quick to point out that xAI’s graph did not include o3-mini-high’s AIME 2025 score at “cons@64.”
What is cons@64, you might ask? It is short for “consensus@64,” and it basically gives a model 64 attempts at each problem in a benchmark, taking the answer generated most often as the final answer. As you can imagine, cons@64 tends to boost models’ benchmark scores considerably, and omitting it from a graph can make it appear that one model surpasses another when in reality it does not.
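A small sketch of the scoring idea (the answer strings are hypothetical, not actual Grok or o3 outputs):

```python
# Toy illustration of consensus@k: sample k answers to the same problem and
# take the most frequent one as the model's final answer.
from collections import Counter

def consensus_at_k(answers: list[str]) -> str:
    return Counter(answers).most_common(1)[0][0]

# 64 hypothetical samples for one AIME-style question: a single draw ("@1")
# could easily land on a wrong answer, but the majority vote settles on "204".
samples = ["204"] * 40 + ["210"] * 20 + ["96"] * 4
print(consensus_at_k(samples))  # -> "204"
```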
Grok 3 Reasoning Beta’s and Grok 3 mini Reasoning’s AIME 2025 scores at “@1,” meaning the first score the models achieved on the benchmark, fall below o3-mini-high’s score. Grok 3 Reasoning Beta also trails slightly behind OpenAI’s o1 model set to “medium” computing. Yet xAI is advertising Grok 3 as “the world’s smartest AI.”
Babushkin argued on X that OpenAI has published similarly misleading benchmark charts in the past, albeit charts comparing the performance of its own models. A more neutral party in the debate put together a more “accurate” graph showing nearly every model’s performance at cons@64:
Funny how some people perceive my plot as an attack on OpenAI and others as an attack on Grok, when in reality it’s DeepSeek propaganda
(I actually think Grok looks good there, and OpenAI’s TTC chicanery for o3-mini-*high*-pass@1 deserves more scrutiny.) https://t.co/djqljpcjh8 pic.twitter.com/3wh8foufic — Teortaxes▶️ (DeepSeek Twitter🐋iron powder 2023 – ∞) (@teortaxestex) February 20, 2025
But as AI researcher Nathan Lambert pointed out in a post, perhaps the most important metric remains a mystery: the computational (and monetary) cost it took for each model to achieve its best score. That goes to show how little most AI benchmarks communicate about models’ limitations, and their strengths.