Technology

Anduril is speeding up the launch of defense payloads by buying off-the-shelf Apex satellite buses

Anduril's reach now extends even further, to the "top level": orbit.

The company, best known for its AI-powered defense products in the air, on land, and at sea, is partnering with satellite bus startup Apex Space to rapidly deploy payloads into orbit for the U.S. Department of Defense.

This is a rare case of an emerging defense contractor deciding to partner with a supplier rather than build the product itself or simply acquire the supplier outright. But the partnership makes sense: Anduril attributes much of its success to its approach to product design and development, which emphasizes building large numbers of products quickly from off-the-shelf components to reduce costs. Apex does something similar with satellite buses, the part of the spacecraft that carries the payload. Historically, buses have been subject to bespoke engineering processes, long lead times, and high prices.

“We’re really focused on recreating the same things we’ve done in other areas, in the space domain,” Gokul Subramanian, Anduril’s VP of space and software, said at a press conference. “If you look at what Anduril has done successfully in the sea, air, and ground domains, there is a shift from the low-volume, high-cost systems that have traditionally been used to high-volume, low-cost systems. We have the same belief in space: that to be successful in space, we need to move to high-volume, low-cost production.”

Ian Cinnamon, co-founder and CEO of Apex Space, said the satellite bus is the “biggest bottleneck” in the space ecosystem, preventing America from putting more mass into orbit. Apex’s goal is to deliver satellite buses to customers in weeks, not years, with more transparent pricing and a standardized product.

An Anduril-built payload flew in March on Apex’s first-ever mission. Subramanian described it as a “mission data processor” that enables on-orbit processing of images captured by the satellite. The payload leverages Lattice, the command-and-control software platform that runs across all of Anduril’s products. In short, Anduril was able to demonstrate pointing a spacecraft at a particular location, taking an image of what the spacecraft saw, processing that image, and transmitting the data back to Earth, all completely autonomously.

“It was the first experiment that gave us confidence in our vision for space, our collaboration with Ian and the bus platform they built,” he said.

Anduril has already purchased a dedicated satellite bus from Apex, which is set to launch next year. Anduril will operate this system, which will carry payloads built both in-house and by others. This will be the model going forward, the two executives explained: Apex will provide the buses, and Anduril will “mission the system,” Subramanian said.

Subramanian declined to comment on the specific opportunities the company hopes to pursue through the new partnership, but it leaves Anduril well positioned to take on a prime contractor role on some coveted contracts. For example, the Space Development Agency’s Proliferated Warfighter Space Architecture program is deploying large numbers of satellites to upgrade the Space Force’s aging missile tracking and defense architecture. SDA spends huge sums on these satellites; so far, contracts to build satellites under the program have been awarded to Sierra Space, Rocket Lab, and SpaceX, among others. Anduril undoubtedly hopes to join the club.

This is not Anduril’s first foray into space: in July 2023, the company won a $10.5 million contract from Space Systems Command to incorporate Lattice into Space Surveillance Network (SSN) sensors, which are used for missile early warning. Last week, the company was also awarded a $25.3 million contract from the Space Force to provide additional SSN upgrades.

This is the first of many partnerships Anduril intends to announce, including with other bus providers, Subramanian added.

This article was originally published on techcrunch.com.

Technology

Police in California don’t love their Tesla police cars


On Thursday evening, Elon Musk showed off Tesla’s latest technology, saying of the sleek Cybercab robotaxis and a prototype for a brand-new electric van that “the future should look like the future.”

Today, however, California police departments are beginning to regret the decision to replace their fleets with Tesla Model Ys. Although climate-friendly, preparing departments for an emission-free future, Teslas turn out to pose plenty of other challenges, according to SFGate interviews with three Northern California police chiefs.

For example, after being modified for police use, the car’s rear seat is too small for more than one passenger and the front seat is too tight for officers. The chiefs also cite “autopilot interference” when trying to leave the roadway; argue that relying on unsecured charging stations puts officers at risk when transporting suspects over long distances; and note that during a shooting, police are taught to take cover behind a car’s engine block, which is not possible with an electric vehicle.


Technology

Anthropic CEO Goes Full Techno-Optimist in 15,000-Word Paean to Artificial Intelligence


Anthropic Co-Founder & CEO Dario Amodei speaks onstage during TechCrunch Disrupt 2023 at Moscone Center.

Anthropic CEO Dario Amodei wants you to know that he is not an AI “doomer.”

At least, that’s my read of the “mic drop” of a roughly 15,000-word essay Amodei posted on his blog late Friday night. (I tried asking Anthropic’s chatbot Claude whether it agreed, but unfortunately the post exceeded the free plan’s length limit.)

Amodei paints a picture of a world in which all the risks of artificial intelligence are mitigated and the technology delivers previously unrealized prosperity, social improvement, and abundance. He says this isn’t meant to minimize AI’s shortcomings; at the outset, Amodei takes aim, without naming names, at AI companies that oversell and generally hype their technology’s capabilities. But it could be argued that the essay leans too far toward techno-utopia, making claims that are simply not supported by the facts.

Amodei believes that “powerful AI” will arrive as early as 2026. By “powerful AI” he means AI that is “smarter than a Nobel Prize winner” in fields such as biology and engineering, and that can perform tasks such as proving unsolved math theorems and writing “extremely good novels.” Amodei claims this AI will be able to control any software and hardware imaginable, including industrial machinery, and will essentially do most of the work humans do today, only better.

“(This artificial intelligence) can engage in any activity, communication, or remote operation… including taking actions on the internet, giving directions or instructions to people, ordering materials, directing experiments, watching videos, creating videos, and so on,” Amodei writes. “It has no physical form (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.”

A lot would have to happen to get to that point.

Even today’s best AI can’t “think” in the way we understand it. Models don’t reason so much as replicate patterns they observe in their training data.

Assuming, for the sake of Amodei’s argument, that the AI industry does soon “solve” human-like thinking, will robotics catch up so that future AIs can conduct laboratory experiments, manufacture their own tools, and so on? The fragility of today’s robots suggests it’s unlikely.

But Amodei is an optimist, and a very big one.

He believes that within the next 7 to 12 years, AI could help treat practically all infectious diseases, eliminate most cancers, cure genetic disorders, and stop Alzheimer’s disease in its early stages. Amodei also believes that within the next 5 to 10 years, conditions such as post-traumatic stress disorder (PTSD), depression, schizophrenia, and addiction will be curable with AI-derived drugs or genetically preventable through embryo screening (a controversial opinion), and that there will be AI-developed drugs that “tune cognitive function and emotional state” to “trick (our brains) into behaving a little better and providing a more satisfying daily experience.”

If all this happens, Amodei expects the average human lifespan to double to 150 years.

“My basic prediction is that AI-based biology and medicine will allow us to compress the progress that biologists would make over the next 50 to 100 years into 5 to 10 years,” he writes. “I’ll call it the ‘compressed 21st century’: the idea that once we develop powerful artificial intelligence, we will make as much progress in biology and medicine in a few years as we would have in the entire 21st century.”

This, too, seems far-fetched, given that AI has yet to radically transform medicine, and may not for a long time, if ever. Even if AI does reduce the labor and cost of getting a drug to preclinical testing, the drug could still fail at a later stage, just like drugs designed by humans. Consider also that the AI currently used in healthcare has proven to be biased and risky in many respects, or otherwise enormously difficult to deploy in existing clinical and laboratory settings. To suggest that all of these problems and more will be solved within the next decade or so seems, well, aspirational.

But Amodei doesn’t end there.

He claims that AI could solve world hunger, reverse the tide of climate change, and transform the economies of most developing countries; Amodei believes AI can raise sub-Saharan Africa’s GDP per capita ($1,701 in 2022) to China’s GDP per capita ($12,720 in 2022) within 5 to 10 years.

These are bold claims, though probably familiar to anyone who has listened to adherents of the “singularity” movement, which expects similar outcomes. Amodei acknowledges that such developments would require “a massive effort in terms of global health, philanthropy, and political support,” which he believes will materialize because it is in the world’s best economic interest.

That would be a dramatic change in human behavior, given that humans have repeatedly shown their primary concern is what benefits them in the short term. (Deforestation is just one example among thousands.) It’s also worth noting that many of the workers responsible for labeling the datasets used to train AI are paid well below minimum wage, while their employers reap tens of millions, or hundreds of millions, in equity value from the results.

Amodei briefly touches on the danger AI poses to civil society, proposing that a coalition of democracies secure the AI supply chain and block adversaries who intend to use AI for malicious purposes from accessing the powerful means of AI production (semiconductors, etc.). In the same breath, he suggests that AI, in the right hands, could be used to “challenge repressive governments” and even reduce bias in the legal system. (Historically, AI has heightened biases in the legal system.)

“A truly mature and successful implementation of AI can reduce bias and be fairer for all,” writes Amodei.

So if AI takes over every conceivable task and does it better and faster, wouldn’t that leave humans in a difficult position economically? Amodei admits it would, and that at that point society would need to have conversations about “how the economy should be organized.”

But he offers no solution.

“People really want a sense of accomplishment and even a sense of competition, and in a post-AI world it will be entirely possible to spend years attempting a very difficult task with a complex strategy, similar to what people do today when starting research projects, trying to become Hollywood actors, or founding companies,” he writes. “The fact that (a) AI could in principle do this task better and (b) this task is no longer an economically rewarding element of the global economy does not seem to me to make much difference.”

In conclusion, Amodei advances the idea that AI is merely a technological accelerator, and that humans naturally move toward “the rule of law, democracy and Enlightenment values.” But in doing so, he ignores many of AI’s costs. AI is projected to have, and already has, an enormous impact on the environment. And it is creating inequality. Nobel Prize-winning economist Joseph Stiglitz and others have warned that AI-driven workplace disruptions could further concentrate wealth in the hands of companies and leave workers more powerless than ever.

Those companies include Anthropic, though Amodei is reluctant to admit it. Anthropic is, after all, a business, one reportedly worth nearly $40 billion. And those who benefit from its AI technology are, by and large, corporations whose only responsibility is to increase shareholder value, not to better humanity.

A cynic might well question the essay’s timing, given that Anthropic is reported to be in the process of raising billions of dollars in venture funding. OpenAI CEO Sam Altman published a similar techno-optimist manifesto shortly before OpenAI closed its $6.5 billion funding round. Perhaps it’s a coincidence.

On the other hand, Amodei is not a philanthropist. Like any CEO, he has a product to pitch. It just so happens that his product will “save the world,” and those who think otherwise risk being left behind. At least, that’s what he would have you believe.


Technology

Khosla Ventures just backed OpenAI with $405 million more, but not necessarily with its own capital


Khosla Ventures has raised $405 million for OpenAI, according to filings.

Based on the filing itself, Khosla’s stake amounts to at least 6% of the $6.6 billion round the ChatGPT maker completed last week. That doesn’t mean Khosla put significant capital, or any of its own, into the round. Most, perhaps all, of the $405 million was raised from other investors through a special purpose vehicle, or SPV.

Special purpose vehicles are used when a firm doesn’t have enough capital of its own to fill out a round, or when it already has sufficient exposure to the company and offers its allocation to other investors clamoring for shares.

Khosla Ventures declined to comment, so we don’t know the terms of its participation in OpenAI’s latest round, which valued the company at $157 billion. Either way, OpenAI has been good for Khosla Ventures, which invested $50 million in the company in 2019 for what The Information reports is a 5% ownership stake. OpenAI’s valuation at the time isn’t publicly known, but it was likely far lower than the $29 billion valuation OpenAI reportedly achieved in 2023, when Microsoft invested $10 billion.
