Technology

Anduril is accelerating the launch of defense payloads by purchasing ready-made Apex satellite buses

Anduril’s reach now extends even further: to the ultimate “high ground” of space.

The company, best known for its AI-powered defense products in the air, on land and at sea, is partnering with satellite bus startup Apex Space to rapidly deploy payloads to orbit for the US Department of Defense.

This is a rare case of an emerging defense contractor deciding to partner with a supplier rather than build the product itself or simply acquire the supplier outright. But the partnership makes sense: Anduril attributes much of its success to its approach to product design and development, which emphasizes building large numbers of products quickly from off-the-shelf components to reduce costs. Apex does something similar with satellite buses, the part of the spacecraft that carries the payload. Historically, buses have been subject to bespoke engineering, long lead times and high prices.

“We’re really focused on recreating the same things we’ve done in other areas, in the space domain,” Gokul Subramanian, Anduril’s vice president of space and software, said at a press conference. “If you look at what Anduril has done successfully across sea, air and land, there is a shift from the low-volume, high-cost systems that have traditionally been used to high-volume, low-cost systems. We have the same belief in space: that to be successful in space, we need to move to high-volume, low-cost production.”

Ian Cinnamon, co-founder and CEO of Apex Space, said the satellite bus is the “biggest bottleneck” in the space ecosystem, preventing America from putting more mass into orbit. Apex’s goal is to deliver satellite buses to customers in weeks, not years, with more transparent pricing and a standardized product.

An Anduril-built payload flew in March on Apex’s first-ever mission. Subramanian described it as a “mission data processor” that allows on-orbit processing of images captured by the satellite. The payload leverages Lattice, the command-and-control software embedded in all Anduril products. In short, Anduril was able to demonstrate pointing a spacecraft at a particular location, taking an image of what the spacecraft saw, processing that image, and transmitting the data back to Earth, all completely autonomously.

“It was the first experiment that gave us confidence in our vision for space, our collaboration with Ian and the bus platform they built,” he said.

Anduril has already purchased a dedicated satellite bus from Apex, which will launch next year. Anduril will operate the system, which will carry payloads built in-house and by others. This will be the model going forward, the two executives explained: Apex will provide the buses, and Anduril will “mission the system,” Subramanian said.

Subramanian declined to comment on the specific opportunities the company hopes to pursue with the new partnership, but it leaves Anduril well positioned to take on a prime contractor role on some coveted contracts. For example, the Space Development Agency’s Proliferated Warfighter Space Architecture program is deploying hundreds of satellites to augment the Space Force’s aging missile tracking and defense architecture. The SDA spends huge amounts of money on these satellites; so far, contracts to build satellites under the program have been awarded to Sierra Space, Rocket Lab and SpaceX, among others. Anduril undoubtedly hopes to join the club.

This is not Anduril’s first foray into space: in July 2023, the company won a $10.5 million contract from Space Systems Command to integrate Lattice into the Space Surveillance Network (SSN) sensors used for missile early warning. Last week, the company was also awarded a $25.3 million contract from the Space Force to provide additional SSN upgrades.

This is the first of many partnerships Anduril intends to announce, including with other bus providers, Subramanian added.

This article was originally published on techcrunch.com

Technology

A Black MIT graduate has created a dance nonprofit for diversity in STEM


The nonprofit uses dance to make STEM education more accessible and exciting for diverse students, especially young Black girls.


A Black MIT graduate is using her love of dance to encourage diverse girls to explore science, technology, engineering and math (STEM).

Growing up on Long Island, New York, Yamileé Toussaint developed wide-ranging interests beyond STEM. Her parents, who studied medicine and engineering, encouraged her to dance while pursuing her education. While at MIT, the 2024 CNN Hero even led a dance group.

“It has always been a source of community, perseverance and determination,” she told CNN.

However, Toussaint quickly noticed that few Black women like her pursued this path. She saw only two, herself included, studying mechanical engineering at her university. That feeling of isolation never left Toussaint, and it led her to initiate change in her community back in New York.

“What struck me most was that I didn’t feel so special, that I should be one of two people,” she explained. “I felt like it should be different and it could be different… I just started thinking about a world where the benefits of dance can lead to the outcomes we are looking for in STEM.”

After becoming a teacher in East Brooklyn in 2008, Toussaint founded STEM From Dance four years later. The nonprofit uses dance to make STEM education more accessible and exciting for diverse students, especially young Black girls.

The organization runs school-year and summer programs that reach dancers who are less interested in math and science. The lessons work cohesively, with participants incorporating STEM projects as they choreograph their moves. By learning how to write code to control the LED strips that illuminate the dance floor, the girls see how science can improve their performances.

The program has expanded to nine cities in the US. Toussaint emphasized that the mission goes beyond filling representation gaps in STEM; the organization also hopes to remind young girls that they can do things that intimidate them.

“Through dance, we are able to create an atmosphere in which we feel comfortable,” Toussaint said. “With this space, we’re able to introduce something that seems a little intimidating… So when they’re faced with a difficult math problem, they’re reminded, ‘I can do hard things.’”

According to the Equal Employment Opportunity Commission, only 14.58% of women in STEM identified as Black or African American in 2019. While Black women remain underrepresented in the field, Toussaint and STEM From Dance hope to remove this barrier for the next generation.

“I believe that the solution to some of the world’s most pressing problems lies in having these girls in the room because they have a different set of life experiences,” Toussaint said. “They are creative, intellectual, curious, artistic and will bring a different set of ideas to the discussion, so we need to make sure those are included.”

Hopes for expansion remain alive, and fundraising on GoFundMe to support these dance enthusiasts and future STEM professionals continues.


This article was originally published on www.blackenterprise.com

Technology

Meta won’t say whether it trains AI on photos taken with its smart glasses


The AI-powered Meta Ray-Bans have a discreet front-facing camera that takes photos not only when you ask, but also when AI features are triggered by specific keywords, such as “see.” That means the smart glasses collect a lot of photos, both deliberately and inadvertently taken. But the company will not commit to keeping these photos private.

We asked Meta if it plans to train AI models on photos from Ray-Ban Meta users, as it does with photos from public social media accounts. The company wouldn’t say.

“We don’t discuss this publicly,” Anuj Kumar, a senior director working on AI-powered wearables at Meta, said in a video interview with TechCrunch on Monday.

“We don’t normally share this externally,” said Meta spokesperson Mimi Huggins, who also participated in the video call. When TechCrunch asked for clarification on whether Meta is training on these images, Huggins replied: “We’re not saying either way.”

This is especially concerning because Ray-Ban Meta is getting a new AI feature that will capture many passive photos. Last week, TechCrunch reported that Meta plans to launch a real-time video feature for Ray-Ban Meta. When activated by specific keywords, the smart glasses will stream a series of images (essentially, live video) into a multimodal AI model, allowing it to answer questions about the wearer’s surroundings in a low-latency, natural way.

That’s a lot of photos, and they are photos a Ray-Ban Meta user might not consciously realize they are taking. Say you asked your smart glasses to scan the contents of your closet to help you pick an outfit. The glasses effectively take dozens of photos of your room and everything in it, and upload them all to an AI model in the cloud.

What happens to those photos afterward? Meta won’t tell.

Wearing Ray-Ban Meta glasses also means you are wearing a camera on your face. As we discovered with Google Glass, that’s not something other people universally accept, to put it lightly. So you might think it would be obvious for a company doing this to say, “Hey! All your face camera photos and videos will be completely private, siloed to your face camera.”

But that is not what Meta is doing here.

Meta has already declared that it is training its AI models on the public Instagram and Facebook posts of every American. The company has decided this counts as “publicly available data,” and we may just have to accept that. It and other tech companies have adopted a very expansive definition of what is publicly available for them to train AI on, and what is not.

But the world you look at through your smart glasses is definitely not “publicly available.” While we can’t say for certain that Meta is training AI models on your Ray-Ban Meta camera footage, the company simply won’t say for certain that it isn’t.

Other AI model providers have more transparent policies on training with user data. Anthropic says it never trains on a customer’s inputs to, or outputs from, one of its AI models. OpenAI also says it never trains on user inputs or outputs through its API.

We have reached out to Meta for further clarification and will update this story if we hear back.

This article was originally published on techcrunch.com

Technology

Cruise receives a $1.5 million fine for concealing details of a pedestrian crash from the safety regulator


Cruise, the autonomous vehicle subsidiary of General Motors, must pay a $1.5 million penalty to the National Highway Traffic Safety Administration after its initial reports to the safety regulator about last year’s pedestrian crash omitted the fact that the company’s robotaxi dragged a woman 20 feet.

The penalty is part of a consent order announced by the regulator on Monday. The order, jointly agreed to by the company and NHTSA, will also require Cruise to submit a “corrective action plan” outlining the changes it has made to better comply with the regulator’s rules.

“It is critical for companies developing automated driving systems to prioritize safety and transparency from the outset,” NHTSA Deputy Administrator Sophie Shulman said in a statement.

Cruise will also have to submit safety reports to the regulator every 90 days for the next two years, including a report detailing any software updates and a report detailing how the robotaxi fleet complies with traffic regulations. NHTSA has the option to extend the consent order for an additional year.

Steve Kenner, Cruise’s chief safety officer, said in a statement that the consent order marks a “step forward in a new chapter” for the company and represents a “firm commitment to greater transparency in our interactions with our regulators.”

The consent order comes almost a year after the infamous San Francisco crash. The pedestrian was first struck by a human-driven vehicle and then landed in the path of the Cruise robotaxi. Although the Cruise AV braked, it still hit the pedestrian and stopped. The robotaxi then pulled to the side of the road, dragging the pedestrian with it.

Cruise and other AV companies are required to submit a series of reports to NHTSA whenever one of their vehicles is involved in a crash. According to NHTSA, the first report Cruise sent the day after the crash did not include any information about the woman being dragged. The regulator said the second report, due within 10 days of the crash, also omitted this information. It wasn’t until Cruise’s third report, filed a month after the crash, that NHTSA got the full picture.

At the time, the California Department of Motor Vehicles accused Cruise of withholding footage of the robotaxi dragging the pedestrian, which was the basis for the DMV’s suspension of Cruise’s operating permits.

In Monday’s consent order, NHTSA said Cruise “was aware of the post-crash behavior of the Cruise vehicle” when it filed the first two reports but “omitted this material information from the reports.”

Over the past year, Cruise has undergone a makeover: it has new management and fewer employees, and it is slowly getting its robotaxis back on the road for testing in multiple locations. In June, it paid a fine to the California Public Utilities Commission, and earlier this month it announced it was beginning to bring some AVs back to the Bay Area, though human-driven and only in Mountain View and Sunnyvale.

This article was originally published on techcrunch.com