Technology

Fei-Fei Li chooses Google Cloud, where she led artificial intelligence, as the primary provider of computing solutions for World Labs

Cloud service providers are chasing AI unicorns, and the latest is Fei-Fei Li’s World Labs. The startup just chose Google Cloud as its primary computing provider for training artificial intelligence models, a move that could be worth hundreds of millions of dollars. But the company says Li’s tenure as Google Cloud’s chief artificial intelligence scientist had nothing to do with the deal.

Google Cloud announced at its Startup Summit on Tuesday that World Labs will devote a large portion of its funds to renting GPU servers on the Google Cloud Platform, ultimately to train “spatially intelligent” artificial intelligence models.

A handful of well-funded startups building foundational AI models are in high demand in the world of cloud services. The largest deals include OpenAI, which trains and runs AI models exclusively on Microsoft Azure, and Anthropic, which uses AWS and Google Cloud. These companies often pay hundreds of millions of dollars for computing services, and they may eventually need even more as they scale their AI models. This makes them valuable customers for Google, Microsoft, and AWS to build relationships with from the start.

World Labs is certainly building novel, multimodal AI models with significant computational needs. The startup just raised $230 million at a valuation of over $1 billion, in a deal led by A16Z, to build world AI models. Google Cloud’s general manager of startups and AI, James Lee, tells TechCrunch that World Labs’ AI models will eventually be able to process, generate, and interact with video and geospatial data. World Labs calls these AI models “spatial intelligence.”

Li has deep ties to Google Cloud, having led the company’s artificial intelligence efforts until 2018. However, Google denies that the deal is a result of this relationship and rejects the idea that cloud services are merely a commodity. Instead, Lee said the bigger factor is its services, such as a high-performance toolkit for scaling AI workloads and a large supply of AI chips.

“Fei-Fei is obviously a friend of GCP,” Lee said in an interview. “GCP wasn’t the only option they were considering. But for all the reasons we talked about – our AI-optimized infrastructure and ability to meet their scalability needs – they ultimately came to us.”

Google Cloud offers AI startups a choice between its proprietary AI chips, called tensor processing units (TPUs), and Nvidia GPUs, which Google buys and which are in more limited supply. Google Cloud is trying to convince more startups to train AI models on TPUs, largely to reduce its dependence on Nvidia. All cloud providers today are constrained by the shortage of Nvidia GPUs, which is why many are building their own AI chips to meet demand. Google Cloud says some startups train and run inference exclusively on TPUs, but GPUs remain the industry’s favorite AI training chips.

As part of this agreement, World Labs has chosen to train its artificial intelligence models on GPUs. However, Google Cloud didn’t say what drove that decision.

“We had been working with Fei-Fei and her product team, and at this point in the product roadmap it made more sense for them to work with us on the GPU platform,” Lee said in an interview. “But that doesn’t necessarily mean it’s a permanent decision… Sometimes (startups) move to other platforms like TPU.”

Lee didn’t reveal how large World Labs’ GPU cluster is, but cloud providers often dedicate huge supercomputers to startups training artificial intelligence models. Google Cloud promised another startup training foundational AI models, Magic, a cluster with “tens of thousands of Blackwell GPUs,” each more powerful than a high-end gaming PC.

These clusters are easier to promise than to deliver. According to reports, Microsoft, a competitor to Google Cloud, is struggling to meet OpenAI’s enormous computational demands, forcing the startup to turn to other sources of computing power.

World Labs’ contract with Google Cloud is non-exclusive, meaning the startup can still strike deals with other cloud service providers. Google Cloud, however, says it expects to host most of the startup’s operations.

This article was originally published on: techcrunch.com

Technology

The undeniable connection between politics and technology

Written by April Walker

Politics and Technology: An Undeniable Union

In the modern era, the intersection of politics and technology has become an undeniable union, shaping the way societies function and the course of political processes. Rapid advances in technology have not only transformed communication and the dissemination of information, but also redefined political engagement, campaigning, and governance. Given today’s political climate, combined with the rise of artificial intelligence and the countless communication platforms at our disposal, let’s explore the profound impact of technology on politics, focusing on the digitization of grassroots movements, the threats posed by hackers and deepfakes, the battle between social media and traditional media, and the future of post-election politics.

Grassroots has gone digital

Historically, grassroots movements have relied on face-to-face interactions, community meetings, and physical rallies to mobilize support and drive change. However, the advent of digital technology has revolutionized these movements, enabling them to reach wider audiences with unprecedented speed and efficiency. Social media platforms, email campaigns, and online petitions have become powerful tools for local organizers. These digital tools enable the rapid dissemination of information, real-time updates, and the ability to mobilize supporters across geographic boundaries. For example, the 2024 presidential election saw unprecedented use of tools like Zoom to unite communities in support of United States Vice President and presidential candidate Kamala Harris and to raise multimillion-dollar donations for her campaign. From the Zoom calls that began with Black women to those organized by “white dudes” and everything in between, we have witnessed how, in a very short period of time, these communities not only raised an enormous amount of money but also helped amplify her message while consistently refuting the disinformation and lies spread by her opponent.

This is not the first time the power of social media to organize protests and gather international support has been demonstrated; consider the Arab Spring of 2010–2011. Platforms such as Twitter (now known as X) and Facebook played a key role in coordinating demonstrations and sharing real-time updates, which ultimately contributed to significant political change in the region. Similarly, contemporary movements such as Black Lives Matter have harnessed the power of digital technology to amplify their message, organize protests, and raise awareness on a global scale.

Hackers and Deepfakes

While technology has empowered political movements, it has also introduced new threats to the integrity of political processes. Hackers and deepfakes pose two significant challenges in this regard. Cyberattacks on political parties, government institutions, and electoral infrastructure are becoming more common, threatening the security and integrity of elections. In 2016, the Russian hack of the Democratic National Committee (DNC) highlighted the vulnerability of political organizations to cyber threats. Our current electoral process faces an even more direct and immediate threat. Advances in artificial intelligence, and the sheer proliferation of its use, have created an increasingly sophisticated and complex network of “bad actors,” especially from other countries, who are using the technology to try to influence the outcome of the US presidential election.

Deepfakes, manipulated videos or audio recordings that appear authentic, present another disturbing challenge. These sophisticated falsehoods can spread disinformation, discredit political opponents, and manipulate public opinion. Just look at the recent AI-generated photos of music superstar Taylor Swift, which falsely portrayed her as supporting Republican presidential candidate Donald Trump when, in reality, Swift has publicly expressed her support for Kamala Harris. There is growing concern about the possibility that deepfakes could undermine trust in political leaders and institutions. As technology continues to advance, the ability to detect and address these threats becomes crucial to maintaining the integrity of democratic processes.

Social media versus traditional media

The rise of social media has fundamentally changed the landscape of political communication. Traditional media such as newspapers, television, and radio once held a monopoly on the dissemination of information. Social media platforms, however, have democratized the flow of information, enabling individuals to share news, opinions, and updates instantly. This change has both positive and negative implications for politics.

On the positive side, social media allows politicians to communicate directly with the public, promoting transparency and consistent engagement. Politicians can use platforms like Instagram, TikTok, LinkedIn, Facebook, and others to share their views, respond to voters, and mobilize support – and, in many cases, to market to specific demographics that are critical to closing gaps in areas requiring coordinated focus and attention. However, the unregulated nature of social media also enables the spread of disinformation, fake news, and “echo chambers” where individuals are only exposed to information that reinforces their existing beliefs. If you’re curious whether this is true, take a look at the very different messages you read on platforms supporting each political party – it’s striking how starkly the narratives can differ.

With unwavering editorial standards and robust fact-checking processes, traditional media continues to play a key role in providing trusted information. However, it struggles to compete with the speed and global reach of social media. The coexistence of these two types of media creates a complex information ecosystem that demands critical thinking, intentional skepticism, and media literacy from society.

What’s next after the elections?

As we approach the upcoming Presidential Election Day – and, for that matter, even after that day – attention will shift to managing and implementing campaign promises. Technology continues to play a key role at this stage, with governments using digital tools to ensure effective administration, transparency, and citizen engagement.

In my opinion, the post-election period and current policies will have to confront the challenges posed by disinformation and cyber threats. Governments and organizations must invest comprehensively in cybersecurity measures, digital literacy programs, and regulatory frameworks to protect the integrity of political processes. As technology evolves at an extremely rapid pace, the future of politics will likely see its continued integration, with an emphasis on balancing its benefits against the need to protect democratic values, institutions and, most importantly, public trust! That said, we cannot be afraid of technology; we must seize it, because innovations like artificial intelligence and GenAI are creating competitive opportunities for our country that are unimaginably powerful and have the potential to positively change the course of humanity. During the Democratic National Convention, Vice President Kamala Harris noted in her acceptance speech: “I will make sure that we lead the world into the future on space and artificial intelligence, that America, not China, wins the competition for the 21st century, and that we strengthen, not abdicate, our global leadership.”



This article was originally published on: www.blackenterprise.com

Technology

Alaska Airlines venture capital lab creates its first startup: Odysee

Odysee CEO Steve Casley sees dollar signs in data – more specifically, in artificial intelligence software that can analyze vast amounts of data to help commercial airlines profit from complex flight schedules.

That’s exactly what Odysee, the first startup born from the aviation-focused venture lab created by Alaska Airlines and UP.Labs, is doing. The two companies launched the venture lab last year to create startups designed to solve specific problems in air travel, such as guest experience, operational efficiency, aircraft maintenance, routing, and revenue management. Odysee said it raised $5 million in a pre-seed round led by UP.Partners, a Los Angeles-based VC firm affiliated with UP.Labs. Alaska Star Ventures, which launched in October 2021, has invested $15 million in UP.Partners’ inaugural early-stage fund.

According to Casley, Alaska Airlines CEO Ben Minicucci flagged the scheduling problem early on. And no wonder. While there is software available to analyze flight data and plan schedules, Casley says those tools lack the kind of real-time data – and, most importantly, the revenue forecasts – that Odysee produces.

“You need tools to make better decisions, because typically in airlines, schedule changes are made by experienced planners who do it intuitively,” Casley said in a recent interview. “I wouldn’t say they’re flying by the seat of their pants, because a lot of the time they’ll be right – they’ve seen both good and bad changes. But they never actually had the data to back up their decisions.”

According to the company, the software can run hundreds of simulations in seconds to quickly determine how schedule changes would impact revenue, profit, and reliability.

“There are other optimizers, but to my knowledge none of these models or any optimization company offers revenue forecasts,” Casley said.

The machine learning model built by Odysee incorporates roughly 42 attributes, covering everything from departure time and day to traffic volumes on a given route and competitors’ schedules. In early simulations, the startup found it could save Alaska hundreds of thousands of dollars with just one schedule change.
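The report doesn’t describe Odysee’s internals, but the general pattern it outlines – scoring many candidate schedule changes against a trained revenue model and keeping the best one – can be sketched in a few lines of Python. Everything below (the attribute names and the `predicted_revenue` stand-in for a trained model) is a hypothetical illustration, not Odysee’s actual software.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Flight:
    # A few illustrative attributes; Odysee reportedly uses roughly 42.
    departure_hour: int
    day_of_week: int
    route_traffic: float     # passenger demand on the route
    competitor_flights: int  # competing departures in the same window

def predicted_revenue(flight: Flight) -> float:
    """Hypothetical stand-in for a trained ML revenue model."""
    peak_bonus = 1.15 if 7 <= flight.departure_hour <= 9 else 1.0
    competition_penalty = 0.97 ** flight.competitor_flights
    return 50_000 * peak_bonus * competition_penalty * flight.route_traffic / 100

def best_schedule_change(flight: Flight) -> tuple[Flight, float]:
    """Score every candidate departure time and keep the most profitable one."""
    baseline = predicted_revenue(flight)
    candidates = [replace(flight, departure_hour=h) for h in range(5, 23)]
    best = max(candidates, key=predicted_revenue)
    return best, predicted_revenue(best) - baseline

flight = Flight(departure_hour=14, day_of_week=2, route_traffic=120.0, competitor_flights=3)
best, uplift = best_schedule_change(flight)
print(f"Move departure to {best.departure_hour}:00 for +${uplift:,.0f} predicted revenue")
```

A production system would replace `predicted_revenue` with a model trained on historical data and search across all of the attributes at once, not just departure time.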

Odysee is currently conducting user acceptance testing with Alaska. Once that is complete, Alaska will begin a trial of the software, which Casley said will start in late October.

That’s a short timeframe, considering UP.Labs and Alaska Airlines established the aviation lab only a year ago. A fast path to commercial products is one of the main selling points of UP.Labs. The firm, which first launched in 2022, is structured as a venture lab with a new type of financial investment vehicle. It partners with large corporations such as Porsche, Alaska Airlines, and most recently J.B. Hunt to launch startups with new business models aimed at solving each industry’s biggest problems. Each partnership is meant to create six startups over three years.

Under the UP.Labs structure, these startups will not be created solely to serve the corporate partner – in this case, Alaska Airlines. Rather, they will operate independently as commercial enterprises from the outset, ultimately generating revenue from selling products or services across the industry.

This article was originally published on: techcrunch.com

Technology

Open-source BI platform Lightdash gains Accel’s support in bringing artificial intelligence to business analytics

Lightdash founders Hamzah Chaudhary and Oliver Laslett

Lightdash, a business intelligence (BI) platform positioned as an open-source alternative to Google’s Looker, is unveiling a new product that lets companies train “AI analysts” for individual teams, enabling anyone in the company to query aggregated business data.

To help get there, the four-year-old startup also announced Tuesday that it has raised $11 million in a Series A funding round led by Accel.

Lightdash is built on dbt (data build tool), an open-source command-line data transformation tool that relies on SQL and helps companies turn raw data into structured, analysis-ready datasets. The company, then known as Hubble, went through Y Combinator’s (YC) summer 2020 batch, with a particular emphasis on testing companies’ data warehouses to identify data quality issues. As it turned out, those metrics were most useful inside BI tools, so co-founder and CEO Hamzah Chaudhary pivoted the product and rebranded it as Lightdash in 2021.

For context, “business analytics” describes the process of combining and integrating disparate datasets to derive insights, identify trends, and predict future outcomes. The Lightdash platform serves as both front end and back end: people inexperienced in SQL, such as marketing or finance teams, can access the visual components through the interface, while more technical users can use the back end to create customized workflows and define all the business logic needed for reporting.
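The article doesn’t show Lightdash’s configuration format, but the front-end/back-end split it describes follows the general semantic-layer pattern common to BI tools: a technical user defines a metric’s business logic once, and the platform compiles point-and-click requests into SQL. Here is a minimal Python sketch of that pattern; the `Metric` class and `compile_query` helper are illustrative assumptions, not Lightdash’s actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    """Business logic for one metric, defined once by a technical user."""
    name: str
    description: str  # metadata an AI analyst could later rely on
    sql: str          # the aggregation expression
    table: str

# The "back end": a curated catalog of governed metrics.
METRICS = {
    "total_revenue": Metric(
        name="total_revenue",
        description="Sum of completed order amounts, net of refunds",
        sql="SUM(amount) - SUM(refunded_amount)",
        table="orders",
    ),
}

def compile_query(metric_name: str, group_by: str) -> str:
    """Compile a point-and-click request (metric + dimension) into SQL."""
    m = METRICS[metric_name]
    return (
        f"SELECT {group_by}, {m.sql} AS {m.name}\n"
        f"FROM {m.table}\n"
        f"GROUP BY {group_by}"
    )

# The "front end": a marketing user picks "total_revenue by region" in the UI,
# which implies this SQL.
print(compile_query("total_revenue", group_by="region"))
```

The design point is that non-technical users never write the `SUM(...)` logic themselves; they only pick from metrics someone technical has already defined.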

This ties in with Lightdash’s newest launch, a feature that will let any team member ask natural-language questions about company data and receive “curated insights” relevant to their department.

“For example, the finance team will have an AI analyst who will only have access to the data, metrics and content that is relevant to them,” Chaudhary explained to TechCrunch via email. “They can interact with their AI analyst in natural language, dramatically reducing the time it takes to get insights in the form of a chart, spreadsheet or dashboard.”

Lightdash AI Analyst. Image credits: Lightdash

One obstacle to enterprises fully embracing generative AI is the thorny issue of data security; companies are cautious about sharing confidential corporate data. However, Chaudhary says the company’s AI analyst is powered by the same Lightdash API used in its standard product, meaning companies already comfortable with Lightdash’s permissions model don’t take on any additional risk by using its AI.

“Data permissions and management is one of the key obstacles preventing larger companies from implementing these tools, and with Lightdash’s AI analyst, these production capabilities are available right out of the box,” Chaudhary said. “It’s worth recognizing: it’s not a completely new query engine for customer data, it’s actually a completely new way of interacting with our existing query engine.”

Additionally, the AI analyst largely doesn’t need access to actual customer data, Chaudhary added, because it relies on metadata, such as a metric’s title and description, for most of its analysis. “Clients have full control over what information they want to share with the LLM,” he said.
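The article doesn’t spell out the mechanics, but the pattern Chaudhary describes – the LLM sees only metric metadata and picks a metric, while the existing permission-aware query engine is what actually touches the data – might look roughly like the sketch below, where `choose_metric` is a purely illustrative stand-in for an LLM call.

```python
# Minimal sketch: the LLM sees only metric metadata, never row-level data.
CATALOG = [
    {"name": "total_revenue", "description": "Net revenue from completed orders"},
    {"name": "active_users", "description": "Unique users active in the last 30 days"},
]

def choose_metric(question: str) -> str:
    """Illustrative stand-in for an LLM call.

    In a real system the prompt would contain only the CATALOG metadata,
    so the model provider never receives actual customer data.
    """
    words = {w.strip("?.,") for w in question.lower().split() if len(w) > 3}
    scored = [
        (len(words & set(m["description"].lower().split())), m["name"])
        for m in CATALOG
    ]
    return max(scored)[1]

def answer(question: str) -> str:
    metric = choose_metric(question)
    # The existing, permission-aware query engine runs the chosen metric;
    # the LLM never executes queries itself.
    return f"query engine result for metric: {metric}"

print(answer("How much revenue did we get from completed orders?"))
```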

Moreover, Chaudhary says customers can choose their preferred LLM provider, including the likes of OpenAI and Anthropic, or bring their own model, which should allay any lingering concerns about opening up access to the company’s sensitive data.

In the cloud

Since announcing its commercial launch and $8.4 million in seed funding two years ago, Lightdash has introduced a hosted cloud service for its core open-source product, with additional features including security tools. Chaudhary says more than 5,000 teams currently use the open-source product on their own, though it’s often a starting point before upgrading to the full feature set available in the commercial version.

“Larger teams have successfully used the OSS product to perform proofs of concept without being bogged down by information and procurement reviews,” Chaudhary said. “This allows companies to decouple the purchasing process from Lightdash testing, dramatically lowering the barrier to trialing the tool and building internal Lightdash champions before moving to a cloud product. Lightdash OSS also provides hobbyists and smaller teams with an easy introduction to BI as it provides a complete set of features to help you get started. As teams grow, they prefer a cloud platform for managed deployment, additional features, and better performance and security.”

Chaudhary claims Lightdash has grown its revenue sevenfold in the last year, and its clients include the $60 billion enterprise software company Workday, as well as Beauty Pie, Hypebeast, and Morning Brew.

Lightdash says its global team of 13 employees is split between Europe and the United States, and with the infusion of fresh money, the company intends to expand both its team and its product, adding new features along the lines of what it’s currently rolling out with its AI analysts.

In addition to lead investor Accel, Lightdash’s Series A round included participation from Operator Partners, Shopify Ventures, Y Combinator, and a handful of angel investors.

This article was originally published on: techcrunch.com