
AWS CEO Matt Garman on generative AI, open source, and shutdown services

It was quite a surprise when Adam Selipsky stepped down as CEO of Amazon’s AWS cloud computing unit. Perhaps an equally big surprise was that he was replaced by Matt Garman. Garman joined Amazon as an intern in 2005 and became a full-time employee in 2006, working on early AWS products. Few people know the company better than Garman, whose last position before becoming CEO was as senior vice president of sales, marketing and global services for AWS.

Garman told me in an interview last week that he hasn’t made any significant changes to the organization yet. “Not much has changed in the organization. The business is doing quite well, so there is no need to make huge changes to anything we are focusing on,” he said. However, he identified several areas where he believes the company must focus and where he sees opportunities for AWS.

An emphasis on startups and rapid innovation

One of them, somewhat surprisingly, is startups. “I think we have evolved as an organization. … In the early days of AWS, our main focus was on how to really attract developers and startups, and we got a lot of traction there from the beginning,” he explained. “And then we started thinking about how do we appeal to larger businesses, how do we appeal to governments, how do we appeal to regulated sectors around the world? And I think one of the things that I just emphasized again – it’s not really a change – but I just emphasized that we can’t lose focus on startups and developers. We have to do all these things.”

The second area he wants the team to focus on is keeping up with the whirlwind of change in the industry.

“I have really emphasized with the team how important it is for us to continue to stay at the forefront in terms of the set of services, capabilities, features and functions that we have today – and to continue to lean forward and build a plan of action around real innovation,” he said. “I think the reason customers use AWS today is because we have the best and broadest set of services. The reason people turn to us today is because we continue to deliver industry-leading security and operational efficiency by far, and help them innovate and move faster. We must continue to execute on that plan. It’s not really a change in itself, but that is probably what I emphasized the most: how important it is for us to maintain the level of innovation and the speed at which we deliver products.”

When I asked him if he thought the company perhaps hadn’t innovated fast enough in the past, he said no. “I think the pace of innovation is only going to accelerate, so it’s just important to emphasize that we also need to accelerate the pace of innovation. It’s not that we’re losing it; it is simply an emphasis on how much we need to accelerate given the pace of technology.”

Generative AI at AWS

With the emergence of generative AI and technology changing rapidly, AWS must be “on the cutting edge of all of them,” he said.

Shortly after ChatGPT’s launch, many experts questioned whether AWS had been too slow to launch its own generative AI tools and had left the door open to competitors like Google Cloud and Microsoft Azure. However, Garman believes this was more perception than reality. He noted that AWS has long offered successful machine learning services like SageMaker, even before generative AI became a buzzword. He also noted that the company has taken a more thoughtful approach to generative AI than perhaps some of its competitors.

“We were looking at generative AI before it became a widely accepted thing, but I will say that when ChatGPT came out, it was kind of a discovery of a new area and of how to apply this technology. I think everyone was excited and energized by it, right? … I think a group of people – our competitors – were kind of racing to put chatbots on top of everything and show that they were leading the way in generative AI,” he said.

I think a group of people – our competitors – were kind of racing to put chatbots on top of everything and show that they were leading the way in generative AI.

Instead, Garman said, the AWS team wanted to take a step back and look at how its customers, whether startups or enterprises, could best integrate the technology into their applications and leverage their own, differentiated data. “They will need a platform that they can freely build on and think of as a platform to build on, rather than an application that they can customize. That’s why we took the time to build this platform,” he said.

In the case of AWS, that platform is Bedrock, which offers access to a wide selection of open and proprietary models. Just doing this – and allowing users to connect different models together – was a bit controversial at the time, he said. “But for us, we thought that was probably where the world was going, and now it’s certain that that is where the world is going,” he said. He said he thinks everyone will want custom models and to provide their own data for them.
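
For readers curious what “a platform to build on” looks like in practice, here is a minimal sketch – not from the interview – of calling two different hosted models through Bedrock’s provider-agnostic Converse API with boto3. The model IDs and region are illustrative; availability varies by account.

```python
import boto3

# Bedrock Runtime client; the region is an assumption for this sketch.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    """Send a single prompt to a Bedrock-hosted model via the Converse API."""
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# The same call shape works across proprietary and open models on the platform.
for model_id in (
    "anthropic.claude-3-haiku-20240307-v1:0",  # proprietary model (example ID)
    "meta.llama3-8b-instruct-v1:0",            # open-weight model (example ID)
):
    print(model_id, "->", ask(model_id, "In one sentence, what is a cloud platform?"))
```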

Garman said Bedrock is “growing like a weed right now.”

One problem with generative AI that AWS still wants to solve is cost. “A lot of this is doubling down on our custom silicon and making some other changes to the models so that the inference you’re going to be building into your applications is much cheaper.”

Garman said the next generation of AWS’s custom Trainium chips, which the company debuted at its re:Invent conference in late 2023, will launch later this year. “I’m really excited that we can really turn this cost curve around and start delivering real value to customers.”

One area where AWS hasn’t necessarily tried to compete with some of the other tech giants is in building its own large language models. When I asked Garman about this, he noted that these are still something the company is “very focused on.” He thinks it is important for AWS to have its own models while still using third-party models. But he also wants to ensure AWS’s own models can bring unique value and differentiation, either by leveraging its own data or “through other areas where we see opportunities.”

Among these areas of opportunity are costs, but also agents, which everyone in the industry seems optimistic about at the moment. “Having models that are reliable, at a very high level of correctness, and can call other APIs and do things – I think there is some innovation that can be done in this area,” Garman said. He says agents will unlock far more utility from generative AI by automating processes on behalf of their users.

Q, a chatbot powered by artificial intelligence

At its re:Invent conference in late 2023, AWS also unveiled Q, its generative AI-powered assistant. Currently, it comes in mainly two versions: Q Developer and Q Business.

Q Developer integrates with many of the most popular development environments and, among other things, offers code completion and tools for modernizing legacy Java applications.

“We really think of Q Developer as a broader sense of really helping throughout the developer lifecycle,” Garman said. “I think a lot of early developer tools focused on coding, and we’re thinking more about how do we help with everything that’s painful and labor-intensive for developers?”

At Amazon, teams used Q Developer to update 30,000 Java applications, saving $260 million and 4,500 years of developer labor, Garman said.

Q Business uses similar technologies under the hood, but focuses on aggregating internal company data from many different sources and making it searchable through a ChatGPT-like Q&A service. The company is “seeing real traction there,” Garman said.

Shutting down services

While Garman noted that not much has changed under his leadership, one thing that has happened recently at AWS is that the company announced plans to shut down some of its services. Traditionally, AWS hasn’t done this very often, but this summer it announced plans to shut down services like its Cloud9 web IDE, its GitHub competitor CodeCommit, CloudSearch and others.

“It’s sort of a clean-up where we looked at a number of services and either, frankly, introduced a better service that people should move to, or we launched one that we just didn’t get right,” he explained. “And by the way, some of them we just don’t do well, and their traction has been quite poor. We looked at it and said, ‘You know what? The partner ecosystem actually has a better solution, and we intend to build on it.’ You can’t invest in everything. You can’t build everything. We don’t like to do that. We take it seriously if companies want to rely on us to support their business over the long term. That’s why we’re very careful about it.”

AWS and the open source ecosystem

One relationship that has long been difficult for AWS – or at least has been perceived as difficult – is its relationship with the open source ecosystem. That is changing, though, and just a few weeks ago AWS contributed its OpenSearch code to the Linux Foundation and the newly formed OpenSearch Foundation.

We love open source. We rely on open source. I think we try to take advantage of the open source community and make a huge contribution back to the open source community.

“I think our view is pretty simple,” Garman said when I asked him what he thought about the future of the relationship between AWS and open source software. “We love open source. We rely on open source. I think we try to take advantage of the open source community and make a huge contribution back to the open source community. I think that is what open source is all about – benefiting from the community – and that is why we take it seriously.”

He noted that AWS has made key investments in open source software and in many open source projects of its own.

“Most of the friction has come from companies that originally started open source projects and then decided to sort of take them away from open source, and I think they have the right to do that. But you know, that is not the true spirit of open source. Whenever we see people doing this – take Elastic as an example, and OpenSearch (AWS’ Elasticsearch fork) is quite popular. … If there is a Linux (Foundation) project, or an Apache project, or anything else that we can build on, we want to build on that; we contribute to them. I think we have evolved and learned as a company how to be a good steward of this community, and I hope that has been noticed by others.”

This article was originally published on techcrunch.com


Australian government withdraws disinformation law

The Australian government has withdrawn a bill that would have imposed penalties on online platforms of up to 5 percent of their global revenue if they failed to prevent the spread of disinformation.

The bill, backed by the Labor government, would have enabled the Australian Communications and Media Authority to create enforceable rules on disinformation on digital platforms.

In a statement, Communications Minister Michelle Rowland said the bill would “provide an unprecedented level of transparency, holding big tech accountable for its systems and processes to prevent the spread of harmful misinformation and disinformation online.”

However, she said that “based on public statements and conversations with senators, it is clear that there is no way this proposal could be passed through the Senate.”

When a revised version of the bill was introduced in September, Elon Musk, the owner of X (formerly Twitter), criticized it in a one-word post: “Fascists.”

Shadow communications minister David Coleman was a vocal opponent of the bill, arguing it could encourage platforms to suppress free speech to avoid penalties. With the bill now seemingly dead, Coleman said it was a “shocking attack on free speech that betrayed our democracy” and called on the Prime Minister to “rule out any future version of this legislation.”

Meanwhile, Rowland in her statement called on Parliament to support “other proposals to strengthen democratic institutions and keep Australians safe online,” including laws to combat deepfakes, enforcement of “truth in political advertising during elections,” and regulation of artificial intelligence.

Prime Minister Anthony Albanese is also moving forward with a plan to ban children under 16 from using social media.

This article was originally published on techcrunch.com


Department of Justice tells Google to sell Chrome

Welcome back to Week in Review. This week, we look at how the Department of Justice told Google to sell Chrome to break up its monopoly, whether OpenAI accidentally deleted potential evidence in a copyright lawsuit filed by The New York Times, and how artificial intelligence companies are taking advantage of a TikTok trend with their research tools. Let’s get into it.

The U.S. Department of Justice argued that Google should divest its Chrome browser to help break up the company’s illegal monopoly on online search. U.S. District Court Judge Amit Mehta ruled in August that Google holds an illegal monopoly, abusing its power in the search industry, and the Department of Justice’s latest filing says Google’s ownership of Android and Chrome poses a “significant challenge” to pursuing remedies aimed at establishing a competitive search engine market.

Anthropic raised an additional $4 billion from Amazon and agreed to make Amazon Web Services the primary training partner for its flagship generative artificial intelligence models. Anthropic is also working with Annapurna Labs, AWS’s chip-making division, to develop future generations of Trainium accelerators, the custom AWS chips for training artificial intelligence models. Amazon’s latest cash injection brings the tech giant’s total investment in Anthropic to $8 billion.

OpenAI accidentally deleted potential evidence in The New York Times and Daily News’ copyright lawsuit, the publishers’ lawyers say. As part of the lawsuit, OpenAI agreed to provide two virtual machines so the publishers’ counsel could search for copyrighted content in its AI training sets. However, in a letter, lawyers for the publishers claim that OpenAI engineers deleted all of the publishers’ search data stored on one of the virtual machines.



News


Kim Kardashian meets Optimus: The fashion mogul had hands-on experience with Tesla’s bipedal humanoid robot. In videos posted to X, Kardashian encourages Optimus to make a heart out of his hand, dance like he’s at a luau and play rock, paper, scissors. Read more

Oura’s valuation exceeds $5 billion: The smart ring maker has received a $75 million investment from glucose device maker Dexcom. The investment, which constitutes Oura’s Series D financing round, raises the company’s valuation to over $5 billion. Read more

Let’s throw a party for Partiful: The customizable event planning app challenges legacy players like Evite, Eventbrite, and Facebook Events, is a favorite among Gen Z users, and was just named a top app of 2024 by Google. Read more

Talk to me in your language: Microsoft will soon allow Teams users to clone their voices so that they can talk to others in up to nine languages: English, French, German, Italian, Japanese, Korean, Portuguese, Mandarin Chinese and Spanish. Read more

Hackers attack Andrew Tate: According to The Daily Dot, hackers breached an online course founded by the influencer and self-professed misogynist, exposing data on nearly 800,000 users. Tate is currently under house arrest awaiting trial on sex trafficking and rape charges. Read more

What makes a bank a bank? The U.S. Consumer Financial Protection Bureau has ruled that all digital services that handle significant volumes of transactions should be subject to bank-style supervision, which could impact Apple Pay, Cash App, Google Pay, PayPal and Venmo. Read more

A more conversational Siri: According to sources cited by Bloomberg, Apple is developing a new version of Siri based on advanced large language models in an attempt to catch up with more natural-sounding competitors such as Google’s Gemini Live. Read more

Making money off TikTok brainrot: Several AI-powered research tools are taking advantage of the “PDF to Brainrot” trend, in which the text of an uploaded document is read aloud in a monotone voice against a backdrop of “weirdly satisfying” vertical videos like Subway Surfers gameplay. Read more

Threads takes on Bluesky: As Bluesky’s user base surpasses 20 million, Instagram Threads has begun rolling out a new feature called custom feeds to capitalize on user demand for more personalization. Read more

ChatGPT in the classroom: OpenAI has released a free online course to help elementary and middle school teachers learn how to introduce ChatGPT into their classrooms. However, some educators are concerned about the technology and its potential for error. Read more

Do we need another daily word game? Normally I’m an evangelist for word games and crosswords, but I feel like we’re quickly approaching market saturation. Netflix has launched a new daily word puzzle game in partnership with TED called TED Tumblewords. Read more

Analysis


Please don’t send X-ray images to the chatbot: People often turn to generative AI chatbots to ask questions about their health concerns and to better understand their health. Since October, X users have been encouraged to upload their X-rays, MRIs and PET scans to the AI-powered chatbot Grok to help interpret the results. Medical data is a special category subject to federal protections that, generally, only you can waive. But just because you can doesn’t mean you should. As Zack Whittaker writes, it’s worth remembering that what goes on the internet never leaves it. Read more

This article was originally published on techcrunch.com


How a digital “you” can sit through your torturous online conference calls

Now you can look like you’re on a Zoom call in your office even while you’re sipping a margarita in a hammock far, far away. Courtesy of a months-old startup called Pickle, the premise is simple: upload a five-minute training video of yourself to create an avatar, and 24 hours later you’re seemingly ready to go. Want to take the call from your car? That can be your secret. Too lazy to get out of bed? No problem. At the beach club? You’re probably pushing it, although judging by the demo video, that’s not the only problem that needs solving. (The service is currently available in Basic, Standard and Professional tiers, with prices ranging from $300 to $1,150 per year.)

The technology, backed by Los Angeles-based Krew Capital, currently only works with macOS, Pickle says, but a Windows version is expected next month. As for the conferencing apps customers can choose from, they include Zoom, Google Meet and Teams, according to Pickle. However, you will have to wait to use them. According to the website, “due to high demand, clone generation is currently delayed.”

This article was originally published on techcrunch.com