
Hammerspace CEO On Storage For AI, New $100M Funding Round And IPO Plans


‘Last year, we saw 1,000 percent growth in our sales. We are growing very fast. [An IPO] could be soon. And I believe that we need to be a public company as soon as possible because we sell to very large companies,’ says Hammerspace founder and CEO David Flynn.

Hammerspace, developer of a high-performance platform for storing, managing and migrating data for AI, this week unveiled a massive $100 million round of funding.

The funding round is unique in this industry in that it was led not by venture capital companies but by investors normally focused on public companies, said Hammerspace founder and CEO David Flynn.

“What attracts me is I’m working with long-term investors who will be supportive and hold or even accumulate a bigger position once we IPO, unlike the pure venture funds where IPO is the end of the journey and they want to sell,” Flynn told CRN.

[Related: The 2025 Storage 100]

That long-term view is also important as Hammerspace looks at holding an IPO, a move Flynn said he would like to see happen in the relatively near future.

“I believe that we need to be a public company as soon as possible because we sell to very large companies,” he said. “After Snowflake’s IPO, there literally was an inflection point [of high growth] once they became public. The same happened with Data Domain. When you sell to large enterprises, they feel much more comfortable buying from you when you’re a public company. It shows that you’re there to stay. So we anticipate needing to be a public company to help accelerate growth and doing that in fairly short order.”

In the meantime, Hammerspace will continue to focus on developing the high-performance storage technology that Flynn said is needed as AI goes mainstream.

“The thing about AI is it demands performance,” he said. “People don’t have time to build out exotic HPC [high-performance computing] file systems. … Because AI is going mainstream, there’s not enough talent on the planet for everybody to be able to install it and make it work. So it has to be plug-and-play, easy to use.”

There’s a lot going on at Hammerspace and the quest for AI-focused storage. Here is more of CRN’s discussion with Flynn.


How do you define Hammerspace?

We’re a high-performance data platform. We speed up the use of data in AI systems, and uniquely we do that at every stage, from the installation, setup and configuration to getting data out of existing storage systems, to orchestrating the data across data centers and then feeding the high-performance data to GPUs, and even positioning the data into the GPU servers for local access. It’s a data platform that accelerates data inside of storage. We accelerate that data to be available at unprecedented levels of performance that are necessary for GPU computing.

This isn’t just a high-performance file system, so to speak. This is uniquely the only one that’s actually native to Linux, so you don’t have to install any software. It’s the only one that can use your existing storage. It’s the only one that can use the local flash inside of the GPU servers as part of it. So we can incorporate your existing storage. We can incorporate the storage inside of GPU servers without deploying any special software. It uses native Linux because we are the first to be using the new parallel NFS [network file system] that we uniquely built into the standard and into the kernel. My CTO is the kernel maintainer of the NFS stack. We enhanced NFS, and that allows us to turn NFS into the first-ever high-performance HPC-class parallel file system that’s native to Linux. That’s why I can make the claim that, unlike anything on the planet, we are much easier to deploy because you can come to the table with your existing storage and with your existing Linux, and you don’t have to add any software or new storage. Our system can then orchestrate data and move it across systems and across data centers from behind a single pane of glass and namespace. We can create a globally unified file system. That means you don’t have to wait to copy things around. There’s no downtime, not even to get that data into Hammerspace. You can simply point to the existing data in your storage, and we can immediately start serving it without waiting, even to scan all of the metadata. …

Meta has been using this for building Llama. They’ve blogged about that. Other big players in the AI arms race building foundation models have used Hammerspace. Numerous companies use Hammerspace not just for AI and GPU but for running their businesses. It has been a dream in the industry to be able to separate or abstract data from the storage and make data available everywhere with the utmost crazy performance levels and from a single namespace.


You mentioned working with GPUs, but the technology wasn’t originally developed with GPUs in mind, right?

Actually, some of our earliest customers were in the media and entertainment space doing rendering and special effects and animation. They were using GPUs before they were a sexy part of AI. So yes, our technology was designed for GPUs but focused on doing special effects animation rendering as one of many use cases. It works not just with GPUs, but any data-intensive or HPC kind of workloads, or anything that requires high performance. Data feeding AI just happens to be what’s pushing that into the mainstream.

What did you have to do to make the technology AI-ready?

Beyond the feat of having a parallel, scalable file system, the thing about AI is it demands performance. People don’t have time to build out exotic HPC file systems. We’ve had file systems made to be super-fast, like Lustre, which was created by Los Alamos, or IBM’s GPFS, for supporting HPC. But because AI is going mainstream, there’s not enough talent on the planet for everybody to be able to install it and make it work. So it has to be plug-and-play, easy to use. It needs the convenience of enterprise NAS with the performance of these HPC file systems. Everybody knows how to configure and run NFS, and with Hammerspace you can have that convenience and yet get the scalable performance that used to require those exotic file systems. AI going mainstream means this has to be convenient. It has to just work. We can’t have everybody building out their own science project to make it work.


There’s an interesting story behind the name ‘Hammerspace.’ Talk about that.

A ‘hammerspace’ is an extra-dimensional universe where things can be instantly retrieved and is infinite in size. Think of it as when a magician pulls something out of the hat. Where did it come from? That was hammerspace. We use the concept of hammerspace to represent data being pulled out of thin air. It’s infinitely fast because it’s right there, and it’s infinite in size, and it’s available from wherever you go. It has the characteristic of decoupling data from the physical world, just like the hammerspace concept. One way to think of it is, hyperspace is sci-fi for traveling faster than the speed of light by using an alternate universe, right? ‘Hammerspace’ is for storage where hyperspace is for travel.

We didn’t create the term. It’s a fan fiction invention. It was actually talked about extensively in the ‘Spider-Man: Into the Spider-Verse’ cartoon. Our company was founded well before that. So it was great to have Disney come out with ‘Spider-Man: Into the Spider-Verse’ and talk about hammerspace. … The term originated from some Japanese cartoons where a character would pull a big mallet out of nowhere to whack another character. And the joke was, where did that come from? Where was he carrying that big hammer? And so they started saying, ‘Oh, it was in hammerspace, the space for carrying hammers in this alternate universe.’ It’s like when you’re playing first-person shooter games and carrying tons of massive weapons. How can you fit all of that? Well, it’s in a magic pack that lets you fit an infinite amount of stuff in a finite amount of space and carry it with you in a weightless kind of way. That’s the way we think about it. Long story short, it is data not bound by data gravity, not stuck in storage. That data lives in hammerspace because it’s no longer a prisoner to its own mass or to data gravity.

Talk about the big investment in Hammerspace.

Hammerspace in the past was somewhat uniquely funded. My success with my former company, Fusion-io, allowed me to fund Hammerspace myself in the early stages. I personally invested nearly $20 million of my own capital to get the company off the ground. We have taken many partners along the way. Some of those strategic partners, like Saudi Aramco, led an investment. But this is the first round of investment in Hammerspace that’s led by a purely financial institution. And it’s not just any. It’s a major, major player in the AI world, Altimeter Capital, founded by Brad Gerstner. They led this new $100 million investment round. That’s not a small amount of capital. And in a way, it’s technically our B round. In addition, there’s Cathie Wood and ARK Invest along with Millennium Capital. The unique thing is that all three of these—Altimeter, ARK and Millennium—are actually in the public markets. They venture into the venture capital world only as a side part of their businesses. They’re not venture-first. They’re public-company-first. These are their crossover funds.

That’s the exciting part for us. They’re focused more on investing in the public markets, which means they would be supportive through an IPO. Pure venture guys tend to want to sell at an IPO, and that leads to shareholder turnover. I like the fact that these guys are public company investors because that makes them buyers when we have our IPO, and that’s a good thing. They’re looking at it for the long term, not just to get to an IPO and then sell.

Unlike traditional venture capital, all of these players have major stakes in public companies. Most of their trading is in public companies, and so that’s sort of the starting gate. They’re reaching back into the private world just to get a jump on public investments. And what attracts me is I’m working with long-term investors who will be supportive and hold or even accumulate a bigger position once we IPO, unlike the pure venture funds, where IPO is the end of the journey and they want to sell.


Any idea when Hammerspace might be ready for an IPO?

At our growth rate? It could be soon. And that’s really all I could say. Last year, we saw 1,000 percent growth in our sales. We are growing very fast. It could be soon. And I believe that we need to be a public company as soon as possible because we sell to very large companies. After Snowflake’s IPO, there literally was an inflection point [of high growth] once they became public. The same happened with Data Domain. When you sell to large enterprises, they feel much more comfortable buying from you when you’re a public company. It shows that you’re there to stay. So we anticipate needing to be a public company to help accelerate growth and doing that in fairly short order, assuming the whole market doesn’t melt down with tariffs and everything.

Are you seeing any impact from the tariffs? Your company has a hardware component, right?

There is. And we’re always fighting delays in getting hardware delivery in, and that is likely to be exacerbated by all of this. But I can’t say we’ve seen it yet. Impacts will happen if tariffs persist, but they haven’t yet at this point.

Have you had to change your pricing because of the tariffs yet?

No, we have not. We’re a U.S.-based company, so U.S. [hardware] doesn’t matter. The reciprocal tariffs in China may come into play but haven’t yet. [And] let me point out that we don’t sell hardware. We’re a software-only company. That gives us a lot of flexibility. But the tech does need to run on hardware, and so hardware supply chains can stall deployments. As a software-defined storage company, you’ve got to wait for the hardware. So we don’t have to worry about the hardware cost side of it, but it becomes an issue for our customers.


What part of Hammerspace’s business comes from indirect channels?

All of our business goes through the channel. We do not sell direct. We are like VMware. VMware allows you to manage servers in the virtual. We allow you to work with data abstracted from storage and, like VMware, that’s a new paradigm. It requires that people rethink how they manage servers. Instead of racking and stacking and feeding them CD-ROMs, they could now use vSphere in software. The same thing is true for working with storage data through Hammerspace. Why do I bring that up in the context of the channel? Because the channel is where you can leverage that evangelism. Once partners understand the power of this, then it gets propagated because they bring that into their customer base. So we are focused on being extremely channel-friendly and making sure all deals go through the channel.

Is Hammerspace profitable yet?

I don’t think we’re commenting on that. That would be a major announcement. So let’s right now say we are definitely predisposed for growth. And bringing in a war chest with this investment will definitely help set the stage for further growth.

What are your strategic priorities for 2025?

The strategic priorities for 2025 really drive home the fact that you can have your cake and eat it too. That performance is not just about delivering data from storage into GPUs. It’s about everything from getting the environment up and running—the installation, the ease of use, the plug and play, the standards, the native and built-in—through sourcing existing data using existing storage systems and automating the movement of data across those systems. People think performance is about moving storage to compute. What we’re saying is, performance is how long it takes you to get things going, how long you spend, how much time you waste setting it up, configuring it, moving data, all of that. So our mission this year is to help people understand there’s a better way than the manual logistics of setting up exotic file systems, copying data sets from here to there, and so forth. This is, again, a new paradigm for working with data in the virtual to where it can move freely while you’re using it through the swipe of a mouse, but behind a single pane of glass. So just getting the message out there that what we do actually works.

Companies that had this dream in the past failed because it kills your performance. We had to introduce new protocols to make it work. This is very sophisticated stuff. So my main mission in the next 12 months is to get the word out that you can have your cake and eat it too. You can solve the data logistics and have high performance at the same time. …

Our new capital is mainly about the go-to-market reach and getting our vision of the future across, where this stuff can be automated and it can be very high performance and speed things up at every stage.




3 Ways Intel CEO Lip-Bu Tan Is Shaking Up Company Leadership



Intel CEO Lip-Bu Tan reportedly tells employees that he is shaking up the company’s leadership structure to cut down on what he sees as ‘organizational complexity and bureaucratic processes [that] have been slowly suffocating the culture of innovation we need to win.’

Intel CEO Lip-Bu Tan is shaking up the company’s leadership structure, making the leaders of Intel’s data center and PC business units directly report to him while naming a new chief technology and AI officer who will lead the chipmaker’s overall AI strategy.

These moves and other changes were announced in a memo Tan recently sent to Intel employees, Reuters reported Thursday.

[Related: Intel’s Christoph Schell Warns Partners Of ‘Pain’ On Path To A Better Future]

An Intel spokesperson declined to confirm the memo’s content but said in a statement: “We continue to focus on fostering a culture of innovation across the company that empowers our engineering teams to create great products and delight our customers.”

Representing Tan’s first major decisions as Intel’s new CEO, the leader reportedly said the changes are meant to cut down on “organizational complexity and bureaucratic processes [that] have been slowly suffocating the culture of innovation we need to win.”

“It takes too long to make decisions. New ideas are not given room or resources to incubate. And unnecessary silos lead to inefficient execution,” Tan reportedly wrote.

The changes were announced internally after Tan, who became Intel’s CEO a month ago, urged the beleaguered semiconductor giant’s partners and customers to “be brutally honest with us” as he takes on the “very challenging task” of building a “new Intel.”

“It had been a tough period for quite a long time for Intel. We fell behind on innovation. As a result, we have been too slow to adapt and to meet your needs. You deserve better, and we need to improve, and we will,” he said at the Intel Vision 2025 event last month.

What follows are the three big changes Tan announced for Intel’s leadership structure.

Business Units To Report To The CEO—Again

Tan reportedly said in his memo that Intel’s biggest money-making business units, the Data Center and AI Group and the Client Computing Group, will now report directly to him.

This is a change from a previous reporting structure that Intel put in place in early December when it announced the retirement of its previous CEO, Pat Gelsinger, who was reportedly forced out by the company’s board of directors.

At the time, Intel announced that company veteran Michelle Johnston Holthaus would take on the newly created role of CEO of Intel Products, a new group that consisted of the Client Computing Group, the Data Center and AI Group and the Network and Edge Group.

This meant that the leaders of those business units reported to Holthaus, who also served as one of Intel’s interim co-CEOs until Tan was appointed its new leader.

While Tan is making the business unit leaders direct reports of the CEO again, he said that Holthaus will remain CEO of Intel Products and see her responsibilities expand.

“I want to roll up my sleeves with the engineering and product teams so I can learn what’s needed to strengthen our solutions,” Tan reportedly wrote. “As Michelle and I drive this work, we plan to evolve and expand her role with more details to come in the future.”

The Data Center and AI Group is led by Karin Eibschitz Segal, who took over from Justin Hotard as the business unit’s interim leader in February, while the Client Computing Group is led by Jim Johnson, who took over from Holthaus in December.

Tan Makes Networking Exec Chief Technology And AI Officer

Tan reportedly announced that he is giving the general manager of Intel’s Network and Edge Group, Sachin Katti, the role of chief technology and AI officer.

The CEO characterized this move as an expansion of Katti’s responsibilities, signaling that Katti will continue to lead the Network and Edge Group, which he has done since early 2023, a little more than a year after he joined Intel.

As Intel’s new chief technology and AI officer, Katti will be responsible for Intel’s overall AI strategy and AI product road map, according to Tan. He will also oversee Intel Labs as well as the company’s relationships with startups and third-party developers.

The executive is succeeding Greg Lavender, who is retiring from Intel, according to Tan. Lavender was hired as CTO by former Intel CEO Pat Gelsinger in mid-2022 and previously worked at VMware with the same title when Gelsinger led that company.

Katti is a professor of electrical engineering and computer science at Stanford University, where he has worked on several research projects related to technology, including one on “delivering visibility and operational automation for [machine learning] at the edge.”

He was previously co-founder and CEO of Kumu Networks, a Sunnyvale, Calif.-based company that sought to revolutionize the way wireless systems are built.

Three Technical Execs To Report Directly To Tan

Tan said three veteran technical executives will now report directly to him.

“This supports our emphasis on becoming an engineering-focused company and will give me visibility into what’s needed to compete and win,” he reportedly wrote.

The executives in question are Rob Bruckner, Mike Hurley and Lisa Pearce.

Bruckner, a 28-year Intel veteran, is corporate vice president and CTO of client platform architecture, according to an Intel web page from last year.

Hurley, a 24-year company veteran, is corporate vice president and general manager of the Client Silicon Engineering Group, and Pearce, a 28-year company veteran, is corporate vice president and general manager of GPU and NPU hardware-software technology as well as client graphics, according to their LinkedIn profiles.




Five Companies That Came To Win This Week



For the week ending April 18, CRN takes a look at the companies that brought their ‘A’ game to the channel, including Nvidia, New Relic, Docusign, Infosys and NWN.

The Week Ending April 18

Topping this week’s Five Companies that Came to Win is Nvidia for its plan to work with several manufacturing partners to build complete supercomputer systems entirely within the U.S.

Also making this week’s list are observability platform provider New Relic and document management software developer Docusign for launching new channel programs that will help partners be more effective and profitable amid today’s fast-changing IT industry conditions.

IT services giant Infosys is here for a pair of strategic acquisitions while solution provider superstar NWN makes the list for inking a five-year IT transformation contract with The Kraft Group, owner of the New England Patriots.


Nvidia To Build Entire AI Supercomputers Within The US With Partners

Amid the rapidly shifting situation over tariffs, chip designer Nvidia got everyone’s attention this week when it announced a plan to build complete AI supercomputers in the United States for the first time thanks to investments it’s making with Taiwanese manufacturing partners TSMC, Foxconn and Wistron.

Nvidia said it has “commissioned more than a million square feet of manufacturing space to build and test Nvidia Blackwell chips in Arizona and AI supercomputers in Texas.” This new production capacity will allow the company to “produce up to a half trillion dollars of AI infrastructure” in the U.S. within the next four years.

Nvidia said Blackwell production has already begun at TSMC’s fabrication plants in Phoenix. The supercomputer manufacturing plants are being stood up by Foxconn in Houston and by Wistron in Dallas, with mass production “expected to ramp up in the next 12-15 months.” Nvidia also said it is working with Amkor and SPIL to handle the chip packaging and testing needs of its AI supercomputer products.

“The engines of the world’s AI infrastructure are being built in the United States for the first time,” Nvidia CEO and co-founder Jensen Huang said in a statement. “Adding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency.”


New Relic Revamps Channel Program, Counts On Partners For Agentic AI Drive

Observability platform developer New Relic wins kudos for rolling out a major expansion of its channel program this week, offering partners a simpler two-tier structure, more financial incentives and increased margins, expanded training opportunities, and new technical and sales certifications.

New Relic is also providing additional partner resources, including more regional and sub-regional partner sales managers – part of a four-fold increase in partner organization staff over the last six months.

The moves are the follow-through of channel veteran Larissa Crandall, who was hired as New Relic’s channel chief in September.

A major change is the program’s reduction from four tiers to just two and the addition of what Crandall called “stronger” financial incentives offered to provide partners with increased margins and improved profitability.

The partner program offers four new technical certifications and two new sales certifications that Crandall said are designed to provide partners with a solid foundation around intelligent observability, including value proposition, use cases and business outcomes. And the program offers expanded training opportunities for partner seller and technical teams across such areas as AI, observability and cloud.

New Relic has been expanding the capabilities of its observability platform to monitor and manage AI systems while at the same time building AI functionality into the platform itself to boost its effectiveness. Partners will benefit from both efforts: CEO Ashan Willy told CRN that New Relic is developing an agentic framework within the company’s observability platform that will allow partners to build their own agentic integrations for their customers.


New Docusign Partner Program Leans On The Channel For Document Management’s AI Era

Staying on the topic of partner programs, Docusign makes this week’s Came to Win list for launching a new partner program that provides more partner specializations and improved product information sharing as the company looks to the channel to help capture new opportunities with its Intelligent Agreement Management (IAM) platform.

The new Docusign Partner Program supports partners selling the vendor’s software for electronic signatures, contract life-cycle management and the AI-powered IAM platform. It aims to assist partners serving legal departments, sales, procurement, human resources and other lines of business.

The new program offers “sell” specializations for IAM Core, IAM for Sales and IAM for CX, plus a “service and sell” specialization for contract life-cycle management.

The new program has tracks for build, sell and service partners with benefits designed for those business models, according to Docusign. Build partners can leverage custom integration applications to increase customer value with IAM. The sell partner track focuses on strategic, consultative engagements for IAM and contract life-cycle management. And the service track is aimed at partners with deep product expertise for deploying IAM and contract life-cycle management offers.

Docusign has also added more IAM accreditations, certifications and training for partners.


Infosys Buying Texas, Australia Companies In IT Services Push

Infosys this week disclosed two significant acquisitions that will expand the service capabilities of the global IT services provider and consultant.

Infosys, headquartered in Bengaluru, India, said it had signed definitive agreements to acquire The Missing Link, a Sydney, Australia-based provider of cybersecurity services, and MRE Consulting, a Houston-based technology and business consulting service provider with a focus on trading and risk management.

The Missing Link offers a full cybersecurity practice including strategic consulting, offensive and defensive security services and support, cybersecurity risk assessments, and managed cybersecurity services.

Infosys plans to combine The Missing Link’s technology with its Infosys Cobalt cloud offering.

The MRE Consulting acquisition brings Infosys new capabilities in trading and risk management, particularly in the energy sector. Infosys will gain a team of over 200 employees focused on energy and commodity trading and risk management, or E/CTRM, platforms.

MRE Consulting has developed proprietary E/CTRM business process frameworks for a wide range of commodities, transportation modes and business models, which are the foundation for commodity trading projects, vendor selection, and solution design and implementation.


NWN Wins Blockbuster Five-Year Deal With The Kraft Group To Drive AI Era Tech Blitz

Fast-growing solution provider NWN wins applause this week for signing a sweeping, five-year deal to transform the IT infrastructure for The Kraft Group and all of its businesses, including the New England Patriots and Gillette Stadium.

The blockbuster deal teams NWN in a tight technology partnership with The Kraft Group and Kraft Group CIO Michael Israel with a charter to overhaul and upgrade the IT network and infrastructure for Gillette Stadium for the AI era.

The deal includes network upgrades, modernized cloud collaboration solutions and AI-enabled applications to improve the Gillette Stadium experience for fans and players of the New England Patriots and the New England Revolution soccer team.

NWN will also provide a state-of-the-art infrastructure foundation for the New England Patriots’ new training facility. That includes new AI-enabled applications for players and coaches.

The NWN deal comes with The Kraft Group planning a number of major IT upgrades as it prepares to host the FIFA World Cup in 2026.

The partnership extends well beyond the New England Patriots, Gillette Stadium and the New England Revolution. It encompasses all of The Kraft Group businesses, including paper and packaging businesses like Rand Whitney and International Forest Products, real estate and private equity investments.




As Partners Seek VMware Relief For Customers, Dell Touts Disaggregated Infrastructure To Return Value



‘It has been at the forefront with customers over the last four months. We have had numerous customers move away from HCI to take advantage of the savings. With Broadcom’s licensing model many customers are shifting from HCI back to 3-2-1 stacks to save on licensing,’ Todd Johnson, president of Dell Platinum partner Avalon Technologies, tells CRN.

Dell’s recently unwrapped storage products and servers are arriving just as customers who have relied on VMware want new ways to design their IT infrastructure to manage costs and return value after changes to pricing have sent bills higher, solution providers told CRN.

“It has been at the forefront with customers over the last four months. We have had numerous customers move away from HCI to take advantage of the savings. With Broadcom’s licensing model, many customers are shifting from HCI back to 3-2-1 stacks to save on licensing,” Todd Johnson (above), president of Dell Platinum partner Avalon Technologies, told CRN via email. “It eliminates the need for the additional vSAN costs. PowerStore has been a great fit for customers looking to move, and the updates will make it more advantageous.”

CRN reached out to Broadcom-owned VMware for comment but had not heard back at press time.

Round Rock, Texas-based Dell Technologies this month introduced updates to a number of products in its storage and server lineup, including Dell PowerStore, ObjectScale, PowerScale and PowerProtect, as well as new PowerEdge R470, R570, R670 and R770 servers running Intel Xeon 6 processors with P-cores in 1U and 2U form factors.

[RELATED: Dell Gives Partners Bigger Incentives On Networking, Storage, PCs, And Winning New Customers]

Varun Chhabra, senior vice president of ISG and telecom marketing at Dell Technologies, said with some of the advances to storage and servers, Dell’s reseller and system integrator partners can now offer customers a new way to think about designing IT infrastructure so that it can manage modern applications as well as the latest class of AI workloads.

“There’s a huge opportunity for partners to be able to add value, add services on top of this,” he said. “This really requires an end-to-end approach in terms of thinking about strategy, all the way through an actual deployment.”

Chhabra said traditional three-tier infrastructure models could have a different vendor for compute, networking and storage, making them complex and difficult to manage, while hyperconverged systems that bundle all three can tie customers to a single vendor, something Dell says customers want to avoid.

“This brings us to disaggregated infrastructure, which we believe is the new paradigm for how organizations are thinking about their traditional and modern workloads,” Chhabra said. “In this new evolution, disaggregated infrastructure really offers shared resource pools for compute, storage, and networking that can be applied to whatever workload organizations are looking to run.”

He said many are choosing a disaggregated infrastructure model that combines the flexibility of three-tier architecture with the simplicity of HCI.

Drew Schulke, Dell’s vice president of product management, primary storage, said one advantage of disaggregated models designed using Dell’s storage is that it gives VMware customers a better value for their virtualization investment thanks to “orders of magnitude higher level of utilization.”

“You want a very high level of utilization on those cores, and we’ve consistently seen in that disaggregated model orders of magnitude higher level of utilization,” he told CRN during the product briefing. “And that’s becoming incredibly important in those VMware environments, based upon how the licensing schemes work, so they’re getting the best TCO possible.”

Additionally, Transparent Snapshots for VMware is a security feature that no other vendor offers, said David Noy, Dell’s vice president of product management, data protection.

“This approach allows you to back up your VMware infrastructure with nearly zero impact to running VMs,” he told CRN during the pre-briefing. “What it does is it basically uses a very, very lightweight mechanism for determining what’s changed between each backup, and then it can, in high speed, with no actual impact to the performance of running virtual machines, back up those VMs into the PowerProtect Data Domain product.”

Thanks to that snapshot functionality and improvements to PowerProtect, Noy said systems can restore much more quickly in the event of downtime. PowerProtect Data Manager scans for encryption changes, compression and deduplication, and also scans VMware configuration to “look for drift.”

“If an attacker went through and disabled within their VMware infrastructure configuration information that would actually turn off security features for that virtual machine,” Noy told CRN during the briefing. “We would notice that, and we would flag it. We also look at within the virtual machine itself. We actually index the content. And if we see things like content changing, that shouldn’t be changing within a virtual machine, we can flag that as well.”

Gary McConnell, CEO of Dell Platinum partner VirtuIT, a member of the 2025 CRN MSP 500, said with rising costs in the data center, solution providers are looking for value inside every upgrade. The advances in Dell’s latest generation of storage products deliver that.

“One thing we’ve seen over the last 18 months is the costs of power continually increase in the data centers, so any time we can get more efficient and consolidate the architectural footprint, that’s a good thing,” he told CRN via email. “That’s certainly the case with ObjectScale moving to all-flash. This becomes more critical as we continue to see more and more customers adopt AI and utilize AI workloads.”

Johnson said the added compute and memory that the devices can now bring to customers saves rack space in colocations and inside private data centers. He said for organizations that need to save on cloud storage, he relies on ObjectScale.

“ObjectScale is perfect for any customer looking for low-cost cloud storage, which is pretty much every industry,” he told CRN. “Then add in industries focusing on big data analytics, IoT storage, media distribution, or AI, and that is what it is built for. I see the ECS updates having even better options focusing on AI and ML, which is top of mind for any business looking to lead or adapt to the current market competition.”




Andy Jassy: ‘Chips Are The Biggest Culprit’ In Expensive AI; AWS Will Fix It



‘Inference will represent the overwhelming majority of future AI cost,’ says Amazon CEO Andy Jassy. ‘We feel strong urgency to make inference less expensive for customers.’

Amazon CEO Andy Jassy took a deep dive with shareholders to explain why AWS is investing so much money in AI, how semiconductor chips are the reason AI is so expensive and how AWS is working to fix the high cost of artificial intelligence.

“AI does not have to be as expensive as it is today, and it won’t be in the future. Chips are the biggest culprit,” said Jassy in his annual letter to shareholders this month.

“Most AI to date has been built on one chip provider. It’s pricey,” he said. “[AWS’] Trainium chips should help, as our new Trainium2 chips offer 30 percent to 40 percent better price-performance than the current GPU-powered compute instances generally available today.”

Jassy led AWS from its formation in the early 2000s until 2021, when he was selected to replace Jeff Bezos as Amazon CEO.

In 2015, AWS revenue was $4.6 billion. AWS currently has a $115 billion annual run rate, representing a nearly 2,400 percent total sales increase.

“Our AI revenue is growing at triple-digit YoY [year-over-year] percentages and represents a multibillion-dollar annual revenue run rate,” said Jassy.

‘Strong Urgency To Make Inference Less Expensive’

Jassy said while AI model training still accounts for a large amount of the total AI spend, “inference will represent the overwhelming majority of future AI cost.”

“Because customers train their models periodically but produce inferences constantly in large-scale AI applications, inference will become another building block service, along with compute, storage, database and others. We feel strong urgency to make inference less expensive for customers.”

More price-performant chips will help in this, he said, but inference will also get meaningfully more efficient with improvements in model distillation, prompt caching, computing infrastructure and model architectures.

He said reducing the cost per unit in AI “will unleash AI being used as expansively as customers desire” and also lead to more overall AI spending.

“It’s like what happened with AWS. Revolutionizing the cost of compute and storage happily led to lower cost per unit and more invention, better customer experiences and more absolute infrastructure spend,” Jassy said.

Over 1,000 GenAI Apps Are Being Built Across Amazon

Jassy took a deep dive into why the company is investing in AI so heavily at such a rapid pace.

“If your mission is to make customers’ lives better and easier every day, and you believe every customer experience will be reinvented by AI, you’re going to invest deeply and broadly in AI. That’s why there are more than 1,000 GenAI applications being built across Amazon, aiming to meaningfully change customer experiences,” he said.

Amazon’s CEO said AWS is quickly developing key building blocks for AI development, such as custom silicon Trainium AI chips; flexible model-building and inference services in Amazon SageMaker and Amazon Bedrock; Amazon Nova models to provide lower cost and latency for customers’ applications; and AI agent creation.

As demand for AWS grows, Amazon will need more data centers, chips and hardware.

“We spend this capital up front, even though these assets are useful for many years. In the case of data centers, for at least 15 [to] 20 years,” Jassy said.

“We continue to believe AI is a once-in-a-lifetime reinvention of everything we know, the demand is unlike anything we’ve seen before, and our customers, shareholders and business will be well-served by our investing aggressively now,” Jassy said.

‘We Don’t Always Get Everything Right’

Jassy called himself a “superfan” of Amazon and said its employees can make a bigger impact working at Amazon than at any other company in the world.

“It’s challenging to find a company where you can make a bigger impact on the world than you can at Amazon,” he said. “And for builders who want to change the world and who have fire in their belly, there’s no better place to be than Amazon.”

Jassy said Amazon operates like the world’s largest IT startup because of its culture of questioning the norm.

“We don’t always get everything right, and we learn and iterate like crazy,” he said. “But we’re constantly choosing to prioritize customers, delivery, invention, ownership, speed, scrappiness, curiosity and building a company that outlasts us all.”


