NVIDIA’s H100 AI Accelerator Proves Highly Lucrative Amid Soaring Demand
In a striking revelation, NVIDIA is reportedly earning margins of up to 1000% on each H100, its specialized graphics accelerator for artificial intelligence workloads. According to Tae Kim, a journalist at Barron’s, citing analysis by the consulting firm Raymond James, this considerable profit margin persists even amid the recent surge in demand for these accelerators.

Currently, an NVIDIA H100 accelerator sells for $25,000 to $30,000 on average, depending on the region and supplier, and these figures apply to the more affordable PCIe version of the product. Raymond James estimates that the graphics processor at the heart of the accelerator, together with materials such as circuit boards and other components, costs around $3,320 to produce. It’s worth noting that Kim’s report does not specify the depth of the cost analysis or whether it accounts for factors such as research and development expenses, engineering salaries, production, and logistics.
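The cited figures make the arithmetic easy to check. A quick back-of-envelope sketch, using only the numbers reported above (the street price and Raymond James’ estimated bill-of-materials cost), shows how a margin on the order of several hundred to roughly 1000% emerges; the exact headline figure depends on whether one measures markup over cost or price as a multiple of cost, and on which costs are included:

```python
# Back-of-envelope margin sketch using only the figures cited above.
# BOM_COST is Raymond James' estimated production cost (GPU + board + components).
BOM_COST = 3_320  # USD

for price in (25_000, 30_000):  # reported H100 PCIe price range, USD
    markup = (price - BOM_COST) / BOM_COST * 100  # profit over BOM, in percent
    ratio = price / BOM_COST                      # price as a multiple of BOM
    print(f"${price:,}: markup over BOM ~ {markup:.0f}%, price/BOM ~ {ratio:.1f}x")
```

At the upper end of the range this yields a markup of roughly 800% and a price-to-cost ratio near 9x, so the reported "up to 1000%" figure is only reachable if the underlying analysis uses somewhat different assumptions, which is consistent with the article’s note that the depth of the cost analysis is unspecified.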
Developing specialized accelerators demands significant time and resources. According to Glassdoor data, the average annual salary for a hardware engineer at NVIDIA is approximately $202,000. That figure covers only a single engineer, yet a product like the H100 clearly required collaboration among numerous specialists over thousands of hours of work. Such expenses should ideally be reflected in the final product cost.
Nonetheless, NVIDIA currently holds a dominant position in the supply of hardware for AI computations. The demand for their specialized accelerators is so high that they are often sold out long before they reach store shelves. Suppliers indicate that the waiting list for these accelerators stretches into the second quarter of 2024. With analysts estimating the AI computation market to reach $150 billion by 2027, NVIDIA’s future seems promising.
However, the surging demand for AI accelerators has negative repercussions for the broader tech market. Analyst reports indicate a decline in global sales of traditional high-performance computing (HPC) servers, driven primarily by hyperscalers and data center operators shifting their attention toward AI-optimized systems built around solutions like the NVIDIA H100. Consequently, DDR5 memory manufacturers have had to recalibrate their expectations for adoption of the new RAM standard: data center operators are investing in AI accelerators rather than new memory, and this trend suggests DDR5 adoption may only reach parity with DDR4 by the third quarter of 2024.
Author Profile

- I'm Vasyl Kolomiiets, a seasoned tech journalist regularly contributing to global publications. Having a profound background in information technologies, I seamlessly blended my technical expertise with my passion for writing, venturing into technology journalism. I've covered a wide range of topics including cutting-edge developments and their impacts on society, contributing to leading tech platforms.