NVIDIA Breaks Major News!


In recent news, NVIDIA continues to make waves in the tech industry, particularly through its strategic partnerships and advancements in AI and computing technologies. A significant development has emerged: NVIDIA intends to source high-bandwidth memory (HBM) chips from South Korean electronics titan Samsung. This move is seen as crucial, given that HBM is a vital component of artificial intelligence processors, enhancing their performance and efficiency. The backdrop to this decision is intense competition with SK Hynix, Samsung's rival, which has commenced large-scale production of its own next-generation HBM chips.

At the NVIDIA GTC 2024 conference, the company's CEO, Jensen Huang, announced the launch of its next-generation GPU, dubbed Blackwell, with the inaugural chip from this series, the GB200, set to hit the market later this year. Financial estimates suggest that these advanced AI chips will cost between $30,000 and $40,000, and Huang revealed that NVIDIA has invested approximately $10 billion in research and development to bring them to fruition.

Furthermore, NVIDIA unveiled its latest GB200 series computing network system, which boasts a significant leap in computational performance.

This new system reportedly employs a blend of copper and optical connections, with ongoing discussions surrounding the implications of these technologies for future applications. As excitement builds, stocks in companies associated with copper connections have begun to surge, indicating that NVIDIA's presence in the market is pivotal to sustaining momentum in tech sectors.

Huang elaborated that HBM is a complex technology with high added value, emphasizing NVIDIA's substantial investment in it. During a press briefing in San Jose, California, he stated that NVIDIA is currently validating Samsung's HBM chips and plans to integrate them into its operations in the near future. Huang also complimented Samsung, noting that it is an outstanding company to work with.

In the meantime, Samsung's chief competitor, SK Hynix, announced its own progress by starting mass production of its next-generation HBM3E chips, which are essential given the current demands of artificial intelligence technologies.

The growing need for faster processing speeds makes HBM an integral part of the AI landscape, setting it apart from traditional memory solutions.

Describing HBM as a technological marvel, Huang pointed out its capability to enhance energy efficiency, an increasingly important trait in today's AI chip landscape where high power consumption is common. This focus on energy efficiency aligns with global sustainability efforts, showcasing HBM's potential to contribute positively to the environment.

It is also noteworthy that SK Hynix is currently the exclusive supplier of HBM3 chips for NVIDIA, a relationship that underscores the competitive dynamics within the HBM supply chain. While information regarding customers for the new HBM3E remains unconfirmed, officials from SK Hynix hinted that NVIDIA will be one of the first recipients, integrating the new chips into its Blackwell GPU lineup.

Samsung has aggressively invested in HBM technologies to catch up with SK Hynix.

Earlier this year, the company announced its development of HBM3E 12H, the industry's first 12-stack HBM3E DRAM and the highest-capacity HBM chip to date. Manufacturing of this advanced chip is set to commence in the first half of this year.

During the press conference, Huang commented on the remarkable upgrade cycles of both Samsung and SK Hynix, suggesting that as NVIDIA grows, these partners are likely to grow alongside it. He reiterated the importance of NVIDIA's collaboration with both SK Hynix and Samsung for the advancement of technology.

HBM is a new breed of memory chip designed for CPU/GPU applications. According to analysts, memory is currently a bottleneck on computing power, and HBM serves as an ideal solution for storage units in high-performance computing contexts. The surging demand for computing power driven by emerging AI models has sent data processing and transmission requirements soaring, exacerbating the need for effective memory solutions.


HBM has emerged as a vital technological pathway to mitigate these pressures, as it optimizes data transfer speeds and storage capacities essential for parallel computing tasks.

Compared to GDDR memory, HBM holds distinct advantages in scalability, bandwidth, and power consumption, with bandwidth exceeding that of DDR5 by more than three times at equivalent power levels. HBM thus stands as a formidable answer to the memory bottleneck, establishing itself as a competitive asset for AI GPU storage needs.

Insights from TrendForce indicate that integrating HBM chips into high-end AI server GPUs has become commonplace. Global adoption is expected to reach 290 million GB in 2023, marking year-over-year growth of nearly 60%, with projections estimating an additional 30% increase next year. Furthermore, Omdia forecasts that the HBM market will generate approximately $2.5 billion in revenue by 2025.
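As a quick sanity check on the TrendForce figures quoted above, the implied 2022 baseline and the projected 2024 adoption follow directly from the stated growth rates. This is illustrative arithmetic only; the 290 million GB, 60%, and 30% numbers come from the article, and the rounding is mine.

```python
# Sanity-check the HBM adoption figures quoted from TrendForce.
base_2023_gb = 290e6   # ~290 million GB of HBM adoption expected in 2023
growth_2023 = 0.60     # ~60% year-over-year growth into 2023
growth_2024 = 0.30     # projected ~30% increase for the following year

# Working backwards: 2023 figure divided by (1 + growth) gives the 2022 baseline.
implied_2022 = base_2023_gb / (1 + growth_2023)
# Working forwards: 2023 figure times (1 + growth) gives the 2024 projection.
projected_2024 = base_2023_gb * (1 + growth_2024)

print(f"Implied 2022 adoption:   {implied_2022 / 1e6:.0f} million GB")   # ~181
print(f"Projected 2024 adoption: {projected_2024 / 1e6:.0f} million GB") # ~377
```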

Recent reports indicate that SK Hynix plans to invest $1 billion in South Korea to expand and enhance its packaging technologies for HBM chips, focusing in particular on advanced MR-MUF and TSV methods.

Meanwhile, Micron has formally launched its industry-leading HBM3E solutions, which are set to be utilized by NVIDIA's H200 Tensor Core GPUs starting in the second quarter of 2024. Samsung's HBM3 products are also undergoing validation through AMD's MI300 series, encompassing both eight-layer and twelve-layer configurations as they strive to keep pace with SK Hynix.

As these memory giants ramp up their efforts, companies throughout the supply chain are preparing for close collaboration, paving the way for HBM's growing influence in the technology sector and the new opportunities likely to follow.

The frenzy surrounding artificial intelligence is evident, with the market increasingly focused on copper connection components. Following NVIDIA's unveiling of its GB200 series, which significantly enhances computational capabilities through copper and optical connections, there is growing anticipation about the market's future dynamics.

The GB200 series, alongside the earlier GH200 series, represents NVIDIA's innovation in what it terms its "superchip" system.

Compared to traditional servers, this system is characterized by its larger scale, connecting 36 or 72 GPUs within the same rack primarily via electrical signals, while employing both NVLink and InfiniBand technologies externally for connectivity.

Analysts observe a shift in market sentiment towards copper connection solutions, marked by their implementation in the GB200 and signaling potential broader application in future technologies. Companies related to copper connections, including Huafeng Technology, Dingtong Technology, and Luxshare Precision, have recently performed remarkably in the stock market, mirroring growing investor confidence.

In China, application-level stocks have also surged, with concept stocks tied to KIMI, such as Huace Film & Television, hitting the 20% daily limit-up today. Other firms, including Kingsoft and CloudWalk Technology, have also registered noticeable gains, outperforming market averages.
