Market Report
Product Code
1530751
High Bandwidth Memory Market Forecasts to 2030 - Global Analysis By Product Type, Interface Type (HBM Stacked DRAM Interface, Open Compute Networking Interface and Other Interface Types), Memory Capacity, Application, End User and By Geography
According to Stratistics MRC, the Global High Bandwidth Memory (HBM) Market was valued at $2.5 billion in 2024 and is expected to reach $9.2 billion by 2030, growing at a CAGR of 24.2% during the forecast period. High-bandwidth memory (HBM) is a type of computer memory that offers significantly faster data transfer rates than traditional DRAM. It achieves this by stacking multiple DRAM dies vertically and connecting them with through-silicon vias (TSVs). HBM is primarily used in high-performance computing applications, graphics cards, and AI accelerators, where massive data throughput is crucial. Its ability to provide substantial bandwidth while consuming less power makes it ideal for demanding tasks in data centers, scientific simulations, and advanced gaming systems.
According to Micron Technology, its upcoming HBM3E memory will offer over 1.2 TB/s of bandwidth and 30% lower power consumption than competing products. Micron plans to launch 24GB HBM3E chips in Q2 2024 for use in Nvidia's H200 Tensor Core GPUs.
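The bandwidth edge described above comes from HBM's unusually wide interface: each stack exposes a 1024-bit bus, so even modest per-pin data rates add up to high aggregate throughput. A minimal sketch of that arithmetic, using approximate generation-typical per-pin rates (illustrative assumptions, not vendor specifications):

```python
def hbm_bandwidth_gbps(bus_width_bits, pin_rate_gbps):
    """Peak per-stack bandwidth in GB/s: bus width (bits) times
    per-pin data rate (Gb/s), divided by 8 to convert bits to bytes."""
    return bus_width_bits * pin_rate_gbps / 8

# Approximate per-pin data rates by generation (sketch values only)
generations = {
    "HBM2":  2.0,   # ~2.0 Gb/s per pin
    "HBM2E": 3.2,   # ~3.2 Gb/s per pin
    "HBM3":  6.4,   # ~6.4 Gb/s per pin
}

for gen, rate in generations.items():
    bw = hbm_bandwidth_gbps(1024, rate)  # 1024-bit interface per stack
    print(f"{gen}: ~{bw:.0f} GB/s per stack")
```

Even the oldest entry works out to roughly 256 GB/s per stack, an order of magnitude above a single GDDR chip's 32-bit channel, which is why HBM dominates bandwidth-bound workloads despite its higher cost.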
Growing need for high-performance computing
The growing need for high-performance computing (HPC) drives the High Bandwidth Memory (HBM) market by increasing demand for faster data processing and enhanced memory bandwidth. HBM's ability to deliver exceptional speed and efficiency is crucial for applications in artificial intelligence (AI), data centers, and advanced graphics. This demand is further fueled by the rise in big data analytics and complex simulations, pushing the adoption of HBM in various high-performance computing environments.
Limited production capacity
The complex manufacturing process, requiring advanced technology and precision, restricts the number of units that can be produced efficiently. This bottleneck affects the availability of HBM, leading to potential supply shortages and increased prices, which can hinder the widespread adoption of HBM solutions across different industries.
Advancements in HBM technology
Advancements in HBM technology present significant opportunities for market growth. Innovations such as HBM2E and upcoming HBM3 offer higher bandwidth, improved energy efficiency, and greater storage capacity, meeting the rising demands of next-generation computing applications. These technological breakthroughs enable enhanced performance in AI, machine learning, and virtual reality, opening new market segments and driving the adoption of HBM in emerging high-performance applications.
Competition from alternative technologies
Emerging solutions like High Bandwidth Cache (HBC), Hybrid Memory Cube (HMC), and advancements in traditional DRAM and GDDR technologies could potentially offer comparable performance at lower costs. As these alternatives evolve, they may capture market share in certain applications where the cost-performance trade-off favors them over HBM. Additionally, ongoing research into novel memory architectures and materials could lead to disruptive technologies that challenge HBM's position in high-performance computing applications, potentially limiting its long-term market growth and adoption rates.
The COVID-19 pandemic initially disrupted HBM production due to supply chain issues and reduced manufacturing capacity. However, it also accelerated digital transformation efforts, increasing demand for high-performance computing solutions in sectors like healthcare, remote work, and e-commerce. This led to a surge in data center expansions and AI implementations, ultimately driving demand for HBM in the medium to long term.
The HBM2E segment is expected to be the largest during the forecast period
The HBM2E segment is expected to dominate the market due to its superior performance characteristics, offering higher bandwidth and improved power efficiency compared to previous generations. HBM2E addresses the growing demands of data-intensive applications in AI, machine learning, and high-performance computing. Its increased capacity and speed make it ideal for graphics processing units (GPUs) and data center accelerators. The segment's growth is further fueled by the adoption of cutting-edge technologies such as 5G infrastructure, autonomous vehicles, and advanced analytics, positioning HBM2E as the go-to solution for high-bandwidth memory requirements across industries.
The 16GB segment is expected to have the highest CAGR during the forecast period
The 16GB segment is expected to register the highest CAGR due to the increasing complexity of data-intensive applications requiring larger memory capacities. This capacity sweet spot balances performance needs with cost considerations for many high-end computing applications. 16GB HBM modules are particularly attractive for AI training, scientific simulations, and advanced graphics rendering, where memory bandwidth and capacity are crucial. As more industries adopt AI and big data analytics, demand for 16GB HBM solutions is expected to surge.
The North America region is positioned to dominate the High Bandwidth Memory (HBM) Market due to its strong presence in high-performance computing, artificial intelligence, and data center industries. The region hosts major technology companies and research institutions driving innovation in HBM applications. Substantial investments in AI, cloud computing, and advanced analytics further fuel demand for high-bandwidth memory solutions. North America's leadership in developing cutting-edge technologies and its early adoption of HBM in various sectors contribute to its dominant market position, setting trends for global HBM adoption and technological advancements.
The Asia Pacific region anticipates the highest CAGR in the High Bandwidth Memory (HBM) market owing to rapid industrialization and increasing investments in technology infrastructure. The region's growing semiconductor industry, coupled with rising demand for high-performance computing in countries like China, Japan, and South Korea, fuels market growth. Expanding data centers and advancements in AI and VR applications further accelerate HBM adoption in the region.
Key players in the market
Some of the key players in the High Bandwidth Memory (HBM) market include Samsung Electronics, SK Hynix, Micron Technology, AMD, NVIDIA, Intel, Xilinx, Fujitsu, IBM, Broadcom, MediaTek, Renesas Electronics, NXP Semiconductors, Texas Instruments, Cadence Design Systems, Arm Holdings, Marvell Technology Group, and InnoGrit Corporation.
In February 2024, Samsung Electronics, a world leader in advanced memory technology, announced that it has developed HBM3E 12H, the industry's first 12-stack HBM3E DRAM and the highest-capacity HBM product to date. Samsung's HBM3E 12H provides an all-time-high bandwidth of up to 1,280 gigabytes per second (GB/s) and an industry-leading capacity of 36 gigabytes (GB). Compared with the 8-stack HBM3 8H, both aspects have improved by more than 50%.
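The figures in the announcement above are internally consistent, as a quick back-of-envelope check shows: twelve 24-gigabit DRAM dies give 36 GB, and a 1024-bit stack interface at a 10 Gb/s per-pin rate gives 1,280 GB/s. The die density and per-pin rate used here are inferred for illustration, not quoted by Samsung:

```python
# Back-of-envelope check of the HBM3E 12H figures; the die density and
# per-pin rate below are inferred assumptions, not quoted specifications.
dies = 12
die_capacity_gbit = 24                    # assumed 24 Gb per DRAM die
capacity_gb = dies * die_capacity_gbit / 8
print(f"capacity: {capacity_gb:.0f} GB")  # matches the stated 36 GB

bus_width_bits = 1024                     # standard HBM stack interface width
pin_rate_gbps = 10.0                      # inferred from the stated bandwidth
bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8
print(f"bandwidth: {bandwidth_gbs:.0f} GB/s")  # matches the stated 1,280 GB/s
```

The same arithmetic explains the ">50% over HBM3 8H" claim: eight dies at the same density yield 24 GB, and earlier HBM3 per-pin rates were around 6.4 Gb/s, both roughly two-thirds of the 12H figures.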
In December 2023, Nvidia paid hundreds of millions of dollars in advance to SK Hynix and Micron to secure a stable supply of High Bandwidth Memory (HBM). Samsung Electronics recently completed product testing and signed an HBM supply contract with Nvidia. According to industry sources cited by Chosun Biz, SK Hynix and Micron each received between KRW700 billion and KRW1 trillion (approximately US$540 million to US$770 million) in advance payments from Nvidia for the supply of advanced memory products. Although details have not been disclosed, the industry believes this is a measure by Nvidia to secure the supply of HBM3E for its new GPU products in 2024.
In November 2023, Nvidia announced the H200 and GH200 product line at Supercomputing 23. These are the most powerful chips Nvidia has ever created, building on the existing Hopper H100 architecture but adding more memory and more compute. They are set to power the next generation of AI supercomputers, with over 200 exaflops of AI compute expected to come online during 2024.