{"id":77802,"date":"2026-04-16T09:00:00","date_gmt":"2026-04-16T09:00:00","guid":{"rendered":"https:\/\/exittechnologies.com\/?p=77802"},"modified":"2026-04-13T19:56:11","modified_gmt":"2026-04-13T19:56:11","slug":"what-your-h100-hgx-and-dgx-fleet-is-worth-after-the-b100-launch","status":"publish","type":"post","link":"https:\/\/exittechnologies.com\/fr\/blog\/gpu\/what-your-h100-hgx-and-dgx-fleet-is-worth-after-the-b100-launch\/","title":{"rendered":"What Your H100, HGX, and DGX Fleet Is Worth After the B100 Launch"},"content":{"rendered":"<span class=\"span-reading-time rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\"> 5<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span>\n<p>Your spreadsheet has a line item with 64 H100s and a 4-month-old price quote.<br><br>How much are they worth now?<br><br>Most finance teams price AI hardware in the simplest terms: acquisition cost minus depreciation over time. GPU value doesn\u2019t track like that. Instead of a slow downward slope, it\u2019s a price chart with sudden drops between periods of stability.<br><br>NVIDIA\u2019s announcement of Blackwell caused one of those sudden drops. You need to know what this announcement changes and what it doesn\u2019t before you lock in a deal for your used GPUs.&nbsp;&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Market Is Not Repricing One GPU<\/strong><\/h2>\n\n\n\n<p>It\u2019s easy to fall into the trap of talking about a fleet of H100s like they\u2019re all the same asset.&nbsp;<\/p>\n\n\n\n<p>An eight-GPU DGX H100 is not the same thing as a loose pile of PCIe cards. 
The market discounts hardware based on the buyer\u2019s deployment speed, validated topology, and usable memory footprint.<\/p>\n\n\n\n<p>In 2024, NVIDIA announced the Blackwell architecture as the successor to Hopper (H100 and H200 models), <a href=\"https:\/\/exittechnologies.com\/fr\/blog\/gpu\/nvidia-h100-vs-b100\/\">marking a leap forward in efficiency<\/a>. According to company documentation, Blackwell cuts large-language-model inference cost by up to 25x compared to Hopper.&nbsp;<\/p>\n\n\n\n<p>Less than 12 months later, Google Cloud was already previewing A4 instances based on NVIDIA HGX B200. Google\u2019s own <a href=\"https:\/\/cloud.google.com\/blog\/products\/compute\/introducing-a4-vms-powered-by-nvidia-b200-gpu-aka-blackwell\" target=\"_blank\" rel=\"noopener\">blog post<\/a> says that each B200 GPU delivers 2.25 times the peak compute and 2.25 times the HBM capacity of the previous-generation A3 High H100 platform.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th><strong>Deployment Speed<\/strong><\/th><th><strong>Validated Topology<\/strong><\/th><th><strong>Usable Memory Footprint<\/strong><\/th><\/tr><\/thead><tbody><tr><td>Buyers pay for hardware that is ready to run. 
They don\u2019t care about silicon that requires integration work before it earns its first inference call.\u00a0<\/td><td>A known, documented network fabric and host BOM is a premium asset.<\/td><td>Memory capacity and bandwidth determine which workloads a system can actually run, and how it competes against Blackwell-class alternatives.<\/td><\/tr><\/tbody><\/table><\/figure>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Data Shows Why H100 Value Compresses<\/strong><\/h2>\n\n\n\n<p>This table shows the real reason H100 value dropped so quickly.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Platform<\/strong><\/td><td><strong>Current official signal<\/strong><\/td><td><strong>What that means for resale<\/strong><\/td><\/tr><tr><td>H100<\/td><td>NVIDIA still positions H100 as a data-center GPU for training, inference, and scale-out workloads<\/td><td>The silicon is not obsolete just because it is no longer the newest badge<\/td><\/tr><tr><td>H200<\/td><td>NVIDIA says H200 offers 141 GB of HBM3e at 4.8 TB\/s, nearly double H100 capacity with 1.4x more memory bandwidth<\/td><td>H100 gets pressured first on memory-heavy inference and larger-context workloads<\/td><\/tr><tr><td>Google A3 H100<\/td><td>Google still documents A3 Mega, A3 High, and A3 Edge H100 instances for pre-training, fine-tuning, and serving inference<\/td><td>H100 remains commercially useful in current cloud catalogs<\/td><\/tr><tr><td>Google A4 B200<\/td><td>Google says A4 B200 delivers 2.25x peak compute and 2.25x HBM capacity per GPU over A3 High H100<\/td><td>Buyers now have a much stronger next-gen alternative when comparing full-system purchases<\/td><\/tr><tr><td>DGX H100<\/td><td>NVIDIA DGX H100 ships with 8x H100 GPUs, 640 GB total GPU memory, and 7.2 TB\/s aggregate GPU interconnect bandwidth<\/td><td>A complete DGX H100 is still a real system, not just a component lot<\/td><\/tr><tr><td>DGX 
B200<\/td><td>NVIDIA says DGX B200 delivers 3x DGX H100 training performance and 15x DGX H100 inference performance, with 1,440 GB total GPU memory and 14.4 TB\/s aggregate NVLink bandwidth<\/td><td>The premium for an older turnkey AI box compresses fast once the new turnkey box exists<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Your H100 isn\u2019t the default GPU anymore, but it still has a defined market with enterprise use cases. The more your valuation depends on \u201ctop of stack\u201d scarcity, the faster it gets repriced. It\u2019s worth <a href=\"https:\/\/exittechnologies.com\/fr\/blog\/nouvelles-techniques\/nvidia-h100-vs-a100\/\">comparing how A100 and H100 stack up<\/a> before making any decisions.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Hardware Depreciates First?<\/strong><\/h2>\n\n\n\n<p>The first thing to lose value is not necessarily the intact H100 unit. Incomplete systems get punished first.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th><strong>01 <\/strong><br><strong>Incomplete Systems<\/strong><\/th><th><strong>02<\/strong><br><strong>Turnkey Premium on Older Integrated Systems<\/strong><\/th><th><strong>03<\/strong><br><strong>Hard-to-Support Inventory<\/strong><\/th><\/tr><\/thead><tbody><tr><td>If you break an HGX H100 node into parts, lose the network fabric context, separate the trays from the host, or force the next buyer to reverse-engineer exactly what they are getting, you have already traded a deployment asset for a parts problem. Once Blackwell-class alternatives are in the market, buyers become less patient with parts problems.<\/td><td>NVIDIA\u2019s own DGX numbers make that unavoidable. DGX B200 does not just edge past DGX H100. NVIDIA markets it as a materially different performance class with far more memory and interconnect bandwidth. 
That means a buyer considering a full-box acquisition is no longer deciding between \u201cbuy H100 or wait.\u201d They are deciding how much discount they need before taking the older box instead of the newer standard.<\/td><td>A clean DGX H100 with a documented configuration, complete inventory, known health, and serviceable condition is one market. A mixed lot of H100-era accelerators with weak provenance, missing networking, or uncertain runtime history is another. When the market gets more options, uncertainty gets repriced faster than performance.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Holds Value Longer Than You Might Think<\/strong><\/h2>\n\n\n\n<p>H100 owners don\u2019t need to panic. The market has not stopped using it.<\/p>\n\n\n\n<p>Google\u2019s current accelerator-optimized machine <a href=\"https:\/\/docs.cloud.google.com\/compute\/docs\/accelerator-optimized-machines\" target=\"_blank\" rel=\"noopener\">documentation<\/a> still lists H100-based A3 Mega, A3 High, and A3 Edge systems alongside H200-based A3 Ultra and B200-based A4 offerings. They\u2019re still in production, so they\u2019ll be attractive to customers who want strong performance without paying the newest-generation premium.<\/p>\n\n\n\n<p>Complete, deployable H100 systems still matter. 
<a href=\"https:\/\/exittechnologies.com\/fr\/vendre\/serveurs\/nvidia\/\">You can sell NVIDIA DGX servers and AI GPUs<\/a> as complete systems and recover meaningfully more than you would by breaking them apart.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>What Holds Value Best<\/strong><br><br>1.\u00a0 Complete DGX H100 systems with intact configuration and clean provenance<br><br>2.\u00a0 Complete HGX H100 nodes with the networking and host context buyers actually need<br><br>3.\u00a0 H100 inventory already qualified for real inference or training workloads<br><br>4.\u00a0 Lots large enough to solve a buyer\u2019s capacity problem in one transaction<\/td><td><strong>What Holds Value Worst<\/strong><br><br>Anything that makes the next owner go under the hood.<br><br>If your hardware forces the buyer to figure out missing BOM details, uncertain firmware state, unclear topology, or whether the lot is really complete, you are no longer selling AI infrastructure. You\u2019re selling a project that may not be worth their time.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Hitting Your Finance Goals Is a Matter of Timing<\/strong><\/h2>\n\n\n\n<p>Business leaders usually want a single number for all their assets. The better way to look at it is as a pricing curve keyed to release timing.<\/p>\n\n\n\n<p>Once a newer generation like Blackwell becomes visible in cloud previews, current platform docs, and integrated system catalogs, the discount for waiting compounds against you.<\/p>\n\n\n\n<p>That is especially true if your current fleet sits in the middle.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>The Top End<\/strong><br><br>This still moves because buyers want deployable capacity fast. 
Complete, well-documented systems with clean provenance continue to attract bids from buyers who need capacity now and cannot wait for Blackwell availability.<\/td><td><strong>The Middle<\/strong><br><br>Gets squeezed first. This is where the buyer starts asking whether they should just stretch to H200 or B200 instead. The middle is the most dangerous place to sit when a new generation becomes the reference point.\u00a0<\/td><td><strong>The Bottom End<\/strong><br><br>This still clears because customers want parts and lower-tier use cases exist. But the floor is lower than it was, and the discount for uncertainty compounds faster as Blackwell-class supply grows.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>If you are holding H100, HGX H100, or DGX H100 inventory, delaying a sale can cut into your return.&nbsp;&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How To Adjust Your Refresh Plan Accordingly<\/strong><\/h2>\n\n\n\n<p>If your team expects to retire H100-era gear, the goal is not to predict a magic resale number from a spreadsheet.&nbsp;<\/p>\n\n\n\n<p>The goal is to preserve the part of the asset the buyer will still pay for.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>1. Document as System<\/strong><\/td><td><strong>2. Separate Premium Inventory<\/strong><\/td><td><strong>3. Record Full Configuration<\/strong><\/td><td><strong>4. Package for Market<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>You\u2019ll need to document the hardware as a system before it gets broken apart. Separate premium inventory from tail inventory before the de-rack turns everything into a mixed lot. Record exactly what you have: <a href=\"https:\/\/exittechnologies.com\/fr\/vendre\/carte-graphique\/\">GPU count<\/a>, form factor, host platform, networking, storage, cooling context, and any deployment details. 
These details will make you a better partner to your buyer, who wants to know if the lot is immediately useful.<\/p>\n\n\n\n<p>It also means being honest about what you are selling.<\/p>\n\n\n\n<p>If the fleet is really a set of full DGX H100 systems, sell it like a full DGX H100 fleet.<\/p>\n\n\n\n<p>If it\u2019s loose boards and leftover nodes, make sure everyone understands that. The buyer and seller both need to know exactly what\u2019s changing hands.&nbsp;<\/p>\n\n\n\n<p>Recovering the most value from AI hardware takes more than a detailed spreadsheet model. The companies that consistently succeed lock in deals before the project gets messy and package the asset in the shape the market still wants.<\/p>","protected":false},"excerpt":{"rendered":"<p><span class=\"span-reading-time rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\"> 5<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span>Your spreadsheet has a line item with 64 H100s and a 4-month-old price quote. How much are they worth now? Most finance teams price AI hardware in the simplest terms: acquisition cost minus depreciation over time. GPU value doesn\u2019t track like that. 
Instead of a slow downward slope, it\u2019s a price chart with sudden [&hellip;]<\/p>\n","protected":false},"author":9,"featured_media":77803,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"episode_type":"","audio_file":"","podmotor_file_id":"","podmotor_episode_id":"","cover_image":"","cover_image_id":"","duration":"","filesize":"","filesize_raw":"","date_recorded":"","explicit":"","block":"","itunes_episode_number":"","itunes_title":"","itunes_season_number":"","itunes_episode_type":"","footnotes":""},"categories":[104],"tags":[],"class_list":["post-77802","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-gpu"],"acf":[],"_links":{"self":[{"href":"https:\/\/exittechnologies.com\/fr\/wp-json\/wp\/v2\/posts\/77802","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/exittechnologies.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/exittechnologies.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/exittechnologies.com\/fr\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/exittechnologies.com\/fr\/wp-json\/wp\/v2\/comments?post=77802"}],"version-history":[{"count":0,"href":"https:\/\/exittechnologies.com\/fr\/wp-json\/wp\/v2\/posts\/77802\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/exittechnologies.com\/fr\/wp-json\/wp\/v2\/media\/77803"}],"wp:attachment":[{"href":"https:\/\/exittechnologies.com\/fr\/wp-json\/wp\/v2\/media?parent=77802"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/exittechnologies.com\/fr\/wp-json\/wp\/v2\/categories?post=77802"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/exittechnologies.com\/fr\/wp-json\/wp\/v2\/tags?post=77802"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}