STOCK TITAN

Axe Compute (NASDAQ: AGPU) lands $260M, 3‑year dedicated NVIDIA B300 GPU deal

Filing Impact: Moderate
Filing Sentiment: Neutral
Form Type: 8-K

Rhea-AI Filing Summary

Axe Compute Inc. entered into a 36‑month enterprise infrastructure contract with an aggregate value of approximately $260 million, described as the largest enterprise engagement in its history. The deal covers a dedicated cluster of 2,304 NVIDIA B300 GPUs plus AI‑focused high‑speed storage in a single U.S. Tier 3 data center.

The infrastructure is purpose-built for large-scale AI model training, fine-tuning, inference, and data processing, backed by 4.8 megawatts of N+1 redundant power and enterprise-grade service levels. Deployment is targeted to commence in Q3 2026, with payments structured via deposit, prepayment, and monthly take‑or‑pay charges, and options to renew beyond the initial term.

Positive

  • Securing an aggregate $260 million, 36‑month infrastructure contract, described as the largest in Axe Compute’s history, materially enhances multi‑year revenue visibility and showcases demand for its dedicated enterprise AI GPU platform.

Negative

  • None.

Insights

$260M, 36‑month AI contract adds scale and revenue visibility.

Axe Compute secured an enterprise contract with aggregate value of approximately $260 million over 36 months, its largest engagement to date. The agreement covers a dedicated cluster of 2,304 NVIDIA B300 GPUs plus high-speed storage in a U.S. Tier 3 data center.

The structure is notable: multi-year term, take‑or‑pay payments, and options to renew, which together provide long-dated income visibility as described by the company. The contract is backed by 4.8 megawatts of N+1 redundant power and enterprise-grade service levels tailored to demanding AI workloads.
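As a rough illustration of the run rate implied by the disclosed figures — assuming even monthly recognition over the term, which the filing does not specify — the arithmetic works out as follows:

```python
# Illustrative run-rate math from the disclosed figures only.
# The filing does not state how revenue is recognized over the term.
contract_value = 260_000_000  # approximate aggregate value, USD
term_months = 36

monthly_run_rate = contract_value / term_months
annual_run_rate = monthly_run_rate * 12

print(f"~${monthly_run_rate / 1e6:.2f}M per month")  # ~$7.22M per month
print(f"~${annual_run_rate / 1e6:.1f}M per year")    # ~$86.7M per year
```

Actual billings will differ, since the structure front-loads cash via a deposit and prepayment before the monthly take-or-pay charges begin.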

The company frames this as a template for future multi-year, dedicated GPU deployments with contracted pricing and location specified by customers. Execution will depend on timely deployment starting in Q3 2026, hardware and facility readiness, and customer performance, as highlighted in the forward-looking statements and risk factors.

Item 8.01: Other Events
Voluntary disclosure of events the company deems important to shareholders but not covered by other items.
Item 9.01: Financial Statements and Exhibits
Financial statements, pro forma financial information, and exhibit attachments filed with this report.
Aggregate contract value: $260 million (over the 36-month enterprise infrastructure agreement)
Contract term: 36 months (initial term of the enterprise infrastructure agreement)
GPU count: 2,304 NVIDIA B300 GPUs (dedicated cluster provided under the contract)
Power capacity: 4.8 megawatts (dedicated N+1 redundant power for the cluster)
Deployment start: Q3 2026 (targeted deployment commencement for the contracted infrastructure)
Data center tier: Tier 3 (U.S. data center classification for the deployment)
take-or-pay (financial)
"monthly payments made in advance on a take-or-pay basis"
A take-or-pay clause is a contract term that requires a buyer to either take delivery of an agreed amount of a product or pay a penalty if they do not. For investors, it matters because it creates predictable revenue for the seller—like a subscription fee that must be paid whether fully used or not—reducing sales volatility but also introducing counterparty risk if the buyer’s ability to pay is uncertain.
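The billing mechanics of a take-or-pay clause can be sketched in a few lines. This is a generic illustration with hypothetical commitment and pricing numbers, not the actual terms of the Agreement, which are not disclosed:

```python
def take_or_pay_charge(committed_units: float, units_used: float,
                       unit_price: float) -> float:
    """Bill the greater of actual usage and the committed minimum.

    Under take-or-pay, the buyer owes the committed amount even when
    actual usage falls short, so the seller's monthly revenue has a floor.
    """
    billable = max(units_used, committed_units)
    return billable * unit_price

# Hypothetical month: the customer committed to 1,000 GPU-hours at
# $50/hour but used only 800 of them.
print(take_or_pay_charge(1000, 800, 50.0))   # 50000.0 — pays the full commitment
print(take_or_pay_charge(1000, 1200, 50.0))  # 60000.0 — overage billed at usage
```

The revenue floor is what gives the seller "long-dated income visibility"; the residual risk is the counterparty's ability to keep paying.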
Tier 3 data center (technical)
"from a single U.S. Tier 3 data center facility"
A Tier 3 data center is a high-availability facility built so critical systems (power, cooling, network) have redundant components and can be serviced without interrupting operations — like changing a car’s tire while it’s still running. For investors, Tier 3 status signals lower operational risk and fewer outages, which protects revenue, customer contracts and reputation, though it carries higher construction and operating costs than lower-tier facilities.
N+1 redundant (technical)
"4.8 megawatts of dedicated power capacity, delivered on an N+1 redundant basis"
N+1 redundant describes a setup where a system has one more identical backup unit than is necessary to run at full capacity — for example, three servers can handle the workload while a fourth sits ready to take over if one fails. For investors, this signals higher reliability and lower risk of service interruptions, like a spare tire in a car that keeps operations moving without costly downtime.
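N+1 sizing can be sketched with assumed unit sizes — the filing does not break the 4.8 MW down into individual power feeds, so the 1.2 MW unit capacity below is purely hypothetical:

```python
import math

def units_needed_n_plus_1(load_mw: float, unit_capacity_mw: float) -> int:
    """Number of identical power units for N+1 redundancy: enough units
    to carry the full load (N), plus one spare so any single unit can
    fail or be serviced without interrupting operations."""
    n = math.ceil(load_mw / unit_capacity_mw)
    return n + 1

# Hypothetical: serve the cluster's 4.8 MW load with 1.2 MW units.
# N = 4 units carry the load; N+1 = 5 units installed.
print(units_needed_n_plus_1(4.8, 1.2))  # 5
```

The same load served by larger 2.0 MW units would need only 3 + 1 = 4 installed units; the trade-off is the granularity of the failure domain versus the cost of each spare.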
enterprise-grade service levels (financial)
"The Agreement includes enterprise-grade service levels"
forward-looking statements (regulatory)
"This press release contains “forward-looking statements” within the meaning of Section 27A"
Forward-looking statements are predictions or plans that companies share about what they expect to happen in the future, like estimating sales or profits. They matter because they help investors understand a company's outlook, but since they are based on guesses and assumptions, they can sometimes be wrong.
Strategic Compute Reserve (financial)
"Axe Compute also operates a Strategic Compute Reserve that translates to enterprise GPU access"
A strategic compute reserve is a dedicated pool of computing power—such as servers, GPUs, or cloud capacity—set aside to handle high-priority tasks, emergencies, or sudden demand spikes. For investors, it matters because maintaining this reserve requires spending and affects a company's ability to stay reliable, scale quickly, and protect critical operations; like keeping a backup generator, it trades ongoing cost for reduced downtime and competitive stability.
 

UNITED STATES

SECURITIES AND EXCHANGE COMMISSION

Washington, D.C. 20549

_________________

FORM 8-K

_________________

CURRENT REPORT

Pursuant to Section 13 or 15(d)
of the Securities Exchange Act of 1934

Date of Report (Date of earliest event reported):  April 22, 2026

_______________________________

Axe Compute Inc.

(Exact name of registrant as specified in its charter)

_______________________________

Delaware   001-36790   33-1007393
(State or Other Jurisdiction of Incorporation)   (Commission File Number)   (I.R.S. Employer Identification No.)

91 43rd Street, Suite 110

Pittsburgh, Pennsylvania 15201

(Address of Principal Executive Offices) (Zip Code)

(412) 432-1500

(Registrant's telephone number, including area code)

 

(Former name or former address, if changed since last report)

_______________________________

Check the appropriate box below if the Form 8-K filing is intended to simultaneously satisfy the filing obligation of the registrant under any of the following provisions:

Written communications pursuant to Rule 425 under the Securities Act (17 CFR 230.425)
Soliciting material pursuant to Rule 14a-12 under the Exchange Act (17 CFR 240.14a-12)
Pre-commencement communications pursuant to Rule 14d-2(b) under the Exchange Act (17 CFR 240.14d-2(b))
Pre-commencement communications pursuant to Rule 13e-4(c) under the Exchange Act (17 CFR 240.13e-4(c))

Securities registered pursuant to Section 12(b) of the Act:

Title of each class   Trading Symbol(s)   Name of each exchange on which registered
Common stock, $0.01 par value   AGPU   Nasdaq Capital Market

Indicate by check mark whether the registrant is an emerging growth company as defined in Rule 405 of the Securities Act of 1933 (§230.405 of this chapter) or Rule 12b-2 of the Securities Exchange Act of 1934 (§240.12b-2 of this chapter).

Emerging growth company

If an emerging growth company, indicate by check mark if the registrant has elected not to use the extended transition period for complying with any new or revised financial accounting standards provided pursuant to Section 13(a) of the Exchange Act. ☐

 
 
Item 8.01. Other Events.

 

On April 22, 2026, Axe Compute Inc. (the “Company”) announced its entry into a 36-month enterprise infrastructure contract with an enterprise customer (the “Customer”). The Agreement has an aggregate contract value of approximately $260 million and represents the largest enterprise engagement in the Company’s history.

 

Under the Agreement, the Company will deliver a dedicated cluster of 2,304 NVIDIA B300 GPUs and AI-focused high-speed storage infrastructure from a single U.S. Tier 3 data center facility. The cluster is purpose-built to support large-scale AI model training, fine-tuning, and high-throughput inference workloads. The infrastructure will maintain NVIDIA reference architecture throughout the contract period. The initial term of the Agreement is 36 months, with targeted deployment commencing in the third quarter of 2026. The Agreement includes options to renew for additional years beyond the initial term.

 

The aggregate contract value is approximately $260 million over the 36-month term, covering GPU compute and high-speed storage. The payment structure consists of a deposit, prepayment, and monthly payments made in advance on a take-or-pay basis. The Agreement includes enterprise-grade service levels.

 

A copy of the press release is filed as Exhibit 99.1 to this Current Report on Form 8-K and is incorporated herein by reference.

 

The information in this Item 8.01 and the exhibit attached hereto shall be deemed “filed” for purposes of Section 18 of the Securities Exchange Act of 1934, as amended (the “Exchange Act”), and shall be deemed incorporated by reference into any filing made by the Company under the Exchange Act or Securities Act of 1933, as amended, except as shall be expressly set forth by specific reference in such a filing.

 

Item 9.01. Financial Statements and Exhibits.

 

(d) Exhibits.

 

Exhibit No. Description
   
99.1 Press Release of Axe Compute Inc. dated April 22, 2026
104 Cover Page Interactive Data File (embedded within the Inline XBRL document)
 
 

 

SIGNATURE

 

Pursuant to the requirements of the Securities Exchange Act of 1934, the registrant has duly caused this report to be signed on its behalf by the undersigned hereunto duly authorized.

 

 Axe Compute Inc.
   
  
Date: April 22, 2026   By: /s/ Christopher Miglino
  Christopher Miglino
  Chief Executive Officer
  

 

EXHIBIT 99.1

Axe Compute Secures $260 Million, Three-Year Enterprise Contract for 2,304-GPU NVIDIA B300 Deployment

Redefining enterprise AI infrastructure: enterprises no longer adapt to cloud constraints — they specify what they need, and Axe Compute delivers it

PITTSBURGH, April 22, 2026 (GLOBE NEWSWIRE) -- Axe Compute Inc. (NASDAQ: AGPU), a neocloud AI infrastructure platform delivering dedicated enterprise GPU compute capacity at global scale, today announced the signing of a 36-month enterprise infrastructure contract with aggregate contract value of approximately $260 million to deliver a dedicated cluster of 2,304 NVIDIA B300 GPUs and AI-focused high-speed storage for massive data processing and training, deployed in a Tier 3 data center in the United States. The contract represents the largest enterprise engagement in Axe Compute’s history.

Under the 36-month agreement, which has options to renew for additional years, Axe Compute will deliver dedicated GPU compute and AI-focused high-speed storage infrastructure from a single U.S. Tier 3 data center facility. The cluster is purpose-built to support large-scale AI model training, fine-tuning, and high-throughput inference workloads, powered by current-generation NVIDIA B300 GPUs.

“This agreement is a signal. Enterprise AI customers are no longer willing to adapt their infrastructure roadmaps to the capacity constraints of legacy hyperscalers. A 2,304-GPU B300 deployment, contracted, dedicated, U.S.-based, and priced to compete, is what purpose-built AI infrastructure looks like. We intend to replicate this commercial structure at scale.”

— Christopher Miglino, Chief Executive Officer, Axe Compute Inc.

Contract Highlights

Aggregate Contract Value: Approximately $260 million over 36 months, subject to the terms of the definitive agreement, across both GPU compute and high-speed storage.

Infrastructure: 2,304 NVIDIA B300 GPUs and large AI-focused high-speed storage for massive data processing and training, purpose-built for large-scale AI model training, fine-tuning, and high-throughput inference. All dedicated and committed, while maintaining NVIDIA reference architecture.

Deployment Geography: Single U.S. Tier 3 data center facility.

Power Infrastructure: 4.8 megawatts of dedicated power capacity, delivered on an N+1 redundant basis, providing the fault-tolerant power foundation required for uninterrupted large-scale AI workloads.

Targeted Deployment Start: Q3 2026.

Contract Structure: Secured with a deposit, prepayment, and monthly payments made in advance against contracted pricing on a take-or-pay basis. Supported by enterprise-grade service levels, with the ability to add ancillary value-added services such as dedicated local loops. Terms are architected by Axe Compute to align with the enterprise, not dictated by provider inventory and requirements.

Strategic Significance

This contract illustrates the commercial architecture Axe Compute is scaling toward: multi-year, dedicated GPU deployments with contracted pricing, service levels, and location specified by the customer. At $260 million over 36 months, it establishes a new benchmark for enterprise AI infrastructure engagements and provides the Company with meaningful long-dated income visibility.

Two structural capabilities of the Axe Compute platform directly enable engagements of this size and structure. First, the platform's geographic reach means customers can match compute capacity to the regions their workloads actually require — a structural flexibility that incumbent providers, constrained to the facilities they have built, cannot always offer. Second, Axe Compute offers dedicated clusters backed by delivery guarantees, ensuring customers receive the GPU compute they need, when they need it, to support scaling their businesses and serving their end clients. Combined with predictable pricing, customers know what they will pay each month, with no hidden fees, aligned to their monetization model. The deployment is backed by dedicated, N+1 redundant power infrastructure totaling 4.8 megawatts committed to this cluster alone, fully supported by 24/7 on-site resources.

Axe Compute believes this transaction is representative of a broader, structural shift in how enterprise AI infrastructure is procured: customers specify what their AI workloads require and contract accordingly, rather than adapting their AI roadmaps to the constraints of legacy cloud capacity. This agreement is representative of the engagement profile Axe Compute is built to deliver: choice, flexibility, dependability, and scalability for a market seeking an alternative model.

Workload Use Cases

The 2,304-GPU B300 cluster delivered under this agreement is purpose-built to support the most demanding AI workloads at enterprise scale. Representative workloads include:

Foundation Model Training: Pre-training large language models and multimodal foundation models requires sustained, high-throughput GPU compute across thousands of accelerators operating in tight coordination. The B300’s memory bandwidth and single-spine interconnect performance make it particularly well-suited for training runs at this scale, where GPU utilization and inter-node communication efficiency directly determine time-to-completion and cost.

Fine-Tuning and Domain Adaptation: Enterprises adapting foundation models to proprietary datasets, whether for legal, financial, biomedical, or customer-specific applications, require dedicated compute that eliminates the multi-tenancy risks and unpredictable availability that characterize shared cloud environments. Dedicated infrastructure ensures data remains within a controlled facility boundary and compute capacity is available on the enterprise’s schedule, not the provider’s.

High-Throughput Inference: Production AI deployments serving real-time or near-real-time inference at scale, including recommendation engines, content generation pipelines, fraud detection systems, and autonomous decision-making platforms, all require low-latency, high-availability GPU infrastructure with predictable performance. Dedicated clusters eliminate the noisy-neighbor latency spikes that plague shared cloud environments, delivering consistent, predictable performance at scale.

AI-Intensive Data Processing: The integration of high-speed AI-focused storage (e.g. Vast) with the GPU cluster enables workloads that demand rapid ingestion, transformation, and processing of massive datasets at training time, including multimodal data pipelines processing image, video, audio, and text at scale. Storage throughput and proximity to compute are critical bottlenecks at this data volume; the co-located architecture directly addresses both.

About Axe Compute Inc.

Axe Compute Inc. (NASDAQ: AGPU) is a neocloud AI infrastructure platform built on a fundamental premise: AI innovation should not be constrained by infrastructure supply and performance limits. Axe Compute gives enterprises and AI innovators choice across hardware, geography, and deployment speed. Axe Compute also operates a Strategic Compute Reserve that translates to enterprise GPU access, converting reserve holdings into deployable AI infrastructure capacity. Axe Compute is among the first publicly traded companies delivering this model at scale. Learn more at axecompute.com.

Forward-Looking Statements

This press release contains “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Forward-looking statements include, but are not limited to, statements regarding the anticipated timing, scope, value, and performance of the contract described herein; the expected deployment schedule; the availability of hardware and facility capacity; the customer relationship and its future progression; the Company’s ability to secure additional engagements of similar scale; and Axe Compute’s broader business strategy and market positioning. These statements are based on the Company’s current expectations and assumptions and are subject to known and unknown risks and uncertainties that could cause actual results to differ materially, including risks related to the execution and enforceability of the definitive agreement, hardware supply chain constraints, facility readiness, customer performance, macroeconomic conditions, competition, regulatory matters, and other risk factors described in the Company’s filings with the U.S. Securities and Exchange Commission. Axe Compute undertakes no obligation to update any forward-looking statement, except as required by applicable law.

Investor & Media Contacts
Investor Relations
Erin McMahon
erin@axecompute.com

FAQ

What did Axe Compute Inc. (AGPU) announce in this 8-K filing?

Axe Compute announced a 36-month enterprise infrastructure contract worth approximately $260 million. The agreement covers a dedicated 2,304-GPU NVIDIA B300 cluster with AI-focused high-speed storage, deployed in a U.S. Tier 3 data center, with enterprise-grade service levels and renewal options.

How large is the new Axe Compute (AGPU) enterprise contract and over what period?

The new enterprise contract has an aggregate value of approximately $260 million over 36 months. This multi-year term gives the company extended income visibility and is described as the largest enterprise engagement in Axe Compute’s history, spanning GPU compute and high-speed storage services.

What infrastructure will Axe Compute (AGPU) provide under the $260 million contract?

Axe Compute will deliver a dedicated cluster of 2,304 NVIDIA B300 GPUs plus AI-focused high-speed storage. The deployment resides in a single U.S. Tier 3 data center with 4.8 megawatts of N+1 redundant power, designed for large-scale AI training, fine-tuning, and high-throughput inference workloads.

When is the Axe Compute (AGPU) contract deployment expected to start and how long is the term?

Targeted deployment for the contract is scheduled to commence in Q3 2026, subject to the agreement’s terms. The initial term is 36 months, and the contract also includes options to renew for additional years, potentially extending the relationship beyond the original period.

How is the Axe Compute (AGPU) enterprise contract structured financially?

The contract uses a combination of a deposit, prepayment, and monthly payments made in advance. These monthly charges are on a take‑or‑pay basis against contracted pricing, covering both GPU compute and high-speed storage, and are supported by enterprise-grade service levels.

Why is the new $260 million contract strategically important for Axe Compute (AGPU)?

The company views this as a benchmark for multi-year, dedicated GPU deployments with customer-specified location, pricing, and service levels. At $260 million over 36 months, it underscores demand for Axe Compute’s neocloud AI infrastructure model and provides meaningful long-dated income visibility, according to the company’s description.

Filing Exhibits & Attachments: 5 documents