Backblaze to Present on Scalable AI Data Pipelines at AI & Big Data Expo North America 2026

Key Terms

S3-compatible object storage (technical)
S3-compatible object storage is a way of saving files and data as discrete objects that follows a common, widely adopted cloud storage protocol called S3. Think of it like a standardized postal format for digital files: any service that uses the same format can send, receive, and organize those files the same way. For investors, this matters because it makes a company’s data systems more portable, reduces dependence on a single vendor, and can lower costs and operational risk when scaling storage or moving workloads.
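The portability described above comes down to a shared protocol: client code addresses any S3-compatible provider the same way, so switching vendors is largely a matter of changing the endpoint. A minimal sketch, using S3's path-style object addressing; the endpoint hostnames below are illustrative assumptions, not an official list:

```python
# Sketch of why S3 compatibility matters: the same client logic can target
# any S3-compatible provider simply by pointing at a different endpoint.
# Endpoint hostnames here are illustrative, not an authoritative list.

def object_url(endpoint: str, bucket: str, key: str) -> str:
    """Build an object's URL using S3's path-style addressing."""
    return f"https://{endpoint}/{bucket}/{key}"

# Swapping providers changes only the endpoint, not the calling code.
for endpoint in ("s3.us-east-1.amazonaws.com", "s3.example-provider.com"):
    print(object_url(endpoint, "training-data", "shard-0001.tar"))
```

In practice an S3 SDK handles authentication and signing as well, but the same principle applies: the client is configured with an endpoint, and everything else stays the same across providers.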
GPU (technical)
A GPU (graphics processing unit) is a specialized computer chip designed to handle many calculations at once, originally for rendering images and video but now widely used for tasks like artificial intelligence, data analysis and high-performance computing. Investors watch GPU demand and prices because strong sales often signal growth for chip makers and their customers, affect profit margins and capital spending, and can forecast wider trends in gaming, AI adoption and cloud services.
Data pipeline (technical)
A data pipeline is the series of steps and tools that collect, clean, transform and deliver information from its source to where analysts, managers or machines use it. For investors it matters because reliable pipelines ensure financial metrics, customer figures and operational signals are timely and accurate—like a sealed system of pipes delivering clean water to a city, good pipelines prevent leaks, delays and bad decisions based on faulty numbers.
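The collect, clean, transform, and deliver stages described above can be sketched as a chain of small functions, each feeding the next. This is a toy illustration with made-up stage names and data, not any particular product's pipeline:

```python
# Toy sketch of a data pipeline: each stage takes the previous stage's
# output, so bad records are filtered out before they reach consumers.

def collect() -> list[str]:
    # Raw records as they might arrive from a source system.
    return ["  42 ", "17", "  n/a", "8 "]

def clean(raw: list[str]) -> list[str]:
    # Strip whitespace and drop records that are not numeric.
    return [r.strip() for r in raw if r.strip().isdigit()]

def transform(rows: list[str]) -> list[int]:
    # Convert validated strings into typed values for analysis.
    return [int(r) for r in rows]

def deliver(values: list[int]) -> int:
    # Hand off a final metric to downstream consumers.
    return sum(values)

print(deliver(transform(clean(collect()))))  # prints 67
```

The value of the structure is that each stage has one job, so a leak (here, the invalid "n/a" record) is caught at a known point instead of corrupting the final number.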
Hyperscaler (technical)
A hyperscaler is a very large provider of cloud computing and data-center services that owns and operates vast amounts of servers, storage and network capacity to host other companies’ applications and data. Think of them as the electric utility for digital services: their scale cuts unit costs, enables rapid growth for customers, and creates high barriers to entry, so investors watch their market share, margins and capital spending closely.

SAN FRANCISCO--(BUSINESS WIRE)-- Backblaze, Inc. (Nasdaq: BLZE), the high-performance cloud storage platform for the AI era, will exhibit and present at AI & Big Data Expo North America 2026 on May 18-19, 2026, at the San Jose McEnery Convention Center (Booth #434), showcasing how scalable data infrastructure keeps AI pipelines running at full speed.

Recognized as one of the leading U.S. events for enterprise AI and big data, the Expo brings together Fortune 500 leaders, AI innovators, and global technology partners who are moving AI from pilot projects to operational reality.

Product Showcase: B2 Neo

During the show, Backblaze will highlight B2 Neo, its high-performance S3-compatible object storage offering. B2 Neo is built to support the massive, high-throughput data flows that drive modern AI pipelines, delivering the low-latency data access that keeps GPUs fully utilized as organizations scale.

Expert Speaking Session: Architecting Scalable Data Foundations

Backblaze’s Sr. Director of Solutions Engineering, Troy Liljedahl, will also lead an expert session – “The AI Pipeline Starts with Storage: Architecting Scalable Data Foundations” – scheduled for May 18 at 10:15 a.m. PT.

As the race for AI supremacy accelerates, the model serves as the engine, but the data pipeline delivers the fuel. Liljedahl will examine how legacy cloud constraints and egress gatekeeping can clog the flow, and discuss what is required to build a Neocloud architecture that keeps data moving at the speed modern AI demands.

New Research: Elephant Data Flows

Unlike traditional cloud traffic, which fans out across many endpoints, AI pipelines generate massive, concentrated, bursty transfers between storage and compute. GPU clusters repeatedly pull multi-petabyte datasets across the model development lifecycle.

Backblaze's ongoing Network Stats research tracks these patterns through real network telemetry, and the latest findings show AI-driven neocloud traffic is reshaping infrastructure requirements in ways traditional hyperscaler architectures were not designed to support. One-page summaries of both Network Stats and Performance Stats will be available at Backblaze’s booth.

“GPU availability is only half the equation. Today’s AI workloads are driven by what we call elephant data flows—massive, high-throughput bursts of data moving between a small number of storage and compute endpoints,” said Liljedahl. “When your storage layer can’t keep pace, GPUs sit idle, models take longer to train, and your competitors iterate faster. Backblaze is built for exactly this reality: high-throughput object storage, direct connectivity, and network capacity designed to keep data moving so GPUs stay fed, AI pipelines keep running, and teams can iterate faster.”

Attendees are invited to visit Booth #434 or book a meeting to learn how S3-compatible object storage keeps GPUs fed and AI pipelines running at full speed.

About Backblaze

Backblaze (NASDAQ: BLZE) gives businesses the freedom to innovate without limits by removing the barriers of lock-in, complexity, and cost. Our high-performance cloud object storage accelerates AI workflows, powers data-heavy applications, streamlines media management, and protects critical data. As an award-winning independent cloud, we provide unparalleled levels of interoperability that enable over 500,000 of our customers to reach and serve hundreds of millions of end users in 175 countries around the world. For more information, please go to www.backblaze.com.

Press Contact:

Caroline Statile
press@backblaze.com

Source: Backblaze, Inc.