PEAK:AIO Solves Long-Running AI Memory Bottleneck for LLM Inference and Model Innovation with Unified Token Memory Feature


Press Release Date: May 19, 2025

Purpose-Built for AI: Unifying KVCache Reuse and GPU Memory Expansion Using CXL to Address One of AI’s Most Persistent Infrastructure Challenges

PEAK:AIO

Unified Token Memory Feature

Manchester, UK, May 19, 2025 (GLOBE NEWSWIRE) -- PEAK:AIO, the data infrastructure pioneer redefining AI-first data acceleration, today unveiled the first dedicated solution to unify KVCache acceleration and GPU memory expansion for large-scale AI workloads, including inference, agentic systems, and model creation.

As AI workloads evolve beyond static prompts into dynamic context streams, model creation pipelines, and long-running agents, infrastructure must evolve, too.

"Whether you are deploying agents that think across sessions or scaling toward million-token context windows, where memory demands can exceed 500GB per model, this appliance makes it possible by treating token history as memory, not storage," said Eyal Lemberger, Chief AI Strategist and Co-Founder of PEAK:AIO. "It is time for memory to scale like compute has."

As transformer models grow in size and context, AI pipelines face two critical limitations: KVCache inefficiency and GPU memory saturation. Until now, vendors have retrofitted legacy storage stacks or overextended NVMe to delay the inevitable. PEAK:AIO's new 1U Token Memory Feature changes that by building for memory, not files.

The First Token-Centric Architecture Built for Scalable AI

Powered by CXL memory and integrated with Gen5 NVMe and GPUDirect RDMA, the Token Memory Feature delivers up to 150 GB/sec sustained throughput with sub-5-microsecond latency. It enables:

  • KVCache reuse across sessions, models, and nodes
  • Context-window expansion for longer LLM history
  • GPU memory offload via true CXL tiering
  • Ultra-low latency access using RDMA over NVMe-oF
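The KVCache-reuse idea in the list above can be sketched in purely illustrative Python: a shared cache keyed by token-ID prefix lets a new session resume attention state from the longest cached prefix instead of recomputing it. None of these names are PEAK:AIO or NVIDIA APIs; this is a conceptual sketch only.

```python
# Hypothetical sketch of prefix-keyed KVCache reuse across sessions.
# A shared store maps a token-ID prefix to precomputed key/value state,
# so a new request only prefills the tokens beyond the longest cached prefix.

class PrefixKVCache:
    def __init__(self):
        self._store = {}  # tuple(token_ids) -> opaque KV state

    def put(self, token_ids, kv_state):
        self._store[tuple(token_ids)] = kv_state

    def longest_prefix(self, token_ids):
        """Return (matched_len, kv_state) for the longest cached prefix."""
        for end in range(len(token_ids), 0, -1):
            state = self._store.get(tuple(token_ids[:end]))
            if state is not None:
                return end, state
        return 0, None

cache = PrefixKVCache()
cache.put([1, 2, 3], "kv-for-[1,2,3]")           # session A caches its history
hit, state = cache.longest_prefix([1, 2, 3, 4])  # session B shares the prefix
# hit == 3: only token 4 needs fresh prefill; the cached state is reused
```

In a token-memory appliance, the cached state would live in CXL-attached memory rather than a Python dict, making reuse possible across models and nodes at memory-class latency.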

This is the first feature that treats token memory as infrastructure rather than storage, allowing teams to cache token history, attention maps, and streaming data at memory-class latency.

Unlike passive NVMe-based storage, PEAK:AIO's architecture aligns directly with NVIDIA’s KVCache reuse and memory reclaim models. This provides plug-in support for teams building on TensorRT-LLM or Triton, accelerating inference with minimal integration effort. By harnessing true CXL memory-class performance, it delivers what others cannot: token memory that behaves like RAM, not files.

"While others are bending file systems to act like memory, we built infrastructure that behaves like memory, because that is what modern AI needs," continued Lemberger. "At scale, it is not about saving files; it is about keeping every token accessible in microseconds. That is a memory problem, and we solved it by embracing the latest silicon layer."

The fully software-defined solution runs on off-the-shelf servers and is expected to enter production by Q3. To discuss early access, technical consultation, or how PEAK:AIO can support your AI infrastructure needs, contact sales at sales@peakaio.com or visit https://peakaio.com.

"The big vendors are stacking NVMe to fake memory. We went the other way, leveraging CXL to unlock actual memory semantics at rack scale," said Mark Klarzynski, Co-Founder and Chief Strategy Officer at PEAK:AIO. "This is the token memory fabric modern AI has been waiting for."

 

About PEAK:AIO

PEAK:AIO is a software-first infrastructure company delivering next-generation AI data solutions. Trusted across global healthcare, pharmaceutical, and enterprise AI deployments, PEAK:AIO powers real-time, low-latency inference and training with memory-class performance, RDMA acceleration, and zero-maintenance deployment models. Learn more at https://peakaio.com


CONTACT: Joanne Hogue
Smart Connections PR for PEAK:AIO
+1 (410) 658-8246
joanne@smartconnectionspr.com

 