
Neoclouds Explained: The GPU Cloud Providers Powering AI

Roughly one-third of AI workloads run on neoclouds, not hyperscalers. Here's what neoclouds are, who the major providers are, and why Bitcoin miners should pay attention.

Ian Philpot

The AI infrastructure buildout is often framed as a hyperscaler story—Microsoft, Google, Amazon, Meta spending billions on data centers. But roughly one-third of AI workloads aren't running on hyperscaler infrastructure. They're running on neoclouds.

That third running on neoclouds underscores a structural shift in how AI compute gets consumed. Neoclouds are a new category of cloud provider, built specifically for GPU-intensive AI workloads. For anyone watching the AI infrastructure market—including Bitcoin miners evaluating their own transition to AI/HPC—understanding what neoclouds are and how they fit into the ecosystem is essential context.

What Is a Neocloud?

A neocloud is a cloud provider that specializes in GPU compute, purpose-built for AI and high-performance computing (HPC) workloads rather than general-purpose cloud services.

The name is straightforward: “neo” (new) + “cloud”—a newer generation of cloud providers built around AI-era requirements. Where traditional cloud platforms offer broad service portfolios spanning storage, databases, enterprise software integrations, and GPU compute as one offering among many, neoclouds are built from the ground up around GPU availability and performance.

Their business model typically centers on GPU-as-a-Service (GPUaaS), offering customers access to GPU compute on flexible terms, often with faster provisioning and more competitive pricing than hyperscalers provide. Offerings vary somewhat from one provider to another, but they almost always include GPU compute, storage, and the tooling required to run AI workloads.

The neocloud category has grown quickly. There are now more than 100 neoclouds globally, with 10–15 operating at meaningful scale in the United States and expanding footprints across Europe, the Middle East, and Asia. They’re backed by a mix of venture capital, private equity, and other investment funds.

Neoclouds vs. Hyperscalers

Hyperscalers like AWS, Azure, and Google Cloud operate on a massive scale and offer a wide range of services. GPU capacity is part of their portfolio, but it’s not their sole focus. They serve everything from enterprise SaaS to storage to machine learning, and GPU instances are one product line among hundreds.

Neoclouds are different in scope and focus. They are GPU-first, often offering faster access to the latest hardware generations, more flexible contract terms, and pricing that can be significantly lower—by some estimates, as much as 85% less than hyperscaler rates for comparable compute.
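To make the pricing gap concrete, here is a minimal sketch of what a rate difference means over a multi-week training run. The hourly rates, cluster size, and run length below are hypothetical placeholders for illustration, not quotes from any specific provider.

```python
def training_cost(gpus: int, hours: float, rate_per_gpu_hour: float) -> float:
    """Total cost of renting `gpus` GPUs for `hours` at a flat hourly rate."""
    return gpus * hours * rate_per_gpu_hour

GPUS = 64           # hypothetical cluster size
HOURS = 21 * 24     # three-week training run

hyperscaler_rate = 6.00   # $/GPU-hour (hypothetical)
neocloud_rate = 2.50      # $/GPU-hour (hypothetical)

hyperscaler_total = training_cost(GPUS, HOURS, hyperscaler_rate)
neocloud_total = training_cost(GPUS, HOURS, neocloud_rate)

savings = 1 - neocloud_total / hyperscaler_total
print(f"Hyperscaler: ${hyperscaler_total:,.0f}")
print(f"Neocloud:    ${neocloud_total:,.0f}")
print(f"Savings:     {savings:.0%}")
```

Even a moderate per-hour discount compounds quickly at cluster scale, which is why rate differences that look small on a price sheet translate into six-figure swings on real training runs.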

But calling this a strict competition misses the point. Neoclouds and hyperscalers often cater to different needs and may be used at different stages of AI development. According to insights shared at the PTC conference, two out of every three AI workloads run on hyperscalers, while one out of three runs on neoclouds.

The dynamic is partly driven by capacity constraints. Hyperscale data center capacity is currently sold out until 2028 or 2029. Neoclouds fill availability gaps that hyperscalers can’t address quickly enough, particularly for AI startups and research labs that need GPU access now rather than in two years.

Why Neoclouds Matter Right Now

Most neoclouds today operate primarily as bare-metal-as-a-service (BMaaS) providers—renting GPU hardware directly to customers without the layers of managed services and software abstraction that hyperscalers offer. BMaaS is essentially the foundational layer of GPU-as-a-Service: customers get direct access to GPU servers, and the neocloud handles the infrastructure, power, cooling, and networking.

The BMaaS model is straightforward, but it’s also where the long-term questions lie. Bare-metal GPU rental is inherently commoditized—differentiation is limited when you’re competing on hardware access and price. The neoclouds that endure will likely be those that move up the stack into AI-native software services, such as training orchestration, managed inference platforms, and developer tools. That evolution will determine whether neoclouds become a durable category or repeat the pattern of Cloud 1.0 startups that were eventually absorbed or sidelined by hyperscalers.

For now, though, several factors are keeping neoclouds highly relevant.

The GPU Shortage Created an Opening

GPU scarcity has made access to compute a competitive advantage. Where hyperscalers have exhausted GPU capacity or are constrained by existing contracts, neoclouds act as an alternative entry point. For AI startups and organizations that can't wait 12–18 months for hyperscalers to allocate resources, neoclouds are often the fastest way to get workloads running.

Flexibility Aligns with AI Development Cycles

AI development moves quickly, and a project's infrastructure needs can change significantly between training and deployment phases. Neoclouds often provide more flexible, shorter-term commitments than hyperscaler contracts, which tend to require longer lock-ins. For teams that are still iterating on models or scaling unpredictably, that flexibility has real value.
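The trade-off between a long-term reserved contract and flexible on-demand pricing comes down to utilization. A rough sketch, using hypothetical rates: a reserved contract bills for every hour whether or not the GPUs are busy, so it only wins above a break-even utilization level.

```python
def breakeven_utilization(reserved_rate: float, on_demand_rate: float) -> float:
    """Fraction of hours a customer must actually use the GPU for a
    reserved contract (billed 24/7) to cost less than paying on demand.

    Reserved cost over a period T is reserved_rate * T; on-demand cost
    is on_demand_rate * utilization * T. Setting them equal gives the
    break-even utilization."""
    return reserved_rate / on_demand_rate

# Hypothetical rates: reserved at $2.00/GPU-hr vs on-demand at $3.50/GPU-hr
u = breakeven_utilization(2.00, 3.50)
print(f"Reserved wins above {u:.0%} utilization")
```

Below that utilization threshold, a team still iterating on models is better off paying the higher on-demand rate, which is exactly the niche where neocloud flexibility matters.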

Data Sovereignty Is Driving Demand

Another structural driver behind neocloud adoption is data sovereignty. Many enterprises, particularly those that cannot afford to have proprietary data exposed to public training sets, need private resources. Hyperscaler terms can leave customers with limited control over where their data resides and how it is handled.

Neoclouds, by contrast, frequently operate smaller, regionally targeted facilities and can offer greater control over data privacy, data locality, jurisdiction, and infrastructure configuration. For organizations that must guarantee that training data, model weights, or inference outputs remain within specific privacy boundaries, data sovereignty is mission-critical.

The Secondary GPU Market Lowers Barriers

The used and refurbished GPU market plays an important role in neocloud economics. Not every workload requires the latest hardware generation (e.g., the B200 or B300 series); many inference and fine-tuning workloads run perfectly well on previous-generation hardware (e.g., H100 or A100 devices). Neoclouds that take advantage of the secondary market can offer competitive pricing with healthy margin potential, lowering the cost barrier to entry for new players and reducing cost for consumers.
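A back-of-the-envelope sketch shows why discounted secondary-market hardware changes the economics. All figures below are hypothetical: capital cost, rental rate, utilization, and hourly operating cost are illustrative placeholders, not actual market prices.

```python
def payback_months(capex: float, rental_rate: float, utilization: float,
                   opex_per_hour: float) -> float:
    """Months to recover hardware cost from rental income, net of
    power/operations, at a given utilization (~730 hours per month)."""
    hourly_margin = rental_rate * utilization - opex_per_hour
    if hourly_margin <= 0:
        return float("inf")  # never pays back at this utilization
    return capex / (hourly_margin * 730)

# Hypothetical: new flagship GPU vs discounted previous-gen unit
new_gpu = payback_months(capex=30_000, rental_rate=2.50,
                         utilization=0.70, opex_per_hour=0.40)
used_gpu = payback_months(capex=12_000, rental_rate=1.60,
                          utilization=0.70, opex_per_hour=0.40)
print(f"New:  {new_gpu:.1f} months")
print(f"Used: {used_gpu:.1f} months")
```

Under these assumptions, the cheaper used unit reaches payback sooner despite renting at a lower rate, which is the margin dynamic that makes the secondary market attractive to smaller neoclouds.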

The secondary market also functions as a natural off-take channel for hyperscalers rotating out older GPU fleets, improving capital recovery for hyperscalers while supplying neoclouds with discounted hardware—creating a symbiotic dynamic rather than a purely adversarial one.

Compute Is Becoming More Distributed

Neoclouds represent an evolution in how AI compute is delivered, not just an extension of the hyperscale model. As inference and agentic AI workloads grow, along with workloads that need proximity to end users for low-latency performance, so does the need for distributed, specialized compute. Neoclouds are positioned to meet that need in ways the centralized hyperscale model may not.

Notable Neocloud Providers

The neocloud category spans a wide range of providers, varying in scale, geographic presence, and specialization. The following table lists some of the notable neocloud providers in the field today.

| Company | Notable GPUs | Pricing Approach | Geographic Focus |
| --- | --- | --- | --- |
| CoreWeave | GB200/GB300 NVL72, B200, H200, H100, L40S, A100 | Hourly instance; some SKUs contact sales | Multiple US regions, EMEA (Spain) |
| Crusoe Cloud | GB200 NVL72, B200, H200, H100, AMD MI355X | On-demand, Reserved, Spot; managed inference | US (multiple regions), Iceland |
| Lambda | B200, H100, A100 (80/40GB), V100; clusters to 2,000+ GPUs | On-demand hourly; reserved (1-year); cluster commitments | Multiple global locations |
| Nebius | B200, H200, H100, L40S; GB200/GB300 (preorder) | On-demand per GPU-hour; reserved (up to ~35% discount) | Finland, France, Israel, US (Kansas City) |
| Vultr | A100, B200, GH200, H100, L40S; AMD MI300X/MI325X/MI355X | On-demand hourly; VMs and bare metal; serverless inference | 19 countries, 30+ cities |
| Civo | B200, H100, H200, A100, L40S | Hourly per-GPU; discounted reserved rates | London (primary GPU region) |
| Latitude.sh | RTX PRO 6000, H100; GPU clusters | On-demand hourly; reserved monthly/yearly | 23 global locations |
| Hivelocity | Custom GPU servers (configurations vary) | Dedicated servers; enterprise cloud; custom quoted | 50+ global locations |
| Hetzner | RTX PRO 6000 Blackwell; dedicated GPU configs | Hourly and monthly; setup fees for dedicated | Germany, Finland, US (Ashburn, Hillsboro) |

Note: GPU availability and pricing structures are subject to change. The information provided is based on publicly available data up to early 2026.

The variety of providers listed reflects the scope of the neocloud market. Providers such as CoreWeave and Crusoe operate at large scale, with the latest GPU generations and multi-region infrastructure, while providers such as Hetzner and Latitude.sh target more budget-conscious or region-specific users.

Why Neoclouds Are Relevant to Bitcoin Miners

For Bitcoin miners watching the AI infrastructure space, neoclouds are the category worth paying attention to—not because miners should immediately become one, but because neoclouds represent the clearest picture of how GPU compute demand is being served outside of hyperscalers.

The relevance is structural. Neoclouds need exactly what mining operations have spent years securing: power capacity, physical infrastructure, cooling capability, and operational experience managing high-density compute environments. These are the foundational requirements that take the longest to build and cost the most to acquire. New entrants to the AI infrastructure space have to build them from scratch. Miners already have them.

Understanding the neocloud market also gives miners a view of the variety of opportunities within it. Entering the market as a neocloud is one option, but not the only one. Miners can partner with existing neoclouds, offer colocation services to neocloud providers, or supply hardware and power solutions to the market. Each path carries its own capital requirements, complexity, and revenue profile.

The key takeaway is that the market demand served by neoclouds is legitimate, and it is growing. As we covered in our GPU-as-a-Service overview, the GPU-as-a-Service market is estimated at $4–6 billion and growing at double-digit rates. Neoclouds have proven that the business model works and that customers are willing to pay for specialized GPU infrastructure. That’s market validation that miners evaluating AI/HPC should take seriously.

We’ll explore the specifics of how miners can engage with the neocloud ecosystem—from site evaluation to partnership models to what it takes to offer GPU-as-a-Service—in upcoming articles in this series.

The Bigger Picture

Neoclouds represent a structural shift in how AI compute is consumed—not a temporary phenomenon born purely from GPU scarcity. As AI workloads continue growing across both training and inference, the category will continue evolving. The neoclouds that survive will be those that move beyond pure hardware rental into differentiated services. The ones that don’t will face the same commoditization pressures that squeezed earlier generations of cloud startups.

For miners and infrastructure operators, neoclouds are both a model to learn from and a potential market to serve. The hyperscaler-dominated narrative of AI infrastructure misses a significant piece of the picture. Neoclouds are where a third of AI workloads actually run—and that share may grow.


Ian Philpot

Marketing Director at Luxor Technology