Mullet Mining: The Case for Running AI and Bitcoin Mining on the Same Site
There's a middle path between going all-in on AI and keeping the ASICs running. It's called mullet mining, and for the right sites, it's the most capital-efficient way into GPU data center infrastructure.
Everyone's talking about pivoting to AI. Most of the decks we see skip the part where you actually evaluate whether your site supports it.
The miners who are going to win this transition aren't the ones with the boldest press releases. They're the ones who match their assets to the right workload.
And for a lot of operators, the right move isn't a full pivot — it's mullet mining: AI business in the front, mining party in the back.
TL;DR
- Mullet mining is running AI/HPC and Bitcoin mining on the same site. AI gets priority and generates fixed USD revenue under SLA. Mining absorbs surplus capacity and curtails when AI needs more.
- Whether your site qualifies comes down to three things: power flexibility, a credible cooling upgrade path, and location match for your target AI workload.
- Purpose-built AI facilities are coming online and the conversion advantage is compressing. Evaluate now or risk evaluating too late.
The Moment We're In
Hashprice hit record lows around $27–28 per PH/s per day. Miners have announced over $65 billion in AI contracts. The economics aren't subtle.
But here's what we keep seeing: operators hear those numbers and split into two camps. The panic-pivoters start ordering GPUs before they've evaluated whether their site can handle the workload. The freezers tell themselves mining will bounce back and they'll figure out AI later. Both are leaving money on the table.
The operators we're most optimistic about are the ones asking a different question: What can my site actually support right now, and how do I phase into AI/HPC without blowing up what's already working?
The demand is real. Hyperscale data center capacity is sold out through 2028 or 2029. Neoclouds are filling the gap. Roughly one-third of AI workloads now run on neocloud infrastructure, and that share is growing. But demand existing in the market doesn't mean your specific site is ready to capture it.

And the biggest risk in this market isn't making the wrong move — it's spending 18 months evaluating while the window closes. Purpose-built AI facilities are coming online. The competitive advantage of repurposing mining infrastructure compresses with every quarter that passes.
What Mullet Mining Actually Is (and Isn't)
Here's the simple version: your AI/HPC workloads get priority. They're uptime-committed, SLA-bound, generating fixed USD-denominated revenue under contract. Mining absorbs all remaining capacity — surplus power, off-peak hours, the ramp-up period before AI demand fills the site. When AI needs more, mining curtails. That's the deal.
This is not "running two businesses side by side." It's capacity planning with a clear pecking order. Mining is the junior partner, and it has to behave like it.
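That pecking order can be sketched as a toy allocator. The function and numbers below are illustrative only, not any real site's control logic:

```python
def allocate_power(site_capacity_mw: float, ai_demand_mw: float) -> dict:
    """Toy priority allocator: AI gets first claim on power, mining absorbs the rest."""
    ai_mw = min(ai_demand_mw, site_capacity_mw)  # AI is SLA-bound, so it's served first
    mining_mw = site_capacity_mw - ai_mw         # mining curtails to whatever remains
    return {"ai_mw": ai_mw, "mining_mw": mining_mw}

# During ramp-up, AI uses only part of the site and mining fills the gap:
print(allocate_power(site_capacity_mw=50, ai_demand_mw=12))  # {'ai_mw': 12, 'mining_mw': 38}
# At full AI demand, mining curtails to zero:
print(allocate_power(site_capacity_mw=50, ai_demand_mw=50))  # {'ai_mw': 50, 'mining_mw': 0}
```

The point of the toy: mining never has a guaranteed allocation. It's a residual claim on power, and the moment that's ambiguous in practice, you're running two competing businesses instead of one hybrid.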
It's also not colocation. Renting space and power to someone else and calling it a GPU data center is a landlord business. Mullet mining means you're operating compute on both sides, which means you own both the operational complexity and the margin. The difference matters because operating your own AI workload (or managing it for a customer under SLA) is a fundamentally different business than handing someone a rack and a power drop.
Why does this work economically?
It works because mining keeps your power from being stranded during the buildout phase. Most AI deployments don't fill a site overnight. There's a ramp period (months, sometimes years) where you're paying for power you're not fully using. Mining monetizes that gap. Without it, you're eating the cost of unused capacity while you wait for AI demand to scale up. Miners already understand flexible load. Scaling down during peak AI demand and scaling up when energy is abundant is the same curtailment logic you've used for years, just applied to a dual-workload site.
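Here's a back-of-envelope version of that ramp argument. Every number below is an assumption for illustration (a hypothetical 50 MW site with take-or-pay power at $40/MWh, AI netting $120/MWh-equivalent, and mining netting $55/MWh on surplus power), not market data:

```python
HOURS_PER_QUARTER = 24 * 91  # roughly one quarter of continuous operation

def quarterly_margin(ai_mw, mining_mw, power_cost=40.0, ai_rev=120.0, mining_rev=55.0):
    """Compare quarterly margin with mining filling surplus MW vs. leaving them idle."""
    ai = ai_mw * HOURS_PER_QUARTER * (ai_rev - power_cost)
    mining = mining_mw * HOURS_PER_QUARTER * (mining_rev - power_cost)
    # Without mining, surplus MW still incur cost under a take-or-pay contract:
    stranded = mining_mw * HOURS_PER_QUARTER * (-power_cost)
    return ai + mining, ai + stranded

# AI demand ramping 10 MW per quarter across a 50 MW site:
for ai_mw in [10, 20, 30, 40, 50]:
    hybrid, ai_only = quarterly_margin(ai_mw, 50 - ai_mw)
    print(f"AI {ai_mw:>2} MW: hybrid ${hybrid:,.0f} vs AI-only ${ai_only:,.0f}")
```

With these assumed numbers, the hybrid site is profitable from quarter one while the AI-only buildout loses money until the ramp fills the site. The specific figures don't matter; the shape of the gap is the argument.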
Data Center Site Selection for Hybrid Operations
When we're talking to an operator about whether hybrid mining makes sense for their site, three things tell us most of what we need to know. (For the full five-question evaluation framework, see our piece on evaluating Bitcoin mining sites for AI training and inference. What follows is the hybrid-specific lens.)
Power flexibility. Can you actually split load between mining and AI? This is the first thing we check, and it's where a lot of sites stall. Rigid take-or-pay power contracts make hybrid hard. Demand response programs and curtailment agreements make it easier. If your power contract doesn't let you modulate between mining and AI workloads, you need to fix that before anything else on the deck matters.
Cooling upgrade path. You don't need liquid cooling on day one. But you need a credible path from where you are now (most likely air-cooled mining) to where GPU density will eventually take you. The smart approach is phased: start with air-cooled, lower-density GPU deployments and plan the liquid cooling buildout for when AI demand grows. Modern closed-loop liquid cooling systems don't consume meaningful water, which removes a common objection. But if there's no realistic path from your current airflow design to GPU-grade heat density management, that's either a disqualifier or it shrinks your scope to something much smaller than the pitch deck assumes. Modular, prefabricated approaches can reduce time-to-value here. You don't have to retrofit the entire facility at once.
Location match. This is where the AI inference vs training distinction matters for site selection. If you're running a training-oriented hybrid, rural works — cheap power and internal networking are what matter. If you're targeting inference, you need to be within ~100 miles of a major metro with real fiber options, because latency-sensitive AI workloads can't afford long network round trips. That's an edge data center play, and it has different requirements.
One thing we'll add: fiber construction costs are small relative to data center and server hardware buildouts. If you're geographically close enough to make inference work, don't let the fiber investment scare you off. That's a solvable problem. The power and cooling are the hard parts.
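To see why distance matters for inference, here's a rough physical floor on network round-trip time. Light travels roughly 200 km per millisecond in fiber; real paths add routing and equipment delay, so treat these as lower bounds, not estimates of actual latency:

```python
FIBER_KM_PER_MS = 200.0  # approximate speed of light in fiber

def min_round_trip_ms(distance_miles: float) -> float:
    """Physical lower bound on round-trip time over a straight fiber path."""
    km = distance_miles * 1.609
    return 2 * km / FIBER_KM_PER_MS

print(f"{min_round_trip_ms(100):.2f} ms")   # ~1.6 ms floor at 100 miles
print(f"{min_round_trip_ms(1000):.2f} ms")  # ~16 ms floor at 1,000 miles
```

A ~1.6 ms floor at 100 miles leaves plenty of latency budget for an interactive inference workload; at 1,000 miles the floor alone starts eating into it before you account for routing, queuing, and the model itself. That's the intuition behind the edge proximity requirement.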
Operational maturity (staffing, SLA management, 24/7 ops) is a gate for any AI workload. We covered that in depth in our piece on why the mining-to-AI transition is harder than you think. If you can't run high-availability operations, neither training nor inference is realistic, regardless of how good the power price is.
Where Hybrid Mining Falls Apart
We'd rather tell you this now than have you find out after you've spent the capex.
Blurry boundaries. The number one thing that kills hybrid operations is operators who can't define what gets curtailed and when. If the line between "mining load" and "AI workload" is aspirational rather than contractual, you'll either strand AI capacity and lose the customer, or strand mining capacity and lose the economics. The boundary needs to be defined in writing before a single GPU gets racked.
Underestimating AI customer expectations. AI customers have SLAs. They measure uptime. They measure latency. If your mining load competes with AI for cooling or power and performance dips, they leave. We still see operators treat AI capacity like it's as forgiving as a mining pool. It's not. A pool doesn't fire you for a bad week. An AI customer under SLA absolutely will.
Operational stretch. Running GPU infrastructure alongside ASICs means two hardware lifecycles, two management stacks, two sets of failure modes. Mining farm operations and data center operations are not the same discipline. The jump from "we manage ASICs" to "we manage GPU servers under customer SLAs" is bigger than most operators expect.
Sites that just don't fit. Some sites are great Bitcoin mining data centers and that's it. No fiber path, no credible cooling retrofit, remote with no edge proximity. That's fine. It's genuinely better to know. Don't force a hybrid label on a mining site because "AI" sounds better in a pitch deck. We call this the "neither" category in our site evaluation framework. And it's not a failure. It's useful information.
Match Your Assets to the Workload
Mullet mining isn't for every site. But for the sites where it fits (flexible power, credible cooling path, right location for the workload, operational maturity) it's the most capital-efficient way into AI. You monetize the transition period instead of eating it. You keep mining revenue flowing while AI demand ramps. And you build operational credibility with neocloud customers that can lead to larger contracts down the road.
The window is real. Purpose-built AI facilities are coming online, and the conversion advantage mining operators have today compresses as supply catches up to demand. This isn't urgency for urgency's sake. It's an honest observation from a team watching deal flow every week.
If you're evaluating whether hybrid makes sense for your site, we're happy to talk through it. No deck required. Just bring your site specs and your questions. Schedule a call here.
The best time to evaluate your site was six months ago. The second best time is now. Start with what you actually have, not what you wish you had.