Bubl Cloud
Open Source LLMs in your Data Vault

The model runs in your vault.

Run hundreds of open source AI models entirely inside your encrypted Data Vault. No external API calls. No data leaving your environment. Full sovereignty — from prompt to response.

Two ways to run models

Whether you need dedicated, always-on GPU capacity or flexible compute that scales with demand — every model runs inside your Data Vault, not ours.

Dedicated GPU

Add a dedicated GPU node to your Data Vault for maximum performance and guaranteed capacity. Best for production workloads, high-throughput inference, or models that need persistent availability.

Learn about Dedicated GPUs →

Models available today

A wide range of open source models are ready to deploy. The space moves fast — we add new models continuously. Version numbers change; model families endure. What matters is that your chosen model runs entirely in your vault.

General Purpose LLMs
Llama (Meta) · Mistral & Mixtral (Mistral AI, EU) · DeepSeek (DeepSeek) · Qwen (Alibaba) · Gemma (Google) · GLM (Zhipu AI) · Kimi (Moonshot AI) · MiniMax (MiniMax) · ChatGPT OSS (OpenAI)

Code & Development
DeepSeek Coder (DeepSeek) · Qwen Coder (Alibaba) · Devstral (Mistral AI, EU) · Gorilla (UC Berkeley)

Vision & Multimodal
Llama Vision (Meta) · Pixtral (Mistral AI, EU) · Qwen VL (Alibaba)

Speech & Audio
Whisper / WhisperX (OpenAI / Community) · Canary (NVIDIA) · XTTS (Coqui AI)

Embeddings & RAG
BGE (BAAI) · Nomic Embed (Nomic AI) · GTE (Alibaba)

OCR & Document Processing
olmOCR (Allen AI) · PaddleOCR (Baidu) · DeepSeek OCR (DeepSeek)
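Whichever family you choose, the calling pattern stays the same: your application talks to an endpoint inside the vault, and swapping models is just a change of model id. A minimal sketch, assuming the vault exposes an OpenAI-compatible chat endpoint — the endpoint URL and model id below are illustrative assumptions, not Bubl Cloud's documented API:

```python
import json

# Hypothetical in-vault address — traffic never leaves your environment.
VAULT_ENDPOINT = "http://vault.local:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion body.

    Changing models means changing only the `model` field; the
    request still targets the same local, in-vault endpoint.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("llama-3.1-70b", "Summarise this contract clause.")
print(json.dumps(body, indent=2))
```

Because the shape matches the OpenAI API, most existing client libraries and tools can be pointed at the in-vault address without code changes.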

Don't see your model?

The open source landscape moves quickly. If there's a model you need — or one you'd like us to evaluate for availability — let us know. We'll confirm whether it's already available or put it on the roadmap.

Ready to run AI inside your vault?

Talk to a solution architect about which models fit your workload and how to get started.