AI Infrastructure
Part of AI & Machine Learning
GPU compute and AI training infrastructure
Services (8)
Each entry lists its score, ownership structure, deployment model, license type, and country.
vLLM (AI & Machine Learning)
74% | Bootstrapped | Self-Hostable | Apache-2.0
High-throughput LLM serving engine with PagedAttention for efficient memory management.
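To make "PagedAttention" concrete: vLLM splits the KV cache into fixed-size blocks and gives each sequence a block table mapping logical token positions to physical blocks, so memory is claimed on demand rather than reserved for the maximum sequence length. The sketch below is a toy illustration of that idea only, not vLLM's actual allocator (which lives in C++/CUDA); the block size of 4 is chosen for readability (vLLM defaults to 16).

```python
# Toy sketch of the paged KV-cache idea behind vLLM's PagedAttention.
# Illustration only; names and sizes here are assumptions for clarity.
BLOCK_SIZE = 4  # tokens per cache block (vLLM's default is 16)

class BlockTable:
    def __init__(self, free_blocks):
        self.free_blocks = free_blocks  # shared pool of physical block ids
        self.blocks = []                # logical-to-physical block mapping
        self.num_tokens = 0

    def append_token(self):
        """Reserve space for one more token, grabbing a new block only
        when the current block is full."""
        if self.num_tokens % BLOCK_SIZE == 0:
            self.blocks.append(self.free_blocks.pop())
        self.num_tokens += 1

pool = list(range(100))   # 100 physical blocks shared by all sequences
seq = BlockTable(pool)
for _ in range(9):        # 9 tokens need ceil(9/4) = 3 blocks
    seq.append_token()
print(len(seq.blocks))  # 3
```

Because blocks are allocated lazily from a shared pool, short sequences never pin memory that long sequences could use, which is what enables vLLM's high serving throughput.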
LocalAI (AI & Machine Learning)
74% | Bootstrapped | Self-Hostable | MIT
Self-hosted OpenAI-compatible API for running LLMs locally with CPU or GPU support.
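"OpenAI-compatible" means an ordinary OpenAI-style chat request works against the local endpoint. A minimal sketch using only the standard library, assuming a LocalAI instance on its documented default port 8080 and a hypothetical model name:

```python
import json
import urllib.request

# Assumed endpoint: LocalAI's documented default port is 8080.
BASE_URL = "http://localhost:8080"

def build_chat_request(model, user_message):
    """Build an OpenAI-style /v1/chat/completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat(model, user_message):
    """POST the payload to the local server and return the reply text."""
    payload = build_chat_request(model, user_message)
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the request and response shapes match OpenAI's, existing OpenAI client code can usually be pointed at LocalAI by changing only the base URL.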
Open WebUI (AI & Machine Learning)
72% | Foundation | Self-Hostable | MIT
Self-hosted AI chat interface supporting multiple LLM backends including Ollama, OpenAI-compatible APIs, and more.
llama.cpp (AI & Machine Learning)
70% | Bootstrapped | Self-Hostable | MIT
Efficient LLM inference in C/C++ with support for CPU, Metal, and CUDA acceleration.
Text Generation WebUI (AI & Machine Learning)
69% | Bootstrapped | Self-Hostable | AGPL-3.0
Web interface for running LLMs locally with support for many model formats and extensions.
OVHcloud AI Training (ML Platforms)
62% | EU | Public (EU)
GPU compute for AI/ML training.
Scaleway GPU Instances (Compute)
62% | EU | EU VC-Backed
GPU instances for ML workloads.
Ollama (AI & Machine Learning)
60% | Mixed (<30% Non-EU) | Self-Hostable | MIT
Run large language models locally with a simple CLI and API, supporting Llama, Mistral, and more.
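Ollama's API (by default at http://localhost:11434/api/generate) streams its reply as newline-delimited JSON, each line carrying a "response" text fragment and a final line flagged "done": true. A small sketch of reassembling such a stream; the sample lines below stand in for a real HTTP response and are an assumption for illustration:

```python
import json

def join_stream(ndjson_lines):
    """Concatenate the "response" fragments of a streamed Ollama reply,
    stopping at the line marked done."""
    parts = []
    for line in ndjson_lines:
        obj = json.loads(line)
        parts.append(obj.get("response", ""))
        if obj.get("done"):
            break
    return "".join(parts)

# Stand-in for the body of a streamed /api/generate response.
sample = [
    '{"response": "Hel", "done": false}',
    '{"response": "lo", "done": true}',
]
print(join_stream(sample))  # Hello
```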