# Env-Doctor
Env-Doctor is a GPU environment diagnostic tool for Python AI/ML developers.
It detects and fixes the most common source of broken GPU setups: version mismatches between your NVIDIA driver, CUDA toolkit, cuDNN, and Python libraries like PyTorch, TensorFlow, and JAX.
Run one command. Get a full diagnosis. Get the exact fix.
**Common symptom:** `torch.cuda.is_available()` returns `False` right after installing PyTorch — because your driver supports CUDA 11.8, but `pip install torch` silently pulled CUDA 12.4 wheels.
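The version logic behind this symptom can be sketched in a few lines: a CUDA wheel only works when the driver's maximum supported CUDA version is at least the wheel's CUDA version. This is an illustrative sketch, not Env-Doctor's actual API — the function names are hypothetical.

```python
def parse_version(v: str) -> tuple[int, int]:
    """Split 'major.minor[.patch]' into a comparable (major, minor) tuple."""
    major, minor = v.split(".")[:2]
    return int(major), int(minor)

def wheel_is_compatible(driver_max_cuda: str, wheel_cuda: str) -> bool:
    """True if a wheel built against `wheel_cuda` can run on a driver
    whose highest supported CUDA runtime is `driver_max_cuda`."""
    return parse_version(wheel_cuda) <= parse_version(driver_max_cuda)

print(wheel_is_compatible("11.8", "12.4"))  # False -> the broken setup above
print(wheel_is_compatible("12.4", "11.8"))  # True  -> an older wheel still runs
```

The asymmetry is the whole story: newer drivers run older CUDA wheels, but not the other way around.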
Env-Doctor also checks GPU architecture compatibility, Python version conflicts, Docker GPU configs, AI model VRAM requirements, and exposes all diagnostics to AI assistants via a built-in MCP server.
## Features
| Feature | What It Does |
|---|---|
| One-Command Diagnosis | Instantly check compatibility between GPU Driver → CUDA Toolkit → cuDNN → PyTorch/TensorFlow/JAX |
| Compute Capability Check | Detect GPU architecture mismatches — catches why torch.cuda.is_available() returns False on new GPUs (e.g. Blackwell RTX 5000) even when driver and CUDA are healthy |
| Python Version Compatibility | Detect Python version conflicts with AI libraries and dependency cascade impacts |
| CUDA Installation Guide | Get platform-specific, copy-paste CUDA installation commands for Ubuntu, Debian, RHEL, Fedora, WSL2, Windows, and Conda |
| Deep CUDA Analysis | Reveals multiple installations, PATH issues, environment misconfigurations |
| Compilation Guard | Warns if system nvcc doesn't match PyTorch's CUDA — preventing flash-attention build failures |
| WSL2 GPU Support | Detects WSL1/WSL2 environments, validates GPU forwarding, catches common driver conflicts for WSL2 |
| Safe Install Commands | Prescribes the exact pip install command that works with YOUR driver |
| Container Validation | Catches GPU config errors in Dockerfiles and docker-compose with DB-driven recommendations |
| AI Model Compatibility | Check if your GPU can run any model (LLMs, Diffusion, Audio) before downloading |
| cuDNN Detection | Finds cuDNN libraries, validates symlinks, checks version compatibility |
| MCP Server | Expose all diagnostics to AI assistants (Claude Code, Claude Desktop, Zed) via Model Context Protocol stdio — no browser needed |
| Migration Helper | Scans code for deprecated imports (LangChain, Pydantic) and suggests fixes |
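The model-compatibility check in the table above reduces to a VRAM estimate: weights need roughly (parameters × bytes per parameter), plus headroom for activations and the CUDA context. The sketch below is a back-of-the-envelope version with illustrative constants, not Env-Doctor's actual heuristics.

```python
# Approximate bytes needed per parameter at each precision.
DTYPE_BYTES = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def estimated_vram_gb(params_billion: float, dtype: str,
                      overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: weight memory plus a fixed overhead allowance."""
    return params_billion * DTYPE_BYTES[dtype] + overhead_gb

def fits(params_billion: float, dtype: str, gpu_vram_gb: float) -> bool:
    return estimated_vram_gb(params_billion, dtype) <= gpu_vram_gb

# An 8B model in fp16 on a 12 GB card: ~17.5 GB needed, does not fit.
print(fits(8, "fp16", 12))  # False
print(fits(8, "int4", 12))  # True (~5.5 GB)
```

Quantizing from fp16 to int4 cuts the weight footprint by 4×, which is why the same model can go from "won't load" to "fits comfortably" on the same card.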
## Installation

Quick start (other commands are documented separately):
```bash
# Diagnose your environment
env-doctor check

# Check Python version compatibility
env-doctor python-compat

# Get CUDA installation instructions
env-doctor cuda-install

# Get safe install command for PyTorch
env-doctor install torch

# Check if a model fits on your GPU
env-doctor model llama-3-8b
```
### CLI Demo — Environment Check

### CLI Demo — Model Checker

### MCP Server Demo — Claude Code in Action
Use env-doctor as an MCP server and let your AI assistant diagnose GPU environments, fetch safe install commands, and validate Dockerfiles — all without leaving the chat.
## Star History
## What's Next?

- **Getting Started**: Complete installation and first steps guide
- **Commands**: Full reference for all CLI commands
- **Container Validation**: Validate Dockerfiles and docker-compose for GPU issues
- **Model Compatibility**: Check if AI models fit on your GPU before downloading
## Frequently Asked Questions

### Why does `torch.cuda.is_available()` return `False`?
The most common causes are:
- Your GPU's SM architecture isn't in your PyTorch wheel (common on new GPUs like Blackwell RTX 5000)
- Your NVIDIA driver is too old for your CUDA toolkit
- CUDA version mismatch between driver, toolkit, and PyTorch
Run `env-doctor check` to get the exact cause and fix.
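The first cause is essentially a set-membership check: the wheel can only run kernels on your GPU if the GPU's SM version is among the architectures the wheel was compiled for (or reachable via PTX forward compatibility). A minimal sketch — the architecture set below is illustrative, not an authoritative list of what any specific wheel ships:

```python
# Illustrative set of SM architectures a stable wheel might be built for.
STABLE_WHEEL_SMS = {"sm_50", "sm_60", "sm_70", "sm_75", "sm_80", "sm_86", "sm_90"}

def arch_supported(gpu_sm: str, wheel_sms: set = STABLE_WHEEL_SMS) -> bool:
    """True if the GPU's SM architecture is in the wheel's compiled set."""
    return gpu_sm in wheel_sms

print(arch_supported("sm_86"))   # True  -> Ampere-era GPU, covered
print(arch_supported("sm_120"))  # False -> newest GPUs need a newer wheel
```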
### How do I fix a CUDA version mismatch with PyTorch?

Run `env-doctor check` to identify what's mismatched, then `env-doctor install torch` to get the exact `pip install` command with the correct `--index-url` for your driver.
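Choosing the safe command amounts to mapping the driver's maximum supported CUDA version to the newest matching wheel index. The index suffixes below are real PyTorch wheel indexes; the selection logic and thresholds are a simplified sketch, not Env-Doctor's implementation.

```python
# Newest-first table of (minimum driver CUDA, PyTorch index suffix).
PYTORCH_INDEXES = [
    ((12, 4), "cu124"),
    ((12, 1), "cu121"),
    ((11, 8), "cu118"),
]

def pick_install_command(driver_cuda: tuple) -> str:
    """Return a pip command for the newest wheel index the driver supports."""
    for min_cuda, suffix in PYTORCH_INDEXES:
        if driver_cuda >= min_cuda:
            return ("pip install torch --index-url "
                    f"https://download.pytorch.org/whl/{suffix}")
    return "pip install torch  # CPU-only: driver too old for any CUDA wheel"

print(pick_install_command((11, 8)))  # -> ...whl/cu118
```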
### Why does flash-attention fail to build?

`flash-attn` requires an exact match between your system `nvcc` version and PyTorch's CUDA build. Run `env-doctor install flash-attn` — it detects the mismatch and gives you two fix paths.
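Because `flash-attn` compiles CUDA kernels at install time, the guard boils down to comparing the system `nvcc` version against the CUDA version PyTorch was built with. A minimal sketch of that comparison (version strings here are illustrative):

```python
def nvcc_matches_torch(nvcc_version: str, torch_cuda_version: str) -> bool:
    """Compare major.minor only; patch-level differences are tolerated."""
    trim = lambda v: tuple(int(x) for x in v.split(".")[:2])
    return trim(nvcc_version) == trim(torch_cuda_version)

print(nvcc_matches_torch("12.1.105", "12.1"))  # True  -> build should succeed
print(nvcc_matches_torch("11.8.89", "12.1"))   # False -> build will fail
```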
### How do I use env-doctor with Claude or other AI assistants?

Env-Doctor ships a built-in MCP server (`env-doctor-mcp`). Add it to your Claude Desktop or Claude Code config and your AI assistant can call all diagnostic tools directly from the chat. See the MCP Integration Guide.
### My new RTX 5000 / Blackwell GPU isn't working with PyTorch. What do I do?

Stable PyTorch wheels don't yet include SM 120 (Blackwell) support. Run `env-doctor check` — it detects whether you have a hard or soft architecture mismatch and provides the exact nightly PyTorch install command with `sm_120` support.