We built a guided install mission for LocalAI inside KubeStellar Console, a standalone Kubernetes dashboard (unrelated to legacy kubestellar/kubestellar, kubeflex, or OCM — zero shared code).
→ Open the LocalAI install mission
What the mission does
The mission installs LocalAI via the official Helm chart at https://go-skynet.github.io/helm-charts/ with a persistent volume for the model-gallery cache and a ClusterIP Service on the OpenAI-compatible /v1/chat/completions endpoint. Each step:
- Pre-flight — verifies Helm 3, default StorageClass, and GPU device plugin availability when the GPU image variant is selected
- Commands — helm repo add go-skynet https://go-skynet.github.io/helm-charts/, then helm install local-ai with an inline values file pulling a gallery model (llama-3.2-1b-instruct as the default first-run model)
- Validation — waits for the Deployment rollout, confirms the models PVC is bound, then hits /v1/models and /v1/chat/completions to verify end-to-end
- Troubleshooting — covers slow image pulls, gallery model download failures behind a default-deny egress policy, CPU OOM for >3B models, and GPU image scheduling when the NVIDIA device plugin is missing
- Rollback — complete helm uninstall plus PVC and namespace cleanup
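For reference, the inline values file from the Commands step might look roughly like the sketch below. The key names here are illustrative assumptions only, not the chart's verified schema — check the go-skynet/local-ai chart's values.yaml before using them.

```yaml
# Sketch of a values file for helm install local-ai.
# NOTE: key names are assumptions for illustration; verify against the
# go-skynet/helm-charts local-ai chart before relying on them.
replicaCount: 1

# Pull a small first-run model from the model gallery (assumed key name).
models:
  - llama-3.2-1b-instruct

# Persist the model-gallery cache across restarts (assumed key names).
persistence:
  enabled: true
  size: 10Gi

# Expose the OpenAI-compatible API inside the cluster.
service:
  type: ClusterIP
  port: 8080
```

Something along these lines would then be passed as helm install local-ai go-skynet/local-ai -f values.yaml (chart path assumed).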
LocalAI is registered as a chat-capable provider in the Console's agent selector — set LOCALAI_URL to the in-cluster Service URL and the dropdown shows LocalAI alongside the CLI agents.
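The in-cluster Service URL for LOCALAI_URL follows the standard Kubernetes DNS form. A minimal sketch, assuming a release and namespace both named local-ai and LocalAI's default port 8080 (adjust all three to match your install):

```shell
# Build the in-cluster URL for the LocalAI Service.
# Assumed values — release name, namespace, and port may differ in your cluster.
SERVICE=local-ai
NAMESPACE=local-ai
PORT=8080
export LOCALAI_URL="http://${SERVICE}.${NAMESPACE}.svc.cluster.local:${PORT}"
echo "$LOCALAI_URL"
```

With LOCALAI_URL exported in the Console's environment, LocalAI appears in the agent-selector dropdown.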
Why we're reaching out
The Console now ships with local-LLM runner integrations covering CPU-first (LocalAI, llama.cpp), GPU (vLLM, RHAIIS), workstation (LM Studio, Ollama), and frontend (Open WebUI) paths so operators in regulated or air-gapped environments have a well-lit path for each deployment profile. LocalAI's "drop-in OpenAI-compatible" framing fits the Console's OpenAI-compatible provider path cleanly.
Install
Local (connects to your current kubeconfig context):
curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/start.sh | bash
Deploy into a cluster:
curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/deploy.sh | bash
Mission definitions are open source — PRs welcome at install-localai.json. Feel free to close if not relevant.