Guided LocalAI install mission in KubeStellar Console #9372

@clubanderson

Description

We built a guided install mission for LocalAI inside KubeStellar Console, a standalone Kubernetes dashboard (unrelated to legacy kubestellar/kubestellar, kubeflex, or OCM — zero shared code).

Open the LocalAI install mission

What the mission does

The mission installs LocalAI via the official Helm chart at https://go-skynet.github.io/helm-charts/ with a persistent volume for the model-gallery cache and a ClusterIP Service on the OpenAI-compatible /v1/chat/completions endpoint. The mission proceeds in five steps:

  1. Pre-flight — verifies Helm 3, default StorageClass, and GPU device plugin availability when the GPU image variant is selected
  2. Commands — `helm repo add go-skynet https://go-skynet.github.io/helm-charts/`, then `helm install local-ai` with an inline values file pulling a gallery model (`llama-3.2-1b-instruct` as the default first-run model)
  3. Validation — waits for Deployment rollout, confirms the models PVC is bound, then hits /v1/models and /v1/chat/completions to verify end-to-end
  4. Troubleshooting — covers slow image pulls, gallery model download failures behind a default-deny egress policy, CPU OOM for >3B models, and GPU image scheduling when the NVIDIA device plugin is missing
  5. Rollback — complete helm uninstall + PVC + namespace cleanup
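The command, validation, and rollback steps above can be sketched as a script. The release name, namespace, values-file keys, and gallery model reference below are illustrative assumptions about chart defaults, not the mission's exact output — check the chart's values.yaml for the real field names:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumed release name and namespace for illustration.
NAMESPACE=local-ai
RELEASE=local-ai

# Step 2: add the official chart repo and install with an inline values file.
helm repo add go-skynet https://go-skynet.github.io/helm-charts/
helm repo update

cat > values.yaml <<'EOF'
# Illustrative values (field names are assumptions — verify against the chart):
# pull a small gallery model on first run and persist the model cache on a PVC.
models:
  list:
    - url: "llama-3.2-1b-instruct"   # assumed gallery reference
persistence:
  models:
    enabled: true
    size: 10Gi
EOF

helm install "$RELEASE" go-skynet/local-ai \
  --namespace "$NAMESPACE" --create-namespace \
  -f values.yaml

# Step 3: wait for rollout and PVC binding, then probe the API end-to-end.
kubectl -n "$NAMESPACE" rollout status deployment/"$RELEASE" --timeout=10m
kubectl -n "$NAMESPACE" get pvc
kubectl -n "$NAMESPACE" port-forward "svc/$RELEASE" 8080:8080 &
sleep 2
curl -s http://localhost:8080/v1/models

# Step 5: rollback (commented out so the sketch is non-destructive).
# helm uninstall "$RELEASE" -n "$NAMESPACE"
# kubectl -n "$NAMESPACE" delete pvc --all
# kubectl delete namespace "$NAMESPACE"
```

The port-forward is only for ad-hoc validation from a workstation; in-cluster callers would hit the ClusterIP Service directly.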

LocalAI is registered as a chat-capable provider in the Console's agent selector — set LOCALAI_URL to the in-cluster Service URL and the dropdown shows LocalAI alongside the CLI agents.
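Wiring LocalAI into the agent selector might look like the following. The Service DNS name, port, and model name are assumptions based on a default install, not values taken from the Console itself:

```shell
# Assumed in-cluster Service URL; adjust name, namespace, and port to your install.
export LOCALAI_URL="http://local-ai.local-ai.svc.cluster.local:8080"

# Smoke-test the OpenAI-compatible endpoint the Console will call.
curl -s "$LOCALAI_URL/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama-3.2-1b-instruct",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }'
```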

Why we're reaching out

The Console now ships with local-LLM runner integrations covering CPU-first (LocalAI, llama.cpp), GPU (vLLM, RHAIIS), workstation (LM Studio, Ollama), and frontend (Open WebUI) paths so operators in regulated or air-gapped environments have a well-lit path for each deployment profile. LocalAI's "drop-in OpenAI-compatible" framing fits the Console's OpenAI-compatible provider path cleanly.

Install

Local (connects to your current kubeconfig context):

curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/start.sh | bash

Deploy into a cluster:

curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/deploy.sh | bash

Mission definitions are open source — PRs welcome at install-localai.json. Feel free to close if not relevant.
