
Ai @ Home

Multi-Backend AI tooling

WTF?

Running an LLM or other AI backend on a home PC (e.g. a gaming PC) is quite feasible; unfortunately, there is rarely enough GPU memory or RAM to run multiple models simultaneously.

The idea is to create a multi-backend deployment that spawns and destroys an LLM in a Docker container depending on which endpoint receives a request, so that multiple backends can be used without actively switching between them.

Current State

You get Open WebUI and Ollama. Sablier and APISIX are planned to provide the orchestration functionality.
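As a rough sketch of how the planned orchestration could look: Sablier discovers containers via Docker labels and starts/stops them on demand when the reverse proxy routes a request to them. The `sablier.enable` and `sablier.group` labels follow Sablier's documented conventions; the service layout below is hypothetical and not part of this repo yet.

```yaml
# Hypothetical compose fragment: Sablier manages labeled containers,
# starting them when a request arrives and stopping them when idle.
services:
  ollama:
    image: ollama/ollama
    labels:
      - sablier.enable=true   # let Sablier manage this container
      - sablier.group=ollama  # group name the proxy route refers to
```

The proxy (APISIX, in this plan) would then map each API endpoint to a Sablier group, so only the backend actually being used occupies GPU memory.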

Prerequisites

Install Docker, CUDA and the NVIDIA Container Toolkit.
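A quick way to confirm the prerequisites are on the host is to check for the relevant binaries. This is just a convenience sketch; the tool names are the standard ones (`docker`, `nvidia-smi` from the CUDA driver stack, `nvidia-ctk` from the NVIDIA Container Toolkit).

```shell
#!/usr/bin/env sh
# Check that the required tooling is installed and on PATH.
for tool in docker nvidia-smi nvidia-ctk; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```

If everything is in place, `docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi` should print your GPU.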

How?

Create a config based on the example in ansible/inventory/example, configure your paths and hostname, and then run

cd ansible
ansible-playbook -i inventory/yourinventory deploy-ollama.yml
ansible-playbook -i inventory/yourinventory deploy-openwebui.yml
ansible-playbook -i inventory/yourinventory deploy-ai-at-home.yml
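An inventory for these playbooks could look roughly like the following. The group name, host, and variable names here are illustrative assumptions, not the repo's actual schema; check ansible/inventory/example for the real keys.

```ini
; Hypothetical inventory layout, modeled on a standard Ansible INI inventory.
[aihost]
gamingpc ansible_host=192.168.1.50 ansible_user=youruser

[aihost:vars]
; Paths are placeholders -- set them to wherever models/data should live.
ollama_data_dir=/opt/ai/ollama
openwebui_data_dir=/opt/ai/openwebui
```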

This is currently a sloppy work in progress and may or may not be developed further.

Considerations:

vLLM and llama.cpp can only serve one model per invocation, so the structure differs from Ollama.
This requires multiple compose files for the different invocations and configurations instead of a single one.
It also requires changes in the roles, and so on.
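Since a vLLM invocation serves exactly one model, the per-model compose files could be generated by a small helper instead of being maintained by hand. The sketch below is illustrative: `vllm/vllm-openai` is the official vLLM image (its API listens on port 8000 inside the container), but the file naming and port scheme are assumptions, not part of this repo.

```shell
#!/usr/bin/env sh
# Generate a compose file for one vLLM model (hypothetical helper).
MODEL="${1:-mistral-7b}"   # model name, also used in file/service names
PORT="${2:-8001}"          # host port to expose this model on
OUT="compose-vllm-${MODEL}.yml"

cat > "$OUT" <<EOF
services:
  vllm-${MODEL}:
    image: vllm/vllm-openai:latest
    command: ["--model", "${MODEL}"]
    ports:
      - "${PORT}:8000"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
EOF
echo "wrote $OUT"
```

Running it once per model yields one compose file per invocation, which an Ansible role could then template and deploy individually.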

Description
AI Deployment for Homelabs