# AI @ Home
Multi-Backend AI tooling
## WTF?
Running an LLM or another AI backend on a home PC (e.g. a gaming PC) is quite possible; sadly, there is rarely enough GPU memory/RAM to run multiple models simultaneously.
The idea is to create a multi-backend deployment that spawns and destroys an LLM in a Docker container depending on which endpoint receives which request, so multiple backends can be used without actively switching between them.
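The switching logic can be sketched as follows: a dispatcher maps each endpoint to a backend container, stops whatever is currently running, and starts the requested one, so only a single model holds GPU memory at a time. This is a minimal illustration of the idea, not the project's actual implementation (that is what Sablier and APISIX are planned for); the endpoint paths, container names, and start/stop hooks below are assumptions.

```python
# Minimal sketch of one-model-at-a-time backend dispatch.
# Endpoint paths, container names, and the start/stop hooks are
# illustrative assumptions; in this project the switching is planned
# to be handled by Sablier + APISIX, not hand-rolled code.

class BackendDispatcher:
    def __init__(self, routes, start, stop):
        # routes: endpoint path -> container name
        # start/stop: callables that (de)provision a container by name,
        # e.g. thin wrappers around `docker start` / `docker stop`
        self.routes = routes
        self.start = start
        self.stop = stop
        self.active = None  # name of the currently running container

    def handle(self, endpoint):
        """Ensure the container for `endpoint` runs, stopping any other one."""
        target = self.routes[endpoint]
        if self.active == target:
            return target  # already running, nothing to do
        if self.active is not None:
            self.stop(self.active)  # free GPU/RAM before starting the next model
        self.start(target)
        self.active = target
        return target


if __name__ == "__main__":
    log = []
    d = BackendDispatcher(
        routes={"/ollama": "ollama", "/sd": "stable-diffusion"},
        start=lambda name: log.append(("start", name)),
        stop=lambda name: log.append(("stop", name)),
    )
    d.handle("/ollama")
    d.handle("/sd")  # stops ollama first, then starts stable-diffusion
    print(log)  # [('start', 'ollama'), ('stop', 'ollama'), ('start', 'stable-diffusion')]
```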
## Current State
Open WebUI and Ollama are in place. Sablier and APISIX are planned to provide the orchestration functionality.
## Prerequisites
Install Docker, CUDA, and the NVIDIA Container Toolkit.
## How?
Create a config based on the example in `ansible/inventory/example`, set your paths and hostname, and then run:
```shell
cd ansible
ansible-playbook -i inventory/yourinventory deploy-ollama.yml
ansible-playbook -i inventory/yourinventory deploy-openwebui.yml
ansible-playbook -i inventory/yourinventory deploy-ai-at-home.yml
```
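Since the shipped example isn't reproduced here, a hypothetical inventory might look like the following; every hostname, path, and variable name below is an illustrative assumption, so check `ansible/inventory/example` for the real keys.

```yaml
# Hypothetical inventory sketch -- the actual variable names live in
# ansible/inventory/example; everything below is illustrative only.
all:
  hosts:
    gamingpc:
      ansible_host: 192.168.1.50
  vars:
    ollama_data_dir: /srv/ai/ollama
    openwebui_data_dir: /srv/ai/openwebui
```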
This is currently a work in progress and may or may not be developed further.