# Ai @ Home

Multi-Backend AI tooling
## WTF?

Running an LLM or other AI backend on a home PC (e.g. a gaming PC) is quite possible; sadly, there is rarely enough GPU memory/RAM to run multiple models simultaneously.

The idea is to create a multi-backend deployment that can spawn and destroy an LLM in a Docker container depending on which endpoint receives which request, so multiple backends can be used without actively switching between them.
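One way the on-demand part could be wired up is with Sablier-style Docker labels on each backend container, so the reverse proxy only starts a container when its endpoint is hit. This is a minimal sketch under stated assumptions: the label names follow Sablier's documented convention, but the service layout and group name here are illustrative, not taken from this repo.

```yaml
# Hypothetical docker-compose sketch: an Ollama container that Sablier
# could start on demand and stop when idle. Verify label names and the
# proxy-plugin side against the Sablier and APISIX documentation.
services:
  ollama:
    image: ollama/ollama
    labels:
      - sablier.enable=true   # let Sablier manage this container's lifecycle
      - sablier.group=ollama  # group the proxy plugin requests on incoming traffic
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

With one such group per backend, GPU memory is only claimed by whichever model is actually being queried.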
## Current State

You get Open WebUI and Ollama. Sablier and APISIX are planned to provide the orchestration functionality.
## Prerequisites

Install Docker, CUDA, and the NVIDIA Container Toolkit.
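After installing those, the toolkit still has to be registered as a Docker runtime before containers can see the GPU. A rough smoke test, assuming the commands from the NVIDIA Container Toolkit docs (the CUDA image tag is an example; pick one that matches your driver):

```shell
# Wire the NVIDIA Container Toolkit into Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Smoke test: nvidia-smi inside a container should list your GPU
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```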
## How?

Create a config based on the example in ansible/inventory/example, configure your paths and hostname, and then run:

```shell
cd ansible
ansible-playbook -i inventory/yourinventory deploy-ollama.yml
ansible-playbook -i inventory/yourinventory deploy-openwebui.yml
ansible-playbook -i inventory/yourinventory deploy-ai-at-home.yml
```
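The inventory referenced above might look roughly like this in Ansible's YAML inventory format. All host and variable names below are hypothetical; the authoritative template is ansible/inventory/example.

```yaml
# Hypothetical inventory sketch; real variable names come from
# ansible/inventory/example in this repo.
all:
  hosts:
    gamingpc:
      ansible_host: 192.168.1.50   # hostname/IP of your home PC
      ansible_user: you
      # illustrative path variables; check the shipped example for the
      # names the playbooks actually read
      ollama_data_dir: /srv/ai/ollama
      openwebui_data_dir: /srv/ai/openwebui
```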
This is currently a sloppy work in progress and might or might not be developed further.