LocalAI
How to get started?
How to use the OpenAI-compatible API?
How to list models to download?
curl http://localhost:8080/models/available | jq
See the 🖼️ Model gallery page in the LocalAI documentation for details.
How do I list currently running models?
curl http://localhost:8080/models
Or, with the server address in a variable (adjust the port to match your instance):
LOCALAI=http://localhost:9095
curl $LOCALAI/models
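The same listing can be done from Python with only the standard library. A minimal sketch; the base URL and port (9095) mirror the curl examples above and may differ on your install.

```python
# Minimal sketch: GET the /models endpoint of a running LocalAI instance.
# Standard library only; the server address is an assumption.
import json
from urllib.request import urlopen

def models_url(base_url: str) -> str:
    """Build the /models endpoint URL from a base address."""
    return base_url.rstrip("/") + "/models"

def list_models(base_url: str) -> dict:
    """GET /models and return the parsed JSON response."""
    with urlopen(models_url(base_url)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(list_models("http://localhost:9095"))
```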
Example Queries
LOCALAI=http://localhost:9095
curl --location $LOCALAI/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "lunademo",
"messages": [{"role": "user", "content": "How are you?"}],
"temperature": 0.9
}' | jq
LOCALAI=http://localhost:9095
curl --location $LOCALAI/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "lunademo",
"messages": [{"role": "user", "content": "What is the role of project management?"}],
"temperature": 0.9
}'
LOCALAI=http://localhost:9095
curl --location $LOCALAI/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "lunademo",
"messages": [{"role": "user", "content": "Weather in Toronto?"}],
"temperature": 0.9
}'
LOCALAI=http://localhost:9095
curl --location $LOCALAI/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "lunademo",
"messages": [{"role": "user", "content": "What is the most important invention of all time?"}],
"temperature": 0.9
}' | jq
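All four curl calls above POST the same JSON shape to /v1/chat/completions, varying only the prompt. A standard-library Python sketch of the same request; the model name "lunademo" and port 9095 are taken from the curl examples and are assumptions about your setup.

```python
# Sketch: send a chat completion request to a LocalAI instance,
# mirroring the JSON body used by the curl examples.
import json
from urllib.request import Request, urlopen

def chat_body(model: str, prompt: str, temperature: float = 0.9) -> dict:
    """Build the request body used by the curl examples."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def chat(base_url: str, model: str, prompt: str) -> dict:
    """POST the body and return the parsed completion response."""
    req = Request(
        base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(chat_body(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    reply = chat("http://localhost:9095", "lunademo", "How are you?")
    print(reply["choices"][0]["message"]["content"])
```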
How to download and run an additional model?
LOCALAI=http://localhost:9095
curl $LOCALAI/models/apply -H "Content-Type: application/json" -d '{
"id": "model-gallery@bert-embeddings"
}'
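Installing a gallery model is a POST to /models/apply with the model's gallery id, as in the curl call above. A standard-library sketch; the id matches the curl example, the port is an assumption, and note the download runs asynchronously (the response describes a job you can check on, per the LocalAI docs).

```python
# Sketch: ask a LocalAI instance to download and install a gallery model.
import json
from urllib.request import Request, urlopen

def apply_body(model_id: str) -> dict:
    """Body for /models/apply: just the gallery id."""
    return {"id": model_id}

def apply_model(base_url: str, model_id: str) -> dict:
    """POST /models/apply and return the parsed response."""
    req = Request(
        base_url.rstrip("/") + "/models/apply",
        data=json.dumps(apply_body(model_id)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(apply_model("http://localhost:9095", "model-gallery@bert-embeddings"))
```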
How to use the Embeddings API?
curl http://localhost:8080/embeddings -X POST -H "Content-Type: application/json" -d '{
"input": "Your text string goes here",
"model": "bert-embeddings"
}' | jq "."
Sources
- go-skynet/LocalAI: 🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware; no GPU required. Runs ggml, gguf, GPTQ, ONNX, and TF-compatible models: llama, llama2, rwkv, whisper, vicuna, koala, cerebras, falcon, dolly, starcoder, and many others
- LocalAI :: LocalAI documentation