AI Support
Configure SoulFire's built-in AI integrations, choose compatible providers, and pick sensible models for chat, scripting, and captcha flows.
SoulFire has two separate AI stories: MCP lets an external AI assistant control SoulFire, while this page covers the built-in AI features where SoulFire itself calls an LLM provider.
What AI support exists in SoulFire today
SoulFire currently has one shared AI transport layer and three user-facing AI integrations on top of it:
- AI Settings define the provider base URL, API key, timeout, and retries.
- LLM Chat lets scripts send prompts to an LLM and use the response in a node graph.
- AI Chat Bot lets in-game players talk to an AI through chat.
- Captcha Solver uses an LLM to solve image-based or text-based captcha flows.
That means you do not configure OpenAI, Groq, or Ollama separately for each feature from scratch. You usually set the provider once in AI Settings, then choose the right model inside each AI feature.
How the integrations actually work
AI Settings
Open an instance and go to AI Settings. This page controls the shared transport used by the current AI features:
| Setting | What it does |
|---|---|
| API Base URL | Which OpenAI-compatible endpoint SoulFire should call |
| API Key | The credential used for that provider |
| API Request Timeout | How long SoulFire waits before the request fails |
| API Max Retries | How many retries SoulFire attempts before surfacing an error |
The current built-in AI client is OpenAI-compatible. That is why providers such as OpenAI, Groq, and Ollama work cleanly, and why gateways such as OpenRouter can also fit.
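Because the transport is OpenAI-compatible, every provider on this page accepts the same chat-completions request shape. As a rough sketch (the base URL, model name, and prompt below are placeholders, not SoulFire's actual values), the body such a client sends looks like this:

```shell
# Placeholder values; in SoulFire these come from AI Settings.
BASE_URL="https://api.openai.com/v1"
MODEL="gpt-4.1-mini"

# The minimal JSON body an OpenAI-compatible chat endpoint accepts.
BODY=$(printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}' \
  "$MODEL" "Hello from SoulFire")
echo "$BODY"

# The real call would POST that body (requires a valid key):
#   curl -s "$BASE_URL/chat/completions" \
#     -H "Authorization: Bearer $API_KEY" \
#     -H "Content-Type: application/json" \
#     -d "$BODY"
```

Swapping the base URL to Groq, Ollama, or a gateway changes nothing about this request shape, which is why one shared transport layer can serve all three features.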
LLM Chat in scripts
The LLM Chat node is the most flexible AI integration in SoulFire.
Use it when you want AI inside a script graph.
It:
- uses the bot's current AI Settings
- accepts a prompt and optional system prompt
- supports a per-node model override
- returns success, error, response text, and token usage to the graph
If you leave the model override empty, the current implementation falls back to gpt-4o-mini.
That fallback is an implementation detail, not the best modern default, so it is better to set a model explicitly.
AI Chat Bot
AI Chat Bot is a built-in plugin for public-facing chat replies.
It watches incoming chat, looks for a keyword such as !ai, sends the matching message to your configured model, and posts the answer back in-game.
It also:
- keeps a short conversation history per connection
- supports a configurable system prompt
- can filter the trigger keyword back out of responses
- truncates responses to fit normal in-game chat better
Use this when you want a server helper or lightweight NPC-style chat flow without building a script graph.
Captcha Solver
Captcha Solver is the AI feature with the strictest model requirement.
Unlike normal text chat, it needs a model that can understand image input when you use image-based solving.
It currently supports:
- image-based solving from a map in hand
- image-based solving from a POV render
- text-based solving from a regex capture in chat
If you want image-based captcha solving, pick a vision-capable model.
The built-in default model is llava, which makes the most sense for Ollama-style local setups.
How API keys work in SoulFire
The simplest way to think about AI keys in SoulFire is this:
- the key lives in AI Settings
- the current AI features reuse that same provider connection
- changing the provider there changes what LLM Chat, AI Chat Bot, and Captcha Solver talk to
The corresponding CLI flags are:
- --ai-api-base-url
- --ai-api-key
- --ai-api-request-timeout
- --ai-api-max-retries
If your provider is local or does not require authentication, the current CLI reference allows the key to be empty.
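As a hedged sketch, a launch that sets all four flags for a local Ollama endpoint might look like this (the jar filename is a placeholder, and the timeout and retry values are examples, not documented defaults):

```shell
java -jar soulfire-dedicated.jar \
  --ai-api-base-url "http://localhost:11434/v1" \
  --ai-api-key "" \
  --ai-api-request-timeout 60 \
  --ai-api-max-retries 3
```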
Provider quick reference
This is the fastest way to map a provider into SoulFire's current AI settings.
| Provider | Base URL for SoulFire | API key needed? | Where to get the key | Where to see model IDs |
|---|---|---|---|---|
| OpenAI | https://api.openai.com/v1 | Yes | OpenAI API keys | OpenAI models |
| Groq | https://api.groq.com/openai/v1 | Yes | Groq keys | Groq models |
| Ollama | http://localhost:11434/v1 | Usually no | None for local Ollama | Ollama library |
| Google AI Studio | https://generativelanguage.googleapis.com/v1beta/openai/ | Yes | Google AI Studio API keys | Gemini models |
| OpenRouter | https://openrouter.ai/api/v1 | Yes | OpenRouter key settings | OpenRouter models |
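Before wiring a row from this table into AI Settings, you can sanity-check it from a terminal: the OpenAI-compatible providers above all expose a model listing under the base URL (the values below are placeholders):

```shell
BASE_URL="https://api.groq.com/openai/v1"  # any base URL from the table
API_KEY="your-key-here"                    # can be empty for local Ollama

curl -s -H "Authorization: Bearer $API_KEY" "$BASE_URL/models"
```

If that returns a JSON list of model IDs, the same base URL and key will work in AI Settings.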
Which providers make sense
OpenAI
OpenAI is the cleanest default if you want the least setup and the most predictable behavior.
- Base URL: https://api.openai.com/v1
- API key: required
- Best for: general-purpose LLM Chat, AI Chat Bot, and fast setup
- Good model picks from our AI blog: gpt-4.1-mini as the default, gpt-4.1-nano for a cheaper public-server helper
Our current AI blog recommends the GPT-4.1 family for most public-server assistant use cases and explicitly says GPT-5-class models are overkill for normal server assistants.
How to get the API key
- Create or log into your account at platform.openai.com.
- Open API keys.
- Create a new key and copy it once.
- Paste it into AI Settings -> API Key in SoulFire.
How to see model IDs
Open OpenAI models. That page is the authoritative current model catalog for OpenAI model IDs.
Groq
Groq is a good fit when you care about very fast responses and want an OpenAI-compatible hosted API.
- Base URL: https://api.groq.com/openai/v1
- API key: required
- Best for: low-latency text chat and fast script-side LLM Chat
- Good starting text models: openai/gpt-oss-20b, then openai/gpt-oss-120b if you want a stronger model
For Captcha Solver, do not keep a text-only model such as GPT-OSS selected if you need image input.
Use a Groq vision-capable model for that plugin instead, such as meta-llama/llama-4-scout-17b-16e-instruct.
How to get the API key
- Create or log into your account at console.groq.com.
- Open Groq keys.
- Generate a key and copy it.
- Paste it into AI Settings -> API Key in SoulFire.
How to see model IDs
Open Groq models. That page lists the model IDs exactly as Groq expects them.
Ollama
Ollama is the best fit when you want local inference, no per-request API bill, and no player chat leaving your machine.
- Base URL: http://localhost:11434/v1
- API key: usually blank in SoulFire's current setup
- Best for: self-hosted chat, privacy-sensitive setups, and cost control
- Good text models from our AI blog: gpt-oss:20b, qwen3:8b, gemma3:4b
- Good captcha/vision starting point: llava
If your hardware is modest, start small.
If you want the strongest local setup from our current blog guidance, start with gpt-oss:20b.
How to get the API key
For normal local Ollama, you do not need a real provider account or cloud API key. Run the model locally and point SoulFire at your local base URL.
How to see model IDs
Use the Ollama library to browse model names, or run:
ollama list
SoulFire expects the exact model name you pulled locally, such as gpt-oss:20b or llava.
Google AI Studio
Google AI Studio is worth mentioning because Gemini now has an official OpenAI-compatibility path, which fits SoulFire's current AI transport.
- Base URL: https://generativelanguage.googleapis.com/v1beta/openai/
- API key: required
- Best for: teams that want Gemini through SoulFire's current OpenAI-compatible setup
- Good starting model: gemini-2.5-flash
Google's own compatibility docs say the OpenAI-compatible path works by changing the key and base URL, while also noting that the direct Gemini API gives access to more Gemini-specific features. For SoulFire, the compatibility path is exactly what matters.
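As an illustration of that compatibility path (the key is a placeholder and gemini-2.5-flash is just one valid model ID), the standard chat-completions request shape works against Google's endpoint directly:

```shell
curl -s "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions" \
  -H "Authorization: Bearer $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gemini-2.5-flash", "messages": [{"role": "user", "content": "Hello"}]}'
```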
How to get the API key
- Go to Google AI Studio.
- Open the API keys flow described in Using Gemini API keys.
- Create or select a project and generate a Gemini API key.
- Paste that key into AI Settings -> API Key in SoulFire.
How to see model IDs
Open Gemini models. That page shows the current model IDs and naming patterns.
OpenRouter
OpenRouter is the most useful compatibility gateway to mention because it gives SoulFire one OpenAI-compatible endpoint for many model families.
- Base URL: https://openrouter.ai/api/v1
- API key: required
- Best for: one endpoint to many model families and easy provider switching
- Best use case in SoulFire: when you want Claude, Gemini, OpenAI, or other model families behind one OpenAI-compatible base URL
This matters because SoulFire's built-in AI transport is OpenAI-compatible, not a separate native Anthropic client. So if you want Claude-style models inside the current built-in AI features, OpenRouter is one of the cleanest routes.
How to get the API key
- Create or log into your account at openrouter.ai.
- Open key settings.
- Create a key and copy it.
- Paste it into AI Settings -> API Key in SoulFire.
How to see model IDs
Open OpenRouter models. If you are using OpenRouter, use the exact OpenRouter model ID rather than the upstream provider's native naming.
Which models are good
These recommendations are grounded in SoulFire's own current AI blog post and the current built-in feature surface.
Best default for most users
Use gpt-4.1-mini on OpenAI.
That is the best general default if you want:
- good quality
- simple setup
- stable public-server assistant behavior
Cheapest hosted option
Use gpt-4.1-nano on OpenAI or openai/gpt-oss-20b on Groq.
That is the right direction when you care more about volume and latency than perfect output quality.
Best local/self-hosted option
Use gpt-oss:20b on Ollama.
That is the strongest local recommendation currently reflected in the SoulFire blog content.
Best lightweight local option
Use qwen3:8b or gemma3:4b on Ollama.
These make more sense when you want a lighter local assistant and you do not have enough RAM or GPU headroom for larger models.
Best model type for captcha solving
Use a vision-capable model, not a text-only one.
Practical starting points:
- llava on Ollama
- meta-llama/llama-4-scout-17b-16e-instruct on Groq
- gemini-2.5-flash on Google AI Studio if you want Gemini through the compatibility layer
- a current multimodal chat model on OpenAI if you want hosted vision
Best for long-form roleplay or NPC personality
SoulFire's AI blog says Claude Sonnet-class models are stronger than GPT for long in-character NPC dialogue. If that is your use case, use an OpenAI-compatible gateway path rather than assuming SoulFire talks to Anthropic's native API directly.
Quick setup examples
OpenAI
- Create an account at OpenAI.
- Add billing and create a key at API keys.
- In AI Settings, keep the default base URL.
- Paste the key into API Key.
- Set your text features to gpt-4.1-mini.
- If you use Captcha Solver, set that plugin to a vision-capable model instead of reusing a text-only model blindly.
Groq
- Create an account at Groq.
- Generate a key at Groq keys.
- Set API Base URL to https://api.groq.com/openai/v1.
- Paste the key into API Key.
- Start with openai/gpt-oss-20b for text features.
- If you use Captcha Solver, switch that plugin's model to a vision-capable Groq model.
Ollama
- Install Ollama from ollama.com.
- Pull the models you want, for example:
ollama pull gpt-oss:20b
ollama pull llava
- Start the local server:
ollama serve
- Set API Base URL to http://localhost:11434/v1.
- Leave API Key empty unless your local setup expects a placeholder.
- Use gpt-oss:20b for chat features and llava for image-based captcha solving.
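To confirm the local endpoint works before pointing SoulFire at it, you can list what Ollama exposes through its OpenAI-compatible path (this assumes ollama serve is running on the default port):

```shell
curl -s http://localhost:11434/v1/models
```

If the models you pulled show up in that JSON, the same base URL will work in AI Settings.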
Google AI Studio
- Open Google AI Studio.
- Create a Gemini API key with the Gemini API keys guide.
- Set API Base URL to https://generativelanguage.googleapis.com/v1beta/openai/.
- Paste your Gemini key into AI Settings -> API Key.
- Start with gemini-2.5-flash.
OpenRouter
- Create an account at OpenRouter.
- Generate a key at OpenRouter key settings.
- Set API Base URL to https://openrouter.ai/api/v1.
- Paste the key into AI Settings -> API Key.
- Choose the exact OpenRouter model ID you want from the OpenRouter model catalog.
Where to change models inside SoulFire
The provider connection and the actual model choice are not always the same setting.
| Feature | Where to configure it |
|---|---|
| Shared provider URL/key/timeout/retries | AI Settings |
| Script-side LLM calls | LLM Chat node, optionally with a per-node model override |
| In-game AI replies | AI Chat Bot plugin settings |
| Captcha solving | Captcha Solver plugin settings |
That separation is important. For example, you might run everything through Ollama, but still use:
- gpt-oss:20b for LLM Chat
- a different model for AI Chat Bot
- llava for Captcha Solver
Common mistakes
- confusing MCP with SoulFire's built-in AI features
- assuming LLM Chat, AI Chat Bot, and Captcha Solver all want the same model
- leaving old defaults such as gpt-4o-mini or nemotron-mini in place without choosing a model intentionally
- using a text-only model for Captcha Solver
- pointing SoulFire at a native provider API that is not OpenAI-compatible
Related pages
Scripting
Use LLM Chat inside visual automation graphs.
Node Reference
See the current LLM Chat node and other expensive nodes.
Built-in Plugins
See ai-chat-bot and ai-captcha-solver in the built-in plugin list.
CLI Flags
Configure AI provider settings from the CLI.
MCP (AI Integration)
Connect ChatGPT, Claude, or other assistants to SoulFire itself.
Further reading
The current model picks above follow the guidance in our own AI blog post.