
AI Support

Configure SoulFire's built-in AI integrations, choose compatible providers, and pick sensible models for chat, scripting, and captcha flows.

SoulFire has two separate AI stories: MCP lets an external AI assistant control SoulFire, while this page covers the built-in AI features where SoulFire itself calls an LLM provider.

What AI support exists in SoulFire today

SoulFire currently has one shared AI transport layer (AI Settings) and three user-facing AI integrations built on top of it:

  • AI Settings define the provider base URL, API key, timeout, and retries.
  • LLM Chat lets scripts send prompts to an LLM and use the response in a node graph.
  • AI Chat Bot lets in-game players talk to an AI through chat.
  • Captcha Solver uses an LLM to solve image-based or text-based captcha flows.

That means you do not configure OpenAI, Groq, or Ollama separately for each feature. You usually set the provider once in AI Settings, then choose the right model inside each AI feature.

How the integrations actually work

AI Settings

Open an instance and go to AI Settings. This page controls the shared transport used by the current AI features:

| Setting | What it does |
| --- | --- |
| API Base URL | Which OpenAI-compatible endpoint SoulFire should call |
| API Key | The credential used for that provider |
| API Request Timeout | How long SoulFire waits before a request fails |
| API Max Retries | How many retries SoulFire attempts before surfacing an error |

The current built-in AI client is OpenAI-compatible. That is why providers such as OpenAI, Groq, and Ollama work cleanly, and why gateways such as OpenRouter can also fit.
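As a rough sketch of what "OpenAI-compatible" means in practice: every provider on this page accepts a chat-completions request at `{base URL}/chat/completions` with the same JSON shape. The helper below is illustrative, not SoulFire's actual client code; the base URL and model are just example values.

```python
def build_chat_request(base_url, model, prompt, system_prompt=None):
    """Build the URL and JSON body for an OpenAI-compatible chat completion.

    Any provider that works with SoulFire's AI Settings accepts this
    request shape; only the base URL, key, and model IDs differ.
    """
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    url = base_url.rstrip("/") + "/chat/completions"
    body = {"model": model, "messages": messages}
    return url, body

url, body = build_chat_request("https://api.openai.com/v1", "gpt-4.1-mini", "Hello")
# url == "https://api.openai.com/v1/chat/completions"
```

Swapping providers means changing only the base URL and key; the request body stays the same, which is why one shared AI Settings page is enough.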

LLM Chat in scripts

The LLM Chat node is the most flexible AI integration in SoulFire. Use it when you want AI inside a script graph.

It:

  • uses the bot's current AI Settings
  • accepts a prompt and optional system prompt
  • supports a per-node model override
  • returns success, error, response text, and token usage to the graph

If you leave the model override empty, the current implementation falls back to gpt-4o-mini. That fallback is an implementation detail, not a deliberately chosen modern default, so it is better to set a model explicitly.
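The override behavior amounts to something like this sketch (the gpt-4o-mini fallback matches the current implementation noted above; the function name is illustrative):

```python
DEFAULT_LLM_CHAT_MODEL = "gpt-4o-mini"  # current implementation fallback

def resolve_llm_chat_model(node_override):
    """Return the model the LLM Chat node will actually call.

    An explicit per-node override always wins; an empty override falls
    back to the hardcoded default, which is why setting a model
    explicitly is the safer choice.
    """
    if node_override and node_override.strip():
        return node_override.strip()
    return DEFAULT_LLM_CHAT_MODEL
```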

AI Chat Bot

AI Chat Bot is a built-in plugin for public-facing chat replies. It watches incoming chat, looks for a keyword such as !ai, sends the matching message to your configured model, and posts the answer back in-game.

It also:

  • keeps a short conversation history per connection
  • supports a configurable system prompt
  • can filter the trigger keyword back out of responses
  • truncates responses to fit normal in-game chat better

Use this when you want a server helper or lightweight NPC-style chat flow without building a script graph.
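Conceptually, the plugin's message handling looks like the sketch below. The keyword, truncation length, and function names here are illustrative, not SoulFire's actual code; they just mirror the trigger, filter, and truncation behavior described above.

```python
def extract_prompt(message, keyword="!ai"):
    """Return the prompt to send to the model, or None to ignore the message.

    Only messages that start with the trigger keyword are handled.
    """
    if not message.startswith(keyword):
        return None
    return message[len(keyword):].strip()

def postprocess_reply(response, keyword="!ai", max_len=100):
    """Filter the trigger keyword back out and truncate for in-game chat."""
    cleaned = response.replace(keyword, "").strip()
    return cleaned[:max_len]
```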

Captcha Solver

Captcha Solver is the AI feature with the strictest model requirement. Unlike normal text chat, it needs a model that can understand image input when you use image-based solving.

It currently supports:

  • image-based solving from a map in hand
  • image-based solving from a POV render
  • text-based solving from a regex capture in chat

If you want image-based captcha solving, pick a vision-capable model. The built-in default model is llava, which makes the most sense for Ollama-style local setups.
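Image-based solving ultimately means sending an image content part inside an OpenAI-compatible message, which text-only models reject. A sketch of that message shape (the PNG bytes here are a placeholder, and the helper name is illustrative):

```python
import base64

def build_vision_message(prompt, image_bytes):
    """Build one OpenAI-compatible user message carrying an inline image.

    The image travels as a base64 data URL in an image_url content part;
    this is the part of the payload that requires a vision-capable model.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }
```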

How API keys work in SoulFire

The simplest way to think about AI keys in SoulFire is this:

  • the key lives in AI Settings
  • the current AI features reuse that same provider connection
  • changing the provider there changes what LLM Chat, AI Chat Bot, and Captcha Solver talk to

The corresponding CLI flags are:

  • --ai-api-base-url
  • --ai-api-key
  • --ai-api-request-timeout
  • --ai-api-max-retries

If your provider is local or does not require authentication, the current CLI reference allows the key to be empty.
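The timeout and retry settings behave the way a bounded retry loop does: each attempt is limited by the request timeout, and only after the last retry fails is the error surfaced. A minimal sketch of that semantics (not SoulFire's actual client code):

```python
def call_with_retries(request_fn, max_retries):
    """Invoke request_fn up to 1 + max_retries times.

    Corresponds to --ai-api-max-retries: the first call is not a retry,
    so max_retries=2 means up to three attempts before the error is
    surfaced to the caller.
    """
    last_error = None
    for _ in range(1 + max_retries):
        try:
            return request_fn()
        except Exception as exc:  # a real client would catch transport errors only
            last_error = exc
    raise last_error
```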

Provider quick reference

This is the fastest way to map a provider into SoulFire's current AI settings.

| Provider | Base URL for SoulFire | API key needed? | Where to get the key | Where to see model IDs |
| --- | --- | --- | --- | --- |
| OpenAI | https://api.openai.com/v1 | Yes | OpenAI API keys | OpenAI models |
| Groq | https://api.groq.com/openai/v1 | Yes | Groq keys | Groq models |
| Ollama | http://localhost:11434/v1 | Usually no | None for local Ollama | Ollama library |
| Google AI Studio | https://generativelanguage.googleapis.com/v1beta/openai/ | Yes | Google AI Studio API keys | Gemini models |
| OpenRouter | https://openrouter.ai/api/v1 | Yes | OpenRouter key settings | OpenRouter models |
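OpenAI-compatible endpoints also typically expose a model listing at `{base URL}/models`, which is a quick way to sanity-check a base URL and key before wiring them into SoulFire. Joining the path correctly, including a trailing slash like Google's base URL, looks like this sketch:

```python
def models_endpoint(base_url):
    """Return the model-listing URL for an OpenAI-compatible base URL.

    Strips any trailing slash first so the Google AI Studio base URL
    joins the same way as the others.
    """
    return base_url.rstrip("/") + "/models"

models_endpoint("https://api.groq.com/openai/v1")
# "https://api.groq.com/openai/v1/models"
```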

Which providers make sense

OpenAI

OpenAI is the cleanest default if you want the least setup and the most predictable behavior.

  • Base URL: https://api.openai.com/v1
  • API key: required
  • Best for: general-purpose LLM Chat, AI Chat Bot, and fast setup
  • Good model picks from our AI blog: gpt-4.1-mini as the default, gpt-4.1-nano for a cheaper public-server helper

Our current AI blog recommends the GPT-4.1 family for most public-server assistant use cases and explicitly says GPT-5-class models are overkill for normal server assistants.

How to get the API key

  1. Create or log into your account at platform.openai.com.
  2. Open API keys.
  3. Create a new key and copy it once.
  4. Paste it into AI Settings -> API Key in SoulFire.

How to see model IDs

Open OpenAI models. That page is the authoritative current model catalog for OpenAI model IDs.


Groq

Groq is a good fit when you care about very fast responses and want an OpenAI-compatible hosted API.

  • Base URL: https://api.groq.com/openai/v1
  • API key: required
  • Best for: low-latency text chat and fast script-side LLM Chat
  • Good starting text models: openai/gpt-oss-20b, then openai/gpt-oss-120b if you want a stronger model

For Captcha Solver, do not keep a text-only model such as GPT-OSS selected if you need image input. Use a Groq vision-capable model for that plugin instead, such as meta-llama/llama-4-scout-17b-16e-instruct.

How to get the API key

  1. Create or log into your account at console.groq.com.
  2. Open Groq keys.
  3. Generate a key and copy it.
  4. Paste it into AI Settings -> API Key in SoulFire.

How to see model IDs

Open Groq models. That page lists the model IDs exactly as Groq expects them.


Ollama

Ollama is the best fit when you want local inference, no per-request API bill, and no player chat leaving your machine.

  • Base URL: http://localhost:11434/v1
  • API key: usually blank in SoulFire's current setup
  • Best for: self-hosted chat, privacy-sensitive setups, and cost control
  • Good text models from our AI blog: gpt-oss:20b, qwen3:8b, gemma3:4b
  • Good captcha/vision starting point: llava

If your hardware is modest, start small. If you want the strongest local setup from our current blog guidance, start with gpt-oss:20b.

How to get the API key

For normal local Ollama, you do not need a real provider account or cloud API key. Run the model locally and point SoulFire at your local base URL.

How to see model IDs

Use the Ollama library to browse model names, or run:

ollama list

SoulFire expects the exact model name you pulled locally, such as gpt-oss:20b or llava.


Google AI Studio

Google AI Studio is worth mentioning because Gemini now has an official OpenAI-compatibility path, which fits SoulFire's current AI transport.

  • Base URL: https://generativelanguage.googleapis.com/v1beta/openai/
  • API key: required
  • Best for: teams that want Gemini through SoulFire's current OpenAI-compatible setup
  • Good starting model: gemini-2.5-flash

Google's own compatibility docs say the OpenAI-compatible path works by changing the key and base URL, while also noting that the direct Gemini API gives access to more Gemini-specific features. For SoulFire, the compatibility path is exactly what matters.

How to get the API key

  1. Go to Google AI Studio.
  2. Open the API keys flow described in Using Gemini API keys.
  3. Create or select a project and generate a Gemini API key.
  4. Paste that key into AI Settings -> API Key in SoulFire.

How to see model IDs

Open Gemini models. That page shows the current model IDs and naming patterns.


OpenRouter

OpenRouter is the most useful compatibility gateway to mention because it gives SoulFire one OpenAI-compatible endpoint for many model families.

  • Base URL: https://openrouter.ai/api/v1
  • API key: required
  • Best for: one endpoint to many model families and easy provider switching
  • Best use case in SoulFire: when you want Claude-, Gemini-, OpenAI-, or other providers behind one OpenAI-compatible base URL

This matters because SoulFire's built-in AI transport is OpenAI-compatible, not a separate native Anthropic client. So if you want Claude-style models inside the current built-in AI features, OpenRouter is one of the cleanest routes.

How to get the API key

  1. Create or log into your account at openrouter.ai.
  2. Open key settings.
  3. Create a key and copy it.
  4. Paste it into AI Settings -> API Key in SoulFire.

How to see model IDs

Open OpenRouter models. If you are using OpenRouter, use the exact OpenRouter model ID rather than the upstream provider's native naming.


Which models are good

These recommendations are grounded in SoulFire's own current AI blog post and the current built-in feature surface.

Best default for most users

Use gpt-4.1-mini on OpenAI.

That is the best general default if you want:

  • good quality
  • simple setup
  • stable public-server assistant behavior

Cheapest hosted option

Use gpt-4.1-nano on OpenAI or openai/gpt-oss-20b on Groq.

That is the right direction when you care more about volume and latency than perfect output quality.

Best local/self-hosted option

Use gpt-oss:20b on Ollama.

That is the strongest local recommendation currently reflected in the SoulFire blog content.

Best lightweight local option

Use qwen3:8b or gemma3:4b on Ollama.

These make more sense when you want a lighter local assistant and you do not have enough RAM or GPU headroom for larger models.

Best model type for captcha solving

Use a vision-capable model, not a text-only one.

Practical starting points:

  • llava on Ollama
  • meta-llama/llama-4-scout-17b-16e-instruct on Groq
  • gemini-2.5-flash on Google AI Studio if you want Gemini through the compatibility layer
  • a current multimodal chat model on OpenAI if you want hosted vision

Best for long-form roleplay or NPC personality

SoulFire's AI blog says Claude Sonnet-class models are stronger than GPT for long in-character NPC dialogue. If that is your use case, use an OpenAI-compatible gateway path rather than assuming SoulFire talks to Anthropic's native API directly.

Quick setup examples

OpenAI

  1. Create an account at OpenAI.
  2. Add billing and create a key at API keys.
  3. In AI Settings, keep the default base URL.
  4. Paste the key into API Key.
  5. Set your text features to gpt-4.1-mini.
  6. If you use Captcha Solver, set that plugin to a vision-capable model instead of reusing a text-only model blindly.

Groq

  1. Create an account at Groq.
  2. Generate a key at Groq keys.
  3. Set API Base URL to https://api.groq.com/openai/v1.
  4. Paste the key into API Key.
  5. Start with openai/gpt-oss-20b for text features.
  6. If you use Captcha Solver, switch that plugin's model to a vision-capable Groq model.

Ollama

  1. Install Ollama from ollama.com.
  2. Pull the models you want, for example:
ollama pull gpt-oss:20b
ollama pull llava
  3. Start the local server:
ollama serve
  4. Set API Base URL to http://localhost:11434/v1.
  5. Leave API Key empty unless your local setup expects a placeholder.
  6. Use gpt-oss:20b for chat features and llava for image-based captcha solving.

Google AI Studio

  1. Open Google AI Studio.
  2. Create a Gemini API key with the Gemini API keys guide.
  3. Set API Base URL to https://generativelanguage.googleapis.com/v1beta/openai/.
  4. Paste your Gemini key into AI Settings -> API Key.
  5. Start with gemini-2.5-flash.

OpenRouter

  1. Create an account at OpenRouter.
  2. Generate a key at OpenRouter key settings.
  3. Set API Base URL to https://openrouter.ai/api/v1.
  4. Paste the key into AI Settings -> API Key.
  5. Choose the exact OpenRouter model ID you want from the OpenRouter model catalog.

Where to change models inside SoulFire

The provider connection and the actual model choice are not always the same setting.

| Feature | Where to configure it |
| --- | --- |
| Shared provider URL/key/timeout/retries | AI Settings |
| Script-side LLM calls | LLM Chat node, optionally with a per-node model override |
| In-game AI replies | AI Chat Bot plugin settings |
| Captcha solving | Captcha Solver plugin settings |

That separation is important. For example, you might run everything through Ollama, but still use:

  • gpt-oss:20b for LLM Chat
  • a different model for AI Chat Bot
  • llava for Captcha Solver
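That split can be pictured as one shared connection plus independent per-feature model fields, roughly like this sketch (the config class names are illustrative; the model IDs come from the Ollama example above):

```python
from dataclasses import dataclass

@dataclass
class AISettings:
    """The shared transport: one provider connection for all AI features."""
    base_url: str
    api_key: str = ""

@dataclass
class FeatureConfig:
    """Each feature keeps its own model choice on top of the shared transport."""
    model: str

# One provider connection, three independent model choices.
settings = AISettings(base_url="http://localhost:11434/v1")
llm_chat = FeatureConfig(model="gpt-oss:20b")
chat_bot = FeatureConfig(model="qwen3:8b")
captcha = FeatureConfig(model="llava")
```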

Common mistakes

  • confusing MCP with SoulFire's built-in AI features
  • assuming LLM Chat, AI Chat Bot, and Captcha Solver all want the same model
  • leaving old defaults such as gpt-4o-mini or nemotron-mini in place without choosing a model intentionally
  • using a text-only model for Captcha Solver
  • pointing SoulFire at a native provider API that is not OpenAI-compatible

Further reading

The current model picks above follow the guidance in our own AI blog post.
