fix(diarization-ui): validate non-empty content before LLM call, add OLLAMA_THINK flag
Empty documents caused the model to spin in its thinking loop and waste all tokens. Now raises a clear job error before the Ollama call.

Also adds an OLLAMA_THINK env var (default false) to control whether the model uses extended thinking; disabling it avoids runaway thinking loops on ambiguous inputs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@@ -2,3 +2,4 @@ API_BASE=http://gx10.aquantico.lan:8093
 OLLAMA_BASE_URL=http://gx10.aquantico.lan:11434
 OLLAMA_MODEL=qwen3.5:9b
 OLLAMA_NUM_PREDICT=4096
+OLLAMA_THINK=false