7 Commits

Author SHA1 Message Date
wb
39250e6582 fix(diarization-ui): raise default num_predict to 16384
Thinking tokens count against num_predict. At 4096 the model was
running out of budget mid-response after spending ~3000 tokens on
thinking; 16384 gives enough headroom for thinking plus the full
response.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 16:26:55 +02:00
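
A minimal sketch of the budget change described in the commit above, assuming
the Ollama options are assembled in Python; the helper name and env fallback
are illustrative, not taken from the repo:

```python
import os

# Thinking tokens count against num_predict, so the budget must cover
# both the thinking phase (~3000 tokens observed) and the response.
DEFAULT_NUM_PREDICT = 16384  # raised from 4096

def num_predict_budget() -> int:
    # Hypothetical helper; OLLAMA_NUM_PREDICT was added in an earlier commit.
    return int(os.environ.get("OLLAMA_NUM_PREDICT", DEFAULT_NUM_PREDICT))
```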
wb
831a29a23a fix(diarization-ui): enable thinking by default (OLLAMA_THINK=true)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 16:12:25 +02:00
wb
f5756f38af fix(diarization-ui): validate non-empty content before LLM call, add OLLAMA_THINK flag
Empty documents caused the model to spin in its thinking loop and
waste all tokens; the service now raises a clear job error before the
Ollama call. Also adds an OLLAMA_THINK env var (default false) to
control whether the model uses extended thinking: disabling it avoids
runaway thinking loops on ambiguous inputs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 16:10:01 +02:00
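
A sketch of the two guards this commit describes, assuming a Python service;
`env_flag`, `require_nonempty`, and the error type are hypothetical stand-ins
for whatever the repo actually uses:

```python
import os

def env_flag(name: str, default: str = "false") -> bool:
    # Hypothetical helper for boolean env vars such as OLLAMA_THINK.
    return os.environ.get(name, default).strip().lower() in ("1", "true", "yes")

# Default false in this commit; the follow-up commit flips it to true.
# Presumably forwarded as the "think" field of the Ollama request.
OLLAMA_THINK = env_flag("OLLAMA_THINK", "false")

def require_nonempty(text: str) -> str:
    # Fail the job before the Ollama call: empty input made the model
    # spin in its thinking loop and burn the whole token budget.
    if not text.strip():
        raise ValueError("diarization job error: document content is empty")
    return text
```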
wb
aae53d91b1 fix(diarization-ui): prevent repetition loops in Ollama generation
Adds repeat_penalty=1.15 and repeat_last_n=128 to suppress token
repetition loops (e.g. "tragen" -> "tragen" -> ...). Also caps output
via num_predict (default 4096, configurable via OLLAMA_NUM_PREDICT env
var) as a hard stop in case the model still gets stuck.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 16:04:12 +02:00
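
A minimal sketch of a generation call carrying these options, assuming the
backend talks to Ollama's REST /api/generate endpoint via requests; the URL
default and function name are assumptions:

```python
import os
import requests

OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")  # hypothetical

def ollama_generate(model: str, prompt: str) -> str:
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False,
            "options": {
                # Penalize tokens seen in the last 128 positions to break
                # repetition loops ("tragen" -> "tragen" -> ...).
                "repeat_penalty": 1.15,
                "repeat_last_n": 128,
                # Hard cap on generated tokens in case it still gets stuck.
                "num_predict": int(os.environ.get("OLLAMA_NUM_PREDICT", "4096")),
            },
        },
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```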
25793c022c feat(diarization-ui): add FastAPI UI backend with sqlite storage and Ollama analysis pipeline 2026-03-21 13:47:36 +01:00
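
A skeleton of the shape this commit names (FastAPI app, sqlite job store,
Ollama analysis step downstream); every identifier below is an assumption,
not the repo's actual API:

```python
import sqlite3
from fastapi import FastAPI, HTTPException

app = FastAPI()
db = sqlite3.connect("diarization.db", check_same_thread=False)
db.execute(
    "CREATE TABLE IF NOT EXISTS jobs (id INTEGER PRIMARY KEY, status TEXT, result TEXT)"
)

@app.post("/jobs")
def create_job() -> dict:
    # Store the job; a worker would later run the Ollama analysis
    # pipeline and write its result back into this row.
    cur = db.execute("INSERT INTO jobs (status, result) VALUES ('pending', NULL)")
    db.commit()
    return {"id": cur.lastrowid, "status": "pending"}

@app.get("/jobs/{job_id}")
def get_job(job_id: int) -> dict:
    row = db.execute(
        "SELECT id, status, result FROM jobs WHERE id = ?", (job_id,)
    ).fetchone()
    if row is None:
        raise HTTPException(status_code=404, detail="job not found")
    return {"id": row[0], "status": row[1], "result": row[2]}
```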
85a36a0e5e chore(diarization-ui): default API_BASE to gx10 and use fully qualified nginx image for podman 2026-03-21 13:24:11 +01:00
66d681279e feat: split diarization UI into separate project folder 2026-03-21 13:14:54 +01:00