Increase Ollama context window to 16K tokens

Ollama's default 4K context window was truncating the HTML before the model could see it. Added num_ctx: 16384 to all four Ollama API calls so the full ~25K-character HTML content fits in context.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
clucraft 2026-01-25 19:35:27 -05:00
parent 57ba90ee25
commit aad5a797b6


@@ -293,6 +293,9 @@ async function extractWithOllama(
       ],
       stream: false,
       think: false, // Disable thinking mode for Qwen3/DeepSeek models
+      options: {
+        num_ctx: 16384, // Increase context window for large HTML content
+      },
     },
     {
       headers: {
@@ -388,6 +391,9 @@ async function verifyWithOllama(
       messages: [{ role: 'user', content: prompt }],
       stream: false,
       think: false, // Disable thinking mode for Qwen3/DeepSeek models
+      options: {
+        num_ctx: 16384, // Increase context window for large HTML content
+      },
     },
     {
       headers: { 'Content-Type': 'application/json' },
@@ -481,6 +487,9 @@ async function verifyStockStatusWithOllama(
      messages: [{ role: 'user', content: prompt }],
       stream: false,
       think: false, // Disable thinking mode for Qwen3/DeepSeek models
+      options: {
+        num_ctx: 16384, // Increase context window for large HTML content
+      },
     },
     {
       headers: { 'Content-Type': 'application/json' },
@@ -937,6 +946,9 @@ async function arbitrateWithOllama(
       messages: [{ role: 'user', content: prompt }],
       stream: false,
       think: false, // Disable thinking mode for Qwen3/DeepSeek models
+      options: {
+        num_ctx: 16384, // Increase context window for large HTML content
+      },
     },
     {
       headers: { 'Content-Type': 'application/json' },
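The shape of the patched request body can be sketched as a small helper. This is a minimal sketch, not the repository's actual code: the helper name `buildOllamaChatRequest` and the model name are placeholders, and only the `options.num_ctx` field mirrors what the commit adds.

```javascript
// Sketch of the request payload sent to Ollama's /api/chat endpoint.
// Per the Ollama API, per-request runtime settings such as num_ctx go
// inside the `options` object, not at the top level of the body.
function buildOllamaChatRequest(prompt) {
  return {
    model: 'qwen3', // hypothetical model name
    messages: [{ role: 'user', content: prompt }],
    stream: false,
    think: false, // disable thinking mode for Qwen3/DeepSeek models
    options: {
      // Raise the context window above Ollama's default so that the
      // full ~25K characters of HTML fit without truncation.
      num_ctx: 16384,
    },
  };
}
```

A caller would then POST this payload, e.g. `axios.post('http://localhost:11434/api/chat', buildOllamaChatRequest(prompt), { headers: { 'Content-Type': 'application/json' } })`. Keeping the payload in one helper would also avoid repeating the `options` block across the four call sites the commit touches.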