Conversation

@hectorem2 (Contributor)

Quick fix in the retrieval example so that at most n_seq_max chunks of
text are added to the llama_batch, so the example stops crashing when it
attempts to process more token sequences in a batch than were allocated by
llama_context::encode and llama_context::decode.

The retrieval example crashes because (according to my understanding of the llama_context code) llama_context::encode allocates room for n_seq_max token sequences, while the loop in the example keeps adding more. I added a condition so that the loop stops adding tokens once s reaches llama_n_seq_max(ctx).
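
For context, a minimal sketch of the kind of guard this describes, not the merged diff itself. It assumes a batching loop shaped like the one in examples/retrieval/retrieval.cpp: `batch_add_seq`, `batch_decode`, and the `chunk` type are that example's own helpers, and `encode_chunks` is a hypothetical wrapper added here for illustration.

```cpp
// Sketch only: mirrors the retrieval example's batching loop with an extra
// n_seq_max guard; batch_add_seq/batch_decode are the example's helpers.
static void encode_chunks(llama_context * ctx, llama_batch & batch,
                          std::vector<chunk> & chunks,
                          int n_batch, float * emb, int n_embd) {
    const int n_seq_max = (int) llama_n_seq_max(ctx);

    int p = 0; // number of chunks already decoded
    int s = 0; // number of sequences in the current batch

    for (size_t k = 0; k < chunks.size(); k++) {
        const auto & inp = chunks[k].tokens;

        // flush when the next chunk would overflow either the token budget
        // (n_batch) or the sequence budget (n_seq_max) of the context
        if (batch.n_tokens + (int) inp.size() > n_batch || s >= n_seq_max) {
            batch_decode(ctx, batch, emb + p * n_embd, s, n_embd);
            common_batch_clear(batch);
            p += s;
            s  = 0;
        }

        batch_add_seq(batch, inp, s); // chunk k goes into sequence id s
        s += 1;
    }

    // flush the final partial batch
    if (s > 0) {
        batch_decode(ctx, batch, emb + p * n_embd, s, n_embd);
    }
}
```

Without the `s >= n_seq_max` check, a prompt set with many short chunks can pack more sequences into one batch than the context reserved, which is the overflow the example was hitting.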

@ggerganov merged commit 0c89864 into ggml-org:master on Dec 29, 2025
68 of 71 checks passed