llama_kv_cache_tokens_rm functioning on index and not position #3840

Closed
MrJackSpade opened this issue Oct 28, 2023 · 5 comments · Fixed by #3843
Labels
bug Something isn't working

Comments

@MrJackSpade

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [Y] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [Y] I carefully followed the README.md.
  • [Y] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [Y] I reviewed the Discussions, and have a new bug or useful enhancement to share.

This bug is related to the issue described in #3825, specifically this comment: #3825 (comment)

llama_eval calls llama_kv_cache_tokens_rm using n_past in an attempt to truncate all cache data beyond the point being currently evaluated.

llama_kv_cache_tokens_rm executes this cache clear by iterating over all cells of the KV cache; however, instead of honoring the pos property of each cell, it clears cells based on their index within the cache:

static void llama_kv_cache_tokens_rm(struct llama_kv_cache & cache, int32_t c0, int32_t c1) {
    if (c0 < 0) c0 = 0;
    if (c1 < 0) c1 = cache.size;

    for (int32_t i = c0; i < c1; ++i) {
        cache.cells[i].pos = -1;
        cache.cells[i].delta = 0;
        cache.cells[i].seq_id.clear();
    }

    // Searching for a free slot can start here since we know it will be empty.
    cache.head = uint32_t(c0);
}

This functionality is not compatible with the new "ring cache" design. Given the following cache values:

idx:  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
pos: -1 -1 -1 -1 -1 -1 -1  0  1  2  3  4  5  6  7  8

Calling llama_eval with an n_past of 8 in this scenario will clear nearly all of the data from the KV cache (cells 8–15, which hold positions 1–8), when it should be expected to retain the data for positions 0–7.
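
To make the mismatch concrete, here is a small standalone sketch (not llama.cpp code; Cell and the two loops are simplified stand-ins) that models the cache layout above and compares what index-based clearing removes against what position-based clearing would remove for the same n_past of 8:

#include <cstdio>
#include <vector>

struct Cell { int pos; };

int main() {
    // mirror the example above: cells 0..6 are free (pos == -1), cells 7..15 hold positions 0..8
    std::vector<Cell> cells(16, Cell{-1});
    for (int i = 7; i < 16; ++i) cells[i].pos = i - 7;

    const int n_past = 8;

    // index-based clearing (what llama_kv_cache_tokens_rm does today):
    // wipes cells 8..15, which actually hold positions 1..8
    int cleared_by_index = 0;
    for (int i = n_past; i < (int) cells.size(); ++i) {
        if (cells[i].pos >= 0) ++cleared_by_index;
    }

    // position-based clearing (what the caller intends):
    // only cells with pos >= n_past, leaving positions 0..7 untouched
    int cleared_by_pos = 0;
    for (const Cell & c : cells) {
        if (c.pos >= n_past) ++cleared_by_pos;
    }

    printf("index-based clears %d occupied cells, position-based clears %d\n",
           cleared_by_index, cleared_by_pos); // prints: 8 vs 1
    return 0;
}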

I'm not sure if the proper method to resolve this would be to modify llama_eval to point to the new llama_kv_cache_seq_rm method with a seq_id of zero, or if another adjustment should be made to resolve this issue.
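
For reference, that suggested llama_eval change would look roughly like this (a sketch only, assuming the internal llama_kv_cache_seq_rm(cache, seq_id, p0, p1) helper; untested):

    // instead of clearing cells by index:
    //     llama_kv_cache_tokens_rm(ctx->kv_self, n_past, -1);
    // clear by position for the (assumed) default sequence 0:
    llama_kv_cache_seq_rm(ctx->kv_self, 0, n_past, -1);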

@MrJackSpade
Author

Looking again, it looks like the intent of llama_kv_cache_tokens_rm may be different from llama_kv_cache_seq_rm, since the former clears all sequences and not just an assumed "default of zero". So maybe the method should be adjusted instead?

@KerfuffleV2
Collaborator

This does look pretty weird. There are only a few places that don't call it with -1, -1 (nuke everything):

The main example:

    // remove any "future" tokens that we might have inherited from the previous session
    llama_kv_cache_tokens_rm(ctx, n_matching_session_tokens, -1);

But since it doesn't operate on the token position, talking about "future" tokens makes no sense, and this wouldn't function properly as far as I can see.

llama_eval in llama.cpp (the same call appears in two places):

    llama_kv_cache_tokens_rm(ctx->kv_self, n_past, -1);

which would be broken for the same reason. (I think those functions are deprecated currently.)

Seems like all the calls to llama_kv_cache_tokens_rm are actually looking for a variant of llama_kv_cache_seq_rm where the sequence doesn't matter.

@MrJackSpade
Author

I fixed this locally by basically copying and pasting the llama_kv_cache_seq_rm function without the sequence handling, and it seems to be doing the job. I was hesitant to PR anything, though, since I haven't worked much within the code; most of my work has been leveraging the API.

static void llama_kv_cache_tokens_rm(struct llama_kv_cache & cache, llama_pos p0, int32_t p1) {
    uint32_t new_head = cache.size;

    if (p0 < 0) p0 = 0;
    if (p1 < 0) p1 = cache.size;

    for (uint32_t i = 0; i < cache.size; ++i)
    {
        if (cache.cells[i].pos >= p0 && cache.cells[i].pos < p1)
        {
            cache.cells[i].seq_id.clear();
            cache.cells[i].pos = -1;
            cache.cells[i].delta = 0;
            if (new_head == cache.size) new_head = i;
        }
    }

    // If we freed up a slot, set head to it so searching can start there.
    if (new_head != cache.size) cache.head = new_head;
}

Modifying the code as above seems to have resolved my last issue integrating with the new API. All of the tokens are now present and accounted for.

I expect that, since llama_eval is being deprecated in favor of directly calling llama_decode, this really represents a kind of "transitional edge case".

@KerfuffleV2
Collaborator

KerfuffleV2 commented Oct 29, 2023

Yeah, pretty much the same as what I did in #3843 here. (Just extends the existing function to allow a "seq id doesn't matter" value.)
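
For readers following along, that extension presumably looks something like the sketch below (an assumed shape, not the literal #3843 diff; it relies on the has_seq_id helper of llama_kv_cell and on <limits> being available in llama.cpp): a negative seq_id matches every cell in the position range.

static void llama_kv_cache_seq_rm(struct llama_kv_cache & cache, llama_seq_id seq_id, llama_pos p0, llama_pos p1) {
    uint32_t new_head = cache.size;

    if (p0 < 0) p0 = 0;
    if (p1 < 0) p1 = std::numeric_limits<llama_pos>::max();

    for (uint32_t i = 0; i < cache.size; ++i) {
        if (cache.cells[i].pos >= p0 && cache.cells[i].pos < p1) {
            if (seq_id < 0) {
                // "seq id doesn't matter": detach the cell from every sequence
                cache.cells[i].seq_id.clear();
            } else if (cache.cells[i].has_seq_id(seq_id)) {
                // otherwise detach only the requested sequence
                cache.cells[i].seq_id.erase(seq_id);
            } else {
                continue;
            }
            if (cache.cells[i].seq_id.empty()) {
                cache.cells[i].pos   = -1;
                cache.cells[i].delta = 0;
                if (new_head == cache.size) new_head = i;
            }
        }
    }

    // If we freed up a slot, set head to it so searching can start there.
    if (new_head != cache.size) cache.head = new_head;
}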

this really represents a kind of "transitional edge case"

In those cases, yeah. main also tries to use that in the session loading code. Maybe it doesn't matter since presumably the cache is usually empty when the session is loaded.

@ggerganov
Owner

I expect that, since llama_eval is being deprecated in favor of directly calling llama_decode, this really represents a kind of "transitional edge case".

Yes, now I understand. llama_eval works correctly only for the old-style generation where we never shift the KV cache and only generate a single sequence. After the #3228 rework, this call was deprecated and will not function correctly in your case.

Technically, llama_kv_cache_tokens_rm functions correctly, but we will remove it as discussed in #3843.

KerfuffleV2 added the bug (Something isn't working) label and removed the bug-unconfirmed label on Oct 29, 2023