Questions about modeling_llama2 #24

Open
lbc12345 opened this issue May 4, 2024 · 0 comments
lbc12345 commented May 4, 2024

Hi!
Thanks for your brilliant work! However, when I try to use Q-Align and Llama3 together in one Python file, I find that the following code in the Q-Align script "modeling_llama2.py" monkey-patches the transformers library code and causes conflicts with the Llama3 weight loading and inference process.

def replace_llama_modality_adaptive():
    # Overwrites the stock Llama classes and forward methods at the transformers
    # module level, so every Llama model created afterwards in the same process
    # picks up the modified implementations.
    transformers.models.llama.configuration_llama.LlamaConfig = LlamaConfig
    transformers.models.llama.modeling_llama.LlamaAttention = LlamaAttention
    transformers.models.llama.modeling_llama.LlamaFlashAttention2 = LlamaFlashAttention2
    transformers.models.llama.modeling_llama.LlamaSdpaAttention = LlamaSdpaAttention
    transformers.models.llama.modeling_llama.LlamaDecoderLayer = LlamaDecoderLayer
    transformers.models.llama.modeling_llama.LlamaModel.forward = model_forward
    transformers.models.llama.modeling_llama.LlamaForCausalLM.forward = causal_model_forward
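For context, here is a minimal sketch of how the conflict shows up when both models live in one process (the import path of replace_llama_modality_adaptive is an assumption; adjust it to wherever modeling_llama2.py actually lives):

import transformers
# Hypothetical import path for illustration only.
from q_align.model.modeling_llama2 import replace_llama_modality_adaptive

# Keep a reference to the stock attention class before patching.
original_attention = transformers.models.llama.modeling_llama.LlamaAttention

replace_llama_modality_adaptive()

# The patch is process-wide: the stock class has been replaced for everyone.
assert transformers.models.llama.modeling_llama.LlamaAttention is not original_attention

# A plain Llama3 checkpoint loaded afterwards is built from the patched
# classes and forward methods, which is where the conflict appears.
llama3 = transformers.AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")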

Could you please refactor these scripts so that they do not modify the transformers code directly?
Thanks!
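One possible direction, offered only as a sketch rather than a definitive fix: wrap the existing patch in a context manager that saves and restores the original transformers attributes, so the modified classes are visible only while Q-Align code runs. The class names below are the ones defined in modeling_llama2.py; the helper name is hypothetical.

import contextlib
import transformers

@contextlib.contextmanager
def llama_modality_adaptive_patch():
    """Apply the Q-Align Llama patch only inside a `with` block (sketch).

    Assumes replace_llama_modality_adaptive and the patched classes from
    modeling_llama2.py are importable in this scope.
    """
    llama_mod = transformers.models.llama.modeling_llama
    saved = {
        "config": transformers.models.llama.configuration_llama.LlamaConfig,
        "attention": llama_mod.LlamaAttention,
        "flash_attention": llama_mod.LlamaFlashAttention2,
        "sdpa_attention": llama_mod.LlamaSdpaAttention,
        "decoder_layer": llama_mod.LlamaDecoderLayer,
        "model_forward": llama_mod.LlamaModel.forward,
        "causal_forward": llama_mod.LlamaForCausalLM.forward,
    }
    try:
        replace_llama_modality_adaptive()  # the existing Q-Align patch
        yield
    finally:
        # Restore the stock transformers classes so Llama3 loading and
        # inference later in the same process are unaffected.
        transformers.models.llama.configuration_llama.LlamaConfig = saved["config"]
        llama_mod.LlamaAttention = saved["attention"]
        llama_mod.LlamaFlashAttention2 = saved["flash_attention"]
        llama_mod.LlamaSdpaAttention = saved["sdpa_attention"]
        llama_mod.LlamaDecoderLayer = saved["decoder_layer"]
        llama_mod.LlamaModel.forward = saved["model_forward"]
        llama_mod.LlamaForCausalLM.forward = saved["causal_forward"]

One caveat with this sketch: a Q-Align model would have to be built and used inside the `with` block, since the patched forward methods are restored on exit.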
