Describe the feature
I need to load a custom INT8-quantized model to evaluate its mathematical reasoning capability.
At the moment, all I would need to do is change the HF model-loading code
model = Qwen2ForCausalLM.from_pretrained('/home/maoshizhuo/2025/deepseek-Qwen-1.5B', torch_dtype=torch.bfloat16, device_map='auto')
to
model = Int8Qwen2ForCausalLM.from_pretrained('/home/maoshizhuo/smoothquant/int8_models/deepseek-Qwen-1.5B-smoothquant_ds_Qwen2_1.5B_2048', torch_dtype=torch.float16, device_map='auto')
load_parameters(model, './models/model_params_ds_qwen2_1.5B.pth')
and inference could begin.
However, I have searched through every model under opencompass/models and none of them appear to load models via HuggingFace's transformers library, even though the user manual documents this and --hf-path can load HF models. Where is the call to model = Qwen2ForCausalLM.from_pretrained or AutoModelForCausalLM?
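For reference, the pattern I am hoping to use is to subclass the framework's HuggingFace model wrapper and override its loading hook. Below is a minimal, hypothetical sketch of that pattern: the base class is a stand-in for OpenCompass's actual wrapper (class and method names here are assumptions, not the real OpenCompass API), and the real loader calls are shown only as comments:

```python
# Hypothetical sketch: swap a custom INT8 loader into an evaluation
# framework by subclassing its HuggingFace wrapper and overriding the
# method that constructs the model. The base class below is a stand-in,
# NOT the actual OpenCompass class.

class HuggingFaceCausalLM:
    """Stand-in for the framework's HF wrapper class."""

    def __init__(self, path, **model_kwargs):
        # The wrapper builds the model once at init time via a hook.
        self.model = self._load_model(path, model_kwargs)

    def _load_model(self, path, model_kwargs):
        # The real wrapper would do something like:
        #   AutoModelForCausalLM.from_pretrained(path, **model_kwargs)
        return ('auto', path)


class Int8QwenCausalLM(HuggingFaceCausalLM):
    """Loads the SmoothQuant INT8 checkpoint instead of the vanilla HF model."""

    def _load_model(self, path, model_kwargs):
        # Custom loader would go here, e.g.:
        #   model = Int8Qwen2ForCausalLM.from_pretrained(path, **model_kwargs)
        #   load_parameters(model, './models/model_params_ds_qwen2_1.5B.pth')
        return ('int8', path)


# The subclass's loader runs in place of the default one:
wrapper = Int8QwenCausalLM('/path/to/int8/checkpoint', torch_dtype='float16')
```

If someone can point me to where the actual from_pretrained call lives in OpenCompass, I can adapt this pattern to the real class.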
Will you implement it?