
export and run with bfloat16 weight matrices #407

Open

wants to merge 2 commits into master
Conversation

@efocht commented Sep 25, 2023

Added an export method that saves the matrix weights as bfloat16 while keeping the rest as fp32. The `--version` value must be set to -1 (preliminary). Only the legacy export has been changed so far.
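
For illustration, the conversion underlying this format looks roughly like the sketch below (not the code in this PR, which converts via torch in Python; torch rounds to nearest even, which the helper mimics). bfloat16 keeps only the top 16 bits of an IEEE-754 float32:

```c
#include <stdint.h>
#include <string.h>

/* Sketch: fp32 -> bf16 with round-to-nearest-even on the
   discarded low 16 bits, like torch's conversion.
   NaN handling is omitted for brevity. */
static uint16_t fp32_to_bf16(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t round = 0x7FFFu + ((bits >> 16) & 1u);
    return (uint16_t)((bits + round) >> 16);
}
```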

Added runbf16.c, a demonstrator showing how to run the bfloat16 matrix multiply using float32 arithmetic.
Speed is ok'ish, not great, but space is halved compared to fp32: a 12-core Skylake Gold 6126 @ 2.6 GHz runs at 1.3 tokens/s. Maybe somebody would like to optimize the matmul further.

This patch is a prerequisite for an SX-Aurora Vector Engine patch that enables running on bfloat16 data at >32 tokens/s (also using only fp32 arithmetic units).

Storage format: each matrix is converted to torch.bfloat16, then viewed as torch.int16 for serialization.