🎯 Focusing
  • World


Pinned

  1. Q-Sparse-LLM (Public)

    My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated

    Python · 31 stars · 2 forks

  2. AdEMAMix-Optimizer-Pytorch (Public)

    The AdEMAMix Optimizer: Better, Faster, Older.

    Python · 174 stars · 10 forks

  3. Differential-Transformer-PyTorch (Public)

    PyTorch implementation of the Differential-Transformer architecture for sequence modeling, tailored as a decoder-only model in the style of large language models (LLMs). The architecture in…

    Python · 50 stars · 5 forks

  4. Brainstorm-science (Public)

    Sampling from a uniform distribution, toward the automation of math.

    C · 150 stars · 21 forks