
Hi there 👋

Welcome to IDreamer's homepage:

  • 🔭 I’m currently working on LLM-related systems and storage systems
  • 🌱 I’m currently learning how to be a musician and a rock star
  • 🤔 I’m looking for help with learning various musical instruments
  • 💬 Ask me about anything you’d like to know
  • 📫 How to reach me: hanshuo_wang@163.com or wanghanshuo1220@gmail.com

(GitHub stats widgets: Top Languages, GitHub Streak)

Pinned repositories

  1. DistServe (forked from LLMServe/DistServe)
     Disaggregated serving system for Large Language Models (LLMs).
     Language: Jupyter Notebook

  2. FlexFlow (forked from flexflow/flexflow-train)
     FlexFlow Serve: Low-Latency, High-Performance LLM Serving
     Language: C++

  3. INFaaS (forked from stanford-mast/INFaaS)
     Model-less Inference Serving
     Language: C++

  4. Mooncake (forked from kvcache-ai/Mooncake)
     Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
     Language: C++

  5. Quest (forked from mit-han-lab/Quest)
     [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference
     Language: Cuda

  6. vllm (forked from vllm-project/vllm)
     A high-throughput and memory-efficient inference and serving engine for LLMs
     Language: Python