
[ds-inference bloom] tweaks #340

Merged: 17 commits merged into main from bloom-ds-inference-repos on Sep 7, 2022

Conversation

stas00 (Member) commented Sep 1, 2022

This PR adds:

  1. support for tp-pre-sharded bloom repos
  2. support for int8 with accelerate (sketched after this list)
  3. nvme offload for ds-zero-inference (also sketched below)
  4. many benchmarks
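
For context, this is roughly what the int8-with-accelerate path looks like from the user side. A minimal sketch, assuming the transformers/bitsandbytes integration; the prompt and generation settings are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# load_in_8bit requires bitsandbytes; device_map="auto" lets accelerate
# shard the quantized weights across all visible GPUs.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom",
    device_map="auto",
    load_in_8bit=True,
)

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

And a rough sketch of the DeepSpeed config side of NVMe offload for ZeRO-inference; the nvme_path and aio values here are illustrative, not necessarily what this PR's scripts use:

```python
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {
            "device": "nvme",            # spill ZeRO-3 params to local NVMe
            "nvme_path": "/local_nvme",  # illustrative mount point
            "pin_memory": True,
        },
    },
    # async-I/O tuning for the NVMe reads; illustrative values
    "aio": {"block_size": 262144, "queue_depth": 32},
    "train_micro_batch_size_per_gpu": 1,
}
# The config is then passed to deepspeed.initialize(model=...,
# config_params=ds_config) and the returned engine is used for generation.
```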

mayank31398 (Collaborator) commented:

@stas00 I used to get ~66 msec/token (batch size = 1) for DS-inference with fp16.
After moving to the newest commit with int8 support, fp16 performance has dropped for me to 74 msec/token.

Can you confirm whether you are also observing a performance drop?
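
(For reference, a msec/token figure like the ones quoted in this thread is usually just total generation wall time divided by the number of new tokens. A minimal sketch, assuming a transformers-style generate() API; the helper name is made up:)

```python
import time
import torch

def msec_per_token(model, tokenizer, prompt, new_tokens=100):
    """Rough per-token decode latency: generate() wall time / tokens produced."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    torch.cuda.synchronize()
    start = time.time()
    # greedy decoding; may stop early at EOS, which is fine for a rough number
    output = model.generate(**inputs, max_new_tokens=new_tokens, do_sample=False)
    torch.cuda.synchronize()
    elapsed = time.time() - start
    generated = output.shape[1] - inputs["input_ids"].shape[1]
    return elapsed / generated * 1000
```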

stas00 (Member, Author) commented Sep 3, 2022

I'm not sure we are using the same hardware; I'm getting pretty similar performance. Please see:

https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/a57c2a50011c7a350fa97029f5dd2b32b2cb104f/scripts/bloom-inference-scripts/README.md

The diff from before is 40 msec => 44 msec.

int8 itself is of course slower than fp16.
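
(For anyone reproducing these numbers: a minimal sketch of how the fp16 vs. int8 choice typically enters the DS-inference wrapping step, assuming the deepspeed.init_inference API; the checkpoint plumbing for tp-pre-sharded repos is omitted:)

```python
import os
import torch
import deepspeed
from transformers import AutoModelForCausalLM

world_size = int(os.getenv("WORLD_SIZE", "1"))  # set by the deepspeed launcher

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom", torch_dtype=torch.float16
)

# replace_with_kernel_inject swaps in DeepSpeed's fused inference kernels.
# Switching dtype to torch.int8 selects the quantized path; tp-pre-sharded
# repos additionally pass a checkpoint-description json (omitted here).
model = deepspeed.init_inference(
    model,
    mp_size=world_size,   # tensor-parallel degree
    dtype=torch.float16,  # or torch.int8
    replace_with_kernel_inject=True,
)
```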

mayank31398 (Collaborator) commented Sep 3, 2022

So you are saying your performance only dropped by 4 msec?
For me it dropped by 8 msec.
How are your timings so much better than mine, though? :(
I think this may be a CPU difference plus a RAM-speed difference?
Can you confirm your CPU specs and RAM clock timings for me?

The int8 numbers match exactly for me, but not the fp16 performance.
Not sure why.

Also, if possible, please let me know your PyTorch and CUDA versions.

stas00 (Member, Author) commented Sep 4, 2022

No idea; you're not using the same machine, so it's normal to have different results. Even the GPUs could be slightly different, I think, or perhaps the PCIe type/number of channels.

PyTorch version: 1.12.0
CUDA used to build PyTorch: 11.6
CUDA runtime version: 11.2.67
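
(These three lines match the header format of `python -m torch.utils.collect_env`, which is a convenient way to capture the full environment when comparing setups like this.)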

specs:

$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              128
On-line CPU(s) list: 0-127
Thread(s) per core:  2
Core(s) per socket:  32
Socket(s):           2
NUMA node(s):        8
Vendor ID:           AuthenticAMD
CPU family:          25
Model:               1
Model name:          AMD EPYC 7543 32-Core Processor
Stepping:            1
CPU MHz:             1796.335
BogoMIPS:            5589.74
Virtualization:      AMD-V
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            32768K
NUMA node0 CPU(s):   0-7,64-71
NUMA node1 CPU(s):   8-15,72-79
NUMA node2 CPU(s):   16-23,80-87
NUMA node3 CPU(s):   24-31,88-95
NUMA node4 CPU(s):   32-39,96-103
NUMA node5 CPU(s):   40-47,104-111
NUMA node6 CPU(s):   48-55,112-119
NUMA node7 CPU(s):   56-63,120-127
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall sev_es fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
$ lshw -C memory
WARNING: you should run this program as super-user.
  *-memory                  
       description: System memory
       physical id: 0
       size: 511GiB
  *-memory UNCLAIMED
       description: Memory controller
       product: PMC-Sierra Inc.
       vendor: PMC-Sierra Inc.
       physical id: 0.1
       bus info: pci@0000:03:00.1
       version: 00
       width: 64 bits
       clock: 33MHz (30.3ns)
       capabilities: bus_master cap_list
       configuration: latency=0
       resources: iomemory:2000-1fff memory:20002000000-200023fffff
[...]
WARNING: output may be incomplete or inaccurate, you should run this program as super-user.
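
(Side note: the 33MHz figure above appears to be the PCI bus clock of that memory controller, not the RAM speed; without root, lshw cannot read the DIMM details. Something like `sudo dmidecode -t memory`, where available, reports the actual configured memory clock.)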

stas00 merged commit 479aac3 into main on Sep 7, 2022
stas00 deleted the bloom-ds-inference-repos branch on September 7, 2022 at 17:43
younesbelkada pushed a commit to younesbelkada/Megatron-DeepSpeed that referenced this pull request Sep 28, 2022