
Verify MKLDNN benchmark using 0.11.0 image #6568

Closed
luotao1 opened this issue Dec 13, 2017 · 7 comments · Fixed by #7295


luotao1 commented Dec 13, 2017

IntelOptimizedPaddle.md uses paddle:latest and paddle:latest-openblas; we should verify it using paddle:0.11.0 and paddle:0.11.0-openblas.
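The verification step could be driven by a small wrapper like the sketch below. Only the image tags come from this issue; the `paddlepaddle/` Docker Hub prefix and the benchmark script path are assumptions for illustration.

```python
import subprocess

def benchmark_cmd(image, script="/paddle/benchmark/run.sh"):
    """Build the `docker run` command line for one image.

    The script path is a hypothetical placeholder, not the actual
    entry point used in IntelOptimizedPaddle.md.
    """
    return ["docker", "run", "--rm", image, "bash", script]

for tag in ("paddlepaddle/paddle:0.11.0", "paddlepaddle/paddle:0.11.0-openblas"):
    cmd = benchmark_cmd(tag)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment on a machine with Docker
```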

luotao1 self-assigned this Dec 13, 2017

luotao1 commented Dec 14, 2017

Training: before -> 0.11.0

  • MKLML and MKL-DNN: the benchmark results are almost the same.
  • OpenBLAS: not finished yet.

Comparison results:

  • VGG-19

| BatchSize | 64 | 128 | 256 |
|---|---|---|---|
| OpenBLAS | 7.80->7.57 | 9.00->8.93 | 10.80->10.60 |
| MKLML | 12.12->12.59 | 13.70->14.36 | 16.18->17.11 |
| MKL-DNN | 28.46->28.54 | 29.83->29.85 | 30.44->30.62 |

  • ResNet-50

| BatchSize | 64 | 128 | 256 |
|---|---|---|---|
| OpenBLAS | 25.22->22.81 | 25.68->23.83 | 27.12->25.85 |
| MKLML | 32.52->32.27 | 31.89->32.14 | 33.12->33.28 |
| MKL-DNN | 81.69->81.16 | 82.35->83.84 | 84.08->85.28 |

  • GoogLeNet

| BatchSize | 64 | 128 | 256 |
|---|---|---|---|
| OpenBLAS | 89.52->88.49 | 96.97->92.99 | 108.25->107.22 |
| MKLML | 128.46->129.37 | 137.89->136.49 | 158.63->157.50 |
| MKL-DNN | 250.46->253.73 | 264.83->254.01 | 269.50->270.79 |
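The "almost the same" judgment above can be made explicit with a small helper (my own sketch, not part of the benchmark scripts) that computes the relative change between the before and 0.11.0 numbers:

```python
def rel_change(before, after):
    """Relative change of `after` vs `before`, as a fraction."""
    return (after - before) / before

# Training VGG-19, MKL-DNN, batch size 64 (numbers from the table above)
print(f"{rel_change(28.46, 28.54):+.2%}")  # about +0.28%

def needs_double_check(before, after, tol=0.05):
    """Flag results that moved more than `tol`; the 5% threshold is arbitrary."""
    return abs(rel_change(before, after)) > tol
```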

luotao1 added the MKL label Dec 14, 2017

luotao1 commented Dec 14, 2017

Inference: before -> 0.11.0

  • MKLML and MKL-DNN: the benchmark results are almost the same, except GoogLeNet with MKL-DNN.
  • OpenBLAS: not finished yet.

Comparison results:

  • VGG-19

| BatchSize | 1 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|
| OpenBLAS | 1.07->1.10 | 1.08->1.96 | 1.06->3.62 | 0.88->3.63 | 0.65->2.25 |
| MKLML | 5.58->5.51 | 9.80->9.52 | 15.15->15.81 | 21.21->21.38 | 28.67->32.56 |
| MKL-DNN | 75.07->71.23 | 88.64->88.52 | 82.58->89.76 | 92.29->92.15 | 96.75->97.04 |

  • ResNet-50

| BatchSize | 1 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|
| OpenBLAS | 3.35->3.31 | 3.19->6.72 | 3.09->11.59 | 2.55->13.17 | 1.96->9.27 |
| MKLML | 6.33->6.28 | 12.02->11.85 | 22.88->21.67 | 40.53->40.23 | 63.09->62.47 |
| MKL-DNN | 107.83->108.94 | 148.84->151.12 | 177.78->188.24 | 189.35->181.82 | 217.69->224.56 |

  • GoogLeNet

| BatchSize | 1 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|
| OpenBLAS | 12.04->12.06 | 11.31->23.56 | 10.00->34.48 | 9.07->36.45 | 4.34->23.12 |
| MKLML | 22.74->22.04 | 41.56->40.82 | 81.22->81.89 | 133.47->133.75 | 210.53->190.48 |
| MKL-DNN | 175.10->238.81 | 272.92->309.93 | 450.70->269.47 | 512.00->341.33 | 600.94->355.56 |

  • AlexNet

| BatchSize | 1 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|
| OpenBLAS | 3.53 | 6.23 | 15.04 | 26.06 | 31.62 |
| MKLML | 21.32 | 36.55 | 73.06 | 131.15 | 192.77 |
| MKL-DNN | 442.91 | 656.41 | 719.10 | 847.68 | 850.51 |


tensor-tang commented Dec 28, 2017

I've tested on 0.11.0, and I think the data below needs a double check:

  • Training GoogLeNet

| BatchSize | 128 |
|---|---|
| MKL-DNN | 264.83->254.01 |

  • Inference GoogLeNet

| BatchSize | 8 | 16 |
|---|---|---|
| MKL-DNN | 512.00->341.33 | 600.94->355.56 |

My data did not reduce that much.


luotao1 commented Dec 28, 2017

> My data did not reduce that much.

Yes, the double check is OK; both the MKLML and MKL-DNN results are fine:

  • Training GoogLeNet

| BatchSize | 128 |
|---|---|
| MKL-DNN | 264.83->265.41 |

  • Inference GoogLeNet

| BatchSize | 4 | 8 | 16 |
|---|---|---|---|
| MKL-DNN | 450.70->415.58 | 512.00->524.59 | 600.94->603.77 |

| BatchSize | 16 |
|---|---|
| MKLML | 210.53->204.47 |


luotao1 commented Dec 29, 2017

The OpenBLAS results were all wrong because the environment variables were not set:

| BatchSize | 64 |
|---|---|
| VGG-19 | 7.80->0.23 |
| ResNet-50 | 25.22->0.69 |
| GoogLeNet-v1 | 89.52->1.53 |
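A small pre-flight guard could catch this earlier. The issue does not say which variables were missing, so the names below (the usual OpenBLAS/OpenMP threading controls) are an assumption:

```python
import os

# Hypothetical pre-flight check: warn if the threading env vars that
# OpenBLAS benchmarks commonly depend on are unset. Which variables
# actually mattered here is an assumption, not stated in the issue.
REQUIRED = ("OPENBLAS_NUM_THREADS", "OMP_NUM_THREADS")

def missing_env(required=REQUIRED):
    """Return the names in `required` that are not set in the environment."""
    return [name for name in required if name not in os.environ]

if missing_env():
    print("warning: unset env vars:", ", ".join(missing_env()))
```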


luotao1 commented Dec 29, 2017

The OpenBLAS (training) results are almost the same after #7104.

  • AlexNet (training): needs a double check

| BatchSize | 64 | 128 | 256 |
|---|---|---|---|
| OpenBLAS | 45.62->50.80 | 72.79->73.38 | 107.22->110.81 |

tensor-tang commented

I checked these data; LGTM.
