
Optimize dygraph performance by moving runtime imports to the beginning #37759

Merged

Conversation

JiabinYang
Contributor

PR types

Performance optimization

PR changes

Others

Describe

This PR optimizes performance by moving runtime imports to the beginning of the module, which makes dygraph about 5% faster on the code below:

import os
import paddle.fluid.core as core
from paddle import _C_ops
import paddle
import numpy as np
from time import time

num_runs = 100000

os.environ["CUDA_VISIBLE_DEVICES"] = ""
paddle.set_device("cpu")

class FluidMatmulx2(paddle.nn.Layer):
    def __init__(self):
        super(FluidMatmulx2, self).__init__()

        arrW1 = np.ones([4, 128]).astype('float32')
        self.W1 = paddle.to_tensor(arrW1, 'float32', core.CPUPlace())
        self.W1.stop_gradient = False
        
        arrW2 = np.ones([128, 2]).astype('float32')
        self.W2 = paddle.to_tensor(arrW2, 'float32', core.CPUPlace())
        self.W2.stop_gradient = False

    def forward(self, obs):
        Out1 = _C_ops.matmul_v2(obs, self.W1, 'trans_x', False, 'trans_y', False)
        Out = _C_ops.matmul_v2(Out1, self.W2, 'trans_x', False, 'trans_y', False)
        return Out

if __name__ == "__main__":
    input_data = np.ones([32, 4]).astype('float32')
    
    ###########
    # Warm Up #
    ###########
    data_paddle = paddle.to_tensor(input_data.astype(np.float32))
    fluid_matmul = FluidMatmulx2()
    for _ in range(num_runs):
        fluid_matmul.forward(data_paddle)
    
    ###############
    # Performance #
    ###############
    # Fluid Matmul Forward
    data_paddle = paddle.to_tensor(input_data.astype(np.float32))
    ts = time()
    for _ in range(num_runs):
        out = fluid_matmul.forward(data_paddle)
    te = time()
    print("Fluid Matmul Forward: ", 1e6*(te-ts))
    
    # Fluid Matmul Call
    data_paddle = paddle.to_tensor(input_data.astype(np.float32))
    ts = time()
    for _ in range(num_runs):
        out = fluid_matmul(data_paddle)
    te = time()
    print("Fluid Matmul Call: ", 1e6*(te-ts))

Before / after: benchmark screenshots (images not preserved in this export).
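The idea behind the optimization can be sketched in isolation: importing inside a hot function pays a `sys.modules` lookup (plus the import machinery's bookkeeping) on every call, while a top-level import pays it once at module load. This is a minimal illustration using the standard `operator` module; the names are purely illustrative and not from the PR, which moves Paddle-internal imports.

```python
import timeit

def add_with_runtime_import(a, b):
    # Runtime import: the import statement runs on every call.
    # The module is cached in sys.modules, but the lookup and
    # local-name binding still cost time in a hot loop.
    import operator
    return operator.add(a, b)

import operator  # top-level import: resolved once at module load

def add_with_toplevel_import(a, b):
    return operator.add(a, b)

slow = timeit.timeit(lambda: add_with_runtime_import(1, 2), number=100_000)
fast = timeit.timeit(lambda: add_with_toplevel_import(1, 2), number=100_000)
print(f"runtime import:   {slow:.4f}s")
print(f"top-level import: {fast:.4f}s")
```

Over 100,000 calls the per-call overhead accumulates, which is why hoisting the imports shows up as a measurable win in the tight dygraph dispatch path benchmarked above.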

@paddle-bot-old
paddle-bot-old bot commented Dec 1, 2021

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

Contributor

@Aurelius84 left a comment

LGTM

@JiabinYang JiabinYang merged commit bfb8577 into PaddlePaddle:develop Dec 2, 2021
Zjq9409 pushed a commit to Zjq9409/Paddle that referenced this pull request Dec 10, 2021
…ddlePaddle#37759)

* optimize dygraph probl

* refine code

* fix convert dtype error

* fix import datafeeder error
0x45f pushed a commit to 0x45f/Paddle that referenced this pull request Dec 24, 2021
…ddlePaddle#37759)

* optimize dygraph probl

* refine code

* fix convert dtype error

* fix import datafeeder error
lanxianghit pushed a commit that referenced this pull request Jan 5, 2022
… in dy2stat (#38418)

Fix error when calling sublayer's non-forward func in dy2stat
cherry-pick: #37713, #37759, #37296, #38540, #37888