
support view strategy in eager_fluid state #40830

Merged
merged 7 commits into PaddlePaddle:develop on Mar 31, 2022

Conversation

pangyoki
Contributor

@pangyoki pangyoki commented Mar 22, 2022

PR types

New features

PR changes

Others

Describe

Add the View strategy to the intermediate (eager_fluid) state of dygraph: input and output Tensors share the underlying data. This covers the reshape, squeeze, unsqueeze, and flatten APIs.

The HandleViewBetweenInputAndOutput method makes the input and output share the underlying data, and also share the inplace_version, via two calls (a minimal sketch follows this list):

  • ShareBufferWith
  • ShareInplaceVersionCounterWith
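
A minimal sketch of what this sharing amounts to on the underlying phi::DenseTensor. The helper name ShareAsView is hypothetical, and the exact method signatures are assumed; only the two method names above come from the PR:

#include "paddle/phi/core/dense_tensor.h"

// Make `out` a view of `in`: both tensors point at the same allocation and
// tick the same inplace_version counter, so an inplace write through either
// tensor is visible to autograd's version-check machinery.
void ShareAsView(const phi::DenseTensor& in, phi::DenseTensor* out) {
  out->ShareBufferWith(in);                 // share the underlying storage
  out->ShareInplaceVersionCounterWith(in);  // share the inplace_version counter
}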

Example of the generated code for a view op in the intermediate state:

std::tuple<paddle::experimental::Tensor, paddle::experimental::Tensor>
reshape2_dygraph_function(const paddle::experimental::Tensor& X,
                          const paddle::experimental::Tensor& Shape,
                          const paddle::framework::AttributeMap& attr_map) {
  paddle::platform::RecordEvent dygraph_entrance_record_event(
      "reshape2 dygraph", paddle::platform::TracerEventType::Operator, 1);
  VLOG(3) << "Running Eager Forward Op: reshape2";
  // Dygraph Forward Pass

  std::map<std::string, std::vector<std::shared_ptr<egr::EagerVariable>>> ins =
      {{"X", egr::EagerUtils::TrySyncToVars(X)}};

  if (Shape.initialized()) ins["Shape"] = egr::EagerUtils::TrySyncToVars(Shape);

  std::map<std::string, std::vector<std::shared_ptr<egr::EagerVariable>>> outs =
      {{"Out",
        {std::make_shared<egr::EagerVariable>(
            egr::Controller::Instance().GenerateUniqueName())}},
       {"XShape",
        {std::make_shared<egr::EagerVariable>(
            egr::Controller::Instance().GenerateUniqueName())}}};

  if (ins.count("X") && outs.count("Out")) {
    // The new view step: let Out share X's buffer and inplace_version.
    egr::EagerUtils::HandleViewBetweenInputAndOutput(ins["X"][0], outs["Out"][0]);
  }


  // Prepare Autograd Meta 
  egr::AutogradMeta* p_autograd_X = egr::EagerUtils::nullable_autograd_meta(X);
  egr::AutogradMeta* p_autograd_Shape = egr::EagerUtils::nullable_autograd_meta(Shape);

  bool trace_backward = egr::Controller::Instance().HasGrad();

  bool require_any_grad = egr::EagerUtils::ComputeRequireGrad(trace_backward, p_autograd_X, p_autograd_Shape);

  paddle::framework::AttributeMap attrs = attr_map;
  paddle::framework::AttributeMap default_attrs;
  egr::Controller::Instance().GetCurrentTracer()->TraceOp(
      "reshape2", ins, outs, attrs,
      egr::Controller::Instance().GetExpectedPlace(), &default_attrs, true, {});

  paddle::experimental::Tensor Out;
  egr::EagerUtils::GetOutput(outs["Out"][0], &Out);
  paddle::experimental::Tensor XShape;
  egr::EagerUtils::GetOutput(outs["XShape"][0], &XShape);

  {
    paddle::platform::RecordEvent node_creation_record_event(
        "reshape2 node_creation", paddle::platform::TracerEventType::Operator, 1);
    egr::AutogradMeta* p_autograd_Out = egr::EagerUtils::autograd_meta(&Out);
    egr::AutogradMeta* p_autograd_XShape = egr::EagerUtils::autograd_meta(&XShape);
    if (require_any_grad) {
      VLOG(6) << " Construct Grad for reshape2 ";
      egr::EagerUtils::PassStopGradient(false, p_autograd_Out, p_autograd_XShape);
      // Create GradOpNode
      auto grad_node = std::make_shared<GradNodereshape2>(2, 2);

      // Set Attributes
      grad_node->SetAttrMap(std::move(attrs));
      grad_node->SetDefaultAttrMap(std::move(default_attrs));

      // Set Tensor Wrappers
      grad_node->SetTensorWrapperXShape(XShape, false);

      grad_node->SetGradOutMeta(X, 0);
      if (p_autograd_X) grad_node->AddEdges(p_autograd_X, 0);
      grad_node->SetGradOutMeta(Shape, 1);
      if (p_autograd_Shape) grad_node->AddEdges(p_autograd_Shape, 1);
      egr::EagerUtils::SetOutRankWithSlot(p_autograd_Out, 0);
      egr::EagerUtils::SetHistory(p_autograd_Out, grad_node);
      grad_node->SetGradInMeta(Out, 0);
      egr::EagerUtils::CheckAndRetainGrad(Out);
      egr::EagerUtils::SetOutRankWithSlot(p_autograd_XShape, 1);
      grad_node->SetGradInMeta(XShape, 1);

    }
  }

  return std::make_tuple(Out, XShape);

}
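
For reference, a hypothetical call site (the input tensor x and the shape attribute value are illustrative, not taken from the PR). After the call, Out aliases X's buffer, while XShape only records X's original shape for the backward pass:

  paddle::framework::AttributeMap attrs;
  attrs["shape"] = std::vector<int>{2, 8};
  paddle::experimental::Tensor out, xshape;
  std::tie(out, xshape) = reshape2_dygraph_function(
      x, /*Shape=*/paddle::experimental::Tensor(), attrs);
  // out now shares x's allocation and inplace_version counter; no copy is made.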

@pangyoki pangyoki added AMD and removed AMD labels Mar 23, 2022
Contributor

@JiabinYang JiabinYang left a comment


LGTM

@pangyoki pangyoki merged commit 2f1c1ae into PaddlePaddle:develop Mar 31, 2022