[dy2s] fix error when using same tensor as inputs in one call & fix bugs in jit.save #55963
Conversation
Your PR has been submitted successfully. Thank you for your contribution to this open source project!
@@ -321,7 +346,8 @@ inline void RunProgramAPI(

      VLOG(4) << "global_inner_scope:" << global_inner_scope;

-     auto input_names = details::GetTensorsName(x);
+     auto input_names =
@feifei-111 What problem did the previous approach have? Does Tensor.name not behave as expected?
If two inputs are the same tensor, the constructed program becomes inconsistent, which prevents SOT from reusing the previously built program, because SOT expects to bypass the dynamic-to-static cache and fetch the program directly. Now the input tensors are renamed, so the generated program is correct.
The original Tensor.name approach could not handle the case where a single Tensor is bound to two names in the Program. The new approach covers all cases.
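A minimal sketch of the renaming idea described above (the helper name and prefix here are hypothetical, not the PR's actual implementation): generate a per-position name for every input instead of relying on Tensor.name, so the same tensor passed twice still maps to two distinct feed variables.

def make_unique_input_names(tensors, prefix="_jst_input"):
    # One name per argument position, independent of Tensor.name, so
    # duplicated tensors no longer collapse into a single program input.
    return [f"{prefix}_{i}" for i in range(len(tensors))]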
LGTM
LGTM for docs
LGTM
static void ShareTensorsIntoScopeWithName(
    const std::vector<Tensor> &tensors,
    const std::vector<std::string> &tensor_names,
    paddle::framework::Scope *scope) {
  for (size_t i = 0; i < tensors.size(); ++i) {
    auto name = tensor_names[i];
    // Skip placeholder variables that do not correspond to real inputs.
    if (name == paddle::framework::kFakeVarName) {
      continue;
    }
    auto *var = scope->Var(name);
    CheckInputVarStatus(tensors[i]);
    // Share the underlying tensor storage into the scope variable.
    auto tensor_base = tensors[i].impl();
    if (phi::DenseTensor::classof(tensor_base.get())) {
      auto *dst_tensor = var->GetMutable<phi::DenseTensor>();
      auto t = std::dynamic_pointer_cast<phi::DenseTensor>(tensor_base);
      *dst_tensor = *t;
    } else if (phi::SelectedRows::classof(tensor_base.get())) {
      auto *dst_tensor = var->GetMutable<phi::SelectedRows>();
      auto t = std::dynamic_pointer_cast<phi::SelectedRows>(tensor_base);
      *dst_tensor = *t;
    }
  }
}
Eliminate the duplication here; this can be reused by ShareTensorsIntoScope.
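One possible way to remove that duplication, sketched under the assumption that ShareTensorsIntoScope currently derives the names itself via details::GetTensorsName (the exact signatures may differ from the real code):

static void ShareTensorsIntoScope(const std::vector<Tensor> &tensors,
                                  paddle::framework::Scope *scope) {
  // Derive the names once, then forward to the name-aware variant so the
  // tensor-sharing logic lives in a single place.
  auto tensor_names = details::GetTensorsName(tensors);
  ShareTensorsIntoScopeWithName(tensors, tensor_names, scope);
}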
@@ -488,11 +491,12 @@ def keep_name_table(self, value):

    def _parse_save_configs(configs):
Please have 日升 approve this part.
@@ -302,6 +311,44 @@ def _replace_value_with_input_spec(args):
    return args_with_spec


def _replace_to_input_spec_with_new_name(args, arg_names):
Remove the now-unused _replace_value_with_input_spec.
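For readers following along, a hedged sketch of what a renamed-spec helper could look like; the actual _replace_to_input_spec_with_new_name in this PR may differ in its details:

import paddle
from paddle.static import InputSpec

def replace_to_input_spec_with_new_name(args, arg_names):
    # Build specs from the flattened args, but take each name from the
    # per-position arg_names so duplicated tensors keep distinct names.
    specs = []
    for arg, name in zip(paddle.utils.flatten(args), arg_names):
        if isinstance(arg, paddle.Tensor):
            specs.append(InputSpec.from_tensor(arg, name=name))
        elif isinstance(arg, InputSpec):
            specs.append(InputSpec(arg.shape, arg.dtype, name=name))
        else:
            specs.append(arg)
    return specs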
@@ -887,6 +904,8 @@ def _prepare(self, inputs):
        flatten_inputs = paddle.utils.flatten(inputs)
        # Convert variable into Tensor and feed in training data.
        input_vars = []
        input_var_names = []
Write this in a functional style, e.g.:
map(lambda x: x.desc.name, self.inputs)
LGTM for run_program_op
PR types
Others
PR changes
Others
Description
Fix the error raised when the same tensor is passed as multiple inputs in one call.
A wrong program was created in cases like:
t = paddle.to_tensor(1)
f(t, t)
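A fuller, hedged repro sketch of this case (the function f and decorator usage below are illustrative, not the PR's test code):

import paddle

@paddle.jit.to_static
def f(x, y):
    return x + y

t = paddle.to_tensor(1)
out = f(t, t)  # before this fix, both arguments could collapse into a single program input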
Fix bugs in jit.save: when program_cache is not empty, jit.save might save a wrong program if input_spec is given.
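A hedged sketch of the jit.save scenario (the layer, save path, and spec below are illustrative, not the PR's test case):

import paddle
from paddle.static import InputSpec

net = paddle.jit.to_static(paddle.nn.Linear(4, 2))
net(paddle.randn([1, 4]))  # running once populates the program cache

# Before the fix, saving with an explicit input_spec while the cache was
# non-empty could write out a program that did not match the given spec.
paddle.jit.save(net, path="./linear_net",
                input_spec=[InputSpec(shape=[None, 4], dtype='float32')])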
Others
PCard-66972