Feature/multigpu #4838 (Merged)

Changes from all commits.

Commits (48, all authored by dzhwinter):
48dea84  "nccl multigpu init"
4e95c49  Merge remote-tracking branch 'origin/develop' into impl
408e21a  "remove clang format detect"
0fa34db  nccl init
4118782  Merge remote-tracking branch 'origin/develop' into multigpu
51abb6c  add test
d144310  "nccl add interface"
54d3dbd  "add enforce check"
d2be7ec  Merge remote-tracking branch 'origin/develop' into multigpu
d8aebaf  "fix enforce error"
5bcb638  "fix common test"
d4d215a  Merge remote-tracking branch 'origin/develop' into multigpu
73883bd  "fix error"
23cb825  "add python test case"
fdfc8f9  "switch to Init op"
333045d  "move nccl to another directory"
423d743  "add register gpu macro"
ec47565  "add reduce hash function"
f6106ff  merge develop into multigpu
50f04dc  "add init allreduce test"
ef257e6  write nccl c++ test case
da1181b  Merge remote-tracking branch 'origin/develop' into feature/multigpu
0990c87  checkin nccl operator
1e8474b  "delete python ops testcase"
026c61c  "fix allreduce python test"
63fb41b  "redefine the initop from kernel to OpBase"
5200c65  "move Tensor to LoDTensor"
6d1493a  "add bcast c++ test case"
11cf3e3  "refactorization of nccl test case"
bab5e53  Merge remote-tracking branch 'origin/develop' into feature/multigpu
94992a9  "add multiop testcase"
38d3adf  "add multioperator testcase"
61c1b04  "fix multigpu testcase"
4e165f4  "fix create output variable bug"
2573ac1  "remove python side test case to another PR."
626ff3b  "polish cmake file"
4b9cf0e  "add disable"
6bc261b  fix ci
dbfd130  "FIX CI"
16a39d2  fix conflict
6cce526  "fixed based on comment"
5220052  "polish code based on comment"
f632706  fix based on comment
75eaccc  "rerun ci"
37842d8  rerun ci
99308b1  rerun CI
6f009cf  rerun ci
71305e5  "polish code based on comment"
Files changed:

CMakeLists.txt (new file, @@ -0,0 +1,3 @@):

```cmake
if(WITH_GPU)
  nv_library(nccl_common SRCS nccl_gpu_common.cc DEPS device_context operator)
endif()
```
nccl_gpu_common.cc (new file, @@ -0,0 +1,17 @@):

```cpp
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/operators/nccl/nccl_gpu_common.h"
#include "paddle/platform/gpu_info.h"

// Empty for now: Communicator is header-only, so this translation unit
// mainly gives the nccl_common library a source file to compile.
namespace paddle {
namespace platform {}  // namespace platform
}  // namespace paddle
```
paddle/operators/nccl/nccl_gpu_common.h (new file, @@ -0,0 +1,63 @@):

```cpp
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#pragma once

#include <algorithm>
#include <condition_variable>
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

#include "paddle/platform/device_context.h"
#include "paddle/platform/dynload/nccl.h"
#include "paddle/platform/enforce.h"
#include "paddle/platform/macros.h"

namespace paddle {
namespace platform {

constexpr int kInvalidGPUId = -1;

struct Communicator {
  std::vector<ncclComm_t> comms_;
  std::unordered_map<int, int> comm_id_map_;

  Communicator() {}

  int GetCommId(int device_id) const { return comm_id_map_.at(device_id); }

  void InitAll(const std::vector<int>& gpus) {
    comms_.resize(gpus.size());
    for (size_t i = 0; i < gpus.size(); ++i) {
      comm_id_map_[gpus[i]] = i;
    }
    PADDLE_ENFORCE(
        dynload::ncclCommInitAll(comms_.data(), gpus.size(), gpus.data()));
  }

  ~Communicator() {
    for (size_t i = 0; i < comms_.size(); ++i) {
      // FIXME(dzh): PADDLE_ENFORCE returns void, so the result of
      // ncclCommDestroy cannot be checked here.
      dynload::ncclCommDestroy(comms_[i]);
    }
  }

  DISABLE_COPY_AND_ASSIGN(Communicator);
};

}  // namespace platform
}  // namespace paddle
```
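For orientation, here is a minimal usage sketch of this struct; it is not part of the diff, and the two-GPU device list and the function name `CommunicatorExample` are assumptions for illustration only:

```cpp
// Minimal usage sketch (illustrative, not in this PR): initialize a
// communicator group over GPUs 0 and 1, then fetch device 1's handle.
#include <vector>

#include "paddle/operators/nccl/nccl_gpu_common.h"

void CommunicatorExample() {
  paddle::platform::Communicator comm;
  comm.InitAll({0, 1});         // ncclCommInitAll over the given device ids
  int idx = comm.GetCommId(1);  // comm_id_map_: device id -> index in comms_
  ncclComm_t handle = comm.comms_[idx];
  (void)handle;  // would be passed to ncclAllReduce / ncclBcast / ncclReduce
}  // ~Communicator calls ncclCommDestroy on every handle
```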
NCCL operator definitions and registration (new file, @@ -0,0 +1,206 @@):

```cpp
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/framework/op_registry.h"
#include "paddle/operators/nccl/nccl_gpu_common.h"

namespace paddle {
namespace operators {

// NCCLInitOp
class NCCLInitOp : public framework::OperatorBase {
 public:
  NCCLInitOp(const std::string &type, const framework::VariableNameMap &inputs,
             const framework::VariableNameMap &outputs,
             const framework::AttributeMap &attrs)
      : OperatorBase(type, inputs, outputs, attrs) {}

  void Run(const framework::Scope &scope,
           const platform::DeviceContext &dev_ctx) const override {
    const auto &name = Output("Communicator");
    PADDLE_ENFORCE_NOT_NULL(scope.FindVar(name),
                            "Can not find variable '%s' in the scope.", name);
    std::vector<int> gpus = Attr<std::vector<int>>("gpus");
    PADDLE_ENFORCE(!gpus.empty(), "Attr(gpus) should not be empty.");

    platform::Communicator *comm =
        scope.FindVar(name)->GetMutable<platform::Communicator>();
    comm->InitAll(gpus);
  }
};

class NCCLInitOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  NCCLInitOpMaker(framework::OpProto *proto,
                  framework::OpAttrChecker *op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddOutput("Communicator",
              "Create Communicator for communicating between gpus");
    AddAttr<std::vector<int>>("gpus", "gpu id lists");
    AddAttr<int>("data_type", "output data type")
        .SetDefault(framework::DataType::FP32);
    AddComment(R"DOC(
Create a communicator.
)DOC");
  }
};

// AllReduceOp
class NCCLAllReduceOp : public framework::OperatorWithKernel {
 public:
  using framework::OperatorWithKernel::OperatorWithKernel;

 protected:
  void InferShape(framework::InferShapeContext *ctx) const override {
    PADDLE_ENFORCE(ctx->HasInput("X"),
                   "Input(X) of AllReduce op should not be NULL");
    PADDLE_ENFORCE(ctx->HasInput("Communicator"),
                   "Input(Communicator) of AllReduce op should not be NULL");
    PADDLE_ENFORCE(ctx->HasOutput("Out"),
                   "Output(Out) of AllReduce op should not be NULL");

    auto x_dims = ctx->GetInputsDim("X");

    std::string reduction = ctx->Attrs().Get<std::string>("reduction");
    PADDLE_ENFORCE((reduction == "ncclSum" || reduction == "ncclProd" ||
                    reduction == "ncclMin" || reduction == "ncclMax"),
                   "invalid reduction.");

    ctx->SetOutputsDim("Out", x_dims);
    ctx->ShareLoD("X", /*->*/ "Out");
  }
};

// ReduceOp
class NCCLReduceOp : public framework::OperatorWithKernel {
 public:
  using framework::OperatorWithKernel::OperatorWithKernel;

 protected:
  void InferShape(framework::InferShapeContext *ctx) const override {
    PADDLE_ENFORCE(ctx->HasInput("X"),
                   "Input(X) of Reduce op should not be NULL");
    PADDLE_ENFORCE(ctx->HasInput("Communicator"),
                   "Input(Communicator) of Reduce op should not be NULL");
    PADDLE_ENFORCE(ctx->HasOutput("Out"),
                   "Output(Out) of Reduce op should not be NULL");

    std::string reduction = ctx->Attrs().Get<std::string>("reduction");
    PADDLE_ENFORCE((reduction == "ncclSum" || reduction == "ncclProd" ||
                    reduction == "ncclMin" || reduction == "ncclMax"),
                   "invalid reduction.");

    auto x_dims = ctx->GetInputsDim("X");
    ctx->SetOutputsDim("Out", x_dims);
    ctx->ShareLoD("X", /*->*/ "Out");
  }
};

// BcastOp
class NCCLBcastOp : public framework::OperatorWithKernel {
 public:
  using framework::OperatorWithKernel::OperatorWithKernel;

 protected:
  void InferShape(framework::InferShapeContext *ctx) const override {
    PADDLE_ENFORCE(ctx->HasInput("X"),
                   "Input(X) of Bcast op should not be NULL");
    PADDLE_ENFORCE(ctx->HasInput("Communicator"),
                   "Input(Communicator) of Bcast op should not be NULL");
    PADDLE_ENFORCE(ctx->HasOutput("Out"),
                   "Output(Out) of Bcast op should not be NULL");

    int root = ctx->Attrs().Get<int>("root");
    PADDLE_ENFORCE(root != platform::kInvalidGPUId, "Bcast root must be set.");

    auto x_dims = ctx->GetInputsDim("X");
    ctx->SetOutputsDim("Out", x_dims);
    ctx->ShareLoD("X", /*->*/ "Out");
  }
};

// AllReduceOp maker
class NCCLAllReduceOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  NCCLAllReduceOpMaker(framework::OpProto *proto,
                       framework::OpAttrChecker *op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("X", "The input of AllReduce op");
    AddInput("Communicator", "Communicator for communicating between gpus");
    AddOutput("Out", "The output of AllReduce op");
    AddAttr<std::string>("reduction",
                         "{'ncclMin', 'ncclMax', 'ncclProd', 'ncclSum'}.")
        .SetDefault("ncclSum");
    AddComment(R"DOC(
AllReduce the input tensors.
)DOC");
  }
};

// ReduceOp maker
class NCCLReduceOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  NCCLReduceOpMaker(framework::OpProto *proto,
                    framework::OpAttrChecker *op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("X", "The input of Reduce op");
    AddInput("Communicator", "Communicator for communicating between gpus");
    AddOutput("Out", "The output of Reduce op");
    AddAttr<std::string>("reduction",
                         "{'ncclMin', 'ncclMax', 'ncclProd', 'ncclSum'}.")
        .SetDefault("ncclSum");
    AddAttr<int>("root",
                 "The root gpu of the parameter. If not set "
                 "(platform::kInvalidGPUId), the root is chosen by hashing "
                 "the parameter name.")
        .SetDefault(platform::kInvalidGPUId);
    AddComment(R"DOC(
Reduce the tensors.
)DOC");
  }
};

// BcastOp maker
class NCCLBcastOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  NCCLBcastOpMaker(framework::OpProto *proto,
                   framework::OpAttrChecker *op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("X", "The input of BcastSend op");
    AddInput("Communicator", "Communicator for communicating between gpus");
    AddOutput("Out", "The output of Bcast");
    AddAttr<int>("root",
                 "The root gpu of the parameter. If not set "
                 "(platform::kInvalidGPUId), the root is chosen by hashing "
                 "the parameter name.")
        .SetDefault(platform::kInvalidGPUId);
    AddComment(R"DOC(
Bcast the tensors.
)DOC");
  }
};

}  // namespace operators
}  // namespace paddle

namespace ops = paddle::operators;
REGISTER_OPERATOR(ncclInit, ops::NCCLInitOp,
                  paddle::framework::EmptyGradOpMaker, ops::NCCLInitOpMaker);

REGISTER_OP_WITHOUT_GRADIENT(ncclAllReduce, ops::NCCLAllReduceOp,
                             ops::NCCLAllReduceOpMaker);
REGISTER_OP_WITHOUT_GRADIENT(ncclBcast, ops::NCCLBcastOp,
                             ops::NCCLBcastOpMaker);
REGISTER_OP_WITHOUT_GRADIENT(ncclReduce, ops::NCCLReduceOp,
                             ops::NCCLReduceOpMaker);
```
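The GPU kernels that actually issue the NCCL calls are not in this file. As a hedged sketch of the call pattern these operators imply, the function name `AllReduceSketch`, the raw buffer arguments, and the stream handling below are illustrative assumptions, not the PR's actual kernel code; only the `dynload` wrapper style mirrors the `ncclCommInitAll` call in the header above:

```cpp
// Sketch only: how an AllReduce kernel could use the Communicator.
// Picks the ncclComm_t slot for the current device, then reduces across
// the communicator group with ncclSum on that device's stream.
#include <cuda_runtime.h>

#include "paddle/operators/nccl/nccl_gpu_common.h"

void AllReduceSketch(const paddle::platform::Communicator &comm,
                     const float *sendbuff, float *recvbuff, size_t count,
                     int device_id, cudaStream_t stream) {
  int idx = comm.GetCommId(device_id);  // device id -> slot in comms_
  PADDLE_ENFORCE(paddle::platform::dynload::ncclAllReduce(
      sendbuff, recvbuff, count, ncclFloat, ncclSum, comm.comms_[idx],
      stream));
}
```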
Review discussion:

Q: What is this file used for?

A: It builds an operator-independent module, which may be used by Multiexecutor or other modules.