Update release branch with latest test fixes (#1339)
* chore: additional options for perf_run tool

Signed-off-by: dperi <dperi@nvidia.com>

* feat: Add fx2trt backend and revamp current perf utility to accept CLI arguments

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: Refactor fx2trt functionality

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: Fix fp16 functionality for fx2trt backend

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: refactor

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: minor change

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* refactor: Refactor perf_run and add internal benchmark scripts

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: minor refactor

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: Apply precommit tooling

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: Fix data loader issues and nox file paths

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: rebase and minor changes

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: Fix reporting to a file setting

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* Update lower.py (#1324)

* docs: [Automated] Regenerating documentation for e374eb1

Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>

* refactor: Refactor testing to use cosine similarity, remove redundant models, and restructure

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: move to cosine similarity comparison

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>
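
A minimal sketch of the comparison idea behind these test changes (the helper name and the 0.99 threshold here are illustrative, not taken from this PR):

import torch

def outputs_match(torch_out: torch.Tensor, trt_out: torch.Tensor,
                  threshold: float = 0.99) -> bool:
    # Compare flattened outputs by cosine similarity rather than strict
    # elementwise tolerance; more robust to small numerical drift between
    # Torch and TensorRT execution.
    sim = torch.nn.functional.cosine_similarity(
        torch_out.flatten(), trt_out.flatten(), dim=0)
    return sim.item() >= threshold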

* refactor: Refactor nox file testing

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: add missing scripts

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: Linter fixes

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* fix!: Fixed Windows compilation failures

Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>

* chore: Minor fix

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: use rn18 instead of rn50

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* docs: [Automated] Regenerating documentation for a1a4786

Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>

* chore: Add cpp tests with cosine sim

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: linter fixes

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* [feat] Add support for argmax and argmin (#1312)

* [feat] Add support for argmax and argmin

Adds support for aten::argmax and aten::argmin.
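
For illustration, a hypothetical usage sketch (assumes a CUDA-enabled build and the Python torch_tensorrt.compile entry point; not code from this PR):

import torch
import torch_tensorrt  # assumes a build that includes these converters

class ArgMax(torch.nn.Module):
    def forward(self, x):
        # Scripted form lowers to aten::argmax, handled by the new converter
        return torch.argmax(x, dim=1)

model = torch.jit.script(ArgMax().eval().cuda())
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((8, 10), dtype=torch.float32)],
)
print(trt_model(torch.randn(8, 10, device="cuda")))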

* move max.cpp tests to test_max.cpp; no functional change

* fix permissions on max.cpp

* docs: [Automated] Regenerating documentation for 9db2852

Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>

* chore: Deepcopy other objects

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* fix: Fix deepcopy issues of PTQ calibrators

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: linter fixes

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: Adding a guideline to build on Windows platform (#1337)

* chore: Adding Windows build guideline

Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>

* chore: Fix formatting

Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>

Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>

* docs: [Automated] Regenerating documentation for 00a1f03

Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>

* chore: minor fixes

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: Linter fixes

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* chore: Linter fixes

Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>

* docs: [Automated] Regenerating documentation for 1efe4b1

Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>

* docs: [Automated] Regenerating documentation for 10b9ecd

Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>

* add support for aten::reciprocal(int) (#1308)

* docs: [Automated] Regenerating documentation for 096fd41

Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>

Signed-off-by: dperi <dperi@nvidia.com>
Signed-off-by: Dheeraj Peri <peri.dheeraj@gmail.com>
Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>
Signed-off-by: Anurag Dixit <a.dixit91@gmail.com>
Co-authored-by: dperi <dperi@nvidia.com>
Co-authored-by: Dheeraj Peri <peri.dheeraj@gmail.com>
Co-authored-by: Wei <wwei6@fb.com>
Co-authored-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>
Co-authored-by: Anurag Dixit <a.dixit91@gmail.com>
Co-authored-by: Michael Feliz <104801882+mfeliz-cruise@users.noreply.github.com>
7 people authored Sep 9, 2022
1 parent 087f97d commit bfbaebe
Showing 162 changed files with 2,627 additions and 668 deletions.
1 change: 1 addition & 0 deletions .circleci/config.yml
@@ -435,6 +435,7 @@ commands:
mkdir -p /tmp/artifacts/test_results
cd tests/py
pytest --junitxml=/tmp/artifacts/test_results/api/api_test_results.xml api/
pytest --junitxml=/tmp/artifacts/test_results/models/models_test_results.xml models/
pytest --junitxml=/tmp/artifacts/test_results/integrations/integrations_test_results.xml integrations/
cd ~/project
2 changes: 1 addition & 1 deletion .github/workflows/docgen.yml
@@ -31,7 +31,7 @@ jobs:
- name: Set up Python 3.9.4
uses: actions/setup-python@v2
with:
python-version: 3.9.4
python-version: 3.9.4
- uses: actions/checkout@v2
with:
ref: ${{github.head_ref}}
2 changes: 1 addition & 1 deletion .github/workflows/linter.yml
@@ -39,7 +39,7 @@ jobs:
pip3 install -r $GITHUB_WORKSPACE/.github/scripts/requirements.txt
pip3 install -r $GITHUB_WORKSPACE/requirements-dev.txt
- name: Lint C++
run: |
run: |
cd $GITHUB_WORKSPACE
python3 $GITHUB_WORKSPACE/.github/scripts/run_cpp_linter.py
env:
130 changes: 89 additions & 41 deletions core/conversion/converters/impl/max.cpp
@@ -13,47 +13,95 @@ namespace conversion {
namespace converters {
namespace impl {
namespace {
auto max_registrations TORCHTRT_UNUSED = RegisterNodeConversionPatterns().pattern(
{"aten::max.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)",
[](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
auto self = args[0].ITensorOrFreeze(ctx);
auto dim = args[1].unwrapToInt();
auto keep_dims = args[2].unwrapToBool();
auto selfDim = util::toVec(self->getDimensions());
if (dim < 0) {
dim = selfDim.size() + dim;
}
uint32_t shiftDim = 1 << dim;
auto TopKOperation = nvinfer1::TopKOperation::kMAX;
auto topk_layer = ctx->net->addTopK(*self, TopKOperation, 1, shiftDim);
TORCHTRT_CHECK(topk_layer, "Unable to create max layer from node: " << *n);
auto topk_dims = util::toVec(topk_layer->getOutput(0)->getDimensions());

nvinfer1::ITensor* out0 = nullptr;
nvinfer1::ITensor* out1 = nullptr;
if (!keep_dims) {
if (topk_dims[dim] == 1) {
auto squeeze_layer = ctx->net->addShuffle(*topk_layer->getOutput(0));
squeeze_layer->setReshapeDimensions(util::squeezeDims(topk_layer->getOutput(0)->getDimensions(), dim));
TORCHTRT_CHECK(squeeze_layer, "Unable to create squeeze_layer layer from node: " << *n);
out0 = ctx->AssociateValueAndTensor(n->outputs()[0], squeeze_layer->getOutput(0));

auto squeeze_layer_indices = ctx->net->addShuffle(*topk_layer->getOutput(1));
squeeze_layer_indices->setReshapeDimensions(
util::squeezeDims(topk_layer->getOutput(1)->getDimensions(), dim));
TORCHTRT_CHECK(squeeze_layer_indices, "Unable to create squeeze_layer_indices layer from node: " << *n);
out1 = ctx->AssociateValueAndTensor(n->outputs()[1], squeeze_layer_indices->getOutput(0));
}
} else {
out0 = ctx->AssociateValueAndTensor(n->outputs()[0], topk_layer->getOutput(0));
out1 = ctx->AssociateValueAndTensor(n->outputs()[1], topk_layer->getOutput(1));
}

LOG_DEBUG("Output tensor(0) shape: " << out0->getDimensions());
LOG_DEBUG("Output tensor(1) shape: " << out1->getDimensions());

return true;
}});

bool min_max_dim(ConversionCtx* ctx, const torch::jit::Node* n, args& args, nvinfer1::TopKOperation topKOperation) {
auto self = args[0].ITensorOrFreeze(ctx);
auto dim = args[1].unwrapToInt();
auto keep_dims = args[2].unwrapToBool();
auto selfDim = util::toVec(self->getDimensions());
if (dim < 0) {
dim = selfDim.size() + dim;
}
uint32_t reduce_axes_mask = 1 << dim;
auto topk_layer = ctx->net->addTopK(*self, topKOperation, 1, reduce_axes_mask);
TORCHTRT_CHECK(topk_layer, "Unable to create topk layer from node: " << *n);
auto topk_dims = util::toVec(topk_layer->getOutput(0)->getDimensions());

nvinfer1::ITensor* out0 = nullptr;
nvinfer1::ITensor* out1 = nullptr;
if (!keep_dims) {
TORCHTRT_CHECK(topk_dims[dim] == 1, "Unexpected size in squeeze dimension. Expected: 1 Actual: " << topk_dims[dim]);
auto squeeze_layer = ctx->net->addShuffle(*topk_layer->getOutput(0));
squeeze_layer->setReshapeDimensions(util::squeezeDims(topk_layer->getOutput(0)->getDimensions(), dim));
TORCHTRT_CHECK(squeeze_layer, "Unable to create squeeze_layer layer from node: " << *n);
out0 = ctx->AssociateValueAndTensor(n->outputs()[0], squeeze_layer->getOutput(0));

auto squeeze_layer_indices = ctx->net->addShuffle(*topk_layer->getOutput(1));
squeeze_layer_indices->setReshapeDimensions(util::squeezeDims(topk_layer->getOutput(1)->getDimensions(), dim));
TORCHTRT_CHECK(squeeze_layer_indices, "Unable to create squeeze_layer_indices layer from node: " << *n);
out1 = ctx->AssociateValueAndTensor(n->outputs()[1], squeeze_layer_indices->getOutput(0));
} else {
out0 = ctx->AssociateValueAndTensor(n->outputs()[0], topk_layer->getOutput(0));
out1 = ctx->AssociateValueAndTensor(n->outputs()[1], topk_layer->getOutput(1));
}

LOG_DEBUG("Output tensor(0) shape: " << out0->getDimensions());
LOG_DEBUG("Output tensor(1) shape: " << out1->getDimensions());

return true;
}

bool arg_min_max(ConversionCtx* ctx, const torch::jit::Node* n, args& args, nvinfer1::TopKOperation topKOperation) {
auto self = args[0].ITensorOrFreeze(ctx);
auto dim = args[1].unwrapToInt();
auto keep_dims = args[2].unwrapToBool();
auto selfDim = util::toVec(self->getDimensions());
if (dim < 0) {
dim = selfDim.size() + dim;
}
uint32_t reduce_axes_mask = 1 << dim;
auto topk_layer = ctx->net->addTopK(*self, topKOperation, 1, reduce_axes_mask);
TORCHTRT_CHECK(topk_layer, "Unable to create topk layer from node: " << *n);
auto topk_dims = util::toVec(topk_layer->getOutput(0)->getDimensions());

nvinfer1::ITensor* out = nullptr;
if (!keep_dims) {
TORCHTRT_CHECK(topk_dims[dim] == 1, "Unexpected size in squeeze dimension. Expected: 1 Actual: " << topk_dims[dim]);
auto squeeze_layer_indices = ctx->net->addShuffle(*topk_layer->getOutput(1));
squeeze_layer_indices->setReshapeDimensions(util::squeezeDims(topk_layer->getOutput(1)->getDimensions(), dim));
TORCHTRT_CHECK(squeeze_layer_indices, "Unable to create squeeze_layer_indices layer from node: " << *n);
out = ctx->AssociateValueAndTensor(n->outputs()[0], squeeze_layer_indices->getOutput(0));
} else {
out = ctx->AssociateValueAndTensor(n->outputs()[0], topk_layer->getOutput(1));
}

LOG_DEBUG("Output tensor shape: " << out->getDimensions());

return true;
}

auto max_registrations TORCHTRT_UNUSED =
RegisterNodeConversionPatterns()
.pattern(
{"aten::max.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)",
[](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
return min_max_dim(ctx, n, args, nvinfer1::TopKOperation::kMAX);
}})
.pattern(
{"aten::min.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)",
[](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
return min_max_dim(ctx, n, args, nvinfer1::TopKOperation::kMIN);
}})
.pattern(
{"aten::argmax(Tensor self, int dim, bool keepdim=False) -> (Tensor)",
[](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
return arg_min_max(ctx, n, args, nvinfer1::TopKOperation::kMAX);
}})
.pattern(
{"aten::argmin(Tensor self, int dim, bool keepdim=False) -> (Tensor)",
[](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
return arg_min_max(ctx, n, args, nvinfer1::TopKOperation::kMIN);
}});
} // namespace
} // namespace impl
} // namespace converters
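
For reference (not part of this diff), the PyTorch semantics the converters above reproduce. TensorRT's TopK always keeps the reduced dimension, so the converters add a squeeze (shuffle) layer when keepdim is false; a small sketch:

import torch

x = torch.tensor([[1.0, 5.0, 3.0],
                  [4.0, 2.0, 6.0]])

# aten::max.dim / aten::min.dim return a (values, indices) pair.
values, indices = torch.max(x, dim=1, keepdim=False)
print(values.shape)            # torch.Size([2])    -> squeeze layers fire
values, indices = torch.max(x, dim=1, keepdim=True)
print(values.shape)            # torch.Size([2, 1]) -> TopK outputs used as-is

# aten::argmax / aten::argmin return only the indices (TopK output 1).
print(torch.argmax(x, dim=1))  # tensor([1, 2])

# addTopK expects the reduction axes as a bitmask: bit d selects axis d,
# which is why the converters compute `1 << dim` after normalizing dim.
dim = 1
assert (1 << dim) == 0b10
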
16 changes: 15 additions & 1 deletion core/conversion/converters/impl/unary.cpp
@@ -49,6 +49,21 @@ auto abs_registration TORCHTRT_UNUSED = RegisterNodeConversionPatterns().pattern
}
}});

auto reciprocal_registration TORCHTRT_UNUSED = RegisterNodeConversionPatterns().pattern(
{"aten::reciprocal(Tensor self) -> Tensor", [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
auto in = args[0].ITensorOrFreeze(ctx);
if (in->getType() == nvinfer1::DataType::kINT32) {
// pytorch implicitly casts to float for aten::reciprocal(int)
in = castITensor(ctx, in, nvinfer1::DataType::kFLOAT);
}
auto unary_layer = ctx->net->addUnary(*in, nvinfer1::UnaryOperation::kRECIP);
TORCHTRT_CHECK(unary_layer, "Unable to create recip layer from node: " << *n);
unary_layer->setName(util::node_info(n).c_str());
auto out_tensor = ctx->AssociateValueAndTensor(n->outputs()[0], unary_layer->getOutput(0));
LOG_DEBUG("Output tensor shape: " << out_tensor->getDimensions());
return true;
}});

#define convert(unary, trt_type) \
auto unary##_registrations TORCHTRT_UNUSED = RegisterNodeConversionPatterns().pattern( \
{"aten::" #unary "(Tensor self) -> Tensor", \
@@ -74,7 +89,6 @@ convert(sinh, kSINH);
convert(tan, kTAN);
convert(atan, kATAN);
convert(floor, kFLOOR);
convert(reciprocal, kRECIP);
convert(log, kLOG);
convert(ceil, kCEIL);
convert(sqrt, kSQRT);
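
For context (not part of this diff), the PyTorch behavior that motivates the kINT32-to-kFLOAT cast in the new reciprocal registration, sketched:

import torch

x = torch.tensor([1, 2, 4])   # integer input
y = torch.reciprocal(x)       # implicitly promoted to floating point
print(y.dtype)                # torch.float32
print(y)                      # tensor([1.0000, 0.5000, 0.2500])
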
2 changes: 1 addition & 1 deletion core/partitioning/shape_analysis.cpp
@@ -167,7 +167,7 @@ void getSegmentsOutputByRunning(
}
if (cur_ivalue.toTensor().sizes().size() == 0) {
// handle Scalar types, which has sizes of []
input_shapes.push_back(util::toVec(util::toDims(c10::List<long int>({1}))));
input_shapes.push_back(util::toVec(util::toDims(c10::List<int64_t>({1}))));
} else {
input_shapes.push_back(util::toVec(util::toDims(cur_ivalue.toTensor().sizes())));
}
2 changes: 1 addition & 1 deletion cpp/bin/torchtrtc/main.cpp
@@ -35,7 +35,7 @@ bool unload_library(void* custom_lib) {
bool success = false;
#if defined(_WIN32)
// Returns status non-zero for success
success = FreeLibrary(custom_lib) ? true : false;
success = FreeLibrary((HMODULE)custom_lib) ? true : false;
#else
success = dlclose(custom_lib) ? false : true;
#endif
8 changes: 4 additions & 4 deletions cpp/include/torch_tensorrt/torch_tensorrt.h
@@ -365,7 +365,7 @@ class TensorFormat {
* signifying a static input shape or a set of three input shapes representing
* the min, optimal and max input shapes allowed for the engine.
*/
struct TORCHTRT_API Input : torch::CustomClassHolder {
struct Input : torch::CustomClassHolder {
/// Minimum acceptable input size into the engine
std::vector<int64_t> min_shape;
/// Optimal input size into the engine (size optimized for given kernels accept any size in min max range)
@@ -520,7 +520,7 @@
*
* This struct can either hold a complex inputs of shape or a flattened one,
*/
struct TORCHTRT_API GraphInputs {
struct GraphInputs {
torch::jit::IValue input_signature; // nested Input, full input spec
std::vector<Input> inputs; // flatten input spec
};
@@ -592,14 +592,14 @@ struct CompileSpec {
*
* @param inputs
*/
CompileSpec(std::vector<Input> inputs);
TORCHTRT_API CompileSpec(std::vector<Input> inputs);

/**
* @brief Construct a new Compile Spec object from IValue which represents the nesting of input tensors for a module.
*
* @param input_signature
*/
CompileSpec(torch::jit::IValue input_signature);
TORCHTRT_API CompileSpec(torch::jit::IValue input_signature);
// Defaults should reflect TensorRT defaults for BuilderConfig

/**
2 changes: 1 addition & 1 deletion docs/_cpp_api/classtorch__tensorrt_1_1DataType.html
@@ -199,7 +199,7 @@


<div class="version">
master (1.2.0a0+51a991e)
master (1.2.0a0+096fd41)
</div>


2 changes: 1 addition & 1 deletion docs/_cpp_api/classtorch__tensorrt_1_1TensorFormat.html
@@ -199,7 +199,7 @@


<div class="version">
master (1.2.0a0+51a991e)
master (1.2.0a0+096fd41)
</div>


2 changes: 1 addition & 1 deletion docs/_cpp_api/dir_cpp.html
@@ -197,7 +197,7 @@


<div class="version">
master (1.2.0a0+51a991e)
master (1.2.0a0+096fd41)
</div>


2 changes: 1 addition & 1 deletion docs/_cpp_api/dir_cpp_include.html
@@ -197,7 +197,7 @@


<div class="version">
master (1.2.0a0+51a991e)
master (1.2.0a0+096fd41)
</div>


2 changes: 1 addition & 1 deletion docs/_cpp_api/dir_cpp_include_torch_tensorrt.html
@@ -197,7 +197,7 @@


<div class="version">
master (1.2.0a0+51a991e)
master (1.2.0a0+096fd41)
</div>

