[WIP] WebGPU EP [skip ci] #21904

Draft · wants to merge 142 commits into base: main

Changes from all commits · 142 commits
4037bd4
[WIP] WebGPU EP initial commit
fs-eire Aug 28, 2024
9c36250
update C-API
fs-eire Aug 28, 2024
3a0756d
fix build break
fs-eire Aug 28, 2024
5199e98
add an empty symbols.txt file
fs-eire Aug 28, 2024
1c68dbd
fix an error in doc
fs-eire Aug 29, 2024
7db03de
remove string_join.h in favor of absl::StrJoin
fs-eire Aug 29, 2024
6a373c2
fix DLL copy
fs-eire Aug 29, 2024
ee42bba
update doc: require --skip_tests
fs-eire Aug 29, 2024
5fac202
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Aug 29, 2024
3f46e5c
update dawn version
fs-eire Aug 29, 2024
9f61279
disable Tint tests
fs-eire Aug 29, 2024
6bb6335
fix one build break in Linux
fs-eire Aug 29, 2024
d839dbc
remove unused variables
fs-eire Aug 30, 2024
b70943d
make webgpu build on linux and known to most tools (#21937)
guschmue Aug 30, 2024
c33ac2e
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Aug 30, 2024
8437267
revert type of ShaderVariable::rank_ to int
fs-eire Aug 30, 2024
3caf032
output Impl() for variables
fs-eire Aug 30, 2024
84494c4
code formatting
fs-eire Aug 30, 2024
aa70163
better format of Uniform
fs-eire Aug 30, 2024
d772db7
revise document
fs-eire Aug 30, 2024
6ef3dad
more build fix for linux
fs-eire Aug 31, 2024
a56f6c3
apply formatter
fs-eire Aug 31, 2024
12cd79d
simple test runner
fs-eire Aug 31, 2024
14c8966
Program macros update - allow extend
fs-eire Aug 31, 2024
4fff35f
fix BucketCacheManager
fs-eire Sep 1, 2024
4fd8ad1
add a method to get logger from ComputeContext
fs-eire Sep 1, 2024
3bd92ad
add verbose log for cache key
fs-eire Sep 1, 2024
6a1bbfe
revise suite test
fs-eire Sep 1, 2024
947aee1
device lost handler
fs-eire Sep 1, 2024
99b2578
add '-a' and '-t' to test runner
fs-eire Sep 1, 2024
aa7b3f5
atol/rtol 0.0001 -> 0.001
fs-eire Sep 1, 2024
e659acd
Fix uniform
fs-eire Sep 2, 2024
6ad89c5
add some unary ops
fs-eire Sep 2, 2024
8361fc3
various fixes
fs-eire Sep 2, 2024
c89159d
fix workgroup_size, cache key stringify and indices type
fs-eire Sep 3, 2024
5ea5936
shape_uniforms preparation
fs-eire Sep 3, 2024
7d83054
allow uniforms of input/output shape/stride being added automatically
fs-eire Sep 3, 2024
7a64cc7
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Sep 3, 2024
1d53ac8
fix build (linux)
fs-eire Sep 3, 2024
4d52602
fix stride
fs-eire Sep 3, 2024
3761aad
fix "{res_name}_bi2o_{name}"
fs-eire Sep 3, 2024
351da84
Add Expand operator (#21933)
qjia7 Sep 3, 2024
0b7ce77
support onnxruntime_test_all
fs-eire Sep 3, 2024
33726b1
reflect change in WebGpuProviderFactoryCreator::Create signature (#21…
guschmue Sep 3, 2024
50ea9eb
compare the content of WEBGPU_BUFFER, not the address (#21967)
guschmue Sep 3, 2024
d6f6148
fix tanh
fs-eire Sep 3, 2024
626edaf
support size==0 for element wise operators
fs-eire Sep 4, 2024
8913da1
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Sep 4, 2024
bacc54c
use shared ComputeBroadcastOutputShape()
fs-eire Sep 4, 2024
7ecc5bb
add workgroup_idx
fs-eire Sep 4, 2024
ae836b1
expose name for shader variable
fs-eire Sep 4, 2024
243078b
add uniform for 1D variable
fs-eire Sep 5, 2024
4d48d28
fix GetElementAt with uniform
fs-eire Sep 5, 2024
dbe673b
document update folder
fs-eire Sep 5, 2024
38f182e
fix adapter/device creating: add toggles
fs-eire Sep 5, 2024
eb80f7c
more strict shape&stride usage check
fs-eire Sep 6, 2024
39d5509
fix vector realloc
fs-eire Sep 6, 2024
cd961c3
simplify cache hint interface.
fs-eire Sep 6, 2024
ddc2fbb
revise expand
fs-eire Sep 6, 2024
e8be835
revise unary
fs-eire Sep 6, 2024
bd7d592
Elu/Relu/LeakyRelu/ThresholdedRelu/Gelu
fs-eire Sep 6, 2024
eecac18
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Sep 6, 2024
601e50f
remove unused field in class Gelu
fs-eire Sep 6, 2024
8f36da2
remove outdated comments
fs-eire Sep 6, 2024
72ebd85
Clip
fs-eire Sep 7, 2024
a3244ae
fix rank in shader helper
fs-eire Sep 7, 2024
5a2ae8c
fix shader variable
fs-eire Sep 9, 2024
aa54ff8
move components number from variable to program
fs-eire Sep 9, 2024
969384d
mark components in cache key
fs-eire Sep 9, 2024
6b82486
Add FastGelu op (#21991)
qjia7 Sep 10, 2024
2b3e7c2
use 'set/add' as prefix for some functions
fs-eire Sep 10, 2024
ef0d53b
remove unnecessary cache hint for FastGelu
fs-eire Sep 10, 2024
c4ca47f
revise unary - expose consts in header
fs-eire Sep 10, 2024
8806d57
use path for header file
fs-eire Sep 10, 2024
0568e2b
a few revises to the code (#22047)
fs-eire Sep 10, 2024
b7a9c0e
use OrtMutex
fs-eire Sep 11, 2024
f65ade9
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Sep 11, 2024
d4a963d
[webgpu-native] Add transpose op (#21986)
axinging Sep 11, 2024
8b61532
PushErrorScope and PopErrorScope
fs-eire Sep 11, 2024
dce0f18
placeholder for setting proc table
fs-eire Sep 12, 2024
8978d89
Revert "placeholder for setting proc table"
fs-eire Sep 12, 2024
43ccaf4
allow setting "ValidationMode"
fs-eire Sep 12, 2024
eae4c3f
make shape/stride correct when component != 1
fs-eire Sep 13, 2024
b8c369d
expose number of components
fs-eire Sep 13, 2024
c3086d6
Fix build errors
skottmckay Sep 13, 2024
c5cf2ab
[WebGPU EP] Support Shape operator (#22095)
satyajandhyala Sep 14, 2024
0bc714f
[webgpu EP] Binary operators (#22112)
fs-eire Sep 17, 2024
4421676
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Sep 17, 2024
2e91a8b
use f32 for pow anyway
fs-eire Sep 17, 2024
87f9edb
Cast operator
fs-eire Sep 17, 2024
19ee9f3
do not use virtual function for getting ProgramMetadata
fs-eire Sep 17, 2024
d9f7f19
reshape, squeeze and unsqueeze
fs-eire Sep 18, 2024
07675cf
fix Cast and Clip
fs-eire Sep 18, 2024
dfab322
[webgpu-native] Add where op (#22014)
axinging Sep 20, 2024
cb9f3a4
fix linux build break
fs-eire Sep 20, 2024
207be92
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Sep 24, 2024
929725e
expose KernelContext
fs-eire Sep 25, 2024
c5e5af3
revise fast gelu
fs-eire Sep 25, 2024
82cd59e
expose Rank in IndicesHelper
fs-eire Sep 25, 2024
2393dbf
fix: move inline impl to .h
fs-eire Sep 25, 2024
9bdbd85
add const modifier
fs-eire Sep 25, 2024
0101ce8
remove toggle "disable_workgroup_init"
fs-eire Sep 25, 2024
3896706
set backend type to D3D12 since we always use dxc (win).
fs-eire Sep 25, 2024
f02e85a
update build configurations to webgpu EP (#22047)
fs-eire Sep 25, 2024
e5233ce
enable build pipeline on Windows for WebGPU
fs-eire Sep 26, 2024
0f7a5f6
[webgpu native] Add RotaryEmbedding op (#22194)
axinging Sep 27, 2024
41f6ff3
[webgpu native] Add transpose shared (#22098)
axinging Sep 27, 2024
b1b5e1f
[webgpu-native] Add gather (#22183)
qjia7 Sep 27, 2024
92a08e2
[Native-WebGPU] Add Concat (#22225)
satyajandhyala Sep 27, 2024
8da1f7a
[webgpu-native] Add MatmulNBits (#22150)
qjia7 Sep 27, 2024
f9b6b7c
[WebGPU-Native] Tile Operator (#22239)
prathikr Sep 30, 2024
c1ae1fd
use Abseil OStringStream in WebGPU EP string concat (#22241)
fs-eire Sep 30, 2024
b574f2c
Range
fs-eire Sep 30, 2024
14ea5db
webgpu: support MultiHeadAttention operator (#22144)
xhcao Sep 30, 2024
c70441e
[webgpu-native] support for webgpu layernorms (#22249)
guschmue Oct 1, 2024
468c720
nodejs binding support webgpu
fs-eire Oct 1, 2024
cbf106e
fix where
fs-eire Oct 1, 2024
bce7a98
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Oct 1, 2024
5086c7c
revert some changes that are not necessary
fs-eire Oct 1, 2024
fe7d3e4
revise perftest help msg
fs-eire Oct 1, 2024
d219bb7
[webgpu-native] Fix a few build errors on Linux (#22286)
snnn Oct 1, 2024
7f7d6da
format
fs-eire Oct 1, 2024
c561ed6
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Oct 1, 2024
27640e3
fix issues for e2e phi3 (#22287)
guschmue Oct 1, 2024
4129cd6
fix perf problem: force Flush by end of session
fs-eire Oct 3, 2024
5fd65e7
Uniform buffer mode: LazyRelease -> Simple
fs-eire Oct 3, 2024
dcf2062
nodejs binding support IO binding for webgpu
fs-eire Oct 3, 2024
481111b
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Oct 4, 2024
b84401d
fix matmul test after conflict resolve
fs-eire Oct 4, 2024
08434d2
a few build fixes
fs-eire Oct 4, 2024
1b01583
fix build break in android build
fs-eire Oct 4, 2024
130dc9b
fix duplicate "it"
fs-eire Oct 4, 2024
646a744
always disable DAWN_ENABLE_SPIRV_VALIDATION
fs-eire Oct 4, 2024
da6406b
Enable OBJC/OBJCXX for all projects if necessary
fs-eire Oct 5, 2024
53ff621
minimal webgpu io-binding support for python (#22334)
guschmue Oct 7, 2024
74b5131
reset dispatch count
fs-eire Oct 8, 2024
4f4efcb
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Oct 9, 2024
0a8f872
remove unnecessary initialization options in test
fs-eire Oct 9, 2024
3f104fb
support ORT profiling in node.js
fs-eire Oct 9, 2024
8261ca6
Merge remote-tracking branch 'origin/main' into fs-eire/webgpu-ep
fs-eire Oct 11, 2024
e7d05ba
[webgpu-native] support webgpu profiling (#22255)
qjia7 Oct 11, 2024
613ad6d
check ValidationMode for push/pop error scope
fs-eire Oct 11, 2024
16 changes: 16 additions & 0 deletions cmake/onnxruntime_unittests.cmake
@@ -1128,6 +1128,22 @@ if (NOT IOS)
LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
BUNDLE DESTINATION ${CMAKE_INSTALL_LIBDIR}
RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})

## TODO: remove this when merging to main branch
#
# should support better test runner
#
if (onnxruntime_USE_WEBGPU)
add_custom_command(
TARGET onnx_test_runner
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different
"${ONNXRUNTIME_ROOT}/test/providers/webgpu/test_webgpu.js"
"${ONNXRUNTIME_ROOT}/test/providers/webgpu/test_webgpu.bat"
"$<TARGET_FILE_DIR:onnx_test_runner>"
VERBATIM )
endif()

endif()

if (NOT onnxruntime_ENABLE_TRAINING_TORCH_INTEROP)
4 changes: 4 additions & 0 deletions js/node/CMakeLists.txt
@@ -34,6 +34,7 @@ include_directories(${CMAKE_SOURCE_DIR}/node_modules/node-addon-api)

# optional providers
option(USE_DML "Build with DirectML support" OFF)
option(USE_WEBGPU "Build with WebGPU support" OFF)
option(USE_CUDA "Build with CUDA support" OFF)
option(USE_TENSORRT "Build with TensorRT support" OFF)
option(USE_COREML "Build with CoreML support" OFF)
@@ -42,6 +43,9 @@ option(USE_QNN "Build with QNN support" OFF)
if(USE_DML)
add_compile_definitions(USE_DML=1)
endif()
if(USE_WEBGPU)
add_compile_definitions(USE_WEBGPU=1)
endif()
if(USE_CUDA)
add_compile_definitions(USE_CUDA=1)
endif()
10 changes: 7 additions & 3 deletions js/node/lib/backend.ts
@@ -3,12 +3,14 @@

import { Backend, InferenceSession, InferenceSessionHandler, SessionHandler } from 'onnxruntime-common';

import { Binding, binding } from './binding';
import { Binding, binding, initOrt } from './binding';

class OnnxruntimeSessionHandler implements InferenceSessionHandler {
#inferenceSession: Binding.InferenceSession;

constructor(pathOrBuffer: string | Uint8Array, options: InferenceSession.SessionOptions) {
initOrt();

this.#inferenceSession = new binding.InferenceSession();
if (typeof pathOrBuffer === 'string') {
this.#inferenceSession.loadModel(pathOrBuffer, options);
@@ -27,10 +29,12 @@ class OnnxruntimeSessionHandler implements InferenceSessionHandler {
readonly outputNames: string[];

startProfiling(): void {
// TODO: implement profiling
// startProfiling is a no-op.
//
// if sessionOptions.enableProfiling is true, profiling will be enabled when the model is loaded.
}
endProfiling(): void {
// TODO: implement profiling
this.#inferenceSession.endProfiling();
}

async run(
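The backend.ts change above settles the profiling contract: `startProfiling` is deliberately a no-op because profiling is switched on at model-load time via `sessionOptions.enableProfiling`, while `endProfiling` now delegates to the native session. A minimal sketch of that contract, using a mock in place of the native binding (`MockNativeSession` and `SessionHandlerSketch` are illustrative names, not part of this PR):

```typescript
// Sketch of the profiling contract: the real endProfiling is implemented
// in C++ and returns the profile file name; here a mock stands in for it.
interface NativeSessionLike {
  endProfiling(): void;
}

class MockNativeSession implements NativeSessionLike {
  profilingEnded = false;
  endProfiling(): void {
    this.profilingEnded = true;
  }
}

class SessionHandlerSketch {
  constructor(private readonly session: NativeSessionLike) {}

  startProfiling(): void {
    // Intentionally a no-op: profiling starts at model load when
    // sessionOptions.enableProfiling is true.
  }

  endProfiling(): void {
    this.session.endProfiling();
  }
}
```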
35 changes: 34 additions & 1 deletion js/node/lib/binding.ts
@@ -1,7 +1,7 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

import { InferenceSession, OnnxValue } from 'onnxruntime-common';
import { InferenceSession, OnnxValue, Tensor, TensorConstructor, env } from 'onnxruntime-common';

type SessionOptions = InferenceSession.SessionOptions;
type FeedsType = {
@@ -28,6 +28,8 @@ export declare namespace Binding {

run(feeds: FeedsType, fetches: FetchesType, options: RunOptions): ReturnType;

endProfiling(): void;

dispose(): void;
}

@@ -48,4 +50,35 @@ export const binding =
// eslint-disable-next-line @typescript-eslint/naming-convention
InferenceSession: Binding.InferenceSessionConstructor;
listSupportedBackends: () => Binding.SupportedBackend[];
initOrtOnce: (logLevel: number, tensorConstructor: TensorConstructor) => void;
};

let ortInitialized = false;
export const initOrt = (): void => {
if (!ortInitialized) {
ortInitialized = true;
let logLevel = 2;
if (env.logLevel) {
switch (env.logLevel) {
case 'verbose':
logLevel = 0;
break;
case 'info':
logLevel = 1;
break;
case 'warning':
logLevel = 2;
break;
case 'error':
logLevel = 3;
break;
case 'fatal':
logLevel = 4;
break;
default:
throw new Error(`Unsupported log level: ${env.logLevel}`);
}
}
binding.initOrtOnce(logLevel, Tensor);
}
};
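The level mapping inside `initOrt` above can be read as a small pure function: translate the onnxruntime-common `env.logLevel` string into the numeric level the native binding expects, default to warning when unset, and reject unknown values. A standalone sketch of that mapping (`toNativeLogLevel` is a hypothetical helper, not part of the PR):

```typescript
// Numeric levels the native binding expects, mirroring the switch in initOrt.
const LOG_LEVELS: Record<string, number> = {
  verbose: 0,
  info: 1,
  warning: 2,
  error: 3,
  fatal: 4,
};

// Returns the native level for an `env.logLevel` string; defaults to
// 2 ('warning') when unset, and throws on unknown values.
function toNativeLogLevel(logLevel?: string): number {
  if (!logLevel) {
    return 2;
  }
  const mapped = LOG_LEVELS[logLevel];
  if (mapped === undefined) {
    throw new Error(`Unsupported log level: ${logLevel}`);
  }
  return mapped;
}
```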
5 changes: 5 additions & 0 deletions js/node/script/build.ts
@@ -29,6 +29,8 @@ const ONNXRUNTIME_GENERATOR = buildArgs['onnxruntime-generator'];
const REBUILD = !!buildArgs.rebuild;
// --use_dml
const USE_DML = !!buildArgs.use_dml;
// --use_webgpu
const USE_WEBGPU = !!buildArgs.use_webgpu;
// --use_cuda
const USE_CUDA = !!buildArgs.use_cuda;
// --use_tensorrt
@@ -65,6 +67,9 @@ if (ONNXRUNTIME_GENERATOR && typeof ONNXRUNTIME_GENERATOR === 'string') {
if (USE_DML) {
args.push('--CDUSE_DML=ON');
}
if (USE_WEBGPU) {
args.push('--CDUSE_WEBGPU=ON');
}
if (USE_CUDA) {
args.push('--CDUSE_CUDA=ON');
}
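The build.ts change follows the script's existing flag-forwarding pattern: each enabled `--use_*` CLI flag becomes a `--CDUSE_*=ON` define passed through to CMake. A sketch of that pattern as a generic helper (`cmakeDefineArgs` is illustrative, not the script's actual structure):

```typescript
// Each enabled `--use_*` flag becomes a `--CDUSE_*=ON` CMake define,
// e.g. --use_webgpu -> --CDUSE_WEBGPU=ON.
function cmakeDefineArgs(flags: Record<string, boolean>): string[] {
  const args: string[] = [];
  for (const [name, enabled] of Object.entries(flags)) {
    if (enabled) {
      args.push(`--CDUSE_${name.toUpperCase()}=ON`);
    }
  }
  return args;
}
```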
118 changes: 100 additions & 18 deletions js/node/src/inference_session_wrap.cc
@@ -11,7 +11,12 @@
#include "tensor_helper.h"
#include <string>

Napi::FunctionReference InferenceSessionWrap::constructor;
Napi::FunctionReference InferenceSessionWrap::wrappedSessionConstructor;
Napi::FunctionReference InferenceSessionWrap::ortTensorConstructor;

Napi::FunctionReference& InferenceSessionWrap::GetTensorConstructor() {
return InferenceSessionWrap::ortTensorConstructor;
}

Napi::Object InferenceSessionWrap::Init(Napi::Env env, Napi::Object exports) {
#if defined(USE_DML) && defined(_WIN32)
@@ -23,28 +28,51 @@
Ort::Global<void>::api_ == nullptr, env,
"Failed to initialize ONNX Runtime API. It could happen when this nodejs binding was built with a higher version "
"ONNX Runtime but now runs with a lower version ONNX Runtime DLL(or shared library).");
auto ortEnv = new Ort::Env{ORT_LOGGING_LEVEL_WARNING, "onnxruntime-node"};
env.SetInstanceData(ortEnv);

// initialize binding
Napi::HandleScope scope(env);

Napi::Function func = DefineClass(
env, "InferenceSession",
{InstanceMethod("loadModel", &InferenceSessionWrap::LoadModel), InstanceMethod("run", &InferenceSessionWrap::Run),
{InstanceMethod("loadModel", &InferenceSessionWrap::LoadModel),
InstanceMethod("run", &InferenceSessionWrap::Run),
InstanceMethod("dispose", &InferenceSessionWrap::Dispose),
InstanceMethod("endProfiling", &InferenceSessionWrap::EndProfiling),
InstanceAccessor("inputNames", &InferenceSessionWrap::GetInputNames, nullptr, napi_default, nullptr),
InstanceAccessor("outputNames", &InferenceSessionWrap::GetOutputNames, nullptr, napi_default, nullptr)});

constructor = Napi::Persistent(func);
constructor.SuppressDestruct();
wrappedSessionConstructor = Napi::Persistent(func);
wrappedSessionConstructor.SuppressDestruct();
exports.Set("InferenceSession", func);

Napi::Function listSupportedBackends = Napi::Function::New(env, InferenceSessionWrap::ListSupportedBackends);
exports.Set("listSupportedBackends", listSupportedBackends);

Napi::Function initOrtOnce = Napi::Function::New(env, InferenceSessionWrap::InitOrtOnce);
exports.Set("initOrtOnce", initOrtOnce);

return exports;
}

Napi::Value InferenceSessionWrap::InitOrtOnce(const Napi::CallbackInfo& info) {
Napi::Env env = info.Env();
Napi::HandleScope scope(env);

int log_level = info[0].As<Napi::Number>().Int32Value();

Ort::Env* ortEnv = env.GetInstanceData<Ort::Env>();
if (ortEnv == nullptr) {
ortEnv = new Ort::Env{OrtLoggingLevel(log_level), "onnxruntime-node"};
env.SetInstanceData(ortEnv);
}

Napi::Function tensorConstructor = info[1].As<Napi::Function>();
ortTensorConstructor = Napi::Persistent(tensorConstructor);
ortTensorConstructor.SuppressDestruct();

return env.Undefined();
}

InferenceSessionWrap::InferenceSessionWrap(const Napi::CallbackInfo& info)
: Napi::ObjectWrap<InferenceSessionWrap>(info), initialized_(false), disposed_(false), session_(nullptr), defaultRunOptions_(nullptr) {}

@@ -118,6 +146,12 @@
? typeInfo.GetTensorTypeAndShapeInfo().GetElementType()
: ONNX_TENSOR_ELEMENT_DATA_TYPE_UNDEFINED);
}

// cache preferred output locations
ParsePreferredOutputLocations(info[argsLength - 1].As<Napi::Object>(), outputNames_, preferredOutputLocations_);
if (preferredOutputLocations_.size() > 0) {
ioBinding_ = std::make_unique<Ort::IoBinding>(*session_);

}
} catch (Napi::Error const& e) {
throw e;
} catch (std::exception const& e) {
@@ -167,15 +201,16 @@
std::vector<bool> reuseOutput;
size_t inputIndex = 0;
size_t outputIndex = 0;
OrtMemoryInfo* memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault).release();
Ort::MemoryInfo cpuMemoryInfo = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeDefault);
Ort::MemoryInfo gpuBufferMemoryInfo{"WebGPU_Buffer", OrtDeviceAllocator, 0, OrtMemTypeDefault};

try {
for (auto& name : inputNames_) {
if (feed.Has(name)) {
inputIndex++;
inputNames_cstr.push_back(name.c_str());
auto value = feed.Get(name);
inputValues.push_back(NapiValueToOrtValue(env, value, memory_info));
inputValues.push_back(NapiValueToOrtValue(env, value, cpuMemoryInfo, gpuBufferMemoryInfo));
}
}
for (auto& name : outputNames_) {
@@ -184,7 +219,7 @@
outputNames_cstr.push_back(name.c_str());
auto value = fetch.Get(name);
reuseOutput.push_back(!value.IsNull());
outputValues.emplace_back(value.IsNull() ? Ort::Value{nullptr} : NapiValueToOrtValue(env, value, memory_info));
outputValues.emplace_back(value.IsNull() ? Ort::Value{nullptr} : NapiValueToOrtValue(env, value, cpuMemoryInfo, gpuBufferMemoryInfo));

}
}

@@ -193,19 +228,47 @@
runOptions = Ort::RunOptions{};
ParseRunOptions(info[2].As<Napi::Object>(), runOptions);
}
if (preferredOutputLocations_.size() == 0) {
session_->Run(runOptions == nullptr ? *defaultRunOptions_.get() : runOptions,
inputIndex == 0 ? nullptr : &inputNames_cstr[0], inputIndex == 0 ? nullptr : &inputValues[0],
inputIndex, outputIndex == 0 ? nullptr : &outputNames_cstr[0],
outputIndex == 0 ? nullptr : &outputValues[0], outputIndex);

session_->Run(runOptions == nullptr ? *defaultRunOptions_.get() : runOptions,
inputIndex == 0 ? nullptr : &inputNames_cstr[0], inputIndex == 0 ? nullptr : &inputValues[0],
inputIndex, outputIndex == 0 ? nullptr : &outputNames_cstr[0],
outputIndex == 0 ? nullptr : &outputValues[0], outputIndex);
Napi::Object result = Napi::Object::New(env);

Napi::Object result = Napi::Object::New(env);
for (size_t i = 0; i < outputIndex; i++) {
result.Set(outputNames_[i], OrtValueToNapiValue(env, std::move(outputValues[i])));
}
return scope.Escape(result);
} else {
// IO binding
ORT_NAPI_THROW_ERROR_IF(preferredOutputLocations_.size() != outputNames_.size(), env,
"Preferred output locations must have the same size as output names.");

for (size_t i = 0; i < outputIndex; i++) {
result.Set(outputNames_[i], OrtValueToNapiValue(env, outputValues[i]));
}
for (size_t i = 0; i < inputIndex; i++) {
ioBinding_->BindInput(inputNames_cstr[i], inputValues[i]);
}
for (size_t i = 0; i < outputIndex; i++) {
// TODO: support preallocated output tensor (outputValues[i])


if (preferredOutputLocations_[i] == DATA_LOCATION_GPU_BUFFER) {
ioBinding_->BindOutput(outputNames_cstr[i], gpuBufferMemoryInfo);
} else {
ioBinding_->BindOutput(outputNames_cstr[i], cpuMemoryInfo);
}
}

session_->Run(runOptions == nullptr ? *defaultRunOptions_.get() : runOptions, *ioBinding_);

auto outputs = ioBinding_->GetOutputValues();
ORT_NAPI_THROW_ERROR_IF(outputs.size() != outputIndex, env, "Output count mismatch.");

return scope.Escape(result);
Napi::Object result = Napi::Object::New(env);
for (size_t i = 0; i < outputIndex; i++) {
result.Set(outputNames_[i], OrtValueToNapiValue(env, std::move(outputs[i])));

}
return scope.Escape(result);
}
} catch (Napi::Error const& e) {
throw e;
} catch (std::exception const& e) {
@@ -218,13 +281,29 @@
ORT_NAPI_THROW_ERROR_IF(!this->initialized_, env, "Session is not initialized.");
ORT_NAPI_THROW_ERROR_IF(this->disposed_, env, "Session already disposed.");

this->ioBinding_.reset(nullptr);

this->defaultRunOptions_.reset(nullptr);
this->session_.reset(nullptr);

this->disposed_ = true;
return env.Undefined();
}

Napi::Value InferenceSessionWrap::EndProfiling(const Napi::CallbackInfo& info) {
Napi::Env env = info.Env();
ORT_NAPI_THROW_ERROR_IF(!this->initialized_, env, "Session is not initialized.");
ORT_NAPI_THROW_ERROR_IF(this->disposed_, env, "Session already disposed.");

Napi::EscapableHandleScope scope(env);

Ort::AllocatorWithDefaultOptions allocator;

auto filename = session_->EndProfilingAllocated(allocator);
Napi::String filenameValue = Napi::String::From(env, filename.get());
return scope.Escape(filenameValue);
}

Napi::Value InferenceSessionWrap::ListSupportedBackends(const Napi::CallbackInfo& info) {
Napi::Env env = info.Env();
Napi::EscapableHandleScope scope(env);
Expand All @@ -242,6 +321,9 @@
#ifdef USE_DML
result.Set(result.Length(), createObject("dml", true));
#endif
#ifdef USE_WEBGPU
result.Set(result.Length(), createObject("webgpu", true));
#endif
#ifdef USE_CUDA
result.Set(result.Length(), createObject("cuda", false));
#endif
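The IO-binding branch added to `Run` in inference_session_wrap.cc picks a `MemoryInfo` per output from the cached preferred locations ('gpu-buffer' binds to the WebGPU buffer, anything else to CPU) and throws when the location count does not match the output count. A standalone sketch of that selection logic (names here are illustrative, not the binding's API):

```typescript
type OutputLocation = 'cpu' | 'gpu-buffer';

// Chooses the MemoryInfo each output is bound to, mirroring the C++ branch:
// 'gpu-buffer' selects the WebGPU buffer MemoryInfo, anything else CPU.
// Throws on a count mismatch, like the ORT_NAPI_THROW_ERROR_IF check.
function resolveBindTargets(outputNames: string[], preferred: OutputLocation[]): string[] {
  if (preferred.length !== outputNames.length) {
    throw new Error('Preferred output locations must have the same size as output names.');
  }
  return preferred.map((loc) => (loc === 'gpu-buffer' ? 'WebGPU_Buffer' : 'Cpu'));
}
```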