Introduce MLResource for common compute call
Leave MLInput only for dynamic input shapes.
huningxin committed May 31, 2021
1 parent a9e6dff commit d0e59a6
Showing 2 changed files with 53 additions and 51 deletions.
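For context on what call sites look like after this change: statically shaped tensors are passed to compute() as bare MLResource values (typed arrays or textures), and the MLInput dictionary, whose buffer member is renamed from `data` to `resource`, is only needed when dimensions are supplied at compute time. The sketch below illustrates both styles; the `graph` object, the 'a'/'c' operand names, and the shapes are illustrative assumptions, not taken from this diff.

```js
// Static shapes: pass the MLResource (here, typed arrays) directly by name.
const inputs = {'a': new Float32Array(4).fill(1.0)};
const outputs = {'c': new Float32Array(4)};
graph.compute(inputs, outputs);

// Dynamic shapes: wrap the buffer in an MLInput and give the dimensions at
// compute time; note the member is `resource` rather than the former `data`.
const dynamicInputs = {
  'a': {resource: new Float32Array(6), dimensions: [2, 3]},
};
graph.compute(dynamicInputs, {'c': new Float32Array(6)});
```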
explainer.md: 16 changes (8 additions, 8 deletions)
@@ -45,8 +45,8 @@ const graph = builder.build({'C': C});
const bufferA = new Float32Array(4).fill(1.0);
const bufferB = new Float32Array(4).fill(0.8);
const bufferC = new Float32Array(4);
-const inputs = {'A': {data: bufferA}, 'B': {data: bufferB}};
-const outputs = {'C': {data: BufferC}};
+const inputs = {'A': bufferA, 'B': bufferB};
+const outputs = {'C': bufferC};
graph.compute(inputs, outputs);
// The computed result of [[1, 1], [1, 1]] is in the buffer associated with
// the output operand.
@@ -144,14 +144,14 @@ export class NSNet2 {

compute(inputBuffer, initialState92Buffer, initialState155Buffer, outputBuffer, gru94Buffer, gru157Buffer) {
const inputs = {
-'input': {data: inputBuffer},
-'initialState92': {data: initialState92Buffer},
-'initialState155': {data: initialState155Buffer},
+'input': inputBuffer,
+'initialState92': initialState92Buffer,
+'initialState155': initialState155Buffer,
};
const outputs = {
-'output': {data: outputBuffer},
-'gru94': {data: gru94Buffer},
-'gru157': {data: gru157Buffer}
+'output': outputBuffer,
+'gru94': gru94Buffer,
+'gru157': gru157Buffer
};
return this.graph.compute(inputs, outputs);
}
index.bs: 88 changes (45 additions, 43 deletions)
@@ -1639,17 +1639,15 @@ partial interface MLGraphBuilder {
The {{MLGraph}} interface represents a compiled computational graph. A compiled graph once constructed is immutable and cannot be subsequently changed.

<script type=idl>
+typedef (MLBufferView or WebGLTexture or GPUTexture) MLResource;
+
dictionary MLInput {
-required (MLBufferView or WebGLTexture or GPUTexture) data;
+required MLResource resource;
sequence<long> dimensions;
};

-dictionary MLOutput {
-required (MLBufferView or WebGLTexture or GPUTexture) data;
-};

-typedef record<DOMString, MLInput> MLNamedInputs;
-typedef record<DOMString, MLOutput> MLNamedOutputs;
+typedef record<DOMString, (MLResource or MLInput)> MLNamedInputs;
+typedef record<DOMString, MLResource> MLNamedOutputs;

[SecureContext, Exposed=(Window, DedicatedWorker)]
interface MLGraph {
@@ -1664,11 +1662,11 @@ interface MLGraph {
::
The context of type {{MLContext}} associated with this {{MLGraph}}.

-: <dfn>\[[inputOperands]]</dfn> of type [=record=]&lt;{{DOMString}}, {{MLOperandDescriptor}}&gt;
+: <dfn>\[[inputDescriptors]]</dfn> of type [=record=]&lt;{{DOMString}}, {{MLOperandDescriptor}}&gt;
::
Maps the name of an input {{MLOperand}} to its {{MLOperandDescriptor}} for all input {{MLOperand}}s of this {{MLGraph}}.

-: <dfn>\[[outputOperands]]</dfn> of type [=sequence=]&lt;{{DOMString}}&gt;
+: <dfn>\[[outputNames]]</dfn> of type [=sequence=]&lt;{{DOMString}}&gt;
::
Contains the names of all output {{MLOperand}}s of this {{MLGraph}}.

@@ -1696,57 +1694,61 @@ interface MLGraph {
1. If any of the following requirements are unmet, then throw a {{DataError}} {{DOMException}} and stop.
<div class=validusage>
1. For each |key| -> |value| of |inputs|:
-1. |this|.{{MLGraph/[[inputOperands]]}}[|key|] must exist.
-1. Let |inputOperand| be |this|.{{MLGraph/[[inputOperands]]}}[|key|].
+1. |this|.{{MLGraph/[[inputDescriptors]]}}[|key|] must exist.
+1. Let |inputDesc| be |this|.{{MLGraph/[[inputDescriptors]]}}[|key|].
1. Let |inputSize| be 1.
-1. If |value|.{{MLInput/dimensions}} was given, then:
-1. The length of |value|.{{MLInput/dimensions}} must be the same as the length of |inputOperand|.{{MLOperandDescriptor/dimensions}}.
+1. If |value| is an {{MLInput}} and |value|.{{MLInput/dimensions}} is given, then:
+1. The length of |value|.{{MLInput/dimensions}} must be the same as the length of |inputDesc|.{{MLOperandDescriptor/dimensions}}.
1. Let |i| be 0.
1. While true:
1. Let |dimension| be |value|.{{MLInput/dimensions}}[|i|].
1. |dimension| must be greater than 0.
-1. If |inputOperand|.{{MLOperandDescriptor/dimensions}}[|i|] is greater than 0, then |dimension| must be equal to |inputOperand|.{{MLOperandDescriptor/dimensions}}[|i|].
-1. Let |inputSize| be the result of multiplication of |inputSize| and |dimension|.
+1. If |inputDesc|.{{MLOperandDescriptor/dimensions}}[|i|] is greater than 0, then |dimension| must be equal to |inputDesc|.{{MLOperandDescriptor/dimensions}}[|i|].
+1. Set |inputSize| to the product of |inputSize| and |dimension|.
1. Increment |i| by 1.
1. If |i| is equal to the length of |value|.{{MLInput/dimensions}}, then break.
1. Else:
-1. For each |dimension| of |inputOperand|.{{MLOperandDescriptor/dimensions}}:
+1. For each |dimension| of |inputDesc|.{{MLOperandDescriptor/dimensions}}:
1. The value of |dimension| must be greater than 0.
-1. Let |inputSize| be the result of multiplication of |inputSize| and |dimension|.
-1. If |value|.{{MLInput/data}} is an {{ArrayBufferView}}, then:
-1. The kind of |value|.{{MLInput/data}} must be compatible to |inputOperand|.{{MLOperandDescriptor/type}} according to [this table](#appendices-mloperandtype-arraybufferview-compatibility).
-1. The length of |value|.{{MLInput/data}} must be the same as |inputSize|.
+1. Set |inputSize| to the product of |inputSize| and |dimension|.
+1. If |value| is an {{MLInput}}, then let |resource| be |value|.{{MLInput/resource}}.
+1. If |value| is an {{MLResource}}, then let |resource| be |value|.
+1. If |resource| is an {{ArrayBufferView}}, then:
+1. The kind of |resource| must be compatible with |inputDesc|.{{MLOperandDescriptor/type}} according to [this table](#appendices-mloperandtype-arraybufferview-compatibility).
+1. The length of |resource| must be the same as |inputSize|.

1. For each |key| -> |value| of |outputs|:
-1. |this|.{{MLGraph/[[outputOperands]]}}[|key|] must exist.
-1. Let |outputOperand| be |this|.{{MLGraph/[[outputOperands]]}}[|key|].
-1. If |value|.{{MLOutput/data}} is an {{ArrayBufferView}}, then:
-1. The kind of |value|.{{MLOutput/data}} must be compatible to |outputOperand|.{{MLOperandDescriptor/type}} according to [this table](#appendices-mloperandtype-arraybufferview-compatibility).
+1. |this|.{{MLGraph/[[outputNames]]}}[|key|] must exist.
</div>
<!-- Compute -->
1. For each |key| -> |value| of |inputs|:
-1. Let |inputOperand| be |this|.{{MLGraph/[[inputOperands]]}}[|key|].
-1. Let |inputTensor| be a new tensor for |this|.{{MLGraph/[[implementation]]}} of data type that is compatible to |inputOperand|.{{MLOperandDescriptor/type}}.
-1. If |value|.{{MLInput/dimensions}} was given, then:
+1. Let |inputDesc| be |this|.{{MLGraph/[[inputDescriptors]]}}[|key|].
+1. Let |inputTensor| be a new tensor for |this|.{{MLGraph/[[implementation]]}} of data type that is compatible with |inputDesc|.{{MLOperandDescriptor/type}}.
+1. If |value| is an {{MLInput}} and |value|.{{MLInput/dimensions}} is given, then:
1. Set the dimensions of |inputTensor| to |value|.{{MLInput/dimensions}}.
1. Else:
-1. Set the dimensions of |inputTensor| to |inputOperand|.{{MLOperandDescriptor/dimensions}}.
-1. Set the values of |inputTensor| to the values of |value|.{{MLInput/data}}.
-1. Set the input of |this|.{{MLGraph/[[implementation]]}} for |key| to |inputTensor|.
+1. Set the dimensions of |inputTensor| to |inputDesc|.{{MLOperandDescriptor/dimensions}}.
+1. If |value| is an {{MLInput}}, then:
+1. Set the values of |inputTensor| to the values of |value|.{{MLInput/resource}}.
+1. If |value| is an {{MLResource}}, then:
+1. Set the values of |inputTensor| to the values of |value|.
+1. Set the input of |this|.{{MLGraph/[[implementation]]}} that is associated with |key| to |inputTensor|.
1. For each |key| -> |value| of |outputs|:
-1. Issue a compute request of |this|.{{MLGraph/[[implementation]]}} for output whose name is |key|.
+1. Issue a compute request for output of |this|.{{MLGraph/[[implementation]]}} that is associated with |key|.
1. Wait for the compute request to be completed.
1. If there is an error returned by |this|.{{MLGraph/[[implementation]]}}, then:
1. Throw an {{OperationError}} {{DOMException}} and stop.
1. Else:
1. Let |outputTensor| be the output tensor returned by |this|.{{MLGraph/[[implementation]]}}.
1. If the kind of |value| is not compatible with the value type of |outputTensor|, then throw a {{DataError}} {{DOMException}} and stop.
1. Let |outputSize| be 1.
1. For each |dimension| of dimensions of |outputTensor|:
-1. Let |outputSize| be the result of multiplication of |outputSize| and |dimension|.
-1. If |outputSize| is greater than the length of |value|.{{MLOutput/data}}, then:
+1. Set |outputSize| to the product of |outputSize| and |dimension|.
+1. If |outputSize| is greater than the length of |value|, then:
1. Throw a {{DataError}} {{DOMException}} and stop.
1. Else:
-1. Set the values of |value|.{{MLOutput/data}} to the values of |outputTensor|.
+1. Set the values of |value| to the values of |outputTensor|.
1. Return {{undefined}}.

Issue: Describe the algorithm steps for |this|.{{MLGraph/[[context]]}} created from {{WebGLRenderingContext}} and {{GPUDevice}}.
</div>
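As a concrete illustration of the validation steps above, the sketch below shows how a length mismatch between an input resource and its operand descriptor surfaces as a DataError. It is not spec text; the graph and its 'x'/'y' operands with a float32 [2, 2] descriptor are assumed for the example.

```js
// Assume 'x' was declared with {type: 'float32', dimensions: [2, 2]},
// so inputSize is 2 * 2 = 4 and an ArrayBufferView input must have length 4.
const okBuffer = new Float32Array(4);
const shortBuffer = new Float32Array(3);
const outBuffer = new Float32Array(4);

graph.compute({'x': okBuffer}, {'y': outBuffer});      // passes validation

try {
  graph.compute({'x': shortBuffer}, {'y': outBuffer}); // length 3 !== inputSize 4
} catch (e) {
  console.log(e.name); // "DataError", per the length check in the steps above
}
```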
@@ -1775,10 +1777,10 @@ function compute(shapeA, shapeB, shapeC) {

// Specify the shape of inputs when computing.
const inputs = {
-'a': {data: bufferA, dimensions: shapeA},
-'b': {data: bufferB, dimensions: shapeB},
+'a': {resource: bufferA, dimensions: shapeA},
+'b': {resource: bufferB, dimensions: shapeB},
};
-const outputs = {'c': {data: bufferC}};
+const outputs = {'c': bufferC};
graph.compute(inputs, outputs);
console.log(&#96;values: ${bufferC}&#96;);
}
@@ -1809,16 +1811,16 @@ const e = builder.add(d, c);
const graph = builder.build({'d': d, 'e': e});

const bufferA = new Float32Array(sizeOfShape(descA.dimensions)).fill(0.5);
-const inputs = {'a': {data: bufferA}};
+const inputs = {'a': bufferA};

// Compute d.
const bufferD = new Float32Array(sizeOfShape([3, 3]));
-graph.compute(inputs, {'d': {data: bufferD}});
+graph.compute(inputs, {'d': bufferD});
console.log(&#96;values: ${bufferD}&#96;);

// Compute e.
const bufferE = new Float32Array(sizeOfShape([3, 3]));
graph.compute(inputs, {'e': {data: bufferE}});
graph.compute(inputs, {'e': bufferE});
console.log(&#96;values: ${bufferE}&#96;);
</pre>
</div>
@@ -1897,10 +1899,10 @@ const outputBuffer = new Float32Array(TENSOR_SIZE);

// Execute the compiled graph with the specified inputs.
const inputs = {
-'input1': {data: inputBuffer1},
-'input2': {data: inputBuffer2},
+'input1': inputBuffer1,
+'input2': inputBuffer2,
};
-const outputs = {'output': {data: outputBuffer}}
+const outputs = {'output': outputBuffer};
graph.compute(inputs, outputs);

console.log('Output value: ' + outputBuffer);
