Change MLOutput to download data asynchronously
huningxin committed May 7, 2021
1 parent 0c847fb commit 697e425
Showing 1 changed file with 75 additions and 67 deletions.
142 changes: 75 additions & 67 deletions index.bs
@@ -1701,23 +1701,26 @@ partial interface MLGraphBuilder {
The {{MLGraph}} interface represents a compiled computational graph. A compiled graph once constructed is immutable and cannot be subsequently changed.

<script type=idl>
typedef (MLBufferView or WebGLTexture or GPUTexture) MLData;

dictionary MLInput {
required (MLBufferView or WebGLTexture or GPUTexture) data;
required MLData data;
sequence<long> dimensions;
};

dictionary MLOutput {
(MLBufferView or WebGLTexture or GPUTexture) data;
sequence<long> dimensions;
interface MLOutput {
Promise<MLData> data();
Promise<undefined> data(MLData data);
sequence<long> dimensions();
};

typedef record<DOMString, MLInput> MLNamedInputs;
typedef record<DOMString, (MLInput or MLOutput)> MLNamedInputs;
typedef record<DOMString, MLOutput> MLNamedOutputs;

[SecureContext, Exposed=Window]
interface MLGraph {
Promise<MLNamedOutputs> compute(MLNamedInputs inputs,
optional MLNamedOutputs outputs = {});
MLNamedOutputs compute(MLNamedInputs inputs,
optional sequence<DOMString> outputNames = []);
};
</script>
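The behavioral shift in this IDL change can be sketched with a minimal mock (illustrative only; `MockMLOutput` is a hypothetical stand-in, since a real `MLOutput` wraps a device-side tensor rather than a JavaScript array): `compute()` becomes synchronous and returns named outputs whose values are downloaded asynchronously.

```javascript
// Mock of the revised MLOutput contract (hypothetical, not the real API).
class MockMLOutput {
  constructor(dimensions, buffer) {
    this._dims = dimensions;
    this._buf = buffer;
  }
  // dimensions() is synchronous in the new IDL.
  dimensions() {
    return this._dims;
  }
  // data() has two overloads: with no argument it resolves with a freshly
  // downloaded buffer; with a pre-allocated buffer it fills that buffer.
  async data(dst) {
    if (dst === undefined) {
      return this._buf.slice();
    }
    dst.set(this._buf);
  }
}

const out = new MockMLOutput([2, 2], new Float32Array([1, 2, 3, 4]));
// out.dimensions() is available immediately; values arrive via await out.data().
```

The two `data()` overloads mirror the diff above: one allocates and returns a new buffer, the other writes into caller-owned storage.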

@@ -1742,24 +1745,23 @@ interface MLGraph {
</dl>

<dl dfn-type=method dfn-for=MLGraph>
: <dfn>compute(inputs, outputs)</dfn>
: <dfn>compute(inputs, outputNames)</dfn>
::
Issue a compute request of the {{MLGraph}} given {{MLNamedInputs}} and optional {{MLNamedOutputs}}. The returned {{Promise}} resolves when the results in {{MLNamedOutputs}} are ready to be consumed.
Issue a compute request of the {{MLGraph}} given {{MLNamedInputs}} and an optional [=sequence=]&lt;{{DOMString}}&gt; of output names. Return the resulting {{MLNamedOutputs}}.

<div algorithm=MLGraph.compute>
**Called on:** {{MLGraph}} |this|.

**Arguments:**
<pre class=argumentdef for="MLGraph/compute(inputs, outputs)">
<pre class=argumentdef for="MLGraph/compute(inputs, outputNames)">
|inputs|: a {{MLNamedInputs}}. The data and optional dimensions of inputs for the compute request.
|outputs|: an optional {{MLNamedOutputs}}. The names and pre-allocated resources of required outputs for the compute request. Default to be an empty [=record=] which means that the compute request is for all outputs.
|outputNames|: an optional [=sequence=]&lt;{{DOMString}}&gt;. The names of the required outputs for the compute request. Defaults to an empty [=sequence=], which means that the compute request is for all outputs.
</pre>

**Returns:** {{Promise}}&lt;{{MLNamedOutputs}}&gt;. The dimensions and data of outputs returned by the compute request.
**Returns:** {{MLNamedOutputs}}.

1. Let |promise| be [=a new promise=].
<!-- Validate inputs and outputs -->
1. If any of the following requirements are unmet, then [=reject=] |promise| with a {{TypeError}} and stop.
1. If any of the following requirements are unmet, then throw a {{TypeError}} and stop.

<div class=validusage>
1. For each |key| -> |value| of |inputs|:
@@ -1774,22 +1776,22 @@ interface MLGraph {
1. Let |dimension| be |value|.{{MLInput/dimensions}}[|i|].
1. |dimension| must be greater than 0.
1. If |inputOperand|.{{MLOperandDescriptor/dimensions}}[|i|] is greater than 0, then |dimension| must be equal to |inputOperand|.{{MLOperandDescriptor/dimensions}}[|i|].
1. Set |i| to |i| + 1.
1. Increment |i| by 1.
1. If |i| is equal to the length of |value|.{{MLInput/dimensions}}, then break.
1. Else:
1. For each |dimension| of |inputOperand|.{{MLOperandDescriptor/dimensions}}:
1. The value of |dimension| must be greater than 0.

1. If |outputs| was not an empty [=record=], then:
1. For each |key| -> |value| of |outputs|:
1. |this|.{{MLGraph/[[outputOperands]]}}[|key|] must exist.
1. If |value|.{{MLOutput/data}} was given, then the kind of |value|.{{MLOutput/data}} must be compatible to |this|.{{MLGraph/[[outputOperands]]}}[|key|] according to [this table](#appendices-mloperandtype-arraybufferview-compatibility).
1. If |outputNames| was not an empty [=sequence=], then:
1. For each |name| of |outputNames|:
1. |this|.{{MLGraph/[[outputOperands]]}}[|name|] must exist.

</div>
<!-- Filter the required outputs -->
1. Let |requiredOutputNames| be a new [=ordered set=]&lt;{{DOMString}}&gt;.
1. If |outputs| was not an empty [=record=], then:
1. For each |key| -> |value| of |outputs|:
1. Append |key| to |requiredOutputNames|.
1. If |outputNames| was not an empty [=sequence=], then:
1. For each |name| of |outputNames|:
1. Append |name| to |requiredOutputNames|.
1. Else:
1. For each |key| -> |value| of |this|.{{MLGraph/[[outputOperands]]}}:
1. Append |key| to |requiredOutputNames|.
@@ -1804,45 +1806,22 @@ interface MLGraph {
1. Set |copiedInputs|[|key|] to the copied input data.
<!-- Compute -->
1. Let |results| be a new {{MLNamedOutputs}}.
1. Let |remainingOutputNames| be a new [=ordered set=]&lt;{{DOMString}}&gt;.
1. Set the content of |remainingOutputNames| to the content of |requiredOutputNames|.
1. Issue the following steps on the [=Device timeline=] of |this|.{{MLGraph/[[implementation]]}}:
<div class=device-timeline>
1. For each |outputName| of |requiredOutputNames|:
1. Issue a compute request of |this|.{{MLGraph/[[implementation]]}} for output whose name is |outputName| with given |copiedInputs|.
1. When the compute request is completed, issue the following steps on the appropriate [=Queue timeline=]:
<div class=queue-timeline>
1. If there is an error returned by |this|.{{MLGraph/[[implementation]]}}, then:
1. [=reject=] |promise| with an {{OperationError}} and stop.
1. Else:
1. Let |outputRank| be a {{unsigned long}}.
1. Set |outputRank| to the rank of output tensor returned by |this|.{{MLGraph/[[implementation]]}}.
1. Let |outputDemisions| be a new [=sequence=]&lt;{{long}}&gt; of size |outputRank|.
1. Let |i| be 0.
1. Let |outputSize| to 1.
1. While true:
1. Set |outputDimensions|[|i|] to the dimension at |i|th axis of output tensor returned by |this|.{{MLGraph/[[implementation]]}}.
1. Set |outputSize| to |outputSize| * |outputDimensions|[|i|].
1. Set |i| to |i| + 1.
1. If |i| is equal to |outputRank|, then break.
1. Set |results|[|outputName|].{{MLOutput/dimensions}} to |outputDemisions|.
1. If |this|.{{MLGraph/[[context]]}} is created from {{MLContextOptions}}, then:
1. If |outputs|[|outputName|].{{MLOutput/data}} was given, then:
1. If outputs|[|outputName|].{{MLOutput/data}} is not an {{ArrayBufferView}}, then [=reject=] |promise| with an {{TypeError}} and stop.
1. If the kind of |outputs|[|outputName|].{{MLOutput/data}} is not compatible to output tensor according to [this table](#appendices-mloperandtype-arraybufferview-compatibility), then [=reject=] |promise| with a {{TypeError}} and stop.
1. If the length of |outputs|[|outputName|].{{MLOutput/data}} is less than |outputSize|, then [=reject=] |promise| with a {{TypeError}} and stop.
1. Set the content of |outputs|[|outputName|].{{MLOutput/data}} to the content of output tensor returned by |this|.{{MLGraph/[[implementation]]}}.
1. Else:
1. Let |results|[|outputName|].{{MLOutput/data}} be a new {{ArrayBufferView}} of size |outputSize| and kind that is compatible to output tensor according to [this table](#appendices-mloperandtype-arraybufferview-compatibility).
1. Set the content of |results|[|outputName|].{{MLOutput/data}} to the content of output tensor returned by |this|.{{MLGraph/[[implementation]]}}.
1. Remove |outputName| from |remainingOutputNames|.
1. If |remainingOutputNames| is empty, then resolve |promise| with |results| and stop.
</div>
1. If there is an error returned by |this|.{{MLGraph/[[implementation]]}}, then:
1. Throw an {{OperationError}} and stop.
1. Else:
1. Let |output| be a new {{MLOutput}}.
1. Associate |output| with the output tensor returned by |this|.{{MLGraph/[[implementation]]}}.
1. Set |results|[|outputName|] to |output|.
</div>

1. Return |promise|.
1. Return |results|.

Issue: Describe the algorithm steps for |this|.{{MLGraph/[[context]]}} created from {{WebGLRenderingContext}} and {{GPUDevice}}.

Issue: Describe the algorithm steps for {{MLOutput}}.
</div>
</dl>
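The examples in this section call a `sizeOfShape()` helper that the spec does not define. A plausible implementation, assuming it simply multiplies the dimensions to get the element count:

```javascript
// Hypothetical helper assumed by the examples: the number of elements in a
// tensor of the given shape.
function sizeOfShape(shape) {
  return shape.reduce((size, dimension) => size * dimension, 1);
}
```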

@@ -1860,7 +1839,7 @@ const a = builder.input('a', descA);
const descB = {type: 'float32', dimensions: [4, -1]};
const b = builder.input('b', descB);
const c = builder.matmul(a, b);
const graph = await builder.build({c});
const graph = await builder.build({'c': c});

async function compute(shapeA, shapeB) {
const bufferA = new Float32Array(sizeOfShape(shapeA)).fill(0.5);
@@ -1871,8 +1850,8 @@ async function compute(shapeA, shapeB) {
'a': {data: bufferA, dimensions: shapeA},
'b': {data: bufferB, dimensions: shapeB},
};
const outputs = await graph.compute(inputs);
console.log(&#96;shape: [${outputs.c.dimensions}], values: ${outputs.c.data}&#96;);
const outputs = graph.compute(inputs);
console.log(&#96;shape: [${outputs.c.dimensions()}], values: ${await outputs.c.data()}&#96;);
}

await compute([3, 4], [4, 3]);
@@ -1895,14 +1874,15 @@ const descB = {type: 'float32', dimensions: [4, 3]};
const bufferB = new Float32Array(sizeOfShape(descB.dimensions)).fill(0.5);
const b = builder.constant(descB, bufferB);
const c = builder.matmul(a, b);
const graph = await builder.build({c});
const graph = await builder.build({'c': c});

const bufferA = new Float32Array(sizeOfShape(descA.dimensions)).fill(0.5);
const inputs = {'a': {data: bufferA}};
// Pre-allocate output buffer for c.
const outputs = {'c': {data: new Float32Array(sizeOfShape([3, 3]))}};
await graph.compute(inputs, outputs);
console.log(&#96;values: ${outputs.c.data}&#96;);
const bufferC = new Float32Array(sizeOfShape([3, 3]));
const outputs = graph.compute(inputs);
await outputs.c.data(bufferC);
console.log(&#96;values: ${bufferC}&#96;);
</pre>
</div>

@@ -1923,24 +1903,52 @@ const bufferC = new Float32Array(sizeOfShape(descC.dimensions)).fill(1);
const c = builder.constant(descC, bufferC);
const d = builder.matmul(a, b);
const e = builder.add(d, c);
const graph = await builder.build({d, e});
const graph = await builder.build({'d': d, 'e': e});

const bufferA = new Float32Array(sizeOfShape(descA.dimensions)).fill(0.5);
const inputs = {'a': {data: bufferA}};

// Compute both d and e.
let outputs = await graph.compute(inputs);
let outputs = graph.compute(inputs);
console.log(&#96;outputs include ${Object.keys(outputs)}&#96;);

// Compute d.
outputs = await graph.compute(inputs, {d});
outputs = graph.compute(inputs, ['d']);
console.log(&#96;outputs include ${Object.keys(outputs)}&#96;);
console.log(&#96;shape: [${outputs.d.dimensions}], values: ${outputs.d.data}&#96;);
console.log(&#96;shape: [${outputs.d.dimensions()}], values: ${await outputs.d.data()}&#96;);

// Compute e.
outputs = await graph.compute(inputs, {e});
outputs = graph.compute(inputs, ['e']);
console.log(&#96;outputs include ${Object.keys(outputs)}&#96;);
console.log(&#96;shape: [${outputs.e.dimensions}], values: ${outputs.e.data}&#96;);
console.log(&#96;shape: [${outputs.e.dimensions()}], values: ${await outputs.e.data()}&#96;);
</pre>
</div>
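The output-filtering step in the compute() algorithm above can be sketched as follows (an illustrative model only, assuming the graph's output operands are held in a plain object keyed by name): an empty `outputNames` sequence selects every output, while a non-empty one selects only the named outputs and rejects unknown names.

```javascript
// Illustrative sketch of the "filter the required outputs" step of compute().
function requiredOutputNames(outputOperands, outputNames = []) {
  const required = new Set();
  if (outputNames.length > 0) {
    for (const name of outputNames) {
      // Mirrors the validation step: the named output must exist.
      if (!(name in outputOperands)) {
        throw new TypeError(`unknown output: ${name}`);
      }
      required.add(name);
    }
  } else {
    // An empty sequence means the compute request is for all outputs.
    for (const name of Object.keys(outputOperands)) {
      required.add(name);
    }
  }
  return required;
}
```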

<div class="example">
The following code showcases computing a sequence of graphs without downloading the intermediate results.
<pre highlight="js">
const context = navigator.ml.createContext();
const builder = new MLGraphBuilder(context);

async function buildConv2d(inputShape, filterShape) {
const input = builder.input('input', {type: 'float32', dimensions: inputShape});
const filter = builder.constant({type: 'float32', dimensions: filterShape},
new Float32Array(sizeOfShape(filterShape)).fill(0.5));
const output = builder.conv2d(input, filter);
return await builder.build({'output': output});
}

// Build three graphs, each containing a single conv2d op.
const conv2dOp1 = await buildConv2d([1, 1, 9, 9], [1, 1, 3, 3]);
const conv2dOp2 = await buildConv2d([1, 1, 7, 7], [1, 1, 3, 3]);
const conv2dOp3 = await buildConv2d([1, 1, 5, 5], [1, 1, 3, 3]);

// Compute the graphs and access the final result.
const inputBuffer = new Float32Array(9*9).fill(0.5);
const output1 = conv2dOp1.compute({'input': {data: inputBuffer}}).output;
const output2 = conv2dOp2.compute({'input': output1}).output;
const output3 = conv2dOp3.compute({'input': output2}).output;
console.log(&#96;shape: [${output3.dimensions()}], values: ${await output3.data()}&#96;);
</pre>
</div>
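The chaining above is possible because this change also lets {{MLNamedInputs}} accept {{MLOutput}} values directly. A minimal mock of that flow (hypothetical `FakeOutput` and `doubler` stand-ins, not the real API): each stage hands the next a tensor handle synchronously, and only the final `data()` call awaits a download.

```javascript
// Stand-in for a device-side tensor handle.
class FakeOutput {
  constructor(buffer) {
    this._buf = buffer;
  }
  async data() {
    return this._buf;
  }
}

// A fake one-op graph that doubles its input, keeping results "on device":
// its compute() accepts either an {data: ...} input or a FakeOutput.
const doubler = {
  compute(inputs) {
    const src =
      inputs.x instanceof FakeOutput ? inputs.x._buf : inputs.x.data;
    return { y: new FakeOutput(src.map(v => v * 2)) };
  },
};

const stage1 = doubler.compute({ x: { data: new Float32Array([1, 2]) } }).y;
const stage2 = doubler.compute({ x: stage1 }).y; // no await between stages
// await stage2.data() resolves with Float32Array [4, 8]
```

The design point: because `compute()` no longer returns a promise, pipelining graphs requires no round trip to script between stages.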

