Remove usages of voicingUtteranceQueue.addToBack outside of scenery #1403

Closed

jessegreenberg opened this issue Apr 11, 2022 · 12 comments

@jessegreenberg
Contributor

In order to leverage #1300, we generally need to stop using voicingUtteranceQueue.addToBack outside of scenery. Instead, the voicingSpeak* functions of Voicing.ts need to be used so that voicing goes through the Node and its voicingVisibleProperty.

In places where this is not possible, consider passing a Node's voicingVisibleProperty to the Utterance handed to the voicingUtteranceQueue so that spoken content in a sim is controlled as necessary.
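Roughly, those two approaches look like this (hypothetical Node names; the Utterance canAnnounceProperties option is the one used in the patches later in this issue):

    // Preferred: speak through a Node composed with Voicing, so the output goes through
    // that Node and its voicingVisibleProperty (see #1300).
    someVoicingNode.voicingSpeakResponse( {
      nameResponse: 'New alert content'
    } );

    // Where that is not possible: gate the Utterance itself on the Node's
    // voicingVisibleProperty before handing it to the voicingUtteranceQueue.
    const utterance = new Utterance( {
      alert: 'New alert content',
      canAnnounceProperties: [ someNode.voicingVisibleProperty ]
    } );
    voicingUtteranceQueue.addToBack( utterance );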

@jessegreenberg
Contributor Author

I am not sure how best to handle Alerter.js's usage of voicingUtteranceQueue. I think we should change it to be more like alertDescriptionUtterance, which goes through a descriptionAlertNode. Then we would have a Node to speak through and could leverage the voicingVisiblePropertys of #1300. But it would still require many changes:

@jessegreenberg
Contributor Author

The remaining usages of voicingUtteranceQueue.addToBack() are in RAP, Friction, GrabDragInteraction, or are related to Alerter. It would be good to check in with @zepumph before making any more changes.

phetsims/gravity-force-lab@6d30c3c is an example of what we might need to do for Alerter.ts. I got lucky in that example because the descriptionAlertNode is a ScreenView and gets its voicingVisibleProperty set. But ideally we would need to provide a Node that is composed with Voicing so that we can check a voicingCanSpeakProperty and not assume how voicingVisibleProperty is set on the ScreenView.

What if we made this kind of change to Alerter.ts?

Index: js/accessibility/describers/Alerter.ts
===================================================================
diff --git a/js/accessibility/describers/Alerter.ts b/js/accessibility/describers/Alerter.ts
--- a/js/accessibility/describers/Alerter.ts	(revision e9db863557d51d0b6dd79d8a64a80d76bb7c8c9b)
+++ b/js/accessibility/describers/Alerter.ts	(date 1649712819507)
@@ -15,9 +15,9 @@
 
 export type AlerterOptions = {
 
-  // When true, alerts will be sent to the voicingUtteranceQueue. This shutoff valve is similar to
-  // descriptionAlertNode, but for voicing.
-  alertToVoicing?: boolean;
+
+  // If provided, use this Node to send Voicing alerts to the voicingUtteranceQueue. No Voicing output if this is null.
+  voicingAlertNode?: VoicingNode | null;
 
   // If provided, use this Node to send description alerts to one or more Display's UtteranceQueue. Unlike for
   // Voicing, description alerts must occur through a Node connected to a Display through the scene graph. If null,
@@ -26,18 +26,18 @@
 }
 
 class Alerter {
-  alertToVoicing: boolean;
+  voicingAlertNode: VoicingNode | null;
   descriptionAlertNode: Node | null;
 
   constructor( providedOptions?: AlerterOptions ) {
 
     const options = optionize<AlerterOptions>( {
-      alertToVoicing: true,
+      voicingAlertNode: null,
       descriptionAlertNode: null
     }, providedOptions );
 
     // @public - only subtypes can mutate
-    this.alertToVoicing = options.alertToVoicing;
+    this.voicingAlertNode = options.voicingAlertNode;
 
     // @public (read-only)
     this.descriptionAlertNode = options.descriptionAlertNode;
@@ -47,7 +47,7 @@
    * Alert to both description and voicing utteranceQueues, depending on if both are supported by this instance
    */
   alert( alertable: IAlertable ): void {
-    this.alertToVoicing && voicingUtteranceQueue.addToBack( alertable );
+    this.alertVoicingUtterance( alertable );
     this.alertDescriptionUtterance( alertable );
   }
 
@@ -58,6 +58,14 @@
     this.descriptionAlertNode && this.descriptionAlertNode.alertDescriptionUtterance( alertable );
   }
 
+  alertVoicingUtterance( alertable: IAlertable ): void {
+
+    // NOTE: I don't know how to best provide the alertable in a specific response??
+    this.voicingAlertNode && this.voicingAlertNode.voicingSpeakResponse( {
+      nameResponse: alertable
+    } );
+  }
+
   /**
    * Forward to provided Node for UtteranceQueue alerting logic. See ParallelDOM.forEachUtteranceQueue() for details.
    */

@jessegreenberg
Contributor Author

@zepumph and I discussed the Alerter issue. We want to do two things:

  1. Building off of the patch in Remove usages of voicingUtteranceQueue.addToBack outside of scenery #1403 (comment), Alerter.ts should get its own Utterance. The voicingAlertNode of the Alerter will add its voicingCanSpeakProperty to that Utterance. When it is time to announce for Voicing, we use a new function described in 2) below:
  2. Add a function to Voicing.ts that will just take an Utterance, assert that the Utterance has at least one canAnnounceProperty, and then add the Utterance to the back of the queue. Alerter.ts will be its only usage for now, but perhaps in the future it will be used without Alerter.ts. (A condensed sketch of both pieces follows this list.)
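A condensed sketch of those two pieces together (same names as above; the raw patch below has the real notes):

    // 2) New static on Voicing.ts: only enforces that something in the scene graph gates
    //    the Utterance via canAnnounceProperties, then adds it to the queue.
    Voicing.alertVoicingUtterance = ( utterance: Utterance ) => {
      assert && assert( utterance.canAnnounceProperties.length > 0, 'canAnnounceProperties required' );
      voicingUtteranceQueue.addToBack( utterance );
    };

    // 1) The Alerter owns a single voicingUtterance, gated by its voicingAlertNode.
    const voicingUtterance = new Utterance( {
      canAnnounceProperties: [ voicingAlertNode.voicingCanSpeakProperty ]
    } );
    Voicing.alertVoicingUtterance( voicingUtterance );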

Raw notes I took while we met, in a patch to Alerter.ts; in particular, see the notes in announceVoicingUtterance:

Index: js/accessibility/describers/Alerter.ts
===================================================================
diff --git a/js/accessibility/describers/Alerter.ts b/js/accessibility/describers/Alerter.ts
--- a/js/accessibility/describers/Alerter.ts	(revision 95223865a134b508bf13e927c7d960081eee5a32)
+++ b/js/accessibility/describers/Alerter.ts	(date 1649720421477)
@@ -7,7 +7,7 @@
 
 import optionize from '../../../../phet-core/js/optionize.js';
 import { Node, voicingUtteranceQueue } from '../../../../scenery/js/imports.js';
-import { IAlertable } from '../../../../utterance-queue/js/Utterance.js';
+import Utterance, { IAlertable } from '../../../../utterance-queue/js/Utterance.js';
 import UtteranceQueue from '../../../../utterance-queue/js/UtteranceQueue.js';
 import sceneryPhet from '../../sceneryPhet.js';
 
@@ -15,9 +15,9 @@
 
 export type AlerterOptions = {
 
-  // When true, alerts will be sent to the voicingUtteranceQueue. This shutoff valve is similar to
-  // descriptionAlertNode, but for voicing.
-  alertToVoicing?: boolean;
+
+  // If provided, use this Node to send Voicing alerts to the voicingUtteranceQueue. No Voicing output if this is null.
+  voicingAlertNode?: VoicingNode | null;
 
   // If provided, use this Node to send description alerts to one or more Display's UtteranceQueue. Unlike for
   // Voicing, description alerts must occur through a Node connected to a Display through the scene graph. If null,
@@ -26,18 +26,18 @@
 }
 
 class Alerter {
-  alertToVoicing: boolean;
+  voicingAlertNode: VoicingNode | null;
   descriptionAlertNode: Node | null;
 
   constructor( providedOptions?: AlerterOptions ) {
 
     const options = optionize<AlerterOptions>( {
-      alertToVoicing: true,
+      voicingAlertNode: null,
       descriptionAlertNode: null
     }, providedOptions );
 
     // @public - only subtypes can mutate
-    this.alertToVoicing = options.alertToVoicing;
+    this.voicingAlertNode = options.voicingAlertNode;
 
     // @public (read-only)
     this.descriptionAlertNode = options.descriptionAlertNode;
@@ -47,7 +47,7 @@
    * Alert to both description and voicing utteranceQueues, depending on if both are supported by this instance
    */
   alert( alertable: IAlertable ): void {
-    this.alertToVoicing && voicingUtteranceQueue.addToBack( alertable );
+    this.announceVoicingUtterance( alertable );
     this.alertDescriptionUtterance( alertable );
   }
 
@@ -58,6 +58,50 @@
     this.descriptionAlertNode && this.descriptionAlertNode.alertDescriptionUtterance( alertable );
   }
 
+  announceVoicingUtterance( alertable: IAlertable ): void {
+
+    if ( alertable instanceof Utterance ) {
+
+      // shenanigans
+      // add the voicingAlertNode.canSpeakProperty to the Utterance's Properties.
+        // NO GOOD because when the Utterance gets announced we don't remove the canSpeakProperty we added!
+
+      // If we try to speak with Voicing it is ambiguous what features of Voicing.ts we use vs the Utterance
+      // here. What responses do we speak? Do we use this utterance or the Node's utterance? Etc...
+
+      // If the Alerter has its own Utterance, it gets around the "side effect" drawback of the first
+      // proposal. But we are still worried about reusing the same Utterance all the time. It gets
+      // around the drawback of the second proposal because the Utterance would always have the
+      // voicingAlertNode's voicingCanSpeakProperty.
+        // This Utterance is only for Voicing!
+        // That means this is a content specific alerter!
+
+        // We want to add a setter for the
+
+        // Remember that Utterances will need a canAnnounceProperty to be announced with voicingUtteranceQueue.
+
+        // We need to do both! Add a function to Voicing that takes an Utterance (not an Alertable) and
+        // assert that there is a canAnnounceProperty on it. And then it just adds the Utterance to the
+        // back of the queue. For now we will recommend going through alerter. But possible to use this
+        // if we need in the future. This way we also don't need to extend voicingUtteranceQueue.
+        //
+    }
+    else {
+
+      // This utterance has no concept of
+      const newUtterance = new Utterance( {
+        alert: alertable,
+        canAnnounceProperties: [ this.voicingAlertNode.voicingCanSpeakProperty ]
+      } );
+      voicingUtteranceQueue.addToBack( newUtterance );
+    }
+
+    // // NOTE: I don't know how to best provide the alertable in a specific response??
+    // this.voicingAlertNode && this.voicingAlertNode.voicingSpeakResponse( {
+    //   nameResponse: alertable
+    // } );
+  }
+
   /**
    * Forward to provided Node for UtteranceQueue alerting logic. See ParallelDOM.forEachUtteranceQueue() for details.
    */

@jessegreenberg
Contributor Author

jessegreenberg commented Apr 12, 2022

I am less confident in the strategy above after looking at usages in Friction. For example, TemperatureIncreasingAlerter uses two Utterances to control output, and that seems reasonable. The Utterances use shared state on the TemperatureIncreasingAlerter in their predicates, so it would be unfortunate to have to break it into two Alerters.
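For reference, the pattern in question looks roughly like this (Utterance names taken from the FrictionScreenView patch further down; the predicate bodies here are made up for illustration):

    // One TemperatureIncreasingAlerter, two Utterances, both reading shared state off the
    // alerter in their predicates -- splitting this into two Alerters just to give each one
    // its own voicingUtterance would be unfortunate.
    this.temperatureJiggleUtterance = new Utterance( {
      alert: new ResponsePacket(),
      predicate: () => !this.tooSoonForNextAlert // illustrative shared state
    } );
    this.maxTempUtterance = new Utterance( {
      alert: new ResponsePacket(),
      priority: Utterance.HIGH_PRIORITY,
      predicate: () => !this.tooSoonForNextAlert && !this.alertedMaxTemperature // illustrative shared state
    } );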

Here is a patch but I want to step back and reconsider for a moment:

Index: scenery-phet/js/accessibility/describers/Alerter.ts
===================================================================
diff --git a/scenery-phet/js/accessibility/describers/Alerter.ts b/scenery-phet/js/accessibility/describers/Alerter.ts
--- a/scenery-phet/js/accessibility/describers/Alerter.ts	(revision 95223865a134b508bf13e927c7d960081eee5a32)
+++ b/scenery-phet/js/accessibility/describers/Alerter.ts	(date 1649771446931)
@@ -6,8 +6,8 @@
  */
 
 import optionize from '../../../../phet-core/js/optionize.js';
-import { Node, voicingUtteranceQueue } from '../../../../scenery/js/imports.js';
-import { IAlertable } from '../../../../utterance-queue/js/Utterance.js';
+import { Node, Voicing, VoicingNode } from '../../../../scenery/js/imports.js';
+import Utterance, { IAlertable } from '../../../../utterance-queue/js/Utterance.js';
 import UtteranceQueue from '../../../../utterance-queue/js/UtteranceQueue.js';
 import sceneryPhet from '../../sceneryPhet.js';
 
@@ -15,39 +15,52 @@
 
 export type AlerterOptions = {
 
-  // When true, alerts will be sent to the voicingUtteranceQueue. This shutoff valve is similar to
-  // descriptionAlertNode, but for voicing.
-  alertToVoicing?: boolean;
+  // If provided, Voicing responses will be spoken through this Node such that this VoicingNode's
+  // voicingCanSpeakProperty controls whether the voicingUtterance can be spoken. That way if the Node is not visible
+  // or has any ancestor with `voicingVisible: false`, the voicingUtterance can never be spoken. NOTE: No voicing will
+  // alert without this option!
+  voicingAlertNode?: VoicingNode | null;
 
   // If provided, use this Node to send description alerts to one or more Display's UtteranceQueue. Unlike for
   // Voicing, description alerts must occur through a Node connected to a Display through the scene graph. If null,
   // do not alert for description (same as alertToVoicing:false). NOTE: No description will alert without this option!
   descriptionAlertNode?: Node | null;
+
+  // The Utterance that will be used by this Alerter to announce with the Voicing feature. If a voicingAlertNode
+  // is provided, the voicingUtterance will have its canAnnounceProperties set to the Node's voicingCanSpeakProperty
+  // so that this Utterance is only spoken with the Node and all of its ancestors are visible and voicingVisible.
+  voicingUtterance?: Utterance;
 }
 
 class Alerter {
-  alertToVoicing: boolean;
-  descriptionAlertNode: Node | null;
+  public descriptionAlertNode: Node | null;
+  private _voicingAlertNode: VoicingNode | null;
+
+  private _voicingUtterance: Utterance | null;
 
   constructor( providedOptions?: AlerterOptions ) {
 
     const options = optionize<AlerterOptions>( {
-      alertToVoicing: true,
-      descriptionAlertNode: null
+      descriptionAlertNode: null,
+      voicingAlertNode: null,
+      voicingUtterance: new Utterance()
     }, providedOptions );
 
-    // @public - only subtypes can mutate
-    this.alertToVoicing = options.alertToVoicing;
-
     // @public (read-only)
     this.descriptionAlertNode = options.descriptionAlertNode;
+
+    this._voicingAlertNode = null;
+    this.setVoicingAlertNode( options.voicingAlertNode );
+
+    this._voicingUtterance = null;
+    this.setVoicingUtterance( options.voicingUtterance );
   }
 
   /**
    * Alert to both description and voicing utteranceQueues, depending on if both are supported by this instance
    */
   alert( alertable: IAlertable ): void {
-    this.alertToVoicing && voicingUtteranceQueue.addToBack( alertable );
+    this.alertVoicingUtterance();
     this.alertDescriptionUtterance( alertable );
   }
 
@@ -58,12 +71,91 @@
     this.descriptionAlertNode && this.descriptionAlertNode.alertDescriptionUtterance( alertable );
   }
 
+  /**
+   * Forward to the alertable to the voicingUtteranceQueue if there is a voicingAlertNode.
+   */
+  alertVoicingUtterance(): void {
+    if ( this._voicingAlertNode && this._voicingUtterance ) {
+      Voicing.alertVoicingUtterance( this._voicingUtterance );
+    }
+  }
+
   /**
    * Forward to provided Node for UtteranceQueue alerting logic. See ParallelDOM.forEachUtteranceQueue() for details.
    */
   forEachUtteranceQueue( utteranceQueueCallback: UtteranceQueueCallback ): void {
     this.descriptionAlertNode && this.descriptionAlertNode.forEachUtteranceQueue( utteranceQueueCallback );
   }
+
+  /**
+   * Set the voicingUtterance for this Alerter. If there is a voicingAlertNode, the voicingCanSpeakProperty
+   * of the Node is added to the Utterance so that this Utterance is only spoken when that Node and its ancestors are
+   * visible and voicingVisible. Before setting, a voicingCanSpeakProperty is removed from the old Utterance if it
+   * was set.
+   */
+  public setVoicingUtterance( utterance: Utterance ): void {
+
+    if ( this._voicingAlertNode ) {
+      if ( this._voicingUtterance ) {
+        this.removeVoicingCanSpeakPropertyFromUtterance();
+      }
+
+      utterance.canAnnounceProperties = utterance.canAnnounceProperties.concat( [ this._voicingAlertNode.voicingCanSpeakProperty ] );
+    }
+
+    this._voicingUtterance = utterance;
+  }
+
+  set voicingUtterance( utterance: Utterance ) { this.setVoicingUtterance( utterance ); }
+
+  /**
+   * Returns the Utterance used to alert to Voicing.
+   */
+  public getVoicingUtterance(): Utterance {
+    return this._voicingUtterance!;
+  }
+
+  get voicingUtterance(): Utterance { return this.getVoicingUtterance(); }
+
+  /**
+   * Clean this._voicingUtterance by removing this._voicingAlertNode.voicingCanSpeakProperty from the list of
+   * canAnnounceProperties of the Utterance.
+   * @private
+   */
+  private removeVoicingCanSpeakPropertyFromUtterance() {
+    assert && assert( this._voicingUtterance, 'Need a voicingUtterance to remove a canAnnounceProperty' );
+    assert && assert( this._voicingAlertNode, 'Need a voicingAlertNode to remove a voicingCanSpeakProperty' );
+
+    const currentCanAnnounceProperties = this._voicingUtterance!.canAnnounceProperties;
+    const index = currentCanAnnounceProperties.indexOf( this._voicingAlertNode!.voicingCanSpeakProperty );
+
+    // This may be called consecutively and we have to be graceful.
+    if ( index >= 0 ) {
+      this._voicingUtterance!.canAnnounceProperties = currentCanAnnounceProperties.splice( index, 1 );
+    }
+  }
+
+  /**
+   * Sets the Node that will control whether the voicingUtterance is spoken. This Node and its ancestors must be
+   * visible and voicingVisible for the voicingUtterance to be announced at all.
+   */
+  public setVoicingAlertNode( node: VoicingNode | null ): void {
+    if ( this._voicingAlertNode && this._voicingUtterance ) {
+      debugger;
+      this.removeVoicingCanSpeakPropertyFromUtterance();
+    }
+
+    this._voicingAlertNode = node;
+    this.setVoicingUtterance( this._voicingUtterance! );
+  }
+
+  set voicingAlertNode( node: VoicingNode | null ) { this.setVoicingAlertNode( node ); }
+
+  getVoicingAlertNode(): VoicingNode | null {
+    return this._voicingAlertNode;
+  }
+
+  get voicingAlertNode() { return this.getVoicingAlertNode(); }
 }
 
 sceneryPhet.register( 'Alerter', Alerter );
Index: scenery/js/imports.ts
===================================================================
diff --git a/scenery/js/imports.ts b/scenery/js/imports.ts
--- a/scenery/js/imports.ts	(revision 6f738c1e3c52102506af6da96c0abe7b8a85945a)
+++ b/scenery/js/imports.ts	(date 1649735853137)
@@ -163,7 +163,7 @@
 export { default as voicingManager } from './accessibility/voicing/voicingManager.js';
 export { default as voicingUtteranceQueue } from './accessibility/voicing/voicingUtteranceQueue.js';
 export { default as Voicing } from './accessibility/voicing/Voicing.js';
-export type { VoicingOptions, SpeakingOptions } from './accessibility/voicing/Voicing.js';
+export type { VoicingOptions, SpeakingOptions, VoicingNode } from './accessibility/voicing/Voicing.js';
 export { default as ReadingBlockUtterance } from './accessibility/voicing/ReadingBlockUtterance.js';
 export { default as FocusDisplayedController } from './accessibility/FocusDisplayedController.js';
 export { default as FocusManager } from './accessibility/FocusManager.js';
Index: scenery/js/accessibility/voicing/Voicing.ts
===================================================================
diff --git a/scenery/js/accessibility/voicing/Voicing.ts b/scenery/js/accessibility/voicing/Voicing.ts
--- a/scenery/js/accessibility/voicing/Voicing.ts	(revision 6f738c1e3c52102506af6da96c0abe7b8a85945a)
+++ b/scenery/js/accessibility/voicing/Voicing.ts	(date 1649735582932)
@@ -716,5 +716,23 @@
 
 Voicing.VOICING_OPTION_KEYS = VOICING_OPTION_KEYS;
 
+/**
+ * Simply adds an Utterance to the voicingUtteranceQueue. There are no checks on responseCollector Properties. But
+ * this does enforce that the provided Utterance does have at least one Property in `canAnnounceProperties`, indicating
+ * that the Voicing of the Utterance is being controlled by the visibility and voicingVisibility of a Node in the
+ * scene graph.
+ *
+ * @public
+ * @static
+ */
+Voicing.alertVoicingUtterance = ( utterance: Utterance ) => {
+  assert && assert( utterance.canAnnounceProperties.length > 0, 'canAnnounceProperties required.' );
+  voicingUtteranceQueue.addToBack( utterance );
+};
+
+// Export a type that lets you check if your Node is composed with ReadingBlock
+const wrapper = () => Voicing( Node, 1 );
+export type VoicingNode = InstanceType<ReturnType<typeof wrapper>>;
+
 scenery.register( 'Voicing', Voicing );
 export default Voicing;
Index: friction/js/friction/view/BreakAwayAlerter.js
===================================================================
diff --git a/friction/js/friction/view/BreakAwayAlerter.js b/friction/js/friction/view/BreakAwayAlerter.js
--- a/friction/js/friction/view/BreakAwayAlerter.js	(revision 5eb1fe2b3fc85396e711095b5edb7302013f4715)
+++ b/friction/js/friction/view/BreakAwayAlerter.js	(date 1649770708151)
@@ -54,8 +54,7 @@
     // @private
     this.tooSoonForNextAlert = false;
 
-    // @private
-    this.utterance = new Utterance( {
+    this.voicingUtterance = new Utterance( {
       alert: new ResponsePacket(),
       priority: Utterance.HIGH_PRIORITY
     } );
@@ -90,12 +89,13 @@
       alertContent = this.alertedBreakAwayProperty.value ? BREAK_AWAY_THRESHOLD_AGAIN : BREAK_AWAY_THRESHOLD_FIRST;
     }
 
-    this.utterance.alert.contextResponse = alertContent;
+    this.voicingUtterance.alert.contextResponse = alertContent;
 
     this.forEachUtteranceQueue( utteranceQueue => utteranceQueue.clear() );
     voicingUtteranceQueue.clear();
 
-    this.alert( this.utterance );
+    debugger;
+    this.alert( this.voicingUtterance );
 
     this.alertedBreakAwayProperty.value = true;
     this.tooSoonForNextAlert = true;
Index: inverse-square-law-common/js/view/ISLCAlertManager.js
===================================================================
diff --git a/inverse-square-law-common/js/view/ISLCAlertManager.js b/inverse-square-law-common/js/view/ISLCAlertManager.js
--- a/inverse-square-law-common/js/view/ISLCAlertManager.js	(revision f2f1092c96a571b39d525298fcd8f9b12ef66a8e)
+++ b/inverse-square-law-common/js/view/ISLCAlertManager.js	(date 1649736129898)
@@ -26,14 +26,6 @@
    * @param {Object} [options]
    */
   constructor( model, forceDescriber, options ) {
-
-    options = merge( {
-
-      // For now, we do not want the ISLC Alerters to alert anything to Voicing, Interactive Description and Voicing
-      // alerts have differences and we are managing them manually.
-      alertToVoicing: false
-    }, options );
-
     super( options );
 
     // @protected
Index: friction/js/friction/view/FrictionScreenView.js
===================================================================
diff --git a/friction/js/friction/view/FrictionScreenView.js b/friction/js/friction/view/FrictionScreenView.js
--- a/friction/js/friction/view/FrictionScreenView.js	(revision 5eb1fe2b3fc85396e711095b5edb7302013f4715)
+++ b/friction/js/friction/view/FrictionScreenView.js	(date 1649769750139)
@@ -211,6 +211,12 @@
     // pdom
     this.pdomPlayAreaNode.pdomOrder = [ chemistryBookNode, this.magnifierNode ];
 
+    // voicing - Alerters announce through the BookNode which is composed with Voicing.
+    temperatureIncreasingAlerter.voicingAlertNode = chemistryBookNode;
+    temperatureDecreasingAlerter.voicingAlertNode = chemistryBookNode;
+    breakAwayAlerter.voicingAlertNode = chemistryBookNode;
+    bookMovementAlerter.voicingAlertNode = chemistryBookNode;
+
     // add reset button
     const resetAllButton = new ResetAllButton( {
       listener: () => {

@zepumph
Member

zepumph commented Apr 12, 2022

Maybe Alerter could have something explicit, like "Alerter.registerUtteranceToNode" which links up the Property needed, and there is also an unregister. This could be a bit more clunky, but it is also more flexible and explicit.

@jessegreenberg
Contributor Author

I like that, thanks. It is a little clunky and puts more responsibility on the client but it has the potential to reduce the complexity in Alerter.ts quite a bit.

@jessegreenberg
Contributor Author

OK, I like this a lot better. It puts more responsibility at usage sites, but it is way less intrusive and we get to keep the benefits of Alerter. With this we:

  1. Add a static registerUtteranceToVoicingNode that the client must call. It adds the VoicingNode's voicingCanSpeakProperty to the Utterance's canAnnounceProperties.
  2. Add a static Voicing.alertVoicingUtterance which asserts that an Utterance has at least one canAnnounceProperty before sending it to the voicingUtteranceQueue.
  3. If Alerter.alertToVoicing is true, Alerter.alert() must take an Utterance (not any IAlertable) so that it is clear that, when using Voicing, you need to provide an Utterance with canAnnounceProperties.

If you don't want to use registerUtteranceToVoicingNode, you can just give your Utterance the canAnnounceProperties it needs at construction. Again, that's up to the client.
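In code, those two options look roughly like this (using the statics from the full patch below; chemistryBookNode is the VoicingNode from Friction, and myAlerter is hypothetical):

    // Option A: register after construction with the proposed static.
    const utterance = new Utterance( { alert: new ResponsePacket() } );
    Alerter.registerUtteranceToVoicingNode( utterance, chemistryBookNode );

    // Option B: skip registration and give the Utterance its canAnnounceProperties up front.
    const otherUtterance = new Utterance( {
      alert: new ResponsePacket(),
      canAnnounceProperties: [ chemistryBookNode.voicingCanSpeakProperty ]
    } );

    // Either way, alert() can now forward the Utterance to Voicing.alertVoicingUtterance.
    myAlerter.alert( utterance );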

Full patch:

Index: scenery-phet/js/accessibility/describers/Alerter.ts
===================================================================
diff --git a/scenery-phet/js/accessibility/describers/Alerter.ts b/scenery-phet/js/accessibility/describers/Alerter.ts
--- a/scenery-phet/js/accessibility/describers/Alerter.ts	(revision 95223865a134b508bf13e927c7d960081eee5a32)
+++ b/scenery-phet/js/accessibility/describers/Alerter.ts	(date 1649781962641)
@@ -6,8 +6,8 @@
  */
 
 import optionize from '../../../../phet-core/js/optionize.js';
-import { Node, voicingUtteranceQueue } from '../../../../scenery/js/imports.js';
-import { IAlertable } from '../../../../utterance-queue/js/Utterance.js';
+import { Node, Voicing, VoicingNode } from '../../../../scenery/js/imports.js';
+import Utterance, { IAlertable } from '../../../../utterance-queue/js/Utterance.js';
 import UtteranceQueue from '../../../../utterance-queue/js/UtteranceQueue.js';
 import sceneryPhet from '../../sceneryPhet.js';
 
@@ -47,7 +47,13 @@
    * Alert to both description and voicing utteranceQueues, depending on if both are supported by this instance
    */
   alert( alertable: IAlertable ): void {
-    this.alertToVoicing && voicingUtteranceQueue.addToBack( alertable );
+    if ( this.alertToVoicing ) {
+      assert && assert( alertable instanceof Utterance,
+        'If alerting to voicing, the alertable needs to be an Utterance for canAnnounceProperties' );
+      Voicing.alertVoicingUtterance( alertable as Utterance );
+    }
+
+    // this.alertToVoicing && voicingUtteranceQueue.addToBack( alertable );
     this.alertDescriptionUtterance( alertable );
   }
 
@@ -64,6 +70,20 @@
   forEachUtteranceQueue( utteranceQueueCallback: UtteranceQueueCallback ): void {
     this.descriptionAlertNode && this.descriptionAlertNode.forEachUtteranceQueue( utteranceQueueCallback );
   }
+
+  public static registerUtteranceToVoicingNode( utterance: Utterance, voicingNode: VoicingNode ) {
+    const existingCanAnnounceProperties = utterance.canAnnounceProperties;
+    if ( !existingCanAnnounceProperties.includes( voicingNode.voicingCanSpeakProperty ) ) {
+      utterance.canAnnounceProperties = existingCanAnnounceProperties.concat( [ voicingNode.voicingCanSpeakProperty ] );
+    }
+  }
+
+  public static unregisterUtteranceToVoicingNode( utterance: Utterance, voicingNode: VoicingNode ) {
+    const existingCanAnnounceProperties = utterance.canAnnounceProperties;
+    const index = existingCanAnnounceProperties.indexOf( voicingNode.voicingCanSpeakProperty );
+    assert && assert( index > -1, 'voicingNode.voicingCanSpeakProperty is not on the Utterance, was it not registered?' );
+    utterance.canAnnounceProperties = existingCanAnnounceProperties.splice( index, 1 );
+  }
 }
 
 sceneryPhet.register( 'Alerter', Alerter );
Index: scenery/js/imports.ts
===================================================================
diff --git a/scenery/js/imports.ts b/scenery/js/imports.ts
--- a/scenery/js/imports.ts	(revision 6f738c1e3c52102506af6da96c0abe7b8a85945a)
+++ b/scenery/js/imports.ts	(date 1649774585866)
@@ -163,7 +163,7 @@
 export { default as voicingManager } from './accessibility/voicing/voicingManager.js';
 export { default as voicingUtteranceQueue } from './accessibility/voicing/voicingUtteranceQueue.js';
 export { default as Voicing } from './accessibility/voicing/Voicing.js';
-export type { VoicingOptions, SpeakingOptions } from './accessibility/voicing/Voicing.js';
+export type { VoicingOptions, SpeakingOptions, VoicingNode } from './accessibility/voicing/Voicing.js';
 export { default as ReadingBlockUtterance } from './accessibility/voicing/ReadingBlockUtterance.js';
 export { default as FocusDisplayedController } from './accessibility/FocusDisplayedController.js';
 export { default as FocusManager } from './accessibility/FocusManager.js';
Index: scenery/js/accessibility/voicing/Voicing.ts
===================================================================
diff --git a/scenery/js/accessibility/voicing/Voicing.ts b/scenery/js/accessibility/voicing/Voicing.ts
--- a/scenery/js/accessibility/voicing/Voicing.ts	(revision 6f738c1e3c52102506af6da96c0abe7b8a85945a)
+++ b/scenery/js/accessibility/voicing/Voicing.ts	(date 1649779419697)
@@ -716,5 +716,14 @@
 
 Voicing.VOICING_OPTION_KEYS = VOICING_OPTION_KEYS;
 
+Voicing.alertVoicingUtterance = ( utterance: Utterance ) => {
+  assert && assert( utterance.canAnnounceProperties.length > 0, 'canAnnounceProperties required.' );
+  voicingUtteranceQueue.addToBack( utterance );
+};
+
+// Export a type that lets you check if your Node is composed with Voicing
+const wrapper = () => Voicing( Node, 1 );
+export type VoicingNode = InstanceType<ReturnType<typeof wrapper>>;
+
 scenery.register( 'Voicing', Voicing );
 export default Voicing;
Index: friction/js/friction/view/book/BookMovementAlerter.js
===================================================================
diff --git a/friction/js/friction/view/book/BookMovementAlerter.js b/friction/js/friction/view/book/BookMovementAlerter.js
--- a/friction/js/friction/view/book/BookMovementAlerter.js	(revision 5eb1fe2b3fc85396e711095b5edb7302013f4715)
+++ b/friction/js/friction/view/book/BookMovementAlerter.js	(date 1649780738963)
@@ -10,7 +10,7 @@
 import BorderAlertsDescriber from '../../../../../scenery-phet/js/accessibility/describers/BorderAlertsDescriber.js';
 import DirectionEnum from '../../../../../scenery-phet/js/accessibility/describers/DirectionEnum.js';
 import MovementAlerter from '../../../../../scenery-phet/js/accessibility/describers/MovementAlerter.js';
-import { voicingUtteranceQueue } from '../../../../../scenery/js/imports.js';
+import { Voicing } from '../../../../../scenery/js/imports.js';
 import ResponsePacket from '../../../../../utterance-queue/js/ResponsePacket.js';
 import Utterance from '../../../../../utterance-queue/js/Utterance.js';
 import friction from '../../../friction.js';
@@ -108,7 +108,7 @@
       // Once touching, speak the alert
       if ( !wasTouching && isTouching && model.numberOfAtomsShearedOffProperty.value === 0 ) {
         this.alert( this.bottomDescriptionUtterance );
-        voicingUtteranceQueue.addToBack( this.bottomVoicingUtterance );
+        Voicing.alertVoicingUtterance( this.bottomVoicingUtterance );
       }
     } );
   }
@@ -153,7 +153,7 @@
           this.alert( this.moveDownToRubHarderUtterance );
 
           // Support voicing for this hint
-          voicingUtteranceQueue.addToBack( this.moveDownToRubHarderUtterance );
+          Voicing.alertVoicingUtterance( this.moveDownToRubHarderUtterance );
 
           // This means that we will get left/right alerts again after a "move down" cue
           this.separatedAlertPair.reset();
Index: friction/js/friction/view/FrictionScreenView.js
===================================================================
diff --git a/friction/js/friction/view/FrictionScreenView.js b/friction/js/friction/view/FrictionScreenView.js
--- a/friction/js/friction/view/FrictionScreenView.js	(revision 5eb1fe2b3fc85396e711095b5edb7302013f4715)
+++ b/friction/js/friction/view/FrictionScreenView.js	(date 1649782263296)
@@ -12,7 +12,7 @@
 import ScreenView from '../../../../joist/js/ScreenView.js';
 import ResetAllButton from '../../../../scenery-phet/js/buttons/ResetAllButton.js';
 import ThermometerNode from '../../../../scenery-phet/js/ThermometerNode.js';
-import { voicingUtteranceQueue } from '../../../../scenery/js/imports.js';
+import { Voicing, voicingUtteranceQueue } from '../../../../scenery/js/imports.js';
 import { Node } from '../../../../scenery/js/imports.js';
 import SoundClip from '../../../../tambo/js/sound-generators/SoundClip.js';
 import SoundLevelEnum from '../../../../tambo/js/SoundLevelEnum.js';
@@ -29,6 +29,7 @@
 import BookNode from './book/BookNode.js';
 import BookRubSoundGenerator from './book/BookRubSoundGenerator.js';
 import BreakAwayAlerter from './BreakAwayAlerter.js';
+import Alerter from '../../../../scenery-phet/js/accessibility/describers/Alerter.js';
 import CoolingSoundGenerator from './CoolingSoundGenerator.js';
 import FrictionScreenSummaryNode from './FrictionScreenSummaryNode.js';
 import GrabbedDescriber from './GrabbedDescriber.js';
@@ -97,8 +98,7 @@
                                                        frictionStrings.a11y.resetSimMoreObservationSentence : null;
 
       this.alertDescriptionUtterance( atomsJiggleTinyBitUtterance );
-
-      voicingUtteranceQueue.addToBack( atomsJiggleTinyBitUtterance );
+      Voicing.alertVoicingUtterance( atomsJiggleTinyBitUtterance );
     };
 
     let amplitudeIncreasedEnoughForSettledAndCoolAlert = false;
@@ -260,6 +260,16 @@
     // add a node that creates a "play area" accessible section in the PDOM
     this.pdomControlAreaNode.pdomOrder = [ resetAllButton ];
 
+    // voicing
+    Alerter.registerUtteranceToVoicingNode( temperatureIncreasingAlerter.maxTempUtterance, chemistryBookNode );
+    Alerter.registerUtteranceToVoicingNode( temperatureIncreasingAlerter.temperatureJiggleUtterance, chemistryBookNode );
+    Alerter.registerUtteranceToVoicingNode( temperatureDecreasingAlerter.utterance, chemistryBookNode );
+    Alerter.registerUtteranceToVoicingNode( breakAwayAlerter.utterance, chemistryBookNode );
+    Alerter.registerUtteranceToVoicingNode( bookMovementAlerter.bottomDescriptionUtterance, chemistryBookNode );
+    Alerter.registerUtteranceToVoicingNode( bookMovementAlerter.bottomVoicingUtterance, chemistryBookNode );
+    Alerter.registerUtteranceToVoicingNode( bookMovementAlerter.moveDownToRubHarderUtterance, chemistryBookNode );
+    Alerter.registerUtteranceToVoicingNode( atomsJiggleTinyBitUtterance, chemistryBookNode );
+
     // @private
     this.resetFrictionScreenView = () => {
 

The bad part in particular is in FrictionScreenView, where this was added:

    Alerter.registerUtteranceToVoicingNode( temperatureIncreasingAlerter.maxTempUtterance, chemistryBookNode );
    Alerter.registerUtteranceToVoicingNode( temperatureIncreasingAlerter.temperatureJiggleUtterance, chemistryBookNode );
    Alerter.registerUtteranceToVoicingNode( temperatureDecreasingAlerter.utterance, chemistryBookNode );
    Alerter.registerUtteranceToVoicingNode( breakAwayAlerter.utterance, chemistryBookNode );
    Alerter.registerUtteranceToVoicingNode( bookMovementAlerter.bottomDescriptionUtterance, chemistryBookNode );
    Alerter.registerUtteranceToVoicingNode( bookMovementAlerter.bottomVoicingUtterance, chemistryBookNode );
    Alerter.registerUtteranceToVoicingNode( bookMovementAlerter.moveDownToRubHarderUtterance, chemistryBookNode );
    Alerter.registerUtteranceToVoicingNode( atomsJiggleTinyBitUtterance, chemistryBookNode );

I'd like to review with @zepumph again before proceeding with this change. @zepumph, are you OK with this?

@jessegreenberg
Contributor Author

Discussed with @zepumph today:

  1. For the case of the FrictionScreenView example above, it would make more sense to speak through the ScreenView (not the chemistryBookNode), since we are describing the global context of the sim. We talked about how we might want to register to more than one Node and have each Node's voicingCanSpeakProperty be checked with an OR operation instead of an AND (a sketch of that idea follows at the end of this comment). That is a complexity we are not worried about now; if we wanted it, we could create a new register function that takes multiple Nodes and links their canAnnounceProperties that way.
  2. We should put registerUtteranceToVoicing on Voicing. Let's make sure we don't duplicate the implementation, so the instance method setVoicingUtterance should use the static registerUtteranceToVoicingNode.

Otherwise we are OK to move forward with this change.
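If we ever do want the "or" behavior from item 1, one way to get it without a new register function would be to hand the Utterance a single derived Property, something like this (illustrative only, using axon's DerivedProperty; nodeA and nodeB stand in for the registered VoicingNodes):

    // true when the Utterance may be announced through EITHER Node
    const eitherCanSpeakProperty = new DerivedProperty(
      [ nodeA.voicingCanSpeakProperty, nodeB.voicingCanSpeakProperty ],
      ( aCanSpeak, bCanSpeak ) => aCanSpeak || bCanSpeak
    );
    utterance.canAnnounceProperties = [ eitherCanSpeakProperty ];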

@jessegreenberg
Contributor Author

jessegreenberg commented Apr 14, 2022

I have made commits now for this issue. Here are notes:

@jessegreenberg
Contributor Author

There are no more usages of voicingUtteranceQueue.addToBack except for the ones in tests and in Voicing.ts. The issues above are closed or ready for review, and I think this one is as well. The above comment had notes for myself; here is what should be flagged in particular for review:

  • Can we move voicingUtteranceQueue.addToBack to bad-sim-text?
  • I couldn't seem to use static in the VoicingClass definition; is that expected for a trait?
  • I decided not to have Voicing.speakContent use Voicing.alertUtterance. collectResponse may take a different Utterance or no Utterance and may return an IAlertable instead of a string. I think that is correct.
  • In addition to registerUtteranceToVoicingNode, I added a registerUtteranceToVoicing which can take a Node and registers to its voicingVisibleProperty, for ease of use with the ScreenView. See friction/FrictionScreenView for an example. (A rough usage sketch follows this list.)
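Roughly, the two entry points look like this at a usage site (signatures assumed from the bullet above and from our earlier decision to put these statics on Voicing; see the commits and friction/FrictionScreenView for the real thing):

    // For a Node composed with Voicing: gate the Utterance on its voicingCanSpeakProperty.
    Voicing.registerUtteranceToVoicingNode( utterance, chemistryBookNode );

    // For a plain Node such as a ScreenView: gate on its voicingVisibleProperty instead.
    Voicing.registerUtteranceToVoicing( utterance, screenView );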

@zepumph, otherwise things match what we discussed in #1403 (comment). Assigning this to you for review, but let me know if in-meeting review would be best.

@zepumph
Member

zepumph commented Jun 9, 2022

I couldn't seem to use static in the VoicingClass definition; is that expected for a trait?

Yes, because it is an anonymous class; if it were a static in the VoicingClass definition, you would have to mix it in to be able to call these methods, i.e. Voicing( Node ).alertUtterance.
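Here is a generic TypeScript illustration of that point (not the real Voicing trait):

    type Constructor = new ( ...args: any[] ) => object;

    // A trait-style mixin returns an anonymous class expression...
    const MixinExample = <T extends Constructor>( Base: T ) => {
      return class extends Base {
        public static alertUtterance(): void { /* ... */ }
      };
    };

    class ExampleNode {}

    // ...so a static declared inside it is only reachable on a mixed class:
    MixinExample( ExampleNode ).alertUtterance(); // works, but requires mixing first
    // MixinExample.alertUtterance();             // not available on the mixin function itself

    // Hence the static is attached to the exported Voicing function directly instead, e.g.
    // Voicing.alertUtterance = ( utterance ) => { ... };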

  • I decided not to have Voicing.speakContent use Voicing.alertUtterance. collectResponse may take a different Utterance or no Utterance and may return an IAlertable instead of a string. I think that is correct.

Excellent!

  • In addition to registerUtteranceToVoicingNode, I added a registerUtteranceToVoicing which can take a Node and registers to its voicingVisibleProperty, for ease of use with the ScreenView. See friction/FrictionScreenView for an example.

Looks like these got combined into a single function at one point. I like that!

In general I don't love registerUtteranceToVoicingNode, but I think that for the most part it is because we tacked it onto existing implementations. In the future, the describer/alerter will have a Node to register its Utterance to upon construction, and I think the code will look better then (a speculative sketch follows). Ready to close?
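For example, a future alerter might look something like this (purely speculative shape; class, option, and Utterance names are made up):

    class MyFutureAlerter extends Alerter {
      private readonly myUtterance: Utterance;

      public constructor( voicingNode: VoicingNode, providedOptions?: AlerterOptions ) {
        super( providedOptions );

        // The alerter owns its Utterance and registers it to the Node it speaks through at
        // construction time -- no per-sim registration boilerplate in the ScreenView.
        this.myUtterance = new Utterance( { alert: new ResponsePacket() } );
        Voicing.registerUtteranceToVoicingNode( this.myUtterance, voicingNode );
      }
    }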

zepumph removed their assignment Jun 9, 2022
zepumph added a commit to phetsims/chipper that referenced this issue Jun 9, 2022
@jessegreenberg
Contributor Author

jessegreenberg commented Aug 8, 2022

OK, thanks for review. Closing. I see voicingUtteranceQueue.addToBack in bad-sim-text.
