Lay of the land / help / overview command #201898
@kieferrm had an idea where we would use Copilot to describe the current state and the actions that can be taken to a screen reader user. Imagine what we currently do with the contextual help dialogs, but with more information.

In thinking about this, I wondered if we could have a command that describes the current lay of the land / overview. For example, if a user is unsure where the focus has gone, or if something is not working as expected, it might say "there are 2 editor groups open. The first has a bash terminal and the second has 4 TypeScript files. The extensions view is focused and contains 3 extensions that suggest reloading the window." This would save time, as otherwise a user would have to navigate all over the workbench to gather this information.

@jooyoungseo, @rperez030 please let us know what you think of these ideas.

One thing we'd want to consider is whether this content should be read using Copilot voice or presented in an accessible view. I imagine the latter.
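To make the idea concrete, here is a minimal sketch of how an extension could gather some of this workbench state using the public `vscode.window.tabGroups` and `vscode.window.terminals` APIs. The command ID and the plain information-message output are placeholders, not a proposal: a real implementation would hand this context to Copilot and surface the response in the accessible view or read it aloud.

```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
	context.subscriptions.push(
		// Hypothetical command ID, chosen for illustration only.
		vscode.commands.registerCommand('accessibility.layOfTheLand', () => {
			const parts: string[] = [];

			// Summarize editor groups and the tabs each one contains.
			const groups = vscode.window.tabGroups.all;
			parts.push(`There are ${groups.length} editor group(s) open.`);
			groups.forEach((group, i) => {
				const labels = group.tabs.map(tab => tab.label).join(', ');
				parts.push(`Group ${i + 1} contains: ${labels || 'no tabs'}.`);
			});

			// Summarize open terminals by name (e.g. "bash", "zsh").
			const terminals = vscode.window.terminals;
			if (terminals.length > 0) {
				parts.push(`${terminals.length} terminal(s) open: ${terminals.map(t => t.name).join(', ')}.`);
			}

			// Placeholder output: a real implementation would pass this context
			// to Copilot for a richer description and present the response in
			// the accessible view (or read it aloud).
			vscode.window.showInformationMessage(parts.join(' '));
		})
	);
}
```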
That sounds like a super powerful idea. My initial thought is that this should be part of the accessible view, simply because not everyone will have access to GitHub Copilot, which is a paid product. But Copilot would make much more sense because it would allow the user to ask more specific questions, for example, "tell me more about the TypeScript files" or "how can I close that terminal?". If, on top of that, Copilot could perform actions in the editor on behalf of the user... combine that with the speech feature, and we'll have a powerful assistant that would help multiple user groups in a number of situations.
@meganrogge I would vote for Copilot voice instead of the accessible view so that we can give a clear cue for users to distinguish the response from other text info in the accessible view.
What about Braille users? Is Copilot Voice available to users who do not have access to GitHub Copilot? Is it possible to interact with messages coming from Copilot Voice using the screen reader? I personally haven't tried Copilot Voice.
That is a really great point @rperez030. I'm not certain, but would hope that Copilot Voice works with braille devices already. @bpasero do you know this? |
Braille support is really a screen reader feature. I don't know much about Copilot Voice. If it is supposed to be a voice-only experience, it probably doesn't make sense for it to work with braille, but then all the features that are available through Copilot Voice should also be available through Copilot Chat.
@meganrogge can you clarify what you are asking me? I miss some context on what it would mean to support braille from the extension; maybe someone can clarify for me.
I was confused. I thought Copilot Voice allowed for text to be read to the user. The link you shared looks like what I'd actually want to use, thanks. So, a user would trigger the help command, we'd ask Copilot to describe the lay of the land, and we'd use that library to read the response aloud. We'd have to see if the library integrates with braille. If it does not, I would guess we'd want to…
The ability to read out text is currently not exposed from the library or the extension, but I think it could be added.
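If it were added, one plausible shape would be a command that other extensions could invoke. The following is purely a hypothetical sketch; per the comment above, no such command exists today, and the `speech.readAloud` ID and its argument shape are invented for illustration.

```typescript
import * as vscode from 'vscode';

// Hypothetical: a text-to-speech command the speech extension could expose.
// The command ID and argument shape below are assumptions, not a real API.
async function readAloud(text: string): Promise<void> {
	await vscode.commands.executeCommand('speech.readAloud', { text });
}

// E.g. reading a lay-of-the-land summary to the user:
void readAloud('There are 2 editor groups open. The first has a bash terminal.');
```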
On a Teams call, @rperez030 used an NVDA extension to get this lay of the land: "This image appears to be a screenshot of an online video conference call interface. There are four participants shown in the conference, each with a distinct video or profile image and name displayed. Starting from the top left corner of the window, you can see a row of various icons and the duration of the current call, which is 03:35. Next to the call duration, there are options for engaging in the video conference call, such as Chat, People (highlighted with a number indicating there are 4 people in the call), Raise hand, React, View options, Notes, Whiteboard, Apps, and More options. There is a prominent Leave button in red, indicating the option to exit the call. In the main area of the interface, there are four participants:
It's important to note that all names and any personal characteristics described here are derived from the visual content of the screenshot and do not violate the privacy rules set forth. The video conference interface itself bears a strong resemblance to Microsoft Teams or a similar application."
We now have this with the Copilot Vision extension. It will be enhanced when we merge it into vscode-copilot.