
Accessibility (A11y) #167

Closed
nvzqz opened this issue Feb 8, 2021 · 60 comments · Fixed by #2294
Labels
  • accessibility: More accessible to e.g. the visually impaired
  • design: Some architectural design work needed
  • feature: New feature or request
  • help wanted: Extra attention is needed

Comments

@nvzqz

nvzqz commented Feb 8, 2021

Is your feature request related to a problem? Please describe.
Not all GUI app users are sighted or can see well. This project does not indicate any effort to make apps accessible to the visually impaired.

Describe the solution you'd like
For this project to make a conscious and focused effort to support apps that can be used by the visually impaired.

Describe alternatives you've considered
None.

Additional context
I (and many others) will not use a GUI library in production unless it is designed with accessibility in mind.

@nvzqz nvzqz added the feature New feature or request label Feb 8, 2021
@nvzqz nvzqz changed the title Accessibility Accessibility (A11y) Feb 8, 2021
@emilk
Owner

emilk commented Feb 9, 2021

Off the top of my head, here are some tasks for improving accessibility:

  1. A high-contrast, large text visual theme
  2. Output data necessary for screen readers, braille displays and similar tools
  3. Hook up such data in egui_web (and maybe egui_glium)

The accessibility data needs structure ("there is a window here with a scroll area") and semantics ("this is a heading, this is a checkbox"). What are the good, open standards for such accessibility data?

Is there an accessibility API for the web that doesn't require egui_web to create a fake DOM that mirrors the GUI?

Ideally I'd like egui to just output the GUI structure as e.g. JSON and pipe that through some accessibility API.
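
A hypothetical sketch of what such structured output could look like (this schema is purely illustrative, not an existing standard or anything egui emits):

```json
{
  "role": "window",
  "title": "My App",
  "children": [
    { "role": "heading", "level": 1, "text": "Settings" },
    { "role": "checkbox", "label": "Dark mode", "checked": true },
    { "role": "scroll_area", "children": [
      { "role": "button", "label": "Save" }
    ]}
  ]
}
```

The key ingredients are the two things mentioned above: structure (nesting of windows and scroll areas) and semantics (roles like heading and checkbox).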

I'd appreciate some help with this as I don't have any experience here!

@emilk emilk added design Some architectural design work needed help wanted Extra attention is needed labels Feb 9, 2021
@follower
Contributor

follower commented Feb 9, 2021

@emilk There's some potentially helpful content in this Bevy issue:

@emilk
Owner

emilk commented Feb 9, 2021

Thanks for those links @follower !

@follower
Contributor

follower commented Feb 9, 2021

I'd hoped I might've been able to find some Free/Open Source Software developer-specific accessibility tools/guidelines[0] but didn't manage to find much from a brief search.

[0] I wonder if there's one or more companies that might want to support such an effort? *cough* :D

There's a reasonable amount of coverage of WWW/HTML/DOM aspects but less so for Canvas/desktop applications. (There's some older, somewhat "enterprisey" content but it's a little difficult to separate the wheat from the SEO content marketing...)

A couple of links that might still have some applicability:

Additional thoughts

There definitely seems to be an opportunity for egui to make accessibility a strong part of its story, which has the potential to be compelling for both legally mandated & philosophical reasons--and seems consistent with the values of the Rust ecosystem itself.

Of course, if it were easy everyone would be doing it, I guess...

Based on the thread I linked above perhaps the recent Bevy+egui integration might be a good place to explore accessibility-related development further.

As I mentioned in one of the comments in that thread (while quoting the godot-accessibility dev):

Examples like "Working alongside sighted developers who would prefer a visual editor" really highlight to me the importance that the tools we create not exclude people from being part of a development team by being inaccessible.

Hope some of the linked resources might be useful.


More links

Update: I may have subsequently gone a little overboard :D but figured I might as well put these here too:

Update (August 2021):

@emilk
Owner

emilk commented Feb 11, 2021

I'd like to point out that I think this is a very important issue, but I fear it is also a pretty large task. So far egui is only a hobby project, and I have just a few hours per week to spend on it, so any help here would be greatly appreciated!

@follower
Contributor

[Started writing this almost a week ago so just wrapped it up quickly to finally get it posted. :) ]

Appreciate you providing that context, @emilk.

At a meta level, based on what I've learned (hopefully correctly :) ) from other communities with regard to accessibility & inclusiveness, I'm conscious of these aspects:

  1. Where possible, it seems best to build on existing information, experience & resources provided by people who are directly affected by whether or not the related functionality is included.

    (i.e. It's not inclusive to expect the people affected to do the work of making software accessible to them.)

  2. However, it's also important to develop accessibility-related functionality with the input of those affected.

    (i.e. "Nothing About Us Without Us".)

  3. It would be nice for any code and/or research to be able to be re-used by other projects/developers, so that the effort/knowledge required to make applications/tools more accessible & inclusive is reduced--thus leading to more projects being more accessible, more earlier. ;)

(And, while in my mind a site/repo acting as "one stop shop" on "how to make my project more accessible & inclusive" seems beneficial, for now I'm limiting myself to just documenting in this issue what I learn. :D )

Will follow-up with further specifics.

@follower
Contributor

Existing information, experience & resources

In light of (1) above, I decided to revisit the work done in https://github.com/lightsoutgames/godot-accessibility to see what I could learn, and (re-)discovered that the included Godot-specific plugin https://github.com/lightsoutgames/godot-tts (see) is actually built on top of a Rust crate https://crates.io/crates/tts (see & see) (all developed by @ndarilek).

egui & tts proof-of-concept

Having discovered tts, I thought I'd see what it would take to get a proof-of-concept with egui & tts running together.

The initial result is available here: https://gitlab.com/RancidBacon/egui-and-tts

I was intending to document the functionality & prerequisites/build process better (as I ran into a couple of issues along the way--still need to write them up) but was losing forward momentum, so I've just made it public as-is.

The proof-of-concept features:

  • Two egui buttons.
  • When the mouse is hovered over a button its tooltip is spoken via TTS.
  • Keyboard navigation via TAB / Shift-TAB key.
  • When keyboard navigation is used navigating to a button speaks its label.
  • Speech when a button is pressed (via Enter) or clicked.
  • Some attempt to not have a cacophony of sound through use of "cool down" etc. (I assume there's probably some standard set of edge-cases to handle with regard to this.)
  • Code written by someone still learning both Rust & idiomatic Rust. (That would be me. :) )
  • An, on reflection, probably not very TTS inclusive pun-based app name of "WhaTTSApp" (i.e. "What TTS App").
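
The "cool down" idea above can be sketched as a small gate that suppresses immediate repeats of the same announcement. This is an illustrative sketch, not code from the actual proof-of-concept; all names here are made up:

```rust
use std::time::{Duration, Instant};

/// Hypothetical "cool down" gate for TTS announcements: a repeated phrase
/// is suppressed unless enough time has passed since it was last spoken.
struct SpeechGate {
    min_gap: Duration,
    last_spoken: Option<Instant>,
    last_text: Option<String>,
}

impl SpeechGate {
    fn new(min_gap: Duration) -> Self {
        Self { min_gap, last_spoken: None, last_text: None }
    }

    /// Returns true if `text` should be spoken now: either it is new text,
    /// or the cool-down period since the last announcement has elapsed.
    fn should_speak(&mut self, text: &str, now: Instant) -> bool {
        let is_repeat = self.last_text.as_deref() == Some(text);
        let too_soon = self
            .last_spoken
            .map(|t| now.duration_since(t) < self.min_gap)
            .unwrap_or(false);
        if is_repeat && too_soon {
            return false; // same phrase, too soon: stay quiet
        }
        self.last_spoken = Some(now);
        self.last_text = Some(text.to_owned());
        true
    }
}
```

A real implementation would also need to handle interruptions (e.g. cancelling in-progress speech when focus moves), which is presumably among the standard edge-cases mentioned above.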

Ideally, if similar functionality were integrated into egui, it would be handled without requiring the additional setup code needed in the proof-of-concept.

But the main conclusion is that, yes, it's possible to get egui & tts working together (at least on Linux) without too much trouble, though it has highlighted some areas of the API that might benefit from further development.

(Also, it's entirely possible the way I've implemented things is terrible from the perspective of someone who relies on TTS which would also be helpful feedback. It also doesn't deal with other accessibility support which standard OS-level UI toolkits provide but is hopefully at least a useful starting point.)

Feedback

With regard to (2) above, I was aware that @ndarilek had a pre-existing interest in Bevy, so I was going to ping him about the fact that there was an egui+bevy integration and that this issue existed, to see if he was open to providing feedback.

However, in the process of researching egui keyboard control I discovered he was way ahead of me :) and was already active in the project issue tracker at #31 (comment).

[So, hi @ndarilek! Also, I have just noticed that your bio mentions you're in Texas, so I imagine given the current power/weather situation there, GitHub issues are unlikely to be a priority but I'd welcome your feedback on the proof-of-concept at some point if you're open to doing so. I appreciate all the existing effort that you've put into sharing your experiences & motivations and developing tts & related crates. Thanks! (And...wow, the weather/power situation sounds crappy, hope you're doing okay.)]

If anyone else with experience with TTS would like to try out the proof-of-concept & provide feedback that would also be appreciated--particularly with regard to aspects that would benefit from being designed in from the start.

Next steps

Unfortunately at this point in time I can't make any commitment to further development on this (thanks to some combination of ADHD & finances :) ) but at a minimum hopefully this work may serve as a useful building block or step in the right direction.

@follower
Contributor

BTW @nvzqz do you have specific evaluation criteria or a check-list in mind that could serve as a framework/target during design/development?

@follower
Contributor

Also, a couple of relevant recent items discovered re: GTK4 & Accessibility:

@ndarilek
Contributor

ndarilek commented Feb 17, 2021 via email

@emilk emilk added the accessibility More accessible to e.g. the visually impaired label Feb 20, 2021
@emilk
Owner

emilk commented Feb 20, 2021

Amazing work @follower !

It seems to me that #31 is a high-priority issue for this. Once that is fixed, we could add a mode where egui emits events in its Output whenever a widget is selected and clicked. That could then easily be plugged into the tts backend.

@ndarilek
Contributor

ndarilek commented Mar 2, 2021 via email

@emilk
Owner

emilk commented Mar 6, 2021

@ndarilek There is ctx.memory().has_kb_focus(widget_id), and I just added gained_kb_focus as well.

So one design would be that widgets emit an event to egui::Output when they gain keyboard focus so that the integration can e.g. read their contents with TTS. For instance, egui/src/widgets/text_edit.rs would need something like:

ui.memory().interested_in_kb_focus(id);
if ui.memory().gained_kb_focus(id) {
    ui.output().events.push(OutputEvent::WidgetGainedFocus(WidgetType::TextEdit, text.clone()));
}

Also, is there some way to get a widget based on its ID?

Widgets are pieces of code that are run each frame. egui stores nothing about a widget, so there is nothing to get. See https://docs.rs/egui/0.10.0/egui/#understanding-immediate-mode

@emilk
Owner

emilk commented Mar 7, 2021

#31 has been closed - you can now move keyboard focus to any clickable widget with tab/shift tab.

Next up: I'm gonna add some outgoing events from egui every time a new widget is given focus. That should then be fairly easy to hook up to a TTS system.

emilk added a commit that referenced this issue Mar 7, 2021
@emilk
Owner

emilk commented Mar 7, 2021

egui now outputs events when widgets gain focus: https://github.com/emilk/egui/blob/master/egui/src/data/output.rs#L56

This should be enough to start experimenting with a screen reader, and should provide a framework for building more features around. There's still a lot more to do!

@ndarilek
Contributor

ndarilek commented Mar 8, 2021 via email

emilk added a commit that referenced this issue Mar 8, 2021
@emilk
Owner

emilk commented Mar 8, 2021

@ndarilek thanks for helping out!

The problem with storing references to widgets is that in immediate mode, widgets are not data, but code that is run once per frame. See for instance the toggle_switch.rs example widget or the Understanding immediate mode section of the docs.

As for events: I've just added so that egui outputs events when a widget is selected (given keyboard focus). This can then be hooked up to a screen-reader. The selected widget is controlled with space/return (buttons etc), arrow keys (sliders) or keyboard (text edit). You advance to the next widget with tab (or shift-tab to go backwards).

You can check out the latest master, cargo run --release, and use TAB to select widgets. You should be able to see this in the Backend panel:

[Screenshot (2021-03-08): the Backend panel showing widget focus events]

I'm gonna try hooking this up to a simple TTS system in egui_glium to close the loop so we can start playing with it for real.
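
The event-to-speech hookup described above can be sketched roughly like this. Note that `OutputEvent` and `WidgetType` here are simplified stand-ins for the types egui emits, not the actual egui API; the resulting string is what a TTS backend (e.g. the tts crate) would be asked to speak:

```rust
// Simplified stand-ins for egui's output event types (illustrative only).
#[derive(Debug)]
enum WidgetType {
    Button,
    Slider,
    TextEdit,
}

#[derive(Debug)]
enum OutputEvent {
    WidgetGainedFocus { kind: WidgetType, label: String },
}

/// Map a focus event to the phrase a screen-reader/TTS backend would speak.
fn describe(event: &OutputEvent) -> String {
    match event {
        OutputEvent::WidgetGainedFocus { kind, label } => match kind {
            WidgetType::Button => format!("{label}, button"),
            WidgetType::Slider => format!("{label}, slider"),
            WidgetType::TextEdit => format!("{label}, text edit"),
        },
    }
}
```

The integration would drain such events from egui's output each frame and feed the descriptions to the TTS engine.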

emilk added a commit that referenced this issue Mar 8, 2021
@parasyte
Contributor

parasyte commented Mar 8, 2021

I recognize the challenges immediate-mode GUIs pose. I think, though, that there does need to be some sort of central registry of widgets independent from the GUI code, queryable by ID.

I almost posted my 2 cents here yesterday on exactly this topic. I believe that immediate mode GUIs can build an in-memory representation of the UI as a DAG, just as easily as a retained mode GUI can. This DAG can be queryable, individual elements can provide additional accessibility context, and user code can go the extra mile to provide application-specific context as needed.

egui already retains some state for UI elements between each frame and identifies those elements by a unique ID:

ui.label("\
Widgets that store state require unique and persisting identifiers so we can track their state between frames.\n\
For instance, collapsable headers needs to store wether or not they are open. \
Their Id:s are derived from their names. \
If you fail to give them unique names then clicking one will open both. \
To help you debug this, an error message is printed on screen:");
ui.collapsing("Collapsing header", |ui| {
    ui.label("Contents of first foldable ui");
});
ui.collapsing("Collapsing header", |ui| {
    ui.label("Contents of second foldable ui");
});

This is necessary for several reasons, but the same trick can be used on each frame to create the structures necessary for interacting with accessibility APIs.

This is a design I have been kicking around in my head for quite a long time. I like the flexibility and no-nonsense approach of immediate mode GUIs, but I am also aware of (some of) the needs of accessibility software in relation to GUIs. Something of a hybrid immediate/retained approach is the best of both worlds: the API remains immediate, and some state is retained for ease of use and doubles as a source of truth for screen readers.
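
The hybrid idea above might be sketched as follows: each frame, widget code also records an accessibility node into a registry keyed by the widget's Id, which assistive-technology code can then query. All names here are hypothetical, not egui's actual API:

```rust
use std::collections::HashMap;

// Stand-in for egui's widget Id (illustrative only).
type Id = u64;

/// One node of the queryable accessibility DAG.
#[derive(Debug, Clone, PartialEq)]
struct AccessNode {
    role: &'static str, // e.g. "button", "checkbox", "heading"
    label: String,
    parent: Option<Id>,
}

/// Registry rebuilt each frame, keeping the immediate-mode flow intact
/// while doubling as a source of truth for screen readers.
#[derive(Default)]
struct AccessRegistry {
    nodes: HashMap<Id, AccessNode>,
}

impl AccessRegistry {
    /// Called by widget code as it runs each frame.
    fn record(&mut self, id: Id, node: AccessNode) {
        self.nodes.insert(id, node);
    }

    /// Queryable by accessibility code between frames.
    fn get(&self, id: Id) -> Option<&AccessNode> {
        self.nodes.get(&id)
    }

    /// Cleared at the start of each frame before widgets re-record themselves.
    fn clear(&mut self) {
        self.nodes.clear();
    }
}
```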

@emilk
Owner

emilk commented Mar 8, 2021

If you check out main, edit build_demo_web.sh to enable the screen_reader feature, and run the script, you should now hear a voice describing the focused widget as you move the selection with tab.

This is still very early, and more events need to be hooked up (e.g. you probably want to hear a voice when editing a label, not just when first focusing it).

There is one more thing to consider: how should an egui app know whether or not to read things out loud? I can't find a JavaScript function for "does the user want a screen reader". The same problem exists on native, but I'm sure with some digging one can find platform-specific ways of doing so.

@emilk
Owner

emilk commented Mar 8, 2021

@parasyte Having egui keep a shadow-DOM behind the scenes is a big change though, and requires a lot of work and planning. Before doing that I'd like to be absolutely sure we need it. In my proof-of-concept screen reader I get away with describing just the newly focused widget so there is no need to store and manage a bunch of state behind the scenes.

@CAD97

CAD97 commented Oct 25, 2021

While I'm not a user of assistive technology, I want to bring up a point: I think "implementing TTS" is the wrong way of looking at the problem.

Someone using a screen reader already is using a screen reader, whether it be Microsoft Narrator, macOS VoiceOver, or whatever. Support for screen readers thus isn't implementing TTS readouts for your own window, but letting existing screen readers effectively read your window.

I don't know what that entails, to be completely honest. I just want to make sure that we're not accidentally going the wrong direction (which I kinda got vibes from "how do we tell if the browser wants TTS").

@ndarilek
Contributor

Agreed, but that problem is a lot more complicated than a single, or even a small handful, of developers can manage, and is further complicated by immediate mode.

AccessKit aims to solve it in a cross-platform way, and when that's available, an egui integration might be practical. But until that happens, implementing "real" accessibility is likely beyond us.

@mwcampbell
Contributor

AccessKit, which @ndarilek mentioned in the previous comment, is now far enough along that we can start working on integration into egui. I have a very basic integration of AccessKit into egui on this branch.

Here's a quick summary of how AccessKit works, and how it fits into egui. AccessKit takes a push-based approach to accessibility. That is, for each frame where something in the UI has changed, the application creates a TreeUpdate, which can be either a complete tree snapshot or an incremental update, and pushes it to an AccessKit platform adapter. That platform adapter can then handle requests from assistive technologies (e.g. screen readers) without having to call back into the application, except when the user requests an action such as changing the keyboard focus or doing the equivalent of a mouse click. So in principle, this model is a good fit for an immediate-mode GUI. (In practice, the implementation could probably be made more efficient, e.g. by eliminating repeated heap allocations.) My integration creates a complete AccessKit tree for every egui frame, and AccessKit does comparisons to figure out what actually changed and fire the appropriate events.
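
The push-based flow described above can be illustrated with a heavily simplified sketch. The real AccessKit types (TreeUpdate, Node, NodeId, the platform adapters) are much richer than these stand-ins; this only shows the data flow: the application pushes a snapshot each frame, and the adapter diffs it against the previous one to decide what events to fire:

```rust
/// Simplified stand-in for an AccessKit node (illustrative only).
#[derive(Debug, Clone, PartialEq)]
struct Node {
    role: &'static str,
    label: String,
}

/// Simplified stand-in for a complete-snapshot TreeUpdate.
#[derive(Clone, Default)]
struct TreeUpdate {
    nodes: Vec<(u64, Node)>, // (node id, node)
}

/// Simplified stand-in for a platform adapter that diffs snapshots.
#[derive(Default)]
struct PlatformAdapter {
    current: TreeUpdate,
}

impl PlatformAdapter {
    /// Accept a full snapshot; return the ids of nodes that are new or
    /// changed, standing in for the events AccessKit would fire to
    /// assistive technologies.
    fn push(&mut self, update: TreeUpdate) -> Vec<u64> {
        let changed = update
            .nodes
            .iter()
            .filter(|(id, node)| {
                self.current
                    .nodes
                    .iter()
                    .find(|(old_id, _)| old_id == id)
                    .map_or(true, |(_, old)| old != node)
            })
            .map(|(id, _)| *id)
            .collect();
        self.current = update;
        changed
    }
}
```

Pushing an identical snapshot yields no changes, which is why rebuilding the full tree every egui frame is workable (if not yet maximally efficient, as noted above).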

AccessKit itself is still far from complete, and so is the integration. Most notably, I still need to work on support for text edit controls, as well as reading the value of a slider, and lots of smaller stuff. Also, AccessKit is only implemented for Windows so far. Still, at this point, you can run the eframe hello_world example on Windows, start up any Windows screen reader (Narrator, NVDA, JAWS...), tab around and get feedback, or navigate with the screen reader's commands. AccessKit and egui support one screen-reader-initiated action so far: setting the keyboard focus. It won't be hard to implement more.

I've modified egui-winit to use a proof-of-concept integration of AccessKit into winit which I've posted in my fork of that project. That direct integration into winit isn't likely to be accepted upstream, so I'll ultimately have to come up with another solution for that part.

It's also worth discussing how this work should relate to the existing "widget info" support. My AccessKit integration into egui currently uses the widget info. But another option would be to have all of the widgets manipulate AccessKit nodes directly, implement a generic, egui-independent screen reader library that uses the AccessKit tree, and ultimately drop widget info from the output struct. We're going to need direct text-to-speech output for a while yet, until AccessKit is implemented on all of the other platforms. (And even then, self-voicing would be useful for devices with no built-in screen reader, like game consoles.) But perhaps egui itself shouldn't have two ways of doing accessibility.

@emilk
Owner

emilk commented Dec 28, 2021

Wow @mwcampbell that sounds great!

@mwcampbell
Contributor

A quick update: AccessKit is still Windows-only, and there are still serious limitations in the Windows implementation, most notably lack of support for text editing. But one major blocker has just been resolved: the newly published accesskit_winit crate makes it straightforward to use AccessKit with winit, without requiring changes to winit itself.

I'm aware that my fork of egui with prototype AccessKit integration is way out of date. My next task is to update it and use the new winit adapter rather than my forked winit.

@mwcampbell
Contributor

@emilk On #1844, I mentioned the possibility of replacing egui's current WidgetInfo with AccessKit, and you seemed to be in favor of it. Do you want me to replace WidgetInfo with AccessKit in one big leap, including the implementation of a new TTS output module based on AccessKit (for platforms that don't yet have a native AccessKit implementation), or would you prefer that I implement AccessKit support alongside the current WidgetInfo and work toward eventual replacement?

@mwcampbell
Contributor

The accesskit branch in my egui fork has a rough but basically working AccessKit integration. It's based on the egui master branch as of earlier today. The other key difference between this branch and the work I did last December (which is now in the accesskit-old branch) is that I'm no longer using a fork of winit. (In fact, all dependencies are published crates.)

Currently, Response::widget_info fills in fields on the AccessKit node as well. I can, of course, change the widgets to fill in AccessKit node fields directly, in addition to or instead of providing WidgetInfo. I'm just waiting on @emilk 's answer to my previous comment before I decide how to approach that.

The big missing feature, still, is text editing. I'm starting on that in AccessKit next week. Aside from that, egui still needs to expose some more things that are already supported by AccessKit, such as the value and range of a slider.

And, for now, AccessKit is still only natively implemented on Windows. That's changing later this year. In the meantime, a platform-independent embedded screen reader, which is what accessible egui-based applications currently have to use, can be written based on AccessKit, using the accesskit_consumer crate to process the tree updates, traverse the tree, and find out what changed from one frame to the next.

@emilk
Owner

emilk commented Jul 30, 2022

@mwcampbell thank you so much for your work on AccessKit, and on working on the egui integration!

I like your current approach of having WidgetInfo fill in AccessKit data; it allows for a gradual migration to AccessKit, and is potentially a smaller PR (which is always good!). In particular, I like that existing widgets don't need to be re-written (always nice to avoid breaking changes for egui users).

The egui screen reader is mostly a proof-of-concept, and I don't believe it has many users right now, so breaking that end of things is less worrying to me. Still, if given a choice I would keep PlatformOutput::events etc. until a replacement is merged (i.e. AccessKit + accesskit_consumer is in place and works with screen readers on various platforms, including web).

I took a look at your egui fork, and it looks great so far. But I would prefer having #[cfg(feature = "accesskit")] around all the AccessKit code.
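
Such feature-gating might look like the following hypothetical Cargo.toml fragment (the version number is illustrative, not the actual one used by egui):

```toml
[features]
default = []
accesskit = ["dep:accesskit"]

[dependencies]
accesskit = { version = "0.8", optional = true }
```

With this, the AccessKit code paths compile only for users who opt in with `--features accesskit`.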

@mwcampbell
Contributor

I just rebased my accesskit branch on the head of the egui repo (as of yesterday).

@emilk Do you want AccessKit integration to be an optional feature at all layers, including eframe, or just in the core egui crate? I'm also wondering if the accesskit_winit adapter should be integrated in egui-winit, as it is now in my branch, or only in eframe. The latter would reduce the total PR size, but would mean that anyone using egui-winit but not eframe would have to do more work to get AccessKit support.

@CAD97

CAD97 commented Oct 11, 2022

As a developer user of egui,

eframe is framework enough that it makes sense IMHO to always have AccessKit enabled. At most, it could be a default feature; people should be pushed towards accessibility by default.

For egui and egui-wgpu, though, a major use case is using egui for a development/debug UI on top of your own rendering, and in that case a graphics-focused application likely wouldn't want the default integration, as it would need a fully custom solution to be accessible via a screen reader.

As such, my vote goes to having an opt-in feature for AccessKit in egui/egui-wgpu with a separate entry point to turn on the integration, but it should be as simple as using the AccessKit-enabled initialization.

I don't know how practical that is, but it's what I'd personally like using.

@mwcampbell
Contributor

AccessKit is now an optional dependency in the core egui crate. I won't do anything more with egui_winit, egui_glium, egui_glow, and eframe until I get more input from @emilk.

@mwcampbell
Contributor

I'd also like input on what milestones I should reach before I submit my AccessKit integration as a PR. AccessKit still only has a Windows adapter, though adapters for other platforms are now being developed (by others), and the accesskit_winit crate uses a no-op implementation on non-Windows platforms. Meanwhile, the core AccessKit crate hasn't yet reached 1.0, and I'm not sure when it will. The biggest missing functionality at the moment is text editing support. I'm hoping the API for that will be close to its final form by sometime next week.

On the one hand, if I wait until everything is done and perfect before I submit a PR, then I'll need to keep maintaining my own branch, and rebasing it on new versions of egui, for a while. On the other hand, I don't want to impose on the egui team the burden of keeping up with changes to AccessKit too soon.

@emilk
Owner

emilk commented Oct 18, 2022

You can open a draft PR right away @mwcampbell - it will make it easier for me to review your work!

@mwcampbell
Contributor

OK. I'm currently in the middle of working on text editing, both in AccessKit and in my egui branch. Once I finish that and get sliders working, I'll open a draft PR.

@mwcampbell
Contributor

If anyone wants to play with my work-in-progress text editing support with a Windows screen reader, here's the egui branch. Note that the AccessKit side of this is still a work in progress and isn't in the published crates yet; here's the AccessKit branch.

At this point, the major missing feature in text editing support is that the bounding rectangles of text ranges aren't yet exposed. This is why, if you use Narrator, the highlight cursor isn't where it should be when you're in a text edit widget. I suspect it's also why the JAWS cursor isn't working. I plan to implement this today. Also, Narrator isn't providing the expected feedback when deleting text; I'm guessing that's because AccessKit is returning an inappropriate error when Narrator tries to work with the old text range. I'll also look at this today.

Once I resolve those two issues, I'll open a PR on the AccessKit repo. Once that's merged, I can merge the egui work back into my main AccessKit branch.

@mwcampbell
Contributor

I just released text editing support in AccessKit (still Windows only), and the matching support in egui is now on my main accesskit branch. I'm going to rebase that branch to the head of the upstream master branch, then I think I'm ready to open a draft PR.

@mwcampbell
Contributor

FWIW, @DataTriny is working on a Linux platform adapter for AccessKit, implementing AT-SPI in pure Rust. He thinks that might be usable by the end of the year. Once that feature is merged in AccessKit, and AccessKit support is merged in egui, I think we will no longer need speech-dispatcher on Linux. That dependency seems to be a recurring source of frustration for egui developers.

@mwcampbell
Contributor

If anyone wants to try out the work-in-progress AccessKit macOS adapter, check out this temporary egui branch. The major missing features that you're likely to encounter in simple example apps are:

  • Hit-testing (e.g. for moving the VoiceOver cursor to the mouse pointer)
  • Adjusting sliders and steppers with VoiceOver commands, as opposed to normal keyboard input to the application
  • Text editing

I plan to address the first two early next week. Text editing will likely take several days. I hope to have the macOS adapter on par with the Windows one by early December.

I'm posting a status update on this here because I know that macOS is popular among developers, and I figure that when AccessKit's macOS adapter is reasonably complete, interest in AccessKit in general will increase.

@mwcampbell
Contributor

Quick update for anyone watching this issue but not #2294 (AccessKit integration PR): I've marked that PR as ready for review. That means I've frozen the initial implementation, except to address review feedback. On the macOS side, that adapter is now published and is used by the published AccessKit winit adapter. So my egui AccessKit integration now supports Windows and macOS using only published AccessKit crates. I resolved the hit-testing and slider/stepper issues mentioned in the previous comment. Now the big feature I need to work on for macOS is text editing support.

@CAD97

CAD97 commented Dec 4, 2022

I just want to say: Thank you @mwcampbell for your work here! Your efforts are greatly appreciated. 😄

@mwcampbell
Contributor

Thanks to @emilk for merging AccessKit support. To be clear, this doesn't mean the end of all work on accessibility in egui. AccessKit itself is still incomplete, and not all widgets are fully accessible yet. But I think this is an appropriate time to close this original issue and open new ones as they come up. @emilk Feel free to reopen if you disagree.

@emilk
Owner

emilk commented Dec 4, 2022

I agree with closing this, and I also agree with @CAD97 - thanks for working on this @mwcampbell ❤️
