[HOLD] Implement phased rollouts of the New Expensify mobile app #6306
Comments
@Expensify/mobile-deployers is there anyone who's interested in looking into this? I think it's valuable for stability, but I may need to drop it for higher-priority things.
No update here, switching this to monthly for now.
No update yet
Still no update
No update, actually going to send this back into the pool for now.
My suggestion is we shouldn't do this until we have two things:
I think we have Crashlytics set up already, so it would just be a matter of chorifying the monitoring of crashes, right?
This makes sense, but I have a few thoughts:
All that said, I agree we can sit on this until we're closer to that time. It's not super urgent, but imo as soon as someone has some spare cycles to work on this there's not really a strong reason not to implement phased rollouts.
Agree with all the above in regards to making the NewDot shift; if we want to keep it on GitHub Actions we can use … However, I do think we really need to invest more in Crashlytics. I personally have never looked into the dashboard, so we need to feed crashes into GitHub issues somehow (e.g. https://github.com/Expensify/Expensify/issues/144832), so we can at least monitor crashes and create fires when we do, otherwise we are just bumping for no gain. Once we have this in place, I could see us skipping all bumps when there are known deploy blockers (which hopefully we can create automatically via the above automatic issue creation from Crashlytics).
Just discovered we can implement phased rollouts on the desktop app(s) too: https://www.electron.build/auto-update#staged-rollouts
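For reference, the electron-builder docs linked above implement staged rollouts by adding a `stagingPercentage` key to the published channel update file. A minimal sketch (the version, filename, and checksum below are placeholders, not real release values):

```yaml
# latest.yml — the channel update file electron-builder publishes alongside a release.
# Values here are illustrative only.
version: 1.2.3
path: NewExpensify-1.2.3.exe
sha512: <checksum>
stagingPercentage: 10 # serve this update to roughly 10% of clients
```

Raising `stagingPercentage` in the published file widens the rollout; per the docs, setting it to 0 effectively halts the rollout for users who haven't yet received the update.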
@roryabraham, this Monthly task hasn't been acted upon in 6 weeks; closing. If you disagree, feel encouraged to reopen it -- but pick your least important issue to close instead.
We'll definitely still want this. It's in-scope for N9 |
Triggered auto assignment to @abekkala
Marking this as a new feature |
No update, not prioritizing deploy improvements at this time |
On HOLD for WAQ |
Dropping for now, see #12021 (comment) |
@roryabraham, this Monthly task hasn't been acted upon in 6 weeks; closing. If you disagree, feel encouraged to reopen it -- but pick your least important issue to close instead.
@roryabraham, this Monthly task hasn't been acted upon in 6 weeks; closing. If you disagree, feel encouraged to reopen it -- but pick your least important issue to close instead.
@roryabraham, this Monthly task hasn't been acted upon in 6 weeks; closing. If you disagree, feel encouraged to reopen it -- but pick your least important issue to close instead.
If you haven’t already, check out our contributing guidelines for onboarding and email contributors@expensify.com to request to join our Slack channel!
Problem
There is no fast way to roll back a mobile production release, so if any bugs get through our QA process to production, we have to scramble to fix the issue as fast as possible. We also don't have a way to deploy hotfixes or cherry-picks (CPs) to production, so if we need to fix an issue on production, we first need to clear the checklist of any existing deploy blockers before running another production deploy that includes the fix.
Why this is important
As New Expensify is no longer a beta product, stability of the mobile app is now very important.
Solution
Start using phased rollouts on NewDot mobile. While this doesn't completely eliminate the problem, it lets us limit the number of affected users by pausing the rollout while we work on releasing a fix.
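One possible way to wire this up, assuming we use fastlane for store deploys (a sketch only, not our actual deploy config; lane names and percentages are hypothetical). Google Play supports a `rollout` fraction on the production track, and App Store Connect offers a phased release that ramps over 7 days and can be paused:

```ruby
# Fastfile — illustrative sketch, not Expensify's real deploy configuration.
platform :android do
  lane :phased_production do
    upload_to_play_store(
      track: 'production',
      rollout: '0.1' # start with ~10% of users; rerun with a higher value to widen
    )
  end
end

platform :ios do
  lane :phased_production do
    deliver(
      submit_for_review: true,
      automatic_release: true,
      phased_release: true # App Store ramps the release over 7 days; pausable in ASC
    )
  end
end
```

Pausing the Android rollout (or the iOS phased release) would then be the "stop the bleeding" step while a fix goes through the normal deploy process.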
Platform:
Where is this issue occurring?
Slack conversation: https://expensify.slack.com/archives/C07J32337/p1634940809050800