/* ex: set tabstop=2 shiftwidth=2: */
left off
========
- fix not being able to debug w/ ipdb
- runserver wasn't pid 1 in the container, so added 'exec' to start script
- install sass and watch
- could take package.json and gulpfile.js from cookiecutter-django to mimic if I had answered YES to using a js_task_runner...
- but just going to brew install sass and run it in --watch mode outside docker, for now
- `sass --watch sass/project.scss css/project.css`
- figure out sass with compressor for production deployment
- do the compression locally and upload resulting assets?
- doing this for now
- making /add-paprika-account view and form
- form shows up, but formatting is bad
- handle form submission and call actions.import_account
- test error cases (bad credentials, can't reach paprika API, etc.)
- redirect first-login to add-paprika-account
- test large number of recipes (does import need to be async somehow?)
- too slow
- ideas: multiprocessing pool, celery, every-minute cronjob
- do every-minute cronjob... simplest while still performant
- create newsitem on new account import
- add basic newsfeed to homepage
- fix logging config (no log statements or prints show in docker)
- no failure count, b/c should give user immediate feedback if they requested the import and it fails
- make sync-ing functionality work
- mgmt command to regularly sync all accounts
- create new Recipe if new uid and if hash differs for existing uid
- produce NewsItem for any diffs, additions, deletions
- cronjob script to sync every minute, but lockfile first
- test lockfile in linux bash
- add "force-sync" button to UI (in navbar?)
- make this just change a field on PaprikaAccount (sync_status? requested, inprogress, done) that an every minute cronjob checks?
- diagram state changes
- test state changes
- test unique constraint on PaprikaAccount.username and alias
- disallow any model changes til import/sync completes...
- looked at using django-fsm ConcurrentTransitionMixin, but it doesn't seem like a great answer because I can't sensibly expose an in-progress state outside of the given transaction so that cronjobs can read it (ConcurrentTransitionMixin requires that you save changes in an atomic block, which means that I'd be wrapping all my save calls in an atomic block... guess that could work)
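- a rough sketch of that state-field idea (field/method names here are hypothetical, not necessarily the real model):
```
# Sketch only: a status field on PaprikaAccount that the "force-sync" view sets and an
# every-minute cronjob polls. Names and choices here are assumptions, not the real model.
from django.db import models, transaction

class PaprikaAccount(models.Model):
    SYNC_REQUESTED, SYNC_IN_PROGRESS, SYNC_DONE = 'requested', 'inprogress', 'done'
    SYNC_STATUS_CHOICES = [
        (SYNC_REQUESTED, 'Requested'),
        (SYNC_IN_PROGRESS, 'In progress'),
        (SYNC_DONE, 'Done'),
    ]
    sync_status = models.CharField(max_length=20, choices=SYNC_STATUS_CHOICES, default=SYNC_DONE)

    def request_sync(self):
        # The button just flips the flag; the cronjob does the actual work
        self.sync_status = self.SYNC_REQUESTED
        self.save(update_fields=['sync_status'])

    def start_sync(self):
        # Claim the account with a conditional UPDATE so two cron runs can't both pick it up
        with transaction.atomic():
            claimed = (PaprikaAccount.objects
                       .filter(pk=self.pk, sync_status=self.SYNC_REQUESTED)
                       .update(sync_status=self.SYNC_IN_PROGRESS))
        return bool(claimed)
```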
- add last_synced field to PaprikaAccount
- move actions to methods on PaprikaAccount
- fix RecipeSerializer... complaining about image_url being invalid url and photo_hash being null
- sometimes image_url points to a local, on-device url? is this the default image?
- how to handle that paprika images are only accessible for a few minutes? cache them all on my own server?
- JUST REMOVE all queryparams (Signature, Expires, AWSAccessKeyId) and it still works
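- stripping those params is just URL surgery, e.g. (sketch):
```
from urllib.parse import urlsplit, urlunsplit

def strip_image_url_params(url):
    """Drop ?Signature=...&Expires=...&AWSAccessKeyId=... so the URL is stable and cacheable."""
    if not url:
        return url
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, '', ''))

# strip_image_url_params('https://host/photo.jpg?Signature=abc&Expires=123&AWSAccessKeyId=xyz')
# -> 'https://host/photo.jpg'
```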
- add all other fields to recipe model
- fetch categories when fetching recipes (do this in RecipeSerializer if it can't find a matching category in the database?)
- refactor import_recipes and sync_recipes to be more DRY (so don't have to sync_categories in multiple places)
- update serializers to handle categories
- update tests to handle categories
- figure out why some category uids have extra characters on the end... are these old, deleted category uids that are still hanging around?
- not sure, must be a paprika 2 thing? tested deleting categories and that removed them from the list of categories returned by the API, recipes aren't changed (still link to old categories)
- handle unknown/missing categories when serializing recipes
- handle parent_uid for nested categories
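- one possible shape for that category handling during serialization (import path, model, and the parent field are assumptions, not the project's actual code):
```
# Sketch: resolve category uids to local Category rows while serializing a recipe,
# creating any that are missing and wiring up parent_uid in a second pass.
from paprika_sync.core.models import Category  # assumed import path

def get_or_create_categories(paprika_account, category_payloads):
    by_uid = {}
    # First pass: create/fetch all categories without parents wired up
    for c in category_payloads:
        cat, _ = Category.objects.get_or_create(
            paprika_account=paprika_account,
            uid=c['uid'],
            defaults={'name': c.get('name', '')},
        )
        by_uid[cat.uid] = cat
    # Second pass: link nested categories via parent_uid (assumes a 'parent' FK exists)
    for c in category_payloads:
        parent_uid = c.get('parent_uid')
        if parent_uid and parent_uid in by_uid:
            by_uid[c['uid']].parent = by_uid[parent_uid]
            by_uid[c['uid']].save(update_fields=['parent'])
    return by_uid
```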
- test importing new account -- seems to import recipes and categories correctly
- check that refactor of sync_recipes doesn't create newsitems for every recipe on new import
- it does, fix it
- make recipe list view
- test it on mobile
- make a grid thumbnail version? yes, but needs more work to overlay recipe name and handle clicks and lazy-loading
- make recipe detail view
- make recipe diff view
- make diff accounts view
- make view of all other accounts
- update nav
- update auth to use email only, login immediately on email confirm
- make paprikaaccount form bootstrappy
- update all "not implemented yet" stuff (clone recipes, newsitems links)
- deploy and share
- switch from caddy to traefik
- https://github.com/pydanny/cookiecutter-django/pull/1714
- deploy
- wrote deploy.sh script
- update sentry from api to newer sentry_sdk
- env vars not loading when running compose files in production...
- SENTRY_DSN is loaded from env WITH quotes (https://github.com/docker/compose/issues/3702) so removed quotes there
- having to fix everything by re-deploying, kinda hard to change things on the server (can't docker exec into container and run entrypoint script, which is needed for db connection info)
- use `docker-compose run` (rather than eval) which will bring up a new container (and run entrypoint) but changes will write to the same db
- make admin user account
- user test run
- add note that you must have a paprika cloud sync account (give instructions for this) for the site to work
- change "username" to "paprika cloud sync account email" or similar
- if a user sets up cloud sync for the first time and tries to associate their paprika account before the first sync finishes, creating the PaprikaAccount works but shows error: 401 Client Error: UNAUTHORIZED
- maybe username/password is actually wrong
- confirm user/pass works, otherwise prompt again
- prevent auto-capitalization on mobile devices: https://stackoverflow.com/questions/5171764/how-do-you-turn-off-auto-capitalisation-in-html-form-fields-in-ios
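- in Django this can be set via widget attrs, e.g. (sketch; form/field names are illustrative):
```
from django import forms

class PaprikaAccountForm(forms.Form):
    # Hypothetical field; the point is the autocapitalize/autocorrect attrs on the widget
    username = forms.EmailField(
        label='Paprika cloud sync account email',
        widget=forms.EmailInput(attrs={'autocapitalize': 'none', 'autocorrect': 'off'}),
    )
```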
- fix logging in production
- run cronjobs (make a helper script to install them on deploy)
- need to be careful: cron interprets any output as worthy of emailing, and there's no MAILTO set up
- not too happy with efficiency of creating new container to run the cronjob command then removing it... and doing this every minute
- category import failed when they already existed!
- test requested sync again... working now
- use run_one_instance wrapper for cronjobs so if they slow down we don't run many versions
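- if that guard ends up living in Python instead of a shell wrapper, a non-blocking flock sketch could look like:
```
import fcntl
import sys

def run_exclusively(lockfile_path, func, *args, **kwargs):
    """Skip this run if another instance still holds the lock (non-blocking flock)."""
    with open(lockfile_path, 'w') as lockfile:
        try:
            fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            print('previous run still in progress, skipping', file=sys.stderr)
            return
        func(*args, **kwargs)
        # lock is released when the file is closed
```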
- supervisor to restart docker-compose command on server reboot
- or use systemd?
- test restarting machine
- logrotate for cronlogs
- change force sync to be a descriptive button
- invitation link (should live in database or environment so it's not committed in code)
- disable normal registration and limit signups to invitees
- write tests
- make sure signup works (incl verification email, forgot password, etc.)
- test with account that has many recipes
- add about page
- add changelog
- review mobile UX
- open source code (review for secrets)
- add sample .production env files
- update README to talk about setup (e.g. deploy, crontab scripts, supervisor)
- link to repo from website about page
- change HSTS in config/settings/production.py
- why didn't categories associate with recipes in prod?
- write test
- share
- fix issue with recipes having null values for fields
- also make sure that this doesn't block importing other recipes
- it doesn't
- write a test
- categories duplicated because they attach to the original account and the account the recipes were copied to
- limit to categories in same paprika account
- fix bug where all thumbnails show as different... uid in url matches but there's a user id prefix or something changing
- have to calc my own recipe hash to be able to diff recipes quickly, add new field to model
- getting somewhat spurious diffs b/c my account doesn't have nested categories... just reimport my account to fix, perhaps
- how to handle hashing categories before Recipe has id? exclude hashing categories?
- excluding categories for now (makes it hard to add new Recipes due to needing pk in relation, also diff users might have diff organization/categorization schemes)
- mgmt command to update hash for all recipes? use it when changing the recipe model? maybe YAGNI
- just did this in the migration
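- sketch of the kind of stable hash this implies (the field list is illustrative; categories excluded as noted above):
```
import hashlib
import json

# Fields included are illustrative; the real list would live next to the Recipe model.
HASHED_FIELDS = ['name', 'ingredients', 'directions', 'notes', 'description', 'source', 'rating']

def calculate_recipe_hash(recipe_data):
    """Stable hash over a fixed set of fields so syncs can cheaply detect edits."""
    subset = {f: recipe_data.get(f) for f in HASHED_FIELDS}
    canonical = json.dumps(subset, sort_keys=True, ensure_ascii=True)
    return hashlib.sha256(canonical.encode('utf-8')).hexdigest()
```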
- deleted recipes are showing... save that field and exclude it from PaprikaAccount.recipes
- test importing before adding field, then triggering sync -- does it update correctly?
- doesn't work because hashes match...
- wipe account and re-sync? no, would lose associated NewsItems from edits
- just script it from the json and update existing recipes
- sticky table header
- on th elems, position:sticky and top:0px and box-shadow:0px 1px 0 #dee2e6
- on table-responsive, unset overflow-x
- test sending email to paprika-sync.com
- seems like it forwards correctly now
- change "from" addresses to support@mg.paprika-sync.com instead of noreply@paprika-sync.com
- make newsitems on news feed more descriptive (ie - what part of the recipe changed)
- add "compare accounts" link to "new_account" newsitem type
- link to compare before-after editing recipe
- template tag to see if a newsitem recipe already exists in your account
- always split rating into its own NewsItem? yes
- don't create NewsItem if no fields_changed?
- delete existing EDITED NewsItems with no fields_changed (or with fields_changed of 'hash' or other fields that aren't user-visible)
- add last-date-edited or similar to the recipe-diff view (for distinguishing edited versions)
- watch page performance while adding lots of one-off recipe lookups... doesn't seem too bad for 25 news items
- make diff view better on mobile by letting thumbnails shrink
- take a backup before deploying
- test locally on backup before deploying?
- improve newslist formatting, language
- make pagination span full width on mobile, or have min width between buttons for easier tapping
- add some margin-bottom to base template, so bottom of page is more accessible on mobile
- shift NewsItem.payload fields into model directly (esp. recipe relation, for faster homepage queries)
- 96 queries, ~1375ms overall page time (local)
- dropped to 60 queries, ~900ms overall page time (local)
- also prefetching paprika-account drops to 35 queries, ~600ms overall page time (local)
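- most of those savings come from selecting related rows up front, e.g. (sketch; model/relation/field names are assumptions):
```
from paprika_sync.core.models import NewsItem  # assumed import path

# Homepage news feed queryset: pull recipe and account rows in the same query
# instead of one query per news item.
news_items = (
    NewsItem.objects
    .select_related('recipe', 'previous_recipe', 'paprika_account')
    .order_by('-created_date')[:25]
)
```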
- add recipe thumbnails to homepage
- prevent logging out -- make identity cookies last forever? currently, 2 weeks means you might have to login every time you visit
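- if going the long-lived-session route, these standard Django settings control it:
```
# Keep sessions alive much longer than the 2-week default so users stay logged in
SESSION_COOKIE_AGE = 60 * 60 * 24 * 365  # one year, in seconds
SESSION_EXPIRE_AT_BROWSER_CLOSE = False
```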
- set page titles
coming back after a year away from the project!
- figure out how to deploy, how to ssh to machine
- update deploy script to use ssh instead of docker-machine
- debug sentry issue with trying to import recipe with blank name?
- https://sentry.io/organizations/greg-schafer/issues/1122017842/?project=1396826&query=is%3Aunresolved
- mark as resolved on deploy
- 4 affected recipes: CA0CE04F-F11A-40BB-9039-FF15282BED2E, FE1A2CAA-DCF8-41CE-B66C-70BF26986BE4, 41C81BF9-B0BB-4415-9966-343CABE3DC2D, 22CD907D-E2BF-4A28-9666-446110BFE021
- they're all on my account and with in_trash=true
- tests not working due to: TypeError: attrib() got an unexpected keyword argument 'convert'
- upgrade pytest as suggested: https://stackoverflow.com/a/58189684
- also have to update pytest_django and pytest_sugar
- fix failing test_sync_recipes_from_api test
- pull prod db dump and restore locally
- updated instructions at bottom of this document
- error when trying to login locally: ValueError: Couldn't load 'Argon2PasswordHasher' algorithm library: No module named 'argon2._ffi'
- https://stackoverflow.com/questions/46416985/django-issue-with-argon2-hasher-in-live-production
- rebuilding docker images... no luck
- update argon2-cffi from 19.1 to 20.1 and that fixed it
- add clone recipe (i.e. copy a recipe you don't have at all)
- error when trying to create a recipe... but api doesn't give any details
- try different uid
- try using same if something else proves to be causing the error
- remove fields I have locally that the server won't (id, import_stable_hash, created_date, modified_date, date_ended, paprika_account)
- response is successful if I post the results of get_recipe call back to the API (including with changes)
- if I make changes, it doesn't update in the app though... maybe doesn't think the recipe changed? hash is calc'd server-side though and it did change
- changing uid created a new recipe
- photo didn't get copied too
- photo and photo_url and photo_hash and image_url keys are required, maybe is_pinned also
- created datetime might be rendering in wrong format
- look at recipe upload request from ipad
- run mitmproxy and change ipad wifi settings to manual proxy to local ip addr, port 8080
- perform actions on ipad, save flow in mitmproxy to a local file
- find start of data chunk in request, do `xxd` on the flow and look for gzip header (1f8b)
- extract that part of the request: `xxd -s 0x00004e7c -l 600 <flow-filename> > hex` and narrow "600" to stop before the next mime multipart separator (preceded by \r\n aka bytes 0d0a)... for me it was 473 bytes
- convert the hex file back to binary (making sure not to fill with zeroes up to 0x4e7c): `xxd -r -s -0x00004e7c hex > hexbin`
- open file: `gzcat hexbin`
- photo_url might be in wrong s3 url format
- initially, don't try to clone categories, just notify that it's imported as uncategorized
- change clone-recipe button to be ajax instead of page submit (use DRF APIView)
- added example of this, no reason to over-complicate now though... leave it as page submit
- copy clone-recipe functionality to news feed
- future
- eventually, prompt user if they want to copy any categories they don't have? what about categories spelled similarly... do a fuzzy string match?
- OR allow user to select their categories to copy the recipe into?
- add search across all recipe names/ingredients/sources instead of the giant compare-recipes list
- when begin searching:
- change magnifying glass icon to an X
- hide "browse" section of page and show "search results" section instead
- after 3+ characters have been typed, fetch results whenever there's a 150ms pause in typing
- add css "loading" animation
- match only from start of word? do people type "wheat" and expect to see "buckwheat" results?
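- serverside, the search itself could be a simple Q lookup (sketch; the recipes relation and field names are assumptions):
```
from django.db.models import Q

def search_recipes(paprika_account, query):
    """Match the query against names, ingredients, and sources (substring match; sketch only)."""
    return (
        paprika_account.recipes
        .filter(
            Q(name__icontains=query)
            | Q(ingredients__icontains=query)
            | Q(source__icontains=query)
        )
        .order_by('name')
    )
```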
- update changelog
- deploy
- share, w/ gifs of new behavior (cloning and searching)
- deleted recipes are duplicating somehow
- sync_recipes_from_api cronjob seems to be the culprit... re-creating all in_trash recipes every 2 days
- because account.recipes excludes in_trash, so it thinks it's new
- stop duplication
- delete or hide duplicates
- can just do this with django shell `Recipe.objects.filter(in_trash=True).delete()` (deletes 18532 recipes!)
- then run `python3 manage.py sync_recipes_from_api --exclude-synced-within=0` to re-import trashed recipes
- bugfix for bad permissions on image
- https://s3.amazonaws.com/uploads.paprikaapp.com/511238/E8081C98-B324-41A7-A5E8-62455FECFE61.jpg
- https://sentry.io/organizations/greg-schafer/issues/1934926264/?project=1396826&referrer=alert_email
- paprika turned off public-read for all their images... actually need the signature now
- allow cloning without the photo? yes
- save all images locally, or to my s3 bucket (w/ cloudfront CDN)
- update templates to load image from MEDIA
- had to `chown -R 100:101 /var/lib/docker/volumes/app_production_media` to give the docker volume the right uid/gid so it could save media images there
- test cloning recipe w/ photo
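- sketch of caching a photo into our own MEDIA storage (field names like photo_file are assumptions):
```
import requests
from django.core.files.base import ContentFile

def cache_recipe_photo(recipe):
    """Download the (signed) paprika photo URL and save a copy to our own MEDIA storage."""
    if not recipe.image_url:
        return
    resp = requests.get(recipe.image_url, timeout=30)
    resp.raise_for_status()
    filename = '{}.jpg'.format(recipe.uid)
    # Assumes an ImageField/FileField named photo_file on the Recipe model
    recipe.photo_file.save(filename, ContentFile(resp.content), save=True)
```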
- TODO: change search results to add "+1" or similar after "Account", instead of duplicating rows
- TODO: make the search button clickable to force the ajax request
- TODO: keep search results when navigating back to the search page
- TODO: put html in localStorage on navigate event?
- TODO: search, filter, sort on recipe list views
- TODO: update "about" page text... move marketing-y speak to the readme
- TODO: about page shows publicly, so maybe split into internal/external "about" pages?
- TODO: make cards full-width in mobile view, try to include image
- TODO: fix horizontal scroll on recipe detail pages on mobile
- TODO: favicon
- TODO: update to paprika v2 api (to use token instead of basic http-auth... doesn't seem like endpoints have changed behavior at all)
- TODO: change ratings to 1-5 star icons?
- TODO: aside: switch to f-strings
- TODO: weekly digest email
- TODO: add opt-out option in settings
- TODO: add one-click opt-out option in email itself
- TODO: set custom user agent on requests to paprika API
- TODO: put showcase screenshots/video on about page?
- TODO: share feature roadmap (trello board?)
- TODO: add "new" icon to navbar for new changelog items (get latest changelog date on server init, save it to localStorage when visited)
- TODO: or do it via db, so it translates across devices
- TODO: figure out how to update recipes (do they get a new uid? or just an updated hash?)
- TODO: add update recipe (i.e. update your recipe with changes they've made)
- TODO: show finer-grained diffs (hard to see diff in changed ingredients, directions, notes)
- TODO: limit categories queryset to currently logged-in user (in admin)
- TODO: set up deadmansnitch or uptime robot for website
- TODO: every-minute cronjobs are expensive, maybe change those to API endpoints hit with ajax? but then there might be request timeout that kills the serverside process
- could offload making all the paprika API requests to AWS lambda then just give full json file back for importing into db
- TODO: could use SQS to enqueue and a separate, always-running process to work on the jobs (jobs being: import new account and sync existing account)
- TODO: or just use redis locally with pubsub to an always-running process
- TODO: or could switch to django 3 w/ async workers, but can those do background work?
- TODO: devops extras
- TODO: try out digitalocean CDN
- does whitenoise fingerprint asset files? looks like yes
- TODO: test setting up a new machine? might break with the domain stuff
- TODO: privacy page (allow setting default and per-user checkboxes for "publish my actions to" and "show me actions from")
- TODO: instacart integration?
- TODO: db backups
- TODO: zero-downtime deploys
2023-12-10
==========
- getting 500 (and 401?) errors when trying to clone a recipe, what changed?
- start local dev env and try cloning a recipe in django shell, inspect response body
- got local repo by signing up as grschafer@gmail.com
- ran `docker compose -f local.yml run --rm django python3 manage.py import_new_account_recipes`
- ran `docker compose -f local.yml run --rm django python3 manage.py shell_plus`
- ran `pa = PaprikaAccount.objects.get()`
- ran `r = Recipe.objects.first()`
- ran `pa.clone_recipe(r)`
- resp.content is: b'{"error":{"code":0,"message":"There was an error with your request."}}'
- need to sniff app request that creates a recipe with mitmproxy?
- try wireguard proxy: https://docs.mitmproxy.org/stable/concepts-modes/#wireguard-transparent-proxy
- Client TLS handshake failed. The client disconnected during the handshake. If this happens consistently for www.paprikaapp.com, this may indicate that the client does not trust the proxy's certificate.
- how to solve above?
- just had to go to mitm.it and install the cert!
- app uses v2 API endpoint
- request to https://www.paprikaapp.com/api/v2/sync/recipe/5BBF1735-3323-4CAD-A6DC-48249F2A0CD5/ has multipart body
- saved body from mitmweb to api-v2-sync-recipe.data file in paprika-sync repo
- parse it with
```
import gzip

from requests_toolbelt.multipart import decoder

with open('api-v2-sync-recipe.data', 'rb') as fin:
    data = fin.read()

# Minimal stand-in for a requests Response; MultipartDecoder only needs .content and .headers
class A():
    content = data
    headers = {"content-type": 'multipart/form-data; boundary="ccba0de8-a2f7-4146-8198-8c1aad6bc918"'}

mp_data = decoder.MultipartDecoder.from_response(A())
# the recipe part of the body is gzipped JSON
print(gzip.decompress(mp_data.parts[0].content))
```
- result is: b'{"uid":"5BBF1735-3323-4CAD-A6DC-48249F2A0CD5","deleted":false,"created":"2023-12-10 18:11:51","hash":"B36F4CE185135695418FF23EB07724105FFEA69F010669E8004D8C231779458C","name":"gregtest","description":"","ingredients":"","directions":"","notes":"testing paprika api","nutritional_info":"","prep_time":"","cook_time":"","total_time":"","difficulty":"","servings":"","rating":0,"scale":null,"source":"","source_url":"","on_favorites":false,"is_pinned":false,"in_trash":false,"photo":"1BEA352F-163D-4E56-B98C-C68B6E4B98DD.jpg","photo_large":"1C000B69-3BF3-4C8B-8985-C489B2624678.jpg","photo_hash":"86E2DACEC137B24626A677F178E5F0A9D9D15D5DBD276AF5DB48B875B27C507F","image_url":null,"categories":["9F1FC8AC-D07B-4482-A655-446FA7D2AC11"]}'
- switch to using jwt instead of http basic: https://gist.github.com/mattdsteele/7386ec363badfdeaad05a418b9a1f30a?permalink_comment_id=3300172#gistcomment-3300172
- updated to v2 api endpoint and attach jwt to all requests... cloning recipe still fails
- fetching recipes and categories still works though
- review body of recipe creation request
- the request works with photo/photo_large/photo_hash set to empty-string
- test what is broken with photos
- FIXED! photos are fine, the issue was an extra space at the end of the "created" datetime field
- test if I can stay on api v1 with that fix
- looks like yes
- test if I need to add any of the extra empty fields (deleted, photo_hash, is_pinned, photo_large, scale) to still have a valid request
- looks like no
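- for reference, a sketch of building the gzipped recipe body with the corrected "created" value (no trailing space), based on the payload captured above:
```
import gzip
import json
from datetime import datetime

def build_recipe_upload_body(recipe_dict):
    """Gzip the recipe JSON for the multipart upload body; the 'created' fix was
    making sure there's no trailing space after the seconds."""
    recipe_dict = dict(recipe_dict)
    recipe_dict['created'] = datetime.now().strftime('%Y-%m-%d %H:%M:%S')  # no trailing space
    return gzip.compress(json.dumps(recipe_dict).encode('utf-8'))
```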
- TODO: good reference? https://github.com/ThiefMaster/paprikasync
- TODO: may need to run app in emulator: https://docs.mitmproxy.org/stable/howto-install-system-trusted-ca-android/
general
=======
to start local server: docker-compose -f local.yml up
to continuously re-compile sass: sass --watch paprika_sync/static/sass/project.scss paprika_sync/static/css/project.css
to run tests: docker-compose -f local.yml run --rm django pytest paprika_sync/core
to run a shell: docker-compose -f local.yml run --rm django python3 manage.py shell_plus
to open a psql shell: docker-compose -f local.yml exec postgres psql -U gKzgEyHJkqcXFestYkKXbPDcNyKjzXrC paprika_sync
to force rebuild the container...
force rebuild: docker-compose build django
or delete the image: docker rmi -f paprika_sync_local_django
connect to production:
docker-machine env paprika-sync
eval $(docker-machine env paprika-sync)
to take, copy, and restore a prod backup to local:
on remote: docker-compose -f production.yml exec postgres backup
on remote: docker cp app_postgres_1:/backups backups
scp <remote>:/app/backups/* backups/
docker cp backups/* paprikasync_postgres_1:/backups/
docker-compose -f local.yml exec postgres psql -U gKzgEyHJkqcXFestYkKXbPDcNyKjzXrC -P pager=off paprika_sync -c "SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'paprika_sync' AND pid <> pg_backend_pid();"
docker-compose -f local.yml exec postgres restore <backup file name>