Do not ingest Jamendo records with downloads disabled #3618
Conversation
Full-stack documentation: https://docs.openverse.org/_preview/3618

Please note that GitHub Pages takes a little time to deploy newly pushed code. If the links above don't work or you see old versions, wait 5 minutes and try again. You can check the GitHub Pages deployment action list to see the current status of the deployments.
I tested it following instructions, and it works great. I like this approach 👍
The choice of `(provider, foreign_identifier)` as the key for checking whether a record already exists in the deleted table is interesting, though. I was thinking just using `identifier` would be easier, as it's a single field to compare and already our internal primary key, but glad this works as well.

I think an index on `identifier` is still necessary to make queries in these deleted tables faster. I was checking for some specific rows in recent days, and it took quite a long time to return the row (just one). I was filtering by this field and thought it would be quick, but it wasn't. What do you think?

Aside from that side point, it all looks good to me!
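For illustration, the suggested index could look something like the sketch below. This uses Python's `sqlite3` purely as a stand-in (the real `deleted_audio` table lives in Postgres), and the index name is made up:

```python
import sqlite3

# Sketch only: sqlite3 stands in for the real Postgres catalog here.
# Table/column names match the PR; the index name is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE deleted_audio (identifier TEXT, provider TEXT, foreign_identifier TEXT)"
)

# An index on identifier lets lookups by internal UUID use an index scan
# instead of a sequential scan over the whole deleted table.
conn.execute(
    "CREATE INDEX deleted_audio_identifier_idx ON deleted_audio (identifier)"
)

# Confirm the index is registered in the schema catalog.
indexes = [
    row[0]
    for row in conn.execute("SELECT name FROM sqlite_master WHERE type = 'index'")
]
print(indexes)
```

In Postgres the equivalent would be a plain `CREATE INDEX ... ON deleted_audio (identifier)`; whether it's worth the write overhead depends on how often these tables are queried.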
```python
def create_postingestion_tasks():
    """
    Create postingestion tasks to delete records that have downloads
    disabled from the audio table and preserve them in the
    deleted_audio table.

    If we instead simply discarded these records during ingestion,
    existing records which have had their downloads disabled since their
    last ingestion would remain in the catalog. This approach ensures
    all records with downloads disabled are removed.
    """
```
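The two-step flow the docstring describes — preserve matching rows in the deleted table, then remove them from the live table — can be sketched outside Airflow. This is a minimal stand-in using `sqlite3`; the real DAG uses the reusable `delete_records` tasks, and the predicate and sample data below are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE audio (identifier TEXT, provider TEXT, foreign_identifier TEXT)"
)
conn.execute(
    "CREATE TABLE deleted_audio (identifier TEXT, provider TEXT, foreign_identifier TEXT)"
)
conn.executemany(
    "INSERT INTO audio VALUES (?, ?, ?)",
    [
        ("uuid-1", "jamendo", "track-1"),  # hypothetical: downloads disabled
        ("uuid-2", "jamendo", "track-2"),  # hypothetical: downloads still allowed
    ],
)

# Hypothetical predicate selecting records whose downloads were disabled.
where_clause = "provider = 'jamendo' AND foreign_identifier IN ('track-1')"

# Step 1: preserve matching rows in the deleted table...
conn.execute(f"INSERT INTO deleted_audio SELECT * FROM audio WHERE {where_clause}")
# Step 2: ...then drop them from the live table.
conn.execute(f"DELETE FROM audio WHERE {where_clause}")

remaining = [row[0] for row in conn.execute("SELECT identifier FROM audio")]
preserved = [row[0] for row in conn.execute("SELECT identifier FROM deleted_audio")]
print(remaining, preserved)
```

Doing the copy before the delete is what makes the operation safe to rerun: nothing is dropped from `audio` until it exists in `deleted_audio`.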
Nice to make use of this function again. Clever setup! Also because the reason makes perfect sense 💯
It's only necessary to use `(provider, foreign_identifier)` instead of `identifier` because of the reingestion case. If a record gets deleted and then reingested by the provider DAG, it will have a different internal identifier than it did the first time it was deleted. But the `(provider, fid)` pair will still allow us to detect duplicates.
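That reingestion case can be illustrated in a few lines of plain Python (the record shapes and values here are invented for the example): the reingested row gets a fresh internal identifier, so only the `(provider, foreign_identifier)` pair still matches the previously deleted row.

```python
import uuid

# A record as it was preserved in the deleted table the first time.
deleted_row = {
    "identifier": str(uuid.uuid4()),  # internal catalog UUID at first deletion
    "provider": "jamendo",
    "foreign_identifier": "track-123",
}

# The same provider record after reingestion: the catalog assigns a
# brand-new internal identifier, but provider + foreign_identifier persist.
reingested_row = {
    "identifier": str(uuid.uuid4()),
    "provider": "jamendo",
    "foreign_identifier": "track-123",
}

matches_on_identifier = deleted_row["identifier"] == reingested_row["identifier"]
matches_on_provider_fid = (
    (deleted_row["provider"], deleted_row["foreign_identifier"])
    == (reingested_row["provider"], reingested_row["foreign_identifier"])
)
print(matches_on_identifier, matches_on_provider_fid)  # → False True
```

So keying the duplicate check on the internal `identifier` alone would silently miss every deleted-then-reingested record.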
Revert "…downloads disabled": This change would have resulted in all previously ingested download-disabled records being reingested every single time, and the `delete_records` DAG needing to be run each time (because it does not distinguish between records that are not currently in the catalog and ones that are). This reverts commit bc41375.
This reverts commit 7b8e9eb.
Co-authored-by: Krystle Salazar <krystle.salazar@automattic.com>
Force-pushed from 9e7a32d to d70a4c8.
Based on the medium urgency of this PR, the following reviewers are being gently reminded to review this PR: @dhruvkb. Excluding weekend days, this PR was ready for review 7 day(s) ago. PRs labelled with medium urgency are expected to be reviewed within 4 weekday(s). @stacimc, if this PR is not ready for a review, please draft it to prevent reviewers from getting further unnecessary pings.
The changes look good, the documentation makes sense, the tests are thorough, and the testing instructions work!
Fixes
Fixes #3530 by @stacimc
Description
This PR updates the Jamendo DAG to discard records with `audiodownload_allowed` set to False. It does this by adding a postingestion TaskGroup, using the existing `delete_records` tasks to add those records to `deleted_audio` and drop them from `audio`.

It required updating the `delete_records` logic to handle the case where a record is deleted, reingested, and then deleted again. I opted for the simple approach of discarding duplicates without updating the existing record in the deleted media table. That means the deleted media table will contain rows preserving records as they were the first time they were deleted. Anything more sophisticated is out of scope and can be decided on in a future project.

Alternative approaches
Initially I planned on just discarding the records during ingestion when `audiodownload_allowed` is False. However, Jamendo allows users to turn off audio downloads at any time -- so it's possible that existing records, which have already been previously ingested into the catalog, may on a subsequent DagRun have downloads disabled. If we simply discard the record at ingestion, the stale old record will remain.

You can see in this commit a quick exploration into trying to identify such records at ingestion and report to Slack that we need to do a manual `delete_records` run when this occurs. However, that approach requires further manual intervention, and would also have required investing more time than is warranted into getting it working.

However, we already have the ability to add postingestion tasks to any provider ingester, and we have reusable tasks for deleting records! So this approach could be implemented very quickly, and will work with no manual intervention. The delete steps complete very quickly even on production data (tested with a `delete_records` run in production that took just 22 seconds 😄).

Testing Instructions
I added unique constraints to `(provider, foreign_identifier)` for the deleted media tables. You can either run `just recreate` to update your local env, or add them manually by running `just catalog/pgcli` and then:

Now run the Jamendo DAG locally. Let it ingest at least 1000 records to ensure it gets some with `audiodownload_allowed` set to False.

Check that all steps complete. Check the logs to ensure that the counts make sense. In my test, 1000 rows were upserted and 93 were both added to the `deleted_media` table (found by checking the logs of the `update_deleted_media_table` step) and deleted from `audio` (check logs for `delete_records_from_media_table`).

Unit tests should cover the duplicate detection, but we can also test manually!
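The duplicate-detection behavior can also be exercised in isolation. Below is a `sqlite3` sketch of a unique constraint on `(provider, foreign_identifier)` with insert-or-ignore semantics — the catalog actually uses Postgres, and the table shape and conflict handling shown here are assumptions, not the PR's exact SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE deleted_audio (
        identifier TEXT,
        provider TEXT,
        foreign_identifier TEXT,
        UNIQUE (provider, foreign_identifier)
    )
    """
)

# First deletion: the record is preserved as it was at that time.
conn.execute(
    "INSERT OR IGNORE INTO deleted_audio VALUES (?, ?, ?)",
    ("uuid-old", "jamendo", "track-123"),
)
# The record is reingested and later deleted again under a new internal
# identifier; the duplicate is silently discarded rather than updated.
conn.execute(
    "INSERT OR IGNORE INTO deleted_audio VALUES (?, ?, ?)",
    ("uuid-new", "jamendo", "track-123"),
)

rows = conn.execute("SELECT identifier FROM deleted_audio").fetchall()
print(rows)  # the original row survives: [('uuid-old',)]
```

This mirrors the behavior described in the PR: the deleted table keeps the record as it was the first time it was deleted, and repeat deletions are no-ops.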
Now clear the dagrun from the `create_loading_table` step and ensure that all steps still pass. By clearing the loading steps, we'll reingest those records back into the audio table. You should see in the logs that 0 new records were added to `deleted_audio` (as these are all duplicates), but 93 were still deleted from `audio`.

Finally, start a new Jamendo dagrun and let it run for a few hundred records longer than the previous attempt. This will cause it to reingest all the records from the first run, plus a few hundred additional records. When we check the logs, we should see something like: all of those rows upserted to the `audio` table, but fewer new records added to the `deleted_audio` table than were deleted from `audio`. This is because 93 were duplicates from the first DagRun and were not added.

Checklist
Developer Certificate of Origin