Do not ingest Jamendo records with downloads disabled #3618

Merged: 11 commits into main on Jan 18, 2024

Conversation

@stacimc (Collaborator) commented Jan 2, 2024

Fixes

Fixes #3530 by @stacimc

Description

This PR updates the Jamendo DAG to discard records with audiodownload_allowed set to False. It does this by adding a postingestion TaskGroup, using the existing delete_records tasks to add those records to deleted_audio and drop them from audio:

[Screenshot (2024-01-03): the new postingestion TaskGroup in the Airflow graph view]

This required updating the delete_records logic to handle the case where a record is deleted, reingested, and then deleted again. I opted for the simple approach of discarding duplicates without updating the existing record in the deleted media table. That means the deleted media table will preserve each record as it was the first time it was deleted. Anything more sophisticated is out of scope and can be decided in a future project.
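Roughly, the duplicate handling amounts to something like the following. This is a simplified sketch, not the actual query: the column handling and the selection predicate are illustrative, and it relies on the unique index described in the testing instructions below.

-- Sketch only: assumes deleted_audio mirrors audio plus deletion metadata,
-- and that the download flag is stored in meta_data.
INSERT INTO deleted_audio
SELECT audio.*, now()
FROM audio
WHERE provider = 'jamendo'
  AND meta_data->>'audiodownload_allowed' = 'False'
ON CONFLICT (provider, md5(foreign_identifier)) DO NOTHING;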

Alternative approaches

Initially I planned on simply discarding the records during ingestion when audiodownload_allowed is False. However, Jamendo allows users to turn off audio downloads at any time -- so records that were ingested into the catalog on an earlier run may have downloads disabled by the time of a subsequent DagRun. If we simply discard the record at ingestion, the stale old record will remain in the catalog.

You can see in this commit a quick exploration into identifying such records at ingestion and reporting to Slack that a manual delete_records run is needed. However, that approach still requires manual intervention, and it would have taken more time to get working than is warranted.

However, we already have the ability to add postingestion tasks to any provider ingester, and we have reusable tasks for deleting records! So this approach could be implemented very quickly and works with no manual intervention. The delete steps complete very quickly even on production data (tested with a delete_records run in production that took just 22 seconds 😄).

Testing Instructions

I added unique constraints on (provider, foreign_identifier) to the deleted media tables. You can either run just recreate to update your local environment, or add the constraints manually by running just catalog/pgcli and then:

CREATE UNIQUE INDEX deleted_image_provider_fid_idx
    ON public.deleted_image
        USING btree (provider, md5(foreign_identifier));

CREATE UNIQUE INDEX deleted_audio_provider_fid_idx
    ON public.deleted_audio
        USING btree (provider, md5(foreign_identifier));

Now run the Jamendo DAG locally. Let it ingest at least 1000 records to ensure it gets some with audiodownload_allowed set to False.

Check that all steps complete, and check the logs to ensure that the counts make sense. In my test, 1000 rows were upserted and 93 were both added to the deleted_audio table (see the logs of the update_deleted_media_table step) and deleted from audio (see the logs of delete_records_from_media_table).

Unit tests should cover the duplicate detection, but we can also test manually!

Now clear the DagRun from the create_loading_table step and ensure that all steps still pass. By clearing the loading steps, we reingest those records back into the audio table. You should see in the logs that 0 new records were added to deleted_audio (as these are all duplicates), but 93 were still deleted from audio.

Finally, start a new Jamendo DagRun and let it run for a few hundred records more than the previous attempt. This will cause it to reingest all the records from the first run, plus a few hundred additional records. When we check the logs, we should see something like the following (the final state can also be confirmed with the queries after this list):

  • 1300 records upserted
  • 143 records deleted from the audio table
  • only 50 records added to the deleted_audio table, because 93 were duplicates from the first DagRun and were not re-added
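To double-check the final state beyond the logs, a couple of quick queries in just catalog/pgcli should line up with the counts above. The predicate in the second query is an assumption about where the download flag is stored, so adjust as needed:

-- Rows preserved in the deleted media table; expect 143 (93 + 50) in this scenario
SELECT count(*) FROM public.deleted_audio WHERE provider = 'jamendo';

-- Download-disabled Jamendo rows remaining in audio; expect 0
SELECT count(*) FROM public.audio
WHERE provider = 'jamendo'
  AND meta_data->>'audiodownload_allowed' = 'False';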

Checklist

  • My pull request has a descriptive title (not a vague title like Update index.md).
  • My pull request targets the default branch of the repository (main) or a parent feature branch.
  • My commit messages follow best practices.
  • My code follows the established code style of the repository.
  • I added or updated tests for the changes I made (if applicable).
  • I added or updated documentation (if applicable).
  • I tried running the project locally and verified that there are no visible errors.
  • I ran the DAG documentation generator (if applicable).

@stacimc added the labels 🟨 priority: medium (Not blocking but should be addressed soon), ✨ goal: improvement (Improvement to an existing user-facing feature), 💻 aspect: code (Concerns the software code in the repository), and 🧱 stack: catalog (Related to the catalog and Airflow DAGs) on Jan 2, 2024
@stacimc self-assigned this on Jan 2, 2024

github-actions bot commented Jan 3, 2024

Full-stack documentation: https://docs.openverse.org/_preview/3618

Please note that GitHub Pages takes a little time to deploy newly pushed code. If the links above don't work or you see old versions, wait 5 minutes and try again.

You can check the GitHub pages deployment action list to see the current status of the deployments.


@stacimc marked this pull request as ready for review on January 3, 2024 22:03
@stacimc requested reviews from a team as code owners on January 3, 2024 22:03
@stacimc requested reviews from fcoveram, dhruvkb, and krysal, and removed the request for fcoveram on January 3, 2024 22:03
@krysal (Member) left a comment

I tested it following the instructions, and it works great. I like this approach 👍

The choice of (provider, foreign_identifier) as the key for checking whether a record already exists in the deleted table is interesting, though. I was thinking that just using identifier would be easier, as it's a single field to compare and already our internal primary key, but I'm glad this works as well.

I think an index on identifier is still necessary to make queries on these deleted tables faster. I was looking for some specific rows in recent days, and it took quite a long time to return the row (just one). I was filtering by this field and thought it would be quick, but it wasn't. What do you think?

Aside from that side point, it all looks good to me!
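Such an index would be a one-liner per deleted media table, along the lines of the following sketch (the index name is illustrative):

CREATE INDEX deleted_audio_identifier_idx
    ON public.deleted_audio
        USING btree (identifier);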

Comment on lines +248 to +258
def create_postingestion_tasks():
"""
Create postingestion tasks to delete records that have downloads
disabled from the audio table and preserve them in the
deleted_audio table.

If we instead simply discarded these records during ingestion,
existing records which have had their downloads disabled since their
last ingestion would remain in the catalog. This approach ensures
all records with downloads disabled are removed.
"""
A project member commented:
Nice to make use of this function again. Clever setup! Also because the reason makes perfect sense 💯

catalog/dags/providers/provider_api_scripts/jamendo.py (review thread, outdated and resolved)
@stacimc (Collaborator, Author) commented Jan 4, 2024

> The choice of (provider, foreign_identifier) as the key for checking whether a record already exists in the deleted table is interesting, though. I was thinking that just using identifier would be easier, as it's a single field to compare and already our internal primary key, but I'm glad this works as well.

It's only necessary to use (provider, foreign_identifier) instead of identifier because of the reingestion case. If a record gets deleted and then reingested by the provider DAG, it will have a different internal identifier than it did the first time it was deleted. But the (provider, fid) pair will still allow us to detect duplicates.

stacimc and others added 11 commits January 9, 2024 11:09

…downloads disabled" (reverts commit bc41375)

This change would have resulted in all previously ingested download-disabled records being reingested every single time, and the delete_records DAG needing to be run each time, because it does not distinguish between records that are currently in the catalog and ones that are not.

Co-authored-by: Krystle Salazar <krystle.salazar@automattic.com>
@stacimc force-pushed the update/jamendo-prevent-ingestion-for-download-disabled branch from 9e7a32d to d70a4c8 on January 9, 2024 19:11
@openverse-bot (Collaborator) commented:

Based on the medium urgency of this PR, the following reviewers are being gently reminded to review this PR:

@dhruvkb
This reminder is being automatically generated due to the urgency configuration.

Excluding weekend[1] days, this PR was ready for review 7 day(s) ago. PRs labelled with medium urgency are expected to be reviewed within 4 weekday(s)[2].

@stacimc, if this PR is not ready for a review, please draft it to prevent reviewers from getting further unnecessary pings.

Footnotes

  1. Specifically, Saturday and Sunday.

  2. For the purpose of these reminders we treat Monday - Friday as weekdays. Please note that the operation that generates these reminders runs at midnight UTC on Monday - Friday. This means that depending on your timezone, you may be pinged outside of the expected range.

@dhruvkb requested a review from AetherUnbound and removed the request for dhruvkb on January 15, 2024 12:00
@dhruvkb (Member) left a comment

The changes look good, the documentation makes sense, the tests are thorough, and the testing instructions work!

@stacimc merged commit 8b0ea73 into main on Jan 18, 2024 (41 checks passed)
@stacimc deleted the update/jamendo-prevent-ingestion-for-download-disabled branch on January 18, 2024 19:22