
fix(multiprocessing): Increase default block sizes #319

Open · wants to merge 2 commits into base: main
Conversation

@nikhars (Member) commented Dec 28, 2023

There are some Sentry topics for which we allow a single message to be as large as 25 MB. In cases like these, the autoresizing of block sizes could fail if the default block sizes are smaller than the maximum size of a single message. This change raises the default block sizes to be at least as large as the maximum size of a single message.

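The sizing rule behind this change can be sketched as follows. This is an illustrative snippet, not code from the PR: `pick_default_block_size` is a hypothetical helper, and the 16 MB old default is taken from the fixture suggested later in this thread. The point is that a shared-memory block smaller than the largest possible message can never hold that message, no matter how autoresizing grows it between batches.

```python
MAX_MESSAGE_SIZE = 25 * 1024 * 1024        # largest single message on some Sentry topics
OLD_DEFAULT_BLOCK_SIZE = 16 * 1024 * 1024  # previous default, smaller than the max message


def pick_default_block_size(default: int, max_message_size: int) -> int:
    """Return a block size at least as large as the biggest possible message,
    so a single maximum-size message always fits in one block."""
    return max(default, max_message_size)


new_default = pick_default_block_size(OLD_DEFAULT_BLOCK_SIZE, MAX_MESSAGE_SIZE)
```

With these numbers the new default becomes 25 MB, while topics whose messages are smaller than the old default are unaffected.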
@nikhars nikhars requested a review from a team as a code owner December 28, 2023 21:10
@lynnagara (Member)
Were you able to confirm that all Sentry and Snuba consumers have adequate memory requested for this?

@untitaker (Member)
For the tests, I had to refactor quite a bit last time to get them to pass: #306

I would recommend pinning the block sizes to the old value:

@pytest.fixture(autouse=True)
def block_sizes_for_tests(monkeypatch):
    # Pin both defaults back to the old 16 MB value so the tests keep passing.
    from arroyo.processing.strategies import run_task_with_multiprocessing as mod

    monkeypatch.setattr(mod, "DEFAULT_INPUT_BLOCK_SIZE", 16 * 1024 * 1024)
    monkeypatch.setattr(mod, "DEFAULT_OUTPUT_BLOCK_SIZE", 16 * 1024 * 1024)

3 participants