Documentation request - safe async usage #531
➤ Automation for Jira commented: The link to the corresponding Jira issue is https://ably.atlassian.net/browse/SDK-3856
@hsghori I have notified the author of the library who wrote
Hey @hsghori, thanks for reaching out! I understand your concern with asynchronous operations changing internal state, however we have aimed to design the library so that it 'just works' without needing to worry too much about concurrency, so I don't have any specific advice on how to use the async methods.

In this particular case, when the internal HTTP client encounters an error which necessitates the use of a fallback host, it will iterate through fallback hosts and, if it finds a working host, temporarily save that as a preference. I suppose there are cases where several requests fail at once and the fallback mechanism runs for each failure, each overwriting the host preference when finished, but it's quite rare for this to happen and there are only 5 fallback hosts to try, so while it's not the perfect behaviour, I don't consider it to be unsafe or otherwise unacceptable.

If this behaviour is unacceptable for you, an alternative and more efficient way to publish multiple messages to multiple channels is to use our batch API. We don't have first-class support for it in ably-python just yet, however you can still make use of the SDK's auth and retry mechanism by querying the API with the generic AblyRest.request method. If you do go down this route I'm happy to answer any questions or share an example if needed.
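The "each failure overwrites the host preference" behaviour described above can be sketched with a toy client. This is a hypothetical model for illustration only, not the real ably-python internals (`ToyHttpClient`, `request_with_fallback`, and the host names are all made up):

```python
import asyncio

# Toy model of the fallback-host preference: each failing request scans
# the fallback hosts and stores the first working one as a shared
# preference, so overlapping failures end up "last writer wins".
FALLBACK_HOSTS = [f"fallback{i}.example.com" for i in range(5)]

class ToyHttpClient:
    def __init__(self):
        self.preferred_host = "main.example.com"

    async def request_with_fallback(self, working_host):
        # Simulate the primary host failing, then scanning the fallbacks.
        for host in FALLBACK_HOSTS:
            await asyncio.sleep(0)  # yield so coroutines interleave
            if host == working_host:
                # Each coroutine overwrites the shared preference on success.
                self.preferred_host = host
                return host

async def main():
    client = ToyHttpClient()
    # Two concurrent failures pick different "working" fallbacks; the
    # saved preference is whichever coroutine finished last.
    await asyncio.gather(
        client.request_with_fallback("fallback1.example.com"),
        client.request_with_fallback("fallback3.example.com"),
    )
    return client.preferred_host

print(asyncio.run(main()))
```

As Owen notes, the worst case is a briefly stale host preference rather than corrupted state, which is why this is tolerable in practice.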
@owenpearson thanks for the response! I would love an example if you have time. Also, is there a limit to the total number of messages that can be sent via the bulk API at once? And is the 10s timeout sufficient for using the bulk API?
Hey @hsghori, here's an example of using the batch API. Note that while this example publishes a single 'batch spec', you can also publish an array of batch specs in order to send different messages to a different collection of channels. The results for each batch spec contain a `successCount` and a `failureCount`. There is no limit to the number of messages, however there is a limit of 100 channels and a 2 MiB request body per request. And yes, the 10s timeout is fine for this endpoint.

```python
import asyncio

from ably import AblyRest


async def main():
    client = AblyRest(key="your_api_key")
    batch_spec = {
        "channels": ["channel1", "channel2", "channel3"],
        "messages": [{"data": "message1"}, {"data": "message2"}],
    }
    res = await client.request(
        "POST", "/messages", "3", None, batch_spec
    )
    # an array of BatchResults, one for each BatchSpec, each containing a
    # successCount and a failureCount indicating the number of channels to
    # which the batch of messages was published successfully
    print(res.items)


asyncio.run(main())
```
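Since the limits above (100 channels and a 2 MiB body per request) are easy to trip with large bulk operations, a pre-flight check over a list of batch specs might look like the sketch below. `check_batch` and the exact counting rule are assumptions for illustration, not part of the SDK:

```python
import json

MAX_CHANNELS = 100                # documented per-request channel limit
MAX_BODY_BYTES = 2 * 1024 * 1024  # documented 2 MiB request-body limit

def check_batch(batch_specs):
    """Validate a list of batch specs before POSTing them to /messages."""
    # Assumption: the limit counts distinct channels across all specs.
    channels = {c for spec in batch_specs for c in spec["channels"]}
    if len(channels) > MAX_CHANNELS:
        raise ValueError(f"{len(channels)} channels exceeds {MAX_CHANNELS}")
    body = json.dumps(batch_specs).encode("utf-8")
    if len(body) > MAX_BODY_BYTES:
        raise ValueError(f"body is {len(body)} bytes, limit {MAX_BODY_BYTES}")
    return body

# Two batch specs sending different messages to different channel sets.
specs = [
    {"channels": ["channel1", "channel2"], "messages": [{"data": "hello"}]},
    {"channels": ["channel3"], "messages": [{"data": "world"}]},
]
print(len(check_batch(specs)))
```

Oversized batches can then be split client-side before hitting the endpoint.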
Edit: I see that's explained in the docs. It's 100 channels per request.
In the example you provided though:

```python
res = await client.request(
    "POST", "/messages", "3", None, batch_spec
)
```

It looks like you're passing the `batch_spec` as the headers argument?
Also, do y'all have a P50 for performance of the bulk endpoint (especially relative to input size)? One of my goals here is to ensure that customers aren't waiting too long on message publishing. My application supports a lot of bulk operations which send one message per item being operated on, so I want to make sure that a bulk op that takes ~1-5s does not end up getting significantly slower on average.
The `body` arg comes before `headers`, see lines 122 to 123 in 9b7c951.
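One way to sidestep the positional-argument confusion is to pass keyword arguments. The stub below mirrors the assumed `(method, path, version, params, body, headers)` parameter order from the comment above; it is not the real client:

```python
import asyncio

# Stub mirroring the assumed parameter order of AblyRest.request
# (method, path, version, params, body, headers); NOT the real client.
async def request(method, path, version, params=None, body=None, headers=None):
    return {"body": body, "headers": headers}

batch_spec = {"channels": ["channel1"], "messages": [{"data": "m"}]}

# Positionally, the fifth argument lands in `body`, not `headers`:
res = asyncio.run(request("POST", "/messages", "3", None, batch_spec))
assert res["body"] is batch_spec and res["headers"] is None

# Keyword arguments make the intent explicit and order-proof:
res = asyncio.run(request("POST", "/messages", "3", body=batch_spec))
print(res["body"])
```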
As for the batch API performance guarantees, I'll need to check with another team, so I'll let you know when I have something to share. In the meantime, it might be worth testing some large queries yourself to see how it performs with large payloads.
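A rough harness for that kind of self-testing might look like this. `fake_batch_publish` and its sleep time are placeholders, not measured latencies; you would swap in the real `client.request(...)` call:

```python
import asyncio
import time

async def fake_batch_publish(n_messages):
    # Stand-in for the real batch request; replace with
    # client.request("POST", "/messages", ...) to measure for real.
    await asyncio.sleep(0.01)

async def measure(n_messages, runs=5):
    # Time several runs and return the median as a rough P50.
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        await fake_batch_publish(n_messages)
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return samples[len(samples) // 2]

p50 = asyncio.run(measure(1000))
print(f"P50: {p50 * 1000:.1f} ms")
```

Running this at a few payload sizes (say 10, 1000, and 100000 messages) would show how latency scales with input size.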
@hsghori currently, I am in a position to answer this question. ably-python doesn't use multithreading, so
I'm trying to understand to what extent actual parallel usage of this client is safe / tested / supported.

For example, since I can't use bulk publish to send multiple messages in a single payload if those messages are from different channels, I've been looking into using Python coroutines to reduce the amount of time I need to wait on the Ably servers. But it's not clear to me if this usage is actually safe from a state / resources perspective.

Under the hood it looks like `Channel.publish` calls back to the shared `ably.http` client, and that can actually set parameters like `self.__host` and `self.__host_expires`, which feels fairly unsafe since different coroutines could be setting those parameters on the same client. But since `channel.publish` is an explicitly async method I'd expect it to be safe to parallelize. So I'd love to understand to what extent async usage of this SDK is safe / how y'all would recommend we use the async capabilities of the SDK.
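The coroutine fan-out being considered here can be sketched as below. `publish` is a stub standing in for an async channel publish call, with a sleep faking the network round trip; the real call shape is an assumption, not the SDK's API:

```python
import asyncio

async def publish(channel, message):
    # Stub for an async publish; the sleep fakes one network round trip.
    await asyncio.sleep(0.01)
    return (channel, message)

async def main():
    messages = {f"channel{i}": f"message{i}" for i in range(5)}
    # Fan out one publish per channel and await them concurrently; total
    # wall time is roughly one round trip instead of five sequential ones.
    return await asyncio.gather(
        *(publish(ch, msg) for ch, msg in messages.items())
    )

results = asyncio.run(main())
print(len(results))
```

Since all coroutines run on one event loop thread, they interleave only at `await` points rather than preempting each other mid-statement, which is the crux of the safety question raised above.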
┆Issue is synchronized with this Jira Task by Unito