Documentation for the thread model #4614
Comments
@MovieStoreGuy is this something you'd be interested in providing? |
Yeah, feel free to add me to it. I wouldn't mind understanding it a bit more so it works for me :D |
If you need any help, let me know. |
@MovieStoreGuy I think I can help with question 1, which I've figured out. |
Sorry, I had been unwell for some time and struggling to get back into things. This is on my list of things to get through this week :) |
Is there any explanation for this? We are seeing 10 threads while running the top command. I see there is a pprof extension, but we want to understand this from a code perspective. |
Is your feature request related to a problem? Please describe.
It is not clear from the README what the threading model of the OpenTelemetry Collector is, and we have to read the code to understand how things work (which is not easy, based on feedback from a few folks on my team), since the collector is spun up based on the config and the server is embedded in the code.
It would be good if the threading model were added to the public docs or the README in the project.
synchronous or asynchronous request handling
Does the collector handle each gRPC request synchronously? In other words, does the collector wait to send the response to the client until the entire data flow succeeds (receiver -> processor -> exporter)?
From the code it looks like that is the case, since the receiver calls the chained next consumer, but I am not sure this is the desired behavior if you have a processor doing work like compression/sampling/extraction and landing data in the backend, which could take tens of seconds.
thread safety and resource contention
To my understanding, the receiver registers a gRPC server, and the server spins up a new goroutine to process each request. The components are created once and shared across goroutines (I am not sure, since the code base is complicated and I am still reading it). Is this thread-safe? If so, could there be resource contention, since the components could be shared across hundreds of goroutines?
Describe the solution you'd like
enhance the protocol specification, since it already describes some of the request/response model