
Documentation for the thread model #4614

Open
sl1316 opened this issue Dec 30, 2021 · 7 comments
Comments

@sl1316

sl1316 commented Dec 30, 2021

Is your feature request related to a problem? Please describe.
It is not clear from the README what the threading model of the OpenTelemetry Collector is, so we had to read the code to understand how things work (which is not easy, based on feedback from a few folks on my team), since the collector is spun up from the config and the server is embedded in the code.
It would be good if the threading model were added to the public docs or README in this project.

  1. Synchronous or asynchronous request handling
    Does the collector handle each gRPC request synchronously? In other words, does the collector wait to send the response to the client until the entire data flow (receiver -> processor -> exporter) succeeds?
    From the code it looks like that is the case, since the receiver calls the chained next consumer, but I am not sure this is the desired behavior if a processor does work such as compression/sampling/extraction and lands data in the backend, which could take tens of seconds.

  2. Thread safety and resource contention
    To my understanding, the receiver registers a gRPC server, and the server spins up a new goroutine to process each request, while the components are created once and shared across the goroutines (I am not sure, since the code base is complicated and I am still reading it). Is this thread-safe? If so, could there be resource contention, since the components could be shared across hundreds of goroutines?

Describe the solution you'd like
Enhance the protocol specification, since it already contains some of the request/response model.
@sl1316 sl1316 changed the title what's the thread model Documentation for the thread model Dec 31, 2021
@jpkrohling
Member

@MovieStoreGuy is this something you'd be interested in providing?

@MovieStoreGuy
Contributor

Yeah, feel free to add me to it.

I wouldn't mind understanding it a bit more so it works for me :D

@jpkrohling
Member

If you need any help, let me know.

@sl1316
Author

sl1316 commented Jan 11, 2022

@MovieStoreGuy I think I can help with question 1, which I have figured out.
Do you have any insight into question 2? Especially resource contention and thread safety.

@MovieStoreGuy
Contributor

Sorry, I have been unwell for some time and am struggling to get back into things.

This is on my list of things to get through this week :)

@Invisiblesil123

Invisiblesil123 commented Feb 28, 2024

Is there any explanation for this? We are seeing 10 threads while running the top command. I see the pprof extension is there, but we want to understand it from a code perspective.

@cforce
Contributor

cforce commented Mar 17, 2024
