limit vertex size with 50000 to avoid huge grpc message making client… #2456

Merged 1 commit into moby:master on Nov 19, 2021

Conversation

@sunchunming (Author)

Fixes #1539.

When running maven compile in a Dockerfile, there are a lot of log vertexes in the Status gRPC response (more than 200,000). The gRPC header and the other fields in each vertex message push the total gRPC message above 20 MB, which overflows the receiving buffer in buildctl.

@tonistiigi (Member)

This does not look correct. All items after 50000 seem to be sent separately. If your case is that you have a lot of log messages with small Data, then maybe we need to add additional overhead to the size count, e.g. logSize += len(v.Data)*const.

@sunchunming (Author)

It's not that all items after 50000 are sent separately; every 50000 items are sent as a separate message. In this case, limiting the number of messages is more accurate than limiting the size of the log data, and it may be better to limit both the number of messages and the log size at the same time.

@tonistiigi (Member)

Still, where is that 50000 coming from? Take the size of a controlapi.VertexLog with zero data length and add it to len(v.Data). There is also VertexLog.Size() that could be used instead of len(), but I think that may have too much overhead for our use case.
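
A minimal sketch of that accounting, not code from this PR: it assumes the controlapi import path used in the buildkit tree and the gogo-generated Size() method; countedLogSize and its data parameter are illustrative names.

```go
import controlapi "github.com/moby/buildkit/api/services/control"

// countedLogSize approximates the serialized size of a batch of log
// payloads: each entry costs its data length plus the fixed size of an
// empty VertexLog record, so many small entries are counted fairly.
func countedLogSize(data [][]byte) int {
	overhead := (&controlapi.VertexLog{}).Size() // per-record overhead, measured once
	size := 0
	for _, d := range data {
		size += len(d) + overhead
	}
	return size
}
```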

@sunchunming (Author)

Similar to the 1 MB limit on log data size, 50000 is an empirical value that keeps the message size under the safe line. I agree with you that VertexLog.Size() is logically much better, but it is expensive. Considering that this scenario is relatively rare, either of the other two methods is enough; which one do you prefer?

@tonistiigi (Member)

> Similar to the 1 MB limit on log data size, 50000 is an empirical value that keeps the message size under the safe line.

I mean that there is no actual known limit now. Sometimes the limit might be 1 MB (e.g. if it is a single log), but if there are a lot of items it could be much higher (1 MB + 50K * empty record size). We should stick to one precise limit.
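
To make that concrete, here is a tiny worked bound under the two separate limits; the 100-byte per-record overhead is a placeholder, not a measured figure:

```go
package main

import "fmt"

func main() {
	const (
		maxEntries      = 50000   // the proposed message-count cap
		maxPayloadBytes = 1 << 20 // the 1 MB data cap
		emptyRecordSize = 100     // placeholder per-record overhead, not measured
	)
	// Worst case for one message: the full payload cap plus the fixed
	// overhead of every one of the 50000 records.
	worstCase := maxPayloadBytes + maxEntries*emptyRecordSize
	fmt.Printf("worst case: %.1f MB\n", float64(worstCase)/(1<<20))
	// prints "worst case: 5.8 MB" with these placeholder numbers
}
```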

@sunchunming (Author)

I think the server's sending buffer limit should be consistent with the client's receiving buffer limit (currently 16 MB).
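
For reference, the client-side limit in question is the gRPC maximum receive message size. In grpc-go it is configured per connection roughly as below; whether buildctl sets exactly this option and value is an assumption, and dialControl is an illustrative name.

```go
import "google.golang.org/grpc"

// dialControl opens a client connection whose receive limit allows
// status messages up to 16 MB, the figure mentioned above.
func dialControl(address string) (*grpc.ClientConn, error) {
	return grpc.Dial(address,
		grpc.WithInsecure(), // sketch only; a real client would configure credentials
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(16<<20)),
	)
}
```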

@tonistiigi (Member) commented on Nov 15, 2021

There is no need to set it very big, as that also makes the client view less responsive. I think 1 MB is fine; we just need to count it correctly.

@@ -325,6 +325,8 @@ func (c *Controller) Status(req *controlapi.StatusRequest, stream controlapi.Con
eg.Go(func() error {
return c.solver.Status(ctx, req.Ref, ch)
})
emtpyLogVertex := controlapi.VertexLog{}
emtpyLogVertexSize := emtpyLogVertex.Size()
@tonistiigi (Member)

you can make this global and move setting it to init()
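
A sketch of what that suggestion amounts to (spelling corrected; where exactly it lands in control.go is up to the author):

```go
var emptyLogVertexSize int

func init() {
	// computed once at package load instead of on every Status call
	emptyLogVertexSize = (&controlapi.VertexLog{}).Size()
}
```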

@sunchunming (Author)

fixed

// avoid logs growing big and split apart if they do
if logSize > 1024*1024 {
ss.Vertexes = nil
ss.Statuses = nil
ss.Logs = ss.Logs[i+1:]
retry = true
break
}else {
retry = false
@tonistiigi (Member)

why this?

@sunchunming (Author)

After the first big message, retry stays true and there is an endless loop.

@tonistiigi (Member)

Hmm, looks weird indeed. Maybe the retry := false should be inside the for loop.
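
A sketch of the loop shape being suggested, with retry reset at the top of every pass so one oversized batch cannot leave it stuck at true; the byte-slice records and the send callback are stand-ins for the real controlapi stream, not the code in this PR.

```go
// sendInChunks splits logs into messages of roughly 1 MB each,
// counting per-record overhead on top of the payload bytes.
func sendInChunks(logs [][]byte, overhead int, send func(batch [][]byte) error) error {
	for {
		retry := false // reset for every outgoing message
		var batch [][]byte
		logSize := 0
		for i, d := range logs {
			batch = append(batch, d)
			logSize += len(d) + overhead
			// avoid logs growing big and split apart if they do
			if logSize > 1024*1024 {
				logs = logs[i+1:] // remainder goes into the next message
				retry = true
				break
			}
		}
		if err := send(batch); err != nil {
			return err
		}
		if !retry {
			return nil
		}
	}
}
```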

@sunchunming (Author)

fixed

// avoid logs growing big and split apart if they do
if logSize > 1024*1024 {
ss.Vertexes = nil
ss.Statuses = nil
ss.Logs = ss.Logs[i+1:]
retry = true
break
}else {
@tonistiigi (Member)

doesn't look formatted.

@sunchunming (Author)

fixed

@tonistiigi (Member) left a comment

Please squash the commits as well

Commit: …t buffer overflow
Signed-off-by: sunchunming <sunchunming1@jd.com>
@sunchunming (Author)

> Please squash the commits as well

@sunchunming reopened this on Nov 19, 2021
@tonistiigi merged commit f9cb2b7 into moby:master on Nov 19, 2021
@crazy-max added this to the v0.10.0 milestone on Feb 4, 2022