Engine refactoring #90
Added a benchmark to estimate how much work is required to handle one message coming from PUB/SUB:

```go
func BenchmarkClientMsg(b *testing.B) {
	app := testMemoryApp()
	// Create one client so clientMsg really marshals into a client response JSON.
	c, _ := newClient(app, &testSession{})
	messagePoolSize := 1000
	messagePool := make([][]byte, messagePoolSize)
	for i := 0; i < len(messagePool); i++ {
		channel := Channel("test" + strconv.Itoa(i))
		// Subscribe the client to the channel so the message must be encoded to JSON.
		app.clients.addSub(channel, c)
		// Add the message to the pool so we have messages for different channels.
		testMsg := newMessage(channel, []byte(`{"hello world": true}`), "", nil)
		byteMessage, _ := testMsg.Marshal() // protobuf
		messagePool[i] = byteMessage
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var msg Message
		err := msg.Unmarshal(messagePool[i%len(messagePool)]) // unmarshal from protobuf
		if err != nil {
			panic(err)
		}
		err = app.clientMsg(Channel("test"+strconv.Itoa(i%len(messagePool))), &msg)
		if err != nil {
			panic(err)
		}
	}
}
```

Results:
I've added custom manual marshaling for the client message, join and leave JSON responses, so the benchmark now looks like:
So I don't think this place will be a bottleneck in any setup, as the node is now able to process more than 300k NEW messages from the engine on my machine (working in one goroutine; if we ever need more, it would be possible to spread the work among workers based on channel name).
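As an aside, manual marshaling of this kind usually means appending bytes directly instead of going through `encoding/json` reflection. A minimal sketch of the idea; the field layout and names here are illustrative, not the actual Centrifugo response format:

```go
// clientMessageBody is a hypothetical response shape used only for this sketch.
type clientMessageBody struct {
	UID     string
	Channel string
	Data    []byte // already-encoded JSON payload
}

// appendClientMessageJSON builds the response JSON by hand, avoiding the
// reflection and intermediate allocations of encoding/json.
func appendClientMessageJSON(buf []byte, m *clientMessageBody) []byte {
	buf = append(buf, `{"method":"message","body":{"uid":`...)
	// Note: strconv.AppendQuote uses Go escaping, which matches JSON for
	// typical strings; a full implementation would use a JSON-exact escaper.
	buf = strconv.AppendQuote(buf, m.UID)
	buf = append(buf, `,"channel":`...)
	buf = strconv.AppendQuote(buf, m.Channel)
	buf = append(buf, `,"data":`...)
	buf = append(buf, m.Data...) // raw JSON, inserted as-is
	buf = append(buf, `}}`...)
	return buf
}
```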
As a note, if we ever need to process messages coming from Redis in different goroutines, here is a gist with an implementation: https://gist.github.com/FZambia/1864a76e9c51f8e3eea5d0f8bdc2f739
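The gist is not reproduced here, but the general idea is to hash the channel name to pick a worker, so that messages in the same channel keep their order. A minimal sketch under that assumption (all names below are illustrative, not taken from the gist):

```go
import "hash/fnv"

type work struct {
	channel string
	payload []byte
}

// workerPool fans messages out to a fixed set of goroutines. Messages for the
// same channel always hash to the same worker, preserving per-channel order.
type workerPool struct {
	queues []chan work
}

func newWorkerPool(n int, handle func(work)) *workerPool {
	p := &workerPool{queues: make([]chan work, n)}
	for i := range p.queues {
		q := make(chan work, 256)
		p.queues[i] = q
		go func() {
			for w := range q {
				handle(w)
			}
		}()
	}
	return p
}

func (p *workerPool) dispatch(w work) {
	h := fnv.New32a()
	h.Write([]byte(w.channel))
	p.queues[h.Sum32()%uint32(len(p.queues))] <- w
}
```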
So after all the changes, some interesting timings, all measured without connected clients, just to show how the unnecessary JSON encoding of every message affected performance.

Broadcast one message into 100000 channels:

- Memory engine, master branch:
- Memory engine, gogoprotobuf branch:
- Redis engine, master branch:
- Redis engine, gogoprotobuf branch:

Publish 50000 messages in one request into different channels:

- Memory engine, master branch:
- Memory engine, gogoprotobuf branch:
- Redis engine, master branch:
- Redis engine, gogoprotobuf branch:
The huge numbers when publishing 50k into Redis come from the fact that we don't utilize batching for commands within one request (as I wrote here), so we wait for 50k round trips. In practice I don't think anyone will send such requests - all publishes will arrive as separate API requests. Also, the most recent benchmark of PUB/SUB receive shows this on a Mac Pro 2012:
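If batching were added, a client-side pipeline would collapse those 50k round trips into a single write/flush/read cycle. A sketch of the idea using the redigo driver's `Send`/`Flush`/`Receive` pipeline API (assuming redigo is the Redis client in use; the `pub` type and function name are illustrative):

```go
import "github.com/gomodule/redigo/redis"

type pub struct {
	channel string
	data    []byte
}

// publishBatch pipelines many PUBLISH commands over one connection: all
// commands are written first, then all replies are read back, so the cost is
// roughly one round trip instead of len(pubs) round trips.
func publishBatch(conn redis.Conn, pubs []pub) error {
	for _, p := range pubs {
		if err := conn.Send("PUBLISH", p.channel, p.data); err != nil {
			return err
		}
	}
	if err := conn.Flush(); err != nil {
		return err
	}
	for range pubs {
		if _, err := conn.Receive(); err != nil {
			return err
		}
	}
	return nil
}
```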
Nice work!
This is not complete work yet, but this branch has a working version of Centrifugo with a refactored engine interface.

The engine has refactored publish methods - see the sketch below.
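To make the shape of the change concrete, a split set of publish methods might look roughly like this. This is only an illustration of the direction described above, not the exact interface in the branch; `JoinMessage` and `LeaveMessage` in particular are hypothetical names:

```go
// engine is an illustrative sketch: instead of a single publish method taking
// pre-encoded JSON, each message kind gets its own method, so the engine can
// choose the wire encoding (e.g. protobuf) itself.
type engine interface {
	publishMessage(ch Channel, message *Message) error
	publishJoin(ch Channel, message *JoinMessage) error   // hypothetical type
	publishLeave(ch Channel, message *LeaveMessage) error // hypothetical type
	subscribe(ch Channel) error
	unsubscribe(ch Channel) error
}
```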
Benefits:

- `ChannelID` is only used in the Redis engine now.

Downsides: