
Engine refactoring #90

Merged: 20 commits merged into master from gogoprotobuf on Jun 9, 2016

Conversation

@FZambia (Member) commented Apr 30, 2016

This is not complete work yet, but this branch contains a working version of Centrifugo with a refactored engine interface.

The Engine's publish methods have been refactored.

Benefits:

  • Cleaner Engine interface
  • ChannelID is now used only in the Redis engine
  • Very fast Memory engine: it only does the work that is required
  • No extra JSON marshaling when a channel has no active subscribers. At the moment the master branch wraps every published message into a client JSON response.
  • A faster and more compact serialization format (gogoprotobuf) for client messages and join/leave messages in Redis. For example, here is a benchmark of message serialization (a sketch of such benchmark code follows this list):
BenchmarkMsgMarshalJSON-2             500000          2608 ns/op         512 B/op          6 allocs/op
BenchmarkMsgMarshalGogoprotobuf-2    5000000           252 ns/op          64 B/op          1 allocs/op
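
A minimal sketch of how these two marshal benchmarks might look, assuming encoding/json for the JSON case and the gogoprotobuf-generated Marshal method on Message for the protobuf case. It follows the existing test helpers (newMessage, Channel) but is an illustration, not the PR's exact benchmark code:

func BenchmarkMsgMarshalJSON(b *testing.B) {
    msg := newMessage(Channel("test"), []byte(`{"hello world": true}`), "", nil)
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        // encoding/json relies on reflection, which costs time and allocations.
        if _, err := json.Marshal(msg); err != nil {
            b.Fatal(err)
        }
    }
}

func BenchmarkMsgMarshalGogoprotobuf(b *testing.B) {
    msg := newMessage(Channel("test"), []byte(`{"hello world": true}`), "", nil)
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        // Marshal is the gogoprotobuf-generated method producing a compact binary form.
        if _, err := msg.Marshal(); err != nil {
            b.Fatal(err)
        }
    }
}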

Downsides:

  • Previously a message was serialized into a client JSON response before being published into Redis, so no extra marshal/unmarshal work was required when receiving it from Redis PUB/SUB on another node: everything a Centrifugo node had to do with a received message was put it into client queues. Now the node must first decode it into a Message struct (gogoprotobuf unmarshal), then marshal it into a client response, and only then put it into client queues - i.e. pretty heavy work (it does this only for messages which must be delivered to clients connected to this node). It's hard to say at the moment how this will affect overall performance - it needs testing/benchmarking with active subscribers in channels.
  • 3 channels instead of 1 in Redis per subscription: published messages, join messages, leave messages (see the sketch below)
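
For illustration, this is roughly what the three-channel subscription could look like on the Redis side. The prefix scheme and the e.conn.Subscribe helper are hypothetical stand-ins, not Centrifugo's actual naming or API:

// Hypothetical ChannelID derivation: one Redis PUB/SUB channel per message type.
func msgChannelID(ch Channel) ChannelID   { return ChannelID("centrifugo.message." + string(ch)) }
func joinChannelID(ch Channel) ChannelID  { return ChannelID("centrifugo.join." + string(ch)) }
func leaveChannelID(ch Channel) ChannelID { return ChannelID("centrifugo.leave." + string(ch)) }

// On client subscribe, the node now issues three SUBSCRIBE commands instead of one.
func (e *RedisEngine) subscribe(ch Channel) error {
    ids := []ChannelID{msgChannelID(ch), joinChannelID(ch), leaveChannelID(ch)}
    for _, id := range ids {
        if err := e.conn.Subscribe(id); err != nil {
            return err
        }
    }
    return nil
}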

@FZambia (Member, Author) commented Apr 30, 2016

Added a benchmark to estimate how much work is required to handle one message coming from PUB/SUB:

func BenchmarkClientMsg(b *testing.B) {
    app := testMemoryApp()

    // create one client so that clientMsg actually marshals into client response JSON.
    c, _ := newClient(app, &testSession{})

    messagePoolSize := 1000

    messagePool := make([][]byte, messagePoolSize)

    for i := 0; i < len(messagePool); i++ {
        channel := Channel("test" + strconv.Itoa(i))
        // subscribe the client to the channel so the message must be encoded to JSON
        app.clients.addSub(channel, c)
        // add the message to the pool so we have messages for different channels.
        testMsg := newMessage(channel, []byte("{\"hello world\": true}"), "", nil)
        byteMessage, _ := testMsg.Marshal() // protobuf
        messagePool[i] = byteMessage
    }

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        var msg Message
        err := msg.Unmarshal(messagePool[i%len(messagePool)]) // unmarshal from protobuf
        if err != nil {
            panic(err)
        }
        err = app.clientMsg(Channel("test"+strconv.Itoa(i%len(messagePool))), &msg)
        if err != nil {
            panic(err)
        }
    }
}

Results:

PASS
BenchmarkClientMsg-2      200000          8767 ns/op         753 B/op         12 allocs/op

@FZambia (Member, Author) commented May 8, 2016

I've added custom manual marshaling for client message, join, and leave JSON responses, so the benchmark now looks like:

BenchmarkPubSubMessageReceive-2   500000          3189 ns/op         223 B/op          6 allocs/op

So I don't think this place will be a bottleneck in any setup, as a node is now able to process more than 300k new messages per second from the engine on my machine. This currently works in one goroutine; if needed, it will be possible to spread the work among workers based on channel name.
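
The manual marshaling mentioned above is, in spirit, an append-based construction like the following. The field layout and function name are illustrative only, not Centrifugo's exact wire format, and it assumes channel names need no JSON escaping:

// Build the client message response JSON directly with byte appends,
// avoiding encoding/json reflection on the hot path.
func appendClientMessageJSON(buf []byte, channel string, data []byte) []byte {
    buf = append(buf, `{"method":"message","body":{"channel":"`...)
    buf = append(buf, channel...) // assumed JSON-safe; real code must escape
    buf = append(buf, `","data":`...)
    buf = append(buf, data...) // data is already raw JSON, appended verbatim
    buf = append(buf, `}}`...)
    return buf
}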

@FZambia (Member, Author) commented May 12, 2016

As a note: if we ever need to process messages coming from Redis in different goroutines, here is a gist with an implementation: https://gist.github.com/FZambia/1864a76e9c51f8e3eea5d0f8bdc2f739
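
The core idea (a sketch under my own naming, not the gist's exact code) is to hash the channel name to pick a worker, so messages within one channel keep their order while different channels are processed in parallel:

import "hash/fnv"

type work struct {
    channel string
    payload []byte
}

type workerPool struct {
    queues []chan work
}

func newWorkerPool(n int, handle func(work)) *workerPool {
    p := &workerPool{queues: make([]chan work, n)}
    for i := range p.queues {
        q := make(chan work, 256) // buffered so the PUB/SUB reader rarely blocks
        p.queues[i] = q
        go func() {
            for w := range q {
                handle(w)
            }
        }()
    }
    return p
}

// dispatch routes a message to the worker that owns its channel: the same
// channel always hashes to the same worker, preserving per-channel order.
func (p *workerPool) dispatch(w work) {
    h := fnv.New32a()
    h.Write([]byte(w.channel))
    p.queues[int(h.Sum32())%len(p.queues)] <- w
}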

@FZambia changed the title from "Experimental engine refactoring" to "Engine refactoring" on May 15, 2016
@FZambia (Member, Author) commented May 19, 2016

So after all the changes, some interesting timings. All are measured without connected clients, just to show how the unnecessary JSON encoding of every message affected performance.

Broadcast one message into 100000 channels:

Memory engine:

master branch:

[I]: 2016/05/19 12:58:36 POST /api/ from [::1]:50040 completed in 744.333891ms

gogoprotobuf branch:

[I]: 2016/05/19 12:57:53 POST /api/ from [::1]:50023 completed in 349.8216ms

Redis engine:

master branch:

[I]: 2016/05/19 12:58:48 POST /api/ from [::1]:50047 completed in 1.146819458s

gogoprotobuf:

[I]: 2016/05/19 12:58:11 POST /api/ from [::1]:50033 completed in 801.090882ms

Publish 50000 messages in one request into different channels:

Memory engine:

master branch:

[I]: 2016/05/19 13:09:05 POST /api/ from [::1]:50129 completed in 916.376917ms

gogoprotobuf branch:

[I]: 2016/05/19 13:06:18 POST /api/ from [::1]:50102 completed in 668.998574ms

Redis engine:

master branch:

[I]: 2016/05/19 13:08:30 POST /api/ from [::1]:50126 completed in 10.846997003s

gogoprotobuf:

[I]: 2016/05/19 13:07:27 POST /api/ from [::1]:50113 completed in 10.56859332s

The numbers when publishing 50k into Redis are huge because we don't utilize batching of commands in one request (as I wrote here), so we wait for 50k round trips. In practice I don't think anyone will send such requests - all publishes will arrive as separate API requests.
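
For reference, batching here would mean pipelining the PUBLISH commands so the whole batch pays one round trip. A sketch with redigo (github.com/garyburd/redigo/redis); publishPipelined and the pub type are my own illustrative names, not the PR's code:

import "github.com/garyburd/redigo/redis"

type pub struct {
    channel string
    payload []byte
}

// publishPipelined buffers all PUBLISH commands locally, flushes them in one
// write, then drains the replies: one round trip instead of one per message.
func publishPipelined(conn redis.Conn, messages []pub) error {
    for _, m := range messages {
        if err := conn.Send("PUBLISH", m.channel, m.payload); err != nil {
            return err
        }
    }
    if err := conn.Flush(); err != nil {
        return err
    }
    for range messages {
        if _, err := conn.Receive(); err != nil {
            return err
        }
    }
    return nil
}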

Also, the most recent benchmark of PUB/SUB message receive shows this on a Mac Pro 2012:

BenchmarkPubSubMessageReceive-2  1000000          2426 ns/op         232 B/op          6 allocs/op

@FZambia merged commit f4e3048 into master on Jun 9, 2016
@banks (Member) commented Jun 25, 2016

Nice work!

@FZambia deleted the gogoprotobuf branch on July 2, 2016