# Add distributed lookup table design #9075

## Design Doc: Distributed Lookup Table Operator

A lookup table operator in PaddlePaddle where the table could be out of the memory of a computer.

## Background

A lookup table operator is widely used in deep learning for learning the representation, or the [*embedding*](http://www.cs.toronto.edu/~fritz/absps/ieee-lre.pdf), of symbols.

### The Forward Algorithm

The forward algorithm of the lookup table is a multiplication of the input vector x and the lookup table matrix W:

$$y = x * W$$

When x is a sparse vector of symbols, the above multiplication simplifies into looking up the rows in W that correspond to the symbols in x, denoted by W(x). Please be aware that W could be huge and out of memory, so we would need a distributed storage service which supports the lookup of rows.

The following figure illustrates the multiplication of x, which has two non-zero elements, or say, two symbols, and a lookup table W:

![lookup table](./lookup_table.png)
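
For illustration, here is a minimal sketch of the forward pass in Python/numpy with made-up sizes; when x is sparse, the dense product y = x * W reduces to gathering the rows of W indexed by the symbols in x:

```python
import numpy as np

vocab_size, embed_dim = 10000, 8
W = np.random.rand(vocab_size, embed_dim).astype("float32")

ids = np.array([3, 42])   # the two symbols (non-zero elements) in x
y = W[ids]                # the lookup W(x), shape (2, embed_dim)

# Equivalent dense computation, feasible only when W fits in memory.
x = np.zeros((2, vocab_size), dtype="float32")
x[np.arange(2), ids] = 1.0       # one-hot rows for the two symbols
assert np.allclose(x @ W, y)
```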

### The Backward Algorithm

The backward algorithm computes W'(x) using W(x). W'(x) has the same size as W(x) and is much smaller than W.

To optimize W given W', we can do a simple SGD update:

$$W = f(W') = \lambda * W'$$

or use some more sophisticated algorithm that relies on both W' and W:

$$W = f(W, W')$$

The following figure illustrates the backward pass of the lookup operator: ![lookup table training](./lookup_table_training.png)
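
As an illustration, the following sketch (made-up sizes, with $f$ taken to be plain SGD) updates only the looked-up rows, so the cost scales with W'(x) rather than with the full table W:

```python
import numpy as np

vocab_size, embed_dim = 10000, 8
W = np.random.rand(vocab_size, embed_dim).astype("float32")

ids = np.array([3, 42])                 # symbols from the forward pass
grad_rows = np.random.rand(2, embed_dim).astype("float32")  # W'(x)

lam = 0.01  # the learning rate lambda
# np.subtract.at applies the update unbuffered, so rows are handled
# correctly even if ids contains duplicate symbols.
np.subtract.at(W, ids, lam * grad_rows)
```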

## Distributed Storage Service

The forward algorithm requires a distributed storage service for W. The backward algorithm prefers that the storage system can apply the optimization algorithm to W. The following two sections describe two solutions: the former doesn't require that the storage service can do optimization, while the latter does.

### Storage Service Doesn't Optimize

In this design, we use a highly-optimized distributed storage service, e.g., memcached, as the storage service, and we run the optimization algorithm on the parameter servers of PaddlePaddle. The following figure illustrates the training process.

<!--
Note: please update the following URL when updating this digraph.
<img src='https://g.gravizo.com/svg?
digraph G {
  rankdir="LR";
  subgraph cluster1 {
    P1 [label="pserver 1"];
    P2 [label="pserver 2"];
    T1 [label="trainer 1"];
    T2 [label="trainer 2"];
    T3 [label="trainer 3"];
  }
  KV [label="memcached"];
  T1 -> P1;
  T1 -> P2;
  T2 -> P1;
  T2 -> P2;
  T3 -> P1;
  T3 -> P2;
  P1 -> KV [color=gray, weight=0.1];
  KV -> P1 [color=gray, weight=0.1];
  P2 -> KV [color=gray, weight=0.1];
  KV -> P2 [color=gray, weight=0.1];
  KV -> T1 [color=gray, weight=0.1];
  KV -> T2 [color=gray, weight=0.1];
  KV -> T3 [color=gray, weight=0.1];
}
'/>
-->

<img src='https://g.gravizo.com/svg?%20digraph%20G%20{%20rankdir=%22LR%22;%20subgraph%20cluster1%20{%20P1%20[label=%22pserver%201%22];%20P2%20[label=%22pserver%202%22];%20T1%20[label=%22trainer%201%22];%20T2%20[label=%22trainer%202%22];%20T3%20[label=%22trainer%203%22];%20}%20KV%20[label=%22memcached%22];%20T1%20-%3E%20P1;%20T1%20-%3E%20P2;%20T2%20-%3E%20P1;%20T2%20-%3E%20P2;%20T3%20-%3E%20P1;%20T3%20-%3E%20P2;%20P1%20-%3E%20KV%20[color=gray,%20weight=0.1];%20KV%20-%3E%20P1%20[color=gray,%20weight=0.1];%20P2%20-%3E%20KV%20[color=gray,%20weight=0.1];%20KV%20-%3E%20P2%20[color=gray,%20weight=0.1];%20KV%20-%3E%20T1%20[color=gray,%20weight=0.1];%20KV%20-%3E%20T2%20[color=gray,%20weight=0.1];%20KV%20-%3E%20T3%20[color=gray,%20weight=0.1];%20}'/>

Each trainer runs the forward and backward passes using its local data:

1. In the forward pass, when a trainer runs the forward algorithm of a lookup operator, it retrieves W(x) from the storage service.
1. The trainer computes W'(x) in the backward pass using W(x).

During the global update process:

1. Each trainer uploads its W'(x) to the parameter servers.
1. The parameter server runs the optimization algorithm, e.g., the Adam optimization algorithm, which requires that
   1. the parameter server retrieves W(x) from memcached, and
   1. the parameter server pushes $\Delta W(x)=f(W(x), \lambda \sum_j W'(x))$ to memcached, where $f$ denotes the optimization algorithm; a sketch of this step follows the list.
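
Here is a minimal sketch of the update step above, with hypothetical names: a plain Python dict stands in for memcached, keys are row ids of W, values are embedding rows, and plain SGD stands in for the optimizer $f$:

```python
import numpy as np

kv = {}  # stands in for memcached: row id -> embedding row

def pserver_update(row_id, summed_grad, lam=0.01):
    """Apply one row of Delta W(x) = f(W(x), lambda * sum_j W'(x))."""
    row = kv.get(row_id, np.zeros_like(summed_grad))  # retrieve W(x)
    kv[row_id] = row - lam * summed_grad              # push the result back

# Example: the gradient for row 42, already summed over trainers.
pserver_update(42, np.ones(8, dtype="float32"))
```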

### Storage Service Does Optimize

This design is very similar to the one above, except that the optimization algorithm $f$ runs on the storage service.

- Pro: parameter servers do not retrieve W(x) from the storage service, which saves half of the network communication.
- Con: the storage service needs to be able to run the optimization algorithm.

> Review comment: Does this mean we actually have two kinds of servers when doing training?
>
> Reply: Yes, if we go this way. But let us go the other way first.
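
A hypothetical interface sketch of this design: trainers look up rows for the forward pass, but in the backward pass they send only gradients, and the optimizer (plain SGD stands in for $f$) runs inside the storage service, so parameter rows never travel back over the network:

```python
import numpy as np

class OptimizingKVStore:
    """Stands in for a storage service that can run f itself."""

    def __init__(self, embed_dim, lam=0.01):
        self.rows = {}          # row id -> embedding row
        self.embed_dim = embed_dim
        self.lam = lam          # the learning rate lambda

    def lookup(self, row_id):
        # Forward pass: return W(x), initializing unseen rows.
        return self.rows.setdefault(
            row_id, np.zeros(self.embed_dim, dtype="float32"))

    def push_gradient(self, row_id, grad):
        # Backward pass: apply f where the row lives; no row is
        # shipped back to a parameter server.
        self.rows[row_id] = self.lookup(row_id) - self.lam * grad

store = OptimizingKVStore(embed_dim=8)
row = store.lookup(42)                                # forward pass
store.push_gradient(42, np.ones(8, dtype="float32"))  # backward pass
```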

## Conclusion

Let us do the "storage service does not optimize" solution first, as a baseline at least, because it is easier to use a well-optimized distributed storage service like memcached. We can do the "storage service does optimize" solution later or at the same time; if implemented carefully, it should have better performance than the former.

---

Review discussion:

> Sorry, I don't understand why we need to use memcached, given that our current parameter server implementation should be faster in key lookup. memcached is a general-purpose in-memory key-value store, while we only need a special-case in-memory key-value store (few keys, large values). Currently our parameter server (the recv operator in Fluid) does exactly this, and it does optimization as well. In terms of key lookup speed, our parameter server should be faster than, or at least as fast as, memcached. Why do we have to make another implementation with memcached? Is it because of performance reasons?

> @helinwang I am not so clear about the current table lookup implementation. For a large-scale lookup table, the keys may be discrete over a very big range, so we need a key-value lookup module.

> @wangkuiyi After discussing with @ldmod LiDong, we think that a standalone distributed KV service like memcached may not be a good solution. It is better that a parameter and its corresponding optimization happen at the same place. For large-scale model training we need asynchronous updates, and in this condition we need to make sure every optimizer updates the latest parameter, even when the gradient was calculated from an old parameter. If we use a standalone KV service, we need to add a lock to a parameter while some optimizer is updating it. If the parameter and the optimization op are at the same place, the solution is very simple, for example, using one thread to do the optimization.
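
A minimal sketch (hypothetical names, plain SGD) of the contrast drawn in the comment above: with a standalone KV service, concurrent optimizers must lock each row to make the asynchronous read-modify-write atomic, while co-locating the parameter with the optimizer lets a single thread serialize updates without locks:

```python
import queue
import threading
import numpy as np

# Standalone KV service: every asynchronous update is a
# read-modify-write, so each row needs a lock.
kv, kv_locks = {}, {}

def update_remote(row_id, grad, lam=0.01):
    lock = kv_locks.setdefault(row_id, threading.Lock())
    with lock:
        row = kv.get(row_id, np.zeros_like(grad))
        kv[row_id] = row - lam * grad

# Co-located parameter and optimizer: one optimizer thread drains a
# gradient queue, so updates are serialized and no locks are needed.
params, grads = {}, queue.Queue()

def optimizer_loop(lam=0.01):
    while True:
        row_id, grad = grads.get()
        row = params.get(row_id, np.zeros_like(grad))
        params[row_id] = row - lam * grad

threading.Thread(target=optimizer_loop, daemon=True).start()
grads.put((42, np.ones(8, dtype="float32")))
```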
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
But still, we can have two option, one is using the current optimization operators, but the parameter will be maintained by the pserver. The other one is using a distributed Storage Service which can do Optimization.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Discussed with @Yancey1989 yesterday that we agree with this idea, mixing two servers (original pserver and embedding table pserver) is a perfect way to embed an external parameter server:
we can carry on this two methods at the same time, then both opensource users and industry users can have their choice.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
To using distributed Storage Service, PServer would also communicate with the storage, this would make the double traffic than using
embedding table pserver
. And I agree with @typhoonzero , we can carry on this two methods at the same time.There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Let us have a baseline solution as early as possible. @jacquesqiao @ldmod.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@wangkuiyi I agree. We can implement a baseline solution without Memcached.