How to implement the non-static CNN in (Kim, 2014) using Keras #1515

Closed
Imorton-zd opened this issue Jan 21, 2016 · 13 comments

@Imorton-zd

In the paper (Kim, 2014), non-static and static CNNs were proposed. Has anyone implemented these methods? I would appreciate it very much if you could share Keras code for them.

@Imorton-zd
Author

@ymcui

@ymcui

ymcui commented Apr 9, 2016

Hmm, doesn't a Convolution1D layer work for you?
I have read that paper, and I think there is no problem implementing it with Keras.

@chenzl3000

Hello @Imorton-zd, I implemented the (Kim, 2014) model recently. Here is my code.

# Keras 0.3.x Graph API (this is not compatible with later Keras versions)
from keras.models import Graph
from keras.layers.core import Dense, Dropout, Reshape
from keras.layers.embeddings import Embedding
from keras.layers.convolutional import Convolution1D, MaxPooling1D
from keras.constraints import MaxNorm

graph = Graph()
graph.add_input(name='input', input_shape=(config.sent_len,), dtype='int')

# Embedding layer initialized with pre-trained vectors; it stays trainable,
# so the vectors are fine-tuned by back-propagation (the "non-static" channel)
graph.add_node(
    Embedding(config.vocab_size, config.vec_dim, input_length=config.sent_len,
              weights=[self._load_embedding()]),
    name='nonstatic_emb', input='input'
)

# One convolution + max-over-time pooling branch per filter width
conv_layer_outputs = []
for idx, window_size in enumerate(config.conv_filter_hs):
    conv_name = 'conv_nonstatic_%d' % idx
    pool_name = 'pool_nonstatic_%d' % idx
    graph.add_node(
        Convolution1D(config.conv_features, window_size, activation='relu',
                      W_constraint=MaxNorm(3), b_constraint=MaxNorm(3)),
        name=conv_name, input='nonstatic_emb'
    )
    graph.add_node(MaxPooling1D(pool_length=config.sent_len - window_size + 1),
                   name=pool_name, input=conv_name)
    conv_layer_outputs.append(pool_name)

# MLP: concatenate the pooled features from all branches and flatten them
print(conv_layer_outputs)
graph.add_node(
    Reshape((config.conv_features * len(config.conv_filter_hs),)),
    inputs=conv_layer_outputs,
    name='reshape')
for idx, mlp_h_dim in enumerate(config.mlp_hidden_units):
    print(mlp_h_dim)
    graph.add_node(
        Dense(
            mlp_h_dim,
            activation='linear' if idx != len(config.mlp_hidden_units) - 1 else 'softmax',
            W_constraint=MaxNorm(3),
            b_constraint=MaxNorm(3)
        ),
        name='fc_%d' % idx,
        input='reshape' if idx == 0 else 'fc_%d_dropout' % (idx - 1)
    )
    # Dropout between hidden layers, but not after the softmax output layer
    if idx != len(config.mlp_hidden_units) - 1:
        graph.add_node(
            Dropout(config.dropout_rate),
            name='fc_%d_dropout' % idx,
            input='fc_%d' % idx
        )
graph.add_output(name='output', input='fc_%d' % (len(config.mlp_hidden_units) - 1))
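
A minimal sketch of how this graph might then be compiled and trained (Keras 0.3.x Graph API assumed; train_X, train_Y and the batch size are placeholders, not part of the code above):

# Hypothetical usage sketch for the graph above (Keras 0.3.x Graph API assumed)
graph.compile(optimizer='adadelta', loss={'output': 'categorical_crossentropy'})

# Data dicts are keyed by the names used in add_input/add_output
graph.fit(
    {'input': train_X, 'output': train_Y},   # train_X: padded index sequences, train_Y: one-hot labels
    batch_size=50,
    nb_epoch=config.n_epochs
)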

Hope it helps.

@Imorton-zd
Author

@chen070757 Thanks for your reply and for sharing your code! But I have some questions about the code. First, what are the parameter settings in config? Second, what is MaxNorm? In addition, I actually want to do semi-supervised learning, so how can I modify the embedding layer to achieve the non-static idea? Any opinions would be appreciated!

@Imorton-zd
Author

@ymcui Thanks for your reply! As I understand it, the first input of the non-static CNN is the word embeddings pre-trained with word2vec, and the embeddings then keep being updated via back-propagation. However, in Keras, neither using an embedding layer (which, I thought, cannot do semi-supervised learning) nor feeding the pre-trained embeddings into the network directly as static input achieves the non-static idea.

@chenzl3000

@Imorton-zd

What are the parameter settings in config?

config is a self-defined class that holds the constants of my network.

What is MaxNorm?

MaxNorm is the constraint mentioned in the (Kim, 2014) paper as the l2 constraint: it rescales the weights so that their l2 norm does not exceed a given constant (e.g. 3).
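
For intuition, a small sketch of the rescaling idea in plain numpy (Keras applies the constraint per weight vector; this only shows the principle):

import numpy as np

def max_norm(w, c=3.0):
    # If the l2 norm of w exceeds c, scale w down so its norm equals c
    norm = np.sqrt(np.sum(w ** 2))
    if norm > c:
        w = w * (c / norm)
    return w

w = np.array([2.0, 2.0, 2.0])    # l2 norm ~= 3.46
print(max_norm(w, c=3.0))        # rescaled so the l2 norm is exactly 3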

How can I modify the embedding layer to achieve the non-static idea?

weights=[self._load_embedding()] means that I use pre-trained word embeddings to initialize the embedding layer. The embedding layer is still trainable, so it is updated by back-propagation during training. So it is actually a non-static CNN model here.
PS: if you set trainable=False on the embedding layer, it becomes a static CNN model.
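
In code, the only difference between the two variants is whether the embedding layer keeps learning. A sketch (emb_matrix stands for your pre-trained word2vec matrix):

# Non-static: initialize with pre-trained vectors and keep fine-tuning them
# (trainable=True is the default for layers)
Embedding(config.vocab_size, config.vec_dim, input_length=config.sent_len,
          weights=[emb_matrix])

# Static: initialize with the same vectors but freeze them during training
Embedding(config.vocab_size, config.vec_dim, input_length=config.sent_len,
          weights=[emb_matrix], trainable=False)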

@Imorton-zd
Author

@chen070757 Thank you very much. I still have some questions; I hope I'm not disturbing you.

  1. Can the weights be assigned to the word embeddings? In my opinion, according to W^T x + b, shouldn't the word embeddings be assigned to x, i.e. the input of the first layer?
  2. In your posted code, I don't find trainable=True.
    If permitted, would you send your full implementation, including data reading, the config definition, the MaxNorm definition and so on, to my email: dzhangsuda@qq.com? Thanks once again!

@chenzl3000

@Imorton-zd

The Keras API changed recently. My code is based on Keras 0.3.2, and it is not compatible with the latest Keras.

Can the weights be assigned to the word embeddings? In my opinion, according to W^T x + b, shouldn't the word embeddings be assigned to x, i.e. the input of the first layer?

The Embedding layer contains an embedding matrix. When it receives an input sequence like [3, 7], it looks up the embedding vectors and outputs something like [emb[3], emb[7]]. The embedding matrix can be initialized randomly (the default) or with a given matrix.
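
A toy illustration of the lookup in plain numpy (hypothetical 4-word vocabulary with 2-dimensional vectors):

import numpy as np

# Rows of the embedding matrix correspond to word indices
emb = np.array([[0.0, 0.0],   # index 0 (e.g. padding)
                [0.1, 0.2],   # index 1
                [0.3, 0.4],   # index 2
                [0.5, 0.6]])  # index 3

seq = [3, 2]                  # an input sequence of word indices
print(emb[seq])               # [[0.5, 0.6], [0.3, 0.4]], i.e. [emb[3], emb[2]]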

For further information about Embedding, you should read the documentation: http://keras.io/layers/embeddings/

In your posted code, I don't find trainable=True.

The trainable argument is defined in the Layer class, which is the base class of all layer classes. You can find it in https://github.com/fchollet/keras/blob/0.3.2/keras/layers/core.py

MaxNorm definition

MaxNorm is also defined by Keras. You can find the definition in https://github.com/fchollet/keras/blob/0.3.2/keras/constraints.py

@Imorton-zd
Author

@chen070757 Many thanks! Two last questions:
1. In the training process train(self, train_X, train_Y, valid_X, valid_Y), are the validation data valid_X, valid_Y necessary? In my experiments, the training loss still decreases and the training accuracy is still visible without validation data (I use model.fit()), so I think validation data are not necessary, and the valid_X, valid_Y positions could be given the test data instead. Am I wrong?
2. After 25 epochs, are you sure that the model achieves its best performance?

@chenzl3000

@Imorton-zd

In the training process train(self, train_X, train_Y, valid_X, valid_Y), are the validation data valid_X, valid_Y necessary? In my experiments, the training loss still decreases and the training accuracy is still visible without validation data (I use model.fit()), so I think validation data are not necessary, and the valid_X, valid_Y positions could be given the test data instead. Am I wrong?

Validation data is used to help estimate whether the model is underfitting or overfitting. It has no effect on the training process itself.

After 25 epochs, are you sure that the model achieves its best performance?

config.n_epochs is just another constant; you can change it as you need. In the experiments I have done, the model converges very quickly if you use adadelta as your optimizer.
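
As a sketch (assuming the Keras 0.3.x Graph API and callbacks; valid_X, valid_Y are the validation arrays), you can let early stopping decide when to stop instead of fixing the epoch count in advance:

from keras.callbacks import EarlyStopping

# Stop when the validation loss has not improved for a few epochs,
# rather than relying on a fixed number like 25
early_stop = EarlyStopping(monitor='val_loss', patience=2)
graph.fit(
    {'input': train_X, 'output': train_Y},
    batch_size=50,
    nb_epoch=config.n_epochs,
    validation_data={'input': valid_X, 'output': valid_Y},
    callbacks=[early_stop]
)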

@ipod825
Contributor

ipod825 commented May 8, 2016

@chen070757 Did you get (almost) the same performance on any dataset compared to the original implementation or the Torch implementation?

I've tried to implement Kim's CNN (see #1994, though it used the old Graph API) but failed to reach 81% accuracy on the MR (movie review) dataset; I got about 79%~80%. This has bothered me for a long time. I don't know whether the gap comes from some minor difference in the model architecture or from details of the experimental setup.

It would be very helpful if you could provide your performance numbers.

@ndrmahmoudi

@chen070757 Thanks a lot for your great comments. I have one more question: is there any way to get back the back-propagated word vectors? Or, as a more general question, is it possible to just back-propagate into the word embeddings using a labelled dataset? I need to observe changes in the word vectors and their similarities.

Regards,
Nader
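
A hedged sketch of one way to read the fine-tuned vectors back out after training, assuming the Graph model and the 'nonstatic_emb' node name from the code above:

# Hypothetical: after training, pull the updated embedding matrix out of the graph.
# 'nonstatic_emb' is the node name used when the graph was built.
emb_layer = graph.nodes['nonstatic_emb']
learned_vectors = emb_layer.get_weights()[0]   # shape: (vocab_size, vec_dim)

# Row i now holds the fine-tuned vector for word index i, so it can be compared
# with the original word2vec vector or used to compute similarities between words.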

@stale stale bot added the stale label Aug 16, 2017
@stale

stale bot commented Aug 16, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

@stale stale bot closed this as completed Sep 15, 2017