If we set `self.dropout=True`, the probability of an element being zeroed is one, so every element of the output tensor becomes zero. I'm confused about this:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

from layers import GraphAttentionLayer  # defined in layers.py of this repo


class GAT(nn.Module):
    def __init__(self, nfeat, nhid, nclass, dropout, alpha, nheads):
        """Dense version of GAT."""
        super(GAT, self).__init__()
        self.dropout = dropout

        self.attentions = [GraphAttentionLayer(nfeat, nhid, dropout=dropout,
                                               alpha=alpha, concat=True)
                           for _ in range(nheads)]
        for i, attention in enumerate(self.attentions):
            self.add_module('attention_{}'.format(i), attention)

        self.out_att = GraphAttentionLayer(nhid * nheads, nclass,
                                           dropout=dropout, alpha=alpha,
                                           concat=False)

    def forward(self, x, adj):
        x = F.dropout(x, self.dropout, training=self.training)
        x = torch.cat([att(x, adj) for att in self.attentions], dim=1)
        x = F.dropout(x, self.dropout, training=self.training)  # if self.dropout is True, p == 1 and x becomes all zeros
        x = F.elu(self.out_att(x, adj))
        return F.log_softmax(x, dim=1)
```
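As a sanity check, here is a minimal sketch (plain PyTorch, nothing assumed beyond `torch` being installed) showing why passing `True` zeroes everything: Python's `True` compares equal to `1`, so `F.dropout` treats it as `p = 1.0` and drops every element, while a proper float like `0.6` only zeroes about 60% of them:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 4)

# True is interpreted as p = 1.0: every element is dropped.
out_bool = F.dropout(x, True, training=True)
print(out_bool)    # all zeros

# A float probability behaves as intended: ~60% of elements are zeroed,
# and the survivors are scaled by 1 / (1 - 0.6) to keep the expected value.
out_float = F.dropout(x, 0.6, training=True)
print(out_float)
```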
`dropout` is a float probability, not a boolean flag. The training script defines it as

```python
parser.add_argument('--dropout', type=float, default=0.6)
```

so pass a value such as `0.6` rather than `True`.
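For completeness, a sketch of the intended instantiation (the sizes below are placeholder values, not values taken from this issue):

```python
# Hypothetical example: nfeat, nhid, nclass, alpha, nheads are placeholders.
model = GAT(nfeat=1433, nhid=8, nclass=7,
            dropout=0.6,  # float probability in [0, 1), not a boolean
            alpha=0.2, nheads=8)
```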