GNNLux docs start and general docs improvement (#513)
aurorarossi authored Nov 8, 2024
1 parent b1d7936 commit 6ad8d81
Showing 14 changed files with 251 additions and 31 deletions.
2 changes: 1 addition & 1 deletion GNNGraphs/docs/src/api/temporalgraph.md
@@ -10,7 +10,7 @@ Pages = ["temporalsnapshotsgnngraph.jl"]
Private = false
```

-### TemporalSnapshotsGNNGraph random generators
+## TemporalSnapshotsGNNGraph random generators

```@docs
rand_temporal_radius_graph
16 changes: 14 additions & 2 deletions GNNGraphs/docs/src/index.md
@@ -1,6 +1,6 @@
# GNNGraphs.jl

-GNNGraphs.jl is a package that provides graph data structures and helper functions specifically designed for working with graph neural networks. This package allows to store not only the graph structure, but also features associated with nodes, edges, and the graph itself. It is the core foundation for the GNNlib, GraphNeuralNetworks, and GNNLux packages.
+GNNGraphs.jl is a package that provides graph data structures and helper functions specifically designed for working with graph neural networks. This package allows to store not only the graph structure, but also features associated with nodes, edges, and the graph itself. It is the core foundation for the GNNlib.jl, GraphNeuralNetworks.jl, and GNNLux.jl packages.

It supports three types of graphs:

@@ -12,4 +12,16 @@ It supports three types of graphs:



This package depends on the package [Graphs.jl](https://github.com/JuliaGraphs/Graphs.jl).



## Installation

The package can be installed with the Julia package manager.
From the Julia REPL, type `]` to enter the Pkg REPL mode and run:

```julia
pkg> add GNNGraphs
```
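
Once installed, the core type can be exercised directly. A minimal sketch (editor's illustration, reusing the `GNNGraph` constructor and the `ndata`/`gdata` keywords that appear elsewhere in this diff):

```julia
using GNNGraphs

# Build a small directed graph from source/target vectors,
# attaching node features and a graph-level target.
s = [1, 1, 2, 3]
t = [2, 3, 1, 1]
g = GNNGraph(s, t,
             ndata = (; x = randn(Float32, 16, 3)),  # node features, 16 × num_nodes
             gdata = (; y = randn(Float32)))         # graph-level scalar

g.num_nodes  # 3
g.num_edges  # 4
```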

1 change: 1 addition & 0 deletions GNNLux/docs/Project.toml
@@ -1,5 +1,6 @@
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+DocumenterInterLinks = "d12716ef-a0f6-4df4-a9f1-a5a34e75c656"
GNNLux = "e8545f4d-a905-48ac-a8c4-ca114b98986d"
GNNlib = "a6a84749-d869-43f8-aacc-be26a1996e48"
LiveServer = "16fef848-5104-11e9-1b77-fb7a48bbb589"
7 changes: 6 additions & 1 deletion GNNLux/docs/make.jl
@@ -1,4 +1,5 @@
using Documenter
+using DocumenterInterLinks
using GNNlib
using GNNLux

@@ -8,11 +9,15 @@ assets=[]
prettyurls = get(ENV, "CI", nothing) == "true"
mathengine = MathJax3()


+interlinks = InterLinks(
+    "GNNGraphs" => ("https://carlolucibello.github.io/GraphNeuralNetworks.jl/GNNGraphs/", joinpath(dirname(dirname(@__DIR__)), "GNNGraphs", "docs", "build", "objects.inv")),
+    "GNNlib" => ("https://carlolucibello.github.io/GraphNeuralNetworks.jl/GNNlib/", joinpath(dirname(dirname(@__DIR__)), "GNNlib", "docs", "build", "objects.inv")))

makedocs(;
    modules = [GNNLux],
    doctest = false,
    clean = true,
+   plugins = [interlinks],
    format = Documenter.HTML(; mathengine, prettyurls, assets = assets, size_threshold=nothing),
    sitename = "GNNLux.jl",
    pages = ["Home" => "index.md",
3 changes: 2 additions & 1 deletion GNNLux/docs/src/api/basic.md
@@ -4,5 +4,6 @@ CurrentModule = GNNLux

## GNNLayer
```@docs
-GNNLux.GNNLayer
+GNNLayer
+GNNChain
```
27 changes: 27 additions & 0 deletions GNNLux/docs/src/api/conv.md
@@ -0,0 +1,27 @@
```@meta
CurrentModule = GNNLux
```

# Convolutional Layers

Many different types of graph convolutional layers have been proposed in the literature. Choosing the right layer for your application could involve a lot of exploration.
Multiple graph convolutional layers are typically stacked together to create a graph neural network model (see [`GNNChain`](@ref)).

The table below lists all graph convolutional layers implemented in *GNNLux.jl*. It also highlights the presence of some additional capabilities with respect to basic message passing:
- *Sparse Ops*: implements message passing as multiplication by sparse adjacency matrix instead of the gather/scatter mechanism. This can lead to better CPU performance but it is not supported on GPU yet.
- *Edge Weight*: supports scalar weights (or equivalently scalar features) on edges.
- *Edge Features*: supports feature vectors on edges.
- *Heterograph*: supports heterogeneous graphs (see [`GNNHeteroGraph`](@ref)).
- *TemporalSnapshotsGNNGraphs*: supports temporal graphs (see [`TemporalSnapshotsGNNGraph`](@ref)) by applying the convolution layers to each snapshot independently.

| Layer | Sparse Ops | Edge Weight | Edge Features | Heterograph | TemporalSnapshotsGNNGraphs |
| :-------- | :---: | :---: | :---: | :---: | :---: |
| [`GCNConv`](@ref) | ✓ | ✓ |  | ✓ |  |

## Docs

```@autodocs
Modules = [GNNLux]
Pages = ["layers/conv.jl"]
Private = false
```
42 changes: 40 additions & 2 deletions GNNLux/src/layers/basic.jl
@@ -2,14 +2,52 @@
abstract type GNNLayer <: AbstractLuxLayer end
An abstract type from which graph neural network layers are derived.
-It is Derived from Lux's `AbstractLuxLayer` type.
+It is derived from Lux's `AbstractLuxLayer` type.
-See also `GNNChain`.
+See also [`GNNLux.GNNChain`](@ref).
"""
abstract type GNNLayer <: AbstractLuxLayer end

abstract type GNNContainerLayer{T} <: AbstractLuxContainerLayer{T} end
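
To make the abstraction concrete: a custom layer subtypes `GNNLayer` and follows the usual LuxCore contract. The sketch below is an editor's illustration (the `IdentityGNN` name and its trivial forward are hypothetical, not part of this commit):

```julia
using GNNLux, LuxCore, Random

# A do-nothing layer: receives the graph plus node features and
# returns the features unchanged, along with its (empty) state.
struct IdentityGNN <: GNNLux.GNNLayer end

LuxCore.initialparameters(::Random.AbstractRNG, ::IdentityGNN) = NamedTuple()
LuxCore.initialstates(::Random.AbstractRNG, ::IdentityGNN) = NamedTuple()

# GNNLayer forward convention: (graph, features, params, state) -> (output, state)
(l::IdentityGNN)(g, x, ps, st) = (x, st)
```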

"""
GNNChain(layers...)
GNNChain(name = layer, ...)
Collects multiple layers / functions to be called in sequence
on given input graph and input node features.
It allows to compose layers in a sequential fashion as `Lux.Chain`
does, propagating the output of each layer to the next one.
In addition, `GNNChain` handles the input graph as well, providing it
as a first argument only to layers subtyping the [`GNNLayer`](@ref) abstract type.
`GNNChain` supports indexing and slicing, `m[2]` or `m[1:end-1]`,
and if names are given, `m[:name] == m[1]` etc.
# Examples
```jldoctest
julia> using Lux, GNNLux, Random
julia> rng = Random.default_rng();
julia> m = GNNChain(GCNConv(2=>5),
x -> relu.(x),
Dense(5=>4))
julia> x = randn(rng, Float32, 2, 3);
julia> g = rand_graph(rng, 3, 6)
GNNGraph:
num_nodes: 3
num_edges: 6
julia> ps, st = LuxCore.setup(rng, m);
julia> m(g, x, ps, st) # First entry is the output, second entry is the state of the model
(Float32[-0.15594329 -0.15594329 -0.15594329; 0.93431795 0.93431795 0.93431795; 0.27568763 0.27568763 0.27568763; 0.12568939 0.12568939 0.12568939], (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))
```
"""
@concrete struct GNNChain <: GNNContainerLayer{(:layers,)}
layers <: NamedTuple
end
79 changes: 76 additions & 3 deletions GNNLux/src/layers/conv.jl
@@ -5,7 +5,80 @@ _getstate(s::StatefulLuxLayer{Static.True}) = s.st
_getstate(s::StatefulLuxLayer{false}) = s.st_any
_getstate(s::StatefulLuxLayer{Static.False}) = s.st_any


@doc raw"""
    GCNConv(in => out, σ=identity; [init_weight, init_bias, use_bias, add_self_loops, use_edge_weight])

Graph convolutional layer from paper [Semi-supervised Classification with Graph Convolutional Networks](https://arxiv.org/abs/1609.02907).

Performs the operation
```math
\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j
```
where ``a_{ij} = 1 / \sqrt{|N(i)||N(j)|}`` is a normalization factor computed from the node degrees.

If the input graph has weighted edges and `use_edge_weight=true`, then ``a_{ij}`` will be computed as
```math
a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}
```

# Arguments

- `in`: Number of input features.
- `out`: Number of output features.
- `σ`: Activation function. Default `identity`.
- `init_weight`: Weights' initializer. Default `glorot_uniform`.
- `init_bias`: Bias initializer. Default `zeros32`.
- `use_bias`: Add learnable bias. Default `true`.
- `add_self_loops`: Add self loops to the graph before performing the convolution. Default `false`.
- `use_edge_weight`: If `true`, consider the edge weights in the input graph (if available).
  If `add_self_loops=true` the new weights will be set to 1.
  This option is ignored if the `edge_weight` is explicitly provided in the forward pass.
  Default `false`.

# Forward

    (::GCNConv)(g, x, [edge_weight], ps, st; norm_fn = d -> 1 ./ sqrt.(d), conv_weight=nothing)

Takes as input a graph `g`, a node feature matrix `x` of size `[in, num_nodes]`, optionally an edge weight vector, and the parameters and state of the layer. Returns a node feature matrix of size `[out, num_nodes]`.

The `norm_fn` parameter allows for custom normalization of the graph convolution operation by passing a function as argument.
By default, it computes ``\frac{1}{\sqrt{d}}``, i.e. the inverse square root of the degree (`d`) of each node in the graph.
If `conv_weight` is an `AbstractMatrix` of size `[out, in]`, then the convolution is performed using that weight matrix.

# Examples

```julia
using GNNLux, Lux, Random

# initialize random number generator
rng = Random.default_rng()

# create data
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s, t)
x = randn(rng, Float32, 3, g.num_nodes)

# create layer
l = GCNConv(3 => 5)

# setup layer
ps, st = LuxCore.setup(rng, l)

# forward pass
y = l(g, x, ps, st)  # size of the output first entry: 5 × num_nodes

# convolution with edge weights and custom normalization function
w = [1.1, 0.1, 2.3, 0.5]
custom_norm_fn(d) = 1 ./ sqrt.(d .+ 1) # custom normalization function
y = l(g, x, w, ps, st; norm_fn = custom_norm_fn)

# Edge weights can also be embedded in the graph.
g = GNNGraph(s, t, w)
l = GCNConv(3 => 5, use_edge_weight=true)
ps, st = Lux.setup(rng, l)
y = l(g, x, ps, st) # same as l(g, x, w, ps, st)
```
"""
@concrete struct GCNConv <: GNNLayer
in_dims::Int
out_dims::Int
@@ -18,7 +91,7 @@ _getstate(s::StatefulLuxLayer{Static.False}) = s.st_any
end

function GCNConv(ch::Pair{Int, Int}, σ = identity;
init_weight = glorot_uniform,
init_bias = zeros32,
use_bias::Bool = true,
add_self_loops::Bool = true,
@@ -55,7 +128,7 @@ end

function (l::GCNConv)(g, x, edge_weight, ps, st;
norm_fn = d -> 1 ./ sqrt.(d),
-conv_weight=nothing, )
+conv_weight=nothing)

m = (; ps.weight, bias = _getbias(ps),
l.add_self_loops, l.use_edge_weight, l.σ)
13 changes: 11 additions & 2 deletions GNNlib/docs/src/index.md
@@ -1,6 +1,15 @@
# GNNlib.jl

GNNlib.jl is a package that provides the implementation of the basic message passing functions and
-functional implementation of graph convolutional layers, which are used to build graph neural networks in both the Flux.jl and Lux.jl machine learning frameworks, created in the GraphNeuralNetworks.jl and GNNLux.jl packages, respectively.
+functional implementation of graph convolutional layers, which are used to build graph neural networks in both the [Flux.jl](https://fluxml.ai/Flux.jl/stable/) and [Lux.jl](https://lux.csail.mit.edu/stable/) machine learning frameworks, created in the GraphNeuralNetworks.jl and GNNLux.jl packages, respectively.

This package depends on GNNGraphs.jl and NNlib.jl, and is primarily intended for developers looking to create new GNN architectures. For most users, the higher-level GraphNeuralNetworks.jl and GNNLux.jl packages are recommended.

## Installation

The package can be installed with the Julia package manager.
From the Julia REPL, type `]` to enter the Pkg REPL mode and run:

```julia
pkg> add GNNlib
```
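
To illustrate the "functional implementation" mentioned above, here is a minimal message-passing sketch (editor's illustration, assuming GNNlib's `propagate` together with the built-in `copy_xj` message function):

```julia
using GNNGraphs, GNNlib

s, t = [1, 1, 2, 3], [2, 3, 1, 1]
g = GNNGraph(s, t)
x = randn(Float32, 4, g.num_nodes)  # a 4-dimensional feature per node

# copy_xj forwards each source-node feature x_j as the message;
# `+` sum-aggregates the incoming messages at every target node.
y = propagate(copy_xj, g, +; xj = x)  # 4 × num_nodes matrix
```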
2 changes: 1 addition & 1 deletion GNNlib/docs/src/messagepassing.md
@@ -134,7 +134,7 @@ function (l::GCN)(g::GNNGraph, x::AbstractMatrix{T}) where T
end
```

-See the `GATConv` implementation [here](https://github.com/JuliaGraphs/GraphNeuralNetworks.jl/blob/master/src/layers/conv.jl) for a more complex example.
+See the `GATConv` implementation [here](https://juliagraphs.org/GraphNeuralNetworks.jl/graphneuralnetworks/api/conv/) for a more complex example.


## Built-in message functions
2 changes: 1 addition & 1 deletion GNNlib/src/GNNlib.jl
@@ -2,7 +2,7 @@ module GNNlib

using Statistics: mean
using LinearAlgebra, Random
-using MLUtils: zeros_like
+using MLUtils: zeros_like, ones_like
using NNlib
using NNlib: scatter, gather
using DataStructures: nlargest
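
For context on the import change above: `ones_like` mirrors `zeros_like`, producing an array of ones that matches the input's element type and size (a quick illustration of MLUtils behavior, not code from this commit):

```julia
using MLUtils

x = rand(Float32, 2, 3)
zeros_like(x)  # 2×3 Matrix{Float32} filled with 0.0f0
ones_like(x)   # 2×3 Matrix{Float32} filled with 1.0f0
```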
Expand Down
22 changes: 11 additions & 11 deletions GraphNeuralNetworks/docs/src/home.md
@@ -1,8 +1,8 @@
# GraphNeuralNetworks

-This is the documentation page for [GraphNeuralNetworks.jl](https://github.com/JuliaGraphs/GraphNeuralNetworks.jl), a graph neural network library written in Julia and based on the deep learning framework [Flux.jl](https://github.com/FluxML/Flux.jl).
-GraphNeuralNetworks.jl is largely inspired by [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/), [Deep Graph Library](https://docs.dgl.ai/),
-and [GeometricFlux.jl](https://fluxml.ai/GeometricFlux.jl/stable/).
+GraphNeuralNetworks.jl is a graph neural network package based on the deep learning framework [Flux.jl](https://github.com/FluxML/Flux.jl).

+It provides a set of stateful graph convolutional layers and utilities to build graph neural networks.

Among its features:

@@ -11,30 +11,30 @@ Among its features:
* Easy to define custom layers.
* CUDA support.
* Integration with [Graphs.jl](https://github.com/JuliaGraphs/Graphs.jl).
-* [Examples](https://github.com/JuliaGraphs/GraphNeuralNetworks.jl/tree/master/examples) of node, edge, and graph level machine learning tasks.
+* [Examples](https://github.com/JuliaGraphs/GraphNeuralNetworks.jl/tree/master/GraphNeuralNetworks/examples) of node, edge, and graph level machine learning tasks.
* Heterogeneous and temporal graphs.


## Package overview

-Let's give a brief overview of the package by solving a
-graph regression problem with synthetic data.
+Let's give a brief overview of the package by solving a graph regression problem with synthetic data.

-Usage examples on real datasets can be found in the [examples](https://github.com/JuliaGraphs/GraphNeuralNetworks.jl/tree/master/examples) folder.
+Usage examples on real datasets can be found in the [examples](https://github.com/JuliaGraphs/GraphNeuralNetworks.jl/tree/master/GraphNeuralNetworks/examples) folder.

### Data preparation

We create a dataset consisting in multiple random graphs and associated data features.

```julia
-using GraphNeuralNetworks, Graphs, Flux, CUDA, Statistics, MLUtils
+using GraphNeuralNetworks, Flux, CUDA, Statistics, MLUtils
using Flux: DataLoader

all_graphs = GNNGraph[]

for _ in 1:1000
g = rand_graph(10, 40,
-             ndata=(; x = randn(Float32, 16,10)), # input node features
-             gdata=(; y = randn(Float32))) # regression target
+             ndata=(; x = randn(Float32, 16,10)), # Input node features
+             gdata=(; y = randn(Float32))) # Regression target
push!(all_graphs, g)
end
```
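
The diff elides the next step of the overview; typically the graphs are split and batched with the `DataLoader` imported above. A sketch of that step (editor's illustration, not the file's verbatim content):

```julia
train_graphs, test_graphs = MLUtils.splitobs(all_graphs, at = 0.8)

# collate=true batches many GNNGraphs into one large disconnected graph per batch
train_loader = DataLoader(train_graphs, batchsize = 32, shuffle = true, collate = true)
test_loader = DataLoader(test_graphs, batchsize = 32, shuffle = false, collate = true)
```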
@@ -50,7 +50,7 @@ model = GNNChain(GCNConv(16 => 64),
BatchNorm(64), # Apply batch normalization on node features (nodes dimension is batch dimension)
x -> relu.(x),
GCNConv(64 => 64, relu),
-GlobalPool(mean), # aggregate node-wise features into graph-wise features
+GlobalPool(mean), # Aggregate node-wise features into graph-wise features
Dense(64, 1)) |> device

opt = Flux.setup(Adam(1f-4), model)
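
The hunk ends before the training loop. For completeness, a typical loop with Flux's explicit-gradient API might look like the following sketch (editor's illustration; the `loss` definition and `train_loader` are assumptions consistent with the code above, not the file's verbatim content):

```julia
# mean squared error between predicted and target graph-level values
loss(model, g::GNNGraph) = mean((vec(model(g, g.x)) .- g.y) .^ 2)

for epoch in 1:100
    for g in train_loader
        g = g |> device
        grad = Flux.gradient(model -> loss(model, g), model)
        Flux.update!(opt, model, grad[1])
    end
end
```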