This repository contains the code for the paper TILP: Differentiable Learning of Temporal Logical Rules on Knowledge Graphs.
We propose TILP, a differentiable framework for temporal logical rule learning. By designing a constrained random walk mechanism and introducing temporal operators, we ensure the efficiency of our model. We model temporal features in tKGs, e.g., recurrence, temporal order, the interval between a pair of relations, and duration, and incorporate them into our learning process.
Before running the code, you need to set up the environment given in requirements.txt. We recommend using Anaconda for the installation.
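For example (the environment name `tilp` and the Python version below are placeholders; adjust them to match requirements.txt):

```
conda create -n tilp python=3.8
conda activate tilp
pip install -r requirements.txt
```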
After the installation, create the following folder structure for the experiments:
```
TILP/
│
├── src/
│
├── data/
│   ├── WIKIDATA12k/
│   └── YAGO11k/
│
└── output/
    ├── found_rules/
    ├── found_t_s/
    ├── train_weights_tfm/
    ├── train_weights/
    ├── learned_rules/
    ├── explore_res/
    └── rank_dict/
```
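Assuming you work from the repository root, the output subfolders can be created in one command (the data/ folders ship with the datasets):

```
mkdir -p output/found_rules output/found_t_s output/train_weights_tfm \
         output/train_weights output/learned_rules output/explore_res \
         output/rank_dict
```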
To run the code, simply use the command

```
python src/main.py
```
Some input files are optional:
- pos_examples_idx.json: specifies the samples used for training. By default (without this file), we use the whole training set; we also sometimes sample it randomly (see the sketch after this list).
- bg_train.txt: specifies the background knowledge used for training. By default (without this file), we use the whole training set.
- bg_test.txt: specifies the background knowledge used for testing. By default (without this file), we use the whole training set.
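A minimal sketch of random sampling into pos_examples_idx.json; we assume the file holds a plain JSON list of training-set indices (check how main.py reads it), and the sizes below are hypothetical:

```python
# Assumed format: a JSON list of training-set indices used as positive examples.
import json
import random

NUM_TRAIN = 32497    # hypothetical size of your training split
SAMPLE_SIZE = 2000   # hypothetical number of examples to keep

idx = random.sample(range(NUM_TRAIN), SAMPLE_SIZE)
with open("pos_examples_idx.json", "w") as f:
    json.dump(sorted(idx), f)
```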
The complete run can be time-consuming. To accelerate it, you can:
- randomly sample some positive examples via pos_examples_idx.json (main.py);
- reduce 'self.num_training_samples', 'self.num_paths_max', 'self.num_path_sampling', and 'self.max_rulenum' (Models.py), as sketched below;
- increase 'num_processes' (all .py files).
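A minimal sketch of those knobs, assuming the attribute names above; the values are illustrative placeholders, not the repository's defaults:

```python
# Illustrative placeholders only -- not the repository's default values.
class SpeedSettings:
    def __init__(self):
        # Attributes named in the tips above (defined in Models.py):
        self.num_training_samples = 1000  # fewer examples -> faster epochs
        self.num_paths_max = 20           # cap on paths kept per example
        self.num_path_sampling = 5        # fewer sampled paths per rule
        self.max_rulenum = 50             # prune to a smaller rule set

# Raise this to match your CPU core count (set in all .py files).
num_processes = 16
```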
If you find this repository useful, please cite our paper:

```
@inproceedings{xiong2022tilp,
  title={TILP: Differentiable Learning of Temporal Logical Rules on Knowledge Graphs},
  author={Xiong, Siheng and Yang, Yuan and Fekri, Faramarz and Kerce, James Clayton},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023}
}
```