A package for streamlined multi-domain data integration and translation, based on a cross-modal autoencoder architecture [1]. It is designed to make it easy to add new data modalities and to train models for seamless translation between them.
To install the package, simply run:
pip install multimodal-autoencoders
(optional) To make sure you have all the dependencies, you can create an appropriate environment using the environment.yml file with Conda:
conda env create -f environment.yml
Examples of how to train and use the multimodal autoencoders can be found in the notebooks in the examples directory.
Usage is centered around a JointTrainer instance (defined in multimodal_autoencoders/trainer/joint_trainer.py). A central part of the architecture is that the different components need to be associated with the individual modalities. This is done through Python dictionaries, which most users will already be familiar with.
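As a rough illustration of this dictionary-based wiring, the sketch below shows the general pattern of keying per-modality components by modality name. The class and variable names here are placeholders, not the package's actual API; see the example notebooks for the real class names and signatures.

```python
# Sketch of the dictionary-based association pattern. DummyEncoder is an
# illustrative stand-in for a modality-specific component, NOT a class
# from multimodal-autoencoders.

class DummyEncoder:
    """Stand-in for a modality-specific encoder network."""
    def __init__(self, input_dim, latent_dim):
        self.input_dim = input_dim
        self.latent_dim = latent_dim

# One entry per modality: the key names the modality, the value is the
# component responsible for it. Each per-modality dictionary handed to
# the trainer would be expected to share the same set of keys.
encoders = {
    "rna": DummyEncoder(input_dim=2000, latent_dim=64),
    "imaging": DummyEncoder(input_dim=512, latent_dim=64),
}

# A trainer can then look up or iterate over components by modality name:
for modality, encoder in encoders.items():
    print(modality, encoder.latent_dim)
```

The advantage of this layout is that adding a new modality only requires adding one more key/value pair to each dictionary, rather than changing the trainer's code.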
Thibault Bechtler (th.bechtler@gmail.com) & Bartosz Baranowski (bartosz.baranowski@novartis.com)
Contributors: Michal Pikusa (michal.pikusa@novartis.com), Steffen Renner (steffen.renner@novartis.com)
[1] Yang, K.D., Belyaeva, A., Venkatachalapathy, S. et al. Multi-domain translation between single-cell imaging and sequencing data using autoencoders. Nat Commun 12, 31 (2021). https://doi.org/10.1038/s41467-020-20249-2