Open Source Toolbox for Boosting AI Research Efficiency
ReproModel helps the AI research community reproduce, compare, train, and test AI models faster.
The ReproModel toolbox streamlines research by providing standardized models, dataloaders, and processing procedures. It features a comprehensive suite of pre-existing experiments, a code extractor, and an LLM descriptor. This lets researchers focus on new datasets and model development, significantly reducing time and computational costs.
With this no-code solution, you get access to a collection of benchmark and state-of-the-art (SOTA) models and datasets. Dive into training visualizations, effortlessly extract code for publication, and let our LLM-powered automated methodology description writer do the heavy lifting.
The current prototype helps researchers modularize their development and compare the performance of each pipeline step in a reproducible way. This prototype version helped us reduce the time for model development, computation, and writing by at least 40%. Watch our demo.
Upcoming versions will help researchers build on state-of-the-art research faster by simply loading a previously published study ID. All code, experiments, and results will already be verified and stored in our system.
https://repromodel.netlify.app
✅ Standard Models Included
✅ Benchmark Datasets
✅ Metrics (100+)
✅ Losses (20+)
✅ Data Splitting
✅ Augmentations
✅ Optimizers (10+)
✅ Learning Rate Schedulers
✅ Early Stopping Criterion
✅ Training Device Selection
✅ Logging (TensorBoard ...)
✅ AI Experiment Description Generator
✅ Code Extractor
✅ Custom Script Editor
✅ Docker image
🔲 GUI augmentation builder
🔲 Conventional ML models workflow
🔲 Parallel training
🔲 Statistical testing
🔲 Explainability
🔲 Interpretability
For examples and step-by-step instructions, please visit our full documentation at https://www.repromodel.com/docs.
Please verify that you have Docker (or the Docker CLI) installed on your system.
Pull the Docker image:
docker pull dsitnik1612/repromodel
Run the container:
docker run --name ReproModel -p 5173:5173 -p 6006:6006 -p 5005:5005 dsitnik1612/repromodel
Then open the frontend at http://localhost:5173/
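Once the container is running, the standard Docker lifecycle commands apply. A minimal sketch (here `ReproModel` is the `--name` passed to `docker run` above; adjust it if you chose a different name):

```shell
# Hedged sketch: routine follow-up commands for the container started above.
# The "|| true" guards keep the script going even if the container is absent.
if command -v docker >/dev/null 2>&1; then
    docker logs ReproModel || true    # inspect frontend/backend output
    docker stop ReproModel || true    # stop the container
    docker start ReproModel || true   # restart it later without re-pulling
    status="docker-available"
else
    status="docker-not-found"         # Docker CLI is not on PATH
fi
```

Restarting with `docker start` preserves the container and its port mappings, so there is no need to repeat the full `docker run` invocation.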
If you want to run ReproModel directly from the source code instead, follow these steps:
You will need to have Node.js installed.
The following command combines npm install, the creation of a virtual environment, and the launch of the frontend and backend:
npm run repromodel            # Mac and Linux
npm run repromodel-windows    # Windows
If you want to launch the frontend only:
npm install
npm run dev
To use the Methodology Generator, you need to have Ollama installed. You can get Ollama from their website and pull the model of your choice.
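For example, pulling a model looks like this. A minimal sketch, assuming `llama3` as an illustrative model name; ReproModel does not mandate a specific model, so substitute whichever one you prefer:

```shell
# Hedged sketch: fetch an example model for the LLM-powered methodology generator.
# "llama3" is an illustrative model name, not a ReproModel requirement.
if command -v ollama >/dev/null 2>&1; then
    ollama pull llama3 || true    # download the model weights locally
    pulled="ollama-available"
else
    pulled="ollama-not-found"     # install Ollama from their website first
fi
```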
npm install
npm run repromodel-with-llm            # Mac and Linux
npm run repromodel-with-llm-windows    # Windows
Then open the frontend at http://localhost:5173/
Contributions are what make the open-source community such an amazing place to learn, inspire, and create.
Any contributions you make are greatly appreciated. If you have a suggestion that would make this better, please read our Contribution Guidelines and Code of Conduct.
Dario Sitnik, PhD, AI Scientist (GitHub)
Mint Owl, ML Engineer (GitHub)
Martin Schumakher, Developer (GitHub)
Tomonari Feehan, Developer (GitHub)
For questions or any kind of support, you can reach out to me at dario.sitnik@gmail.com
This project is licensed under the MIT License.