To use the search functionality, apply for access to the mosaic beta.
Art is one of the few languages that transcends barriers of country, culture, and time. We aim to create an algorithm that can help discover the common semantic elements of art across cultures, media, artists, and collections within the combined artworks of The Metropolitan Museum of Art and the Rijksmuseum.
Image retrieval systems allow individuals to find images that are semantically similar to a query image. They serve as the backbone of reverse image search engines and many product recommendation engines. We present conditional image retrieval, a novel method for specializing image retrieval systems. When applied to large art datasets, conditional image retrieval provides visual analogies that bring to light hidden connections among different artists, cultures, and media, efficiently finding shared semantics between works of vastly different media and cultural origins. Our paper introduces new variants of the K-Nearest Neighbors algorithm that support specializing to particular subsets of an image collection on the fly.
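The core idea can be sketched in a few lines: restrict the reference collection to the items that satisfy a condition (for example, a particular museum, culture, or medium), then run an ordinary nearest-neighbor search within that subset. The snippet below is a minimal illustration of this conditioning, not the paper's implementation; the metadata fields and the use of scikit-learn's NearestNeighbors are assumptions for the example.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def conditional_knn(query_feature, features, metadata, condition, k=5):
    """Return the k nearest neighbors of `query_feature` among the
    items whose metadata satisfies `condition` (a boolean predicate)."""
    # Keep only the rows of the collection that match the condition.
    mask = np.array([condition(m) for m in metadata])
    subset = features[mask]
    subset_indices = np.flatnonzero(mask)

    # Ordinary k-NN search restricted to the conditioned subset.
    knn = NearestNeighbors(n_neighbors=min(k, len(subset))).fit(subset)
    distances, idx = knn.kneighbors(query_feature.reshape(1, -1))
    return subset_indices[idx[0]], distances[0]

# Hypothetical usage: find the 5 Rijksmuseum works closest to a Met query.
# `features` is an (N, D) array of image embeddings and `metadata` holds
# per-image records such as {"museum": "rijks", "culture": "Dutch"}.
# matches, dists = conditional_knn(query, features, metadata,
#                                  lambda m: m["museum"] == "rijks")
```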
To find artworks with similar semantic structure, we leverage "features" from deep vision networks trained on ImageNet. These networks map images into a high-dimensional space where distance is semantically meaningful. In this space, nearest neighbor queries tend to act as "reverse image search engines," and similar objects often share common structure.
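As a rough sketch of this step, assuming PyTorch and an ImageNet-pretrained ResNet-50 from torchvision (the actual backbone and preprocessing used by the project may differ), features can be obtained by dropping the classification head and using the pooled activations as an embedding:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# ImageNet-pretrained backbone with the classification head removed,
# so the forward pass returns a pooled feature vector per image.
model = models.resnet50(pretrained=True)
model.fc = torch.nn.Identity()
model.eval()

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Map an image file to a 2048-dimensional feature vector."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0).numpy()

# Distances between these vectors are semantically meaningful, so a
# nearest-neighbor query over them acts like a reverse image search, e.g.:
# dist = np.linalg.norm(embed("met_painting.jpg") - embed("rijks_print.jpg"))
```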
To learn more about this project, please join our live webinar at 10 AM PST on 7/30/2020.
- Hamilton, M., Fu, S., Freeman, W. T., & Lu, M. (2020). Conditional Image Retrieval. arXiv preprint arXiv:2007.07177.
To cite this work, please use the following:
@article{hamilton2020conditional,
  title={Conditional Image Retrieval},
  author={Hamilton, Mark and Fu, Stephanie and Freeman, William T and Lu, Mindren},
  journal={arXiv preprint arXiv:2007.07177},
  year={2020}
}
Please see our developer guide to build the project for yourself.
Examples of the visual analogies the system discovers include shared portrayals of reverence across 3,000 years of art, and how to match your watch to your outfit and your dinnerware.
Special thanks to all of the contributors who helped make this project a reality!
- Mark Hamilton
- Chris Hoder
- Professor William T Freeman
- Lei Zhang
- Anand Raman
- Al Bracuti
- Ryan Gaspar
- Christina Lee
- Lily Li
The MIT x MSFT externs were pivotal in turning this research project into a functioning website. In only one month, the team designed and built the mosaic website. Stephanie Fu and Mindren Lu also contributed to the "Conditional Image Retrieval" publication through their evaluation of the effect of different pre-trained networks on nonparametric style transfer.
- Stephanie Fu
- Mindren Lu
- Zhenbang (Ben) Chen
- Felix Tran
- Darius Bopp
- Margaret (Maggie) Wang
- Marina Rogers
- Johnny Bui
This project owes a great deal of thanks to the MSFT Garage team. They are passionate creators who seek to incubate new projects and inspire new generations of engineers. Their support and mentorship on this project are sincerely appreciated.
- Chris Templeman
- Linda Thackery
- Jean-Yves Ntamwemezi
- Dalitso Banda
- Anunaya Pandey
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.