8 hours to index a dataset with thousands of files #8097
The same thing happened to me with a dataset having 33,870 files while upgrading from 4.20 to 5.6 with a new Solr version. Maybe we should change the indexing order and finish with the datasets that have many files.
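A rough illustration of that reordering idea is sketched below. This is only a sketch under assumptions: the `Dataset` record, `fileCount` accessor, and `orderForIndexing` method are hypothetical stand-ins, not the actual Dataverse indexing code.

```java
import java.util.Comparator;
import java.util.List;

public class IndexAllOrdering {

    // Hypothetical placeholder for a dataset with a known number of files.
    record Dataset(String persistentId, int fileCount) {}

    // Sort ascending by file count so that datasets with many files are indexed
    // last, letting most of the collection become searchable before the slow ones.
    static List<Dataset> orderForIndexing(List<Dataset> datasets) {
        return datasets.stream()
                .sorted(Comparator.comparingInt(Dataset::fileCount))
                .toList();
    }

    public static void main(String[] args) {
        List<Dataset> datasets = List.of(
                new Dataset("doi:10.7910/DVN/3CTMKP", 25_000),
                new Dataset("doi:10.7910/DVN/EXAMPLE", 12)); // example entry, not a real DOI
        // The small dataset is indexed first; the 25,000-file dataset goes last.
        orderForIndexing(datasets).forEach(d ->
                System.out.println(d.persistentId() + " (" + d.fileCount() + " files)"));
    }
}
```

Whether "many files last" is the right ordering is an open question here; the point is only that "index all" need not process datasets in arbitrary order.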
At standup this morning we talked about a dataset that takes eight hours to index (as part of "index all"). It has 25,000 files: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/3CTMKP
More context: