
Added sharding to experimental features #4569

Merged: 1 commit into master on Feb 9, 2018

Conversation

@victorb (Member) commented Jan 9, 2018

Not sure about "Road to being a real feature" and also would like to have clarified when this is being used. My understanding is that it'll shard automatically both when using the Files API and also when doing `ipfs add -r`, but I'm not 100% sure about this.

@ghost assigned victorb on Jan 9, 2018
@ghost added the status/in-progress label on Jan 9, 2018
```
ipfs config --json Experimental.ShardingEnabled true
```

### Road to being a real feature
Review comment (Member):

IIRC this was a problem:

  • Make sure that objects that don't have to be sharded aren't

Review comment (Member):

Also, we kind of wanted to generalize this and define a new layer between IPLD and IPFS with sharding.

### State
Experimental

Currently, when too many items are added into a unixfs directory, the object gets too large and you may experience issues. To prevent this problem, and to generally make working with really large directories more efficient, we have implemented a HAMT structure for unixfs. To enable this feature, run:
Review comment (Member):

I'd keep the description short, something like: "Allows creating directories with an unlimited number of entries - currently the size of unixfs directories is limited by the maximum block size."

Also, try to keep the lines in docs under 80 characters.
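As a concrete illustration of the feature being documented here, this is roughly how you'd try it out locally. A minimal sketch only: the directory name, file count, and use of `-Q` are illustrative assumptions, not something specified in this PR.

```
# Turn on the experimental sharding flag in the local repo config
ipfs config --json Experimental.ShardingEnabled true

# Build a directory with many entries (hypothetical stress test) --
# enough that a plain unixfs directory node would grow very large
mkdir big-dir
for i in $(seq 1 5000); do echo "$i" > "big-dir/file-$i"; done

# Add it recursively; with the flag set, the directory is stored as a HAMT
ipfs add -r -Q big-dir
```

`-Q` only changes what gets printed (the final root hash); it has nothing to do with sharding itself.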

@@ -343,3 +343,23 @@ See [Plugin docs](./plugins.md)

- [ ] Needs more testing
- [ ] Make sure there are no unknown major problems

## Sharding / HAMT
Review comment (Member):

I'd explicitly say directory sharding.


Commit message:

Not sure about "Road to being a real feature" and also would like to have clarified when this is being used. My understanding is that it'll shard automatically both when using the Files API and also when doing `ipfs add -r`, but I'm not 100% sure about this.

License: MIT
Signed-off-by: Victor Bjelkholm <victorbjelkholm@gmail.com>
@victorb force-pushed the add-sharding-experimental-flag branch from ec6f6b0 to db3abf3 on January 24, 2018 08:46
@victorb (Member, Author) commented Jan 24, 2018

Thanks for the feedback @Stebalien and @magik6k, I've updated the document now.

@victorb (Member, Author) commented Jan 24, 2018

Could someone clarify if sharding always happens, or only when using `ipfs add -r` / MFS?

@whyrusleeping (Member) commented

@victorbjelkholm when sharding is enabled, everything dealing with unixfs directories will use it.
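In other words, once the flag is on, both the CLI add path and the Files (MFS) API produce sharded unixfs directories. A minimal sketch of what that looks like in practice; the paths and the `<hash>` placeholder are illustrative, not from this thread:

```
# With Experimental.ShardingEnabled set to true...

# ...a recursive add builds sharded unixfs directories
ipfs add -r some-large-directory

# ...and directories created or modified through the Files (MFS) API
# use the same sharded representation
ipfs files mkdir /photos
ipfs files cp /ipfs/<hash-of-added-directory> /photos/backup
```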

@victorb (Member, Author) commented Feb 9, 2018

@whyrusleeping thanks, then this is ready to be merged.

@whyrusleeping merged commit eca0486 into master on Feb 9, 2018
@ghost removed the status/in-progress label on Feb 9, 2018
@whyrusleeping deleted the add-sharding-experimental-flag branch on February 9, 2018 17:54