
Default to mmapfs within hybridfs #8508

Merged
6 commits merged on Aug 15, 2023

Conversation

dzane17
Contributor

@dzane17 dzane17 commented Jul 6, 2023

Description

Currently, the OpenSearch code contains an explicit list of file extensions that are loaded using mmap from hybridfs; all other file extensions default to nio. This PR flips the logic to keep a list of nio file extensions, while all others default to mmap. This will prevent future regressions in case Lucene adds a new segment file type that should use mmap.
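
A minimal sketch of the flipped check, using illustrative names (the actual logic lives in the hybrid directory built by FsDirectoryFactory, not in a class with these names):

```java
import java.util.Set;

// Illustrative only: before this PR the directory kept a set of mmap extensions
// and fell back to nio; after this PR it keeps a set of nio extensions and
// falls back to mmap for everything else, including future Lucene file types.
final class HybridExtensionPolicy {
    private final Set<String> nioExtensions;

    HybridExtensionPolicy(Set<String> nioExtensions) {
        this.nioExtensions = Set.copyOf(nioExtensions); // unmodifiable, per the last commit
    }

    boolean useMmap(String fileName) {
        int dot = fileName.lastIndexOf('.');
        String ext = (dot == -1) ? "" : fileName.substring(dot + 1);
        return !nioExtensions.contains(ext); // unknown extensions now default to mmap
    }
}
```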

Related Issues

Resolves #8297

Check List

  • New functionality includes testing.
    • All tests pass
  • New functionality has been documented.
    • New functionality has javadoc added
  • Commits are signed per the DCO using --signoff
  • Commit changes are listed out in CHANGELOG.md file (See: Changelog)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@github-actions
Contributor

github-actions bot commented Jul 6, 2023

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.remotestore.SegmentReplicationUsingRemoteStoreIT.testDropPrimaryDuringReplication
      1 org.opensearch.remotestore.RemoteStoreIT.testStaleCommitDeletionWithInvokeFlush

@github-actions
Contributor

github-actions bot commented Jul 6, 2023

Gradle Check (Jenkins) Run Completed with:

@dzane17 dzane17 closed this Jul 7, 2023
@dzane17 dzane17 reopened this Jul 7, 2023
@github-actions
Contributor

github-actions bot commented Jul 7, 2023

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.remotestore.RemoteStoreIT.testStaleCommitDeletionWithoutInvokeFlush
      1 org.opensearch.indices.replication.SegmentReplicationIT.testScrollCreatedOnReplica

@codecov

codecov bot commented Jul 7, 2023

Codecov Report

Merging #8508 (b8e1e09) into main (4114009) will decrease coverage by 0.08%.
Report is 4 commits behind head on main.
The diff coverage is 86.84%.

@@             Coverage Diff              @@
##               main    #8508      +/-   ##
============================================
- Coverage     71.19%   71.12%   -0.08%     
- Complexity    57455    57476      +21     
============================================
  Files          4777     4777              
  Lines        270712   270746      +34     
  Branches      39566    39572       +6     
============================================
- Hits         192729   192555     -174     
- Misses        61782    62052     +270     
+ Partials      16201    16139      -62     
Files Changed Coverage Δ
...pensearch/common/settings/IndexScopedSettings.java 100.00% <ø> (ø)
...rc/main/java/org/opensearch/index/IndexModule.java 82.35% <81.48%> (-0.14%) ⬇️
...org/opensearch/index/store/FsDirectoryFactory.java 79.72% <100.00%> (+2.11%) ⬆️

... and 461 files with indirect coverage changes

@nknize
Collaborator

nknize commented Aug 28, 2023

> I already added a benchmark which details the impact of this change on the Lucene Vector Field (ref: #9528 (comment)).

@navneet1v, +1!! That seems right to me! I think the .vec (vector data) and .vex (vector index) files should be MMapped by default for the same reasons the terms dictionary, norms, and BKD are mmapped: they're hot in the page cache and should reap the performance gains. I don't think the .vem (metadata) file should be mmapped though; it's just the vector metadata and should be small enough as is (the page cache usually allocates more memory than what's needed so data can be prefetched in blind anticipation of access).

@msokolov I'm curious if you have thoughts about mmapping Lucene's .vec and .vex files for the performance gains pointed out in the benchmarks referenced. Have y'all run across any crouching tigers or hidden dragons?

@jainankitk
Collaborator

jainankitk commented Aug 28, 2023

> I think this change comes from a place of good intentions (e.g., prevent oopsie regressions because we missed a "new" lucene file that should've been mmapped) but can have even worse (unintended) consequences and we should seriously consider reverting.

@nknize - In addition to that, this should always have been the default behavior of the hybrid directory, given that it replaced the MMapDirectory used until 6.x.

> So if "Lucene" does the opposite as your comment suggests, and adds new file formats that should NOT be MMapped, (e.g., metadata files, "improved" stored fields file formats) and we "default" to mmapping them, then the page cache will fill w/ unnecessarily mmapped files that will evict the more important search sensitive "hot" cached files such as doc values and terms dictionaries.

My bad if I suggested so, but "Lucene" does not do the opposite (it uses MMap by default on all supported instance types). HybridDirectory is a concept introduced by Elasticsearch in 7.0, and it was plagued with several bugs until 7.4.

> We should bias toward simplicity and not put that burden on OpenSearch users (especially new users that blindly benchmark defaults against competitive alternatives :)).

We are not doing that with this PR!? The only difference is that the default is mmap instead of nio for unspecified files.

@nknize
Collaborator

nknize commented Aug 28, 2023

> The only difference is that the default is mmap instead of nio for unspecified files.

Right... that's what I suggest we don't do. If Lucene introduces a read-once-and-forget file that we're not explicitly specifying, why would we default to mmapping it? That doesn't make sense. If Lucene introduces a hot file that stays in the page cache, then we should explicitly MMap that file in the corresponding Lucene upgrade PR.

@nknize
Collaborator

nknize commented Aug 28, 2023

@jainankitk That was the purpose of this comment in the old codebase:

  /*
     * We are mmapping norms, docvalues as well as term dictionaries, all other files are served through NIOFS
     * this provides good random access performance and does not lead to page cache thrashing.
     */

I think that was lost in some OpenSearch PRs.

@jainankitk
Collaborator

> If Lucene introduces a read-once-and-forget file that we're not explicitly specifying, why would we default to mmapping it?

It won't matter, as the page cache will eventually evict that file. And since it was read only once, it cannot cause page cache thrashing.

@msokolov

Hi @nknize, yes, it makes sense to me to use MMAP for these vector files (.vec and .vex), and it shouldn't be needed for .vem, although I wonder whether it would really make any difference to include that as well?

@jainankitk
Collaborator

jainankitk commented Aug 28, 2023

Also, just an operational anecdote from the managed service: I don't remember seeing any customer issues due to a file being mmapped instead of using niofs. But I remember several issues the other way around, due to the FST (.tip) file being moved off-heap in 7.1.

@backslasht @Bukhtawar @shwetathareja @itiyamas @rishabhmaurya - Do you remember any managed-service customer issue that was root-caused to page cache thrashing, where using niofs instead of mmap solved the issue?

@jainankitk
Collaborator

@mikecan - We have discussed this a bit offline as well. What do you think about keeping mmap as the default fallback instead of niofs? Lucene uses mmap everywhere, right?

@navneet1v
Contributor

> Hi @nknize, yes, it makes sense to me to use MMAP for these vector files (.vec and .vex), and it shouldn't be needed for .vem, although I wonder whether it would really make any difference to include that as well?

@msokolov we did try adding the .vem file to mmap, but we decided against that approach as we were not seeing performance gains, at least on our benchmarks with the 1M sift-128 dataset.

@nknize
Collaborator

nknize commented Aug 28, 2023

> It won't matter, as the page cache will eventually evict that file.

This is true. However, madvise sometimes causes a bigger page to be loaded into memory than requested (e.g., prefetch). Maybe that makes index evictions more probable? Maybe not?

> I don't remember seeing any customer issues due to a file being mmapped instead of using niofs. But I remember several issues the other way around, due to the FST (.tip) file being moved off-heap in 7.1.

+1 That's a good data point. However, I had several customer issues where search time "randomly" took 60ms (violating a 50ms QoS requirement) because of cache misses on some shards (appearing random). It would be a shame if an increase in evictions were a cause? But there are always tradeoffs...

Do we have any concrete testing to justify the default change? The point of #3837 was to make the mmapped files configurable so a user could always just add it. I know they can do the same here (just the other way around), so it's not a one-way door, nor a hill I'd die on.
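
For illustration, a hedged sketch of how a user could pin extensions explicitly at index creation time. The setting keys are assumed from #3837 (`index.store.hybrid.mmap.extensions`) and from this PR's new nio-extensions setting, and the extension values are placeholders, so verify the exact constants in IndexModule for your OpenSearch version:

```java
import org.opensearch.common.settings.Settings;

public final class HybridStoreSettingsExample {
    public static void main(String[] args) {
        // Assumed setting key names; check IndexModule in your OpenSearch version.
        Settings indexSettings = Settings.builder()
                .put("index.store.type", "hybridfs")
                // With this PR, only the listed extensions use NIOFS; everything
                // else (including any new Lucene file types) falls back to mmap.
                .putList("index.store.hybrid.nio.extensions", "pos", "pay") // placeholder values
                .build();
        System.out.println(indexSettings);
    }
}
```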

@jainankitk
Collaborator

> However, I had several customer issues where search time "randomly" took 60ms (violating a 50ms QoS requirement) because of cache misses on some shards (appearing random). It would be a shame if an increase in evictions were a cause?

I wish we had a concrete root cause for this. There are a lot of other variables at play as well, like GC, query type, etc.

> Do we have any concrete testing to justify the default change?

The default change is primarily based on the performance degradation, and no complaints with 6.8 whatsoever. I also recently realized this: I had been blaming off-heap all along, while it was working absolutely fine for Lucene. Of course, niofs was to blame for this issue.

> I know they can do the same here (just the other way around), so it's not a one-way door, nor a hill I'd die on.

Yeah, I am also in favor of keeping mmap as the default and revisiting if it causes any regressions (I don't expect any, based on the Lucene and 6.8 experience). Also, in the short term, the default OpenSearch behavior is not changing at all. Hopefully it will keep someone like me from a multi-year wild goose chase just because I missed BufferedIndexInput in the flame graph (thanks @mikecan!).

@nknize
Collaborator

nknize commented Aug 28, 2023

> I wish we had a concrete root cause for this.

We do. It's an AWS issue, and the related open source issue that was opened is #1536.

> ...and no complaints with 6.8 whatsoever.

I think you're talking about Daniel's commit here: elastic/elasticsearch@f0052b1

But that came in 7.0.0 and also defaults to MMap for hot index files while leaving metadata files to nio.

Or are you talking about @mikecan's issue elastic/elasticsearch#16983, which was also primarily focused on the compound file format (hot index files)?

I'm not sure this is a reason to default everything to MMap? But it's true that small files could just get evicted, so I'm not sure there's a strong justification either way. In those circumstances I like to take the more conservative approach and lean into good benchmark testing before changing defaults. That's why I suggest we revert this for 2.10 and lean into benchmark testing for 2.11.

Update:

> ...of course, niofs was to blame for this issue

Oh, I see the motivation now: that chase led to defaulting everything to MMap and only revisiting nio if necessary. That makes sense, and thanks for the context! Hmmm. I'm not sure I'm completely convinced on that blanket change as a prevention mechanism? But y'all are running this at some good scale, so maybe there are some numbers that can help guide the choice a bit better? @mikecan, do you think the performance hit due to any cache evictions from mmapping files by default is negligible? We'd only hit this if Lucene introduces new files that aren't explicitly specified, which shouldn't be often.

kkmr pushed a commit to kkmr/OpenSearch that referenced this pull request Aug 28, 2023
* Default to mmapfs within hybridfs

Signed-off-by: David Zane <davizane@amazon.com>

* Add index setting validation func

Signed-off-by: David Zane <davizane@amazon.com>

* Reviewer comments

Signed-off-by: David Zane <davizane@amazon.com>

* Clean up, mmap.extensions validation

Signed-off-by: David Zane <davizane@amazon.com>

* Deprecation flag, build all_ext list

Signed-off-by: David Zane <davizane@amazon.com>

* Make nioExtensions unmodifiable

Signed-off-by: David Zane <davizane@amazon.com>

---------

Signed-off-by: David Zane <davizane@amazon.com>
Signed-off-by: Kiran Reddy <kkreddy@amazon.com>
@jainankitk
Collaborator

> Hmmm. I'm not sure I'm completely convinced on that blanket change as a prevention mechanism?
> I'm not sure this is a reason to default everything to MMap?

We are not doing that; the default behavior for all existing file extensions is the same as before. It's just that instead of specifying the files to mmap, the user specifies the files for nio, and everything else is mmapped. The default list of nio files ensures no behavior change between, say, 2.9 (without this PR) and 2.10 (with the PR).

> That's why I suggest we revert this for 2.10 and lean into benchmark testing for 2.11.

What are we benchmarking against? Since the behavior has not changed, the benchmark won't show anything!?

@nknize
Collaborator

nknize commented Aug 28, 2023

> It's just that instead of specifying the files to mmap, the user specifies the files for nio, and everything else is mmapped.

Sorry. I should've said "I'm not sure this is a reason to default everything <that's not explicitly marked> to mmap". I know the change does not switch all files to be mmapped. It's selecting MMapDirectory as the default for files that are not explicitly specified. That's what I meant by "if Lucene introduces a read-once-and-forget file that we're not explicitly specifying, why would we default to mmapping it?" I think that regardless of "the cache will just evict it," this is still a penalty whose consequences we need to be able to vet and understand.

> What are we benchmarking against? Since the behavior has not changed, the benchmark won't show anything!?

Right!! That's partly my point. Why make the change at all then if we don't show any benefit? Alternatively, I think we can possibly simulate something? Maybe a micro benchmark that introduces periodic page cache evictions and measures the impact on search? Maybe that will be too contrived? Not sure, but this is a good discussion to have before we decide to release it as a core change.

@andrross
Member

> Why make the change at all then if we don't show any benefit?

The rationale here is that, in the hypothetical situation of a new type of Lucene file, it's better to err on the side of mmap than nio. It's hard to quantify that because hypothetical situations can be constructed either way. The one concrete data point we have from @jainankitk is #825, where "err on the side of mmap" would have been the better behavior.

I think the ideal situation here is that HybridFsDirectory would have the ability to detect a previously unknown Lucene file, and then cause a failure somewhere in the CI pipeline (unit or integration tests, or nightly benchmarks, or (worst case) release testing). Then we would be forced to make an explicit choice about how to treat this file and the default behavior would not matter. Is that feasible? I think we'd want that behavior in any case, because it is better to make an informed decision versus relying on the default.

@jainankitk
Collaborator

> That's what I meant by "if Lucene introduces a read-once-and-forget file that we're not explicitly specifying, why would we default to mmapping it?" I think that regardless of "the cache will just evict it," this is still a penalty whose consequences we need to be able to vet and understand.

There are two sides to the what-if, and IMO the other side is much more disastrous. Also, given that Lucene itself uses MMapDirectory by default, the other way is the more likely possibility, as it would be completely untested by Lucene microbenchmarks.

> Right!! That's partly my point. Why make the change at all then if we don't show any benefit? Alternatively, I think we can possibly simulate something? Maybe a micro benchmark that introduces periodic page cache evictions and measures the impact on search? Maybe that will be too contrived? Not sure, but this is a good discussion to have before we decide to release it as a core change.

While I agree this change is not urgent, I am not sure we will have any micro benchmark in the near future. It does seem too contrived! :) Also, I will rephrase it as no short-term benefit, which is the case with most code refactoring/rewrites.

> I think the ideal situation here is that HybridFsDirectory would have the ability to detect a previously unknown Lucene file, and then cause a failure somewhere in the CI pipeline (unit or integration tests, or nightly benchmarks, or (worst case) release testing). Then we would be forced to make an explicit choice about how to treat this file and the default behavior would not matter. Is that feasible?

+1. Although, even while we are making that explicit choice (taking the time to run benchmarks/tests, etc.), we need a default fallback. And IMO, mmap is still better than nio in most cases for that duration.

@nknize
Collaborator

nknize commented Aug 29, 2023

> ...it's better to err on the side of mmap than nio

Except we don't have anything to substantiate this claim, so making the change is based on conjecture. There are Elasticsearch issues to substantiate the reasons for selecting certain files for NIOFS as well... otherwise everything would be MMap in Elasticsearch and this would be a non-issue. So I'm not so quick to take that assumption on blind faith. I like to lean into actual testing to substantiate it. Paper cuts add up fast... that doesn't seem to be the sentiment lately, for some strange reason.

> other side is much more disastrous

Why? I think calling it disastrous is a stretch, given that MMap is still a configurable option. Remember, we can still set files to default to MMap when we upgrade Lucene versions.

Whether we decide to keep this blind change or not, I think this project needs to start doing a better job of substantiating performance claims before unilaterally changing defaults on faith.

@reta
Collaborator

reta commented Aug 29, 2023

@andrross, to your point

> I think the ideal situation here is that HybridFsDirectory would have the ability to detect a previously unknown Lucene file, and then cause a failure somewhere in the CI pipeline (unit or integration tests, or nightly benchmarks, or (worst case) release testing).

It was suggested on pull request #8508 (comment) but not taken forward.

@navneet1v
Contributor

navneet1v commented Aug 31, 2023

@nknize, @jainankitk, @reta, @andrross: there is a long discussion happening on this feature and I am not sure I am able to follow all of it. But the one thing I am looking for here is whether we are reverting this change or not, given that the 2.10 release is in a few days and this has an impact on the builds and performance. I want to make sure there are no surprises at the end, during the release stage.

Next steps, as per my understanding: can we use a vote here to decide whether to revert the commit and continue the discussion in a separate GitHub issue? If there is any other way to resolve this, I am open to suggestions.

@reta
Collaborator

reta commented Aug 31, 2023

@navneet1v since there are concerns with a) the feature itself and b) the approach that led to this particular implementation, in my opinion reverting it before the 2.10.0 release and restarting the discussion on how to implement it, taking all the risks/gains into account, is the way to go. @nknize @andrross @jainankitk please share your opinions.

@jainankitk
Collaborator

jainankitk commented Sep 1, 2023

> Except we don't have anything to substantiate this claim, so making the change is based on conjecture. There are Elasticsearch issues to substantiate the reasons for selecting certain files for NIOFS as well... otherwise everything would be MMap in Elasticsearch and this would be a non-issue. So I'm not so quick to take that assumption on blind faith. I like to lean into actual testing to substantiate it. Paper cuts add up fast... that doesn't seem to be the sentiment lately, for some strange reason.

I gave this a bit more thought, and mmap should be kept as the fallback default in the hybrid directory for the following reasons (probably mentioned separately before; collating them here):

  • Lucene does not have a hybrid directory and uses mmap wherever it is supported. Amazon product search, which is a fairly large workload, does the same.
  • In my experience with the managed service, customers did not have any issues until Elasticsearch 6.8, even with mmap only. Essentially, mmap-only is a viable option whereas niofs-only is not. Hence, if I have to pick one of the two for a new file type whose access pattern I don't know, it should be mmap, IMO.
  • Even on the Elastic PR, after fixing the regression, hybridfs performance came out on par with mmapfs. With this change, we are again on par (as expected). Although, if you constrain the resources, the hybrid directory (niofs limited to certain file types) performs better; it is probably a matter of better resource utilization. That means:
    • If niofs is the right choice for a new file type, it will only impact resource-constrained (specifically native memory) customer workloads.
    • If mmap is the right choice for a new file type, it will impact every customer workload.
  • This also raises an important question: what is our primary product tenet, better performance OR less resource consumption, when both are practically viable choices?

To me, mmap seems the better fallback choice when we don't know much about the file access pattern. Specific users that are resource constrained can always use niofs explicitly.

> Why? I think calling it disastrous is a stretch, given that MMap is still a configurable option. Remember, we can still set files to default to MMap when we upgrade Lucene versions.

Fair enough, disastrous is a strong word; I take that back. Probably still hurting from FST off-heap! :(

> Whether we decide to keep this blind change or not, I think this project needs to start doing a better job of substantiating performance claims before unilaterally changing defaults on faith.

I think this applies to both zstd compression and this issue, although they are much different things. As I see it, falling back to niofs in the hybrid directory was a mistake in Elasticsearch 7.0 itself; I do not see any convincing data points in any of the PRs or issues. I only recently learnt about it, and am trying to address it for the long term.
Maybe we can add some numbers by running mmap-only and niofs-only for a few of the OpenSearch workloads. That should give us a better idea of how different the performance can be between mmap and niofs.

@jainankitk
Collaborator

> a) the feature itself and b) the approach that led to this particular implementation, in my opinion reverting it before the 2.10.0 release and restarting the discussion on how to implement it, taking all the risks/gains into account, is the way to go. @nknize @andrross @jainankitk please share your opinions.

I am okay with either approach, although we should agree on the concerns and what data points we are looking for. Also, given this is a settings addition/removal PR, committing/reverting/re-committing sounds tricky to me. But maybe it is simpler than it seems in my head.

@jainankitk
Collaborator

> I think the ideal situation here is that HybridFsDirectory would have the ability to detect a previously unknown Lucene file, and then cause a failure somewhere in the CI pipeline (unit or integration tests, or nightly benchmarks, or (worst case) release testing).
> It was suggested on pull request #8508 (comment) but not taken forward.

In addition to the fallback option, we can log a warning whenever a new file extension is encountered. That should force us to evaluate every "new" file we encounter. Thank you @andrross for the logging suggestion.
Although I am still not sure how we can do this in the unit/integration tests, as they might be limited to compound file formats.
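
A minimal sketch of what such a guard could look like, with hypothetical class and method names (this is not part of the PR, and OpenSearch would use its own logging infrastructure rather than java.util.logging):

```java
import java.util.Set;
import java.util.logging.Logger;

// Hypothetical guard: warn when a segment file's extension is in neither the
// nio list nor the known mmap list, so a new Lucene file type triggers an
// explicit decision instead of silently taking the default.
final class UnknownExtensionDetector {
    private static final Logger LOGGER = Logger.getLogger(UnknownExtensionDetector.class.getName());

    private final Set<String> nioExtensions;
    private final Set<String> knownMmapExtensions;

    UnknownExtensionDetector(Set<String> nioExtensions, Set<String> knownMmapExtensions) {
        this.nioExtensions = Set.copyOf(nioExtensions);
        this.knownMmapExtensions = Set.copyOf(knownMmapExtensions);
    }

    void check(String fileName) {
        int dot = fileName.lastIndexOf('.');
        String ext = (dot == -1) ? "" : fileName.substring(dot + 1);
        if (!nioExtensions.contains(ext) && !knownMmapExtensions.contains(ext)) {
            // Still falls back to mmap per this PR, but the warning surfaces it
            // in logs (and nightly benchmarks could fail or cut issues on it).
            LOGGER.warning("Unknown Lucene file extension [" + ext + "]; defaulting to mmap");
        }
    }
}
```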

@andrross
Member

andrross commented Sep 1, 2023

> Also, given this is a settings addition/removal PR, committing/reverting/re-committing sounds tricky to me.

This is all mechanical work. It may be annoying, but it is simple. The new setting is much more of a one-way door, though. Once it goes out in the 2.10 release, we're committing to supporting it through at least the 3.x releases.

> how we can do this in the unit/integration tests, as they might be limited to compound file formats.

Agreed, we will likely get limited coverage in unit and integration tests. We'd probably want to add something to the nightly benchmarks to fail or cut issues on warnings in log statements.

> If niofs is the right choice for a new file type, it will only impact resource-constrained (specifically native memory) customer workloads.

This is the key point for me. What happens to the resource-constrained user if nio was actually the better choice but we mmap'd it? If the answer is that performance is a little bit worse, then mmap-by-default is reasonable. But if there is a real risk of the OS OOM killer killing the node, then maybe nio by default is the better choice. @jainankitk What do you think?

@jainankitk
Collaborator

> This is all mechanical work. It may be annoying, but it is simple. The new setting is much more of a one-way door, though. Once it goes out in the 2.10 release, we're committing to supporting it through at least the 3.x releases.

That's a good point!

> What happens to the resource-constrained user if nio was actually the better choice but we mmap'd it? If the answer is that performance is a little bit worse, then mmap-by-default is reasonable. But if there is a real risk of the OS OOM killer killing the node, then maybe nio by default is the better choice. @jainankitk What do you think?

There is no risk of the OOM killer afaik; just the performance is worse, as the OS spends more time figuring out whether there is an empty page. The same is confirmed by this comment in the ES PR: "I also ran an update-heavy workload with 40% id conflicts on a larger index of 75GB on a system that only has 8GB available page cache. The baseline (mmapfs) results in 12000 docs/s median indexing throughput whereas hybridfs with this change results in 21800 docs/s."

kaushalmahi12 pushed a commit to kaushalmahi12/OpenSearch that referenced this pull request Sep 12, 2023
brusic pushed a commit to brusic/OpenSearch that referenced this pull request Sep 25, 2023
shiv0408 pushed a commit to Gaurav614/OpenSearch that referenced this pull request Apr 25, 2024
Labels: backport 2.x, v2.10.0
Linked issues: [BUG] Default to mmapfs within hybridfs
8 participants