[SNAP-2366] row buffer fault-in, forced roll-over #391

Open · wants to merge 29 commits into snappy/master
Conversation

@sumwale commented Jun 4, 2018

Changes proposed in this pull request

  • force fault-in in MemHeapScanController for row buffer iterations;
    added RegionEntryUtils.fillRowFaultInOptimized for the same (see the first
    sketch after this list)
  • also fault in regular row table data for qualified entries if any filters
    were pushed down to the scan level
  • update region last modified time for destroy/invalidate and transaction ops too
    (to determine idle time for forced roll-over)
  • add a Predicate check just before the actual rollover in BucketRegion under the lock
    (see the second sketch after this list)
  • added a StoppableReentrantReadWriteLock.lockDelayCancel method that skips the
    cancellation check in the good case where the lock is acquired on the first try
    (the cancellation check is still done whenever the lock cannot be acquired on the
    first try)
  • new SYS.PURGE_CODEGEN_CACHES() procedure to clear code generation caches (both the
    Spark and Snappy caches); the actual implementation of the procedure body is in the
    snappydata layer
  • some perf optimizations seen in profiling
    • use a global system property to disable GemFire compression (disabled by default in Snappy)
    • removed the TStateless* classes and replaced them with koloboke maps
    • removed the trove extensions THashMapWithCreate and TObjectLongHashMapWithIndex and
      replaced them with koloboke maps; also replaced some usages of TObjectIntHashMap in
      TX code
    • optimized CustomEntryConcurrentHashMap iteration by using a local expandable array
      for copying the current chain instead of an ArrayList
    • made ObjectObjectHashMap serializable by implementing Externalizable
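
First sketch: a minimal, self-contained illustration of the fault-in behaviour described in the first two items above. The RowEntry type and the scan method are hypothetical stand-ins, not the actual MemHeapScanController or RegionEntryUtils.fillRowFaultInOptimized code; they only show the intent that row buffers are always faulted in, while regular row tables fault in only the rows qualified by a pushed-down filter.

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical stand-ins only; not the actual scan controller code.
public final class FaultInScanSketch {

  /** Stand-in for a region entry whose value may have been overflowed to disk. */
  static final class RowEntry {
    private Object valueInMemory;     // null when the row lives only on disk
    private final Object valueOnDisk; // stand-in for the overflowed copy

    RowEntry(Object inMemory, Object onDisk) {
      this.valueInMemory = inMemory;
      this.valueOnDisk = onDisk;
    }

    /** Read the value without changing where it lives. */
    Object readValue() {
      return valueInMemory != null ? valueInMemory : valueOnDisk;
    }

    /** Force the value back into memory ("fault-in"). */
    void faultIn() {
      if (valueInMemory == null) {
        valueInMemory = valueOnDisk; // real code would read from the disk store
      }
    }
  }

  /**
   * Row buffers are always faulted in during iteration; regular row tables
   * fault in only the rows that qualify against a filter pushed down to the scan.
   */
  static int scan(List<RowEntry> entries, boolean isRowBuffer,
      Predicate<Object> pushedDownFilter) {
    int qualified = 0;
    for (RowEntry entry : entries) {
      if (isRowBuffer) {
        entry.faultIn();
        qualified++;
      } else {
        Object value = entry.readValue();
        if (pushedDownFilter == null || pushedDownFilter.test(value)) {
          entry.faultIn(); // only qualified rows pay the fault-in cost
          qualified++;
        }
      }
    }
    return qualified;
  }

  public static void main(String[] args) {
    List<RowEntry> rows = List.of(
        new RowEntry(null, "overflowed-row"),
        new RowEntry("resident-row", "resident-row"));
    System.out.println("qualified = " + scan(rows, false, v -> v.toString().startsWith("over")));
  }
}
```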

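Second sketch: re-checking the roll-over condition once the lock is actually held, with the idle-time decision based on the last modified time that the destroy/invalidate/transaction ops above now keep current. BucketState, the plain ReentrantReadWriteLock and the method names are illustrative assumptions rather than the actual BucketRegion code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Predicate;

// Illustrative assumptions only; not the actual BucketRegion code.
public final class RollOverSketch {

  /** Stand-in for the mutable state of a bucket holding a row buffer. */
  static final class BucketState {
    volatile long lastModifiedNanos = System.nanoTime();
    volatile long rowBufferSize = 128;

    /** Destroy/invalidate/transaction ops would update this too, per the change above. */
    void recordModification() {
      lastModifiedNanos = System.nanoTime();
    }
  }

  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final BucketState state = new BucketState();

  /** Roll over only if the bucket still qualifies once the lock is actually held. */
  boolean rollOverIfIdle(long idleNanos, Predicate<BucketState> shouldRollOver) {
    // cheap idle-time check without the lock first
    if (System.nanoTime() - state.lastModifiedNanos < idleNanos) {
      return false;
    }
    lock.writeLock().lock();
    try {
      // conditions may have changed while waiting for the lock, so re-check here
      if (!shouldRollOver.test(state)) {
        return false;
      }
      // ... move the row buffer contents into the column store here ...
      state.rowBufferSize = 0;
      return true;
    } finally {
      lock.writeLock().unlock();
    }
  }

  public static void main(String[] args) {
    RollOverSketch sketch = new RollOverSketch();
    boolean rolled = sketch.rollOverIfIdle(0L, s -> s.rowBufferSize > 0);
    System.out.println("rolled over = " + rolled);
  }
}
```
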
Patch testing

precheckin

ReleaseNotes changes

new SYS.PURGE_CODEGEN_CACHES() procedure
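
For reference, the procedure can be invoked from a plain JDBC client as shown below; the connection URL handling is a placeholder and not something defined by this PR.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Pass the JDBC URL of a running SnappyData/GemFireXD system as the first argument.
public final class PurgeCodegenCachesExample {

  public static void main(String[] args) throws SQLException {
    if (args.length == 0) {
      System.out.println("usage: PurgeCodegenCachesExample <jdbc-url>");
      return;
    }
    try (Connection conn = DriverManager.getConnection(args[0]);
         CallableStatement cs = conn.prepareCall("call SYS.PURGE_CODEGEN_CACHES()")) {
      cs.execute(); // clears both the Spark and SnappyData code generation caches
    }
  }
}
```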

Other PRs

TIBCOSoftware/snappydata#1046

Sumedh Wale added 8 commits May 30, 2018 05:35
- The reason is that the "regionContext" set inside ColumnFormatValue (via the base
  SerializedDiskBuffer) always remains a PlaceHolderRegion and never changes to a Region,
  so ColumnFormatValue cannot update the size stats when required.
- Now also invoke the setDiskEntry method in AbstractRegionEntry.setOwner to fix the above
  (sketched below).
- Other minor cleanups and a method rename.
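
A hypothetical sketch of that fix using made-up stand-in types (the real classes are AbstractRegionEntry and ColumnFormatValue, whose internals are not reproduced here): setting the owner on an entry also forwards the real region into the disk-backed value, so size statistics reach the real region instead of a placeholder.

```java
// Illustrative names only; not the actual AbstractRegionEntry/ColumnFormatValue code.
public final class OwnerPropagationSketch {

  interface Region {
    void updateSizeStats(long delta);
  }

  /** Stand-in for a disk-backed value such as ColumnFormatValue. */
  static final class DiskBackedValue {
    private Region regionContext; // starts out as a placeholder in the real code

    void setRegionContext(Region region) {
      this.regionContext = region;
    }

    void release(long sizeInBytes) {
      if (regionContext != null) {
        regionContext.updateSizeStats(-sizeInBytes); // now reaches the real region
      }
    }
  }

  /** Stand-in for a region entry. */
  static final class RegionEntry {
    private Region owner;
    private DiskBackedValue value;

    void setValue(DiskBackedValue value) {
      this.value = value;
    }

    /** The fix: setting the owner also pushes the real region into the value. */
    void setOwner(Region owner) {
      this.owner = owner;
      if (value != null) {
        value.setRegionContext(owner);
      }
    }
  }

  public static void main(String[] args) {
    RegionEntry entry = new RegionEntry();
    DiskBackedValue value = new DiskBackedValue();
    entry.setValue(value);
    entry.setOwner(delta -> System.out.println("size stats updated by " + delta));
    value.release(4096);
  }
}
```
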
- force fault-in in MemHeapScanController for row buffer iterations;
  added RegionEntryUtils.fillRowFaultInOptimized for the same
- also fault-in regular row table data for qualified entries if there were any
  filters that were pushed down to the scan level
- update region last modified time for destroy/invalidate and transaction ops too
  (to determine idle time for forced roll-over)
- add a Predicate check just before actual rollover in BucketRegion under the lock
- added a StoppableReentrantReadWriteLock.lockDelayCancel method to avoid a cancellation
  check for the good case of successful lock acquisition on the first try (the cancellation
  check will still be done whenever the lock could not be acquired on the first try); see
  the sketch below
- perf improvement: replaced TStateless* classes by Koloboke maps/sets
- removed now obsolete TStateless* classes
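
A sketch of the lockDelayCancel fast path under stated assumptions: the real StoppableReentrantReadWriteLock consults a CancelCriterion and wraps a read/write lock, while here a Supplier<RuntimeException> and a plain ReentrantLock stand in for them.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Stand-ins only; not the actual StoppableReentrantReadWriteLock code.
public final class DelayCancelLockSketch {

  private final ReentrantLock lock = new ReentrantLock();

  /**
   * Fast path: if tryLock() succeeds immediately, skip the cancellation check.
   * Slow path: wait in bounded chunks and check for cancellation between waits.
   */
  void lockDelayCancel(Supplier<RuntimeException> cancelCheck, long checkIntervalMillis)
      throws InterruptedException {
    if (lock.tryLock()) {
      return; // good case: acquired on first try, no cancellation check needed
    }
    for (;;) {
      RuntimeException cancelled = cancelCheck.get();
      if (cancelled != null) {
        throw cancelled; // system is shutting down, give up waiting
      }
      if (lock.tryLock(checkIntervalMillis, TimeUnit.MILLISECONDS)) {
        return;
      }
    }
  }

  void unlock() {
    lock.unlock();
  }

  public static void main(String[] args) throws InterruptedException {
    DelayCancelLockSketch sketch = new DelayCancelLockSketch();
    sketch.lockDelayCancel(() -> null, 100L); // never cancelled in this demo
    try {
      System.out.println("lock acquired on the fast path");
    } finally {
      sketch.unlock();
    }
  }
}
```
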
…iling

- new PURGE_CODEGEN_CACHES procedure to clear code generation caches
- removed the trove extensions THashMapWithCreate and TObjectLongHashMapWithIndex and
  replaced them with koloboke maps; also replaced some usages of TObjectIntHashMap in the
  TX code path
- optimized CustomEntryConcurrentHashMap iteration by using a local expandable array
  for copying the current chain instead of an ArrayList (sketched below)
- made ObjectObjectHashMap serializable by implementing Externalizable
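
A small sketch of the chain-copy optimization above: the current hash-bucket chain is copied into a plain Object[] that is grown locally instead of being accumulated in an ArrayList. The Node type and copyChain method are illustrative only, not the actual CustomEntryConcurrentHashMap iterator.

```java
import java.util.Arrays;

// Illustrative only; not the actual CustomEntryConcurrentHashMap iterator.
public final class ChainCopySketch {

  /** Stand-in for a node in a hash-bucket chain. */
  static final class Node {
    final Object value;
    final Node next;

    Node(Object value, Node next) {
      this.value = value;
      this.next = next;
    }
  }

  /**
   * Copy the chain into a reusable array, doubling its size as needed; returns the
   * (possibly re-allocated) array with a null terminator after the last element
   * when there is room.
   */
  static Object[] copyChain(Node head, Object[] buffer) {
    int size = 0;
    for (Node n = head; n != null; n = n.next) {
      if (size == buffer.length) {
        buffer = Arrays.copyOf(buffer, buffer.length * 2); // grow instead of using an ArrayList
      }
      buffer[size++] = n.value;
    }
    if (size < buffer.length) {
      buffer[size] = null; // terminator so the caller knows where the chain ends
    }
    return buffer;
  }

  public static void main(String[] args) {
    Node chain = new Node("a", new Node("b", new Node("c", null)));
    Object[] buffer = copyChain(chain, new Object[2]);
    for (Object v : buffer) {
      if (v == null) break;
      System.out.println(v);
    }
  }
}
```
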
Sumedh Wale added 17 commits June 6, 2018 12:54
Conflicts:
	gemfire-core/src/main/java/com/gemstone/gemfire/internal/cache/LocalRegion.java
- skip SYS tables in memory accounting; this fixes failures in snappy UMM accounting
  tests where the storage size may change after restart because SYS entries are
  initialized in a different order by the hive meta-store initialization
- skip destroyed entries in ObjectSizer row count
- reduce per-table entries from 100K to 40K in PersistenceRecoveryOrderDUnit and
  use batch inserts instead of single inserts (see the JDBC sketch below); this reduces
  the running time of a couple of tests in the suite from 4 minutes to 30 seconds
- also fixed occasional failures in PersistenceRecoveryOrderDUnit by explicitly dropping
  tables in the correct order (after starting a new server in a test for the same)
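
A generic JDBC sketch of the single-insert to batch-insert change mentioned above; the table, columns and connection handling are placeholders and are not taken from PersistenceRecoveryOrderDUnit.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Placeholder table/URL; illustrates batch inserts versus one statement per row.
public final class BatchInsertSketch {

  static void insertRows(Connection conn, int numRows, int batchSize) throws SQLException {
    try (PreparedStatement ps =
        conn.prepareStatement("insert into test_table (id, data) values (?, ?)")) {
      for (int i = 1; i <= numRows; i++) {
        ps.setInt(1, i);
        ps.setString(2, "row-" + i);
        ps.addBatch();
        if (i % batchSize == 0) {
          ps.executeBatch(); // one round trip per batch instead of one per row
        }
      }
      ps.executeBatch(); // flush any remaining rows
    }
  }

  public static void main(String[] args) throws SQLException {
    if (args.length == 0) {
      System.out.println("usage: BatchInsertSketch <jdbc-url>");
      return;
    }
    try (Connection conn = DriverManager.getConnection(args[0])) {
      insertRows(conn, 40_000, 1_000);
    }
  }
}
```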