- improve performance
  - store the bucket buffer of downlevel buckets as a slice
  - use elist_head as the key/value item (see the sketch below)
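A minimal sketch of the second point, assuming a simplified stand-in `ListHead` type: embedding the list head directly in the key/value item makes every item its own list node, so linking requires no extra per-node allocation. The real elist_head API differs from this stand-in.

```go
package main

import (
	"fmt"
	"unsafe"
)

// ListHead is a minimal stand-in for an embedded (intrusive) list head;
// the name and fields are illustrative, not the elist_head package's API.
type ListHead struct {
	next *ListHead
}

// entry embeds ListHead as its first field, so every key/value item is
// its own list node and linking allocates nothing beyond the entry itself.
type entry struct {
	ListHead // must stay the first field for the offset-0 conversion below
	key      string
	value    int
}

// insertAfter links e into the list right after head.
func (head *ListHead) insertAfter(e *ListHead) {
	e.next = head.next
	head.next = e
}

// entryOf recovers the containing entry from its embedded head. Because
// ListHead sits at offset 0, a direct pointer conversion suffices here;
// the general intrusive-list case needs offset arithmetic.
func entryOf(h *ListHead) *entry {
	return (*entry)(unsafe.Pointer(h))
}

func main() {
	var root ListHead
	b := &entry{key: "b", value: 2}
	a := &entry{key: "a", value: 1}
	root.insertAfter(&b.ListHead)
	root.insertAfter(&a.ListHead)
	for h := root.next; h != nil; h = h.next {
		e := entryOf(h)
		fmt.Println(e.key, e.value) // prints "a 1" then "b 2"
	}
}
```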
- support a user-defined key order (see the sketch below)
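The library's actual option API is not shown in this entry, so the following is only a hypothetical sketch of the functional-option shape such a feature usually takes; `WithKeyOrder` and all types below are invented for illustration.

```go
package main

import "fmt"

// Map is a placeholder container that records a user-supplied ordering.
type Map[K comparable, V any] struct {
	less func(a, b K) bool // reports whether a sorts before b
}

// Opt configures a Map at construction time.
type Opt[K comparable, V any] func(*Map[K, V])

// WithKeyOrder installs a user-defined key ordering (hypothetical name).
func WithKeyOrder[K comparable, V any](less func(a, b K) bool) Opt[K, V] {
	return func(m *Map[K, V]) { m.less = less }
}

// New builds a Map and applies the options.
func New[K comparable, V any](opts ...Opt[K, V]) *Map[K, V] {
	m := &Map[K, V]{}
	for _, o := range opts {
		o(m)
	}
	return m
}

func main() {
	// Order string keys by length instead of lexicographically.
	m := New(WithKeyOrder[string, int](func(a, b string) bool {
		return len(a) < len(b)
	}))
	fmt.Println(m.less("aa", "b")) // false: "aa" is not shorter than "b"
}
```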
- refactoring
- fix race condition (illustrative sketch after this item)
  - add/update in the embedded itempool
  - add/update in the outside itempool
  - global sharedSearchOpt: define it as an atomic.Value
  - SampleItem.SetValue
  - add //go:nocheckptr to elist_head.Ptr()
  - sp.items = sp.items[:i+1]
  - embedded mode
    - _findBucket -> b.downLevels[idx].level; bucketFromPoolEmbedded -> b.downLevels = b.downLevels[:idx+1]
    - appendLast -> sp.items = sp.items[:olen+1] -> use updateItem()
    - [s] map.Set() -> s.PtrMapHead().reverse = bits.Reverse64(k)
    - getWithBucket -> e.PtrMapHead().reverse != bits.Reverse64(k) || e.PtrMapHead().conflict != conflict; m.TestSet -> item.PtrMapHead().reverse = bits.Reverse64(k)
    - samepleItemPool.insertToPool() -> copy(newItems[i+1:], sp.items[i:]); SampleItem.SetValue() -> s.V.Store()
    - insertToPool -> updateItems(); map._set() -> item.PtrMapHead().conflict = conflict
    - state4get -> len, cap via updateItems()
    - map._findBucket -> bucketDowns := b.downLevels; map.bucketFromPoolEmbedded() -> b.downLevels = b.downLevels[:idx+1]
    - samepleItemPool._split -> sp.items = sp.items[:idx:idx]; samepleItemPool.At() -> use samepleItemPool.updateWithLock()
    - map._findBucket() -> if bucketDowns.len <= idx || bucketDowns.at(idx).level == 0 {; h.bucketFromPoolEmbedded() -> b.downLevels = make([]bucket, 1, 16) (change initialization of bucketFromPoolEmbedded())
    - map.makeBucket2() -> h.bucketFromPoolEmbedded(newReverse); map._findBucket() -> if bucketDowns.len <= idx || bucketDowns.at(idx).level == 0 {
  - non-embedded mode
    - bucketFromPool() -> oBucket.downLevels = oBucket.downLevels[:oIdx+1] and list.at()
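Three techniques recur through the fixes above: publishing shared options through an atomic.Value (sharedSearchOpt), storing item values atomically (s.V.Store() in SampleItem.SetValue), and inserting into the item pool by copying into a fresh slice (copy(newItems[i+1:], sp.items[i:]) in insertToPool). A minimal sketch of all three, using simplified stand-in types rather than the library's real ones:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// searchOpt mirrors the idea behind the sharedSearchOpt fix: readers load
// an immutable snapshot through atomic.Value instead of reading fields
// that a writer may be mutating concurrently.
type searchOpt struct {
	ignoreCase bool
}

var sharedSearchOpt atomic.Value // always holds a *searchOpt

func init() { sharedSearchOpt.Store(&searchOpt{}) }

// updateOpt publishes a fresh option struct; readers never see a torn write.
func updateOpt(o *searchOpt) { sharedSearchOpt.Store(o) }

func currentOpt() *searchOpt { return sharedSearchOpt.Load().(*searchOpt) }

// item is a stand-in for SampleItem: the value lives in an atomic.Value so
// SetValue can race safely with readers (the s.V.Store() change above).
type item struct {
	key string
	v   atomic.Value
}

func (i *item) SetValue(val interface{}) { i.v.Store(val) }

// insertAt shows the copy-based insert from insertToPool(): build a new
// slice and copy around the gap, so concurrent readers holding the old
// slice keep a consistent view.
func insertAt(items []*item, i int, it *item) []*item {
	newItems := make([]*item, len(items)+1)
	copy(newItems, items[:i])
	newItems[i] = it
	copy(newItems[i+1:], items[i:])
	return newItems
}

func main() {
	updateOpt(&searchOpt{ignoreCase: true})
	fmt.Println(currentOpt().ignoreCase) // true

	a, b := &item{key: "a"}, &item{key: "b"}
	pool := insertAt([]*item{a}, 1, b)
	b.SetValue(42)
	fmt.Println(len(pool), b.v.Load()) // 2 42
}
```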
- Implement Purge() (see the sketch below)
  - basic implementation
  - waiting/locking for traversals during purging
  - purge buckets
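A minimal sketch of the waiting/locking point, assuming a coarse RWMutex: traversals hold the read lock, so a purge waits for in-flight traversals to finish and blocks new ones while it runs. The real purge is presumably finer-grained; all names below are stand-ins.

```go
package main

import (
	"fmt"
	"sync"
)

// purgeableMap is a simplified stand-in: traversals take the read lock,
// Purge takes the write lock, so purging and traversing never interleave.
type purgeableMap struct {
	mu      sync.RWMutex
	buckets map[uint64][]string
}

// Range visits every bucket under the read lock, so a concurrent Purge
// must wait until all traversals are done.
func (m *purgeableMap) Range(fn func(k uint64, items []string)) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	for k, items := range m.buckets {
		fn(k, items)
	}
}

// Purge drops empty buckets; it cannot start until traversals finish.
func (m *purgeableMap) Purge() {
	m.mu.Lock()
	defer m.mu.Unlock()
	for k, items := range m.buckets {
		if len(items) == 0 {
			delete(m.buckets, k)
		}
	}
}

func main() {
	m := &purgeableMap{buckets: map[uint64][]string{1: {"a"}, 2: {}}}
	m.Purge()
	m.Range(func(k uint64, items []string) { fmt.Println(k, items) }) // 1 [a]
}
```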
- implement syncMap()
  - a new sync.Map implementation that rewrites the dirty map as a skiplistmap.Map (sketch below)
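A rough sketch of the idea, with skiplistmap.Map behind a placeholder interface and the read/dirty split of sync.Map heavily simplified; none of the types below mirror the actual implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// orderedMap is a placeholder for skiplistmap.Map: only the two operations
// this sketch needs are declared, and the real method set differs.
type orderedMap interface {
	Set(key string, v interface{})
	Get(key string) (interface{}, bool)
}

// stub lets the sketch run without the library.
type stub map[string]interface{}

func (s stub) Set(key string, v interface{}) { s[key] = v }
func (s stub) Get(key string) (interface{}, bool) {
	v, ok := s[key]
	return v, ok
}

// syncMap keeps sync.Map's read/dirty split, but the dirty side is an
// ordered skip-list map, which is the rewrite the entry above describes.
// Promotion of dirty to read and the real sync.Map's miss counting are
// omitted here.
type syncMap struct {
	mu    sync.Mutex
	read  map[string]interface{} // immutable snapshot, read without locking
	dirty orderedMap             // skiplistmap.Map in the real code
}

func (m *syncMap) Store(key string, v interface{}) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.dirty.Set(key, v)
}

func (m *syncMap) Load(key string) (interface{}, bool) {
	if v, ok := m.read[key]; ok { // fast path: no lock
		return v, ok
	}
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.dirty.Get(key)
}

func main() {
	m := &syncMap{read: map[string]interface{}{}, dirty: stub{}}
	m.Store("a", 1)
	fmt.Println(m.Load("a")) // 1 true
}
```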
- remove unused functions
- RunLazyUnlocker() (concept sketch below)
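The entry gives only the name, so the following is a guess at the underlying pattern: a lock acquired during traversal whose unlock is handed back as a function and run lazily by the caller once it is done with the protected data. The library's real semantics may differ.

```go
package main

import (
	"fmt"
	"sync"
)

// lazyUnlocker packages a pending unlock as a plain function, so the
// locking site and the unlocking site can be decoupled.
type lazyUnlocker func()

// lockFor locks mu and returns the deferred unlock to run later.
func lockFor(mu *sync.Mutex) lazyUnlocker {
	mu.Lock()
	return mu.Unlock
}

func main() {
	var mu sync.Mutex
	unlock := lockFor(&mu)
	// ... read the protected structure here ...
	fmt.Println("holding lock")
	unlock() // run the lazy unlocker when finished
}
```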
- map.AddLen() -> addLen()
- map.GetWithFn() (usage sketch below)
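No signature is given for this entry, so the sketch below only illustrates the callback-style accessor the name suggests: the map runs fn on the value while the item is still valid instead of returning it to the caller. The real GetWithFn signature is assumed, not confirmed.

```go
package main

import "fmt"

// entry is a simplified key/value item.
type entry struct {
	key string
	val int
}

// store is a stand-in container. GetWithFn invokes fn on the value while
// the item is still valid, so the caller never holds a reference to the
// item past the call; the signature here is hypothetical.
type store struct{ items map[string]*entry }

func (s *store) GetWithFn(key string, fn func(v interface{})) bool {
	e, ok := s.items[key]
	if !ok {
		return false
	}
	fn(e.val)
	return true
}

func main() {
	s := &store{items: map[string]*entry{"a": {key: "a", val: 1}}}
	ok := s.GetWithFn("a", func(v interface{}) { fmt.Println("value:", v) })
	fmt.Println(ok) // true
}
```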
- Implement Reset()