Check failed: result >= enforced_min_time #2571

Closed
ndeodhar opened this issue Oct 10, 2019 · 12 comments
Labels: area/docdb (YugabyteDB core features), kind/bug, priority/high

ndeodhar (Contributor) commented Oct 10, 2019

Saw this on a 2DC-enabled cluster:

F20191010 15:15:05 ../../src/yb/tablet/mvcc.cc:340] Check failed: result >= enforced_min_time ({ physical: 1570720498089740 logical: 1 } vs. { physical: 1570720505667409 }) T abecd1b22804401089bb206116ea31d0 P 878fbc7cb51f4c8d9a2786380e48828a: : has_lease=1, enforced_min_time.ToUint64() - result.ToUint64()=31038132223, ht_lease={ physical: 1570720507406734 }, max_ht_lease_seen_={ physical: 1570720507406734 }, last_replicated_={ physical: 1570720498086624 logical: 2 }, clock_->Now()={ physical: 1570720505777680 }, ToString(deadline)=9223372036.855s, queue_.size()=1, queue_=[{ physical: 1570720498089740 logical: 2 }]
    @     0x7f6865e9b0c5  yb::LogFatalHandlerSink::send()
    @     0x7f68650a2616  google::LogMessage::SendToLog()
    @     0x7f686509fd0a  google::LogMessage::Flush()
    @     0x7f68650a2c39  google::LogMessageFatal::~LogMessageFatal()
    @     0x7f686ce689a5  yb::tablet::MvccManager::DoGetSafeTime()
    @     0x7f686ce695b5  yb::tablet::MvccManager::SafeTime()
    @     0x7f686cdee307  yb::tablet::Tablet::DoGetSafeTime()
    @     0x7f686d8bccbc  yb::tserver::ReadContext::PickReadTime()
    @     0x7f686d8aa289  yb::tserver::TabletServiceImpl::Read()
    @     0x7f686b9f80ba  yb::tserver::TabletServerServiceIf::Handle()
    @     0x7f68679320d1  yb::rpc::ServicePoolImpl::Handle()
    @     0x7f68678de764  yb::rpc::InboundCall::InboundCallTask::Run()
    @     0x7f686793dc58  yb::rpc::(anonymous namespace)::Worker::Execute()
    @     0x7f6865f23a19  yb::Thread::SuperviseThread()
    @     0x7f6860a22694  start_thread
    @     0x7f686015f41d  __clone
    @              (nil)  (unknown)

Unpacking the CHECK failure message:

Check failed: result >= enforced_min_time 
({ physical: 1570720498089740 logical: 1 } vs. { physical: 1570720505667409 }) 
T abecd1b22804401089bb206116ea31d0 P 878fbc7cb51f4c8d9a2786380e48828a: : has_lease=1,
enforced_min_time.ToUint64() - result.ToUint64()=31038132223,
ht_lease={ physical: 1570720507406734 },
max_ht_lease_seen_={ physical: 1570720507406734 },
last_replicated_={ physical: 1570720498086624 logical: 2 },
clock_->Now()={ physical: 1570720505777680 },
ToString(deadline)=9223372036.855s,
queue_.size()=1,
queue_=[{ physical: 1570720498089740 logical: 2 }]
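
To decode the raw numbers: a HybridTime packs the physical clock (microseconds) into the upper bits and a logical counter into the low bits, so the raw difference of 31038132223 corresponds to a physical gap of roughly 7.58 seconds. A minimal standalone sketch of that decoding, assuming a 12-bit logical component (an assumption about yb::HybridTime's encoding, not stated in this thread):

#include <cstdint>
#include <cstdio>

// Toy decoding of the two hybrid times from the CHECK message above.
// kBitsForLogical = 12 is an assumed encoding detail.
constexpr int kBitsForLogical = 12;

int main() {
  const uint64_t result = (1570720498089740ULL << kBitsForLogical) | 1;  // logical: 1
  const uint64_t enforced_min = 1570720505667409ULL << kBitsForLogical;
  std::printf("raw diff      = %llu\n",
              static_cast<unsigned long long>(enforced_min - result));  // 31038132223
  std::printf("physical diff = %llu us\n",
              static_cast<unsigned long long>(
                  (enforced_min >> kBitsForLogical) -
                  (result >> kBitsForLogical)));  // 7577669 us, ~7.58 s
  return 0;
}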
ndeodhar added the kind/bug and area/docdb labels on Oct 10, 2019
ndeodhar changed the title from "MVCC enforced min time check failure" to "Check failed: result >= enforced_min_time" on Oct 10, 2019
bmatican (Contributor):

Since this was in the context of CDC, @ndeodhar, do you know if this was just local DC writes, or if it would involve writes from the other DC having been replicated over?

Also added @mbautin and @spolitov for some info on when this might happen.

ndeodhar (Contributor, Author):

If I'm not wrong, this was observed on the producer cluster, so it would just be local writes (the load was only being run on the producer).

bmatican (Contributor):

cc @amitanandaiyer as he said he had seen something similar during the index backfill work.

bmatican (Contributor) commented Nov 8, 2019

Just saw this the other day in a non-CDC-enabled cluster. The main differences were:

  • servers spread across 3 data centers: 2 east, 1 west
  • max clock skew set to 500ms
  • leaders all bound to 1 east region (not sure if relevant)

The bigger problem was that, after the crash, the tablets failed to bootstrap! The failure message was: Already present (yb/consensus/replica_state.cc:676): Duplicate request

More importantly, this did not clear! We went through the process of removing the bad data on a follower and restarting it, and it got back into the same state. This suggests the data that triggered this was corrupted on the leader, and Remote Bootstrap brought it over!

cc @spolitov @mbautin

bmatican added the priority/high label on Nov 8, 2019
mbautin (Contributor) commented Nov 8, 2019

@bmatican: could you share the check failure message from the most recent occurrence of this bug with CDC disabled?

mbautin (Contributor) commented Nov 8, 2019

Rewriting the original error message here, subtracting 1570720490000000 from all microsecond timestamps:

Check failed: result >= enforced_min_time 
({ physical: 8089740 logical: 1 } vs. { physical: 15667409 }) 
T abecd1b22804401089bb206116ea31d0 P 878fbc7cb51f4c8d9a2786380e48828a: : has_lease=1,
enforced_min_time.ToUint64() - result.ToUint64()=31038132223,
ht_lease={ physical: 17406734 },
max_ht_lease_seen_={ physical: 17406734 },
last_replicated_={ physical: 8086624 logical: 2 },
clock_->Now()={ physical: 15777680 },
ToString(deadline)=9223372036.855s,
queue_.size()=1,
queue_=[{ physical: 8089740 logical: 2 }]

The computed safe time is 7577669 microseconds (about 7.58 seconds) behind the previously returned safe time.

bmatican (Contributor) commented Nov 8, 2019

F20191106 22:37:35 ../../src/yb/tablet/mvcc.cc:351] Check failed: result >= enforced_min_time ({ physical: 1573079829591857 logical: 4095 } vs. { physical: 1573079854991617 }) T 0fbf4af6478f464fa04d2fbb47674b18 P d1994047d572450c84f24f2b4781250b: : has_lease=1, enforced_min_time.ToUint64() - result.ToUint64()=104037412865, ht_lease={ physical: 1573079856973519 }, max_ht_lease_seen_={ physical: 1573079856973519 }, last_replicated_={ physical: 1573079829293219 }, clock_->Now()={ physical: 1573079855097236 }, ToString(deadline)=9223372036.855s, queue_.size()=1, queue_=[{ physical: 1573079829591858 }]

This was the most recent failure message, @mbautin.

Unpacked:

F20191106 22:37:35 ../../src/yb/tablet/mvcc.cc:351] 
Check failed: result >= enforced_min_time
({ physical: 1573079829591857 logical: 4095 } vs. { physical: 1573079854991617 })
T 0fbf4af6478f464fa04d2fbb47674b18 P d1994047d572450c84f24f2b4781250b:
has_lease=1,
enforced_min_time.ToUint64() - result.ToUint64()=104037412865,
ht_lease={ physical: 1573079856973519 },
max_ht_lease_seen_={ physical: 1573079856973519 },
last_replicated_={ physical: 1573079829293219 },
clock_->Now()={ physical: 1573079855097236 },
ToString(deadline)=9223372036.855s,
queue_.size()=1,
queue_=[{ physical: 1573079829591858 }]

Removing the common prefix (15730798) from all the microsecond numbers above:

F20191106 22:37:35 ../../src/yb/tablet/mvcc.cc:351] 
Check failed: result >= enforced_min_time
({ physical: 29591857 logical: 4095 } vs. { physical: 54991617 })
T 0fbf4af6478f464fa04d2fbb47674b18 P d1994047d572450c84f24f2b4781250b:
has_lease=1,
enforced_min_time.ToUint64() - result.ToUint64()=104037412865,
ht_lease={ physical: 56973519 },
max_ht_lease_seen_={ physical: 56973519 },
last_replicated_={ physical: 29293219 },
clock_->Now()={ physical: 55097236 },
ToString(deadline)=9223372036.855s,
queue_.size()=1,
queue_=[{ physical: 29591858 }]
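
(Here the computed safe time is 54991617 - 29591857 = 25399760 microseconds, roughly 25.4 seconds, behind the previously returned safe time.)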

bmatican (Contributor) commented Nov 8, 2019

One other potentially useful data point: this was a batch workload with a large batch size (so each leader would spread requests to all other tablet servers), TS::Write latency was slowly climbing (the queue was backing up), and we saw warnings about calls having taken 10-30000ms instead of the 3000ms timeout.

mbautin (Contributor) commented Nov 8, 2019

@bmatican do we also have stack traces?

bmatican (Contributor) commented Nov 8, 2019

@mbautin Unfortunately no :(

spolitov added a commit that referenced this issue Nov 17, 2019
…CppCassandraDriverTest.BatchWriteDuringSoftMemoryLimit

Summary:
When an operation is received by a follower, it is added to the MvccManager during the "prepare" phase.
Under high load, this prepare phase can be delayed until after the operation has been Raft-replicated.
So the following scenario is possible:
1) The follower receives an operation from the leader.
2) The follower wins an election.
3) It becomes the leader and replicates the operations from the old leader, together with a newly added no-op.
4) Only then are the operations from the old leader prepared.

After (3) we already know that this follower is a fully functional leader, so it can return a safe time equal to the hybrid time lease expiration.
Since (4) happens after that, we then try to register (in the MvccManager) operations with hybrid times lower than the last returned safe time, leading to a check failure and a crash.

Fixed by changing the place where we register follower operations in MVCC, i.e. in StartReplicaOperation.

Also fixed the following issues found by the same test:
1) `RemoteTablet::GetRemoteTabletServers` could try to lock a mutex at a point where waiting is not allowed.
   Fixed by allowing a stale leader state to be read at this point.
2) In CppCassandraDriverTest.BatchWriteDuringSoftMemoryLimit, the external mini-cluster could produce more than 50MB of logs.
   Increased the limit to 512MB.
3) Moved the partition check from `AsyncRpc` to `Batcher`, so the failure happens earlier.
4) Removed the `DCHECK` that the number of `scheduled_tasks_` equals zero from ServicePool. There is a possible race where an aborted task has not yet finished aborting while a newly scheduled task has already completed, so `check_timeout_task_` is reset to `kUninitializedScheduledTaskId` while `scheduled_tasks_` is still non-zero.
5) Reset `leader_no_op_committed_` before marking the current node as leader. Otherwise, for a very short period of time, we would carry an invalid value of `leader_no_op_committed_` from the previous leadership.
6) Fixed a race condition in `Batcher` where `FlushBuffersIfReady` could run twice. Disallowed `FlushBuffersIfReady` in the `kTransactionReady` state and introduced a new function, `ExecuteOperations`, to complete the `Batcher` after the transaction is ready.

Test Plan: ybd asan --cxx-test cassandra_cpp_driver-test --gtest_filter CppCassandraDriverTest.BatchWriteDuringSoftMemoryLimit -n 20

Reviewers: bogdan, timur, mikhail

Reviewed By: mikhail

Subscribers: ybase

Differential Revision: https://phabricator.dev.yugabyte.com/D7560
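
To make the ordering bug in the summary above concrete, here is a minimal toy model of the violated invariant: once a safe time has been answered from the leader lease, no operation with a lower hybrid time may be registered. All names and types here are illustrative only (hybrid times are plain microsecond counters), not YB's actual MvccManager:

#include <algorithm>
#include <cassert>
#include <cstdint>
#include <deque>

class ToyMvcc {
 public:
  // Leader-side safe time: with no pending operations queued, the new leader
  // may answer up to its hybrid time lease (step 3 in the scenario above).
  uint64_t SafeTime(uint64_t ht_lease) {
    const uint64_t result = queue_.empty() ? ht_lease : queue_.front();
    last_returned_safe_time_ = std::max(last_returned_safe_time_, result);
    return last_returned_safe_time_;
  }

  // Late registration of an old-leader operation (step 4). Registering below
  // the last returned safe time is the toy analog of the crashes in this
  // thread ("result >= enforced_min_time" / "hybrid time too low").
  void AddPending(uint64_t ht) {
    assert(ht >= last_returned_safe_time_);
    queue_.push_back(ht);
  }

 private:
  std::deque<uint64_t> queue_;
  uint64_t last_returned_safe_time_ = 0;
};

int main() {
  ToyMvcc mvcc;
  mvcc.SafeTime(/*ht_lease=*/1'000'000);  // new leader answers from its lease
  mvcc.AddPending(/*ht=*/900'000);        // delayed prepare: assertion fires
}

The fix described above amounts to performing the registration (step 4) before the new leader can start answering safe-time queries from its lease.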
amitanandaiyer (Contributor):

@spolitov I'm still able to run into this on master if we have txn writes and non-txn writes going to the same table.

Here's a simple repro:

ybd --cxx-test ql-transaction-test --gtest_filter QLTransactionTest.*WriteConflicts --test_args --external_mini_cluster_max_log_bytes=52428800123 --no-remote

diff --git a/src/yb/client/ql-transaction-test.cc b/src/yb/client/ql-transaction-test.cc
index 4741986b5..40c81a1e9 100644
--- a/src/yb/client/ql-transaction-test.cc
+++ b/src/yb/client/ql-transaction-test.cc
@@ -604,9 +604,9 @@ void QLTransactionTest::TestWriteConflicts(bool do_restarts) {
     std::future<Status> commit_future;
   };
 
-  constexpr size_t kActiveTransactions = 50;
+  constexpr size_t kActiveTransactions = 3;
   constexpr auto kTestTime = 60s;
-  constexpr int kTotalKeys = 5;
+  constexpr int kTotalKeys = 1;
   std::vector<ActiveTransaction> active_transactions;
 
   auto stop = std::chrono::steady_clock::now() + kTestTime;
@@ -643,12 +643,18 @@ void QLTransactionTest::TestWriteConflicts(bool do_restarts) {
     while (!expired && active_transactions.size() < kActiveTransactions) {
       auto key = RandomUniformInt(1, kTotalKeys);
       ActiveTransaction active_txn;
-      active_txn.transaction = CreateTransaction();
+      if (value % 2 == 0) {
+        active_txn.transaction = CreateTransaction();
+      }
       active_txn.session = CreateSession(active_txn.transaction);
       const auto op = table_.NewInsertOp();
       auto* const req = op->mutable_request();
       QLAddInt32HashValue(req, key);
-      table_.AddInt32ColumnValue(req, kValueColumn, ++value);
+      const auto val = ++value;
+      table_.AddInt32ColumnValue(req, kValueColumn, val);
+      LOG(INFO) << (active_txn.transaction ? active_txn.transaction->ToString() : " no-txn ")
+       << " trying to write to key "
+          << key << " value " << val;
       ASSERT_OK(active_txn.session->Apply(op));
       active_txn.flush_future = active_txn.session->FlushFuture();
 
@@ -658,22 +664,30 @@ void QLTransactionTest::TestWriteConflicts(bool do_restarts) {
 
     auto w = active_transactions.begin();
     for (auto i = active_transactions.begin(); i != active_transactions.end(); ++i) {
+      const auto txn_id(i->transaction ? i->transaction->ToString() : "no-txn");
       if (!i->commit_future.valid()) {
         if (i->flush_future.wait_for(0s) == std::future_status::ready) {
           auto flush_status = i->flush_future.get();
           if (!flush_status.ok()) {
-            LOG(INFO) << "Flush failed: " << flush_status;
+            LOG(INFO) << "Flush failed: " << flush_status
+              << " for " << "TXN: " << txn_id;
             continue;
           }
           ++flushed;
+          LOG(INFO) << "Flushed : " << "TXN: " << txn_id;
+          if (!i->transaction) {
+            continue;
+          }
           i->commit_future = i->transaction->CommitFuture();
         }
       } else if (i->commit_future.wait_for(0s) == std::future_status::ready) {
         auto commit_status = i->commit_future.get();
         if (!commit_status.ok()) {
-          LOG(INFO) << "Commit failed: " << commit_status;
+          LOG(INFO) << "Commit failed: " << commit_status
+              << " for " << "TXN: " << txn_id;
           continue;
         }
+        LOG(INFO) << "Committed : " << "TXN: " << txn_id;
         ++committed;
         continue;
       }

amitanandaiyer (Contributor):

Fatal failure details written to /n/users/amitanand/code/yugabyte-db/build/debug-gcc-dynamic-ninja/yb-test-logs/tests-client__ql-transaction-test/QLTransactionTest_WriteConflicts.fatal_failure_details.2020-01-06T23_59_32.pid11369.txt
F20200106 23:59:32 ../../src/yb/tablet/mvcc.cc:190] T b3f1ad7ba29e4e96ad9d6698ae68c018 P 69225d5f26cc4b548ec607f2f755318c: New operation's hybrid time too low: <initial>
  max_safe_time_returned_with_lease_={ safe_time: { days: 18267 time: 23:59:32.707074 } source: kNow }
  *ht < max_safe_time_returned_with_lease_.safe_time=1
  static_cast<int64_t>(ht->ToUint64() - max_safe_time_returned_with_lease_.safe_time.ToUint64())=-6464942787408175103
  ht->PhysicalDiff(max_safe_time_returned_with_lease_.safe_time)=-1578355172707074

  max_safe_time_returned_without_lease_={ safe_time: <min> source: kUnknown }
  *ht < max_safe_time_returned_without_lease_.safe_time=0
  static_cast<int64_t>(ht->ToUint64() - max_safe_time_returned_without_lease_.safe_time.ToUint64())=1
  ht->PhysicalDiff(max_safe_time_returned_without_lease_.safe_time)=0

  max_safe_time_returned_for_follower_={ safe_time: <min> source: kUnknown }
  *ht < max_safe_time_returned_for_follower_.safe_time=0
  static_cast<int64_t>(ht->ToUint64() - max_safe_time_returned_for_follower_.safe_time.ToUint64())=1
  ht->PhysicalDiff(max_safe_time_returned_for_follower_.safe_time)=0

  (SafeTimeWithSource{last_replicated_, SafeTimeSource::kUnknown})={ safe_time: { days: 18267 time: 23:59:32.705193 } source: kUnknown }
  *ht < (SafeTimeWithSource{last_replicated_, SafeTimeSource::kUnknown}).safe_time=1
  static_cast<int64_t>(ht->ToUint64() - (SafeTimeWithSource{last_replicated_, SafeTimeSource::kUnknown}).safe_time.ToUint64())=-6464942787400470527
  ht->PhysicalDiff((SafeTimeWithSource{last_replicated_, SafeTimeSource::kUnknown}).safe_time)=-1578355172705193

  (SafeTimeWithSource{last_ht_in_queue, SafeTimeSource::kUnknown})={ safe_time: <min> source: kUnknown }
  *ht < (SafeTimeWithSource{last_ht_in_queue, SafeTimeSource::kUnknown}).safe_time=0
  static_cast<int64_t>(ht->ToUint64() - (SafeTimeWithSource{last_ht_in_queue, SafeTimeSource::kUnknown}).safe_time.ToUint64())=1
  ht->PhysicalDiff((SafeTimeWithSource{last_ht_in_queue, SafeTimeSource::kUnknown}).safe_time)=0

  is_follower_side=0
  queue_.size()=0
  queue_=[]
  aborted=[]
    @     0x7effc4920005  yb::LogFatalHandlerSink::send(int, char const*, char const*, int, tm const*, char const*, unsigned long) (src/yb/util/logging.cc:474)
    @     0x7effc3b07305
    @     0x7effc3b04769
    @     0x7effc3b07838
    @     0x7effccfa9317  yb::tablet::MvccManager::AddPending(yb::HybridTime*) (src/yb/tablet/mvcc.cc:190)
    @     0x7effccefba2c  yb::tablet::Tablet::StartOperation(yb::tablet::WriteOperationState*) (src/yb/tablet/tablet.cc:836)
    @     0x7effccf97d19  yb::tablet::WriteOperation::DoStart() (src/yb/tablet/operations/write_operation.cc:104)
    @     0x7effccf7bc2f  yb::tablet::Operation::Start() (src/yb/tablet/operations/operation.cc:54)
    @     0x7effccf83ce3  yb::tablet::OperationDriver::StartOperation() (src/yb/tablet/operations/operation_driver.cc:202)
    @     0x7effccf83ed5  yb::tablet::OperationDriver::HandleConsensusAppend() (src/yb/tablet/operations/operation_driver.cc:181)
    @     0x7effccc18737  yb::consensus::RaftConsensus::AppendNewRoundsToQueueUnlocked(std::vector<scoped_refptr<yb::consensus::ConsensusRound>, std::allocator<scoped_refptr<yb::consensus::ConsensusRound> > > const&) (src/yb/consensus/raft_consensus.cc:965)
    @     0x7effccc15d7f  yb::consensus::RaftConsensus::ReplicateBatch(std::vector<scoped_refptr<yb::consensus::ConsensusRound>, std::allocator<scoped_refptr<yb::consensus::ConsensusRound> > >*) (src/yb/consensus/raft_consensus.cc:930)
    @     0x7effccfbb096  yb::tablet::PreparerImpl::ReplicateSubBatch(__gnu_cxx::__normal_iterator<yb::tablet::OperationDriver**, std::vector<yb::tablet::OperationDriver*, std::allocator<yb::tablet::OperationDriver*> > >, __gnu_cxx::__normal_iterator<yb::tablet::OperationDriver**, std::vector<yb::tablet::OperationDriver*, std::allocator<yb::tablet::OperationDriver*> > >) (src/yb/tablet/preparer.cc:278)
    @     0x7effccfbb734  yb::tablet::PreparerImpl::ProcessAndClearLeaderSideBatch() (src/yb/tablet/preparer.cc:250)
    @     0x7effccfbbce2  yb::tablet::PreparerImpl::Run() (src/yb/tablet/preparer.cc:162)
    @     0x7effccfbbf85  void std::_Mem_fn_base<void (yb::tablet::PreparerImpl::*)(), true>::operator()<, void>(yb::tablet::PreparerImpl*) const (gcc/5.5.0_4/include/c++/5.5.0/functional:600)
    @     0x7effccfbbf85  void std::_Bind<std::_Mem_fn<void (yb::tablet::PreparerImpl::*)()> (yb::tablet::PreparerImpl*)>::__call<void, , 0ul>(tuple<>&&, std::_Index_tuple<0ul>) (gcc/5.5.0_4/include/c++/5.5.0/functional:1074)
    @     0x7effccfbbf85  void std::_Bind<std::_Mem_fn<void (yb::tablet::PreparerImpl::*)()> (yb::tablet::PreparerImpl*)>::operator()<, void>() (gcc/5.5.0_4/include/c++/5.5.0/functional:1133)
    @     0x7effccfbbf85  std::_Function_handler<void (), std::_Bind<std::_Mem_fn<void (yb::tablet::PreparerImpl::*)()> (yb::tablet::PreparerImpl*)> >::_M_invoke(std::_Any_data const&) (gcc/5.5.0_4/include/c++/5.5.0/functional:1871)
    @     0x7effcbc561f1  std::function<void ()>::operator()() const (gcc/5.5.0_4/include/c++/5.5.0/functional:2267)
    @     0x7effcbc561f1  yb::FunctionRunnable::Run() (src/yb/util/threadpool.h:479)
    @     0x7effc49ab3a5  yb::ThreadPool::DispatchThread(bool) (src/yb/util/threadpool.cc:608)
    @     0x7effc49ae642  void std::_Mem_fn_base<void (yb::ThreadPool::*)(bool), true>::operator()<bool&, void>(yb::ThreadPool*, bool&) const (gcc/5.5.0_4/include/c++/5.5.0/functional:600)
    @     0x7effc49ae642  void std::_Bind<std::_Mem_fn<void (yb::ThreadPool::*)(bool)> (yb::ThreadPool*, bool)>::__call<void, , 0ul, 1ul>(tuple<>&&, std::_Index_tuple<0ul, 1ul>) (gcc/5.5.0_4/include/c++/5.5.0/functional:1074)
    @     0x7effc49ae642  void std::_Bind<std::_Mem_fn<void (yb::ThreadPool::*)(bool)> (yb::ThreadPool*, bool)>::operator()<, void>() (gcc/5.5.0_4/include/c++/5.5.0/functional:1133)
    @     0x7effc49ae642  std::_Function_handler<void (), std::_Bind<std::_Mem_fn<void (yb::ThreadPool::*)(bool)> (yb::ThreadPool*, bool)> >::_M_invoke(std::_Any_data const&) (gcc/5.5.0_4/include/c++/5.5.0/functional:1871)
    @     0x7effc49a4099  std::function<void ()>::operator()() const (gcc/5.5.0_4/include/c++/5.5.0/functional:2267)
    @     0x7effc49a4099  yb::Thread::SuperviseThread(void*) (src/yb/util/thread.cc:739)
    @     0x7effc04e4693  start_thread (/tmp/glibc-20181130-26094-cs1x60/glibc-2.23/nptl/pthread_create.c:333)
    @     0x7effc022641c  (unknown) (sysdeps/unix/sysv/linux/x86_64/clone.S:109)
    @ 0xffffffffffffffff

spolitov added a commit that referenced this issue Jan 11, 2020
Summary:
During conflict resolution, a committing transaction has commit time HybridTime::kMax, and we use this time to update the clock after resolving operation conflicts.

This diff fixes the issue by ignoring the max hybrid time during the clock update.

Test Plan: ybd --cxx-test ql-transaction-test --gtest_filter QLTransactionTest.MixedWriteConflicts

Reviewers: amitanand, mikhail, timur

Reviewed By: timur

Subscribers: ybase, bogdan

Differential Revision: https://phabricator.dev.yugabyte.com/D7779
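
A minimal sketch of the guard this commit describes: observed timestamps normally ratchet the local clock forward, but the HybridTime::kMax sentinel carried by a committing transaction must be skipped, or the clock is poisoned. Toy names throughout, not YB's real HybridClock API:

#include <cstdint>
#include <limits>

// kToyHybridTimeMax stands in for HybridTime::kMax; illustrative only.
constexpr uint64_t kToyHybridTimeMax = std::numeric_limits<uint64_t>::max();

class ToyHybridClock {
 public:
  // Ratchet the clock forward from an observed remote timestamp, ignoring
  // the kMax sentinel used as a placeholder commit time during conflict
  // resolution.
  void Update(uint64_t observed_ht) {
    if (observed_ht == kToyHybridTimeMax) {
      return;  // sentinel, not a real time: skip the update
    }
    if (observed_ht > now_) {
      now_ = observed_ht;
    }
  }

  uint64_t Now() const { return now_; }

 private:
  uint64_t now_ = 0;
};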