[Jepsen][YSQL] Two concurrent complex update-insert transactions may both be successful, while only one will be presented #11165
Comments
Reviewed another run with the trace-all-queries option enabled. Here we also have key = 37 and two values, 8 and 11.
pg log 1
For the tserver that has the faulty transaction, new segment allocation messages exist.
@es1024, can you help take a look?
Got another reproducer that I think can be evaluated against a local cluster.
UPDATEs:
INSERTs:
DELETEs:
So during this test there is a possibility that, on the INSERT operation, a few different threads will try to commit the same key. Try to start this jar file with the following parameters and then check the history table. Arguments to launch the jar:
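The jar launch arguments themselves are not reproduced on this page. As an illustration only, a check against the history table for keys that more than one INSERT claims to have committed might look like the sketch below; the table and column names (history, op, status, k) are assumptions, since the actual schema is defined by the test jar.

```sql
-- Hypothetical sketch: find keys that were successfully committed by more than
-- one INSERT. Table and column names are assumed, not the jar's real schema.
SELECT k, COUNT(*) AS committed_inserts
FROM history
WHERE op = 'INSERT' AND status = 'ok'
GROUP BY k
HAVING COUNT(*) > 1;
```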
A simple case to reproduce this bug: CREATE TABLE test(k INTEGER PRIMARY KEY, v VARCHAR); then, using two connections to the database:
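The per-connection statements from the original comment are not preserved here; a minimal sketch of the interleaving, assuming both sessions run at SERIALIZABLE isolation and insert the same key, could look like this:

```sql
-- Session 1:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO test VALUES (1, 'from session 1');

-- Session 2 (concurrently, before session 1 commits):
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO test VALUES (1, 'from session 2');

-- Session 1:
COMMIT;

-- Session 2:
COMMIT;  -- with the bug, this also reports success; under correct SERIALIZABLE
         -- handling, one of the two transactions should fail with a conflict or
         -- duplicate-key error, and only one row for k = 1 should be committed.
```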
Both transactions will commit, even though they're writing to the same row. This is likely because we're doing primary key checks only when processing INSERT and not at commit time, and not acquiring the right locks to cause the conflict to be detected. This is a regression introduced by 83a150b and only affects SERIALIZABLE isolation.
…txns

Summary: D11239 / 83a150b introduces an optimization for the UPDATE statement based on relaxing lock intent strength. After the change, an UPDATE of a particular row takes a `weak read` intent instead of a `strong read` intent; this is achieved by creating the `strong read` intent on the row's `kLivenessColumn` column instead of on the row key itself. But this change introduces a bug: INSERT/UPSERT tries to create the `strong read` intent on `kLivenessColumn` as well. The solution is to limit the intent-relaxing optimization to the UPDATE operation only.

Test Plan: Jenkins

New unit tests were introduced:

```
./yb_build.sh --gtest_filter PgMiniTest.DuplicateInsert*
./yb_build.sh --gtest_filter PgMiniTest.DuplicateUniqueIndexInsert*
./yb_build.sh --gtest_filter PgMiniTest.DuplicateNonUniqueIndexInsert*
```

Reviewers: mbautin, sergei, pjain, mihnea
Reviewed By: sergei, mihnea
Subscribers: yql
Differential Revision: https://phabricator.dev.yugabyte.com/D16278
…m 2 SERIALIZABLE txns (backports of the fix above)

Summary: Same as D16278: limit the intent-relaxing optimization introduced by D11239 / 83a150b to the UPDATE operation only, so that INSERT/UPSERT no longer takes the relaxed `strong read` intent on `kLivenessColumn`.

Original commit: 7333ee7 / D16278

Test Plan: Jenkins, with the same new unit tests (PgMiniTest.DuplicateInsert*, PgMiniTest.DuplicateUniqueIndexInsert*, PgMiniTest.DuplicateNonUniqueIndexInsert*)

Reviewers: mbautin, sergei, pjain, mihnea
Reviewed By: mihnea
Subscribers: yql
Differential Revisions: https://phabricator.dev.yugabyte.com/D16425, https://phabricator.dev.yugabyte.com/D16433, https://phabricator.dev.yugabyte.com/D16434
Jira Link: DB-8240
Description
The issue appeared after the Jepsen version was updated to 0.2.5. It is not clear how this change is related to the issue appearing; nevertheless, we observe a problem in the transaction history on the server side.
All current versions are affected; it may be an old issue. The issue disappears if we relax the consistency model check from :strict-serialisable to :serialisable. (Link to the Jepsen consistency models description.) The issue is well reproducible, but it produces a lot of noise in the results: the automated description may contain 100 transactions. We are trying to fix this in future Jepsen tests.
Two transactions in the history; both first insert key 37 into the table.
The code that acts here: both the update and the insert are evaluated.
Both transactions are committed, while afterwards we observe only one value; note that we have the read [:r 37 [1 10 11 ...
20211219T225033.000Z.zip