[adapter] Ignore deletions for objects that are always wrapped #7174

Merged 1 commit on Jan 5, 2023

@@ -13,4 +13,4 @@ written: object(106)
 
 task 3 'run'. lines 43-43:
 written: object(108)
-deleted: object(_), object(107)
+deleted: object(107)

@@ -27,7 +27,7 @@ written: object(111)
 
 task 6 'run'. lines 52-52:
 written: object(112), object(114)
-deleted: object(_), object(113)
+deleted: object(113)
 
 task 7 'run'. lines 55-55:
 created: object(116), object(117)
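
These expected-output changes show the new behaviour directly: an object that was deleted but had only ever existed wrapped (no prior unwrapped version known to storage, and not created by this transaction) is no longer listed as deleted, which is why the unnamed object(_) entries disappear. A minimal sketch of that rule, using invented types rather than the adapter's actual signatures:

    use std::collections::BTreeMap;

    // Invented stand-ins for the adapter's types, only to illustrate the rule.
    type ObjectID = u64;
    type SequenceNumber = u64;

    /// Decide which deleted objects to report, and at which prior version.
    /// `by_value`: versions of objects the transaction received unwrapped, by value.
    /// `parent_sync`: the last version each object was known to storage unwrapped.
    fn reported_deletions(
        deleted: &[ObjectID],
        by_value: &BTreeMap<ObjectID, SequenceNumber>,
        parent_sync: &BTreeMap<ObjectID, SequenceNumber>,
    ) -> Vec<(ObjectID, SequenceNumber)> {
        deleted
            .iter()
            .filter_map(|id| {
                by_value
                    .get(id)
                    .or_else(|| parent_sync.get(id))
                    // No prior unwrapped version anywhere: the object was always
                    // wrapped, so its deletion is no longer reported at all.
                    .map(|version| (*id, *version))
            })
            .collect()
    }

    fn main() {
        // Mirrors the hunks above: object 107 has a known prior version, while the
        // always-wrapped object (invented id here) does not, so only 107 is listed
        // under "deleted:".
        let parent_sync = BTreeMap::from([(107, 3)]);
        let always_wrapped = 999;
        println!(
            "{:?}",
            reported_deletions(&[107, always_wrapped], &BTreeMap::new(), &parent_sync)
        );
    }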
7 changes: 3 additions & 4 deletions crates/sui-adapter/src/adapter.rs
@@ -628,10 +628,9 @@ fn process_successful_execution<S: Storage + ParentSync>(
             None => match state_view.get_latest_parent_entry_ref(id) {
                 Ok(Some((_, previous_version, _))) => previous_version,
                 Ok(None) => {
-                    // TODO we don't really need to mark this as deleted, as the object was not
-                    // created this txn but has never existed in storage. We just need a better
-                    // way of detecting this rather than relying on the parent sync
-                    SequenceNumber::new()
+                    // This object was not created this transaction but has never existed in
+                    // storage, skip it.
+                    continue;

Contributor:
Isn't the Ok(Some(...)) case also questionable? If an object did exist in the store, and then was wrapped, and then deleted, is it appropriate to use the last version that existed before being wrapped?

Contributor:
Also, I'm slightly concerned about how this could mask a bug in which parent_sync isn't updated, but maybe there's not much we can do about that.

Contributor Author (@amnn), Jan 5, 2023:
I think there's an argument to be made that, because wrapping is considered a form of deletion, if the object was not in by_value_objects this deletion should be ignored entirely (i.e. advocating for getting rid of the use of parent sync altogether). But things are set up this way because, in practice, it's useful to know when the object is gone for good vs. just hidden. That is a deeper subject than the thesis of this PR, though.
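
For illustration, a toy sketch of the distinction being drawn here; all names are invented and this is not the adapter's code:

    /// What a consumer of effects might want to know about an object that
    /// disappeared from view. Invented type, purely for illustration.
    #[derive(Debug)]
    enum ObjectFate {
        /// Wrapped into another object: hidden, but still alive somewhere.
        JustHidden,
        /// Destroyed outright: gone for good, reported at the last version at
        /// which storage saw it unwrapped.
        GoneForGood { last_unwrapped_version: u64 },
    }

    /// Dropping the parent sync lookup entirely would leave no version to report
    /// for objects destroyed while wrapped, so they would have to be ignored and
    /// the two fates would become indistinguishable to effects consumers.
    fn fate_of_missing_object(
        wrapped_this_tx: bool,
        parent_sync_version: Option<u64>,
    ) -> Option<ObjectFate> {
        if wrapped_this_tx {
            Some(ObjectFate::JustHidden)
        } else {
            parent_sync_version.map(|v| ObjectFate::GoneForGood { last_unwrapped_version: v })
        }
    }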

Contributor:
> Isn't the Ok(Some(...)) case also questionable? If an object did exist in the store, and then was wrapped, and then deleted, is it appropriate to use the last version that existed before being wrapped?

I'm not sure what else you would suggest?
IIRC, we used to set it to MAX - 1 instead of previous_version, but that feels even more confusing.
If you conceptually think of the version as being a field of the object (even though that isn't the case in Move), previous_version is the consistent behavior.
Definitely open to suggestions here, though.
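
To make the two options concrete, a toy sketch (invented helper, not adapter code):

    type SequenceNumber = u64;

    // The object existed in storage, was wrapped, and was then deleted while wrapped.
    fn deletion_version(previous_version: SequenceNumber) -> SequenceNumber {
        // One earlier approach: a sentinel, detached from the object's history.
        let _sentinel: SequenceNumber = SequenceNumber::MAX - 1;

        // Current approach: the last version the object had before being wrapped.
        // If the version is thought of as a field of the object, it did not change
        // while the object sat inside its wrapper, so this is the consistent choice.
        previous_version
    }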

> Also, I'm slightly concerned about how this could mask a bug in which parent_sync isn't updated, but maybe there's not much we can do about that.

That is a potential issue, yes. One thing we have discussed doing is forming a write-ahead, crash-recovery log of Txn => ObjectID => Version, which would indicate the version used as an input in this transaction in the case that this transaction is being reprocessed (for whatever reason). We could also populate that table for this parent sync case, which I think should help prevent any bugs with parent_sync, but maybe
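
A hypothetical shape for that table, with invented names (this is not an existing Sui API):

    use std::collections::{BTreeMap, HashMap};

    type TransactionDigest = [u8; 32];
    type ObjectID = [u8; 20];
    type SequenceNumber = u64;

    /// For each transaction, the version of every object it used as an input.
    #[derive(Default)]
    struct InputVersionLog {
        entries: HashMap<TransactionDigest, BTreeMap<ObjectID, SequenceNumber>>,
    }

    impl InputVersionLog {
        /// Record, before effects are committed, which version of each input
        /// object the transaction was executed against.
        fn record(&mut self, tx: TransactionDigest, inputs: BTreeMap<ObjectID, SequenceNumber>) {
            self.entries.insert(tx, inputs);
        }

        /// When the transaction is reprocessed (or parent sync is in doubt), look
        /// up the version that was actually used instead of re-deriving it.
        fn input_version(&self, tx: &TransactionDigest, id: &ObjectID) -> Option<SequenceNumber> {
            self.entries.get(tx).and_then(|m| m.get(id).copied())
        }
    }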

Contributor:
> in the case that this transaction is being reprocessed

@tnowacki We won't have to worry about that - the current and future plan for execution recovery is that we only ever execute once, then write the temporary store + effects atomically to a recovery "log" before updating the store. If the store updates are interrupted, we re-read the recovery log rather than re-executing. Otherwise there is no way to be correct w.r.t. dynamic child loading, as far as I can see.

                 }
                 _ => {
                     return Err(ExecutionError::new_with_source(
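
The recovery discipline described in the last review comment above could look roughly like this; all types and function names are invented placeholders, not Sui's actual crash-recovery code:

    struct TemporaryStore;     // writes produced by a single execution, not yet committed
    struct TransactionEffects; // the effects summary of that execution

    trait RecoveryLog {
        /// Atomically persist the execution output before the main store is touched.
        fn write_atomically(&mut self, store: &TemporaryStore, fx: &TransactionEffects);
        /// If a previous attempt logged its output but crashed before committing,
        /// return that output.
        fn read_pending(&self) -> Option<(TemporaryStore, TransactionEffects)>;
    }

    fn commit_transaction<L: RecoveryLog>(
        log: &mut L,
        execute_once: impl FnOnce() -> (TemporaryStore, TransactionEffects),
        apply_to_store: impl FnOnce(&TemporaryStore, &TransactionEffects),
    ) {
        // If the log already holds output from an interrupted commit, reuse it:
        // the transaction is never executed a second time, which keeps dynamic
        // child loads consistent with what the first execution saw.
        let (store, fx) = match log.read_pending() {
            Some(pending) => pending,
            None => {
                // Otherwise execute exactly once, and log the result atomically
                // before updating the store.
                let output = execute_once();
                log.write_atomically(&output.0, &output.1);
                output
            }
        };
        apply_to_store(&store, &fx);
    }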
34 changes: 30 additions & 4 deletions crates/sui-core/src/unit_tests/move_integration_tests.rs
@@ -21,7 +21,7 @@ use sui_types::{
     messages::ExecutionStatus,
 };
 
-use std::path::PathBuf;
+use std::{collections::HashSet, path::PathBuf};
 use std::{env, str::FromStr};
 
 const MAX_GAS: u64 = 10000;
@@ -540,19 +540,30 @@ async fn test_create_then_delete_parent_child_wrap() {
    .await
    .unwrap();
    assert!(effects.status.is_ok());
    // Modifies the gas object
    assert_eq!(effects.mutated.len(), 1);
    // Creates the parent, a field, and the child; the child is immediately wrapped, so only 2 count as created
    assert_eq!(effects.created.len(), 2);
    // not wrapped as it wasn't first created
    assert_eq!(effects.wrapped.len(), 0);
    assert_eq!(effects.events.len(), 3);

    let gas_ref = effects.mutated[0].0;

    let parent = effects
        .created
        .iter()
        .find(|(_, owner)| matches!(owner, Owner::AddressOwner(_)))
        .unwrap()
        .0;

    let field = effects
        .created
        .iter()
        .find(|((id, _, _), _)| id != &parent.0)
        .unwrap()
        .0;

    // Delete the parent and child altogether.
    let effects = call_move(
        &authority,
@@ -568,9 +579,24 @@
     .await
     .unwrap();
     assert!(effects.status.is_ok());
-    // Check that both objects were deleted.
-    assert_eq!(effects.deleted.len(), 3);
-    assert_eq!(effects.events.len(), 4);
+
+    // The parent and field are considered deleted, the child doesn't count because it wasn't
+    // considered created in the first place.
+    assert_eq!(effects.deleted.len(), 2);
+    assert_eq!(effects.events.len(), 3);
+
+    assert_eq!(
+        effects
+            .modified_at_versions
+            .iter()
+            .cloned()
+            .collect::<HashSet<_>>(),
+        HashSet::from([
+            (gas_ref.0, gas_ref.1),
+            (parent.0, parent.1),
+            (field.0, field.1)
+        ]),
+    );
 }
 
 #[tokio::test]