This repository has been archived by the owner on Aug 23, 2020. It is now read-only.

Update regression structure #1764

Merged 4 commits on Mar 5, 2020
98 changes: 76 additions & 22 deletions python-regression/README.md
@@ -19,37 +19,39 @@ pip install -e .
```

### Available Tests
Machine 1 - API Tests: This machine uses 2 nodes, tests each of the API calls, and ensures that the responses match
the expected values.
Machine 1 - Local Snapshotting Tests: This machine uses 4 nodes. The first node contains the snapshot `meta` and `state`
files, the `spent-addresses-db` and `testnetdb` of a synced node. The second only contains the database, and the third
only contains the snapshot files. All three of these nodes are tested to ensure that they solidify to the same point,
and that the proper information is contained in the snapshot files and databases. The fourth node has a larger database
that contains more milestones than the `local-snapshots-pruning-depth`. This node is checked to make sure that after
starting, transactions that should be pruned from the database have been pruned correctly. This machine also includes
tests for spent addresses including a test for exporting and merging spent addresses using IXI
modules.
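The pruning assertion above can be sketched as follows. This is an illustrative simulation, not the real test code: the pruning depth and the database contents below are made-up example values, and a plain dict stands in for the node's database.

```python
# Sketch of the machine-1 pruning expectation: once a local snapshot is taken
# at SNAPSHOT_INDEX, transactions confirmed by milestones below
# SNAPSHOT_INDEX - PRUNING_DEPTH should no longer be in the database.

def should_be_pruned(milestone_index, snapshot_index, pruning_depth):
    """True if a transaction confirmed at milestone_index is expected to be
    pruned after the snapshot is taken."""
    return milestone_index < snapshot_index - pruning_depth

SNAPSHOT_INDEX = 10220
PRUNING_DEPTH = 40  # illustrative, not the real local-snapshots-pruning-depth

# Simulated database: milestone index -> transaction still present?
database = {10150: False, 10200: True, 10220: True}

for index, present in database.items():
    expected_present = not should_be_pruned(index, SNAPSHOT_INDEX, PRUNING_DEPTH)
    assert present == expected_present, f"unexpected state at milestone {index}"
```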

Machine 2 - Transaction Tests: This machine uses 2 nodes. Several zero value transactions are sent to the first node,
Machine 2 - Blowball Tests: This machine uses 6 nodes by default, but can be customized to run on as many or as few
nodes as desired. 1000 `getTransactionsToApprove` calls are made across these nodes, and the responses are checked to
make sure that fewer than 5% of the results are milestone transactions. If the responses exceed this threshold,
blowballs are occurring.
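The threshold check can be expressed as a small helper. This is a hedged sketch, not the real test: the tip hashes and milestone set below are simulated stand-ins for the 1000 `getTransactionsToApprove` responses.

```python
# Fraction of returned tips that are milestone transactions; above 5%
# the test would conclude that blowballs are forming.
MILESTONE_THRESHOLD = 0.05

def blowball_ratio(tips, milestone_hashes):
    """Return the fraction of tip responses that hit a known milestone."""
    hits = sum(1 for tip in tips if tip in milestone_hashes)
    return hits / len(tips)

# Simulated data: 1000 responses, 30 of which reference a milestone.
milestones = {"MILESTONEHASH9" + str(i) for i in range(50)}
responses = ["TIPHASH" + str(i) for i in range(970)] + ["MILESTONEHASH90"] * 30

assert blowball_ratio(responses, milestones) < MILESTONE_THRESHOLD
```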


Machine 3 - Transaction Tests: This machine uses 2 nodes. Several zero value transactions are sent to the first node,
as well as a milestone transaction. Then node two is checked to make sure the transactions are all confirmed. After
these transactions are resolved, the same approach is used to ensure that value transactions are also being confirmed
correctly.
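The confirmation check can be sketched like this. The transaction hashes and the inclusion-state response are simulated placeholders for what the real test retrieves from the second node:

```python
# Sketch of the machine-3 expectation: after a milestone referencing the
# test transactions is issued, every transaction should report as confirmed.

def all_confirmed(inclusion_states, transactions):
    """True when every issued transaction is reported as confirmed."""
    return all(inclusion_states.get(tx, False) for tx in transactions)

issued = ["TX9A", "TX9B", "TX9C"]

# Simulated inclusion-state response after the milestone was issued.
states = {"TX9A": True, "TX9B": True, "TX9C": True}

assert all_confirmed(states, issued), "not all transactions were confirmed"
```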

Machine 3 - Blowball Tests: This machine uses 6 nodes by default, but can be customized to run on as many or as few
nodes as desired. 1000 `getTransactionsToApprove` calls are made across these nodes, and the responses are checked to
make sure that fewer than 5% of the results are milestone transactions. If the responses exceed this threshold,
blowballs are occurring.
Machine 4 - API Tests: This machine uses 2 nodes, tests each of the API calls, and ensures that the responses match
the expected values.

Machine 4 - Stitching Tests: This machine uses 1 node. The node is loaded with a db containing a large side tangle. A
Machine 5 - Stitching Tests: This machine uses 1 node. The node is loaded with a db containing a large side tangle. A
stitching transaction is issued, and another transaction referencing that one is issued. After these transactions are
issued, making `getTransactionsToApprove` calls should not crash the node.

Machine 5 - Milestone Validation Tests: This machine uses 2 nodes. Both nodes are loaded with the same db. The db
Machine 6 - Milestone Validation Tests: This machine uses 2 nodes. Both nodes are loaded with the same db. The db
contains several signed milestone transactions, and several unsigned transactions. One node is set to validate the
testnet coordinator signature, while the other is not. The one that requires validation should solidify to one point,
while the other should solidify further.
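The two solidification points can be checked as below. The milestone indexes 37 and 45 come from the Validation_tests_db description later in this README; the node responses are simulated (the field name matches IRI's `getNodeInfo` output, but no API call is made here):

```python
# Sketch of the machine expectation: the signature-validating node stops at
# the last validly signed milestone, the permissive node solidifies past it.

LAST_SIGNED = 37    # last milestone with a valid coordinator signature
LAST_UNSIGNED = 45  # last milestone attached without a valid signature

# Simulated getNodeInfo responses from the two nodes.
validating_node = {"latestSolidSubtangleMilestoneIndex": LAST_SIGNED}
permissive_node = {"latestSolidSubtangleMilestoneIndex": LAST_UNSIGNED}

assert validating_node["latestSolidSubtangleMilestoneIndex"] == LAST_SIGNED
assert permissive_node["latestSolidSubtangleMilestoneIndex"] > LAST_SIGNED
```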

Machine 6 - Local Snapshotting Tests: This machine uses 4 nodes. The first node contains the snapshot `meta` and `state`
files, the `spent-addresses-db` and `testnetdb` of a synced node. The second only contains the database, and the third
only contains the snapshot files. All three of these nodes are tested to ensure that they solidify to the same point,
and that the proper information is contained in the snapshot files and databases. The fourth node has a larger database
that contains more milestones than the `local-snapshots-pruning-depth`. This node is checked to make sure that after
starting, transactions that should be pruned from the database have been pruned correctly. This machine also includes
tests for spent addresses including a test for exporting and merging spent addresses using IXI
modules.

__*Note:*__ _The DBs used for these tests are documented below for reference_

### Running Tests Locally

@@ -137,7 +139,7 @@ iri
--/tests
---/features
----/machine1 [Same structure for other machines]
-----/1_api_tests.feature
-----/4_api_tests.feature
-----/config.yml
-----/output.yml
```
@@ -151,7 +153,7 @@ From the `iri/python-regression` directory, a test can be run using the followin
```
e.g., for the API tests:
```
aloe 1_api_tests.feature -w ./tests/features/machine1/ -v --nologcapture
aloe 4_api_tests.feature -w ./tests/features/machine1/ -v --nologcapture
```


@@ -168,12 +170,64 @@ When running the aloe command, you can add the `-a` flag to register that you wo
the given attribute. Inversely you can also run all tests that do not contain that flag by using `!`. This is shown
below:
```
aloe 1_api_tests.feature -w ./tests/features/machine1 -a getNodeInfo
aloe 4_api_tests.feature -w ./tests/features/machine1 -a getNodeInfo
```
or to not run the flagged tests:
```
aloe 1_api_tests.feature -w ./tests/features/machine1 -a '!getNodeInfo'
aloe 4_api_tests.feature -w ./tests/features/machine1 -a '!getNodeInfo'
```
_Note: Negating the flagged tests requires the `!` and flag to be wrapped in quotes, as shown above_

The same flag can be used for several scenarios, and they will all either be included or negated by this flag.


### _*DB Descriptions:*_
##### _Machine 1_
https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/LS_Test_Db_With_LS_Db.tar.gz - Full LocalSnapshot test db
synced to milestone 10321, with a local snapshot at 10220. Contains mostly 0-value transactions, with a few spends early
on to generate the localsnapshot-db and to provide transactions for pruning reference.

https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/LS_Test_DB_and_Snapshot.tar - Full LocalSnapshot test db
synced to milestone 10321 without any localsnapshots-db.

https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/LS_Test_LS_Db.tar.gz - No LocalSnapshot test db is
provided; instead, the localsnapshot-db from index 10220 is supplied.

https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/PruningTestDB.tar - A full db synced to milestone 15000
containing a mix of value and 0 value transactions that will be pruned once the node takes its snapshot. This is used
to ensure that any transaction below the threshold is pruned correctly.

https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/SpentAddressesTestDB.tar - A full db synced to milestone
10100 with a mix of value and 0 value transactions that will be pruned once the node takes its snapshot. This is used
to ensure that spent addresses are persisted in the local snapshot data after pruning occurs.


##### _Machine 2_
https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/Blowball_Tests_db.tar - A full db synced to milestone 27.
There are several tips surrounding the last milestone, and the test is used to ensure that new transactions aren't
attaching en masse to the last milestone present.


##### _Machine 3_
https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/Transactions_Tests_db.tar - A small db synced to milestone
50 with a snapshot file containing a list of addresses preset for value spending. This db is used to test value and
non-value transactions and their inclusion in the next milestone.


##### _Machine 4_
https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/testnet_files.tgz - A full db synced to milestone 8412 used
for testing the basic API commands.


##### _Machine 5_
https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/Stitching_tests_db.tar - A full db synced to milestone 37
with a large subtangle that is building beside the main tangle. This db is used to test the success or failure of
stitching this subtangle back into the original.


##### _Machine 6_
https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/Validation_tests_db.tar - A full db containing 2 separate
synchronisation points. The first point is hit at milestone 37, where the last milestone issued with valid signatures is
present. The db contains several more milestones that have been attached without a valid signature. The test will sync
to 37 if a valid signature is required and 45 if not.

@@ -10,35 +10,35 @@ Feature: Test Bootstrapping With LS
Check that the permanode has been started correctly and is synced.

#First make sure nodes are neighbored
Given "nodeA-m6" and "nodeB-m6" are neighbors
And "nodeA-m6" and "nodeC-m6" are neighbors
Given "nodeA-m1" and "nodeB-m1" are neighbors
And "nodeA-m1" and "nodeC-m1" are neighbors

#Default for test is to issue 10322
When milestone 10322 is issued on "nodeA-m6"
When milestone 10322 is issued on "nodeA-m1"
And we wait "30" second/seconds
Then "nodeA-m6" is synced up to milestone 10322
Then "nodeA-m1" is synced up to milestone 10322


Scenario: DB node is synced, and files contain expected values
Check that the node started with just a DB is synced correctly, and that the proper addresses and hashes have been
stored correctly.

#First make sure nodes are neighbored
Given "nodeB-m6" and "nodeA-m6" are neighbors
And "nodeB-m6" and "nodeC-m6" are neighbors
Given "nodeB-m1" and "nodeA-m1" are neighbors
And "nodeB-m1" and "nodeC-m1" are neighbors

# Default for test is to issue 10323
When milestone 10323 is issued on "nodeA-m6"
When milestone 10323 is issued on "nodeA-m1"
#Give the node time to finish syncing properly, then make sure that the node is synced to the latest milestone.
And we wait "30" second/seconds
Then "nodeB-m6" is synced up to milestone 10323
And A local snapshot was taken on "nodeB-m6" at index 10220
Then "nodeB-m1" is synced up to milestone 10323
And A local snapshot was taken on "nodeB-m1" at index 10220

When reading the local snapshot state on "nodeB-m6" returns with:
When reading the local snapshot state on "nodeB-m1" returns with:
|keys |values |type |
|address |LS_TEST_STATE_ADDRESSES |staticValue |

And reading the local snapshot metadata on "nodeB-m6" returns with:
And reading the local snapshot metadata on "nodeB-m1" returns with:
|keys |values |type |
|hashes |LS_TEST_MILESTONE_HASHES |staticValue |

@@ -47,27 +47,27 @@ Feature: Test Bootstrapping With LS
Check that the node started with just a LS DB is synced correctly.

#First make sure nodes are neighbored
Given "nodeC-m6" and "nodeA-m6" are neighbors
And "nodeC-m6" and "nodeB-m6" are neighbors
Given "nodeC-m1" and "nodeA-m1" are neighbors
And "nodeC-m1" and "nodeB-m1" are neighbors

#Default for test is to issue 10324
When milestone 10324 is issued on "nodeA-m6"
When milestone 10324 is issued on "nodeA-m1"
#Give the node time to finish syncing properly, then make sure that the node is synced to the latest milestone.
And we wait "120" second/seconds
Then "nodeC-m6" is synced up to milestone 10324
Then "nodeC-m1" is synced up to milestone 10324


Scenario: Check DB for milestone hashes
Give the db-less node some time to receive the latest milestones from the permanode, then check if the milestones
are present in the new node.

#First make sure nodes are neighbored
Given "nodeC-m6" and "nodeA-m6" are neighbors
Given "nodeC-m1" and "nodeA-m1" are neighbors
#Default for test is to issue 10325
When milestone 10325 is issued on "nodeA-m6"
When milestone 10325 is issued on "nodeA-m1"
And we wait "30" second/seconds

Then "checkConsistency" is called on "nodeC-m6" with:
Then "checkConsistency" is called on "nodeC-m1" with:
|keys |values |type |
|tails |LS_TEST_MILESTONE_HASHES |staticValue |

@@ -80,7 +80,7 @@ Feature: Test Bootstrapping With LS
Takes a node with a large db and transaction pruning enabled, and checks to make sure that the transactions below
the pruning depth are no longer present.

Given "checkConsistency" is called on "nodeD-m6" with:
Given "checkConsistency" is called on "nodeD-m1" with:
|keys |values |type |
|tails |LS_PRUNED_TRANSACTIONS |staticValue |

@@ -90,13 +90,13 @@ Feature: Test Bootstrapping With LS
Scenario: Check unconfirmed transaction is spent from
Issues a value transaction that will be unconfirmed, and check that the address was spent from.

Given a transaction is generated and attached on "nodeE-m6" with:
Given a transaction is generated and attached on "nodeE-m1" with:
|keys |values |type |
|address |TEST_ADDRESS |staticValue |
|value |10 |int |
|seed |UNCONFIRMED_TEST_SEED |staticValue |

When "wereAddressesSpentFrom" is called on "nodeE-m6" with:
When "wereAddressesSpentFrom" is called on "nodeE-m1" with:
|keys |values |type |
|addresses |UNCONFIRMED_TEST_ADDRESS |staticValue |

@@ -110,7 +110,7 @@ Feature: Test Bootstrapping With LS
transaction has been pruned from the DB.

# Check that addresses were spent from before pruning
Given "wereAddressesSpentFrom" is called on "nodeE-m6" with:
Given "wereAddressesSpentFrom" is called on "nodeE-m1" with:
|keys |values |type |
|addresses |LS_SPENT_ADDRESSES |staticValue |

@@ -122,7 +122,7 @@ Feature: Test Bootstrapping With LS
When the next 30 milestones are issued

# Check that addresses were spent after transaction have been pruned
And "wereAddressesSpentFrom" is called on "nodeE-m6" with:
And "wereAddressesSpentFrom" is called on "nodeE-m1" with:
|keys |values |type |
|addresses |LS_SPENT_ADDRESSES |staticValue |

@@ -131,7 +131,7 @@ Feature: Test Bootstrapping With LS
|addresses |True |boolList |

# Check that transactions from those addresses were pruned
And "getTrytes" is called on "nodeE-m6" with:
And "getTrytes" is called on "nodeE-m1" with:
|keys |values |type |
|hashes |LS_SPENT_TRANSACTIONS |staticValue |

85 changes: 65 additions & 20 deletions python-regression/tests/features/machine1/config.yml
@@ -1,26 +1,71 @@
defaults: &api_tests_config_files
db: https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/testnet_files.tgz
db_checksum: 6eaa06d5442416b7b8139e337a1598d2bae6a7f55c2d9d01f8c5dac69c004f75
iri_args: ['--testnet-coordinator',
'BTCAAFIH9CJIVIMWFMIHKFNWRTLJRKSTMRCVRE9CIP9AEDTOULVFRHQZT9QAQBZXXAZGBNMVOOKTKAXTB',
'--milestone-start',
'0',
'--snapshot',
'./snapshot.txt',
'--testnet-no-coo-validation',
'true',
'--testnet',
'true'
default_args: &args
['--testnet-coordinator',
'EFPNKGPCBXXXLIBYFGIGYBYTFFPIOQVNNVVWTTIYZO9NFREQGVGDQQHUUQ9CLWAEMXVDFSSMOTGAHVIBH',
'--mwm',
'1',
'--milestone-start',
'0',
'--testnet-no-coo-validation',
'true',
'--testnet',
'true',
'--snapshot',
'./snapshot.txt',
'--local-snapshots-pruning-enabled',
'true',
'--local-snapshots-pruning-delay',
'10000',
'--remote',
'true',
'--remote-limit-api',
'""'
]
java_options: -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n -javaagent:/opt/jacoco/lib/jacocoagent.jar=destfile=/iri/jacoco.exec,output=file,append=true,dumponexit=true

seeds: # For internal use by the regression system.
- SEED
- SIID
default_ixi: &ixi
['IXI/LocalSnapshots.ixi']

java_options: -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n -javaagent:/opt/jacoco/lib/jacocoagent.jar=destfile=/iri/jacoco.exec,output=file,append=true,dumponexit=true

defaults: &db_full
db: https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/LS_Test_Db_With_LS_Db.tar.gz
db_checksum: 2055406bf312136d7cd0efa21248bd8cc9c407ab14ef0d18b921cf18c72c5270
iri_args: *args
ixis: *ixi

db_with_snapshot: &db_with_snapshot
db: https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/LS_Test_DB_and_Snapshot.tar
db_checksum: eabb81b0570a20e8d1c65c3d29e4b4e723de537ebca0eada536e3155d5a96972
iri_args: *args
ixis: *ixi

db_with_ls_db: &db_with_ls_db
db: https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/LS_Test_LS_Db.tar.gz
db_checksum: d217729fd5efb0432d179ec59472f283cd61e8ad4ca9aab32e5c1f82632a1a29
iri_args: *args
ixis: *ixi

db_for_pruning: &db_for_pruning
db: https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/PruningTestDB.tar
db_checksum: 15122ba80c0a03dc5b6b4186e5d880d0a1a15b5a6de48bafe4002c4c9b682221
iri_args: *args

db_for_spent_addresses: &db_for_spent_addresses
db: https://s3.eu-central-1.amazonaws.com/iotaledger-dbfiles/dev/SpentAddressesTestDB.tar
db_checksum: 7e15b2cbc76585d6483668cb1709201daa71314e7d488d9e7d71d7052479e73e
iri_args: *args

nodes:
nodeA-m1: #name
<<: *api_tests_config_files
<<: *db_full

nodeB-m1:
<<: *api_tests_config_files
<<: *db_with_snapshot

nodeC-m1:
<<: *db_with_ls_db

nodeD-m1:
<<: *db_for_pruning

nodeE-m1:
<<: *db_for_spent_addresses