diff --git a/docs/dev/development.adoc b/docs/dev/development.adoc
index 49f2ffeec87..a84458427e7 100644
--- a/docs/dev/development.adoc
+++ b/docs/dev/development.adoc
@@ -178,7 +178,7 @@ NOTE: Refer https://github.com/golang/go/wiki/LearnTesting for Go best practices
 
 === Integration and e2e tests
 
-*Prerequisites:*
+*Prerequisites for an OpenShift cluster:*
 
 * A `minishift` or OpenShift environment with Service Catalog enabled:
 +
@@ -186,7 +186,13 @@ NOTE: Refer https://github.com/golang/go/wiki/LearnTesting for Go best practices
 $ MINISHIFT_ENABLE_EXPERIMENTAL=y minishift start --extra-clusterup-flags "--enable=*,service-catalog,automation-service-broker,template-service-broker"
 ----
 
-* `odo` and `oc` binaries in `$PATH`.
+*Prerequisites for a Kubernetes cluster:*
+
+* A `kubernetes` environment set up as a single-node cluster:
++
+For a single-node `kubernetes` cluster, install link:https://kubernetes.io/docs/tasks/tools/install-minikube/[`Minikube`]
+
+* `odo`, `oc` and `kubectl` binaries in `$PATH`.
 
 *Integration tests:*
 
@@ -331,7 +337,7 @@ There are some test environment variable that helps to get more control over the
 
 * UNIT_TEST_ARGS: Env variable UNIT_TEST_ARGS is used to get control over enabling test flags along with go test. For example, To enable verbosity export or set env UNIT_TEST_ARGS like `UNIT_TEST_ARGS=-v`.
 
-*Running integration tests:*
+*Running integration tests on OpenShift:*
 
 By default, tests are run against the `odo` binary placed in the PATH which is created by command `make`. Integration tests can be run in two (parallel and sequential) ways. To control the parallel run use environment variable `TEST_EXEC_NODES`. For example component test can be run
 
@@ -353,7 +359,55 @@ Run component command integration tests
 $ TEST_EXEC_NODES=1 make test-cmd-cmp
 ----
 
-NOTE: To see the number of available integration test file for validation, press `tab` just after writing `make test-cmd-`. However there is a test file `generic_test.go` which handles certain test spec easily and can run the spec in parallel by calling `make test-generic`. By calling make `test-integration`, the whole suite can run all the spec in parallel on two ginkgo test node except `service` and `link` irrespective of service catalog status in the cluster. However `make test-integration-service-catalog` runs all spec of service and link tests successfully in parallel on cluster having service catalog enabled. `make test-odo-login-e2e` doesn't honour environment variable `TEST_EXEC_NODES`. So by default it runs login and logout command integration test suite on a single ginkgo test node sequentially to avoid race conditions in a parallel run.
+NOTE: To see the number of available integration test files for validation, press the `tab` key just after writing `make test-cmd-`. However, there is a test file `generic_test.go` which handles certain test specs easily and can be run in parallel by calling `make test-generic`. By calling `make test-integration`, the whole suite will run all the specs in parallel on two ginkgo test nodes except `service` and `link`, irrespective of service catalog status in the cluster. However, `make test-integration-service-catalog` runs all specs of the service and link tests in parallel on a cluster having service catalog enabled. `make test-odo-login-e2e` doesn't honour the environment variable `TEST_EXEC_NODES`. So, by default, it runs the login and logout command integration test suites on a single ginkgo test node sequentially to avoid race conditions during a parallel run.
+
+*Running integration tests on Kubernetes:*
+
+By default, the link:https://github.com/openshift/odo/tree/master/tests/integration/devfile[`integration tests`] for the devfile feature, which is in experimental mode, run against a `kubernetes` cluster. For more information on Experimental mode, please read the link:https://github.com/openshift/odo/blob/master/docs/dev/experimental-mode.adoc[`odo experimental mode`] document.
+
+The tests are run against the `odo` binary placed in the PATH, which is created by the command `make`. Integration tests can be run in two ways (parallel and sequential). To control the parallel run, use the environment variable `TEST_EXEC_NODES`. For example, the devfile tests can be run as follows:
+
+* To run the tests on a Kubernetes cluster:
+
++
+Set the `KUBERNETES` environment variable
++
+----
+$ export KUBERNETES=true
+----
+
++
+Enable the experimental mode
++
+----
+$ export ODO_EXPERIMENTAL=true
+----
++
+OR
++
+----
+$ odo preference set Experimental true -f
+----
+
+* To run the tests in parallel on a test cluster (by default the tests will run in parallel on two ginkgo test nodes):
+
++
+Run catalog command integration tests
++
+----
+$ make test-cmd-devfile-catalog
+----
+
+
+* To run the catalog command integration tests sequentially or on a single ginkgo test node:
++
+Run catalog command integration tests
++
+----
+$ TEST_EXEC_NODES=1 make test-cmd-devfile-catalog
+----
+
+NOTE: To see the number of available integration test files for validation, press the `tab` key just after writing `make test-cmd-devfile-`. By calling `make test-integration-devfile`, the suite will run all test specs in parallel on two ginkgo test nodes.
 
 *Running e2e tests:*