
Improve Java-Client Tests and CI #689

Open
jmini opened this issue Jul 30, 2018 · 7 comments

@jmini
Member

jmini commented Jul 30, 2018

Input from @MatanRubin on #513:

this was a serious bug in a very basic scenario for a very common library (i.e. the generated client fails a simple GET for Java + RestTemplate). Don't we have tests that cover this kind of scenarios for all languages and libraries? Not trying to blame anyone here, just asking out of curiosity and to see maybe I can contribute some tests that will help prevent this kind of issues.

Well, the more I am confronted with changes to the java-client templates, the more I think we need a real test setup.

In theory, some of the Java clients have client tests implemented (replacing the generated test stubs) that run against a server serving the "Petstore sample". During the creation of OpenAPI-Generator, some of them were removed => #247. This reminds me that I should finish restoring them.

But this is not enough... Have a look at the PR I merged this morning: #646. It is about special chars being wrongly encoded in query params.
A test case would be ideal for that.

We need a setup where we can control the server for each test.
Using the generated code (XxxxApi) we send a request and we create expectations on what should arrive at the server.
Or in the server we define what should be returned and we check whether the client interpreted the answer correctly.

I am wondering which library can be used to do something like this: easy embedding of a small HTTP server. I have started to have a look at https://github.com/arteam/embedded-http-server; I can't imagine that there isn't something.
(it is not about mocking the server, but having a simple one where we can perform assertions on the requests that arrived and where we can control the answer).

Any suggestions?

Then I think that we should generate clients for all the libraries we support. I started something in this direction: https://github.com/jmini/openapi-experiments/tree/master/openapi-generator-utils/icespellmint. The spec used to create the client will be updated to test a lot of the cases we have (all HTTP verbs like "GET", "POST", "PUT"..., with or without parameters, with different supported content types (json, text, xml)).

This way we can control if something is an improvement or creates some regressions.

There is another interesting issue: #476. I have started to work on it, but again I want good coverage to be sure that I am not breaking anything.

Feedback is welcome.

@wing328
Member

wing328 commented Aug 1, 2018

Previously (a while ago) I suggested using our own server generator to generate a server stub that allows us to add new test cases easily.

If time permits, I may try to do that with the Rails generator or one of the PHP server generators.

@jmini
Member Author

jmini commented Aug 11, 2018

@wing328: Thank you for this feedback, but I would like to have something 100% Java. Each unit test for the java-client should look like this:

  • Small server started
  • Expected behavior defined (expected call defined in the server, expected response sent by the server for this request)
  • Call made with the generated java-client code
  • Assertions on the request that the server got and client-side on how the answer was understood by the client.

In my opinion, something like this will get way too complicated if you start to mix technologies.

@marcoreni
Contributor

I think that

  • Small server started
  • Expected behavior defined (expected call defined in the server, expected response sent by the server for this request)

can be accomplished with a mock server. I haven't had a chance to use one with Java yet, but a simple Google search brought me to http://wiremock.org/docs/junit-rule/, which may be easy enough to use (it allows stubbing via JSON files: http://wiremock.org/docs/stubbing/).

To accomplish also

  • Assertions on the request that the server got

requests can be verified (http://wiremock.org/docs/verifying/), or the server may be strictly configured for the expected request payload, so that anything out of the ordinary is rejected or returns an error to the caller (see the sketch below).
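For illustration, a minimal sketch of what this could look like with WireMock's JUnit 4 rule. The path, body and port handling are made up for the example, and a plain HttpURLConnection stands in for the generated client:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;
import static org.junit.Assert.assertEquals;

import com.github.tomakehurst.wiremock.junit.WireMockRule;
import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Rule;
import org.junit.Test;

public class PetApiWireMockSketchTest {

    // real HTTP server, started on a free port before each test
    @Rule
    public WireMockRule wireMock = new WireMockRule(options().dynamicPort());

    @Test
    public void getPetIsSentAndAnswered() throws Exception {
        // stubbing: define the answer the server returns for this request
        stubFor(get(urlEqualTo("/pet/1"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\":1,\"name\":\"doggie\"}")));

        // here the generated XxxxApi would be called; a plain
        // HttpURLConnection is used as a stand-in in this sketch
        URL url = new URL("http://localhost:" + wireMock.port() + "/pet/1");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestProperty("Accept", "application/json");
        assertEquals(200, connection.getResponseCode());

        // verification: assert on the request that actually arrived
        verify(getRequestedFor(urlEqualTo("/pet/1"))
                .withHeader("Accept", equalTo("application/json")));
    }
}
```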

I think that

  • Call made with the generated java-client code
  • Assertions [...] client-side on how the answer was understood by the client.

may be more difficult.

  • If code is generated before the test run, there should be a single OAI declaration containing resources for all the tests and this may lead to confusion and limit the possibilities (for example, creating a test case where authentication is required for one specific endpoint)
  • We may generate multiple APIs before the test run, but we'd have to associate each test with the corresponding API and this may be difficult as well
  • If code is generated during a specific unit test run (as a pre-hook inside the test file, for example), classes would have to be loaded dynamically
    • the developer wouldn't have access to autocomplete or other IDE features

@jmini
Member Author

jmini commented Aug 24, 2018

Thank you a lot for your inputs. Because I pointed to this issue, I did the same research yesterday evening ;-)

I think I was at first confused by http://wiremock.org and/or http://www.mock-server.com/ because I do not want to mock a server (mock as in Mockito or EasyMock); I want to have a real server so I can use the real HTTP libraries in the generated clients.

I understand your concern and your reasoning about generating code on the fly, but I will leave it out of scope of my experiment.
I agree with one point: having to regenerate the example by hand (or by a separate bin/*.sh script) is not good (this can be improved). There should be one step that performs everything.
I also like the idea of putting the generated code somewhere and running that code; a lot becomes easier (setup, writing tests, debugging, understanding what is going wrong in the generated code, making a small local change before editing the templates, ...).

I think I will give WireMock or Mock-Server a try.

@jmini
Member Author

jmini commented Aug 20, 2019

With jmini/openapi-experiments@ccc9146 I have created a PoC of a test suite for Java clients.

Code is located mainly in the icespellmint/ folder (icespellmint is an anagram of "simple client"). The OpenAPI spec used to generate the code is icespellmint.yaml.


Some points I wanted to highlight:

  • a new spec: less realistic than the Pet-Store one, but more exhaustive. I have tried to look at the spec to see what is possible and to have each case testing one thing. This is how I discovered that we were ignoring the TRACE HTTP method (see [OpenAPI v3] Support for TRACE operation #3647)
  • The examples are generated in a Java main (see RunOpenapitoolsCodegenMain); this is much quicker than using the CLI, because it needs only one JVM warm-up (see the sketch after this list).
    • This allows testing multiple configurations and combinations of configurations.
  • The tests (part of the test suite) are outside the generated project (in another maven module with the suffix -test-suite). This way there is no confusion between what is generated and what is overridden for the CI setup.
  • Usage of Mock-Server to stub the server part (see below)
  • Usage of abstract tests (called TCK) that do the tests (example: AbstractEtiamApiTck) and implementations that do the glue for a specific testing framework (example for rest-assured: EtiamApiTckTest)
  • If there is something specific to a library (like ReqSpec in Rest-Assured — see issue [Java rest-assured] reqSpec is reused across multiple requests #3370), it is possible to add a test only in that project (example: IpsumWithReqSpecApiTest)
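For context, a rough sketch of what a generation main like RunOpenapitoolsCodegenMain might look like; the spec path, output directories and the exact list of libraries are placeholders, not the actual PoC code:

```java
import org.openapitools.codegen.DefaultGenerator;
import org.openapitools.codegen.config.CodegenConfigurator;

public class RunCodegenSketch {

    public static void main(String[] args) {
        // one JVM for several generator/library combinations: much faster than
        // starting the CLI once per generated sample
        for (String library : new String[] {"okhttp-gson", "rest-assured", "google-api-client"}) {
            CodegenConfigurator configurator = new CodegenConfigurator()
                    .setGeneratorName("java")
                    .setLibrary(library)
                    .setInputSpec("icespellmint.yaml")          // placeholder path
                    .setOutputDir("generated/java-" + library); // placeholder path
            new DefaultGenerator().opts(configurator.toClientOptInput()).generate();
        }
    }
}
```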

Usage of Mock-Server:

I wanted to outline why the usage of this tool is interesting in the context of testing (a minimal sketch follows the list below).

  • allows defining, per test, the answer that will be returned by the server. This makes it possible to test corner cases.
  • adding a new test case is easier than with a petstore server, because the call and the stubbed answer from the server are close to each other.
  • focus on testing: you can write assertions about the expected incoming parameters or request body. You can verify that a method was called only once.
  • there is a REST API, meaning that stubbing and verification could be defined in any language using this REST API and JSON (instead of the Java way that I have used).
  • based on Netty (quick startup)
  • open-source project (2000 stars on GitHub, frequent releases)
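A minimal sketch of the per-test expectation and verification described above; the port and path are made up for the example, and a plain HttpURLConnection stands in for the generated client:

```java
import static org.junit.Assert.assertEquals;
import static org.mockserver.model.HttpRequest.request;
import static org.mockserver.model.HttpResponse.response;

import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.mockserver.integration.ClientAndServer;
import org.mockserver.verify.VerificationTimes;

public class EtiamApiMockServerSketchTest {

    private ClientAndServer server;

    @Before
    public void startServer() {
        server = ClientAndServer.startClientAndServer(1080); // example port
    }

    @After
    public void stopServer() {
        server.stop();
    }

    @Test
    public void cornerCaseNoContent() throws Exception {
        // per-test expectation: the answer the server will return for this call
        server.when(request().withMethod("GET").withPath("/etiam"))
              .respond(response().withStatusCode(204));

        // the generated client would be called here; a plain HttpURLConnection
        // stands in for it in this sketch
        HttpURLConnection connection =
                (HttpURLConnection) new URL("http://localhost:1080/etiam").openConnection();
        int status = connection.getResponseCode();

        // server-side assertion: the request arrived exactly once
        server.verify(request().withMethod("GET").withPath("/etiam"), VerificationTimes.once());
        // client-side assertion: the status was understood
        assertEquals(204, status);
    }
}
```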

Other ways to run Mock-Server (I did not try this yet):

In the Java world, having a JUnit Rule (or the equivalent with JUnit 5) is interesting, because this way we have a way to express “before the test suite” and “after the test suite”, and not only at class level as it is implemented now.
But I have also observed classpath issues between our client and Mock-Server. To avoid something like this, Mock-Server can also run as a separate process (not in the JVM running the tests). In this case there is complete isolation.

Example:

Because the google-api-client version that we use is quite old (see #3625), there is a conflict with guava, which is solved in the pom of java-google-api-client-test-suite.


Out of scope of this PoC:

  • Multi-language: I think that if we create something like this, we need to think about other languages, because OpenAPI-Generator is not limited to generating Java clients.
    • Orchestration: maybe maven is not the way to go; on the JVM, gradle with the build cache would be more performant, and in the end we need to use the build system of each technology (webpack or similar for JS-based test modules)
    • Maintenance: when we add a test, how do we ensure that it is added to all test suites? Some kind of generator would be required
  • Reporting: in my opinion we should change the normal maven behavior that stops on test failure, so that testing continues. It is OK to have 90% of the tests passing with a specific generator (or a generator with a combination of parameters). It is just a matter of reporting it, and in my eyes it is OK to have unsupported cases until someone works on improving the generator.
  • Splitting into multiple OpenAPI specs:
    • For the moment I use a single OpenAPI spec, but this is not realistic for more advanced constructs. If we add something to this input OpenAPI spec that prevents the generated project from compiling, the test coverage will be 0%.
    • I would prefer having the cases split into multiple input specs (some testing basic features, some testing OpenAPI v2, some testing a specific OpenAPI v3 feature). This way, if an input spec contains a construct that is not supported at all, it can be ignored, but it will not prevent the other parts of the test suite from running.
  • Management of the OpenAPI-Generator version: this PoC was tested only against SNAPSHOT versions of OpenAPI-Generator (by regenerating the clients at specific points in time, using a specific commit-id)
    • In my opinion such a test suite should be able to evolve on its own and to be run against different versions of OpenAPI-Generator (latest SNAPSHOT, last released version or even other released versions), in order to spot regressions.
  • Documentation: I think that for each test case we should be able to explain
    • what the input OpenAPI spec used to generate the example was
    • what the parameters and request body (input) are, and what the expected output based on the server answer is
    • this is required to maintain the test suite over time (to understand what is tested - especially useful for the corner cases)
    • this allows documenting corner cases properly.

Next steps:

The test suite needs to be extended.

My vision is that small test cases representing constructs that are used in real projects could be contributed.
For example, my company has a definition containing a schema with two properties, type and $_type (see the SomeObj schema defined in #3676 (review) as a small reproducer). This is some legacy that we cannot change, so we expect the produced code to remain valid over time.

I have proposed this to start a discussion. I think that one approach could be to set this up for real (with CI, with some automation to update OpenAPI-Generator, with some reporting) as a separate project, OpenAPITools/openapi-generator-test-suite.
This could run against the different master branches that we have and against the PRs that are submitted here.

When the project is mature enough and has shown its viability, we could think about merging it here (replacing or augmenting the generated petstore samples).

Feedback is welcome.

@jmini
Member Author

jmini commented Aug 22, 2019

To put it another way:

My goal with this OpenAPI-Generator test suite (which will probably be different from the proposed PoC, which focuses only on Java) is to test the complete cycle:

an OpenAPI spec as input -> OpenAPI-Generator generates a client that compiles -> given some input parameters for one operation -> the request must arrive server side correctly -> and given a response from the server -> the client must interpret it correctly

The same OpenAPI spec (which should be as minimal as possible to test a specific feature) can be used for several test cases where just the input parameters and the server response differ (see the sketch below).
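A hedged sketch of one such test case; ApiClient, EtiamApi, Etiam and getEtiam are hypothetical names standing for whatever the generator would produce from such a minimal spec, and the Mock-Server part follows the same pattern as in the PoC above:

```java
import static org.junit.Assert.assertEquals;
import static org.mockserver.model.HttpRequest.request;
import static org.mockserver.model.HttpResponse.response;

import org.junit.Test;
import org.mockserver.integration.ClientAndServer;

public class FullCycleSketchTest {

    @Test
    public void clientInterpretsTheServerAnswer() throws Exception {
        ClientAndServer server = ClientAndServer.startClientAndServer(1080); // example port
        try {
            // given a response from the server for one operation of the spec
            server.when(request().withMethod("GET").withPath("/etiam/42"))
                  .respond(response()
                          .withStatusCode(200)
                          .withHeader("Content-Type", "application/json")
                          .withBody("{\"id\":42,\"name\":\"answer\"}"));

            // given some input parameters, call the generated client
            // (ApiClient, EtiamApi, Etiam and getEtiam are hypothetical generated names)
            ApiClient apiClient = new ApiClient();
            apiClient.setBasePath("http://localhost:1080");
            EtiamApi api = new EtiamApi(apiClient);
            Etiam result = api.getEtiam(42L);

            // the request must arrive server side correctly ...
            server.verify(request().withMethod("GET").withPath("/etiam/42"));
            // ... and the client must interpret the answer correctly
            assertEquals(Long.valueOf(42L), result.getId());
            assertEquals("answer", result.getName());
        } finally {
            server.stop();
        }
    }
}
```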

@bkabrda
Contributor

bkabrda commented Oct 2, 2019

I think this is a very good proposal. It seems to me that especially with larger changes like my PR to introduce explicitly nullable fields (#3474) which made some of the templates quite complex, it's going to get harder and harder to maintain the codebase without introducing regressions.

Therefore I fully support this proposal - I think the idea of having generated clients send requests, verifying them on the server side and then sending back responses to be verified on the client side is sufficient to test the features that openapi-generator templates should be providing. This will come in especially handy with features like the one mentioned above, where it's necessary to test pretty much all the combinations of a field being required/nullable/readonly, which can get time consuming and tricky.

I don't have too much time to spare, but I'd be happy to do some code review/testing if that would help. I'm starting to rely on the Java clients being generated in a stable way, so I'll support any effort that helps with that.
