Create a tldr version of the tmt man page #968

psss opened this issue Dec 3, 2021 · 22 comments · May be fixed by #3176

@psss (Collaborator) commented Dec 3, 2021

It would be nice to add tmt to the tldr project (https://github.com/tldr-pages/tldr).

Also, what about having a tmt tldr subcommand as well?

@psss psss added good first issue Good for newcomers documentation Improvements or additions to documentation labels Dec 3, 2021
@jscotka (Collaborator) commented Dec 7, 2021

For debugging, I would prefer to have commands there like (see the sketch after this list):

  • login to last executed machine (not important how it was provisioned)
  • show last log
  • show last failed test
  • run failed test again
  • ...
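
A sketch of how some of these might map to existing commands; tmt run --last login comes from the tmt docs and the report form appears in replies below, so treat this as an assumption, not a verified list:

```sh
# Log in to the guest of the last run, regardless of how it was provisioned
tmt run --last login

# Show the report of the last run; -v/-vv/-vvv add results, log links, full output
tmt run --last report -vvv
```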

@martinhoyer (Collaborator) commented

@psss I presume we want to maintain tldr .md file(s) within this repo and create pull requests to github.com/tldr-pages/tldr?

> Also, what about having a tmt tldr subcommand as well?

Wouldn't that be basically the same as tldr tmt? Albeit one requires a tldr client and access to the pages database.
How about tmt --examples, in the same style as a tldr page, but without the tmt name, description, and installation instructions?

@psss (Collaborator, Author) commented Sep 29, 2022

> @psss I presume we want to maintain tldr .md file(s) within this repo and create pull requests to github.com/tldr-pages/tldr?

Yes, agreed. And since the content will already be there, why not add the tmt tldr command as well.

> Also, what about having a tmt tldr subcommand as well?

> Wouldn't that be basically the same as tldr tmt? Albeit one requires a tldr client and access to the pages database.

Exactly, the content would be the same. It could be useful for those who would like a quick start from the command line but don't have (or don't want to have) the tldr package installed.

> How about tmt --examples, in the same style as a tldr page, but without the tmt name, description, and installation instructions?

I would slightly prefer a separate subcommand instead of the --examples option, but I'm not strictly against it.

@qcheng-redhat (Contributor) commented

Hi @psss, I will look into this issue. Thanks.

@idorax (Contributor) commented Apr 9, 2024

Hi @martinhoyer, please let me know what .md files should be added to support both simplified Chinese and traditional Chinese when you start to fix this issue :-)

@martinhoyer (Collaborator) commented

Thanks @idorax! Once we have the English version finalized, I'll ping you. It can be done in a separate PR later as well :)
Just fyi, this is the structure of tldr languages: https://github.com/tldr-pages/tldr/blob/main/CONTRIBUTING.md#directory-structure
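
For illustration, following that structure the pages would presumably live in (paths assumed from the contributing guide):

```
pages/common/tmt.md        # English page
pages.zh/common/tmt.md     # simplified Chinese translation
pages.zh_TW/common/tmt.md  # traditional Chinese translation
```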

@idorax (Contributor) commented Apr 9, 2024

No problem, @martinhoyer! I'll take a look at tldr/CONTRIBUTING.md#directory-structure, thanks for sharing the doc!

@psss (Collaborator, Author) commented Jun 20, 2024

@martinhoyer, what about including this in 1.35? Should be fairly easy.

@martinhoyer (Collaborator) commented

> @martinhoyer, what about including this in 1.35? Should be fairly easy.

Yep, it's on the list, thanks for the reminder. 1.35 sounds good.

@abitrolly (Contributor) commented Jun 23, 2024

Here's the tldr page as I see it, and I don't see a lot. :D

# tmt

> Runs tests in containers.
> More information: <https://tmt.readthedocs.io/en/stable/>.

- Make project tests manageable by `tmt`:

`tmt init`

- Describe how to run tests:

`tmt tests create`

- Run all tests in a container or VM:

`tmt run`

- Show last failed test:

`???`

- Show last log:

`tmt ???`

- Run failed test again:

`tmt ???`

- Login to last executed container or VM:

`tmt ???`
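
Based on commands suggested in the replies below, the missing entries might resolve to something like this (a sketch, not a verified page):

```sh
# Show last log / last failed test: re-run just the report step of the last run
tmt run --last report -vvv

# Login to last executed container or VM
tmt run --last login
```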

@martinhoyer (Collaborator) commented

> Here's the tldr page as I see it, and I don't see a lot. :D

Thanks, much appreciated. :)

@happz (Collaborator) commented Jun 24, 2024

What I use often, FWIW:

  • list/search objects: tmt <test|plan|story> ls [pattern]
  • "what tests would a plan foo run?": tmt run -vv discover plan -n foo
  • add context: tmt -c foo=bar -c baz=qux,quux ...
  • run just some plans and some tests: tmt run ... plan -n foo test -n bar
  • tmt [test|plan|story] lint
  • tmt [test|plan|story] export -h yaml

@psss (Collaborator, Author) commented Jun 24, 2024

I like @happz's favorites above. Perhaps one thought: what about showing illustrative examples for each command, i.e. not enumerating all the test|plan|story combinations? Something like:

tmt plan ls <pattern> .......... list plan names
tmt test show <pattern> ........ show details for given tests
tmt story coverage <pattern> ... docs, test & implementation coverage for selected stories

In the examples I'd suggest using long options to make them as self-explanatory as possible (users will find the short versions soon enough):

tmt run -v plan --name foo test --name bar

I'd suggest also including at least one tmt try example, perhaps one of these?

cd directory/with/test/code && tmt try ... run and debug a single test
tmt try fedora@container ................. quickly experiment with fedora in a container

Or maybe both?

@martinhoyer (Collaborator) commented

We can reference subcommands: https://github.com/tldr-pages/tldr/blob/main/CONTRIBUTING.md#subcommands
+1 for tmt try
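
Per that convention, subcommands get pages of their own named with a dash, so the set could look like this (file names assumed, following the guide):

```
pages/common/tmt.md
pages/common/tmt-run.md
pages/common/tmt-try.md
```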

@abitrolly (Contributor) commented

How do you actually run tests and see the results?

I tried the example test, it failed, and the output is not helpful. It is so verbose that it looks like debug info, yet that debug info still doesn't say what went wrong: which test failed, why, etc.

$ tmt run --all provision --how container               
/var/tmp/tmt/run-011

/default/plan
    discover
        how: fmf
        directory: /data/s/gitlab-ai-gateway/xxx
        summary: 1 test selected
    provision
        queued provision.provision task #1: default-0
        
        provision.provision task #1: default-0
        how: container
        multihost name: default-0
        arch: x86_64
        distro: Fedora Linux 40 (Container Image)
    
        summary: 1 guest provisioned
    prepare
        queued push task #1: push to default-0
        
        push task #1: push to default-0
    
        queued prepare task #1: requires on default-0
        
        prepare task #1: requires on default-0
        how: install
        summary: Install required packages
        name: requires
        where: default-0
        package: /usr/bin/flock
    
        queued pull task #1: pull from default-0
        
        pull task #1: pull from default-0
    
        summary: 1 preparation applied
    execute
        queued execute task #1: default-0 on default-0
        
        execute task #1: default-0 on default-0
        how: tmt
        progress:              
    
        summary: 1 test executed
    report
        how: display
        summary: 1 error
    finish
    
        container: stopped
        container: removed
        container: network removed
        summary: 0 tasks completed

total: 1 error 

@psss (Collaborator, Author) commented Jun 25, 2024

> How do you actually run tests and see the results?

If you're interested in individual test results, it would be tmt run -v. To see links to logs, tmt run -vv; for the whole test output, tmt run -vvv.
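
Summarized:

```sh
tmt run -v     # individual test results
tmt run -vv    # plus links to logs
tmt run -vvv   # plus the whole test output
```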

> I tried the example test, it failed, and the output is not helpful. It is so verbose that it looks like debug info, yet that debug info still doesn't say what went wrong: which test failed, why, etc.

Yeah, I agree that some of the details could/should be omitted from the default output. I kicked off #2534 where I would like to cover the queued prepare task message. Just didn't have time to finish it.

@abitrolly (Contributor) commented

tmt run --all -vvv provision --how container got even more output. :D I like that the output exposes everything tmt is doing, but getting all that just to troubleshoot a single test is overkill.

From the 2.7kB of output, these are the relevant lines.

    report
        how: display
        order: 50
            errr /test01
                output.txt: /var/tmp/tmt/run-017/default/plan/execute/data/guest/default-0/test01-1/output.txt
                content:
                    ++ mktemp
                    + tmp=/tmp/tmp.huNW3sQbKy
                    + tmt --help
                    ./test.sh: line 4: tmt: command not found
        summary: 1 error

This is the test that was created with the tests create command.

$ tmt tests create test01
Test template (shell or beakerlib): shell
Test directory '/data/s/gitlab-ai-gateway/xxx/test01' created.
Test metadata '/data/s/gitlab-ai-gateway/xxx/test01/main.fmf' created.
Test script '/data/s/gitlab-ai-gateway/xxx/test01/test.sh' created.

It would be okay if a comment in the generated file described how to fix the "tmt: command not found" error.

Still in the context of the tldr thread:

How to view the output of the last command? (I know tmt saves reports and don't want to rerun tests each time)

I almost feel like creating a tmt wrapper that would provide a zero-shot CLI interface explaining what to do without docs, and TAP output by default.

Is it possible to custom format tmt output?

@happz (Collaborator) commented Jun 25, 2024

(Not responding to all points)

> How to view the output of the last command? (I know tmt saves reports and don't want to rerun tests each time)

tmt run --last report -vv -h display should run just the report step for the last run. Use run --id /var/tmp/run-foo to do the same with any run.

> I almost feel like creating a tmt wrapper that would provide a zero-shot CLI interface explaining what to do without docs, and TAP output by default.

> Is it possible to custom format tmt output?

Now that's a very interesting idea. No, it is not possible to custom-format the current tmt output, at least not easily. What you observe is logging (it goes to stderr) rather than the actual output of tmt run; tmt run itself has close to no output at all, IIRC, so stdout would be pretty barren.

Adding TAP support sounds very doable. IIUIC, with very limited experience, it's a plain-text interface, a stream of simple lines. Is it something that's consumed through pipes, or is it common to exchange the output via files too? Would you be able to share more about how TAP support would help integrate tmt with your workflows?

We could go the easy way and add a new report plugin to emit the output after testing is complete, which is a pretty standard situation. Or we could extend tmt core, the execute step and its plugins to emit these lines in real time, as the testing progresses. That would be a new concept: execute plugins do emit some progress info as tests complete, but it's hardwired, and this would need a bit more work.
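
For a rough sense of how little TAP needs, here is a generic sketch rendering a list of results as TAP with jq; the results.json file name and its fields are hypothetical, not tmt's actual result format:

```sh
# Hypothetical input: [{"name": "/test01", "result": "pass"}, ...]
jq -r '
  "1.." + (length | tostring),
  (to_entries[] |
    (if .value.result == "pass" then "ok " else "not ok " end)
    + ((.key + 1) | tostring) + " - " + .value.name)
' results.json
```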

@abitrolly (Contributor) commented

> Would you be able to share more about how TAP support would help integrate tmt with your workflows?

Not specifically TAP, but copy-pasting a failed test run into a bug report is much easier when the results are represented in a concise format rather than in 2 kB of text.

ok 1 - Input file opened
not ok 2 - First line of the input valid
ok 3 - Read the rest of the file

As an example of output customization, the Vale .md validator uses Go templates.

$ vale --output='path/to/my/template.tmpl' somefile.md

The template receives an array of these objects: https://vale.sh/docs/integrations/guide/#--outputjson

{
  "index.md": [
    {
      "Action": {
        "Name": "",
        "Params": null
      },
      "Check": "write-good.Passive",
      "Description": "",
      "Line": 6,
      "Link": "",
      "Message": "'was created' may be passive voice. Use active voice if you can.",
      "Severity": "warning",
      "Span": [
        59,
        69
      ],
      "Match": "was created"
    },
  ]
}

The array could probably be a JSON Lines generator for streaming processing, or maybe it already is. The only problem with Vale is that if you need to output results to the screen and also save a report, e.g. for CI/CD post-processing, you need to run Vale twice.
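
For the JSON Lines idea, flattening Vale's per-file arrays into one object per line is a jq one-liner (a sketch; it relies on Vale's documented JSON output mode):

```sh
vale --output=JSON somefile.md \
  | jq -c 'to_entries[] | .key as $file | .value[] | {file: $file} + .'
```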

@happz (Collaborator) commented Jun 26, 2024

> Would you be able to share more about how TAP support would help integrate tmt with your workflows?

> Not specifically TAP, but copy-pasting a failed test run into a bug report is much easier when the results are represented in a concise format rather than in 2 kB of text.

> ok 1 - Input file opened
> not ok 2 - First line of the input valid
> ok 3 - Read the rest of the file

I see. Is it something that should be streamed as tests progress, or is it fine to dump this kind of output in the report stage once all tests finish?

> As an example of output customization, the Vale .md validator uses Go templates. […] The only problem with Vale is that if you need to output results to the screen and also save a report, e.g. for CI/CD post-processing, you need to run Vale twice.

This seems like a perfect match for a report plugin. We're using Jinja2 templates, which shouldn't be a problem, and report plugins have access to all results; rendering templates is perfectly common. Would you mind filing an issue for this? It's basically the existing html report plugin, just a more generic picture, and the html plugin does not print the rendered output to stdout, so a bit of refactoring will be needed. But all in all, it's definitely easy and doable.

@abitrolly (Contributor) commented

> Is it something that should be streamed as tests progress, or is it fine to dump this kind of output in the report stage once all tests finish?

Unbuffered streaming of test results is necessary for on-screen reports and for troubleshooting when the whole test harness crashes. CI/CD is fine with post-processing steps, because reports are uploaded as artifacts for further processing.

@abitrolly (Contributor) commented

@happz created #3049 in a hurry. :D

@martinhoyer linked a pull request Aug 28, 2024 that will close this issue