Ability to specify benchmarks that should run synchronously #5412

Closed
peterHakio opened this issue Mar 21, 2024 · 3 comments · Fixed by #5444

Comments

peterHakio commented Mar 21, 2024

Clear and concise description of the problem

In cases where the code is not only JS but also uses a shared resource such as a database, benchmarks are difficult to run in isolation.

Example: we are currently comparing Prisma vs Kysely and have written these tests. But since they use a shared resource, it is not optimal to run them in parallel.

Here is one of the tests:

describe("Prisma vs Kysely 1000 objects", () => {
  bench("Prisma", async () => {
    await prisma.forecast.findMany({
      take: 1000,
    })
  })
  bench("Kysely", async () => {
    await db.selectFrom("Forecast").selectAll().limit(1000).execute()
  })
  bench("Prisma Raw SQL", async () => {
    await prisma.$queryRaw`
      SELECT
        *
      FROM
        "Forecast"
      LIMIT
        1000;
    `
  })

  bench("Kysely Raw SQL", async () => {
    await sql`
      SELECT
        *
      FROM
        "Forecast"
      LIMIT
        1000;
    `.execute(db)
  })
})

Suggested solution

An ability to run vitest bench --synchronous, where all promises are awaited and the benchmarks are executed one at a time.
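
For illustration only, such an option might also be exposed through the config; the benchmark.sequential key in the sketch below is hypothetical and is not an existing Vitest setting:

import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    benchmark: {
      // hypothetical option sketching this feature request; not a real Vitest setting today
      sequential: true,
    },
  },
});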

Alternative

No response

Additional context

No response


hi-ogawa (Contributor) commented Mar 22, 2024

It looks like multiple describe suites are run at the same time:

if (benchmarkSuiteGroup.length)
  await Promise.all(benchmarkSuiteGroup.map(subSuite => runBenchmarkSuite(subSuite, runner)))
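
(For contrast, a sequential version of that call would be roughly a loop that awaits each suite in turn, e.g. for (const subSuite of benchmarkSuiteGroup) await runBenchmarkSuite(subSuite, runner); this is a sketch only, not the actual change from the linked PR.)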

However, multiple bench calls inside a single describe are executed sequentially, so in your sample code, with four bench calls inside a single describe, they shouldn't interfere with each other. Can you check whether you have multiple describe blocks?

Though I'm not familiar with the field of benchmarking, I feel everything should run sequentially by default, which would probably give better stability. Maybe we can check what other benchmark frameworks generally do.

I'm playing with examples like this. Multiple bench files also run in parallel by default, but they can be made sequential with fileParallelism = false:

https://stackblitz.com/edit/vitest-dev-vitest-cdshc4?file=test.log
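
For reference, a minimal config sketch (assuming a vite.config.ts at the project root) that applies this unconditionally:

import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // run benchmark (and test) files one after another instead of in parallel
    fileParallelism: false,
  },
});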

Sample code
import { bench, describe } from "vitest"
import fs from "node:fs";

const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms))

describe("suite1", () => {
  let i = 0;
  bench("bench1", async () => {
    await fs.promises.appendFile("test.log",`file1, suite1, bench1 = ${i++}\n`)
    await sleep(500);
  }, { time: 0, iterations: 0 })

  let j = 0;
  bench("bench2", async () => {
    await fs.promises.appendFile("test.log",`file1, suite1, bench2 = ${j++}\n`)
    await sleep(500);
  }, { time: 0, iterations: 0 })
})

describe("suite2", () => {
  let i = 0;
  bench("bench1", async () => {
    await fs.promises.appendFile("test.log",`file1, suite2, bench1 = ${i++}\n`)
    await sleep(500);
  }, { time: 0, iterations: 0 })
})

peterHakio (Author) commented Mar 22, 2024

Thank you very much. We did what you suggested and it worked fine.

We implemented it in vite.config.js like this, so that our regular tests can still run in parallel:

import { defineConfig } from 'vitest/config';

export default defineConfig(({ mode }) => ({
  test: {
    // `vitest bench` runs with mode set to "benchmark", so file parallelism is
    // disabled only for benchmark runs and regular test runs stay parallel
    fileParallelism: mode !== "benchmark",
  },
}));

And regarding whether benchmarks should run sequentially: I can see arguments for both. A good first step could be to add a small section about it to the documentation.

sheremet-va (Member) commented, quoting hi-ogawa:

Though I'm not familiar with the field of benchmarking, I feel everything should run sequentially by default, which would probably give better stability.

I do agree with that. I think we should flatten all benchmarks and use the suite just for reporting.
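
To make that idea concrete, here is a rough sketch (not the actual implementation that landed in #5444) of flattening nested suites into one sequential run, with the suite name kept only for reporting:

type BenchTask = { suiteName: string; name: string; fn: () => Promise<void> };

// Collect every bench from nested suites into a single flat list.
function flatten(
  suites: { name: string; benches: { name: string; fn: () => Promise<void> }[] }[],
): BenchTask[] {
  return suites.flatMap(suite =>
    suite.benches.map(b => ({ suiteName: suite.name, name: b.name, fn: b.fn })),
  );
}

// Run the flattened list one bench at a time so a shared resource such as a
// database is never hit concurrently; the suite name is used only in the label.
async function runSequentially(tasks: BenchTask[]) {
  for (const task of tasks) {
    await task.fn();
    console.log(`${task.suiteName} > ${task.name} done`);
  }
}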

@hi-ogawa hi-ogawa moved this to P2 - 3 in Team Board Mar 28, 2024
@hi-ogawa hi-ogawa removed the status in Team Board Mar 28, 2024
@github-actions github-actions bot locked and limited conversation to collaborators Apr 22, 2024