Fix Tests #106

Merged 24 commits into main from feat/fix-tests on Aug 15, 2023

Conversation

@tomasciccola (Contributor) commented Aug 7, 2023

This should close #79.
Things I've noticed when writing different MapeoDocs for testing:

  • passing docId fails JSONSchema validation (JSONSchema treats it as an additional - not allowed - field)
  • We are not handling JSONSchema validation errors, we could throw them or return them (which would change the validation fn signature)
 const validationFn = validations[schemaName]
 if (validationFn(obj)) return true
 throw new Error(
   validationFn.errors.map((err) => JSON.stringify(err, null, 4)).join('\n')
 )

I'm still missing some tests:

  • check equality of decoded object with passed object
  • write a bunch of docs with different missing properties and invalid types for fields
  • test decoding a buffer with wrong random header
  • test decoding a buffer with header with wrong schemaType and/or schemaVersion
  • test JSONSchema validation
  • anything missing?

Also, neither important nor a priority, but we're using tape for testing. We could switch to brittle since they are pretty similar. I don't have a rationale for that change beyond uniformity in tooling...

@tomasciccola tomasciccola self-assigned this Aug 7, 2023
@sethvincent sethvincent self-assigned this Aug 7, 2023
@gmaclennan (Member) commented:

  • passing docId fails JSONSchema validation (JSONSchema treats it as an additional - not allowed - field)

Currently the validation function validates the "valueSchema", e.g. the schema without the common properties. This is because the main place where we validate data is on user (API) input, and the API never passes the common properties - those are set on the server. I don't think there is a need to validate objects that include the common properties?
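
e.g., a hypothetical pair of calls (a sketch - the observation value fields here are assumptions, not the real schema):

// validating only value fields (no common properties) passes
validate('observation', { lat: -20.2, lon: 10 })               // true
// including a common property such as docId fails JSONSchema
// validation as an additional (not allowed) property
validate('observation', { lat: -20.2, lon: 10, docId: 'abc' }) // false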

  • We are not handling JSONSchema validation errors, we could throw them or return them (which would change the validation fn signature)

 const validationFn = validations[schemaName]
 if (validationFn(obj)) return true
 throw new Error(
   validationFn.errors.map((err) => JSON.stringify(err, null, 4)).join('\n')
 )

I think it would be good to do the same as Ajv here:

export function validate(schemaName, obj) {
    const isValid = validations[schemaName](obj);
    validate.errors = validations[schemaName].errors
    return isValid
}
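
Callers would then use it the same way as Ajv is used (a sketch, assuming an 'observation' schema name):

const isValid = validate('observation', doc)
if (!isValid) {
    // as with Ajv, errors from the most recent call are
    // exposed on the validate function itself
    console.error(validate.errors)
}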

I'm still missing some tests:

  • check equality of decoded object with passed object
  • write a bunch of docs with different missing properties and invalid types for fields
  • test decoding a buffer with wrong random header
  • test decoding a buffer with header with wrong schemaType and/or schemaVersion
  • test JSONSchema validation

I don't think this needs to be thorough - the Ajv tests should catch that. We just need to check that a correct document of each schema type validates correctly, to make sure we're exporting the right thing.

  • anything missing?

Also, neither important nor a priority, but we're using tape for testing. We could switch to brittle since they are pretty similar. I don't have a rationale for that change beyond uniformity in tooling...

We use tape in almost everything else, so I'm OK with sticking to that. brittle has been OK to use in mapeo-core-next, but I don't think it adds anything that we need over tape.

@gmaclennan (Member) left a comment

Progress looks good. A suggestion about how to write this to avoid wrapping test functions:

test('bad docs throw', t => {
    t.plan(fixtures.badDocs.length)
    for (const { doc, text } of fixtures.badDocs) {
        t.throws(() => encode(doc), text)
    }
})
test('good docs encode and decode correctly', t => {
    t.plan(fixtures.goodDocs.length * 2)
    for (const { doc } of fixtures.goodDocs) {
        let buf
        t.doesNotThrow(() => {
            buf = encode(doc)
        })
        const decoded = decode(buf, parseVersionId(doc.versionId))
        t.deepEqual(decoded, doc)
    }
})

FYI if you pass an async function to test(), then you don't need to make a plan - the test is considered finished when the function resolves. You can do that, or make the plan like I do above.
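
e.g. the async variant would look something like this (a sketch, using the same assumed fixtures as above):

test('good docs encode and decode correctly', async (t) => {
    for (const { doc } of fixtures.goodDocs) {
        const buf = encode(doc)
        const decoded = decode(buf, parseVersionId(doc.versionId))
        t.deepEqual(decoded, doc)
    }
    // no t.plan() needed: the test ends when the async function resolves
})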

deepEqual might fail because of the difference between undefined properties and uninitialized properties, e.g.

const a = {
  optionalProp: undefined
}
const b = {}

Those won't be deep equal, but will be valid as far as our schema and the type are concerned. Probably need a different deep equal function for testing this.

test/fixture.js (outdated):
onlyId: {
  text: 'test encoding of record with missing fields',
  plan: 1,
  doc: { id: randomBytes(32).toString('hex') }
},
@gmaclennan (Member):

Maybe change id to docId, since this is testing missing fields, not invalid fields.

@tomasciccola (Contributor, Author):

Yeah, that field was copy-pasted from older tests; I should change that.

@gmaclennan (Member) commented Aug 8, 2023

deepEqual might fail because of the difference between undefined properties and uninitialized properties

I came across this same issue today when writing tests. I solved it the "dumb" way by round-tripping the object to JSON in order to remove undefined properties.

e.g. you can do:

function stripUndef(obj) {
  return JSON.parse(JSON.stringify(obj))
}

t.deepEqual({ foo: undefined }, {}) // fails because `foo` is missing from second parameter
t.deepEqual(stripUndef({ foo: undefined }), {}) // passes

@tomasciccola (Contributor, Author) commented:

I think it would be good to do the same as Ajv here:

export function validate(schemaName, obj) {
    const isValid = validations[schemaName](obj);
    validate.errors = validations[schemaName].errors
    return isValid
}

It's not clear what's happening here: it's not throwing or returning any error, just setting a field.

@tomasciccola (Contributor, Author) commented:

Currently the validation function validates the "valueSchema", e.g. the schema without the common properties. This is because the main place where we validate data is on user (API) input, and the API never passes the common properties - those are set on the server. I don't think there is a need to validate objects that include the common properties?

Given this, should the validate fn that we export delete common props before trying to validate, or is this a responsibility on the client side (since all common props would be added on the server)?

@gmaclennan (Member) commented:

Given this, should the validate fn that we export delete common props before trying to validate, or is this a responsibility on the client side (since all common props would be added on the server)?

No, because if the client passed a value with these props (e.g. with a docId) then we want the validation to fail, because that shouldn't happen.

@sethvincent (Contributor) commented:

Looks like Gregor covered the important things here, but let me know if there's a specific thing I can help with.

Unnecessary side note: brittle is really cool and has made it a lot easier to write tests for code that combines async functions and event emitters - though that's not really needed here. It also has built-in code coverage! 🐰💨

@tomasciccola (Contributor, Author) commented Aug 9, 2023

function stripUndef(obj) {
  return JSON.parse(JSON.stringify(obj))
}

t.deepEqual({ foo: undefined }, {}) // fails because `foo` is missing from second parameter
t.deepEqual(stripUndef({ foo: undefined }), {}) // passes

The issue I'm having with this is that, despite stripping undefined values, comparison still fails for a field that is an object whose inner fields are arrays.
This happens, for example, with the defaultPresets field of project, which is

{point:[], vertex:[], area:[], line:[], relation:[]}

@tomasciccola (Contributor, Author) commented:

Another issue I'm having is with decimals on floats.
i.e.: when encoding lat in observation, if I encode a value like -20.2, I get back from decoding something like -20.2244782

Tomás Ciccola added 2 commits August 9, 2023 15:35
when decoding a protobuf with a float, the number of decimals is not
predictable, nor will it necessarily match the number of decimals of the
field we encoded previously...
@gmaclennan (Member) commented:

The issue I'm having with this is that, despite stripping undefined values, comparison still fails for a field that is an object whose inner fields are arrays.

This might actually be a bug. In general I've made array and object properties required to avoid this problem.
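
e.g., in the JSON schema for project that would look something like this (a sketch, not the actual schema file):

{
  "properties": {
    "defaultPresets": {
      "type": "object",
      "properties": {
        "point": { "type": "array" },
        "vertex": { "type": "array" },
        "area": { "type": "array" },
        "line": { "type": "array" },
        "relation": { "type": "array" }
      },
      "required": ["point", "vertex", "area", "line", "relation"]
    }
  },
  "required": ["defaultPresets"]
}

Because every doc must then include these properties, the undefined-vs-missing mismatch never arises for them.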

@gmaclennan (Member) commented:

The decimal issue I've also come across. See digidem/comapeo-core#150 - this suggests it might be a protobuf issue rather than SQLite. Maybe both. We could change how we encode floats - is there a more precise option? Or we could go for one of the options laid out in the issue (integer or string).

@tomasciccola (Contributor, Author) commented:

The issue I'm having with this is that, despite stripping undefined values, comparison still fails for a field that is an object whose inner fields are arrays.

This might actually be a bug. In general I've made array and object properties required to avoid this problem.

I had a bug in src/lib/encode-conversions.ts:31-33 which was tripping me up. It's fixed now...

The decimal issue I've also come across. See digidem/mapeo-core-next#150 - this suggests it might be a protobuf issue rather than SQLite. Maybe both. We could change how we encode floats - is there a more precise option? Or we could go for one of the options laid out in the issue (integer or string).

It seems to be on the protobuf side since there's no SQLite here. It's weird that I haven't found an issue related to this on ts-proto.

@gmaclennan (Member) commented:

I think we should actually at least be using double for non-integers. I see float is only 32-bit; that's probably the source of the large inaccuracies. Let's make that change first and see how it changes things, then look at integer or string encoding if needed.
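
(A quick way to see the 32-bit effect from Node - Math.fround rounds a JS number, which is a 64-bit double, to the nearest 32-bit float:)

Math.fround(-20.2) // -20.200000762939453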

@tomasciccola (Contributor, Author) commented:

I think we should actually at least be using double for non-integers. I see float is only 32-bit; that's probably the source of the large inaccuracies. Let's make that change first and see how it changes things, then look at integer or string encoding if needed.

Yeah, using doubles seems to fix the issue.

On the other hand, now I'm having another issue.
If I encode an object and avoid setting some values, i.e.:

// here I'm not passing appearance, snakeCase, options and universal
    {
      docId: randomBytes(32).toString('hex'),
      versionId: `${randomBytes(32).toString('hex')}/0`,
      schemaName: 'field',
      createdAt: new Date().toJSON(),
      updatedAt: new Date().toJSON(),
      links: [],
      tagKey: "myTagKey",
      type: "text",
      label: "myLabel"
    },

protobuf won't throw an error, but I also get back from decoding the same doc BUT with the added values (these are added by the conversion functions before encoding), which means that a comparison would fail (and so would the test). This failing test could be a good thing (hey! you're missing some fields), but in that case, shouldn't we avoid adding the missing fields with their default values and let the encoding fail??
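
One way to do that could be a guard in the conversion step that throws instead of filling defaults - a hypothetical sketch (assertRequired and the required-field list are assumptions, not existing code):

function assertRequired(doc, requiredFields) {
    const missing = requiredFields.filter((key) => doc[key] === undefined)
    if (missing.length > 0) {
        throw new Error(`cannot encode ${doc.schemaName}: missing ${missing.join(', ')}`)
    }
}

// e.g. before converting/encoding a field doc:
assertRequired(doc, ['tagKey', 'type', 'label'])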

@gmaclennan gmaclennan merged commit 37efc4b into main Aug 15, 2023
@gmaclennan gmaclennan deleted the feat/fix-tests branch August 15, 2023 15:46