Fix Tests #106
Conversation
Currently the validation function validates the "valueSchema", i.e. the schema without the common properties. This is because the main place where we validate data is on user (API) input, and the API never passes the common properties - those are set on the server. I don't think there is a need to validate objects that include the common properties?
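To illustrate the split (a hedged sketch, not project code; the property names are taken from docs shown later in this thread):

import { randomBytes } from 'node:crypto'

// What a client sends to the API: only the "value" part of the doc
const apiInput = {
  schemaName: 'field',
  tagKey: 'myTagKey',
  type: 'text',
  label: 'myLabel'
}

// The common properties are set on the server, so user input never includes them
// and validation only needs to check against the valueSchema
const storedDoc = {
  ...apiInput,
  docId: randomBytes(32).toString('hex'),
  versionId: `${randomBytes(32).toString('hex')}/0`,
  createdAt: new Date().toJSON(),
  updatedAt: new Date().toJSON(),
  links: []
}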
I think it would be good to do the same as Ajv here:

export function validate(schemaName, obj) {
  const isValid = validations[schemaName](obj)
  validate.errors = validations[schemaName].errors
  return isValid
}
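A usage sketch of that export, mirroring Ajv's convention of exposing the last call's errors on the function itself (the import path and doc shape are assumptions):

import { validate } from '@mapeo/schema' // import path assumed

const doc = { schemaName: 'field', tagKey: 'myTagKey', type: 'text', label: 'myLabel' }

if (!validate('field', doc)) {
  // like Ajv, the errors from the most recent call live on the function itself
  console.error(validate.errors)
}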
I don't think this needs to be thorough - the Ajv tests should catch that. We just need to check that a correct document of each schema type validates correctly, to make sure we're exporting the right thing.
Progress looks good. A suggestion about how to write this to avoid wrapping test functions:
test('bad docs throw', t => {
  t.plan(fixtures.badDocs.length)
  for (const { doc, text } of fixtures.badDocs) {
    t.throws(() => encode(doc), text)
  }
})

test('good docs encode and decode correctly', t => {
  t.plan(fixtures.goodDocs.length * 2)
  for (const { doc } of fixtures.goodDocs) {
    let buf
    t.doesNotThrow(() => {
      buf = encode(doc)
    })
    const decoded = decode(buf, parseVersionId(doc.versionId))
    t.deepEqual(decoded, doc)
  }
})
FYI, if you pass an async function to test(), then you don't need to make a plan - it will consider the test finished when the function resolves. You can do that, or make the plan like I do above.
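A sketch of the async variant (assuming a tape-style test() that treats the returned promise as the end of the test, as described above; the fixture and helper names come from the snippet above):

test('good docs encode and decode correctly', async (t) => {
  for (const { doc } of fixtures.goodDocs) {
    const buf = encode(doc)
    const decoded = decode(buf, parseVersionId(doc.versionId))
    t.deepEqual(decoded, doc)
  }
  // no t.plan() needed: the test ends when this async function resolves
})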
deepEqual might fail because of the difference between properties set to undefined and properties that are missing entirely, e.g.

const a = {
  optionalProp: undefined
}
const b = {}

Those won't be deep equal, but both are valid as far as our schema and the type are concerned. We probably need a different deep equal function for testing this.
test/fixture.js (Outdated)

onlyId: {
  text: 'test encoding of record with missing fields',
  plan: 1,
  doc: { id: randomBytes(32).toString('hex') }
Maybe change id to docId here, since we're testing missing fields, not invalid fields.
Yeah, that field was copy/pasted from older tests, I should change that.
I came across this same issue today when writing tests. I solved it the "dumb" way by round-tripping the object to JSON in order to remove undefined properties, e.g. you can do:

function stripUndef(obj) {
  return JSON.parse(JSON.stringify(obj))
}

t.deepEqual({ foo: undefined }, {}) // fails because `foo` is missing from the second parameter
t.deepEqual(stripUndef({ foo: undefined }), {}) // passes
It's not clear what is happening here: it's not throwing nor returning any error, just setting a field.
Given this, should the ...
No, because if the client passed a value with these props (e.g. with a ...
Looks like Gregor covered the important things here, but let me know if there's a specific thing I can help with.
The issue I'm having with this is that, despite stripping undefined properties, I still get back {point: [], vertex: [], area: [], line: [], relation: []} ...
Another issue I'm having is with decimals on floats. When decoding a protobuf with a float, the number of decimals is not predictable, nor will it necessarily match the number of decimals on the field we encoded previously...
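A quick illustration of why the decimals change, using Math.fround to simulate the 32-bit value a protobuf float field stores (the example value is arbitrary):

// A double like 1.1 cannot be represented exactly in 32 bits, so after a
// float round-trip you read back the nearest float32 value instead
const original = 1.1
const afterFloat32 = Math.fround(original)

console.log(afterFloat32)              // 1.100000023841858
console.log(afterFloat32 === original) // false - the extra "decimals" appear on decode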
This might actually be a bug. In general I've made array and object properties required to avoid this problem.
The decimal issue I've also come across. See digidem/comapeo-core#150 - this suggests it might be a protobuf issue rather than SQLite. Maybe both. We could change how we encode floats - is there a more precise option? Or we could go for an option laid out in the issue (integer or string).
I had a bug in ...
It seems to be on the protobuf side since there's no SQLite here. It's weird that I haven't found an issue related to this on ...
I think actually we should at least be using doubles.
Yeah, using doubles seems to fix the issue. On the other hand, now I'm having another issue.

// here I'm not passing appearance, snakeCase, options and universal
{
  docId: randomBytes(32).toString('hex'),
  versionId: `${randomBytes(32).toString('hex')}/0`,
  schemaName: 'field',
  createdAt: new Date().toJSON(),
  updatedAt: new Date().toJSON(),
  links: [],
  tagKey: "myTagKey",
  type: "text",
  label: "myLabel"
}

Protobuf won't throw an error, but I would also get back from decoding the same doc BUT with the added values (these are added in the conversion functions before encoding), which means that a comparison would fail (and so would the test). This failing test could be a good thing (hey! you're missing some fields), but in that case, shouldn't we avoid adding the missing fields with their default values and let the encoding fail??
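One way to get that behaviour would be to validate before converting, so a doc with missing required fields fails loudly instead of silently picking up defaults. A rough sketch (encodeStrict is a hypothetical helper, and it assumes the validate export discussed above):

// Hypothetical helper, not existing project code
function encodeStrict(doc) {
  if (!validate(doc.schemaName, doc)) {
    // surface the JSONSchema errors instead of letting defaults paper over them
    throw new Error(
      `Invalid ${doc.schemaName} doc: ${JSON.stringify(validate.errors)}`
    )
  }
  return encode(doc)
}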
This should close #79.
Things I've noticed when writing different MapeoDocs for testing:
- docId fails JSONSchema validation (JSONSchema thinks it is an additional - not allowed - field)
- ... (validation fn signature)

I'm missing more tests: ...
Also, not important nor a priority, but we're using tape for testing. We could change to using brittle since they are pretty similar. I don't have a rationale for that change, it's mostly for the sake of uniformity in tools...