Start on a new reference implementation #67
Conversation
So far this includes the type-level schema validation and normalization logic. No domain-level validation is in place yet (i.e. checks on the contents of the strings involved; see #63).
TODO: probably should normalize all right-hand sides into arrays (sometimes one-element arrays).
The parser is much further along. It fully validates and normalizes the "specifier map" structure. It is missing validation and normalization of the "scopes" structure, i.e. normalizing the keys as URLs.
OK. Parser done, I am pretty sure. Going home for the day. Drive-by reviews appreciated, although given that this implementation is in service of the spec, understand that I might not take all advice on coding style or project setup. I'll merge this in tomorrow or Thursday, and move on to the resolution algorithm. After that, time to translate into a more formal spec.
Looks like a great start.
The main thing is that I would still prefer explicit type errors, to help with authoring these maps, instead of silent failure.
I'm not sure why we're restricting the map to fetch-scheme URLs; I suppose I prefer a concept of URL schemes being extensible rather than some centralized list.
const url = tryURLParse(string);
if (url === null) {
  return null;
}
Out of interest, why can't both of the above branches just be represented by:
try {
  return new URL(string, baseURL);
} catch (e) {
  return null;
}
I thought absolute URL parsing in new URL(absoluteURL, referrerURL)
was independent of the referrer?
I find using try/catch for non-exceptional cases unfortunate, so I created this little wrapper. See whatwg/url#372.
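For context, the wrapper being referred to presumably looks something like the following minimal sketch (the name tryURLParse appears in the diff; the exact body is an assumption):

```javascript
// Hypothetical sketch of the tryURLParse wrapper: it confines the
// try/catch around the throwing URL constructor to one place, so
// callers can branch on null instead of catching exceptions.
function tryURLParse(string, baseURL) {
  try {
    return new URL(string, baseURL);
  } catch (e) {
    return null; // invalid URL, or a relative string with no usable base
  }
}
```

Callers then write `if (url === null) { … }` rather than wrapping every parse site in its own try/catch.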
It was somewhat nitpicking the implementation details, but the suggestion was that:
if (string.startsWith('./') || string.startsWith('../') || string.startsWith('/')) {
  return new URL(string, baseURL);
}

const url = tryURLParse(string);
if (url === null) {
  return null;
}

if (hasFetchScheme(url)) {
  return url;
}

return null;
could be replaced with the implementation-equivalent:
const url = tryURLParse(string, baseURL);
if (url && hasFetchScheme(url)) {
  return url;
}
return null;
Although this is personal preference, and conceptual clarity should dominate too.
  return null;
}

if (hasFetchScheme(url)) {
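For readers following along, a hasFetchScheme helper would plausibly check the URL's scheme against the fetch schemes enumerated in the Fetch standard. This is a hedged sketch, not the PR's actual implementation:

```javascript
// Sketch (assumption): the fetch schemes per the Fetch standard.
const FETCH_SCHEMES = new Set(['about', 'blob', 'data', 'file', 'http', 'https']);

function hasFetchScheme(url) {
  // url.protocol includes the trailing ':', e.g. 'https:', so strip it.
  return FETCH_SCHEMES.has(url.protocol.slice(0, -1));
}
```

Anything outside that closed list, e.g. a `wss:` or vendor-specific scheme, would fail the check, which is exactly the restriction questioned below.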
I worry a little about this inhibiting experimentation with other protocols... what is the reason for this restriction?
Since it ignores them, I don't think it will inhibit experimentation. (See #80.) It's just that the initial spec will throw them away since they are unfetchable.
I don't think this will be observable except through side channels. E.g. you would probably see something in the dev console/network panel that fetches and fails if we kept them, whereas when we ignore them, you wouldn't see that. But since the fetch is always a failure anyway for non-fetch schemes, the result observable to the page would be the same.
> Since it ignores them, I don't think it will inhibit experimentation. (See #80.) It's just that the initial spec will throw them away since they are unfetchable.
Right, but what happens if a browser implements a new protocol? Are fetch schemes extensible to support this?
Say Node.js might want to implement its own internal schemes - if it were to implement package maps, it would then be "spec invalid"?
New protocols should not hit frictions in places like this.
The idea of locking down protocols just makes me wary in general, and I don't understand what benefit we get from it.
Yes, I anticipate that Node.js will not implement the browser spec for import maps exactly, precisely because it takes a dependency on browser concepts like "fetch schemes".
if (typeof value === 'string') {
  obj[key] = [value];
} else if (!Array.isArray(value)) {
  delete obj[key];
Can we not throw on invalid right hand side types?
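To make the silent-failure behavior concrete, here is an illustrative sketch of the normalization step being quoted (the function name normalizeRHS is hypothetical, not from the PR):

```javascript
// Illustrative sketch of the right-hand-side normalization under
// discussion: strings become one-element arrays, and any other
// non-array value is silently deleted rather than raising a type error.
function normalizeRHS(obj) {
  for (const [key, value] of Object.entries(obj)) {
    if (typeof value === 'string') {
      obj[key] = [value];
    } else if (!Array.isArray(value)) {
      delete obj[key]; // the "silent failure" objected to above
    }
  }
  return obj;
}
```

For example, `normalizeRHS({ a: './a.mjs', b: 42 })` yields `{ a: ['./a.mjs'] }`: the invalid `b` entry disappears without any error, which is the authoring hazard this comment raises.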
};

function normalizeSpecifierMap(obj, baseURL) {
  delete obj[''];
I would prefer to throw for empty string maps and scopes.
Empty string scopes seem useful. If you disagree please open a separate issue. For empty string specifier keys, and other cases where you'd prefer to throw, let's discuss in #80.
const scopeKeyURL = tryURLParse(scopeKey, baseURL);
if (scopeKeyURL === null) {
  continue;
}
I'd still prefer to be strict and throw errors for invalid cases.