
RFC: registry: dependency specifiers #314

Closed
isaacs wants to merge 3 commits from the isaacs/registry-dependency-spec branch

Conversation


@isaacs isaacs commented Feb 6, 2021

Fix: #275
Close: #217

References

@isaacs isaacs added the Agenda will be discussed at the Open RFC call label Feb 6, 2021

isaacs commented Feb 6, 2021

We've already discussed this pretty much to death in the context of #217, but I added the Agenda label, since I figure we may as well mention on the call that it's coming, in case anyone still wants to poke at it or it doesn't match what they expected.

@ruyadorno

Heads up @arcanis @zkochan, I think it would be great to have your eyes on this proposal.

A new dependency specifier is added:

```
registry:<registry url>#<package name>[@<specifier>]
```
@zkochan zkochan Feb 8, 2021

Why not leverage the current syntax? The <source>: syntax is already used for github and some other sources.

If there are multiple packages from the same registry, won't the URL be duplicated? Maybe it should be moved out to a separate field.

For instance:

"dependencies": {
  "foo": "corp:^1.0.0",
  "bar": "corp:^1.0.0"
},
"registries": {
  "corp": "https://registry.corp.com"
}

Also, if such a package is published to the public registry, are all the third-party registries trusted by default?

isaacs (author) replied

Defining a spec name to point to a registry is a good idea, and there are a number of ways we could go, with different trade-offs. Worth doing as a subsequent RFC that builds on this one, since corp:foo@1.x would presumably desugar to registry:https://registry.corp.com#foo@1.x.
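For concreteness, here is a hypothetical sketch of that shorthand, reusing the corp name and URL from the example above, with zkochan's proposed (not currently existing) registries field standing in as the place where the mapping lives:

```
{
  "dependencies": {
    "foo": "corp:foo@1.x"
  },
  "registries": {
    "corp": "https://registry.corp.com"
  }
}
```

The package manager would presumably expand "corp:foo@1.x" to the verbose canonical form "registry:https://registry.corp.com#foo@1.x" before resolving, so everything downstream of the spec parser only ever sees a registry: specifier.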

arcanis replied

It's not only a matter of syntactic sugar - if we support / implement this, then we need to be sure the syntax meets our standards. If fully qualified URLs lead to a subpar developer experience, then it'll be very confusing to later say "our bad, now you can use this other syntax that works better but isn't as well supported".

isaacs (author) replied

I don't really understand the objection here. If we want to support a custom registry url in the package manifest as suggested here, how is that made any more difficult by also supporting a full registry url as part of the dependency spec? In fact, it seems like it would be somewhat easier implementation-wise, because most of the code that consumes specs would be able to remain agnostic as to whether the alias spec was corp:foo@1.x or npm:foo@1.x or registry:https://registry.npmjs.org/#foo@1.x, and we would have a verbose canonical way to save it that doesn't rely on having the rest of the package.json file in order to parse it.

arcanis replied

If we want to support a custom registry url in the package manifest as suggested here, how is that made any more difficult by also supporting a full registry url as part of the dependency spec?

I think our point is that we're not convinced we want to support both, since that has a cost in terms of documentation and has the potential to be confusing for our users. Should they use a URL or a name? In which context? Etc. I'd much prefer having a single consistent syntax, and that makes it important to discuss what this syntax would be.

we would have a verbose canonical way to save it that doesn't rely on having the rest of the package.json file in order to parse it

IMO registry names shouldn't be mapped to URLs via the package.json, but rather by our respective configuration files - just like scope URLs.


zkochan commented Feb 8, 2021

I don't see a lot of benefits from this versus using direct tarball URLs.


ljharb commented Feb 8, 2021

Users know their registry URL but don't easily know the tarball URL.


isaacs commented Feb 10, 2021

I don't see a lot of benefits from this versus using direct tarball URLs.

The main advantage is that you can specify a dist-tag or SemVer range.

Users know their registry URL but don't easily know the tarball URL.

That too :)
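To illustrate the difference (the registry host here is made up, the tarball path just follows the usual registry layout, and the beta dist-tag is hypothetical), a tarball dependency pins one exact artifact, while a registry specifier can carry a range or a dist-tag:

```
{
  "dependencies": {
    "foo": "https://registry.corp.example/foo/-/foo-1.2.3.tgz",
    "bar": "registry:https://registry.corp.example/#bar@1.x",
    "baz": "registry:https://registry.corp.example/#baz@beta"
  }
}
```

The first form can only ever install that one tarball; the other two would be resolved against the named registry at install time, so new releases matching the range or updates to the dist-tag get picked up.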


isaacs commented Feb 10, 2021

Suggestion from @wesleytodd: make sure we're crystal clear in the RFC and docs that this will break for users on npm versions prior to its inclusion, and for anyone who doesn't have access to the registry defined in the spec.


ljharb commented Feb 10, 2021

Perhaps when npm notices a registry specifier in the current package.json, but it lacks an appropriate engines.npm range, it could warn?
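For illustration, the guard ljharb describes would just use the existing engines field; the range below is a placeholder (X.Y.Z standing in for whichever npm release first ships registry specifiers):

```
{
  "engines": {
    "npm": ">=X.Y.Z"
  }
}
```

A package that uses registry: specifiers but lacks (or contradicts) such a range could then trigger a warning.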

@Christian24

Would the official npm registry reject packages with registry specifiers? Or would it check if they are available? Or would it be worth requiring an additional confirmation aka I sure hope you know what you are doing?


isaacs commented Feb 10, 2021

Would the official npm registry reject packages with registry specifiers? Or would it check if they are available? Or would it be worth requiring an additional confirmation aka I sure hope you know what you are doing?

The registry doesn't do anything with dependency specifiers. It would just let them on through, like it does with unrecognized specifiers yarn has introduced that npm doesn't know how to handle.


Christian24 commented Feb 10, 2021

Would the official npm registry reject packages with registry specifiers? Or would it check if they are available? Or would it be worth requiring an additional confirmation aka I sure hope you know what you are doing?

The registry doesn't do anything with dependency specifiers. It would just let them on through, like it does with unrecognized specifiers yarn has introduced that npm doesn't know how to handle.

Maybe it makes sense for the CLI to warn people then?

@arcanis arcanis left a comment

I'm a proponent of making package managers less reliant on default registries (we even discussed a fairly similar approach back when the GitHub registry was released), so I'm happy to see npm championing a proposal in this direction 👍🌟

With that being said, I believe this RFC is missing information in the "Rationale and Alternatives" section. In particular, I'd like to see an objective rundown of the problems we may hit with this approach rather than another, why they don't matter in the grand scheme of things, and why other approaches are worse. This is important since we should not only be convinced that this move is a good thing (I already agree with that), but that there isn't a better approach.

Comment on lines +66 to +61
- `<registry>` is a fully qualified URL to an npm registry, which may not
contain a `hash` portion,

As @zkochan mentioned, the RFC doesn't make it clear why fully qualified URLs are the right choice. They have various drawbacks (such as hard to reconfigure; strongly vulnerable to hosts going down; syntactically ambiguous), so I think it's important to have the discussion about this before the fact, not after.


Also, the only reason mentioned in the RFC for using URLs seems to be this, but it could be expanded (as is, I'm not very convinced the pros outweigh the cons).

contain a `hash` portion,
- `<package name>` is the (scoped or unscoped) name of the package to
resolve on the registry, and
- `<specifier>` is an optional dist-tag, version, or range.

Suggested change
- `<specifier>` is an optional dist-tag, version, or range.
- `<specifier>` is an optional dist-tag or semver range.

Versions are valid ranges.

isaacs (author) replied

Yes, they are. And also, every time we rely on that fact in our documentation, I get someone asking whether it has to be a range, or if a single version is allowed, so I just got in the habit of being a little extra verbose about it.

Comment on lines +77 to +72
When a package is installed using a registry specifier, it *must* be saved
using a registry specifier.

This is another argument in favor of not making them URLs. A common complaint back in Yarn 1 was that storing the registry URLs within the lockfile caused annoying issues when switching from one registry to another - a common use case for users in China, for instance, who frequently need to use the Taobao mirror.

isaacs (author) replied

If I run npm install foo@registry:https://registry.foo.com/#foo@1.x, and it is saved in such a way that future npm install invocations do not fetch from https://registry.foo.com/, then that is a clear violation of expectation and intent.

arcanis replied

Precisely; and if you run npm install foo@registry:openjs#foo@1.x then there's no such expectation, and users will instead assume they can swap the openjs registry URL for another if they need to - which is a reasonable feature, considering that it's already an established use case (cf. the Taobao example).

isaacs (author) replied

What this is saying is that we have to save it to the package.json so that if you run this:

npm install foo@registry:https://registry.foo.com/#foo
rm -rf node_modules
npm install

then the second one gets it from the https://registry.foo.com registry, and not the default configured registry.

If you agree, then I don't understand the objection.
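A sketch of what the first command would leave saved in package.json, assuming npm's usual behavior of recording a caret range for whatever version it resolved (the version here is invented):

```
{
  "dependencies": {
    "foo": "registry:https://registry.foo.com/#foo@^1.2.3"
  }
}
```

With the registry URL preserved in the saved spec, the second, bare npm install still knows to resolve foo against https://registry.foo.com/ rather than the default configured registry.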



arcanis commented Feb 10, 2021

I don't see a lot of benefits from this versus using direct tarball URLs.

Tarball URLs require an exact version, don't support deprecation notices, are harder to authenticate, etc. Plus, I think that semantically the very idea of fetching packages that belong to different sets, and treating the npm registry as only one of those sets, makes sense. With that we could imagine separate organizations providing their own registries / sets.


zkochan commented Feb 10, 2021

I understand, but IMHO, hardcoding the URLs in the specs makes it a bit inflexible. I think it should be decoupled somehow. The mapping of the package source to the package base URLs may be provided in a separate configuration file. So as mentioned in the RFC, when a package is in a staging environment, the mapping will provide one registry URL, in prod, it would provide a different one.

Isn't this how the registries in some Linux package managers are specified? Like a list of mirrors is provided. This way you can change the registries without republishing the packages.


arcanis commented Feb 10, 2021

Yep, on that I fully agree.


isaacs commented Feb 12, 2021

One thing we didn't discuss in the last openRFC call, but which probably needs to be addressed, is how the dependencies of packages fetched via registry specifiers would be resolved.

For example, say that I am using a package foo from registry-a, and bar from registry-b.

{
  "dependencies": {
    "foo": "registry:https://registry.a.com/#foo@1.x",
    "bar": "registry:https://registry.b.com/#bar@1.x"
  }
}

foo depends on bar@1.x, but of course the user who published foo intended to get the bar from registry-a.

Two options I can think of off the top of my head (there may be more):

  • It's broken, too bad. It's the same with tarball URLs, and this is exactly why you should be using scopes and mapping them to registries in your configs if you want it to be consistent like this.
  • When we are traversing the dependency graph, the dependencies of a package loaded via a registry: specifier will effectively use the registry defined in the specifier as their default registry (sketched below). This is why it's better to use registry: specifiers than tarball URLs, because their deps are handled more intelligently. This may impact deduplication, but I think the current logic should be fine, since the matching relies on integrity/resolved values, only falling back to name@version when those are not available.
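To make the second option concrete, suppose foo's own manifest on registry-a looks like this (the version and range are invented for the example):

```
{
  "name": "foo",
  "version": "1.4.0",
  "dependencies": {
    "bar": "1.x"
  }
}
```

Because foo itself was fetched via registry:https://registry.a.com/#foo@1.x, its bar@1.x dependency would by default be resolved against https://registry.a.com/, not against the installer's configured default registry (and not against registry-b, even though the top-level project also pulls a bar from there).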


isaacs commented Feb 12, 2021

I understand, but IMHO, hardcoding the URLs in the specs makes it a bit inflexible. I think it should be decoupled somehow. The mapping of the package source to the package base URLs may be provided in a separate configuration file. So as mentioned in the RFC, when a package is in a staging environment, the mapping will provide one registry URL, in prod, it would provide a different one.

This is a good idea. It's also a different idea from the one being proposed here, and should have its own RFC so we can examine it fully ;)


arcanis commented Feb 15, 2021

This is a good idea. It's also a different idea from the one being proposed here, and should have its own RFC so we can examine it fully ;)

I personally don't agree it's a different RFC. It's an alternative to one part of this one, and thus should at the very least be discussed in the "Rationale and Alternatives" section, along with an explanation why you still think the original design has more value considering the problems we raised against it.

One thing we didn't discuss in the last openRFC call, but which probably needs to be addressed, is how the dependencies of packages fetched via registry specifiers would be resolved.

Indeed 🤔 At the moment (at least in Yarn), all packages exist in the same vacuum, so there are no "context aware ranges". This would likely change that, as it would semantically make more sense for the dependencies of a package to resolve against the same registry by default (but then how would it work if, say, I download the package tarball and reference it directly using the file: protocol? Then we won't know its source registry from which to resolve deps).


isaacs commented Mar 2, 2021

Indeed 🤔 At the moment (at least in Yarn), all packages exist in the same vacuum, so there are no "context aware ranges". This would likely change that, as it would semantically make more sense for the dependencies of a package to resolve against the same registry by default

Updated the RFC with a fixup commit to add a bit saying that the dependencies of a package fetched via a registry: spec have to come from the same registry by default. (They can of course have their own registry: specifiers on deps, branching the tree yet again, and so on, and they may still dedupe against something encountered earlier if the integrities match.)

I was tempted to say that it should fall back to the main configured registry if it's not found, but this opens the door to name hijacking attacks (including non-malicious "attacks" where something just gets unpublished, and then the wrong thing gets installed).

And yeah, I also find the "context aware range" idea somewhat appealing.

(but then how would it work if, say, I download the package tarball and reference it directly using the file: protocol? Then we won't know its source registry from which to resolve deps).

It doesn't, lol. It's a huge pain and kind of makes file and tarball URLs annoying and useless, unless they expect to use the main public registry. That's part of the problem we're addressing here :)


isaacs commented Mar 2, 2021

I personally don't agree it's a different RFC. It's an alternative to one part of this one, and thus should at the very least be discussed in the "Rationale and Alternatives" section, along with an explanation why you still think the original design has more value considering the problems we raised against it.

I don't prefer to eat the elephant in one bite, if I can help it.

User-customizable registry shorthands is a pretty big feature that touches different parts of the system than this does. If we can get this well understood and implemented, and then express the registry shorthands feature in terms of registry specifiers, that's a lot easier. On the other hand, if we add it to this RFC, we're going to get bogged down in more issues.

The main disadvantage raised, which that would address, is "urls are clunky". And ok, I agree. But they're also explicit and it's easy to see what's going on. npm works by layering portable UX affordances on top of clunky explicit non-portable implementations. When you npm install, essentially, it's saying "put files in node_modules". You could do that by specifying a particular tarball on disk, like npm install /path/to/foo-pkg.tgz. Or you could tell npm where to get that tarball, by doing npm install https://foo.com/pkg.tgz. Or you could tell npm to get the tarball URL from a package manifest, by doing npm install foo@1.2.3. Or you could tell npm to resolve the version number by giving it a range npm install foo@1.x. Ultimately, though, everything "desugars" to a tarball file that we unpack in node_modules.

The power that this RFC adds is a branch point in that process. So rather than only having one registry to resolve deps against, you can explicitly state that some deps come from specific registries. I'd rather not also add, here in this one RFC, the second UX affordance of saying that some registries are named, because that bloats the feature and makes iterative implementation more difficult (especially since "it is annoying to have to assign names to all my registries explicitly" is part of the pushback against scope-specific registries that led us here). An RFC should be one thing. We add the clunky explicit thing, and then we add the UX affordance that makes it less explicit, more portable, and easier for humans.

I agree that this is a valuable feature you are suggesting, and I want to implement it. That is why I am advocating steps to get there efficiently and carefully.


zkochan commented Mar 2, 2021

I don't prefer to eat the elephant in one bite, if I can help it.

I understand the desire to add the least powerful tool to solve some of the issues users are facing. The issue, though, is that once npm adds some feature, it is impossible to revert that feature. So I would not rush with this RFC until the bigger picture is discussed. It can be discussed in a different RFC, no problem.

I think if this had been considered at an earlier stage, even scopes would have been a good fit.


iarna commented Apr 14, 2021

From the RFC call today:

  1. registry:<url>#<name>@<specifier> should resolve all dependencies from the same registry. (As mentioned in threads above.)
  2. Ugliness could be considered a feature -- syntactic salt as the Perl people once said. It makes things possible that weren't before but does not encourage its use.
  3. Regarding sugar: Some kind of mapping of registry url to symbolic name has different implications depending on where that's kept:
    a. If included in the package.json, allows use of this sugar on the public registry and means that remapping can happen as you pull in dependencies. (My note: this seems like a high complexity cost.)
    b. If included in the npmrc, means that public registry modules can't use this sugar any more than they can use scopes that are served from third party registries. Means that there is only a single mapping for the entire project.

@zkochan

It sounds like you aren't opposed to this RFC as written, but you'd like to see a discussion of how this might function with the sugar. Is this a fair summation? It seems to me that if folks feel an RFC for that would be premature, an RRFC might fill that gap?

@arcanis

Is your concern solely that registry:${registryname}#${packagename}@${semver} is not aesthetically pleasing nor easy to type into a yarn add command line? Do you have other concerns about this RFC? Would the previously discussed mapping of symbolic name to registry url resolve your concerns?


arcanis commented Apr 14, 2021

Is your concern solely that registry:${registryname}#${packagename}@${semver} is not aesthetically pleasing nor easy to type into a yarn add command line

I don't mind the aesthetic side at all. My worry is about URLs (just like @zkochan). Specifically, what's the story we want to sell our users? URLs vs identifiers being only a "sugar" is an implementation detail. From a user perspective, what should they use, and why? This feature has a cognitive cost, and in my experience multiple ways of doing the same thing tend to confuse people more than help them.

Would the previously discussed mapping of symbolic name to registry url resolve your concerns?

Kind of; I could get behind that (I think the feature is generally nice to have), but I'm still not convinced supporting URLs in the first iteration is needed. I'm worried this complexity will be unneeded (those who would use it could just as well specify a tarball URL as a dependency; it would be mostly the same, except for semver ranges).

That being said, I won't block on that if symbolic names are supported.

If included in the npmrc, means that public registry modules can't use this sugar any more than they can use scopes that are served from third party registries. Means that there is only a single mapping for the entire project.

Indeed. Given that there aren't many public registries that would want this feature right now, it doesn't seem too bad - and should competing registries sprout up, it won't be much different from a system like apt, where you register the locations providing your packages. If well documented, I don't think that would be a problem.


iarna commented Apr 14, 2021

From a user perspective, what should they use, and why?

This seems like a really critical question to answer.

Given that there aren't many public registries that would want this feature right now, it doesn't seem too bad - and should competing registries sprout up, it won't be much different from a system like apt, where you register the locations providing your packages. If well documented, I don't think that would be a problem.

Agreed, it seems like a feature to me.


arcanis commented Apr 15, 2021

This seems like a really critical question to answer.

Perhaps a way to move forward and make sure we're all aligned would be to write the user documentation as part of the RFC (top-down design), from the perspective of a reader with no prior context? Given that documentation will be needed anyway, time taken on this won't take away from implementation time.


zkochan commented Apr 15, 2021

It sounds like you aren't opposed to this RFC as written, but you'd like to see a discussion of how this might function with the sugar. Is this a fair summation? It seems to me that if folks feel an RFC for that would be premature, an RRFC might fill that gap?

Yes, I think it would answer the question of whether this RFC is needed at all.

Had we thought about these problems 5 years ago, I think scopes could've solved these issues.


iarna commented Apr 15, 2021

Yes, I think it would answer the question of whether this RFC is needed at all.

Does the "Motivation" section not cover that? It has scenarios with problems that current tooling has no way to resolve. (If you don't feel the problems there are worth solving then that's a discussion worth having, but if they are then SOME sort of RFC is warranted.)


zkochan commented Apr 15, 2021

I understand the problems, but I see downsides to using semi-URLs. I would look into alternative solutions first. I am talking not about "sugar" but about alternative ideas. URLs are too inflexible IMO. And as a result, their usage will be limited to a very small subset of problems.

@isaacs isaacs force-pushed the isaacs/registry-dependency-spec branch from c151191 to 4df8c0b on April 19, 2021 06:04

isaacs commented Apr 19, 2021

Moving this to accepted status, as decided in our OpenRFC meeting. No point discussing the next steps here, apart from the "Future Work" section of this RFC. It's time to move this to the next step.

If we get a bit further and find a better way there, we'll take that instead. An RFC isn't a binding contract or anything. If pnpm or yarn wants to wait to see step 2 and if there's a better way to get there than the path npm takes, then that's fine.

@isaacs isaacs closed this in ea7f296 Apr 19, 2021
@darcyclarke darcyclarke removed the Agenda will be discussed at the Open RFC call label Apr 19, 2021

iarna commented Apr 20, 2021

I think the thing being glossed over here is that pnpm and yarn want to see a different solution entirely, not a "step two" or sugar on this proposal. There's nothing about the proposed alternatives that necessitates a raw registry-url type specifier, and so there's no reason they would be dependent on this.

It sounds like the next step forward, if cross-package-manager support is valued(?), would be to open an RFC that replaces this one with an alternative to discuss that.


isaacs commented May 2, 2021

The objection I've seen here is "urls are inflexible, we shouldn't use urls or anything that looks like them".

This objection was addressed on the following basis:

  1. We already do use URLs, for tarballs and git references, and things that are url-like for aliases. So it seems little is made worse by using a url to reference a registry.
  2. A url is the most sensible way to refer to a registry, since that is what it is. If a registry somehow changes the canonical url where it should be addressed, ok, but from npm's pov, that's a different registry.
  3. There are use cases where people are relying on tarball and git reference workarounds today, which would be better addressed (even if perhaps not optimally addressed in the best of all possible worlds) by using an explicit registry url reference in the dependency specifier.
  4. We can build other features on top of this one, and we want to move on to building those features.

The other objection about URLs is that they confuse users. This is an assertion that requires more evidence to accept. If there's any population of people that are comfortable with URLs, it's web developers. Unless we were making software for browser makers or members of the IETF committees defining web standards, it would be hard to find a population of users more able to handle URLs without confusion.

If there's an alternative that aims to address the use cases presented here, a rival RFC is indeed the way to go. It may render this one irrelevant! Or it may be additive to this, or fully satisfied by this one in a better way, and thus not optimal overall (even if it is better in some ways).

If there are new objections to be brought up, which have not already been considered, by all means do so! But the "rough" in "rough consensus" means we sometimes decide to do something, even when not everyone agrees, once all the objections are explored and decided on.

It sounds like the next step forward, if cross-package-manager support is valued(?), would be to open an RFC that replaces this one with an alternative to discuss that.

There is a value in mimicking one another's features and being able to interoperate, for sure. There is also a value in us sometimes exploring in different directions. Yarn and pnpm are their own projects, and it's up to their maintainers whether they wish to copy npm's features or go a different way. npm features do not depend on yarn or pnpm implementation, any more than yarn and pnpm features depend on npm implementing them.


arcanis commented May 2, 2021

I'm quite curious how you envision this feature to be used. From my perspective, it sounds like it can only end two ways:

  • Either this feature ends up being used by packages, which means that those packages cannot be used by Yarn or pnpm users, fragmenting the ecosystem because you didn't want to spend two minutes thinking about middle grounds.
  • Or this feature isn't used, and it's just yet another technical debt you add to npm (which is your problem, to be fair).

I have my idea of which one it's gonna be, but I'm a bit saddened to see how little weight you put on constructive feedback from your peers. It's the second time you've basically decided to ignore us even though we clearly tried to find a compromise, and I don't understand how you expect to build trust in this context (and to be clear, the very fact that @zkochan and I are here is because we each time want to give you a chance to show us you care about cooperation).

npm features do not depend on yarn or pnpm implementation, any more than yarn and pnpm features depend on npm implementing them.

I already said it in the peer dependency PR: I absolutely don't care what you do in npm that is purely "client-side" (ie, that isn't expected to have a practical functional effect once the package is published). We have different philosophies, and that's good.

But once the package is published, it ceases to be about just our package managers because it affects us all. In this regard, this rfc isn't an "npm feature", it's a "common trunk feature". It should at least require rough consensus from the stewards of all the relevant projects. And if you don't manage to have it, then perhaps it's a sign that should give you pause.

Of course, perhaps you think that "this is npm, we have most of the marketshare, we'll do as we please". If that's the case, you probably should be upfront about this so that at least everyone is on the same page. Or you can lock the thread...


isaacs commented May 3, 2021

I'm quite curious how you envision this feature to be used. From my perspective, it sounds like it can only end two ways:

We discussed this at length in our OpenRFC meetings. I will summarize here.

A common requested use case for this is for users who have a private registry, and want another way to avoid name hijacking attacks without having to migrate their legacy applications to use package scopes.

Either it will end up being used by public published packages, or it will not.

  • If it is, then yes, those packages will only be installable by package managers that support the new feature. That is often the case for new features. It hasn't stopped any of us from adding new features in the past. (This has been discussed already, and in fact is a benefit in this case, because it avoids the security problems with the "registry per scoped package name", where installing with a different package manager opens a name hijacking attack that the developer would reasonably assume is closed.)
  • If it is not, then it's still valuable as a building block for other features we intend to build, and for users of private internal registries that do not use (or support) scoped package names.

It is patently incorrect, and frankly insulting, to say that we have ignored your concerns. They have been discussed and evaluated, and very carefully considered, in these pull requests and in our Open RFC meetings that are open to the public, and which we encourage you to attend. It is not our responsibility to ensure that you see or agree with the resolutions to the concerns that you bring up. It is your responsibility to stay informed and be a constructive part of the decision making process if you wish to have a part in it.

It should at least require rough consensus from the stewards of all the relevant projects. And if you don't manage to have it, then perhaps it's a sign that should give you pause.

I think that you misunderstand the purpose of this repository, and I encourage you to review the README.md file.

This is the npm RFC process, not the "all JavaScript package managers" RFC process. npm is the final authority as to which features npm implements. If you wish to set up a space in which the maintainers of yarn, pnpm, and npm agree to block one another from implementing features until all of us agree on them, then that is a much larger conversation. But this is not that forum, and never has been. That forum does not exist.

"Rough consensus" is not "full agreement". It is "all objections and issues have been thoroughly considered, and a decision has been made on each of them".

If yarn decides not to implement a feature, that is of course a relevant piece of information, which we do pause to consider (often for months), but once considered and decided upon, we move on with the decision. Sometimes that decision is "do a different thing", and sometimes it is "do the thing that yarn won't do".

Continuing to bring up previously addressed concerns is disruptive to this process. If you have new issues to consider, we welcome them. If not, then please behave appropriately or find another venue to complain. You have been warned before, please do not let it happen again.

I am locking this thread for now, because it seems that there are no issues to bring up which have not been addressed. If anyone has new concerns to bring up, message me and I will gladly unlock it. If you have an alternative proposal that you believe would better meet the use cases that this one targets, then please start a new RFC for it. If you would like to suggest constructive changes to this RFC, please open a new PR with the proposed edits.

@npm npm locked as resolved and limited conversation to collaborators May 3, 2021