
[Proposal]: Overload Resolution Priority (VS 17.12, .NET 9) #7706

Open
2 of 4 tasks
333fred opened this issue Nov 19, 2023 · 61 comments
Labels: Implemented Needs ECMA Spec (This feature has been implemented in C#, but still needs to be merged into the ECMA specification), Proposal champion, Proposal

@333fred (Member) commented Nov 19, 2023

Summary

We introduce a new attribute, System.Runtime.CompilerServices.OverloadResolutionPriority, that can be used by API authors to adjust the relative priority of
overloads within a single type as a means of steering API consumers to use specific APIs, even if those APIs would normally be considered ambiguous or otherwise
not be chosen by C#'s overload resolution rules.

Motivation

API authors often run into an issue of what to do with a member after it has been obsoleted. For backwards compatibility purposes, many will keep the existing member around
with ObsoleteAttribute set to error in perpetuity, in order to avoid breaking consumers who upgrade binaries at runtime. This particularly hits plugin systems, where the
author of a plugin does not control the environment in which the plugin runs. The creator of the environment may want to keep an older method present, but block access to it
for any newly developed code. However, ObsoleteAttribute by itself is not enough. The type or member is still visible in overload resolution, and may cause unwanted overload
resolution failures when there is a perfectly good alternative, but that alternative is either ambiguous with the obsoleted member, or the presence of the obsoleted member causes
overload resolution to end early without ever considering the good member. For this purpose, we want to have a way for API authors to guide overload resolution on resolving the
ambiguity, so that they can evolve their API surface areas and steer users towards performant APIs without having to compromise the user experience.
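
As shipped in C# 13 / .NET 9, the attribute takes an integer priority; unannotated members default to priority 0, and higher priorities win. A minimal sketch of the intended usage (the Collection/Add names are illustrative, and the span-based overload requires C# 13):

```csharp
using System;
using System.Runtime.CompilerServices;

public class Collection
{
    // Demoted below the default priority of 0: C# 13+ compilers prefer the
    // span-based overload for new call sites, while code already compiled
    // against this overload keeps binding to it at runtime.
    [OverloadResolutionPriority(-1)]
    public void Add(params int[] values) { /* legacy array-based path */ }

    // Default priority (0): chosen for a call like collection.Add(1, 2, 3),
    // which would otherwise be ambiguous between the two overloads.
    public void Add(params ReadOnlySpan<int> values) { /* new span-based path */ }
}
```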

Design Meetings

Related Issues

@333fred 333fred self-assigned this Nov 19, 2023
@colejohnson66 commented:

So, the idea is, should I consume a DLL with obsolete members, that I can swap out a dynamically invoked DLL with a newer version, but I wouldn’t be able to compile against the newer one? In other words, the idea is to allow types that are visible to the runtime but not the compiler?

@333fred (Member, Author) commented Nov 19, 2023

In other words, the idea is to allow types that are visible to the runtime but not the compiler?

Precisely.

@TahirAhmadov commented:

Why not just add a flag to Obsolete to specify this behavior? Instead of bool error, create new enum ObsoleteLevel { Warning, Error, BinaryCompatOnly }, then obsolete the Obsolete constructor which accepts bool error (see what I did there?) and add a new overload which accepts the new enum.
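
The suggested API shape might look like the following. This is purely an illustrative sketch of the commenter's idea, not a real or proposed BCL change, and it would not compile as-is against the real BCL (it redefines a type in the System namespace):

```csharp
namespace System
{
    // Hypothetical three-state replacement for the bool error flag.
    public enum ObsoleteLevel { Warning, Error, BinaryCompatOnly }

    public sealed class ObsoleteAttribute : Attribute
    {
        // Existing constructor, itself obsoleted in favor of the enum overload.
        [Obsolete("Use the ObsoleteLevel overload instead.")]
        public ObsoleteAttribute(string message, bool error) { }

        // New overload: BinaryCompatOnly would hide the member from
        // overload resolution entirely.
        public ObsoleteAttribute(string message, ObsoleteLevel level) { }
    }
}
```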

@iam3yal (Contributor) commented Nov 22, 2023

@TahirAhmadov I'd imagine it's easier to query whether this attribute exists than to query whether a property on it exists and then check whether its value is true or false. IMO that alone makes the flag approach less appealing; beyond that, a dedicated attribute is cheaper, more readable, and more discoverable.

@TahirAhmadov commented:

The compiler already queries for Obsolete, so the comparison isn't between an attribute with a flag and an attribute without a flag, rather between a new flag on an existing attribute, vs a whole new attribute - I think it's easier to just add a flag to Obsolete.

More importantly though, logically a 3-state Obsolete flag makes sense because it describes the 3 levels of "obsoletion" of a member; whereas with a new attribute, we would now have more possibilities: Obsolete(false); Obsolete(false), BinaryCompatOnly; Obsolete(true); Obsolete(true), BinaryCompatOnly; BinaryCompatOnly - 5 possibilities; obviously 5 is more than 3.

@DanFTRX commented Nov 22, 2023

Why not just add a flag to Obsolete to specify this behavior? Instead of bool error, create new enum ObsoleteLevel { Warning, Error, BinaryCompatOnly }, then obsolete the Obsolete constructor which accepts bool error (see what I did there?) and add a new overload which accepts the new enum.

This would also have the added benefit of pushing developers to check over all their existing Obsolete attributes.

@iam3yal (Contributor) commented Nov 23, 2023

@TahirAhmadov You might be right, I'm a bit torn on this but for whatever reason they chose to go with an attribute so maybe they can share their clarification in a comment or update the OP. :)

@ds5678 commented Nov 24, 2023

Since the main motivation for this is to fix overload resolution problems, I propose addressing it by changing ObsoleteAttribute instead.

Idea 1: Make Obsolescence Affect Overload Resolution

We define an "obsolete context" where a containing type or member is obsolete. In this context, overload resolution is treated the same as it is today.

Outside of an obsolete context, obsolete members are treated as a last resort for resolution, perhaps even after extension members with the same signature.

Idea 2: Add a Boolean Property to ObsoleteAttribute

public sealed partial class ObsoleteAttribute
{
    public bool BinaryCompatOnly { get; set; }
}

or an enum like the other commenter suggested.

@HaloFour (Contributor) commented:

Would the compiler resolve the annotated member as a last resort, for the purposes of providing an error message? Or would the member be effectively completely invisible to the compilation process?

I think it does make sense to evolve the ObsoleteAttribute rather than creating a new one. My one concern would be existing compilers wouldn't know to look for these changes, but as long as setting those new properties or whatever also results in IsError returning true it's probably not an issue.

@rickbrew commented Nov 26, 2023

Sign me up. This would be really useful for me. Paint.NET's plugin system is almost 20 years old, and sometimes it's quite difficult to modernize things, or to get plugin authors to use the correct (new) systems, when all the old stuff is still littering the namespaces and Intellisense.

For example, I have two BGRA32 color/pixel structs: PaintDotNet.ColorBgra, which is the old/legacy one, and then there's PaintDotNet.Imaging.ColorBgra32. How is a plugin author supposed to know which one they should use? Sometimes it's obvious, because they're passed a parameter that specifies it, e.g. IBitmap<ColorBgra32>, or they're similarly required to return something that uses the new type. But otherwise there's no way, other than [Obsolete], to keep them away from the old crusty version of the struct.

Here's another example. Let's say you're writing a library of primitives for things like vectors, matrices, colors, etc. Sometimes these should be explicitly castable to other primitives, other times implicitly castable.

But sometimes you make a mistake and an explicit cast operator really should've been implicit, or vice versa. How do you fix that? You literally can't. You are forever stuck with your first decision or typo or copypasta error. Edit: This is because the compiler won't let you declare both (without trickery), and if both exist when using it then you'll get ambiguity errors (IIRC).

public readonly struct Matrix5x4
{
    private readonly Vector4 row0, row1, row2, row3, row4;
    ...
    public static implicit operator Matrix4x4(Matrix5x4 matrix) => ...; // OOPS 😢
}

I mention this because I actually made this type of mistake recently 😢 I got lucky -- no plugins were using the cast operator, so I just changed it.

It'd be nice to have this capability:

    [BinaryCompatOnly]
    public static implicit operator Matrix4x4(Matrix5x4 matrix) => (Matrix4x4)matrix;

    public static explicit operator Matrix4x4(Matrix5x4 matrix) => ...

BTW, extension methods already have a pseudo [BinaryCompatOnly]: just remove the `this`. https://twitter.com/rickbrewPDN/status/1146810316887429120

@333fred (Member, Author) commented Nov 27, 2023

Why not just add a flag to Obsolete to specify this behavior?

My concern with this is that the BCL may want to do this in libraries that are still targeting netstandard2.0. It's certainly something we can discuss though.

@TahirAhmadov commented:

@333fred isn't it the same issue with either adding a new attribute or modifying an existing one? Please elaborate...

@333fred (Member, Author) commented Nov 27, 2023

No, it is not the same. You could polyfill BinaryCompatAttribute.

@TahirAhmadov commented:

Hmmm...

If we were to change how Obsolete(error=true) behaves, the outcome would be overload resolution would work differently and programs which previously didn't compile would now compile - is that acceptable?

How about modifying Obsolete for new .NET and polyfilling a new Obsolete under a different namespace for netstandard2.0? Either using #if or even with compiler magic - if System.ObsoleteAttribute is referenced, C# redirects to System.BackCompat.ObsoleteAttribute.

Or just do both - modify Obsolete in new .NET; add BackCompatOnly as polyfill for netstandard2.0; then compiler checks both, with eventual netstandard2.0 deprecation it stops being used naturally - again, with #if.

These options may sound too complicated, but it just feels so right to modify Obsolete, it would be a shame to add a new attribute just because of the older framework compatibility.

@333fred (Member, Author) commented Nov 27, 2023

If we were to change how Obsolete(error=true) behaves, the outcome would be overload resolution would work differently and programs which previously didn't compile would now compile - is that acceptable?

Absolutely not, but I think you may have known that 😛.

How about modifying Obsolete for new .NET and polyfilling a new Obsolete under a different namespace for netstandard2.0? Either using #if or even with compiler magic - if System.ObsoleteAttribute is referenced, C# redirects to System.BackCompat.ObsoleteAttribute.

That doesn't seem preferable to me. That seems way more complex.

@jnm2 (Contributor) commented Nov 28, 2023

My concern with this is that the BCL may want to do this in libraries that are still targeting netstandard2.0. It's certainly something we can discuss though.

@333fred Couldn't such libraries still define an internal System.ObsoleteAttribute and silence the warning that a compilation type is being preferred over a conflicting library type?

@TahirAhmadov commented:

@jnm2 I think it's possible to create a NuGet package which just adds source to the project - not a referenced DLL, right? Then this "internal" Obsolete can be distributed without the need to copy and paste (which by itself is not a huge amount of effort, but still).

@teo-tsirpanis commented:

Moving from #7707 (comment), one idea to support type-forwarding these hidden types is to add a command-line option for a line-delimited file with the assembly-qualified names of the types to forward.

The compiler will check that these types exist in the referenced assembly, and emit entries in the ExportedType table.
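
For comparison, today's attribute-based forwarding is a source-level declaration, sketched below with placeholder names; it requires the forwarded type to be visible to the compiler at build time, which is exactly what a hidden, binary-compat-only type would not be. Hence the suggestion of a compiler option instead:

```csharp
// Placed in the assembly that used to define the type; the runtime
// redirects lookups of MyNamespace.MyObsoleteType to the assembly
// that now defines it. (Fragment: needs both assemblies to compile.)
using System.Runtime.CompilerServices;

[assembly: TypeForwardedTo(typeof(MyNamespace.MyObsoleteType))]
```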

@333fred (Member, Author) commented Nov 30, 2023

My concern with this is that the BCL may want to do this in libraries that are still targeting netstandard2.0. It's certainly something we can discuss though.

@333fred Couldn't such libraries still define an internal System.ObsoleteAttribute and silence the warning that a compilation type is being preferred over a conflicting library type?

I could be misremembering, but no, I don't believe they could, just like (iirc) you can't do that with the Obsolete constructor that takes an error code.

@TahirAhmadov commented:

using System;

namespace ObsoleteTest
{
	internal class Program
	{
		static void Main(string[] args)
		{
			Console.WriteLine("Hello, World!");
		}
	}
	[Obsolete(ObsoleteLevel.BinaryCompatOnly)]
	class Foo
	{
	}
}

namespace System
{
	enum ObsoleteLevel { Warning, Error, BinaryCompatOnly };
	class ObsoleteAttribute : Attribute
	{
		public ObsoleteAttribute(ObsoleteLevel level = ObsoleteLevel.Warning) { }
	}
}

It causes this currently: 1>C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Current\Bin\Roslyn\Microsoft.CSharp.Core.targets(80,5): error MSB6006: "csc.exe" exited with code -2146232797.

IDE shows this in stacktrace: Unable to cast object of type 'System.Int32' to type 'System.String'.

Looks like this is not currently supported, but surely this can be made to work? The legacy netstandard2.0 is unchanged, but the compiler can be changed to accommodate this, right?

@rickbrew commented:

one idea to support type-forwarding these hidden types is to add a command-line option for a line-delimited file with the assembly-qualified names of the types to forward.

Please no ... I don't think you can even add custom command-line args for the C# compiler in Visual Studio

@teo-tsirpanis commented:

@rickbrew it can be integrated with the project files like this:

MyProject.csproj

<PropertyGroup>
  <TypeForwards>forwards.txt</TypeForwards>
</PropertyGroup>

forwards.txt

MyNamespace.MyObsoleteType[MyObsoleteAssembly]
MyNamespace.MyObsoleteType2[MyObsoleteAssembly]

Or this:

<ItemGroup>
  <TypeForward Include="MyNamespace.MyObsoleteType[MyObsoleteAssembly]" />
  <TypeForward Include="MyNamespace.MyObsoleteType2[MyObsoleteAssembly]" />
</ItemGroup>

@rickbrew commented:

Okay that would work :)

@TahirAhmadov commented:

Maybe it's a stupid idea, but could we reuse Obsolete.Message and append a special marker at the end which denotes "binary compat only"? In new .NET, we add a ctor which sets the message as needed, together with IsError=true; new C# sees that marker and treats the member as BinaryCompatOnly; old C# just keeps erroring as usual, and the user sees something like "--THIS IS BINARY COMPAT ONLY--" at the end of the message; .NET Standard 2.0 just appends that blob manually, using some polyfilled type with a constant string; voila.

@ds5678 commented Nov 30, 2023

My concern with this is that the BCL may want to do this in libraries that are still targeting netstandard2.0. It's certainly something we can discuss though.

@333fred Couldn't such libraries still define an internal System.ObsoleteAttribute and silence the warning that a compilation type is being preferred over a conflicting library type?

I could be misremembering, but no, I don't believe they could, just like (iirc) you can't do that with the Obsolete constructor that takes an error code.

I think we should make it so that a class in our assembly hides all external classes with the same full name. (Those classes could still be accessed with extern alias, if necessary.)

@333fred (Member, Author) commented Nov 30, 2023

@ds5678 I am unsure what you mean by that or how it applies here.

@333fred (Member, Author) commented Mar 19, 2024

We never take that approach for attributes we expect users to apply directly, and I wouldn't expect that to change with this proposal.

@Neme12 commented Apr 11, 2024

I wouldn't conflate this feature with the Obsolete attribute, they seem orthogonal.

@jnm2 (Contributor) commented Apr 11, 2024

@AraHaan Like with other statically-analyzed attributes, libraries can declare their own copies of the attributes as internal with #if !NET9_0_OR_GREATER around them.

@AraHaan (Member) commented Apr 14, 2024

@AraHaan Like with other statically-analyzed attributes, libraries can declare their own copies of the attributes as internal with #if !NET9_0_OR_GREATER around them.

Yeah, but I would rather it be compiler-generated for all TFMs. Mainly because there are tons of attributes that are already compiler-generated for all TFMs nowadays, so what is the problem with one more?

@333fred (Member, Author) commented Apr 14, 2024

Mainly because there are tons of attributes that are already compiler-generated for all TFMs

As I previously stated, the compiler does not generate attributes we expect users to apply by hand, and this policy will not change here. There are lots of footguns for polyfilling these attributes on lower TFMs, and we do not support the scenario.

@Neme12 commented Apr 16, 2024

Is it yet decided whether this will allow overloading by return type? If so, we need new reflection APIs (e.g. Type.GetMethod) that take a return type in addition to the parameter types, in order to be future proof and not get an AmbiguousMatchException if a new, preferred overload with a different return type is added in the future.

@HaloFour (Contributor) commented:

@Neme12

Is it yet decided whether this will allow overloading by return type? If so, we need new reflection APIs (e.g. Type.GetMethod) that take a return type in addition to the parameter types

Technically the CLR has always allowed for overloading by return type, as the return type is a part of the method signature. I'm not aware of any languages that take advantage of the capability, though.

@Neme12 commented Apr 16, 2024

@HaloFour Yes, I know. I'm asking if this proposal will allow overloading by return type in C# as well, with one of the overloads having a higher priority and thus always being the one that is picked. Then the need for getting a method not only by parameter types but also by return type in reflection will be greater. And it would be good to get this ability in the same .NET release.

(Although I think having the ability to specify return type in GetMethod etc in addition to parameter types is still useful and needed even without this C# feature, because IL allows it and you can encounter a method overloaded on the return type in the metadata, and I would like to always specify not only the parameter types but also the return type when calling GetMethod to specify precisely which method I want.)

@CyrusNajmabadi (Member) commented:

The proposal only affects overload resolution. Meaning which overload is picked for a particular call. It doesn't change at all the rules about what overloads can be declared.

@Neme12 commented Apr 16, 2024

@CyrusNajmabadi And is there no plan to allow such overloads to be declared? From the way this feature (or the previous one - BinaryCompatOnly) was advertised and talked about by Jared and others, they mentioned the possibility of overloading by return type thanks to this feature, if I recall correctly.

@CyrusNajmabadi (Member) commented:

they mentioned the possibility of overloading by return type thanks to this feature, if I recall correctly.

We'd have to update this proposal in that case, or add a new proposal for that :)

I'm not opposed to it. It just doesn't seem to be part of what is being designed right here, right now. Perhaps for the future.

@brianrourkeboll commented Apr 17, 2024

(I tried to find previous discussion of this question, but I didn't see any. If this has already been talked about somewhere, please point me to it.)

Given this bit from the proposal summary about the feature's intended use —

a new attribute … that can be used by API authors to adjust the relative priority of overloads … even if those APIs would normally be considered ambiguous

(emphasis mine)

— and given this example from § Motivation

The type or member is still visible in overload resolution, and may cause unwanted overload resolution failures when there is a perfectly good alternative, but that alternative is either ambiguous with the obsoleted member, or the presence of the obsoleted member causes overload resolution to end early without ever considering the good member. For this purpose, we want to have a way for API authors to guide overload resolution on resolving the ambiguity, so that they can evolve their API surface areas and steer users towards performant APIs without having to compromise the user experience.

(emphasis mine)

— what happens when (1) an older C# compiler, (2) a C# compiler targeting an older language version, or (3) a language that does not understand this attribute encounters a library that depends on this feature?

Will it be impossible to consume an overloaded API whose only disambiguator is this attribute in such a case? Or am I misunderstanding the implications here?

@snarbies commented:

Will it be impossible to consume an overloaded API whose only disambiguator is this attribute in such a case?

At the risk of stating the obvious, such an attribute would presumably never be the only disambiguator between overloads. It only helps with the decision making when there are ambiguous non-exact matches (implicit casts). Adding explicit casts so that your invocation exactly matches a (non-binary-compat-only) overload should succeed, in C# as well as other languages.

In other words, if you can change the invoking code, you can always make it work. Without changing the code, when a library introduces back-compat-only overloads I'd expect it to risk breaking on an older compiler and for a language that does not understand the attribute.
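The "add an explicit cast" escape hatch works because a cast produces an exact match that overload resolution always prefers. A self-contained sketch of that mechanism (hypothetical method names, independent of the proposed attribute):

```csharp
using System;

public static class Demo
{
    // An int argument converts implicitly to both double and decimal,
    // and neither conversion is considered better than the other, so
    // the call M(1) on its own is ambiguous (compiler error CS0121).
    public static string M(double x) => "double";
    public static string M(decimal x) => "decimal";

    public static void Main()
    {
        // string r = M(1);        // error CS0121: the call is ambiguous
        string r = M((double)1);   // the cast creates an exact match
        Console.WriteLine(r);      // prints "double"
    }
}
```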

@jnm2 (Contributor) commented Jul 4, 2024

If there would be a way to prioritize an instance method lower than any unannotated extension method, this could be the way out of the dilemma in dotnet/runtime#18087 (comment).

@jodydonetti (Contributor) commented Aug 25, 2024

If we were to change how Obsolete(error=true) behaves, the outcome would be overload resolution would work differently and programs which previously didn't compile would now compile - is that acceptable?

Absolutely not, but I think you may have known that 😛.

Hi all, can somebody please explain why this is the case?

I mean, "programs which previously COMPILED would now NOT COMPILE" is obviously bad, no questions asked.
But why is "programs which previously DIDN'T COMPILE would now COMPILE" a problem?

What I mean is that any code which tries to use a feature introduced only later (e.g. trying to use async before C# 5) did not compile until that feature was added, so I fail to come up with a good example of this being problematic.

Thanks!

@jjonescz (Member) commented:

I mean, "programs which previously COMPILED would now NOT COMPILE" is obviously bad, no questions asked.
But why is "programs which previously DIDN'T COMPILE would now COMPILE" a problem?

You're right, this direction should not be problematic. However, changing Obsolete(error=true) would change overload resolution which would result in both newly successful compilations (fine) but also newly failing compilations or different runtime behaviors (breaking changes).

@AraHaan (Member) commented Aug 26, 2024

Honestly, just ship a new attribute that can easily be polyfilled downlevel without issues, and then ship it.

Credit goes to Tanner Gooding, who sent the "ship it" image on Discord a long time ago, and I just saved it for later use 😂. But yeah, I agree with it: ship it and screw design. 😸

@jodydonetti (Contributor) commented Aug 26, 2024

You're right, this direction should not be problematic.

Thank god, finally; I was going a little bit insane. I've been told multiple times that it was a problem, and I was not able to understand why 😅

However, changing Obsolete(error=true) would change overload resolution which would result in both newly successful compilations (fine) but also newly failing compilations or different runtime behaviors (breaking changes)

Since we are here, can you give me one example of this?
Ideally for both:

  1. a compilation that would stop working with such a change
  2. a different runtime behavior with such a change

I'm asking because I fail to see (again, my bad) how that could be, not in general but in this very specific case.
What I mean is that I understand that usually a change in behaviour is uber delicate and can have subtle ramifications, but in this very precise case I don't understand how.

The change we are talking about is such that code that was compiling before would not stop compiling, only vice versa (and we agreed that is not a problem).

How can "ignoring from overload resolution of a specific call site a specific overload that cannot be called from that specific call site anyway" break an existing piece of code?

Just to be clear, I'm not saying I'm necessarily right, I'm asking if it's possible to see an instance of such a case to better understand it.

Thanks for your time.

@333fred (Member, Author) commented Aug 26, 2024

@jodydonetti I'll point out that you are quoting from comments on the previous version of this proposal, not the current version. However, yes, it is a problem when code that previously didn't compile now compiles; what that often means in practice is that code that previously compiled because instance member overload resolution failed, and an extension was chosen instead, now chooses an instance member, which is a silent breaking change. The version you're talking about here also had a lot of problems around inheritance and testability, which is why we moved to OverloadResolutionPriority instead.

@jodydonetti (Contributor) commented:

@jodydonetti I'll point out that you are quoting from comments on the previous version of this proposal, not the current version.

Sorry about that, it was not intentional.

what that often means in practice is that code that previously compiled because instance member overload resolution failed, and an extension was chosen instead, now chooses an instance member

Let's use an MRE (.net 9).

var foo = new Foo();

foo.Bar();

public class Foo
{
	[Obsolete("Use Bar(int) instead.", true)]
	public void Bar(bool b = false)
	{
	}

	public void Bar(int i = 0)
	{
	}
}

public static class FooExtensions
{
	public static void Bar(this Foo foo)
	{
	}
}

This does not compile: even though overload resolution fails on the instance members and they cannot be called, the extension is not chosen instead.


Also, one thing I'm not getting this part:

what that often means in practice is that code that previously compiled

We were talking about code that did not compile, not code that did. Am I missing something?

The version you're talking about here also had a lot of problems around inheritance and testability, which is why we moved to OverloadResolutionPriority instead.

If you have some time to spare, can you point me to an example of that? I'd like to understand more and haven't found one by looking at the other comments.

A practical example

To shorten the back-and-forth and save time on your side (sorry), here is the MRE for which I'm not able to wrap my head around.

var foo = new Foo();

foo.Bar();

public class Foo
{
	public void Bar(int i = 0)
	{
	}
}

This compiles: the compiler picks the only one, easy peasy.
All makes sense.

var foo = new Foo();

foo.Bar();

public class Foo
{
	private void Bar(bool b = false)
	{
	}

	public void Bar(int i = 0)
	{
	}
}

This compiles (note private): the compiler picks the only one that is callable from that call site.
All makes sense.

var foo = new Foo();

foo.Bar();

public class Foo
{
	public void Bar(bool b = false)
	{
	}

	public void Bar(int i = 0)
	{
	}
}

This does NOT compile: the compiler cannot pick one callable from that call site, since 2 are both callable and compatible.
All makes sense.

var foo = new Foo();

foo.Bar();

public class Foo
{
	private void Bar(bool b = false)
	{
	}

	//public void Bar(int i = 0)
	//{
	//}
}

This does NOT compile (note private): the compiler sees only one, but that is NOT callable from that call site because of private.


So it knows that that overload is not callable from the specific callsite.
All makes sense.

var foo = new Foo();

foo.Bar();

public class Foo
{
	[Obsolete("Use Bar(int) instead.", true)]
	public void Bar(bool b = false)
	{
	}

	//public void Bar(int i = 0)
	//{
	//}
}

This does NOT compile: the compiler sees only one, but that is NOT callable from that call site because of obsolete+error.


So it knows that that overload is not callable from the specific callsite, much like a private overload.
All makes sense.

And now the curveball.

var foo = new Foo();

foo.Bar();

public class Foo
{
	[Obsolete("Use Bar(int) instead.", true)]
	public void Bar(bool b = false)
	{
	}

	public void Bar(int i = 0)
	{
	}
}

This does NOT compile: the compiler sees 2, and knows that one is NOT callable from that call site because of obsolete+error, but it does not ignore it.

The compiler knows that from the specific call site it cannot call a private one, so it ignores it, but doesn't do the same for the obsolete+error, even though both are not callable from that callsite, so much so that when there's only the obsolete+error one, it does not compile.

How can it not have a doubt about overload resolution when the overload is private, but have it when it's obsolete+error?

This is the part where I can't see how it can make sense.

Sorry for the long post, I wanted to be as clear and precise as possible.

@333fred (Member, Author) commented Aug 26, 2024

We were talking about code that did not compile, not code that did. Am I missing something?

Code that doesn't compile can have downstream impacts that can cause something else to compile in a different manner. These are often particularly esoteric scenarios, true, that usually have to go through multiple levels of overload resolution and lambda binding, but it can happen. In the simple case where we must be talking about an ambiguity, there is indeed nothing an extension would change, but these types of subtle overload resolution changes can pop up in unexpected places and it may not always surface as an ambiguity. I don't feel confident enough to state for sure that marking something Obsolete(error: true) will always cause an ambiguity error that can never be resolved by something further down the pipeline.

The previous version of the proposal, and a brief sentence as to why it was abandoned, is at the end of the current proposal: https://github.com/dotnet/csharplang/blob/main/proposals/overload-resolution-priority.md#alternatives.

How can it not have a doubt about overload resolution when the overload is private, but have it when it's obsolete+error?

Because private members don't participate in overload resolution at all. The way the compiler implements overload resolution here is actually two passes: in the first pass, it does overload resolution respecting visibility rules. When this pass fails and couldn't find anything to look at, we then do a second pass, not respecting visibility rules; this pass is not required by the specification at all, we just want to give a more helpful error in these cases. On the other hand, members that are marked Obsolete do participate in overload resolution; it may end up being an error to call them, but maybe it isn't. For example, you get no error when you're within an Obsolete context yourself (warning or error), but this doesn't impact overload resolution (at least, not directly; I don't feel confident enough to say that there's absolutely no case where overload resolution could have been affected by some nested lambda binding trick that a surprising number of people have actually done for some reason).
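
The accessibility point above can be seen directly: a private instance method is simply not a candidate at an external call site, so resolution falls through to an extension method, whereas a public Obsolete(error: true) member stays a candidate and blocks the extension. A sketch (Widget and Frob are hypothetical names):

```csharp
using System;

public class Widget
{
    // Inaccessible outside Widget: at an external call site this is
    // never a candidate, so it cannot block the extension below.
    private string Frob(bool b = false) => "instance";

    // By contrast, a public member marked [Obsolete("...", error: true)]
    // WOULD remain a candidate, win over the extension, and then fail
    // the call with CS0619.
}

public static class WidgetExtensions
{
    public static string Frob(this Widget w) => "extension";
}

public static class Demo
{
    // The private Frob is skipped, so the extension method is bound.
    public static string Call() => new Widget().Frob();

    public static void Main() => Console.WriteLine(Call()); // prints "extension"
}
```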

@jcouv jcouv added Proposal Implemented Needs ECMA Spec This feature has been implemented in C#, but still needs to be merged into the ECMA specification labels Sep 4, 2024
@jcouv jcouv changed the title [Proposal]: Overload Resolution Priority [Proposal]: Overload Resolution Priority (VS 17.12, .NET 9) Sep 17, 2024
@jcouv jcouv modified the milestones: Working Set, 13.0 Sep 17, 2024