
Make non-writable prototype properties not prevent assigning to instance #1307

Closed
bmeck wants to merge 1 commit

Conversation

@bmeck (Member) commented Sep 13, 2018

The so-called "override mistake" is a symptom of this.

Some slides: https://docs.google.com/presentation/d/17dji3YM5_LeMvdAD3Y3PQoXU1Mgm5e2yN_BraBSTkjo/edit
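
For concreteness, a minimal illustration of the behavior being changed (names are illustrative):

```js
const proto = Object.freeze({ toString() { return "proto"; } });
const instance = Object.create(proto);

// Sloppy mode: the assignment is silently ignored because the inherited
// "toString" is non-writable, even though `instance` is extensible.
instance.toString = () => "instance";
console.log(instance.toString()); // "proto"

// Strict mode: the same assignment throws a TypeError.
try {
  (function () { "use strict"; instance.toString = () => "instance"; })();
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// Only an explicit define succeeds today; this PR would make the plain
// assignments above behave like this define instead:
Object.defineProperty(instance, "toString", {
  value: () => "instance", writable: true, enumerable: true, configurable: true
});
```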

@bmeck added the labels "normative change" (Affects behavior required to correctly evaluate some ECMAScript source text) and "needs consensus" (This needs committee consensus before it can be eligible to be merged) on Sep 13, 2018
@littledan (Member)

This patch looks to me like it would do the trick. At the risk of bikeshedding on details too much, I wonder if it would be a little cleaner to check existingDescriptor's [[Writable]] attribute rather than ownDescriptor.

@littledan (Member)

At a high level, I like the idea of going in this direction. However, let's make sure to get some data about web compatibility before asking engines to ship the semantics.

@bmeck (Member, Author) commented Sep 13, 2018

@littledan this patch should be using existingDescriptor? Where is ownDescriptor being checked?

We cannot use ownDescriptor because it is the "incoming" new descriptor.

@allenwb (Member) commented Sep 13, 2018

Please review and reference the multiple times in the past when this issue has been discussed within TC39 and was tabled. Explain what has changed or is different now that justifies reopening this discussion. This is just good hygiene for a standards committee.

A couple thoughts:
Any change that silently modifies the observable behavior of property get/set semantics is likely to be web breaking. This is true even if the old behavior is unintuitive and relatively rare. The legacy web is huge, and correct application-level behavior can depend upon local details that would be considered bugs if the original developer had discovered them.

It is much more likely that behaviors that terminate in a thrown exception can be changed without breaking existing applications. (No guarantees, but the odds are better.) TC39 traditionally hasn't worried very much about substituting some new behavior for a throw. The behavior you are concerned about currently throws in strict mode code. This suggests that a change that only applied in strict mode might be plausible.

Because of the spec's intentional factoring of behavior between a MOP and syntax-linked evaluation actions, [[Set]] does not know whether or not it is being invoked from a strict mode operation. So it would not be possible to make a strict-mode-only change via the specification of ordinary [[Set]]. Instead, what you would probably have to do is make the change in step 6.3 of PutValue. If [[Set]] returns false for a strict mode PutValue where the thisValue is an extensible object that does not have the corresponding own property, then you could create and initialize an own data property using [[DefineOwnProperty]].
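
To illustrate the suggested PutValue-level repair, here is a userland approximation (strictAssign is a hypothetical helper, not spec text; it sketches the step described above under those assumptions):

```js
// Today's divergence that motivates a strict-mode-only fix:
const base = Object.freeze({ x: 1 });
const child = Object.create(base);

child.x = 2;          // sloppy mode: [[Set]] returns false, silently ignored
console.log(child.x); // 1

// Sketch: when assignment's [[Set]] returns false and the receiver is an
// extensible object without the own property, define it instead of throwing.
function strictAssign(receiver, key, value) {
  if (!Reflect.set(receiver, key, value)) {
    const ownDesc = Object.getOwnPropertyDescriptor(receiver, key);
    if (Object.isExtensible(receiver) && ownDesc === undefined) {
      return Reflect.defineProperty(receiver, key, {
        value, writable: true, enumerable: true, configurable: true
      });
    }
    throw new TypeError(`Cannot assign to read only property '${String(key)}'`);
  }
  return true;
}

strictAssign(child, "x", 2);
console.log(child.x, base.x); // 2 1 -- the shared prototype is untouched
```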

@bmeck (Member, Author) commented Sep 13, 2018

@allenwb if there are specific past conversations about this that you want to discuss, we can do that. Some quick searching on https://esdiscuss.org/ didn't show anything of mention. #1154 exists on ecma262, but it is the only past reference I found in some quick searching that tries to discuss the issue, and it doesn't have a concrete solution proposed. The stalled nature of that issue, and the fact that this is a concrete change I am proposing, made me think it appropriate to open a new PR.

In addition, after some review, I don't think we should do this in PutValue. In particular, because:

const X = { get f() {} }; // accessor property with no setter
X.f = 1;                  // assignment: [[Set]] returns false

performs a PutValue whose [[Set]] returns false, but I do not believe it should fall back to a defineProperty operation. I do not wish for code like the above to break the shared nature of a prototype, as not all getters are single-use. I suspect the change in PutValue would be breaking for the same reasons mentioned in the slides about changing accessor behavior.

Per moving from error to other behavior: I agree it is observable, but many changes, including adding methods to objects, also move from errors to non-errors. Given history, this is unlikely to be controversial.

Per moving from noop to other behavior: I am skeptical that any code intentionally relies on a [[Set]] silently failing and then tests to ensure that no change occurred. Gathering data through means such as usage counters would give us actual data rather than hand waving. In addition, code in ES Modules and class bodies always throws, so while this is a relevant breakage, it is generally going to be hard to rely on the behavior remaining a noop as code slowly migrates to newer features.

Per why this is more relevant than it was in the past: we have had presentations about freezing primordials and the security problems of their mutability from both Mark and Natalie within the past year. I would also point to the landing of class fields as a prime example of a define operation being introduced at least in part because of this behavioral discontinuity, and because it would be untenable to perform [[Set]] on prototypes that could be frozen as the spec stands today. Since the issue I mentioned on the spec repo is also from this year, it does not seem like actual proposals to fix this have come up much.

@zenparsing (Member)

It would also be helpful to understand the end user value that this change will create (taking into account the fact that users have had time to "deal with" the current semantics).

@erights commented Sep 13, 2018

It would also be helpful to understand the end user value that this change will create (taking into account the fact that users have had time to "deal with" the current semantics).

@zenparsing I have seen two ways users "deal with" the current behavior:

  • They back off of freezing in production entirely, leaving their users more vulnerable.
  • They replace the non-writable data property with an accessor whose setter emulates the assignment (the "accessor trick").

The possibility of the accessor is why this had not been taken seriously by TC39 in the past. Caja has in fact been using this accessor trick in production at scale for many years, successfully: https://github.com/google/caja/blob/master/src/com/google/caja/ses/repairES5.js#L388

However, since then Agoric and Salesforce have reengineered the core mechanisms of Caja/SES into the Realms shim (https://github.com/tc39/proposal-realms/tree/master/shim) and modern SES (https://github.com/Agoric/SES) to be much more efficient than the old Caja/SES. Salesforce has deployed this in production at massive scale, and finds that the overhead of [[Get]]ing through the getter causes a painful slowdown compared to normal JS.
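
For readers unfamiliar with the trick: a simplified sketch (modeled loosely on Caja's repairES5.js; the real code handles many more edge cases). Applied to a prototype before it is frozen, it converts a data property into an accessor whose setter emulates assignment on inheriting objects:

```js
function enableOverride(proto, name) {
  const value = proto[name];
  Object.defineProperty(proto, name, {
    get() { return value; },
    set(newValue) {
      if (this === proto) {
        throw new TypeError(`Cannot assign to read only property '${name}'`);
      }
      // Emulate what assignment would have done on a non-frozen prototype:
      Object.defineProperty(this, name, {
        value: newValue, writable: true, enumerable: true, configurable: true
      });
    },
    enumerable: false,
    configurable: false
  });
}

enableOverride(Object.prototype, "toString");
Object.freeze(Object.prototype);

function Point(x, y) { this.x = x; this.y = y; }
Point.prototype.toString = function () { return "point"; }; // now works
```

The cost is that every [[Get]] of the property now runs the getter, which is the overhead described above.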

@erights commented Sep 13, 2018

It would also be helpful to understand the end user value that this change will create (taking into account the fact that users have had time to "deal with" the current semantics).

So to be explicit, the values to the end user:

  • Programmers will be able to freeze what they want to freeze without penalty, resulting in code being more defensive and users experiencing less corruption --- both accidental and malicious.
  • The accessor trick is expensive, so when it is used it is used only selectively, and old code still sometimes breaks when it assigns outside the places where the trick was applied. The user will no longer suffer from these breakages.
  • The accessor trick must be used for cases like Object.prototype.toString, whose new getter imposes a significant slowdown compared to normal non-defensive JS. The user would no longer experience these slowdowns.
  • The main deterrent for freezing primordials would be gone, enabling more programmers to give more users a more reliable and more secure experience.

@zenparsing (Member)

Excellent, thank you @erights .

@allenwb (Member) commented Sep 13, 2018

@bmeck

If there are specific past conversations about this that you want to discuss, we can do that.

You're making the proposal, so it's basically your job to adequately research it. You should take my word that it has been discussed multiple times going back to ES5 development. I'm happy to give you some pointers on where to look, but I've done this sort of tedious search too many times to volunteer to do it again for somebody else's proposal. (BTW, this is a great example of why the TC39 archives need to be better curated, and why a technical topic index, at least of meeting notes and decisions, would be particularly helpful.)

I would suggest reviewing meeting notes as well as doing a deeper review of es-discuss/es5-discuss. Note that some technical meeting notes prior to May 2012 are only attached to formal minutes on the Ecma members file server, and others only exist as messages on es-discuss or in archives of the old Ecma TC39 email reflector. There are also presentations and proposal docs (not necessarily on this topic) that only exist on the Ecma members site. Also see the bugs.ecmascript.org archive and wiki.ecmascript.org as archived at archive.org. My recollection is that I was recently searching for something in the 2008-2012 timeframe, stumbled across something about "the over-ride bug", and thought "I bet that this is the first discussion of this issue". Unfortunately, I don't think I captured a link, as it wasn't the topic I was hunting for at the time. Wherever you look, Mark Miller is going to be part of the discussion, as he is the person who has consistently been involved in raising the issue.

It also doesn't have a concrete solution proposed. The stalled nature of that issue, and the fact that this is a concrete change I am proposing, made me think it appropriate to open a new PR.

Exactly: in past discussions it has never made it past the "this change is too likely to break the web" stage. You need a really strong case to get past that hurdle, i.e., Browser Game Theory.

In addition, after some review, I don't think we should do this in PutValue. In particular, because:

My suggestion is just a sketch of a possible path that is less likely to run into objections. WRT accessor properties, I would suggest expanding the predicate I suggested for PutValue 3.2 so it doesn't apply to accessor property over-rides.

@erights commented Sep 13, 2018

See http://web.archive.org/web/20141230041441/http://wiki.ecmascript.org/doku.php?id=strawman:fixing_override_mistake and the links at the bottom to other discussions.

Repeating here in full @allenwb's rationale for not fixing the override mistake, so that we can respond to it here:

Not a Mistake — Allen Wirfs-Brock 2012/01/09 20:45

This was not a mistake in ES5/5.1 and it is not a bug. It is a semantics that goes all the way back to ES1. It is also a behavior which makes complete sense from a prototypal inheritance perspective and can be found in the Self language.

The basic idea is that the properties of a prototype object are shared parts of all inheriting child objects. Modification of such a shared part by a child introduces a local change that is visible to that child (and its children), so this requires creation of an "own" property on the child. However, read-only properties cannot be modified (by normal means, e.g. assignment), so there is no need to create an "own" copy. Assigning to an inherited read-only property or an "own" read-only property should have the same effect (whether that is ignoring the assignment, throwing, etc.). Allowing assignment to an inherited read-only property would break the invariant that a prototype's read-only property is an immutable value that is shared among all children of the prototype.

If there was a mistake in designing ES5, it was allowing Object.defineProperty to create child properties that over-ride inherited read-only data properties. This broke an invariant that previously existed in the language, but this invariant was already violated by some pre-ES5 clause 15 objects (e.g. the writability of the prototype property of some children of Function.prototype). However, I think the ES5 decision was probably the right one given the legacy clause 15 usages and the overall reflective nature of defineProperty.

Finally, this issue was given plenty of attention during the development of ES5. In various drafts, the ES5 specification actually broke the legacy behavior, causing it to behave similarly to what is suggested by this strawman. Those specification bugs were corrected in order to ensure that the semantics specified by ES3 and earlier editions were not changed by ES5.
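
For what it's worth, the copy-on-write behavior that rationale describes is easy to observe:

```js
const proto = { shared: 1 };
const child = Object.create(proto);

child.shared = 2; // writable: assignment creates an own copy on the child
console.log(proto.shared);                      // 1, the shared part is untouched
console.log(Object.getOwnPropertyNames(child)); // ["shared"]

// For a read-only property there is, per this rationale, nothing to copy:
const roProto = Object.freeze({ fixed: 1 });
const roChild = Object.create(roProto);
roChild.fixed = 2;          // rejected, same as assigning on roProto itself
console.log(roChild.fixed); // 1
```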

@allenwb (Member) commented Sep 13, 2018

@erights

Salesforce has deployed this in production at massive scale, and finds that the overhead of [[Get]]ing through the getter causes a painful slowdown compared to normal JS.

Another approach is to pressure engine implementors to put more effort into optimizing such accessors. I don't see any reason why, in very hot loops over objects with stable shapes, such accesses couldn't be specialized and inlined.

That's sort of the whole history of JS engines: features remain slow until they get enough usage that implementors are pressured into optimizations.

@bmeck (Member, Author) commented Sep 13, 2018

@allenwb even if we make accessors fast, it does not fix the underlying problem for data properties, which are what the JS spec uses for primordials. Nor does it fix the problem for ordinary userland objects that seek solely to prevent mutation of themselves, such as frozen object literals.

@allenwb (Member) commented Sep 13, 2018

BTW, I should mention that I think the linked inheritance behavior of get/set accessor parts was also a mistake.

It means that somebody who just wants to over-ride one half of an accessor (say, disable the set behavior while preserving the get behavior) has to remember to over-ride the other half with an explicit super call. I don't know if this has any security implications, but I suspect it is a bigger bug farm than the read-only over-ride issue.

@allenwb (Member) commented Sep 13, 2018

@bmeck

even if we make accessors fast, it does not fix the underlying problem for data properties,

hence the suggestion to make the change for strict mode data properties.

@robpalme commented Sep 13, 2018

Following up on @erights' excellent response to @zenparsing's request for end-user benefits: this change directly benefits a use case we have in production today, multiple applications sharing the same realm for resource-efficiency reasons.

In this use case, it is still desirable to reduce interaction between the separate applications. You can have a consensual agreement that both apps will not mutate globals, but for real robustness you need to deep-freeze the global object. We do that today in development (using the accessor-replacement approach), but the performance degradation is too much to accept in production.

This change would promote the ability to run guaranteed-global-mutation-free code in production.
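
For reference, the deep freeze mentioned here is typically a short recursive walk over own properties and prototypes; a minimal sketch (deepFreeze is not a built-in, and real implementations guard more edge cases):

```js
function deepFreeze(value, seen = new WeakSet()) {
  if (Object(value) !== value || seen.has(value)) return value; // primitives / cycles
  seen.add(value);
  for (const key of Reflect.ownKeys(value)) {
    const desc = Object.getOwnPropertyDescriptor(value, key);
    if (desc) {
      if ("value" in desc) deepFreeze(desc.value, seen);
      if (desc.get) deepFreeze(desc.get, seen);
      if (desc.set) deepFreeze(desc.set, seen);
    }
  }
  deepFreeze(Object.getPrototypeOf(value), seen);
  return Object.freeze(value);
}
// e.g. deepFreeze(globalThis) -- after which every inherited data property
// is non-writable, and the override mistake bites plain assignments.
```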

@waldemarhorwat

I second Allen's points here. We've had a number of discussions about this that need to be put into context here.

@erights commented Sep 13, 2018

We certainly have had a number of discussions, because I keep raising it and Allen keeps objecting. I think Allen's points are wrong, and he no doubt thinks mine are. Because the accessor trick was working well enough for Caja/SES and I had many higher-priority fights, I never pursued it to exhaustion. We certainly never achieved consensus not to fix the override mistake. I do not recall anyone but Allen making arguments against fixing it, except that Brendan would occasionally also voice concern about compat. All the points I remember anyone making for not fixing the override mistake are well represented by Allen's old text that I quote above.

Every time I brought it up again, and Allen argued against it again, I remember having a distinct impression that neither of us were saying anything new.

So, yes, everyone please research this. Please follow the links from my old message:

http://web.archive.org/web/20130731082322/http://hg.ecmascript.org/tests/test262/file/c84161250e66/test/suite/chapter15/15.2/15.2.3/15.2.3.6/15.2.3.6-4-405.js
https://mail.mozilla.org/pipermail/es-discuss/2011-November/017997.html
https://bugs.chromium.org/p/v8/issues/detail?id=1169
https://mail.mozilla.org/pipermail/es-discuss/2011-November/018774.html

I acknowledge that the last ends with my footnote:

I have come to hate the rule that you can't override an inherited
non-writable data property by assignment. But I believe leaving this
mistake alone is better than the consequences of any attempt to fix it.

So, yes, we can learn something new by diving into this history. Apparently, in 2011 I agreed with Allen. I do not remember this and am surprised. Perhaps there are other surprises.

But we should not imagine that we will ever do exhaustive research on the history of any of our nine-year-long controversies. Given my memory of Allen's old points, and Allen essentially reiterating a subset of those points here without access to his old writings, further research is a nice-to-have, not a requirement for proceeding.

What is a requirement: well-articulated and compelling answers to each of the points Allen raises in that old page and anything he cares to add here.

@allenwb (Member) commented Sep 14, 2018

My primary concern is not whether the current behavior is good or bad from a language design perspective or whether the current design choice was intentional or a mistake. I really don't care about those aspects of this issue. I do, however, think the choice of words we use to describe the current behavior ("mistake", "bug", "design choice", etc) can shade and distract (either intentionally or unintentionally) from the discussion of the aspect of this issue that I think is very important.

That aspect is what we mean when we say "Don't break the web" and how that shapes how we approach issues like this one. To me, "Don't break the web" doesn't just mean we don't want to suddenly break the Google or Amazon home page, or any of the current top X thousand web sites. To me, it is a commitment that we will treat the web, and how it is accessed via browsers (or at least the parts of browsers that TC39 has some influence over), as an archive of human digital activity and knowledge. That means we not only want to keep Facebook working. We also want that silly game page that a teenager created in 2012 to still work when that teenager's grandchild loads it for the first time 60 years later, in 2082. We don't know what hundreds of millions of web content creators have done between 1993 and now. We don't know what obscure JavaScript behavior their work may depend upon. We don't know what is sitting, currently unaccessed, on millions of servers. We don't know what is in various web archives. We don't know what is archived in server backups. We don't know what will be of interest to somebody in 5, 50, or 500 years.

A corollary of "Don't break the web" is "We live with our mistakes." Our initial reaction should be to oppose any proposal that may break existing valid (according to de jure or de facto standards) JavaScript code. The longer the proposed modified behavior has been such a standard, the greater the opposition should be. While a new feature is just being rolled out or isn't yet ubiquitous in browsers, we may decide to correct a design or specification bug. But once something has become an established interoperable web behavior, the bar to changing it becomes extremely high. Even if the behavior was a mistake. We need to live with and work around past mistakes, not fix them.

For this issue, I've already suggested two possible work-arounds. The first one is to change the current exception throwing behavior in strict mode. Changing an exception to a non-exception is a change that is often non-breaking for existing code, although it still has to be carefully thought through. The other work-around is to lobby engine implementors to improve the performance of the accessor property pattern that can be used for the use case that is motivating the change.

Here's another possible fix. Add a new property attribute, "overridable" (defaulting to false, like all the attributes), that allows [[Set]] to create an own property that over-rides an inherited non-writable data property. Is this issue important enough to justify that level of change to the specification? I don't know. But if this issue is important enough to justify possibly breaking some unknown part of the web, then it must be pretty important.

The real point is we don't take the easy way out. We need to assume that we must "live with our mistakes" and search very hard for alternatives to breaking changes. Only after doing that and for the most extreme issues should we ever consider "breaking the web".

@raulsebastianmihaila

As a JavaScript developer I'm very happy that TC39 is carefully considering improving things, but at the same time I see a contrast between the current effort and the lack of effort on fixing fresh bugs: #666 (more than 2 years old though, maybe not so fresh...).

@bmeck (Member, Author) commented Sep 14, 2018

My primary concern is not whether the current behavior is good or bad from a language design perspective or whether the current design choice was intentional or a mistake. I really don't care about those aspects of this issue. I do, however, think the choice of words we use to describe the current behavior ("mistake", "bug", "design choice", etc) can shade and distract (either intentionally or unintentionally) from the discussion of the aspect of this issue that I think is very important.

That aspect is what we mean when we say "Don't break the web" and how that shapes how we approach issues like this one. To me, "Don't break the web" doesn't just mean we don't want to suddenly break the Google or Amazon home page, or any of the current top X thousand web sites. To me, it is a commitment that we will treat the web, and how it is accessed via browsers (or at least the parts of browsers that TC39 has some influence over), as an archive of human digital activity and knowledge. That means we not only want to keep Facebook working. We also want that silly game page that a teenager created in 2012 to still work when that teenager's grandchild loads it for the first time 60 years later, in 2082. We don't know what hundreds of millions of web content creators have done between 1993 and now. We don't know what obscure JavaScript behavior their work may depend upon. We don't know what is sitting, currently unaccessed, on millions of servers. We don't know what is in various web archives. We don't know what is archived in server backups. We don't know what will be of interest to somebody in 5, 50, or 500 years.

I completely disagree here. We have shipped breaking changes in the past, and the current behavior is unreliable in either case due to the discontinuity pointed out both in the slides and in the division between strict mode/ES modules/class bodies and sloppy mode. We need to gather the data, and yes, this affects millions of sites. But we have it in our power to gather that data and make an informed decision, rather than spreading fear beforehand.

A corollary of "Don't break the web" is "We live with our mistakes." Our initial reaction should be to oppose any proposal that may break existing valid (according to de jure or de facto standards) JavaScript code. The longer the proposed modified behavior has been such a standard, the greater the opposition should be. While a new feature is just being rolled out or isn't yet ubiquitous in browsers, we may decide to correct a design or specification bug. But once something has become an established interoperable web behavior, the bar to changing it becomes extremely high. Even if the behavior was a mistake. We need to live with and work around past mistakes, not fix them.

This mistake, as you describe it, is problematic for all usage of non-writable property descriptors. It is not just a mistake; to my knowledge it is only a source of bugs, and the unreliability of the mechanism is compounded here. We should endeavor to make as few breaking changes as possible, but this is indeed a mistake that we should view as something to inspect. Workarounds are untenable in this case unless we introduce a new form of assignment, which I consider an equal shock to the ecosystem, as we would then be telling people to use it or else their frozen/non-writable descriptors will suffer the problems listed here. If you can point to any notable code that relies on this sloppy-mode noop behavior, that would help me understand. Until then, we must wait for usage counters. If no usage is seen at all, and no reports come in during the very long rollout that any JavaScript feature gets, are you sure such sites even want to use updated engines?

There must be at least some consideration of the future, which is bigger than the past, when addressing mistakes, even if we want to make absolutely minimal changes.

For this issue, I've already suggested two possible work-arounds. The first one is to change the current exception throwing behavior in strict mode. Changing an exception to a non-exception is a change that is often non-breaking for existing code, although it still has to be carefully thought through. The other work-around is to lobby engine implementors to improve the performance of the accessor property pattern that can be used for the use case that is motivating the change.

As you stated above, [[Set]] is not factored in a way that can handle this, and as I stated, changing PutValue would break accessors in a way that is easily seen as breaking web APIs.

As also stated above, increasing the performance of accessors will not help us fix this mistake, as JS uses data properties for the majority of the spec and for many userland patterns such as function methods and the properties of object literals.

Such changes are both unable to be made web compatible and unable to fix our mistake, which is causing real security and usability problems.

Here's another possible fix. Add a new property attribute, "overridable" (defaulting to false, like all the attributes), that allows [[Set]] to create an own property that over-rides an inherited non-writable data property. Is this issue important enough to justify that level of change to the specification? I don't know.

I disagree with this change unless it defaulted to true. Increasing the complexity would not help, for the exact reasons that increasing the performance of accessors would not help. We need to address the real mistake.

But if this issue is important enough to justify possibly breaking some unknown part of the web, then it must be pretty important.

I strongly state that it is, and it is becoming increasingly important: watch things that try to use frozen classes, such as PrivateName, struggle; watch the security talks we have been having at TC39; and watch the difficulty of justifying designs like class fields, which would be broken if we used [[Set]] semantics on frozen prototypes. This mistake is becoming more visible, not less, and it is not only affecting future designs but actively keeping security problems in our language.

The real point is we don't take the easy way out. We need to assume that we must "live with our mistakes" and search very hard for alternatives to breaking changes. Only after doing that and for the most extreme issues should we ever consider "breaking the web".

This PR is the hard way out. We have to be very careful when judging these breaking changes.

Increasing performance, adding a new descriptor field, and a strict-mode-only change are all the easy way out, and none of them fixes the mistake or aids our users who face the current behavioral discontinuity and the problems therein. Those solutions are not hard, because they do not attempt to address the problem. They are, as you said above, "working around" rather than fixing.

@bathos (Contributor) commented Sep 14, 2018

When I define a prototype property as non-writable, I do want assignment on instances to fail. I expect usage where overriding is intentional, e.g. in a custom subclass, to be performed very intentionally, e.g. by using class extends to define the overrides in a class body or by using defineProperty. To me this seemed like a feature: prevent accidental assignment and enforce the need for code that says "yes, I really mean to do this".
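
For example, under today's semantics the deliberate routes still work even against a frozen prototype (a minimal illustration):

```js
const proto = Object.freeze({ greet() { return "hi"; } });
const obj = Object.create(proto);

obj.greet = () => "hello"; // accidental assignment: rejected today

// Code that "really means it" defines rather than assigns, and succeeds:
Object.defineProperty(obj, "greet", {
  value: () => "hello", writable: true, enumerable: true, configurable: true
});
console.log(obj.greet()); // "hello"
```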

I don’t feel that strongly about this and I don’t think it’s the end of the world if it were to change. My expectations here are probably informed in part by just being accustomed to it. But it doesn’t seem like a cut-and-dried case of “fixing” to me, and I’m confused about why it would happen via a PR and not a full-fledged proposal. I thought all major changes to semantics went through a proposal process?

@bmeck (Member, Author) commented Sep 14, 2018

@bathos not everything goes through the stage process. I don't think this PR should go through the staging process, in particular because I don't think the intentions of Stages 0, 1, 2, or 4 really apply to it. It is a massive undertaking, but that doesn't mean it must go through the staging process. If the committee wants it to go through the staging process... we could... but that would be odd to me.

@littledan (Member)

I agree with @allenwb above that web compatibility is extremely important. TC39 has made several breaking changes in the past, such as subclassable built-ins and Array.prototype.values. It was a lot of work, and resulted in a combination of spec workarounds/fixes and waiting out the death of some old websites, but we did it. Compatibility is an empirical question, not a theoretical one. Let's get UseCounters/telemetry out in the wild to start the investigation.

It's interesting that no one in this thread has voiced support for the current object model, in its entirety, being optimal. I agree with @erights that past disagreement shouldn't foreclose current discussion on the issue, and would be really happy to have the help of people who have been around longer than I have to dig up these historical archives, wherever they are. (It's somehow unactionable to put the responsibility on the proponents, when it's just unclear to some of us where to look.)

Even if we don't go through the stage process here, I would like to see an optimized JIT implementation and robust web compatibility feedback (e.g. shipping in an early pre-release channel--counters are not enough alone) before merging.

@zenparsing (Member)

Note that @bathos did express support for the current semantics. I also think that the current semantics are reasonable; I would say this issue is arguable both ways (although the current design clearly creates a bit of a nuisance for the SES use case).

Looking forward to the discussion ; )

@erights commented Sep 15, 2018

@bathos @zenparsing "a bit of a nuisance" is a bit of an understatement ;)

While I agree that overriders should be using a defineProperty-oriented operation, like extends, the problem is that there is a tremendous amount of perfectly good pre-class code using the following pattern, which violates no best practice:

function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point.prototype.toString = function() { return ...; };

The pervasive existence of this pattern in old code has caused most projects that tried freezing to back off of freezing in production, leaving their users more vulnerable. The others use the accessor pattern selectively, with problems of performance, compatibility, and occasional breakage. No one uses the accessor pattern pervasively --- not even the old Caja/SES --- because the performance cost is too high, especially during initialization.
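
Concretely, that pattern is exactly what breaks once primordials are frozen; a minimal sketch of the failure mode:

```js
Object.freeze(Object.prototype); // defensive freezing of a primordial

function Point(x, y) {
  this.x = x;
  this.y = y;
}

// Point.prototype inherits a now non-writable "toString" from
// Object.prototype, so this perfectly ordinary assignment is silently
// ignored in sloppy mode (and throws in strict mode):
Point.prototype.toString = function () { return `<${this.x},${this.y}>`; };

console.log(String(new Point(1, 2))); // "[object Object]", not "<1,2>"
```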

@bathos (Contributor) commented Sep 16, 2018

@erights thanks for the clear illustration / @bmeck thanks for explaining. Yes, addressing those needs does seem to have higher value than anything positive about the current semantics. I commented mostly because I was surprised by the mechanism of the possible change, but it sounds like my intuition for "what is / isn't a proposal" is probably just not calibrated correctly.

@littledan (Member)

In parallel with the great conversation above about whether we want to go for this semantic change, let's figure out how this PR should be worded. Going back to the details of this patch (to continue on this thread): it looks like I got a bit confused on first look. In OrdinarySetWithOwnDescriptor, there are a bunch of variables flying around:

  • values
    • P: The property key we are setting
    • V: The value that we are setting the property key to
  • objects
    • Receiver: the object that we're setting the property on
    • O: the object that we're currently searching for the property in, as we iterate up the prototype chain
  • property descriptors
    • ownDesc: When iterating up the prototype chain, this is the property descriptor for P of the current object O if it exists, or undefined otherwise.
    • existingDescriptor: This is the property descriptor for P of Receiver if it exists, or undefined otherwise
    • valueDesc: This is the new property descriptor that's made if writing into an existing data property

There are two lines that do writability checks:

  • 3.a: "If ownDesc.[[Writable]] is false, return false." This means: throw an exception in strict mode if the current object in the prototype-chain iteration has a non-writable property.
  • 3.d.ii: "If existingDescriptor.[[Writable]] is false, return false." This means: if there's an own non-writable property on the instance we're assigning to, throw an exception in strict mode.

I'm wondering if what we really want to do (if we want to make this semantic change) is simply delete 3.a.

(Did I get that all right?)
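
To double-check that reading, here is a rough JS transliteration of OrdinarySetWithOwnDescriptor (a sketch under simplifying assumptions: ordinary objects only, no Proxies; the function shape is illustrative, not spec text; the initial call passes ownDesc = Object.getOwnPropertyDescriptor(O, P)):

```js
function ordinarySetWithOwnDescriptor(O, P, V, Receiver, ownDesc) {
  // Step 2: no own property here; walk up the prototype chain.
  if (ownDesc === undefined) {
    const parent = Object.getPrototypeOf(O);
    if (parent !== null) {
      return ordinarySetWithOwnDescriptor(
        parent, P, V, Receiver, Object.getOwnPropertyDescriptor(parent, P));
    }
    ownDesc = { value: undefined, writable: true, enumerable: true, configurable: true };
  }
  // Steps 4-7: accessor property; invoke the setter with Receiver as this.
  if (ownDesc.get !== undefined || ownDesc.set !== undefined) {
    if (ownDesc.set === undefined) return false;
    Reflect.apply(ownDesc.set, Receiver, [V]);
    return true;
  }
  // Step 3: ownDesc is a data descriptor.
  if (ownDesc.writable === false) return false;    // 3.a -- possibly inherited
  if (Object(Receiver) !== Receiver) return false; // 3.b
  const existingDescriptor = Object.getOwnPropertyDescriptor(Receiver, P); // 3.c
  if (existingDescriptor !== undefined) {          // 3.d
    if (existingDescriptor.get || existingDescriptor.set) return false; // 3.d.i
    if (existingDescriptor.writable === false) return false;            // 3.d.ii
    return Reflect.defineProperty(Receiver, P, { value: V });           // 3.d.iii-iv
  }
  // 3.e: Receiver has no own P; create a fresh data property on it.
  return Reflect.defineProperty(Receiver, P,
    { value: V, writable: true, enumerable: true, configurable: true });
}
```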

@devsnek (Member) commented Sep 19, 2018

If you want to create the property on the receiver shouldn't you explicitly be using defineProperty?

@bmeck (Member, Author) commented Sep 19, 2018

@devsnek that's up for debate; as #1307 (comment) pointed out, there are prevalent ways to do this without defineProperty. If we were to introduce some specialized define operator, those cases would not be fixed. Since a large part of the presentations we have seen in the last year deal with freezing values, not with updating all assignment operations, we should look at what can be done here; introducing a define operation would not fix anything.

@allenwb (Member) commented Sep 19, 2018

@littledan see final paragraphs of
#1307 (comment) and #1307 (comment)

I still contend that if this breaking change is made, it should only apply to strict mode (is anyone trying to build OCAP sandboxes that allow sandboxing non-strict code?), and hence OrdinarySetWithOwnDescriptor is not the right place to do it.

Also, if you remove step 3.a you will be adding an additional observable (via a Proxy) and redundant [[GetOwnProperty]] call on O for the base case (assigning to a read-only own property) of:

let O = Object.defineProperty({p: 1}, "p", {writable:false});
O.p = -1;  

The purpose of 3.a is to ensure that this extra [[GetOwnProperty]] call does not occur.
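
A sketch of how that extra call becomes observable, assuming 3.a were simply deleted (using Reflect.set's receiver argument to route Receiver through a Proxy):

```js
const O = Object.defineProperty({ p: 1 }, "p", { writable: false });
const receiver = new Proxy(O, {
  getOwnPropertyDescriptor(target, key) {
    console.log("[[GetOwnProperty]] observed for", key);
    return Reflect.getOwnPropertyDescriptor(target, key);
  }
});

// Today this logs nothing: step 3.a returns false before the
// Receiver.[[GetOwnProperty]] call in step 3.c is reached. With 3.a
// deleted, the trap above would fire before [[Set]] returns false.
Reflect.set(O, "p", 2, receiver);
```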

@allenwb (Member) commented Sep 19, 2018

@bathos

Yes, addressing those needs does seem to have higher value than anything positive about the current semantics.

The fundamental question isn't about the relative merit of the two semantics. Rather it is about whether TC39 is serious about its commitment to "don't break the web".

@bmeck (Member, Author) commented Sep 19, 2018

@allenwb

The fundamental question isn't about the relative merit of the two semantics.

This PR exists precisely because the current semantics make it untenable in the real world to freeze objects while preserving behavior. This PR is exactly about the semantics, and about whether the current behavior is enough of a problem that we must change it. We have had multiple agenda items in the last year that touched on freezing in a variety of ways, not all of them related to a full object-capability model.

Rather it is about whether TC39 is serious about its commitment to "don't break the web".

Yes, we should be serious about it. However, using phrases like that without data, with the intent of spreading the idea that no breakage is ever allowed in the language, is simply not useful. We have some data about the impact of keeping the current semantics.

We also have made breaking changes in the past, and even recently renamed Atomics.wake to Atomics.notify, knowing that some code deployed on the web was using Atomics.wake.

If we are to talk about "don't break the web" we must first decide what breaking the web entails.


If there are viable alternatives to this PR, that sounds great. If there is a way to change [[Set]] behavior in strict mode that doesn't have fallout in various things like Proxy traps, that's also great. However, adding new operators and more fields to descriptors does not solve the problems we are seeing with the current semantics. Talking about the current semantics without finding a solution to the problems at hand is the very reason we have skirted this issue for so long. If there is a viable solution, we should take it. As it stands, I think this PR has a chance of being viable as long as we gather metrics. Our history of breaking changes does exist, and we will continue to make breaking changes. Forcing Array.prototype.sort() to be stable is also a breaking change; some sites might be relying on instability to add some randomness to double sorting (I don't know why, but I also don't know of people writing code so robust that silently failing assignments don't break their expectations).

@littledan (Member)

In the September 2018 TC39 meeting, we discussed this PR. A high-level summary of my understanding of the discussion (please correct me if this is a misrepresentation):

  • In terms of the ideal object model, we tried to think through and understand @allenwb 's arguments as best as we could, but in the end, the attendees concluded that, if it is web-compatible (e.g., if we could go back in time) it would be best to adopt the proposed semantics here.
  • Web compatibility is a serious concern here; we don't want to break the web. We discussed the idea of including counters in implementations for how often the inherited non-writable property path was hit in both sloppy and strict mode; the committee seemed to agree that, if the sloppy counter is hit very rarely, then it is somewhat likely to be web-compatible to change. There were various hypotheses about what the data will be. @caitp is working on collecting this data in V8.
  • Some people advocated for the strict-only change, and others strongly opposed it. The only motivation for going down this path would be if the sloppy change isn't web-compatible, so we can delay thinking further in that direction until we actually have data-based web compatibility concerns.

littledan added a commit to littledan/ecma262 that referenced this pull request Oct 9, 2018
Make non-writable prototype properties not prevent assigning to instance

TC39 is discussing making inherited non-writable properties overridable with Set,
in tc39#1307. We have not yet assessed the web compatibility of the change or written
tests, so this PR is not yet ready to merge, but this PR gives some concreteness
to how the specification may be done, to help move the discussion forward.

This patch implements the approach described in tc39#1307 (comment)
@allenwb (Member) commented Oct 24, 2018

Some additional background that I found.
The May 2008 ES-discuss thread starting here is the earliest recorded discussion I've been able to find of this behavior. It includes information about its origin and motivation.

As the thread mentions, this behavior goes back to ES1. In fact, it was in the very first ES1 draft of January 10, 1997. The image below is a screen snap of the relevant algorithms in that draft. The thread mentions that these algorithms were initially authored by people from Microsoft. It's probably worth adding that this included people with considerable background in Self and NewtonScript, so it seems likely that they were fully informed about the "prototypal inheritance" motivations for this approach. It was never a "mistake"; it was an intentional, informed design decision.

[screen snap of the relevant algorithms from the January 10, 1997 ES1 draft]

@rossberg (Member)

Intention and information are not mutually exclusive with mistakes. Just saying. :)

@erights commented Oct 24, 2018

Once it is fixed, we can stop worrying about what to call it ;).

@bmeck (Member, Author) commented Jan 4, 2019

Superseded by #1320
