
The getHTML(), setHTMLUnsafe(), and parseHTMLUnsafe() methods are missing #32731

Closed
Tracked by #537
mfreed7 opened this issue Mar 18, 2024 · 21 comments

@mfreed7

mfreed7 commented Mar 18, 2024

MDN URL

https://developer.mozilla.org/en-US/docs/Web/API/Web_components/Using_shadow_DOM#declaratively_with_html

What specific section or headline is this issue about?

Several APIs related to declarative shadow DOM serialization and parsing

What information was incorrect, unhelpful, or incomplete?

There are several APIs that are not documented at all on MDN:

  • getHTML()
  • setHTMLUnsafe()
  • parseHTMLUnsafe()
  • serializable and shadowrootserializable

And the setHTML() method is documented here, but that one isn't yet standardized or shipped anywhere. It's perhaps worth removing or marking as such.

What did you expect to see?

Documentation

Do you have any supporting links, references, or citations?

setHTMLUnsafe() and parseHTMLUnsafe() are specified here:

The getHTML() method and the serializable/shadowrootserializable concept will be in the spec once this spec PR lands:

Do you have anything more you want to share?

No response

@mfreed7 mfreed7 added the needs triage Triage needed by staff and/or partners. Automatically applied when an issue is opened. label Mar 18, 2024
@github-actions github-actions bot added the Content:WebAPI Web API docs label Mar 18, 2024
@lukewarlow
Contributor

After mdn/browser-compat-data#22866 is merged all of the compat data entries will be set up for any pages added for the various new APIs mentioned.

#33141 - seeks to remove the setHTML (and associated documents) content, as it's not helpful imo.

@lukewarlow
Contributor

Btw getHTML has been merged into the spec: https://html.spec.whatwg.org/multipage/dynamic-markup-insertion.html#html-serialization-methods and has an intent to ship in Chromium
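
A minimal browser-only sketch of what the new method does, assuming an engine that ships getHTML() and the serializable option of attachShadow():

```javascript
// getHTML() can serialize shadow roots that opt in; the older
// innerHTML getter always drops them.
const host = document.createElement("div");

// serializable: true is the imperative twin of the declarative
// shadowrootserializable attribute.
const root = host.attachShadow({ mode: "open", serializable: true });
root.innerHTML = "<span>shadow content</span>";

// Shadow roots are skipped unless serializableShadowRoots is set:
host.getHTML();
// Serializable roots round-trip as
// <template shadowrootmode="open" shadowrootserializable="">…</template>:
host.getHTML({ serializableShadowRoots: true });
```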

@dipikabh dipikabh added effort: large This task is large effort. Firefox 126 and removed needs triage Triage needed by staff and/or partners. Automatically applied when an issue is opened. labels May 1, 2024
@dipikabh
Contributor

dipikabh commented May 1, 2024

Copying @pepelsbey, @hamishwillee, and @chrisdavidmills to put this on your radar. We're including this in our ongoing Firefox release work.

@dipikabh
Contributor

dipikabh commented May 3, 2024

Related Firefox bugs:

@lukewarlow
Contributor

This issue should be reopened: it's not fully addressed yet, nor will it be when my second PR is merged.

@sideshowbarker sideshowbarker reopened this May 9, 2024
@github-actions github-actions bot added the needs triage Triage needed by staff and/or partners. Automatically applied when an issue is opened. label May 9, 2024
@dipikabh dipikabh removed the needs triage Triage needed by staff and/or partners. Automatically applied when an issue is opened. label May 10, 2024
@hamishwillee
Collaborator

hamishwillee commented May 13, 2024

@lukewarlow So normally the MDN team does docs, compatibility data, and release notes, along with experimental features page updates for new features behind a preference. If you need any help with any of that, or with getting reviews done, feel free to add a note here pinging me with what help you could use from me.

@sideshowbarker
Collaborator

For the record here, it was my mistake that this issue got closed. It happened because the description for #33494 had “Partially fixes #32731”. And when I reviewed the PR, I did actually notice the word Partially there — but I failed to remember that GitHub doesn’t pay any attention to whatever might come before the “fixes #NNNNN” part.

And so, when I merged the PR, GitHub went ahead and closed this issue. It should have dawned on me that it would, but I didn’t stop again to think about it, until Luke sent the ping about it.

So anyway, I’ve added it to my review checklist for the future: check the PR description for text like “partially” (or similar) preceding any “fixes #NNNNN” part, and either edit that part before merging, or go back and re-open the associated issue after merging.

@hamishwillee
Collaborator

Easily done ^^^.

@lukewarlow Just let me know what of #32731 (comment) could use my help. The most obvious thing being a release note, since the release happens today.

@lukewarlow
Contributor

A release note, as well as an experimental features entry for getHTML, would be useful.

#33492 could also do with a review on this one.

The pages aren't exhaustive but I consider them good enough for an initial entry. Then someone from the docs team can add more detail when they get time.

The compat data for these features is all done I believe.

@hamishwillee
Collaborator

hamishwillee commented May 14, 2024

Need to check what of the below is in browsers - if it isn't present yet, it doesn't get documented.

Status:

@hamishwillee
Collaborator

@mfreed7 I've tagged you with a couple of comments in the linked items.

  1. FYI only Element.setHTML() - According to the definition of "standard" adopted by the browser compat data, this is in a standard. I have recently updated this to be marked as deprecated in browser-compat-data, since it has been withdrawn from Chrome and is behind a pref everywhere else. That has not yet propagated to docs, but when it does there will be a big fat deprecated warning on top of the page. That's about all we can do until either the identifier is supported in a browser or removed from the spec.
  2. For the setHTMLUnsafe is this just called "unsafe" in contrast to the safe version that will eventually exist (point 1) which does sanitising? And again, we need this because setting innerHTML has unpredictable and imprecise behaviour for shadow roots?
  3. Not covered in this issue, but nowhere do we clearly state the implications of clonable being set. My assumption is that
    • if you copy a node that contains a shadow root and it is clonable, then everything in the node is deep copied - is that right? Are there any implications to such a copy? I mean presumably the copy is deep so the copy is completely independent (other than things like sharing a different reference to the same CSS definition)?
    • However if you set something as clonable=false and copy the node then the new copy will have nothing inside the element. It will be as though the element is empty and always was - right? In this case what are the implications? For example if I copy a web component and it has a shadow root, presumably the rest of the component will exist but now it doesn't have any internals? Won't that break things?

@zcorpan
Contributor

zcorpan commented May 14, 2024

I think "deprecated" doesn't make sense for setHTML(), since it's not in the process of being removed; it's in the process of being added. "Experimental" maybe?

@lukewarlow
Contributor

setHTML is really tricky because the version that is documented is deprecated.

I strongly believe removing the docs would be the most useful approach for developers. But policies prevent that from happening.

@lukewarlow
Contributor

For the setHTMLUnsafe is this just called "unsafe" in contrast to the safe version that will eventually exist (point 1) which does sanitising? And again, we need this because setting innerHTML has unpredictable and imprecise behaviour for shadow roots?

Yes, in essence there's a safe and an unsafe pairing for setHTML and parseHTML. Eventually they'll both behave similarly, except that setHTML enforces a "safe" baseline config, whereas the unsafe variants will allow you to do unsafe things such as creating script elements.

These new functions need to exist for two reasons. Declarative shadow DOM isn't supported by innerHTML (or any of the existing parser entry points); that's the main difference for now. But in future they need to be functions so you can pass an argument to them (the sanitizer config).

So for now our best bet is to just mention the declarative shadow DOM support.
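
A browser-only sketch of that difference; the custom element name my-card here is just a placeholder:

```javascript
const markup = `
  <my-card>
    <template shadowrootmode="open">
      <slot></slot>
    </template>
  </my-card>`;

// innerHTML leaves the template as an inert <template> element:
const a = document.createElement("div");
a.innerHTML = markup;
console.log(a.querySelector("my-card").shadowRoot); // null

// setHTMLUnsafe() runs the parser with declarative shadow DOM
// enabled, so a real shadow root is attached instead:
const b = document.createElement("div");
b.setHTMLUnsafe(markup);
console.log(b.querySelector("my-card").shadowRoot); // a ShadowRoot
```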

@mfreed7
Author

mfreed7 commented May 14, 2024

@mfreed7 I've tagged you with a couple of comments in the linked items.

  1. FYI only Element.setHTML() - According to the definition of "standard" adopted by the browser compat data, this is in a standard. I have recently updated this to be marked as deprecated in browser-compat-data, since it has been withdrawn from Chrome and is behind a pref everywhere else. That has not yet propagated to docs, but when it does there will be a big fat deprecated warning on top of the page. That's about all we can do until either the identifier is supported in a browser or removed from the spec.

So this sounds like a technical issue with MDN, right? The state of setHTML() is "still in development". It is not shipped yet, but definitely actively in progress. Hopefully to be shipped in the next year, if things go well.

  1. For the setHTMLUnsafe is this just called "unsafe" in contrast to the safe version that will eventually exist (point 1) which does sanitising? And again, we need this because setting innerHTML has unpredictable and imprecise behaviour for shadow roots?

Exactly right. "Unsafe" === "Unsanitized" in this case. It does roughly the same thing as element.innerHTML = html, except that it also parses declarative shadow DOM. It isn't that innerHTML is unpredictable or imprecise: it precisely does not parse declarative shadow DOM - <template shadowrootmode=open> will be parsed as a <template> instead of a #shadowroot.

  1. Not covered in this issue, but nowhere do we clearly state the implications of clonable being set. My assumption is that

    • if you copy a node that contains a shadow root and it is clonable, then everything in the node is deep copied - is that right? Are there any implications to such a copy? I mean presumably the copy is deep so the copy is completely independent (other than things like sharing a different reference to the same CSS definition)?

Not quite. If you clone a node that contains a shadow root that is clonable, then the contents of the shadow root will be deep cloned. But children of the node will still either be deep-cloned or shallow-cloned according to the value of the deep parameter to cloneNode(). Otherwise, behavior is identical to regular cloneNode.

  • However if you set something as clonable=false and copy the node then the new copy will have nothing inside the element. It will be as though the element is empty and always was - right? In this case what are the implications? For example if I copy a web component and it has a shadow root, presumably the rest of the component will exist but now it doesn't have any internals? Won't that break things?

Correction: "the new copy will have no shadow root". I.e. with clonable=false, the shadow root is not cloned. Your question about web components depends on how the web component was written. It is in charge of setting or not setting clonable on its shadow root. Typically web components have constructor (or connectedCallback) logic that creates the shadow root and populates it. So if clonable=false, this logic will still work to create the shadow root.
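
A browser-only sketch of the cloning behaviour described above, assuming an engine that supports the clonable option of attachShadow():

```javascript
const host = document.createElement("div");
host.append(document.createElement("p")); // an ordinary child node

const root = host.attachShadow({ mode: "open", clonable: true });
root.innerHTML = "<span>shadow content</span>";

// Shallow clone: ordinary children are not copied, but a clonable
// shadow root is always deep-cloned along with its host.
const copy = host.cloneNode(false);
console.log(copy.childElementCount);    // 0
console.log(copy.shadowRoot.innerHTML); // "<span>shadow content</span>"

// With the default clonable: false, copy.shadowRoot would be null.
```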

@mfreed7
Author

mfreed7 commented May 14, 2024

Also note that a lot of conversation about clonable seems to be happening here: #33325

@hamishwillee
Collaborator

Thanks @mfreed7

  1. FYI only Element.setHTML() - According to the definition of "standard" adopted by the browser compat data this is in a standard. I have recently updated this to be marked as deprecated ...

So this sounds like a technical issue with MDN, right? The state of setHTML() is "still in development". It is not shipped yet, but definitely actively in progress. Hopefully to be shipped in the next year, if things go well.

The thing is that setHTML() DID ship. So there are browsers out there that include it. The rule for compatibility data is that once something ships, it remains part of the web platform for 2 years, at which point it can be removed from the docs. I'll see if we can make an exception for this case.

  • if you copy a node that contains a shadow root and it is clonable, then everything in the node is deep copied - is that right? Are there any implications to such a copy? I mean presumably the copy is deep so the copy is completely independent (other than things like sharing a different reference to the same CSS definition)?

Not quite. If you clone a node that contains a shadow root that is clonable, then the contents of the shadow root will be deep cloned. But children of the node will still either be deep-cloned or shallow-cloned according to the value of the deep parameter to cloneNode(). Otherwise, behavior is identical to regular cloneNode.

So to be clear on our terminology here,

  • a particular node can have children, which can be other nodes.
  • a node can contain one, and only one shadow root (but that shadow root may contain nodes that themselves contain shadow roots)

So what you are saying is that the ShadowRoot belonging to a node is always deep cloned if it is cloned at all, but other child nodes are cloned according to the value of deep?

  • However if you set something as clonable=false and copy the node then the new copy will have nothing inside the element. It will be as though the element is empty and always was - right? In this case what are the implications? For example if I copy a web component and it has a shadow root, presumably the rest of the component will exist but now it doesn't have any internals? Won't that break things?

Correction: "the new copy will have no shadow root". I.e. with clonable=false, the shadow root is not cloned. Your question about web components depends on how the web component was written. It is in charge of setting or not setting clonable on its shadow root. Typically web components have constructor (or connectedCallback) logic that creates the shadow root and populates it. So if clonable=false, this logic will still work to create the shadow root.

Thanks. This seems to match my new mental model.
When you're copying a web component it isn't just a matter of the parser getting to some DOM object and copying all properties in some kind of deep copy. The new object is constructed at some point (I think when the web component, or the node it is in, is added to a document). At whatever point that is, connectedCallback() runs and creates the shadow root.
So cloning a web component effectively restarts creation of the element.

Could be completely wrong. I wrote a similar comment to this in the issue that is now merged. If it seems wrong let me know.
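
That model can be sketched in browser-only code, using a hypothetical my-widget component:

```javascript
// Because the shadow root is (re)created in connectedCallback, a
// clone made from a host with clonable: false still gets its
// internals back the moment the clone is inserted into a document.
class MyWidget extends HTMLElement {
  connectedCallback() {
    // Guard against a root that already exists (e.g. a clonable
    // declarative root, or a second connection).
    if (!this.shadowRoot) {
      const root = this.attachShadow({ mode: "open" });
      root.textContent = "hello from the shadow root";
    }
  }
}
customElements.define("my-widget", MyWidget);
```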

@hamishwillee
Collaborator

I have marked this as done. Note, however, that I hope to also clean up setHTML() via BCD deletion. That would be a separate job, though.

@Josh-Cena
Member

Note, however, that I hope to also clean up setHTML() via BCD deletion. That would be a separate job, though.

@hamishwillee Do you still want to track that in this issue or in a separate one?

@hamishwillee
Collaborator

I still want to track this here please.

@hamishwillee
Collaborator

I am closing this. I've kicked off the removal of the sanitizer API stuff in #35238.
