
Julep/Very WIP - Heap allocated immutable arrays and compiler support #31630

Closed

Conversation

@Keno (Member) commented Apr 5, 2019

This is part of a larger set of overhauls I'd like to do in the 2.0 timeframe (along with #21912 and other things along these lines). As such this is more of a straw-man implementation to play with various ideas. I doubt any of this code will get merged as is, but should provide a place for experimentation and we may start picking off good ideas from here.

The basic concept here is that I think we're missing a heap-allocated immutable array. We have Array (mutable and dynamically sized), StaticArray (immutable and statically sized) and MArray (mutable and statically sized), but we don't really have an immutable dynamically sized array. This PR adds that.

In my ideal world, most functions would return immutable arrays. In a lot of code, it is fairly rare to require semantically mutable arrays at the highest level of the API (of course operations are internally often implemented as mutating operations) and even in a good chunk of the cases that make use of them, they are used as a performance optimization rather than a semantic necessity.

On the other hand, having an immutability guarantee can be quite useful. For example, it would solve a lot of the performance problems around aliasing (the reason LLVM can't vectorize in a lot of cases is that it doesn't know that the output array doesn't overlap the input array - if the input array is immutable that obviously can't happen).

Immutability is also nice for higher level applications. Since views and slices are the same thing in immutable arrays, operations that would semantically be required to make copies on mutable arrays (say an AD system taking captures during the forward pass), can use views instead.

Now, the problem here of course is that sometimes you do want mutation, particularly during construction of various objects (i.e. you construct the object once by setting something to every memory location, but then it's immutable afterwards). This PR introduces the freeze function, which takes a mutable array and returns an immutable array with the same memory contents. Semantically this function is a copy, but the idea is that the compiler will be able to elide this copy in most circumstances, thus allowing immutable arrays to be constructed using mutating operations without overhead. Similarly, there is the melt function which does the opposite. Once this infrastructure is mature, it should be trivial to get either the immutable or the mutable version in a zero-overhead (after compiler optimizations) manner of any array function just by adding the appropriate freeze/melt function. The 2.0 goal would then be to actually make most array operations return the immutable versions of the array.
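To make the intended workflow concrete, here is a minimal model of the freeze/melt semantics. The `ImmutableVector` wrapper and the copy-based definitions are illustrative stand-ins only, not the PR's implementation (which adds real builtins and relies on the compiler to elide the copies):

```julia
# Minimal model of the proposed freeze/melt semantics (illustrative only).
struct ImmutableVector{T} <: AbstractVector{T}
    data::Vector{T}   # never handed out for mutation
end
Base.size(v::ImmutableVector) = size(v.data)
Base.getindex(v::ImmutableVector, i::Int) = v.data[i]
# no setindex! method: writing to an ImmutableVector is a MethodError

freeze(a::Vector) = ImmutableVector(copy(a))  # semantically always a copy
melt(v::ImmutableVector) = copy(v.data)       # back to a mutable Vector

a = Vector{Float64}(undef, 5)
for i in 1:5
    a[i] = i            # construct by mutation...
end
f = freeze(a)           # ...then freeze; the compiler would elide this copy
m = melt(f)
m[1] = 42.0             # mutating the thawed copy leaves f unchanged
```

The point of the proposal is that when `a` is dead after the `freeze`, the copy can be optimized away, so the mutating-construction style costs nothing.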

Another motivation here is to make it easier to write code that is generic over mutability, in order to support things like XLA and other optimizing linear algebra compilers that operate on immutable tensors as their primitives. By having a well defined way to talk about mutability in the standard library, it should be easier to plug in those external implementations seamlessly.

@Keno added the `speculative` (Whether the change will be implemented is speculative) and `design` (Design of APIs or of the language itself) labels on Apr 5, 2019
@andyferris (Member) commented Apr 6, 2019

Yes! I'm really glad this is being looked at, @Keno.

I have been thinking of/wanting a freeze function for quite a while now. A melt/thaw function makes a lot of sense. I agree that the copy semantic is probably going to be the easiest to work with, assuming we gain the mentioned optimizations based on the compiler's ability to track references (and we already have some of that). I also buy the argument that if you are doing functional operations like map (or linear algebra on arrays), then it's relatively rare that you want to mutate the output - you're likely to perform more functional operations or use/read the result directly, so it's OK to default to the immutable output and let users melt them as necessary.

I think an immutable array would be a wonderful improvement - but I think it is only the beginning, and we can take the idea further. There are a bunch of related things that I feel an interface like this can solve. In no particular order:

  1. Can I freeze a mutable struct so no-one can mutate it later?
  2. Conversely, if I want the mutable version of a struct would I use melt to get that? How might that fit in with WIP: Make mutating immutables easier #21912?
  3. Can I freeze the keys and values of a Dict?
  4. Can I freeze just the indices of a Dict? This alone helps a lot with functional operations, such as making mapping of dictionary values (while preserving the keys) faster, since the output and input can share the same keys and hash-related data.
  5. Can I freeze just the indices of an Array - that is, make it non-resizeable? In view(A, inds) it would be a useful guarantee if inds were immutable and A were non-resizable. If I don't need to resize an array, then I could directly reference an underlying Buffer - no need for a pointer to pointers to data. Amongst other things, I think the FFI would be improved by a more direct Buffer type, and it would be nice to have more of Array implemented in Julia code.
  6. Speculatively, could these (immutable and fixed-size arrays) eventually replace SArray and MArray when the compiler can infer the size, e.g. as a consequence of constant propagation?

I think there are massive opportunities here. :) Are there ambitions for these kind of broad changes for Julia v2.0?
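As a strawman for question 5 above, freezing just the indices of an array could look like a wrapper that allows element mutation but defines no resizing methods. All names here are hypothetical, not anything from the PR:

```julia
# Strawman for "frozen indices": fixed size, but mutable elements.
struct FixedSizeVector{T} <: AbstractVector{T}
    data::Vector{T}
end
Base.size(v::FixedSizeVector) = size(v.data)
Base.getindex(v::FixedSizeVector, i::Int) = v.data[i]
Base.setindex!(v::FixedSizeVector, x, i::Int) = (v.data[i] = x; v)
# deliberately no push!/append!/resize! methods: the length is a guarantee,
# so the storage could be a flat Buffer with no pointer-to-pointer indirection

v = FixedSizeVector([1, 2, 3])
v[1] = 10          # element mutation is fine
# push!(v, 4)      # would be a MethodError: the shape is frozen
```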

@andyferris added the `julep` (Julia Enhancement Proposal) label on Apr 6, 2019

Inline review comment on the diff:

```
JL_CALLABLE(jl_f_arraymelt)
{
JL_NARGSV(arrayfreeze, 1);
```

`arraymelt`?


Inline review comment on the diff:

```
JL_CALLABLE(jl_f_mutating_arrayfreeze)
{
JL_NARGSV(arrayfreeze, 1);
```

`mutating_arrayfreeze`?

Inline review comment on the diff:

```
return (jl_value_t*)na;
}

JL_CALLABLE(jl_f_mutating_arrayfreeze)
```

At first, I was thrown by what this name meant. It makes sense in the end; in this case you are sort-of mutating the type of `a` but not the value/data... though I did wonder if it would be better language to describe this as "taking" or "stealing" `a`, or something like that.

@StefanKarpinski (Member) commented Apr 6, 2019

The most trivial possible comment, but I think that thaw is a much better term than melt. (Note that melt / cast is common terminology in data reshaping thanks to R's excellent reshape2 package).

How does thawing/melting work? If the compiler can prove that there's only one reference to an object, I can see it but what about the general case? What if there are multiple references to the same object? Does the general case require doing a full GC to figure out if there's only one reference?

@andyferris (Member):

@StefanKarpinski My interpretation was in the general case you would make a copy, which you may then mutate without affecting the other readers of the original immutable.

@chethega (Contributor) commented Apr 7, 2019

Ideally, we would check the size of the buffer and use something like mmap / MAP_PRIVATE for freezing/melting large arrays. Thus, no data would be copied until the melted/pre-frozen array gets modified.

Then the price for the compiler failing to prove safety of re-using the buffer during melt/freeze would be a syscall, not a copy (obviously only good for non-tiny arrays). In the common case that only part of the array gets modified after melting, or only part of the parent array gets modified after freezing, we would only copy (and allocate) a handful of pages, instead of a giant object.

The resulting freeze/melt would be an absolutely superb API for copy-on-write structures. The MMU is an awesome piece of silicon that is criminally underused in day-to-day programming.

@kmsquire (Member) commented Apr 8, 2019

  3. Can I freeze the keys and values of a Dict?

Tangentially related, the new LittleDict in OrderedCollections.jl does this.

@StefanKarpinski (Member):

@chethega: that does seem like a very cool implementation strategy if it can be made to work well.

@Keno (Member, Author) commented Apr 8, 2019

  1. Can I freeze a mutable struct so no-one can mutate it later?

Yes, I've been thinking about how to make this work. I think there's something here, but it probably needs to be more complicated than simply freezing a mutable struct.

  2. Conversely, if I want the mutable version of a struct, would I use melt to get that? How might that fit in with WIP: Make mutating immutables easier #21912?

Don't know yet, that's part of the reason to make these PRs. One could imagine that the default melt wraps it in a Ref cell that forwards get/setproperty and uses #21912 as the implementation.

  3. Can I freeze the keys and values of a Dict?

Yes, ideally.

  4. Can I freeze just the indices of a Dict? This alone helps a lot with functional operations, such as making mapping of dictionary values (while preserving the keys) faster, since the output and input can share the same keys and hash-related data.
  5. Can I freeze just the indices of an Array - that is, make it non-resizeable? In view(A, inds) it would be a useful guarantee if inds were immutable and A were non-resizable. If I don't need to resize an array, then I could directly reference an underlying Buffer - no need for a pointer to pointers to data. Amongst other things, I think the FFI would be improved by a more direct Buffer type, and it would be nice to have more of Array implemented in Julia code.

I haven't really thought through how this API extends to non-array collections yet, so I don't yet have an answer to these questions. I think it'll depend on the answer to question 1. Ideally these various combinations would just fall out.

  6. Speculatively, could these (immutable and fixed-size arrays) eventually replace SArray and MArray when the compiler can infer the size, e.g. as a consequence of constant propagation?

Maybe, but I'm not entirely sure. Part of the appeal of SArray is that you can specialize for every size of the array and do size-specific optimizations. That could potentially be replaced by constant prop and specialization hints, but for me it's not currently in scope to do anything about this. I do plan some general infrastructure improvements to make SArray perform better.

I think there are massive opportunities here. :) Are there ambitions for these kind of broad changes for Julia v2.0?

Yes

@Keno (Member, Author) commented Apr 8, 2019

Ideally, we would check the size of the buffer and use something like mmap / MAP_PRIVATE for freezing/melting large arrays. Thus, no data would be copied until the melted/pre-frozen array gets modified.

Yes, runtime support for using the MMU to lazily perform the copy is very much in scope.

@Keno (Member, Author) commented Apr 8, 2019

How does thawing/melting work?

Yes, as @andyferris said, you get a copy (or something that semantically behaves like one) in either direction if the compiler can't prove that there's currently only a single reference. One could imagine in the future having a mode of the language that enforces that property by disallowing certain values from escaping or doing static analysis to that extent (a poor man's borrow checker), but that's not really necessary to design for the current proposal.

@chethega (Contributor) commented Apr 8, 2019

So, I did some quick googling and thinking on how to possibly use the MMU for copy-on-write freeze/thaw. Unfortunately, it does not look pretty. At least the linux kernel appears to fail to provide us with the necessary tools.

Refs: the manpages for memfd_create, mmap, mremap, mprotect, madvise, vmsplice, sendfile64. The problem is that the mremap syscall does not accept the MAP_PRIVATE flag for copy-on-write.

One way that works on most operating systems is the following ugly kludge: large buffer allocations don't go to malloc/free, but instead to mmap on memfd_create-generated files. This also neatly solves aligned calloc and probably the issue of actually freeing large buffers, as well as allowing us to mremap in order to grow large vectors without copying any memory. This is an optimization for large buffers only (probably >64kb or >2MB), and one might want to play with HugeTLB options (since we only deal with large buffers, 4k pagesize is stupid). On the downside, we need our own allocator that tracks used/free memory and mappings (on large-page granularity) in the giant file from memfd_create.

Now we have five operations: allocate, freeze, thaw, free, segfault. I already talked about allocation (grab free pages from the file or grow the file, mmap shared). For freezing, we look up which pages the buffer's virtual memory is mapped to and mark them non-writeable via mprotect; next, we mmap some new virtual memory onto the backing pages, also read-only. For thawing, we check whether any other virtual memory is mapped onto the pages; if yes, we do nothing, if no, we mprotect the mapping as writeable. For freeing, we check (via refcounting) how many virtual mappings point at the pages backing our buffer. If none, we return the memory to the kernel (either via madvise or via munmap). If after the free there is only a single non-frozen holder of a reference, we mprotect the mapping as writeable.

On segfault, we need to check whether we attempted to write to one of our memfd_create-backed write-protected pages through an unfrozen buffer. In that case, we mmap the virtual memory to a new page, and fill that with a copy.

It is unfortunate that the linux kernel apparently does not permit us to create complex mappings and use its built-in pagefault handler for copy-on-write (as used by mmap/MAP_PRIVATE or fork), so we instead need to do the copy in user-space and duplicate the kernel's bookkeeping. The issue is that many of the virtual memory operations require file descriptors, but, given some range of virtual memory, I know of no sensible zero-copy way of getting a file-descriptor/offset for it (maybe vmsplice and sendfile64?). However, we need such a thing for mappings that have been faulted in on write, via cow. I heard that osx and freebsd have better support for that kind of mmu shenanigans.

A more elegant approach is maybe to mmap into /proc/pid/mem or the like, at least if this turns out to work and be fast.

Or is there another way of offloading all this to the kernel?

@Keno (Member, Author) commented Apr 9, 2019

Your analysis is basically correct. Dual-mapped memory regions are not super convenient on linux. The one mechanism you missed is userfaultfd. If we are to get new features for memory-map modification, they'll likely go there. Note that we already have a dual-mapping memory manager in the JIT memory manager.

A more elegant approach is maybe to mmap into /proc/pid/mem or the like, at least if this turns out to work and be fast.

/proc/pid/mem is not mmap'able.

@Keno (Member, Author) commented Apr 9, 2019

In particular something like https://lwn.net/ml/linux-api/20180328101729.GB1743@rapoport-lnx/ might help if it ever gets added.

@c42f (Member) commented May 17, 2019

In my ideal world, most functions would return immutable arrays

This brings to mind the following speculation: I've long wanted to claim the array literal syntax [1,2,3] as a constructor for something as functionally efficient as an SVector.

These kind of short array literals are quite pervasive in certain types of code. It would be really nice if this minimalistic syntax also produced objects with minimal runtime overhead.

@StefanKarpinski (Member):

I've long wanted to claim the array literal syntax [1,2,3] as a constructor for something as functionally efficient as an SVector.

You mean (1,2,3)? 😬

@ChrisRackauckas (Member) commented May 17, 2019

I've always wondered why we don't just give Tuples a full abstract array interface to make them the real SVector. Would there be any weird side effects?

@AzamatB (Contributor) commented May 17, 2019

Continuing this thought, would it be a bad idea if

(1 2
 3 4)

constructed an equivalent of SMatrix?

@quinnj (Member) commented May 17, 2019

I've long wanted to claim the array literal syntax [1,2,3] as a constructor for something as functionally efficient as an SVector.

You mean (1,2,3)? 😬

Except we really want something like #31681; otherwise there are some poor corner cases of the type system that get angry w/ things like NTuple{10_000, Int}.

@StefanKarpinski (Member):

By the time you've got 10_000 elements, you're not really dealing with small literal arrays anymore though, so it's unclear if one really needs the [1,2,3] syntax for this.

@JeffBezanson (Member):

I've always wondered why we don't just give Tuples a full abstract array interface to make them the real SVector. Would there be any weird side effects?

I think the main issue is that they have a different type-level structure; if they are subtypes of AbstractVector{T}, what's T? Ideally maybe only homogeneous tuples would be AbstractVector{T}s, but we don't have the ability to do that.
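A small illustration of the type-level mismatch (plain Julia, nothing from this PR): only homogeneous tuples have an obvious candidate for the `T` in `AbstractVector{T}`.

```julia
# A homogeneous tuple has an obvious element type...
t1 = (1, 2, 3)        # NTuple{3, Int}: would naturally be an AbstractVector{Int}
# ...but a heterogeneous one does not: which AbstractVector{T} would this be?
t2 = (1, 2.0, "x")    # Tuple{Int64, Float64, String}: no single T

@assert t1 isa NTuple{3, Int}
@assert !(t2 isa NTuple)   # NTuple only matches homogeneous tuples
```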

@c42f (Member) commented May 18, 2019

You mean (1,2,3)? 😬

If we could make it work, I'd take it!

But the behaviors of Tuple and AbstractVector are rather different:

  • Homogeneous vs heterogeneous eltype
  • A linear algebra type vs the type of function argument lists
  • <: AbstractVector vs <: Tuple (which is covariant to boot)
  • What Jeff said, etc.

Which leads to wondering whether array literals must be mutable and, if not, whether size information could be added to them.

Side note... I once implemented unicode brackets as an experiment for small array literals ⟨1,2,3⟩ (⟨ could be typed with \<TAB). I suppose that could still make sense if it was in Base. Obviously with an ascii equivalent.

@JeffBezanson (Member):

Oh I would love to parse all those brackets if we could decide how to handle them. Making literal arrays immutable also makes sense to me.

@timholy (Member) commented May 18, 2019

Is {} free now?

```
julia> {1,2,3}
ERROR: syntax: { } vector syntax is discontinued
```

@chethega (Contributor) commented May 18, 2019

Making literal arrays immutable also makes sense to me.

I object. Having x=1; A = [x]; and x=1; A = [1]; mean radically different things is a recipe for chaos, and it is a common idiom to initialize a vector vec = [x] for subsequent push!, in order to allow type inference to figure out the types.

If we want better syntax for literal static vectors, then something like SA[1, 2, 3] or MA[1, 2, 3] is close to nonstandard string literals, only two extra characters, also works like SA[1, x, 2] and can be made typestable without macros (by extending getindex(::Type{SA}, args...) and Base.typed_hvcat(::Type{SA}, args...)). The only advisable Base change would be to document Base.typed_hvcat, thus signaling stability of the API for packages like StaticArrays.
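The `SA[1, 2, 3]` spelling can be sketched with a tag type whose `getindex` builds the static vector; `SVec` and `SA` below are illustrative stand-ins for StaticArrays' actual types:

```julia
# Sketch: array-literal-like syntax via getindex on a tag type, no macros.
struct SVec{N,T} <: AbstractVector{T}
    data::NTuple{N,T}
end
Base.size(::SVec{N}) where {N} = (N,)
Base.getindex(v::SVec, i::Int) = v.data[i]

struct SA end                     # tag type used only for the bracket syntax
Base.getindex(::Type{SA}, xs...) = SVec(promote(xs...))

v = SA[1, 2, 3]   # lowers to getindex(SA, 1, 2, 3); type-stable, no macro
```

`SA[1 2; 3 4]` would additionally dispatch through `Base.typed_hvcat(::Type{SA}, ...)`, as the comment above notes.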

@rfourquet (Member):

Having x=1; A = [x]; and x=1; A = [1]; mean radically different things

I don't think this was the idea, in both cases you have an array literal.

@Keno (Member, Author) commented May 18, 2019

I do think that having [] create an immutable array makes sense in the fullness of time once this works well, but that's obviously a 2.0 discussion.

@andyferris (Member):

@chethega I always liked [x, ...] as potential syntax for an array you plan to push! to afterwards (agree that it's a common enough case to have syntactic support, but the default doesn't necessarily need to be implicitly resizable/mutable).

@jtrakk commented Jun 27, 2021

Btw, this StackOverflow post "Allocating copy on write memory within a process" has some unsuccessful attempts that may be interesting.


On the idea of a lightweight borrow-checker alternative, "deny capabilities" seem powerful and easier to understand/manage. See the paper, video, and further material introducing a handful of capabilities and alias types for high-performance safe concurrency (and their implementation in the Pony language).



Another interesting development is "Separating Permissions from Data in Rust", ICFP 2021.

Keno added a commit that referenced this pull request Aug 3, 2021
This rebases #31630 with several fixes and modifications.
After #31630, we had originally decided to hold off on said
PR in favor of implementing either more efficient layouts for
tuples or some sort of variable-sized struct type. However, in
the two years since, neither of those have happened (I had a go
at improving tuples and made some progress, but there is much
still to be done there). In the meantime, all across the package
ecosystem, we've seen an increasing creep of pre-allocation and
mutating operations, primarily caused by our lack of sufficiently
powerful immutable array abstractions and array optimizations.

This works fine for the individual packages in question, but it
causes a fair bit of trouble when trying to compose these packages
with transformation passes such as AD or domain specific optimizations,
since many of those passes do not play well with mutation. More
generally, we would like to avoid people needing to pierce
abstractions for performance reasons.

Given these developments, I think it's getting quite important
that we start to seriously look at arrays and try to provide
performant and well-optimized arrays in the language. More
importantly, I think this is somewhat independent from the
actual implementation details. To be sure, it would be nice
to move more of the array implementation into Julia by making
use of one of the abovementioned language features, but that
is a bit of an orthogonal concern and not absolutely required.

This PR provides an `ImmutableArray` type that is identical
in functionality and implementation to `Array`, except that
it is immutable. Two new intrinsics `Core.arrayfreeze` and
`Core.arraythaw` are provided which are semantically copies
and turn a mutable array into an immutable array and vice
versa.

In the original PR, I additionally provided generic functions
`freeze` and `thaw` that would simply forward to these
intrinsics. However, said generic functions have been omitted
from this PR in favor of simply using constructors to go
between mutable and immutable arrays at the high level.
Generic `freeze`/`thaw` functions can always be added later,
once we have a more complete picture of how these functions
would work on non-Array datatypes.

Some basic compiler support is provided to elide these copies
when the compiler can prove that the original object is
dead after the copy. For instance, in the following example:
```
function simple()
    a = Vector{Float64}(undef, 5)
    for i = 1:5
        a[i] = i
    end
    ImmutableArray(a)
end
```

the compiler will recognize that the array `a` is dead after
its use in `ImmutableArray` and the optimized implementation
will simply rewrite the type tag in the originally allocated
array to now mark it as immutable. It should be pointed out
however, that *semantically* there is still no mutation of the
original array, this is simply an optimization.

At the moment this compiler transform is rather limited, since
the analysis requires escape information in order to compute
whether or not the copy may be elided. However, more complete
escape analysis is being worked on at the moment, so hopefully
this analysis should become more powerful in the very near future.

I would like to get this cleaned up and merged reasonably quickly,
and then crowdsource some improvements to the Array APIs more
generally. There are still a number of APIs that are quite bound
to the notion of mutable `Array`s. StaticArrays and other packages
have been inventing conventions for how to generalize those, but
we should form a view in Base what those APIs should look like and
harmonize them. Having the `ImmutableArray` in Base should help
with that.
@Keno Keno mentioned this pull request Aug 3, 2021
Keno added a commit that referenced this pull request Aug 4, 2021
@tkf (Member) commented Aug 8, 2021

Replying to #41777 (comment): the idea I explored in Mutabilities.jl was (1) how we can expose ownership to the user so that we can write memory-optimized code behind a non-inplace surface API, and (2) whether manually doing so right now is beneficial without compiler support. The answer to the second question was "kind of, but it's cumbersome," so I keep using the more direct linear/affine update API provided via BangBang.jl. Mutabilities.jl is essentially a copy of C++'s move (... I think; not that I know C++). There might be better interfaces, but I think it's still a valid design direction for exposing linear- and affine-update semantics to user-defined functions. Though maybe a more fundamental question is whether it should be exposed at the language level and, if so, whether it should be an "unsafe" or a checked API.

@AriMKatz:
Some mutable value semantic stuff from the s4tf team:

https://arxiv.org/abs/2106.12678
https://github.com/kyouko-taiga/mvs-calculus

From @tkf:

Yeah, interesting. I guess their main observation is

In the purest form of mutable value semantics, references are second-class: they are only created implicitly, at function boundaries, and cannot be stored in variables or object fields. Hence, variables can never share mutable state.

They built a simple language that helps the compiler avoid alias analysis. I wonder what it feels like to program with this constraint. Will users find it as constraining as a functional language, or are second-class references enough to ease that mental barrier? (I feel like I'd be OK with this, but I've probably played with functional programming too much.)

@vtjnash (Member) commented Oct 2, 2021

Rebased and moved to #41777

Labels: `design` (Design of APIs or of the language itself), `julep` (Julia Enhancement Proposal), `speculative` (Whether the change will be implemented is speculative)