
add comprehension support to @threads #209

Open
StefanKarpinski opened this issue Oct 12, 2011 · 9 comments
Labels
macros, multithreading (Base.Threads and related functionality), parallelism (Parallel or distributed computation)

Comments

@StefanKarpinski
Member

For example:

@parallel [ f(i) | i=1:n ]
@ajdecon
Contributor

ajdecon commented Feb 27, 2012

Is this particularly different from pmap(f, {i | i=1:10})?

I.e., would it make sense to implement this kind of comprehension as just some syntactic sugar on pmap, or are you envisioning more to it than that?
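For reference, a modern spelling of that pmap call would look like the sketch below (the { } comprehension syntax quoted above was removed from the language long ago). Note that pmap falls back to running serially on the master process when no workers have been added; f here is just a hypothetical example function.

```julia
# Hedged sketch of the modern equivalent of pmap(f, {i | i=1:10}).
# With no workers added, pmap simply runs f on the master process.
using Distributed

f(i) = i^2                 # hypothetical example function
result = pmap(f, 1:10)     # => [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```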

@JeffBezanson
Member

The syntactic sugar part could be @parallel [ f(i) | i = 1:n ].

But, we want to handle multiple dimensions, and it would be nice for it to return a distributed array. It would also be nice for each processor to compute points evenly spaced over the whole range, since this usually gives better load balancing. Then you might want to be able to move those results into a simpler distribution.
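The evenly-spaced assignment mentioned above can be sketched as a strided partition of the index range; strided_indices is a hypothetical helper for illustration, not anything in Base.

```julia
# Hypothetical helper: give each of nw workers every nw-th index rather
# than a contiguous block. When cost varies smoothly with the index,
# this usually balances load better than contiguous chunks.
strided_indices(n::Integer, nw::Integer) = [w:nw:n for w in 1:nw]

parts = strided_indices(10, 3)  # => [1:3:10, 2:3:10, 3:3:10]
```

Together these ranges cover 1:10 exactly once, so no index is computed twice or skipped.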

@JeffBezanson
Member

Now implemented on the jb/multi branch!

@StefanKarpinski
Member Author

Sweet. Is this an indication that the new DArray model is more productive?

@JeffBezanson
Member

Yes, a bit, but the only real difference in the interface is that the core operation is constructing a piece of an array from its indexes. The old DArray constructor could have been changed to work that way, but many other changes are needed too (distributing in all dimensions, and doing most operations lazily).

I'm also not sure what to do about array references inside parallel comprehensions. What you want is for A[i,j] to be transformed to A_chunk[i-ioffs,j-joffs] where A_chunk is the part of A that each processor needs.
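The index rewrite described above can be made concrete with a small illustration; A_chunk, ioffs, and joffs are the illustrative names from the comment, not part of any actual implementation.

```julia
# Illustration of the rewrite A[i,j] -> A_chunk[i-ioffs, j-joffs]:
# a worker holding only the lower-right block of A sees global indices
# shifted by its offsets into the array.
A = reshape(1:16, 4, 4)
ioffs, joffs = 2, 2                       # this worker owns rows/cols 3:4
A_chunk = A[ioffs+1:end, joffs+1:end]     # the local piece of A
i, j = 3, 4                               # global indices
A[i, j] == A_chunk[i - ioffs, j - joffs]  # => true
```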

@JeffBezanson
Member

The basic thing is now on master, but it does not handle array references yet. Doing that rewrite may be too magical, but we should probably do it.

@amitmurthy
Contributor

@JeffBezanson, it seems like you had already implemented this, which I have moved under @darray in #5512.

What do you mean by "does not handle array references yet" ?

@yuyichao yuyichao removed the help wanted Indicates that a maintainer wants help on an issue or pull request label Apr 11, 2017
StefanKarpinski pushed a commit that referenced this issue Feb 8, 2018
KristofferC added a commit that referenced this issue Mar 16, 2018
(cherry picked from commit 23af62a65487f626c7e5585d0fb4dfe7c4c9ce67)
@StefanKarpinski
Member Author

This would be spelled @threads [ f(i) for i in 1:n ] these days. Not sure if we need to keep an issue open for it, but I don't want to deprive someone of the fun of closing a three-digit issue when they implement it 😁
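A minimal sketch of what @threads [ f(i) for i in 1:n ] might lower to today: preallocate the output and fill it in parallel with Threads.@threads. The name tmap is hypothetical, and the Any element type is a simplification; a real implementation would infer the eltype.

```julia
using Base.Threads

# Hypothetical tmap: a threaded comprehension over a vector.
function tmap(f, xs::AbstractVector)
    ys = Vector{Any}(undef, length(xs))  # untyped for simplicity
    @threads for k in eachindex(xs)
        ys[k] = f(xs[k])                 # each iteration writes a distinct slot
    end
    ys
end

tmap(i -> i^2, 1:5)  # => Any[1, 4, 9, 16, 25]
```

Writing each result to a distinct slot avoids any need for locking, which is what makes the comprehension shape a natural fit for @threads.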

@tkf
Member

tkf commented May 3, 2020

FYI,

using Transducers
tcollect(2x for x in 1:10)

already does it in parallel. Filtering and flattening are also supported.

LilithHafner pushed a commit to LilithHafner/julia that referenced this issue Oct 11, 2021
@mbauman mbauman changed the title write parallel comprehension operator add comprehension support to @threads Feb 8, 2022
@mbauman mbauman added multithreading Base.Threads and related functionality macros @macros labels Feb 8, 2022
Keno pushed a commit that referenced this issue Oct 9, 2023