
Banish all locks #1053

Merged — 4 commits, merged into LMMS:master on Aug 16, 2014
Conversation

@diizy (Contributor) commented Aug 3, 2014

More removal of global lock calls: mostly replacing them with local mutexes (eh... mutices?) which don't block the core rendering function.

A smart (I hope) thing I did was adding "processing locks" to tracks and playhandles. This way, when tracks are busy doing something like loading a plugin, they don't have to block the mixer thread: they just lock their "processing lock", which makes the play function skip processing for those tracks. Playhandles don't skip; they block, but only their own rendering thread.
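[Editor's note] The "processing lock" idea above can be sketched with a try-lock: the mixer thread attempts the lock and skips the track's period if it fails, while the loader thread holds the lock for the duration of the long operation. This is a minimal illustrative sketch, not LMMS's actual classes; all names (`Track`, `play`, `lockProcessing`) are hypothetical.

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Hypothetical sketch of a track with a "processing lock".
// Names are illustrative, not LMMS's real API.
class Track
{
public:
	// Called from the mixer thread every period: if the track is busy
	// (e.g. loading a plugin), skip this period instead of blocking.
	bool play()
	{
		std::unique_lock<std::mutex> guard( m_processingLock, std::try_to_lock );
		if( !guard.owns_lock() )
		{
			return false; // track is busy -> skip processing this period
		}
		// ... render the track's audio here ...
		return true;
	}

	// Called from the GUI/loader thread around long operations.
	void lockProcessing()   { m_processingLock.lock(); }
	void unlockProcessing() { m_processingLock.unlock(); }

private:
	std::mutex m_processingLock;
};

// Demonstration helper: call play() from another thread (standing in
// for the mixer thread) while the processing lock is held.
bool playWhileLocked( Track & t )
{
	t.lockProcessing();
	bool played = true;
	std::thread mixer( [&]{ played = t.play(); } );
	mixer.join();
	t.unlockProcessing();
	return played; // false: the period was skipped, nobody blocked
}
```

The key property is that the mixer thread never sleeps on the lock; the worst case for a busy track is one period of silence.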

I think I can even notice a performance boost, although I'm not sure about that: I haven't benchmarked anything, but some of my very CPU-heavy projects seem to run a bit smoother on this branch than on current master.

@tresf, @softrabbit, @grejppi, some testing of this branch would be nice before merging it.

@diizy (Contributor, Author) commented Aug 4, 2014

There seems to be something wrong, but I'm not sure whether it's caused by this branch specifically or by something earlier... it happens mostly with live notes and causes segfaults with a corrupted stack. Does someone want to figure out why this happens?

@tresf (Member) commented Aug 4, 2014

Some banish-all-locks Windows builds for the non-*nix testers out there:

Windows 32-bit:
https://drive.google.com/file/d/0B4PpvIwHd1U0WHFMSjJJczgwTGs/edit?usp=sharing

Windows 64-bit:
https://drive.google.com/file/d/0B4PpvIwHd1U0QlRVWDh5SjliM2c/edit?usp=sharing

Vesa, I may not have time to test this for a little while. Feel free to bump it in a few days if you receive no testing.

@diizy (Contributor, Author) commented Aug 16, 2014

OK, I think I'm going to merge this: it fixes many instability issues in current master, and even if it introduces some new issues in exchange (probably), that's still a good trade, as we have to keep moving forward.

diizy added a commit that referenced this pull request Aug 16, 2014
@diizy diizy merged commit 128a3ec into LMMS:master Aug 16, 2014
@tobydox (Member) commented Aug 16, 2014

Looks good!

@diizy (Contributor, Author) commented Aug 16, 2014

Oh, btw @tobydox: I removed the #ifdef __GNUC__ clauses from InstrumentSoundShaping because I don't think anyone uses anything other than GCC or Clang to compile LMMS these days, and Clang should also support the C99 variable-length arrays that we need here.

(Incidentally, we could probably also remove all the Intel compiler stuff from the codebase, to clean things up a bit.)

@tobydox (Member) commented Aug 16, 2014

Yeah, I noticed, and I think this should be fine. If not, we could build a wrapper around it which allocates the data on the stack or the heap depending on the compiler. When using the stack we should still keep in mind that the available stack memory can be very limited on some platforms, so it's not a good idea to allocate several megabytes there (small buffers of a few KB are no problem, I think). Otherwise this can cause bugs like #543, which was fixed via 9c27956.
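[Editor's note] The wrapper idea could also be made size-based rather than compiler-based: keep small buffers inside the object (so they live on the stack when the object does) and fall back to the heap above a threshold. A minimal hypothetical sketch, with an illustrative name (`SmallBuffer`) and an arbitrary 4 KB threshold:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical small-buffer wrapper: buffers up to StackCapacity
// elements are stored inline (stack memory when the object is a
// local), larger ones go to the heap. Illustrative sketch only.
template<typename T, std::size_t StackCapacity = 4096 / sizeof( T )>
class SmallBuffer
{
public:
	explicit SmallBuffer( std::size_t n ) :
		m_size( n )
	{
		if( n > StackCapacity )
		{
			m_heap.resize( n );      // large: single heap allocation
			m_data = m_heap.data();
		}
		else
		{
			m_data = m_stack;        // small: inline storage, no allocation
		}
	}

	T * data() { return m_data; }
	std::size_t size() const { return m_size; }
	bool onHeap() const { return m_size > StackCapacity; }

private:
	T m_stack[StackCapacity];  // note: always occupies sizeof(T)*StackCapacity
	std::vector<T> m_heap;
	T * m_data;
	std::size_t m_size;
};
```

The inline array is always part of the object, so the per-object footprint is the threshold size even when the heap path is taken; the trade-off is that the common small case never allocates.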

@diizy (Contributor, Author) commented Aug 16, 2014


Yep, the problem is that heap allocations are bad because they aren't RT-safe (they can take an unbounded amount of time), so we should definitely avoid them in any function that gets called every period. Per-period allocation/deallocation is very bad RT-wise, as I've learned recently.

In the long run I'm thinking we can solve this issue by creating a specific buffer-caching mechanism: a class that allocates a set amount of buffers, which other objects can acquire, release, and recycle on an as-needed basis...
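[Editor's note] The buffer-caching mechanism described above might look like the following minimal sketch: one upfront allocation, after which acquire/release just push and pop pre-made buffers. The class name and interface are hypothetical, and this single-threaded version ignores the synchronization a real mixer-facing cache would need.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical buffer cache: allocates a fixed set of equally sized
// buffers up front, then dispenses and recycles them without touching
// the heap again. Illustrative sketch only (not thread-safe).
class BufferCache
{
public:
	BufferCache( std::size_t bufferFrames, std::size_t count ) :
		m_storage( bufferFrames * count )  // one big upfront allocation
	{
		// reserve() guarantees release() never reallocates, since at
		// most 'count' buffers can ever be in the free list.
		m_free.reserve( count );
		for( std::size_t i = 0; i < count; ++i )
		{
			m_free.push_back( &m_storage[i * bufferFrames] );
		}
	}

	// RT-safe in the sense that no allocation happens here.
	float * acquire()
	{
		if( m_free.empty() ) { return nullptr; } // pool exhausted
		float * buf = m_free.back();
		m_free.pop_back();
		return buf;
	}

	void release( float * buf ) { m_free.push_back( buf ); }

	std::size_t available() const { return m_free.size(); }

private:
	std::vector<float> m_storage;  // backing memory for all buffers
	std::vector<float *> m_free;   // stack of currently unused buffers
};
```

A real implementation would need a policy for pool exhaustion (grow off the audio thread, or fail gracefully) and a lock-free free list if buffers cross threads.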

@tobydox (Member) commented Aug 16, 2014

Yeah, I know heap allocation is bad for RT. A dedicated lightweight memory manager is what we had a long time ago, IIRC; it would be worth implementing again.

@diizy (Contributor, Author) commented Aug 20, 2014


Sounds interesting... can the source code of this memory manager still be found somewhere?

@tobydox (Member) commented Aug 20, 2014

It existed up to version 0.2.1:

https://github.com/LMMS/lmms/blob/d88b2959ceb0a0f714f5bea60b53ba255f2dc661/include/buffer_allocator.h
https://raw.githubusercontent.com/LMMS/lmms/d88b2959ceb0a0f714f5bea60b53ba255f2dc661/src/lib/buffer_allocator.cpp

No idea whether we can still re-use it. Probably it's better to write something from scratch.

@diizy (Contributor, Author) commented Aug 20, 2014


Hmm, yeah, I took a quick look at it and I'm not sure it's exactly what we need... it could maybe be adapted to our needs, but you may be right that it'd be easier to just write a new one. Still, it's definitely interesting and helpful to look at the code...

@diizy (Contributor, Author) commented Aug 20, 2014

After looking into it a bit more, it seems the old LMMS memory manager allocates buffers when needed but never deallocates them; old buffers just get reused. To me this still has the problem that we get far too many unnecessary allocations before the cache is built up enough for the caching to start working in our favour.

I think what we need instead is something that first allocates a large chunk of memory, then uses that memory to dispense buffers (and other objects?), extending it only when necessary. The initial allocation size and the increment of each extension could be made configurable as well.
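[Editor's note] The chunk-based scheme described above can be sketched as a simple bump allocator with configurable initial and growth sizes. This is a hypothetical illustration, not a proposal for the actual class: it never frees or recycles within a chunk, ignores alignment, and a real manager would need both.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical chunk allocator: one large upfront allocation dispenses
// buffers; a new chunk is added only when the current one runs out.
// Initial size and growth increment are configurable. Sketch only:
// no per-object free, no alignment handling, not thread-safe.
class ChunkAllocator
{
public:
	explicit ChunkAllocator( std::size_t initialBytes = 1 << 20,
	                         std::size_t growBytes = 1 << 18 ) :
		m_growBytes( growBytes )
	{
		addChunk( initialBytes );
	}

	// Dispense 'bytes' from the current chunk, extending only if it
	// doesn't fit (the slow path that should rarely be hit).
	void * allocate( std::size_t bytes )
	{
		if( m_used + bytes > m_chunks.back().size() )
		{
			addChunk( bytes > m_growBytes ? bytes : m_growBytes );
		}
		void * p = m_chunks.back().data() + m_used;
		m_used += bytes;
		return p;
	}

	std::size_t chunkCount() const { return m_chunks.size(); }

private:
	void addChunk( std::size_t bytes )
	{
		m_chunks.emplace_back( bytes );
		m_used = 0;  // start dispensing from the new chunk
	}

	std::vector<std::vector<char>> m_chunks;  // all allocated chunks
	std::size_t m_used = 0;                   // bytes used in current chunk
	std::size_t m_growBytes;
};
```

With sensible defaults, the audio thread only ever hits the bump path; extensions happen rarely and could be pushed off the RT thread entirely.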

I found this quite nice tutorial for writing customized memory managers:
http://www.ibm.com/developerworks/aix/tutorials/au-memorymanager/index.html

Maybe the approach described there could be useful for our purposes?

@tobydox (Member) commented Aug 20, 2014

Yes, this is probably a better idea :)
