
Issue 207 #8

Merged 8 commits into master from issue-207 on Aug 15, 2019
Conversation

lucventurini (Owner)
General improvements in speed, especially for multiprocessing. In addition, Mikado pick will now check at runtime whether any transcripts have overly large introns, and remove them.

…t level of approximation kicks in *before* loading transcript data: that should speed things up as well.
…reation and checking to the subprocesses. This should speed it up a lot in the presence of complex, long loci.
…aximum length. This should prevent stray transcripts from botched prepare runs from taking up too much time.
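The runtime intron-length check described above could be sketched, in spirit, as follows. This is a hypothetical illustration, not Mikado's actual code; the function names, the exon representation as `(start, end)` tuples, and the default limit are all assumptions.

```python
# Hypothetical sketch of a runtime filter that removes transcripts with
# introns longer than a maximum size (not Mikado's actual implementation).

def max_intron_length(exons):
    """Return the longest intron implied by a sorted list of (start, end) exons."""
    introns = [nxt[0] - prev[1] - 1 for prev, nxt in zip(exons, exons[1:])]
    return max(introns, default=0)

def filter_by_max_intron(transcripts, max_intron=1_000_000):
    """Keep only transcripts whose longest intron is within the limit.

    `transcripts` maps a transcript ID to its list of (start, end) exons.
    """
    return {tid: exons for tid, exons in transcripts.items()
            if max_intron_length(sorted(exons)) <= max_intron}
```

Because this check only needs raw coordinates, it can run before any expensive per-transcript processing, which is what makes it cheap enough to do at dispatch time.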
@codecov-io
Codecov Report

Merging #8 into master will increase coverage by 0.02%.
The diff coverage is 77.7%.

@@            Coverage Diff             @@
##           master       #8      +/-   ##
==========================================
+ Coverage   79.54%   79.57%   +0.02%     
==========================================
  Files          70       70              
  Lines       15589    15628      +39     
==========================================
+ Hits        12401    12436      +35     
- Misses       3188     3192       +4

@lucventurini lucventurini merged commit 648af46 into master Aug 15, 2019
@lucventurini lucventurini deleted the issue-207 branch August 15, 2019 13:43
lucventurini added a commit that referenced this pull request Feb 11, 2021
This PR addresses the fact that Mikado pick was not correctly leveraging multiple processors. The cause was that the main process was taking on the job of checking transcripts and creating loci, expensive operations that acted as bottlenecks. Now the main process only collates transcripts as GTF rows, performs a minimal check that none of their introns exceed the maximum size, and only then dispatches them to the subprocesses.
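The division of labour described above, a cheap check in the main process followed by dispatch of the expensive work to subprocesses, can be sketched with `multiprocessing.Pool`. Everything here is a hypothetical stand-in (function names, the exon-tuple representation, the default limits), not Mikado's actual API.

```python
# Hypothetical sketch of the dispatch pattern: the main process does only a
# minimal intron-length check, workers do the expensive locus construction.
import multiprocessing as mp

def build_locus(rows):
    """Worker-side stand-in for the expensive part: full transcript checking
    and locus creation happen in the subprocess, not in the main process."""
    return sorted(rows)

def intron_within_limit(exons, max_intron=1_000_000):
    """Main-process minimal check on raw (start, end) exon coordinates."""
    exons = sorted(exons)
    return all(nxt[0] - prev[1] - 1 <= max_intron
               for prev, nxt in zip(exons, exons[1:]))

def pick(loci, max_intron=1_000_000, processes=2):
    """Collate rows, apply only the cheap check, then dispatch to workers."""
    candidates = [rows for rows in loci if intron_within_limit(rows, max_intron)]
    with mp.Pool(processes) as pool:
        return pool.map(build_locus, candidates)
```

Keeping the main process limited to collation and a coordinate-only check is what removes the bottleneck: no database access or heavy validation happens before dispatch.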
Moreover, the threshold for triggering the reduction methods in a locus has been lowered (from 10,000 to 5,000 transcripts), and the first method (removal of redundant, completely contained intron chains) is now triggered before loading transcript data from the database.
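The first reduction method, removing transcripts whose intron chain is completely contained in another transcript's, only needs coordinates, which is why it can run before any database access. A minimal sketch under assumed names and representations (not Mikado's actual code):

```python
# Hypothetical sketch of removing redundant, completely contained intron chains.

def intron_chain(exons):
    """Introns as (start, end) pairs derived from sorted (start, end) exons."""
    exons = sorted(exons)
    return tuple((prev[1] + 1, nxt[0] - 1) for prev, nxt in zip(exons, exons[1:]))

def is_contained(chain, other):
    """True if `chain` appears as a contiguous run inside `other`."""
    n = len(chain)
    return n > 0 and any(other[i:i + n] == chain for i in range(len(other) - n + 1))

def remove_contained(transcripts):
    """Drop transcripts whose intron chain is strictly contained in another's.

    Identical chains are both kept here for simplicity; a real reduction
    would also collapse exact duplicates.
    """
    chains = {tid: intron_chain(exons) for tid, exons in transcripts.items()}
    kept = {}
    for tid, chain in chains.items():
        redundant = any(chain != other and is_contained(chain, other)
                        for o_tid, other in chains.items() if o_tid != tid)
        if not redundant:
            kept[tid] = transcripts[tid]
    return kept
```

Running this before loading transcript data from the database means the discarded transcripts never incur the cost of a database query.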