EXITING because of fatal ERROR: not enough memory for BAM sorting #289
Hi Sergey, my suspicion is that your FASTQ files were derived from coordinate-sorted BAM files. If this is the case, I think you would have to sort it with samtools. Another option is to shuffle the reads in the FASTQ (or in the BAM before conversion to FASTQ). Cheers
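One way to do the shuffling — a minimal sketch, assuming the source is a paired-end coordinate-sorted BAM (all file names below are placeholders):

# samtools collate regroups reads by name in (roughly) arbitrary order
# while keeping mates together, breaking the coordinate order that
# overloads STAR's sorter
samtools collate -o shuffled.bam coord_sorted_input.bam
# convert the collated BAM back to paired FASTQ files
samtools fastq -1 reads_R1.fastq -2 reads_R2.fastq shuffled.bam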
@alexdobin, I think this should be mentioned in the manual, and in the description of the --limitBAMsortRAM option too (Line 382 in 02dbc3f).
Hi Sergey, to estimate the amount of RAM needed for sorting, STAR would basically need to map all the reads. The way to check whether the reads are sorted is to map the first ~100,000 reads (--readMapNumber 100000) with default parameters and check which chromosomes are present in the SAM output; reads taken from a coordinate-sorted BAM will all map to the first chromosome(s). Cheers
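As a concrete sketch of that check (the genome index path and FASTQ names below are placeholders):

# map only the first 100,000 reads to spot-check read order
STAR --runThreadN 4 --genomeDir ./genome_index --readFilesIn reads_R1.fastq reads_R2.fastq --readMapNumber 100000 --outFileNamePrefix check_
# tally which chromosomes the alignments hit; reads derived from a
# coordinate-sorted BAM will pile up on the first chromosome(s)
grep -v '^@' check_Aligned.out.sam | cut -f3 | sort | uniq -c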
Hi Alex, I have the same issue. If I would like to increase the RAM size, how can I do it? I read a few other forums but couldn't find an answer. I have attached my Log.out here for your reference. Thanks
Hi Archana, the problem in your run is not RAM related; it's likely that your disk does not have enough space to write the temp files. Also, please remove the _STARtmp directory from the STAR run directory before re-running this job. Cheers
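In shell terms, a quick sketch (assuming the run directory is the current directory):

df -h .            # check free space on the filesystem holding the run directory
rm -rf _STARtmp    # remove the leftover temp directory before re-running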
Hi Alex! @alexdobin Thanks
Hi Jing, using samtools sort is a good solution.
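A minimal sketch of that workaround, keeping the other options as in the thread (index path, FASTQ names, thread count, and per-thread memory are placeholders):

# write an unsorted BAM so STAR skips its internal sorting step
STAR --runThreadN 8 --genomeDir ./genome_index --readFilesIn reads_R1.fastq reads_R2.fastq --outSAMtype BAM Unsorted
# coordinate-sort with samtools instead of STAR
samtools sort -@ 8 -m 2G -o Aligned.sortedByCoord.out.bam Aligned.out.bam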
Hello!
I am trying to reproduce the TCGA pipeline for users without access to the ICGC pipeline or to a server with 8 cores, 60 GB RAM, and 8 GB swap. But I am getting an error immediately after STAR starts sorting the BAM in the 2nd-pass alignment (the pipeline's step 4).
So I am using STAR 2.4.2a (as in TCGA).
I am talking only about VanAllen2015_pat03-tumor-rna (SRR2689711) and VanAllen2015_pat123-re-tumor-rna (SRR2779596); the other samples did not produce such errors.
I've tried both variants of the --sjdbOverhang option:
- 75 (maxReadLength - 1); and
- 100 (as in TCGA).
Basically I started with:
--runThreadN 8 --genomeLoad NoSharedMemory
--runThreadN 8 --genomeLoad NoSharedMemory --limitBAMsortRAM 56998928790
I'm getting the fatal "not enough memory for BAM sorting" error from the title.
After that I tested version 2.5.3a and got the same result. Identical runs suggest the same RAM value to use; a different patient, --sjdbOverhang, or STAR version yields a different RAM suggestion.
I also tried the following, but it seems conceptually incorrect:
--runThreadN 8 --genomeLoad LoadAndKeep --limitBAMsortRAM 56998928790
Here are my logs; the error is in the step4_* file:
step2_pat123_re-tumor-rna-SRR2779596.Log.final.out.txt
step2_pat123_re-tumor-rna-SRR2779596.Log.out.txt
step2_pat123_re-tumor-rna-SRR2779596.Log.progress.out.txt
step3_pat123_re-tumor-rna-SRR2779596.Log.out.txt
step4_pat123_re-tumor-rna-SRR2779596.Log.out.txt