ZeroDivisionError message - version 3.0.7

Hello,

I first tried to use MetaPhlAn2 on the Galaxy.eu server, but got this error:
This job was resubmitted to the queue because it encountered a tool detected error condition on its compute resource.
usage: metaphlan2.py --input_type {fastq,fasta,multifasta,multifastq,bowtie2out,sam} [--mpa_pkl
Then I tried to install it on our local server via conda. When I ran it, I hit the error described in the post "Fix for KeyError: 'mpa_mpa_v30_CHOCOPhlAn_201901.tar'", so I created a new environment and installed version 3.0.7 as suggested there.
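
For reference, the setup commands were along these lines (the bioconda/conda-forge channels are just my usual defaults rather than anything that post prescribed, so treat this as a rough sketch):

# create a fresh environment named mpa pinned to MetaPhlAn 3.0.7
conda create -n mpa -c bioconda -c conda-forge metaphlan=3.0.7
# switch into it before running metaphlan
conda activate mpa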
When I ran the same command to test the tool:

metaphlan "PATH/_sortmerna_unaligned.fastq" --bowtie2out metagenome.bowtie2.bz2 --nproc 5 --input_type fastq > profiled_metagenome.txt

I got the error message below:
Traceback (most recent call last):
  File "/truba/home/eraysahin/anaconda3/envs/mpa/bin/read_fastx.py", line 10, in <module>
    sys.exit(main())
  File "/truba/home/eraysahin/anaconda3/envs/mpa/lib/python3.7/site-packages/metaphlan/utils/read_fastx.py", line 155, in main
    f_nreads, f_avg_read_length = read_and_write_raw(f, opened=False, min_len=min_len)
  File "/truba/home/eraysahin/anaconda3/envs/mpa/lib/python3.7/site-packages/metaphlan/utils/read_fastx.py", line 118, in read_and_write_raw
    nreads, avg_read_length = read_and_write_raw_int(inf, min_len=min_len)
  File "/truba/home/eraysahin/anaconda3/envs/mpa/lib/python3.7/site-packages/metaphlan/utils/read_fastx.py", line 110, in read_and_write_raw_int
    avg_read_length /= nreads
ZeroDivisionError: division by zero

Could you help me figure out how to resolve this?

Thank you,

Best regards,

Hello, Thank you for the detailed post, and sorry for the slow reply! I am not sure of the cause of the MetaPhlAn error. My guess is that all of the reads in your input file may be shorter than the minimum read length for MetaPhlAn; if they were all filtered out, that would produce the division-by-zero error. I just double-checked, and the current minimum read length for MetaPhlAn is 70 nt. Based on your read length, you could try reducing this with the option --read_min_len, passing a value below 70.
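
If you are not sure how long your reads are, one quick way to check is something like the sketch below (it assumes an uncompressed FASTQ; the filename is just taken from your command above):

# in FASTQ, line 2 of every 4-line record holds the sequence, so print each
# read's length, then tally how often each length occurs
awk 'NR % 4 == 2 { print length($0) }' PATH/_sortmerna_unaligned.fastq | sort -n | uniq -c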

Thank you,
Lauren


Dear Lauren,

Thank you for your reply, and sorry that I forgot to mention my data in this post: it consists of 50 bp reads. I will try that and post an update.

Best regards,
Stay safe,

Eray

Hello Eray, Thanks for the post! If your reads are 50 bp, try running with the option --read_min_len 49; hopefully that resolves the issue. If you still see the error, please reply here with the new message so I can help.
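
Putting that together with the command from your first post, the full invocation would look roughly like this (the paths and output names are yours; only the --read_min_len option is new):

# same command as before, but allow reads down to 49 nt instead of the 70 nt default
metaphlan "PATH/_sortmerna_unaligned.fastq" --input_type fastq --read_min_len 49 --bowtie2out metagenome.bowtie2.bz2 --nproc 5 > profiled_metagenome.txt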

Thanks!
Lauren