The bioBakery help forum

Diamond error:Disk full

I installed HUMAnN2 (version 2.8.1) with conda. Now I'm having a problem with DIAMOND (version 0.8.22) on an HPC cluster:

Searching alignments... terminate called after throwing an instance of 'File_write_exception'
  what():  Error writing file /public/home/gaoxuefeng/test/p136C_3/p136C_R12_humann2_temp/tmp0UOuOq/diamond-5ed12556-174.tmp. Disk full?
terminate called recursively
terminate called recursively
terminate called recursively

The problem seems to be solvable as described here: bbuchfink/diamond#73 (comment)

I would like to know how to modify the DIAMOND parameters that the script uses.

Thanks!

The DIAMOND temp is being written under the HUMAnN temp folder, which is written wherever you tell HUMAnN to place its outputs. Can you save the HUMAnN output (including temp) to a location with more disk space while it is running?
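For example (paths and sample names below are placeholders, adjust them to your cluster), the output location, and hence the temp folder, is controlled by HUMAnN2's `--output` option:

```shell
# Hypothetical paths: point HUMAnN2's output (and with it the
# *_humann2_temp folder, where DIAMOND writes its temp files) at a
# filesystem with plenty of free space, e.g. a scratch volume:
humann2 --input p136C_R12.fastq \
        --output /scratch/myuser/p136C_3   # temp files land under here
```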

The recently announced HUMAnN 3.0 alpha also includes a new flag (--diamond-options) that lets you pass arbitrary arguments to the DIAMOND binary (if you’d rather tune DIAMOND’s operation that way).
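A minimal sketch of that flag (sample names and the option value are illustrative, not a recommendation):

```shell
# HUMAnN 3.0 alpha: forward options directly to DIAMOND, e.g. to send
# DIAMOND's temp files to shared memory instead of disk:
humann --input sample.fastq --output humann_out \
       --diamond-options "--tmpdir /dev/shm"
```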

Thank you for your help. The disk has enough space (>1 TB). Log file:

CRITICAL ERROR: Error executing: /public/home/myuser/anaconda3/envs/humann2/bin/diamond blastx --query /public/home/test/p136C_3/p136C_R12_humann2_temp/p136C_R12_bowtie2_unaligned.fa --evalue 1.0 --threads 48 --max-target-seqs 20 --outfmt 6 --db /public/home/myuser/Database/humann2/uiref90_full/uniref90_annotated.1.1 --out /public/home/myuser/test/p136C_3/p136C_R12_humann2_temp/tmp1RgPjH/diamond_m8_tohBRL --tmpdir /public/home/myuser/test/p136C_3/p136C_R12_humann2_temp/tmp1RgPjH

When I run this command manually, it works normally:

diamond blastx --query /public/home/test/p136C_3/p136C_R12_humann2_temp/p136C_R12_bowtie2_unaligned.fa --evalue 1.0 --threads 48 --max-target-seqs 20 --outfmt 6 --db /public/home/myuser/Database/humann2/uiref90_full/uniref90_annotated.1.1 --out /public/home/myuser/test/p136C_3/p136C_R12_humann2_temp/tmp1RgPjH/diamond_m8_tohBRL --tmpdir /dev/shm

Thanks~

/dev/shm is a location in memory rather than a hard disk, so it seems like there is an issue with writing to the disk while diamond is running. Maybe it can’t handle 48 threads trying to write at the same time? If you have enough memory to write temp files to /dev/shm (and/or store databases there) it can definitely increase efficiency.

I tried with 1 thread, but the problem remained; it may be an issue with our cluster. With a newer version of DIAMOND (0.9.34), the same search works normally:

diamond blastx --query /public/home/myuser/test/p136C_3/p136C_R12_humann2_temp/p136C_R12_bowtie2_unaligned.fa --evalue 1.0 --threads 48 --max-target-seqs 20 --outfmt 6 --db /public/home/myuser/Database/humann2/uiref90_full/uniref90_annotated.1.1 --out /public/home/myuser/test/p136C_3/p136C_R12_humann2_temp/tmp1RgPjH/diamond_m8_tohBRL --tmpdir /public/home/myuser/test/p136C_3/p136C_R12_humann2_temp/tmp1RgPjH --no-unlink

I will try HUMAnN 3.0,
Thanks~

Hello, could you tell me how to change the DIAMOND version to 0.9.34?

You can use DIAMOND (version 0.8.22) to export the database in FASTA format, and then use the latest version (0.9.34) to build a new database, e.g.:

#version 0.8.22
diamond getseq --db uniref90_annotated_1_1.dmnd > uniref90.fasta

#version 0.9.34
diamond makedb --in uniref90.fasta --db uniref90_annotated_1_1_0.9.34

Thanks a lot. But when I used your method, another error occurred. :rofl:

Computing alignments… No such file or directory
[0.001s]
Error: Error writing file /public/home/nx_lfs/metaphlan2_analysis/demo_fastq2/demo_humann2_temp/tmpZuECNb/diamond-tmp-PTc5Iq

I'm not sure what happened, but you can try this: find the file translated.py (perhaps at /public/home/nx_lfs/anaconda2/envs/metagenome_env/lib/python2.7/site-packages/humann2/search/translated.py), then add the parameter (--no-unlink) on line 205, such as:
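For reference, here is a minimal sketch of what that edit amounts to (the variable name and option values are illustrative, not HUMAnN2's actual identifiers): the argument string that translated.py passes to DIAMOND simply gains `--no-unlink`.

```shell
# Illustrative only: append "--no-unlink" to the DIAMOND options that
# the script assembles, so DIAMOND keeps its temp files rather than
# unlinking them while they are still in use.
opts="--evalue 1.0 --threads 1 --outfmt 6"
opts="$opts --no-unlink"
echo "$opts"
```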


Good luck.