I installed humann2 (version 2.8.1) with conda. Now I'm having some problems with diamond (version 0.8.22) on an HPC cluster:
Searching alignments... terminate called after throwing an instance of 'File_write_exception'
what(): Error writing file /public/home/gaoxuefeng/test/p136C_3/p136C_R12_humann2_temp/tmp0UOuOq/diamond-5ed12556-174.tmp. Disk full?
terminate called recursively
terminate called recursively
terminate called recursively
The DIAMOND temp files are written under the HUMAnN temp folder, which lives wherever you tell HUMAnN to place its outputs. Can you save the HUMAnN output (including the temp folder) to a location with more disk space while it is running?
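For example, a minimal sketch (the paths here are placeholders; --input, --output, and --threads are the standard HUMAnN flags):

humann2 --input sample.fastq --output /scratch/$USER/humann2_out --threads 16

The sample_humann2_temp folder, and the DIAMOND temp files inside it, would then be created under /scratch/$USER/humann2_out.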
The recently announced HUMAnN 3.0 alpha also includes a new flag (--diamond-options) that lets you pass arbitrary arguments to the DIAMOND binary (if you’d rather tune DIAMOND’s operation that way).
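For example, one way to use it would be to shrink DIAMOND's block size (a sketch, not tested here; --block-size and --index-chunks are standard DIAMOND tuning options, the values are arbitrary, and the quotes are needed so the inner flags are passed through to DIAMOND rather than interpreted by HUMAnN):

humann --input sample.fastq --output humann_out --diamond-options "--block-size 1 --index-chunks 2"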
/dev/shm is a location in memory rather than a hard disk, so it seems like there is an issue with writing to the disk while diamond is running. Maybe it can’t handle 48 threads trying to write at the same time? If you have enough memory to write temp files to /dev/shm (and/or store databases there) it can definitely increase efficiency.
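A rough sketch of that kind of setup (paths are placeholders, and it assumes the UniRef DIAMOND database plus the run's temp files actually fit in /dev/shm):

df -h /dev/shm
cp -r /path/to/uniref /dev/shm/uniref
humann2 --input sample.fastq --output /dev/shm/humann2_out --protein-database /dev/shm/uniref --threads 8

Just remember that /dev/shm is volatile, so copy the final results back to real disk when the run finishes.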
I'm not sure what happened, but you can try this: find the file translated.py (probably at /public/home/nx_lfs/anaconda2/envs/metagenome_env/lib/python2.7/site-packages/humann2/search/translated.py), then add the parameter --no-unlink to the DIAMOND command built around line 205 of that file.
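Before editing the file, it may be worth checking whether the DIAMOND build that HUMAnN calls even lists this flag (a quick check, assuming diamond is on your PATH and that diamond help prints the option list):

diamond version
diamond help 2>&1 | grep -i "no-unlink" || echo "no-unlink not listed in this build"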
@wfgui Did you find a solution to the 'File_write_exception'? I am trying it with HUMAnN 3 and facing the same issue. @franzosa the --diamond-options flag is not recognized by humann3 and throws an error: expected one argument
--diamond-options should be fine. It looks like you have a special dash character in the flag you are passing to diamond. I would format as --diamond-options "--no-unlink" (note: I have not tested this option, I’m just saying it is syntactically OK).
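For example (only the second form uses two plain ASCII hyphens; the first has an en dash, which often gets pasted in from rich-text editors and will not be parsed as an option prefix):

humann --input sample.fastq --output humann_out --diamond-options "–no-unlink"
humann --input sample.fastq --output humann_out --diamond-options "--no-unlink"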
I used this format too, but it does not recognize the option; it only recognizes --top and --outfmt. Previously humann3 gave me a signal 9 and killed the process, so I moved my database and sample location to an external hard drive (4 TB, USB 3.0), and that's why I reduced the threads to 4. Now I get a signal 6 error:
CRITICAL ERROR: Error executing: /home/drkksharma/anaconda3/envs/biobakery_env/bin/diamond blastx --query /media/drkksharma/EUNI/humann3/AM15_merged_humann_temp/AM15_merged_bowtie2_unaligned.fa --evalue 1.0 --threads 4 --top 1 --outfmt 6 --db /media/drkksharma/EUNI/biobakery_database/humann3_db/uniref_90/uniref/uniref90_201901b_full --out /media/drkksharma/EUNI/humann3/AM15_merged_humann_temp/tmpgau6xjo6/diamond_m8_z54f8ga9 --tmpdir /media/drkksharma/EUNI/humann3/AM15_merged_humann_temp/tmpgau6xjo6
Error message returned from diamond :
diamond v0.9.36.137 (C) Max Planck Society for the Advancement of Science
Documentation, support and updates available at http://www.diamondsearch.org
#CPU threads: 4
Scoring parameters: (Matrix=BLOSUM62 Lambda=0.267 K=0.041 Penalties=11/1)
Temporary directory: /media/drkksharma/EUNI/humann3/AM15_merged_humann_temp/tmpgau6xjo6
Opening the database... [0.786s]
Percentage range of top alignment score to report hits: 1
Reference = /media/drkksharma/EUNI/biobakery_database/humann3_db/uniref_90/uniref/uniref90_201901b_full.dmnd
Sequences = 87296736
Letters = 29247941583
Block size = 2000000000
Opening the input file... [0.021s]
Opening the output file... [0s]
Loading query sequences... [25.683s]
Masking queries... [29.667s]
Building query seed set... [0.078s]
Algorithm: Double-indexed
Building query histograms... [6.307s]
Allocating buffers... [0s]
Loading reference sequences... [19.536s]
Masking reference... [23.676s]
Initializing temporary storage... [0s]
Building reference histograms... [9.063s]
Allocating buffers... [0s]
Processing query block 0, reference block 0, shape 0, index chunk 0.
Building reference seed array... [6.574s]
Building query seed array... [4.717s]
Computing hash join... [2.771s]
Building seed filter... [0.137s]
Searching alignments... [20.008s]
Processing query block 0, reference block 0, shape 0, index chunk 1.
Building reference seed array... [7.853s]
Building query seed array... [5.52s]
Computing hash join... [2.743s]
Building seed filter... [0.137s]
Searching alignments... [19.418s]
Processing query block 0, reference block 0, shape 0, index chunk 2.
Building reference seed array... [8.334s]
Building query seed array... [5.848s]
Computing hash join... [2.699s]
Building seed filter... [0.136s]
Searching alignments... Input/output error
terminate called after throwing an instance of 'File_write_exception'
what(): Error writing file /media/drkksharma/EUNI/humann3/AM15_merged_humann_temp/tmpgau6xjo6/diamond-tmp-qw3o19
I ran humann_test --run-functional-tests-end-to-end, and one test failed.
FAIL: test_humann_fastq_biom_output_pathways (functional_tests_biom_humann.TestFunctionalHumannEndtoEndBiom)
Test the standard humann flow on a fastq input file
Traceback (most recent call last):
File "/home/drkksharma/anaconda3/envs/biobakery_env/lib/python3.7/site-packages/humann/tests/functional_tests_biom_humann.py", line 56, in test_humann_fastq_biom_output_pathways
self.assertEqual(pathways_found,cfg.expected_demo_output_files_biom_pathways)
AssertionError: Items in the second set but not the first:
'PWY-6305'
'PWY-5173'
'PWY490-3'