Hi, when running humann3 I get the following error. Any hints? Thanks in advance:
(metaphlan3.0) curangalt-osx:humann3 curanga$ humann -i C1PEPS.fasta --bypass-nucleotide-search -o C1PEPS
Output files will be written to: /Users/curanga/Desktop/humann3/C1PEPS
Running diamond …
Aligning to reference database: uniref90_201901.dmnd
CRITICAL ERROR: Error executing: /Users/curanga/opt/anaconda3/envs/metaphlan3.0/bin/diamond blastx --query /Users/curanga/Desktop/humann3/C1PEPS/C1PEPS_humann_temp/tmpsoprue3j/tmpsf4pu89o --evalue 100.0 --threads 1 --top 1 --outfmt 6 --db /Users/curanga/Desktop/humann3/databases/uniref/uniref90_201901 --out /Users/curanga/Desktop/humann3/C1PEPS/C1PEPS_humann_temp/tmpsoprue3j/diamond_m8_or5kugcn --tmpdir /Users/curanga/Desktop/humann3/C1PEPS/C1PEPS_humann_temp/tmpsoprue3j
Error message returned from diamond :
diamond v0.9.35.136 © Max Planck Society for the Advancement of Science
Documentation, support and updates available at http://www.diamondsearch.org
#CPU threads: 1
Scoring parameters: (Matrix=BLOSUM62 Lambda=0.267 K=0.041 Penalties=11/1)
Temporary directory: /Users/curanga/Desktop/humann3/C1PEPS/C1PEPS_humann_temp/tmpsoprue3j
Opening the database… [0.066s]
Error: Database was built with an older version of Diamond and is incompatible.
(metaphlan3.0) curangalt-osx:humann3 curanga$
It looks as though DIAMOND changed the name of its database format in v0.9.25. I’m not sure if the format itself changed, but it’s enough to make the software think the database is not compatible. If you’re able to remove your current diamond and install v0.9.24, that should fix the problem.
We apparently got unlucky tagging HUMAnN 3.0 alpha against v0.9.24.
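If your diamond came in via conda, the downgrade might look like this. This is just a sketch: the bioconda channel is an assumption, and the environment name is taken from your shell prompt.

```shell
# Swap diamond for v0.9.24 inside the environment that humann runs in
# (environment name from your prompt; channel assumed to be bioconda):
conda activate metaphlan3.0
conda remove diamond
conda install -c bioconda diamond=0.9.24
diamond --version   # confirm it now reports 0.9.24
```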
Hmmmm ok this still didn’t work… It’s never that easy I guess! Thanks!
(metaphlan3.0) curangalt-osx:humann3 curanga$ diamond --version
diamond version 0.9.24
(metaphlan3.0) curangalt-osx:humann3 curanga$ humann -i C1PEPS.fasta --bypass-nucleotide-search -o newC1peps
Output files will be written to: /Users/curanga/Desktop/humann3/newC1peps
Running diamond …
Aligning to reference database: uniref90_201901.dmnd
CRITICAL ERROR: Error executing: /Users/curanga/opt/anaconda3/envs/metaphlan3.0/bin/diamond blastx --query /Users/curanga/Desktop/humann3/newC1peps/C1PEPS_humann_temp/tmpmc9wa2de/tmpmucb1460 --evalue 100.0 --threads 1 --top 1 --outfmt 6 --db /Users/curanga/Desktop/humann3/databases/uniref/uniref90_201901 --out /Users/curanga/Desktop/humann3/newC1peps/C1PEPS_humann_temp/tmpmc9wa2de/diamond_m8_tcuny3wg --tmpdir /Users/curanga/Desktop/humann3/newC1peps/C1PEPS_humann_temp/tmpmc9wa2de
Error message returned from diamond :
diamond v0.9.24.125 | by Benjamin Buchfink buchfink@gmail.com
Licensed under the GNU GPL https://www.gnu.org/licenses/gpl.txt
Check http://github.com/bbuchfink/diamond for updates.
#CPU threads: 1
Scoring parameters: (Matrix=BLOSUM62 Lambda=0.267 K=0.041 Penalties=11/1)
Temporary directory: /Users/curanga/Desktop/humann3/newC1peps/C1PEPS_humann_temp/tmpmc9wa2de
Opening the database… [0.000836s]
Error: Database was built with a different version of Diamond and is incompatible.
(metaphlan3.0) curangalt-osx:humann3 curanga$
Can you execute: diamond dbinfo --db uniref90_201901.dmnd
to check that the database version matches your executable? Having looked into this, it sounds like diamond users run into more of these errors with the v0.9 releases than I would’ve expected, possibly as a function of how the software was compiled. Do you know which conda channel you pulled v0.9.24 from? It’s possible that specific version has issues, or that we built our databases on an unusual version.
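For reference, the format version can also be pulled out of the dbinfo report mechanically. A sketch, parsing a captured report (the variable below stands in for the output of the real `diamond dbinfo` call, and the version-2 expectation is what a matching v0.9.24 build reports):

```shell
# Hypothetical check: extract "Database format version" from a dbinfo
# report and flag databases older than format version 2 (what v0.9.24
# builds expect). Replace the captured text with the real command:
#   diamond dbinfo --db uniref90_201901.dmnd
dbinfo_output='Database format version = 0
Diamond build = 84'
fmt=$(printf '%s\n' "$dbinfo_output" | awk -F' = ' '/Database format version/ {print $2}')
echo "database format version: $fmt"
if [ "$fmt" -lt 2 ]; then
  echo "database predates format version 2; it may need rebuilding with diamond makedb"
fi
```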
Hi, here is the output that you requested:
(metaphlan3.0) curangalt-osx:uniref curanga$ diamond dbinfo --db uniref90_201901.dmnd
diamond v0.9.35.136 © Max Planck Society for the Advancement of Science
Documentation, support and updates available at http://www.diamondsearch.org
Database format version = 0
Diamond build = 84
Sequences = 3372836
Letters = 1293476324
(metaphlan3.0) curangalt-osx:uniref curanga$
Did you revert to the newer diamond? The message there says diamond 0.9.35. In any case, this is what I get on my system running diamond 0.9.24.125 (the version you were calling above):
diamond v0.9.24.125 | by Benjamin Buchfink <buchfink@gmail.com>
Licensed under the GNU GPL <https://www.gnu.org/licenses/gpl.txt>
Check http://github.com/bbuchfink/diamond for updates.
Database format version = 2
Diamond build = 125
Sequences = 87296736
Letters = 29247941583
Which is what I would’ve expected (version = 2, build = 125).
Are you able to make the dbinfo call using v0.9.24?
Hi, sorry, I reverted to the newest diamond because I have been running diamond independently and then feeding the m8 or SAM files into humann. This actually works, but humann should be able to go through the entire process with the “humann” command by bypassing the nucleotide search, correct? Or not; I don’t know. So I am curious about using humann with custom databases. I guess it is optimized for the UniRef databases?
Best,
Carla
Sorry for the delay here. Yes, you can use --bypass-nucleotide-search to go directly to translated search. While it’s true that we work almost exclusively with the UniRef databases, we ought to be optimized for anything with a similar design (i.e. comprehensive, non-redundant protein catalogs).
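For the two-step workflow you described (running diamond yourself and feeding the results to humann), humann can also start directly from the alignment file. A sketch, assuming blast-tabular (m8) output and that your humann build supports the --input-format flag; file and database names mirror your log:

```shell
# Step 1: translated search outside of humann (your existing step);
# options mirror what humann itself passed to diamond in the log above
diamond blastx --query C1PEPS.fasta --db uniref90_201901 \
  --outfmt 6 --top 1 --out C1PEPS.m8

# Step 2: hand the pre-computed alignments to humann, which then
# skips both the nucleotide and translated search phases
humann -i C1PEPS.m8 --input-format blastm8 -o C1PEPS_from_m8
```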
I think you mentioned before that you’re working with metaproteomic peptides rather than metagenomes? That’s the area where I’d expect our optimization to cause more issues, specifically database sequence coverage requirements (which are unlikely to be met by individual peptides).
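If those coverage requirements are the sticking point for short peptides, they can be relaxed on the command line. A sketch using HUMAnN 3's translated-search threshold flags; the flag names and the idea of setting them to 0 are worth confirming against `humann --help` on your install before relying on the results:

```shell
# Disable the translated-search coverage filters, which individual
# peptides are unlikely to satisfy (assumed flag names; verify locally)
humann -i C1PEPS.fasta --bypass-nucleotide-search \
  --translated-query-coverage-threshold 0.0 \
  --translated-subject-coverage-threshold 0.0 \
  -o C1PEPS_nocov
```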
Hi, I think the whole issue with mapping a peptidome from a complex community is that peptides share homology with so many organisms. When I try to use the complete UniRef for diamond searches, I get multiple matches to one query with the same e-value! This is of course a problem. However, I have a custom database that is smaller and more specific to our samples, but it is a mix of the HOMD database and UniProt “contaminants”. If I could use this database with humann3, it would be ideal! That is the end goal, but I first need to learn which aspects of the fasta file headers need to be parsed correctly for use with humann3. Thank you so much for your help. Please stay healthy!
Best,
Carla