I run HUMAnN with --bypass-prescreen and --bypass-translated-search because I already have MetaPhlAn profiles from a previous MetaPhlAn run. The run fails partway through with an out-of-space error:
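For reference, here is my invocation as best I can reconstruct it (the input filename is inferred from the temp directory name in the log below, and the output path from the log line; treat both as assumptions):

```shell
humann \
  --input OSCC_1-P_unmapped_R1.fastq.gz \
  --bypass-prescreen \
  --bypass-translated-search \
  --output /tmp
```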
Output files will be written to: /tmp
Decompressing gzipped file ...
Creating custom ChocoPhlAn database ........
gzip: stdout: No space left on device
$ du -sh /tmp/*
70G /tmp/OSCC_1-P_unmapped_R1_humann_temp
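Before retrying I now run a small pre-flight check so the job does not die mid-run again; a minimal sketch (OUTDIR and the *_humann_temp naming are assumptions based on my setup above):

```shell
#!/bin/sh
# Pre-flight check before re-running HUMAnN:
# report free space on the filesystem holding the output directory,
# and list any leftover temp directories from interrupted runs,
# which can be deleted to reclaim space.
OUTDIR=/tmp
df -h "$OUTDIR"
du -sh "$OUTDIR"/*_humann_temp 2>/dev/null || echo "no humann temp dirs found"
```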
The custom ChocoPhlAn database seems remarkably large, and it would presumably have grown even larger if the disk had not filled. Am I using the right options for this workflow? Is there a tutorial specifically covering running MetaPhlAn first and then feeding its output to HUMAnN?