HUMAnN failed at the DIAMOND step

Hello there,
I submitted a batch job for HUMAnN; here is the log file info:
12/08/2023 09:33:31 AM - humann.humann - INFO: Running humann v3.8
12/08/2023 09:33:31 AM - humann.humann - INFO: Output files will be written to: /lustre04/scratch/zhangbin/CodeClub_V12/humann/Metaphlan_Humann_rawreads
12/08/2023 09:33:31 AM - humann.humann - INFO: Writing temp files to directory: /lustre04/scratch/zhangbin/CodeClub_V12/humann/Metaphlan_Humann_rawreads/R85_merge_reads_humann_temp
12/08/2023 09:33:31 AM - humann.utilities - INFO: File ( /lustre04/scratch/zhangbin/CodeClub_V12/metaWrap_clean_reads/merged_reads/R85_merge_reads.fastq ) is of format: fastq
12/08/2023 09:33:31 AM - humann.humann - INFO: Removing spaces from identifiers in input file
12/08/2023 09:41:35 AM - humann.utilities - DEBUG: Check software, metaphlan, for required version, 3.0
12/08/2023 09:42:23 AM - humann.utilities - INFO: Using metaphlan version 4.0
12/08/2023 09:42:23 AM - humann.utilities - DEBUG: Check software, bowtie2, for required version, 2.2
12/08/2023 09:42:23 AM - humann.utilities - INFO: Using bowtie2 version 2.5
12/08/2023 09:42:23 AM - humann.utilities - DEBUG: Check software, diamond, for required version, 2.0.15
12/08/2023 09:42:23 AM - humann.utilities - INFO: Using diamond version 2.0.15
12/08/2023 09:42:23 AM - humann.config - INFO:
Run config settings:

DATABASE SETTINGS
nucleotide database folder = /lustre04/scratch/zhangbin/chocophlan2/chocophlan/chocophlan
protein database folder = /lustre04/scratch/zhangbin/chocophlan2/chocophlan/uniref
pathways database file 1 = /home/zhangbin/Humann/lib/python3.10/site-packages/humann/data/pathways/metacyc_reactions_level4ec_only.uniref.bz2
pathways database file 2 = /home/zhangbin/Humann/lib/python3.10/site-packages/humann/data/pathways/metacyc_pathways_structured_filtered_v24_subreactions
utility mapping database folder = /lustre04/scratch/zhangbin/chocophlan2/chocophlan/utility_mapping

RUN MODES
resume = False
verbose = False
bypass prescreen = False
bypass nucleotide index = False
bypass nucleotide search = False
bypass translated search = False
translated search = diamond
threads = 1

SEARCH MODE
search mode = uniref50
nucleotide identity threshold = 0.0
translated identity threshold = 50.0

ALIGNMENT SETTINGS
bowtie2 options = --very-sensitive
diamond options = --top 1 --sensitive --outfmt 6
evalue threshold = 1.0
prescreen threshold = 0.01
translated subject coverage threshold = 50.0
translated query coverage threshold = 90.0
nucleotide subject coverage threshold = 50.0
nucleotide query coverage threshold = 90.0

PATHWAYS SETTINGS
minpath = on
xipe = off
gap fill = on

INPUT AND OUTPUT FORMATS
input file format = fastq
output file format = tsv
output max decimals = 10
remove stratified output = False
remove column description output = False
log level = DEBUG

I received a failure message from the cluster, but I didn't see any error in the HUMAnN log file. The run stopped at the DIAMOND step:

12/08/2023 04:03:58 PM - humann.utilities - INFO: Execute command: /cvmfs/soft.computecanada.ca/easybuild/software/2020/avx512/Compiler/gcc9/diamond/2.0.15/bin/diamond blastx --query /lustre04/scratch/zhangbin/CodeClub_V12/humann/Metaphlan_Humann_rawreads/R85_merge_reads_humann_temp/R85_merge_reads_bowtie2_unaligned.fa --evalue 1.0 --threads 1 --top 1 --sensitive --outfmt 6 --db /lustre04/scratch/zhangbin/chocophlan2/chocophlan/uniref/uniref50_201901b_full --out /lustre04/scratch/zhangbin/CodeClub_V12/humann/Metaphlan_Humann_rawreads/R85_merge_reads_humann_temp/tmpfhf_1_rv/diamond_m8_nvmiz83b --tmpdir /lustre04/scratch/zhangbin/CodeClub_V12/humann/Metaphlan_Humann_rawreads/R85_merge_reads_humann_temp/tmpfhf_1_rv

Any thoughts?

DIAMOND failures are usually due to running out of memory or temp space. Your batch scheduler's logs should indicate whether the job exceeded its resource request.
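
For example, assuming the job ran under SLURM (which Compute Canada clusters use, judging from the /cvmfs/soft.computecanada.ca paths), something like the following would show whether the job hit its memory or time limit; <jobid> is a placeholder for your job ID:

# Per-job efficiency summary: final state, elapsed time, peak memory (MaxRSS) vs. requested memory
seff <jobid>

# More detailed accounting, per job step
sacct -j <jobid> --format=JobID,JobName,State,ExitCode,Elapsed,ReqMem,MaxRSS

If memory was the problem, one option is to request more memory and resume the run so the completed MetaPhlAn/bowtie2 steps are reused, and optionally lower DIAMOND's block size to reduce its peak memory at some cost in speed. This is only a sketch based on the paths in your log; --resume and --diamond-options are documented humann options, but check humann --help for your version before relying on them:

# --resume reuses the intermediate files already in the *_humann_temp directory.
# --diamond-options is assumed here to replace the default DIAMOND options, so the defaults
# from your config are repeated along with a smaller block size (-b) and more index chunks (-c),
# which lowers DIAMOND's memory footprint.
humann --resume \
  --input /lustre04/scratch/zhangbin/CodeClub_V12/metaWrap_clean_reads/merged_reads/R85_merge_reads.fastq \
  --output /lustre04/scratch/zhangbin/CodeClub_V12/humann/Metaphlan_Humann_rawreads \
  --threads 4 \
  --diamond-options "--top 1 --sensitive --outfmt 6 --block-size 2 --index-chunks 4"

DIAMOND's temp files are written to the run's temp directory on /lustre04/scratch in your case, so a full scratch quota or tmp-space limit on that filesystem is also worth checking.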