MetaPhlAn 3.0.7: problem with deeply sequenced WGS files

I am using MetaPhlAn 3.0.7 for microbial community profiling. As input, I am providing FASTQ files generated after removing host-derived sequences with KneadData. Each input file is approximately 15–19 GB. My system has 64 GB RAM and an octa-core (16-thread) processor. I have attached the commands I am using for the analysis, but the runs are not completing successfully. Could this issue be related to the large input file size, or might another factor be affecting execution?

I am also encountering an issue when running HUMAnN for functional profiling. Specifically, input FASTQ files larger than ~15 GB are not processed successfully, and the jobs are terminated during execution. Are there recommended strategies for handling such large input files efficiently? For example, would it be appropriate to split the KneadData-processed FASTQ files into smaller chunks, run HUMAnN on each chunk separately, and then combine or average the results? Or is there a more suitable approach for handling large datasets in HUMAnN? I would greatly appreciate any guidance or suggestions you may have.
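In case splitting turns out to be the right approach, this is the kind of chunking I had in mind: cutting each cleaned FASTQ on 4-line record boundaries so no read is broken. This is only a sketch; it assumes GNU `split` (for `-d` and `--additional-suffix`) and single-end files named `*_kneaddata.fastq`, and the chunk size is a guess to be tuned.

```shell
# Sketch: split each KneadData FASTQ into fixed-size chunks of whole reads.
# A FASTQ record is 4 lines, so the line count per chunk must be reads * 4.
reads_per_chunk=4000000   # assumed value; tune to available RAM
for f in *_kneaddata.fastq; do
    [ -e "$f" ] || continue   # skip cleanly if the glob matched nothing
    split -l $((reads_per_chunk * 4)) -d --additional-suffix=.fastq \
        "$f" "${f%.fastq}.chunk"
done
```

Each input would then yield files like `sample_1_kneaddata.chunk00.fastq`, `chunk01`, etc., which could be fed to HUMAnN one at a time.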

#metaphlan_command

# Profile each KneadData-cleaned FASTQ. The glob now matches the suffix
# being stripped (the original looped over *_kneaddata.fastq but stripped
# _1_kneaddata.fastq, which mangles the output names for non-matching files).
for f in *_1_kneaddata.fastq; do
    metaphlan "$f" --input_type fastq --nproc 16 --read_min_len 70 \
        --ignore_eukaryotes > "${f%_1_kneaddata.fastq}_profile.tsv"
done

#humann_command

# Run HUMAnN on the same files, reusing each MetaPhlAn profile. The glob
# matches only the KneadData outputs (the original *.fastq would also pick
# up any other FASTQ files in the directory).
for f in *_1_kneaddata.fastq; do
    humann --input "$f" --output humann_output/ \
        --taxonomic-profile "${f%_1_kneaddata.fastq}_profile.tsv" \
        --remove-temp-output --threads 16
done
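If I do end up running HUMAnN per chunk, my current thinking for recombining the per-chunk outputs is to sum the RPK values per feature rather than average them, since chunks partition the reads. This is only a sketch: it assumes two-column tab-separated tables (feature, RPK) with a single header line, and the `chunk*_genefamilies.tsv` names are hypothetical placeholders for the per-chunk HUMAnN outputs.

```shell
# Sum per-feature RPK values across per-chunk tables (hypothetical names).
# FNR == 1 catches each file's header; only the first is kept.
awk -F'\t' 'FNR == 1 { header = $0; next }
            { sum[$1] += $2 }
            END { print header
                  for (k in sum) printf "%s\t%s\n", k, sum[k] }' \
    chunk*_genefamilies.tsv > combined_genefamilies.tsv
```

The output rows come out in arbitrary order, so a `sort` afterwards may be wanted before comparing against a whole-file run.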