HUMAnN 3 run crashes with "Unhandled exception in thread started by"

I got a fatal error for one of my samples, whereas the others finished cleanly:

Computing pathways abundance and coverage ...
Unhandled exception in thread started by <bound method Thread._bootstrap of <Worker(Thread-1, initial
daemon)>>
Traceback (most recent call last):
  File "/home/matalb01/miniconda3/envs/biobakery3/lib/python3.7/threading.py", line 890, in _bootstrapMemoryError
libgcc_s.so.1 must be installed for pthread_cancel to work
fish: “humann --memory-use maximum --r…” terminated by signal SIGABRT (Abort)

I did not see high RAM consumption, nor any CPU or disk-write peak. I have libgcc installed on my server, which has 800 GB of RAM and 80 CPUs.

Do you have any idea what might be going on?

My input is 64 GB, which is larger than my other files; could that be the cause of my problem?

If so, is it possible to split the file and then concatenate/merge the two outputs?

Thanks,

It’s hard to know what happened exactly without more detail, but if this sample was much larger than the others that is a good working hypothesis. That could either result in a memory problem (unlikely since you have a lot of RAM) or maybe an out-of-space error on your disk? I would’ve expected the latter to produce a more specific error message though.
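As for the split-and-merge idea from your question: if you ran HUMAnN on two halves of the reads, combining the two per-sample outputs would mean summing abundances feature by feature (RPK values are additive across disjoint read subsets, since they are normalized by gene length, not by sequencing depth). A minimal sketch of that merge, assuming plain two-column gene-family TSVs; this is not an official HUMAnN utility and the function name is mine:

```python
# Sketch (NOT an official HUMAnN tool): sum two gene-family tables that were
# produced from split halves of one sample. Assumes the standard two-column
# layout: "# Gene Family<TAB><sample>_Abundance-RPKs".
import csv
from collections import defaultdict


def merge_genefamilies(paths, out_path):
    """Sum abundances feature-by-feature across several gene-family tables."""
    totals = defaultdict(float)
    header = None
    for path in paths:
        with open(path) as fh:
            reader = csv.reader(fh, delimiter="\t")
            header = next(reader)  # keep the last header seen
            for feature, value in reader:
                totals[feature] += float(value)
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh, delimiter="\t")
        writer.writerow(header)
        # Sorting keeps stratified rows ("UniRef90_X|g__Taxon") grouped
        # under their community total, as in HUMAnN's own output.
        for feature in sorted(totals):
            writer.writerow([feature, totals[feature]])
```

The merged table could then be fed back into HUMAnN for the pathway step, but note this only applies if you do end up having to split the input.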

I can monitor the following on my server:

  • thread usage
  • RAM usage
  • disk writes

None of them went out of range on my server. To be more precise, 6 of my 300 samples could not complete the HUMAnN run; all 6 inputs were larger than 60 GB, whereas the others were around 50 GB.

The *_genefamilies.tsv outputs for those 6 samples were produced, so it appears the process crashed while creating the pathway outputs.

If I run MinPath separately, could that work?

Huh, that is very surprising, since the gene families → pathways step is probably the least resource-intensive part of HUMAnN. You could try running HUMAnN for those samples with the genefamilies.tsv files as your input (to re-attempt computing pathways without starting from the raw reads again).
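Concretely, that re-run could look like the sketch below. HUMAnN detects a .tsv gene-family table as input and skips straight to pathway reconstruction; the sample name and paths here are hypothetical, so adjust them to your setup:

```shell
# Hypothetical sample name and paths -- adjust to your setup.
SAMPLE=sample6
GF_TABLE=${SAMPLE}_genefamilies.tsv

# HUMAnN accepts a gene-family table as --input and recomputes only the
# pathway abundance/coverage outputs; --o-log captures a per-run log.
CMD="humann --input $GF_TABLE --output ${SAMPLE}_pathways --o-log ${SAMPLE}_pathways.log"
echo "$CMD"
# Uncomment to actually run it:
# $CMD
```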

Was there any more detailed error message in the HUMAnN log for those files?
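If it helps, a quick way to pull anything useful out of a run log is a case-insensitive grep for error-level lines. The log name and contents below are placeholders so the sketch is self-contained:

```shell
# Sketch: scan a HUMAnN run log for error-level lines.
# The log name and its contents are placeholders for this example.
LOG=demo_humann.log
printf 'INFO: database files loaded\nCRITICAL ERROR: example failure\n' > "$LOG"

grep -iE 'error|critical|traceback' "$LOG"
```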