Re-starting Halted Job (leveldb error)

Hello, my VPN (needed to access an HPC system) disconnected while I was running the following command, halting the job:
biobakery_workflows wmgx --input Projects/ORIGINS/biobakery/RawSeq --output Projects/ORIGINS/biobakery/Output_all_021423 --pair-identifier .R1 --taxonomic-profiling-options "--bowtie2db /home/biobakery_workflows_databases -t rel_ab_w_read_stats --perc_nonzero 25 --stat_q 0.10" --threads 64 --local-jobs 32

I reconnected to the VPN and tried running the same command again, but ran into the following error:

Traceback (most recent call last):
  File "/home/env/biobakery/020823/bin/", line 184, in <module>
  File "/home/env/biobakery/020823/lib/python3.8/site-packages/anadama2/", line 772, in go
    self._backend = backends.default(self.vars.get("output"))
  File "/home/env/biobakery/020823/lib/python3.8/site-packages/anadama2/", line 22, in default
    return LevelDBBackend(
  File "/home/env/biobakery/020823/lib/python3.8/site-packages/anadama2/", line 111, in __init__
    self.db = leveldb.LevelDB(self.data_directory,
leveldb.LevelDBError: IO error: lock /Projects/ORIGINS/biobakery/Output_all_021423/.anadama/db/LOCK: Resource temporarily unavailable

Is there a way to bypass the LevelDB lock? Or is there another way to resume a halted job?

Thank you!

Note: I have 182 samples, with forward and reverse reads.

Hello, sorry for the slow response. The LevelDB lock indicates that the workflow process is still running. If you stop that process, it will release the LevelDB lock. Then re-run the exact same workflow command; it should only run the tasks that have not already finished successfully.
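To illustrate, here is a minimal sketch of finding and stopping the lingering process on the HPC node. The process-name pattern "biobakery_workflows" is an assumption; check the actual name with `ps aux` first, since a VPN disconnect can leave the process running under a different parent.

```shell
# List any workflow processes that may still be holding the LevelDB lock
# (the pattern "biobakery_workflows" is an assumption; verify with `ps aux`)
pgrep -af biobakery_workflows

# Send SIGTERM so the workflow can shut down and release the lock
pkill -f biobakery_workflows

# Confirm nothing matches any more before re-running the workflow
pgrep -af biobakery_workflows || echo "no workflow processes remaining"
```

Once the lock is released, re-running the identical `biobakery_workflows wmgx ...` command resumes from the recorded task state rather than starting over.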

Please post again if you continue to have issues!

Thank you,