Hello, The first file in your list is the final FASTQ output file (after QC and decontamination filtering). The second file contains the contaminant reads found when using the demo_db bowtie2 database. The third file is the log of all kneaddata steps, including read counts for each QC task. The final file in your list contains the reads after running through Trimmomatic. For more information on the output files please see the kneaddata user manual: http://huttenhower.sph.harvard.edu/kneaddata
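As a side note, if you would like those per-step read counts collected into a single table, the kneaddata_read_count_table utility that ships with kneaddata should do it; please double-check the flags against your installed version:

    kneaddata_read_count_table --input $OUTPUT_FOLDER --output kneaddata_read_count_table.tsv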
Thank you,
Lauren
Dear Lauren, Thanks for your reply; it is working really well. I am using shotgun sequencing (PE reads), and I did a trial run with this command (kneaddata --input /Users/mukilkavi/Desktop/Metaseq/fq1/bg01b_350.raw.fq1.gz --input /Users/mukilkavi/Desktop/Metaseq/fq2/bg01b_350.raw.fq2.gz --reference-db /Users/mukilkavi/Desktop/kneaddata-0.7.3/hg37dec_v0.1.1.bt2 --trimmomatic /Users/mukilkavi/Desktop/kneaddata-0.7.3/Trimmomatic-0.39 --output trail)
and it worked well, but instead of running each sample separately I would like the command to pick up all samples (the entire folder) for the two --input arguments… Could you please help me write a script to run all samples at once?
Thanks in advance
Regards
Mukil
Hi Mukil, I am glad to hear it is working well. The easiest way to run all samples in a folder through kneaddata is to use biobakery workflows. You only need to install the base workflows package, and then you can specify all of your samples with a single input folder.
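If you would rather keep calling kneaddata directly, a plain bash loop over your input folders also works. Here is a rough sketch; the folder paths and the *.raw.fq1.gz / *.raw.fq2.gz naming pattern are only assumptions taken from your trial command, so adjust them to match your data:

    #!/bin/bash
    # Run kneaddata on every paired-end sample in the fq1/fq2 folders.
    # Paths and file-name pattern are assumptions based on the trial command above.
    FQ1_DIR=/Users/mukilkavi/Desktop/Metaseq/fq1
    FQ2_DIR=/Users/mukilkavi/Desktop/Metaseq/fq2
    DB=/Users/mukilkavi/Desktop/kneaddata-0.7.3/hg37dec_v0.1.1.bt2
    TRIM=/Users/mukilkavi/Desktop/kneaddata-0.7.3/Trimmomatic-0.39
    OUT=kneaddata_output

    mkdir -p "$OUT"

    for fq1 in "$FQ1_DIR"/*.raw.fq1.gz; do
        sample=$(basename "$fq1" .raw.fq1.gz)    # e.g. bg01b_350
        fq2="$FQ2_DIR/${sample}.raw.fq2.gz"      # matching mate file
        kneaddata --input "$fq1" --input "$fq2" \
            --reference-db "$DB" \
            --trimmomatic "$TRIM" \
            --output "$OUT/$sample"
    done

With biobakery workflows the equivalent is a single command pointed at one input folder (something along the lines of biobakery_workflows wmgx --input <input_folder> --output <output_folder>); please check the workflows tutorial for the exact options for your setup.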
Thanks,
Lauren
Dear Lauren
Thanks for your prompt response. Is this link (https://bitbucket.org/biobakery/biobakery_workflows/wiki/Home) for the biobakery workflows?
Regards
Mukil
Hi Mukil, Yes, we also have the repo on GitHub. We are migrating all of our repos from Bitbucket to GitHub.
Thanks,
Lauren