I would like to test for the interaction between Group and Time in a repeated measures design:
‘~ Group * Timepoint + (1|StudyID)’
Group has 3 levels and Time has 6 levels. The goal is to determine which ASVs are significantly different in each pairwise comparison of the Group and Time levels. Is the only way to do this to change the reference levels of Group and Time and rerun the analysis multiple times to get all pairwise comparisons? Or is there an easier way of doing this in maaslin3?
I would recommend using the maaslin_contrast_test function for this (described in the wiki here). You can create a contrast matrix with columns named after the name column of your maaslin3 outputs and rows named by the contrast you’re running. Then, in each row, put a pair of -1 and 1 in the columns of the two groups you want to test against each other. (This can be generated algorithmically, as in the sketch below, so you don’t actually have to type in all the 1s and 0s.)
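Here’s a rough sketch of building such a matrix programmatically and passing it to maaslin_contrast_test. The coefficient names and the fit_out object (and the slot holding its fits) are placeholders for whatever your own run produced, and the argument names may differ slightly by maaslin3 version, so check the wiki for the exact interface:

```r
library(maaslin3)

# Hypothetical coefficient names; use the actual entries from the 'name'
# column of your maaslin3 results (non-reference levels only).
coef_names <- c("GroupB", "GroupC", "Timepoint2", "Timepoint3")

# All pairwise -1/1 contrasts, generated algorithmically
pairs <- combn(coef_names, 2)
contrast_mat <- matrix(0, nrow = ncol(pairs), ncol = length(coef_names),
                       dimnames = list(paste(pairs[1, ], "vs", pairs[2, ]),
                                       coef_names))
for (i in seq_len(ncol(pairs))) {
  contrast_mat[i, pairs[1, i]] <- 1
  contrast_mat[i, pairs[2, i]] <- -1
}

# 'fit_out' is assumed to be the object returned by your original maaslin3
# call; the exact slot holding the fitted models may differ by version.
contrast_out <- maaslin_contrast_test(fits = fit_out$fit_data_abundance$fits,
                                      contrast_mat = contrast_mat)
```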
I’ll caution that you’ll have 18 choose 2 = 153 pairs with probably hundreds of taxa each, creating an enormous multiple hypothesis testing challenge. It’s likely that all but the most extreme associations won’t come out significant after FDR correction (and even those might be extreme only because they’re poor model fits). If there are some hypotheses you don’t care about (e.g., group 1 time 1 vs. group 2 time 2), dropping those up front could preserve a lot of statistical power.
Lastly, I’d recommend including small_random_effects = TRUE in your maaslin3 run to avoid erroneous hits that are likely to come up in this kind of all-vs-all analysis.
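For reference, a minimal sketch of what that flag looks like in the fitting call; the input objects, output path, and other arguments are placeholders for your own run, so adapt them accordingly:

```r
# Minimal sketch: the input objects and output path are placeholders;
# only small_random_effects = TRUE is the point here.
fit_out <- maaslin3(input_data = asv_table,
                    input_metadata = metadata,
                    output = "maaslin3_output",
                    formula = '~ Group * Timepoint + (1|StudyID)',
                    small_random_effects = TRUE)
```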
I’m only using abundance for testing. Does the reference level used in the initial maaslin3 run affect the fit of the model and the pairwise comparison results? I tried the contrast test with ~30 comparisons but didn’t manage to get any results (the errors were ‘Predictors not in the model had non-zero contrast values’ and ‘Fit is NA’), but when I removed the comparisons containing the reference levels of Group and Time (which roughly halved the number of comparisons) I managed to get results. I’m wondering whether that was due to removing the comparisons with the reference levels or to the reduced number of comparisons being tested. Will splitting the comparisons into multiple contrast test runs against the same model affect the results?
The reference level of the original run shouldn’t affect the pairwise results apart from small Monte-Carlo error in the median comparison step.
Regarding the ‘Predictors not in the model had non-zero contrast values’ error, are you using the exact same names in the contrast matrix columns as in the name column? If so, could you post a small chunk of each so I can try to figure out what’s going on? I suspect the issue is that your contrast matrix includes columns for the reference levels, but those never show up in the name column of the MaAsLin 3 output because they’re absorbed into the intercept. Those contrasts (against the reference) will have already been run originally, but you could re-run them by putting a single 1 in the comparison group’s column of the contrast matrix, as in the sketch below.
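For example, continuing the hypothetical names from the earlier sketch, re-running the GroupB-vs-reference and GroupC-vs-reference contrasts would just mean appending rows like these:

```r
# Rows with a single 1 test that coefficient against the reference level
ref_rows <- matrix(0, nrow = 2, ncol = length(coef_names),
                   dimnames = list(c("GroupB vs ref", "GroupC vs ref"),
                                   coef_names))
ref_rows["GroupB vs ref", "GroupB"] <- 1
ref_rows["GroupC vs ref", "GroupC"] <- 1
contrast_mat <- rbind(contrast_mat, ref_rows)
```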
‘Fit is NA’ means there was a problem fitting that feature originally, so a contrast test can’t be run for it.
Splitting the comparisons across multiple runs isn’t an issue per se, but you should p.adjust over all the p-values together.
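A minimal sketch of pooling the raw p-values across runs before correcting once; the result objects and the p-value column name are assumptions, so substitute whatever your contrast test output actually contains:

```r
# Combine the raw p-values from both contrast test runs, then correct once.
# 'pval' is an assumed column name; check your results for the actual one.
combined <- rbind(contrast_out_run1, contrast_out_run2)
combined$qval <- p.adjust(combined$pval, method = "BH")
```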
Soon (hopefully later today), I’ll be pushing an update that makes maaslin_contrast_test return results more like maaslin_fit so that it automatically runs both the abundance and prevalence models, calculates q-values, gives joint p/q-values, etc.
Right, OK, I see. You’re right, I added the reference levels as columns of the contrast matrix, but they wouldn’t have existed in the name column of the results. So if I want to re-run the original contrasts against the reference levels, I should add additional rows for those comparisons, leave the reference levels out of the columns, and set the comparator’s column to 1 with everything else as 0?
Tried it with the new update and it worked great. I still had the ‘Predictors not in the model had non-zero contrast values’ error, but only for ASVs that had errors or problems fitting in the original run, so that was fine. Thanks for all your help!