Multi-run Comparator

Advanced Models

Run multiple models or contrasts within the same project and review each run from a shared run history, making it easy to compare results across different analyses.

When to Use

  • You want to test several contrasts against the same baseline (e.g., Drug A vs. DMSO, Drug B vs. DMSO, Drug C vs. DMSO).
  • You are comparing models with and without a covariate to assess the impact of batch correction on your results.
  • You need to identify genes that are consistently differentially expressed across multiple comparisons.
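The last use case above can be sketched in a few lines. This is a hypothetical example, not the tool's own logic: each run's exported results table is represented as a simple gene-to-adjusted-p-value mapping, and the gene names, run names, and cutoff are all illustrative.

```python
# Hypothetical sketch: find genes consistently differentially expressed
# across several contrast runs. Each run is a gene -> adjusted p-value
# mapping, as might be exported from a results table. Names are made up.
runs = {
    "DrugA_vs_DMSO": {"GENE1": 0.001, "GENE2": 0.20, "GENE3": 0.004},
    "DrugB_vs_DMSO": {"GENE1": 0.010, "GENE2": 0.03, "GENE3": 0.300},
    "DrugC_vs_DMSO": {"GENE1": 0.002, "GENE2": 0.04, "GENE3": 0.020},
}

ALPHA = 0.05  # significance cutoff on adjusted p-values


def consistent_hits(runs, alpha=ALPHA):
    """Return genes significant in every run (intersection of hit sets)."""
    hit_sets = [
        {gene for gene, padj in results.items() if padj < alpha}
        for results in runs.values()
    ]
    return set.intersection(*hit_sets)


print(sorted(consistent_hits(runs)))  # -> ['GENE1']
```

Only GENE1 clears the cutoff in all three contrasts, so it is the single consistent hit; an intersection like this is deliberately strict, and you may prefer a "significant in k of n runs" rule for noisier data.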

Required Inputs

  • Two or more completed RNA-seq result runs stored in the same project.
  • Each run should have a clear, descriptive name so you can identify it later during comparison.

What to Expect

  • Each configured run is saved in the project run history with its timestamp, design formula, and contrast specification.
  • You can open any past run, inspect its results table and plots, and export the output for external comparison.
  • Result tabs and plot tabs stay tied to the selected run so that your interpretation remains traceable to a specific analysis configuration.
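As a rough mental model of what a run-history entry carries, the sketch below stores the three pieces of metadata listed above (timestamp, design formula, contrast specification) alongside a user-chosen name. The field names and `RunRecord` class are assumptions for illustration, not the tool's actual schema.

```python
# Illustrative sketch of a run-history entry that keeps results traceable
# to a specific analysis configuration. Field names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class RunRecord:
    name: str        # descriptive run name chosen by the user
    design: str      # model design formula, e.g. "~condition + batch"
    contrast: tuple  # (factor, test level, reference level)
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def label(self):
        """Compact label for display in a run-history list."""
        factor, test, ref = self.contrast
        return f"{self.name}: {test} vs {ref} ({self.design})"


run = RunRecord("DrugA_batchcorr", "~condition + batch",
                ("condition", "DrugA", "DMSO"))
print(run.label())  # -> DrugA_batchcorr: DrugA vs DMSO (~condition + batch)
```

Keeping the design and contrast attached to every saved run is what makes later comparisons interpretable: two runs with the same contrast but different designs are answering subtly different questions.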

Common Pitfalls

  • Comparing models with different designs (e.g., ~condition vs. ~condition + batch) is valid, but results may shift because each model accounts for different sources of variance.
  • Different reference levels produce different fold-change signs. Ensure you use consistent reference levels when comparing across runs.
  • A large number of runs can make the comparator view cluttered. Focus on the key contrasts that address your biological questions.
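The reference-level pitfall above is simple arithmetic: a log2 fold change is computed relative to the reference group, so swapping the reference flips its sign. A minimal sketch with made-up group means:

```python
# Why the reference level matters: swapping it flips the sign of the
# log2 fold change. The group means here are illustrative values.
import math


def log2fc(mean_test, mean_ref):
    """log2 fold change of the test group relative to the reference."""
    return math.log2(mean_test / mean_ref)


treated, control = 200.0, 50.0
fc_vs_control = log2fc(treated, control)  # control as reference
fc_vs_treated = log2fc(control, treated)  # treated as reference
print(fc_vs_control, fc_vs_treated)  # -> 2.0 -2.0
```

The magnitude is identical either way; only the sign differs. This is why runs compared side by side should use the same reference level, or the signs must be reconciled before interpretation.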
