Introduction
MIP enables identification of potential disease-causing variants from sequencing data.
MIP is being rewritten in NextFlow as a part of the nf-core project. This repo will mainly receive bugfixes as we are focusing our resources on the new pipeline. You can follow the progress here 👉 raredisease.
MIP performs whole genome or target region analysis of sequenced single-end and/or paired-end reads from the Illumina platform in fastq(.gz) format to generate annotated, ranked potential disease-causing variants.
MIP performs QC, alignment, coverage analysis, variant discovery and annotation, and sample checks, and ranks the identified variants according to disease potential, all with a minimum of manual intervention. MIP is compatible with Scout for visualization of identified variants.
MIP rare disease DNA analyses single nucleotide variants (SNVs), insertions and deletions (INDELs) and structural variants (SVs).
MIP rare disease RNA analyses monoallelic expression, fusion transcripts, transcript expression and alternative splicing.
MIP rare disease DNA vcf rerun performs re-runs starting from BCFs or VCFs.
MIP has been in use in the clinical production at the Clinical Genomics facility at Science for Life Laboratory since 2014.
Installation
Simple automated install of all programs using conda/docker/singularity via the supplied install application
Downloads and prepares references in the installation process
Autonomous
Checks that all dependencies are fulfilled before launching
Builds and prepares references and/or files missing before launching
Decompose and normalise reference(s) and variant VCF(s)
Automatic
A minimal amount of hands-on time
Tracks and executes all recipes without manual intervention
Creates internal queues at nodes to optimize processing
Flexible
Design your own workflow by turning on/off relevant recipes in predefined pipelines
Restart an analysis from anywhere in your workflow
Process one or multiple samples
Supply parameters on the command line, in a pedigree.yaml file or via config files
Simulate your analysis before performing it
Limit a run to a specific set of genomic intervals or chromosomes
Use multiple variant callers for SNVs, INDELs and SVs
Use multiple annotation programs
Optionally split data into clinical variants and research variants
Fast
Analyses an exome trio in approximately 4 h
Analyses a genome in approximately 21 h
Traceability
Track the status of each recipe through dynamically updated status logs
Recreate your analysis from the MIP log or generated config files
Log sample meta-data and sequence meta-data
Log version numbers of software and databases
Checks sample integrity (sex, contamination, duplications, ancestry, inbreeding and relationship)
Test data output file creation and integrity using automated tests
Annotation
Gene annotation
Summarize over all transcripts and output at the gene level
Transcript level annotation
Separate pathogenic transcripts for correct downstream annotation
Annotate all alleles for a position
Split multi-allelic records into single records to facilitate annotation
Left align and trim variants to normalise them prior to annotation
Extract QC metrics and store them in YAML format
Annotate coverage across genetic regions via Sambamba and Chanjo
Standardized
Use standard formats whenever possible
Visualization
Ranks variants according to pathogenic potential
Output is directly compatible with Scout
Prerequisites
MIP is written in Perl and therefore requires that Perl is installed on your OS.
We recommend miniconda for installing Perl and cpanm libraries. However, perlbrew can also be used for installing and managing Perl and cpanm libraries together with MIP. Installation instructions and setting up specific cpanm libraries using perlbrew can be found here.
Automated Installation (Linux x86_64)
Below are instructions for installing the Mutation Identification Pipeline (MIP).
1. Clone the official git repository
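For example (the repository URL assumes the Clinical-Genomics organisation on GitHub):

```bash
$ git clone https://github.com/Clinical-Genomics/MIP.git
$ cd MIP
```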
2. Install required perl modules from cpan to a specified conda environment
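A minimal sketch of this step; the environment name mip is illustrative, and the cpanfile at the repository root is an assumption:

```bash
# Create and activate a conda environment holding perl and cpanm (names illustrative)
$ conda create -n mip perl perl-app-cpanminus
$ conda activate mip
# Install the perl module dependencies declared by the repository
$ cpanm --installdeps .
```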
3. Test conda and mip installation files (optional, but recommended)
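A hedged sketch, assuming the installer's tests are run with prove from the repository root (the exact test path is an assumption):

```bash
$ prove t/mip_install.test
```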
A conda environment will be created where MIP with all dependencies will be installed.
4. Install MIP
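With the environment active, the installer is invoked through the mip front-end (see the note below for its full option list):

```bash
$ conda activate mip   # assumes the default environment name
$ perl mip install
```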
This will cache the containers that are used by MIP.
Note: For a full list of available options and parameters, run: $ perl mip install --help
5. Test your MIP installation (optional, but recommended)
Make sure to activate your MIP conda environment before executing prove.
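For example, assuming the default environment name mip and the test suite under t/:

```bash
$ conda activate mip
$ prove -r t/
```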
When setting up your analysis config file, a starting point is provided in MIP's template directory. You will have to modify the load_env keys to match whatever you named the environment. If you are using the default environment name, the load_env part of the config should look like this:
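A minimal sketch, assuming the default environment name mip; the exact sub-keys may differ between MIP versions:

```yaml
load_env:
  mip:            # the name you gave the conda environment
    method: conda # how the environment is loaded
    mip:          # entry for the mip executable itself
```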
MIP is called from the command line and takes input from the command line (precedence) or falls back on defaults where applicable.
Lists are supplied as repeated flag entries on the command line or in the config using the YAML format for arrays. Only flags that will actually be used need to be specified, and MIP will check that all required parameters are set before submitting to SLURM.
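For example, a list parameter such as sample_ids (the parameter name is used for illustration) can be given either as repeated flags:

```bash
$ mip analyse rd_dna 3 --sample_ids sample_1 --sample_ids sample_2
```

or as a YAML array in the config:

```yaml
sample_ids:
  - sample_1
  - sample_2
```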
Recipe parameters can be set to "0" (=off), "1" (=on) and "2" (=dry run mode). Any recipe can be set to dry run mode and MIP will create the sbatch scripts, but not submit them to SLURM. MIP can be restarted from any recipe using the --start_with_recipe flag and after any recipe using the --start_after_recipe flag.
MIP will overwrite data files when reanalyzing, but keeps all "versioned" sbatch scripts for traceability.
You can always supply mip [process] [pipeline] --help to list all available parameters and defaults.
Example usage:
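(The command below is a reconstruction matching the description that follows; the case id, config file name and recipe name are illustrative.)

```bash
$ mip analyse rd_dna 3 --config_file 3_config.yaml --start_after_recipe bwa_mem
```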
This will analyse case 3 using the 3 individuals from that case, begin the analysis with the recipes after bwa_mem, and use all parameter values as specified in the config file except those supplied on the command line, which take precedence.
Running programs in containers
Aside from a conda environment, MIP uses containers to run programs. You can use either singularity or docker as your container manager. Containers that are downloaded using MIP's automated installer need no extra setup. By default MIP will make the reference, outdata and temp directories available to the container. Extra directories can be made available to each recipe by adding the key recipe_bind_path in the config.
In the example below the config has been modified to include the infile directories for the bwa_mem recipe:
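A sketch of such an entry, with a placeholder path:

```yaml
recipe_bind_path:
  bwa_mem:
    - /path/to/fastq_dir   # extra directory made available to the bwa_mem container
```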
Input
Fastq file directories can be supplied with --infile_dirs [PATH_TO_FASTQ_DIR=SAMPLE_ID]
All references and template files should be placed directly in the reference directory specified by --reference_dir.
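Putting the two flags together, a hedged example where the paths and sample id are placeholders:

```bash
$ mip analyse rd_dna 3 \
    --infile_dirs /path/to/fastq_dir=sample_1 \
    --reference_dir /path/to/references
```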
Meta-Data
Configuration file (YAML-format)
Pedigree file (YAML-format)
Rank model file (Ini-format; SNV/INDEL)
SV rank model file (Ini-format; SV)
Qc regexp file (YAML-format)
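As an illustration of the YAML-based meta-data, a minimal pedigree file might look like the sketch below; the keys shown are assumptions based on common pedigree fields and may differ between MIP versions:

```yaml
case: 3
samples:
  - sample_id: sample_1
    analysis_type: wgs   # e.g. wgs or wes
    father: 0            # 0 = not in the pedigree
    mother: 0
    phenotype: affected
    sex: male
```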
Output
Analyses done per individual are found in each sample_id directory, and analyses involving all samples are found in the case directory.
Sbatch Scripts
MIP will create sbatch scripts (.sh) and submit them in the proper order with attached dependencies to SLURM. These sbatch scripts are placed in the output script directory specified by --outscript_dir. The sbatch scripts are versioned and will not be overwritten if you begin a new analysis. Versioned "xargs" scripts will also be created where possible to maximize the use of the cores' processing power.
Data
MIP will place any generated data files in the output data directory specified by --outdata_dir. All data files are regenerated for each analysis. STDOUT and STDERR for each recipe are written in the recipe/info directory.