Introduction

MIP - Mutation Identification Pipeline

MIP enables identification of potential disease-causing variants from sequencing data.

Citing MIP

Rapid pulsed whole genome sequencing for comprehensive acute diagnostics of inborn errors of metabolism
Stranneheim H, Engvall M, Naess K, Lesko N, Larsson P, Dahlberg M, Andeer R, Wredenberg A, Freyer C, Barbaro M, Bruhn H, Emahazion T, Magnusson M, Wibom R, Zetterström RH, Wirta V, von Döbeln U, Wedell A.
BMC Genomics. 2014 Dec 11;15(1):1090. doi: 10.1186/1471-2164-15-1090.
PMID: 25495354

Overview

MIP performs whole genome or target region analysis of sequenced single-end and/or paired-end reads from the Illumina platform in fastq(.gz) format to generate annotated, ranked potential disease-causing variants.
MIP performs QC, alignment, coverage analysis, variant discovery and annotation, sample checks as well as ranking the found variants according to disease potential with a minimum of manual intervention. MIP is compatible with Scout for visualization of identified variants.
MIP rare disease DNA analyses single nucleotide variants (SNVs), insertions and deletions (INDELs) and structural variants (SVs).
MIP rare disease RNA analyses mono allelic expression, fusion transcripts, transcript expression and alternative splicing.
MIP rare disease DNA vcf rerun performs re-runs starting from BCFs or VCFs.
MIP has been in use in the clinical production at the Clinical Genomics facility at Science for Life Laboratory since 2014.

Example Usage

MIP analyse rare disease DNA

$ mip analyse rd_dna [case_id] --config_file [mip_config_dna.yaml] --pedigree_file [case_id_pedigree.yaml]

MIP analyse rare disease DNA VCF rerun

$ mip analyse rd_dna_vcf_rerun [case_id] --config_file [mip_config_dna_vcf_rerun.yaml] --vcf_rerun_file vcf.bcf --sv_vcf_rerun_file sv_vcf.bcf --pedigree_file [case_id_pedigree_vcf_rerun.yaml]

MIP analyse rare disease RNA

$ mip analyse rd_rna [case_id] --config_file [mip_config_rna.yaml] --pedigree_file [case_id_pedigree_rna.yaml]

Features

    Installation
      Simple automated install of all programs using conda/docker/singularity via supplied install application
      Downloads and prepares references in the installation process
    Autonomous
      Checks that all dependencies are fulfilled before launching
      Builds and prepares references and/or files missing before launching
      Decomposes and normalises reference(s) and variant VCF(s)
    Automatic
      A minimal amount of hands-on time
      Tracks and executes all recipes without manual intervention
      Creates internal queues at nodes to optimize processing
    Flexible
      Design your own workflow by turning on/off relevant recipes in predefined pipelines
      Restart an analysis from anywhere in your workflow
      Process one, or multiple samples
      Supply parameters on the command line, in a pedigree.yaml file or via config files
      Simulate your analysis before performing it
      Limit a run to a specific set of genomic intervals or chromosomes
      Use multiple variant callers for both SNV, INDELs and SV
      Use multiple annotation programs
      Optionally split data into clinical variants and research variants
    Fast
      Analyses an exome trio in approximately 4 h
      Analyses a genome in approximately 21 h
    Traceability
      Track the status of each recipe through dynamically updated status logs
      Recreate your analysis from the MIP log or generated config files
      Log sample meta-data and sequence meta-data
      Log version numbers of software and databases
      Checks sample integrity (sex, contamination, duplications, ancestry, inbreeding and relationship)
      Test data output file creation and integrity using automated tests
    Annotation
      Gene annotation
        Summarize over all transcripts and output on gene level
      Transcript level annotation
        Separate pathogenic transcripts for correct downstream annotation
      Annotate all alleles for a position
        Split multi-allelic records into single records to facilitate annotation
        Left align and trim variants to normalise them prior to annotation
      Extracts QC-metrics and stores them in YAML format
      Annotate coverage across genetic regions via Sambamba and Chanjo
    Standardized
      Use standard formats whenever possible
    Visualization
      Ranks variants according to pathogenic potential
      Output is directly compatible with Scout
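The features above reference a pedigree.yaml parameter file for supplying sample information. As a minimal sketch of what such a file might look like — the field names and values here are illustrative assumptions based on the example usage in this document, not the authoritative schema:

```yaml
## Hypothetical pedigree.yaml sketch; field names are assumptions for illustration
case: case_3
samples:
  - sample_id: 3-1-1A
    sex: female
    phenotype: affected
  - sample_id: 3-2-1U
    sex: male
    phenotype: unaffected
```

Consult the template directory shipped with MIP for the actual pedigree format.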

Getting Started

Installation

MIP is written in Perl and therefore requires that Perl is installed on your OS.
Prerequisites
    Perl, version 5.26.0 or above
    Cpanm
    Miniconda, version 4.5.11
    Singularity, version 3.2.1
We recommend miniconda for installing perl and cpanm libraries. However, perlbrew can also be used for installing and managing perl and cpanm libraries together with MIP. Installation instructions and setting up specific cpanm libraries using perlbrew can be found here.
Automated Installation (Linux x86_64)
Below are instructions for installing the Mutation Identification Pipeline (MIP).
1. Clone the official git repository
$ git clone https://github.com/Clinical-Genomics/MIP.git
$ cd MIP
2. Install required perl modules from cpan to a specified conda environment
$ bash mip_install_perl.sh -e [mip] -p [$HOME/miniconda3]
3. Test conda and mip installation files (optional, but recommended)
$ perl t/mip_install.test
A conda environment will be created in which MIP and all of its dependencies will be installed.
4. Install MIP
$ perl mip install --environment_name [mip] --reference_dir [$HOME/mip_references]
This will cache the containers that are used by MIP.
Note:
    For a full list of available options and parameters, run: $ perl mip install --help
5. Test your MIP installation (optional, but recommended)
Make sure to activate your MIP conda environment before executing prove.
$ prove t -r
$ perl t/mip_analyse_rd_dna.test
When setting up your analysis config file
A starting point for the config is provided in MIP's template directory. You will have to modify the load_env keys to whatever you named the environment. If you are using the default environment name the load_env part of the config should look like this:
load_env:
  mip:
    mip:
    method: conda

Usage

MIP is called from the command line and takes input from the command line (precedence) or falls back on defaults where applicable.
Lists are supplied as repeated flag entries on the command line or in the config using the YAML format for arrays. Only flags that will actually be used need to be specified, and MIP will check that all required parameters are set before submitting to SLURM.
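As a sketch, a list parameter such as --sample_ids could be expressed as a YAML array in the config (assuming the flag name maps directly to a config key):

```yaml
## Sketch: repeated command-line flags expressed as a YAML array
sample_ids:
  - 3-1-1A
  - 3-2-1U
  - 3-2-2U
```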
Recipe parameters can be set to "0" (=off), "1" (=on) and "2" (=dry run mode). Any recipe can be set to dry run mode and MIP will create the sbatch scripts, but not submit them to SLURM. MIP can be restarted from any recipe using the --start_with_recipe flag and after any recipe using the --start_after_recipe flag.
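A sketch of the recipe modes just described, set in a config file — the recipe names here are illustrative; list the real ones with the --help flag:

```yaml
## Illustrative recipe toggles: 0 = off, 1 = on, 2 = dry run (sbatch written, not submitted)
bwa_mem: 1
markduplicates: 2
chanjo_sexcheck: 0
```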
MIP will overwrite data files when reanalyzing, but keeps all "versioned" sbatch scripts for traceability.
You can always supply mip [process] [pipeline] --help to list all available parameters and defaults.
Example usage:
$ mip analyse rd_dna case_3 --sample_ids 3-1-1A --sample_ids 3-2-1U --sample_ids 3-2-2U --start_with_recipe samtools_merge --config 3_config.yaml
This will analyse case 3 using three individuals from that case, begin the analysis at the samtools_merge recipe (i.e. after BWA-MEM alignment), and use all parameter values as specified in the config file except those supplied on the command line, which take precedence.
Running programs in containers
Aside from a conda environment, MIP uses containers to run programs. You can use either Singularity or Docker as your container manager. Containers that are downloaded using MIP's automated installer need no extra setup. By default MIP will make the reference, outdata and temp directories available to the container. Extra directories can be made available to each recipe by adding the key recipe_bind_path in the config.
In the example below the config has been modified to include the infile directories for the bwa_mem recipe:
recipe_bind_path:
  bwa_mem:
    - <path_to_directory_with_fastq_files>
Input
    Fastq file directories can be supplied with --infile_dirs [PATH_TO_FASTQ_DIR=SAMPLE_ID]
    All references and template files should be placed directly in the reference directory specified by --reference_dir.
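The --infile_dirs flag pairs a fastq directory with a sample id using the PATH=SAMPLE_ID form. As a sketch, the same pairing might be carried in a config as a YAML mapping — the structure below is an assumption inferred from the command-line form, and the paths and sample ids are placeholders:

```yaml
## Sketch: fastq directory to sample-id mapping, assumed from PATH=SAMPLE_ID
infile_dirs:
  /path/to/case_3/fastq_3-1-1A: 3-1-1A
  /path/to/case_3/fastq_3-2-1U: 3-2-1U
```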
Meta-Data
Output
Analyses done per individual are found in each sample_id directory, and analyses involving all samples can be found in the case directory.
Sbatch Scripts
MIP will create sbatch scripts (.sh) and submit them in the proper order with attached dependencies to SLURM. These sbatch scripts are placed in the output script directory specified by --outscript_dir. The sbatch scripts are versioned and will not be overwritten if you begin a new analysis. Versioned "xargs" scripts will also be created where possible to maximize the use of the cores' processing power.
Data
MIP will place any generated data files in the output data directory specified by --outdata_dir. All data files are regenerated for each analysis. STDOUT and STDERR for each recipe are written to the recipe/info directory.