BQSR stands for Base Quality Score Recalibration. In a nutshell, it is a data pre-processing step that detects systematic errors made by the sequencing machine when it estimates the accuracy of each base call.
Note that this base recalibration process (BQSR) should not be confused with variant recalibration (VQSR), which is a sophisticated filtering technique applied on the variant callset produced in a later step. The developers who named these methods wish to apologize sincerely to anyone, especially Spanish-speaking users, who gets tripped up by the similarity of these names.
Contents
- Overview
- Base recalibration procedure details
- Important factors for successful recalibration
- Examples of pre- and post-recalibration metrics
- Recalibration report
1. Overview
It's all about the base, 'bout the base (quality scores)
Base quality scores are per-base estimates of error emitted by the sequencing machines; they express how confident the machine was that it called the correct base each time. For example, let's say the machine reads an A nucleotide, and assigns a quality score of Q20 -- in Phred-scale, that means it's 99% sure it identified the base correctly. This may seem high, but it does mean that we can expect it to be wrong in one case out of 100; so if we have several billion base calls (we get ~90 billion in a 30x genome), at that rate the machine would make the wrong call in 900 million bases -- which is a lot of bad bases. The quality score each base call gets is determined through some dark magic jealously guarded by the manufacturer of the sequencing machines.
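For reference, the Phred scale maps a quality score Q to an error probability p, which is where those numbers come from:

\[ p = 10^{-Q/10} \qquad\Rightarrow\qquad Q20:\; p = 10^{-2} = 0.01, \qquad 9\times10^{10} \times 0.01 = 9\times10^{8} \text{ expected errors} \]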
Why does it matter? Because our short variant calling algorithms rely heavily on the quality score assigned to the individual base calls in each sequence read. This is because the quality score tells us how much we can trust that particular observation to inform us about the biological truth of the site where that base aligns. If we have a base call that has a low quality score, that means we're not sure we actually read that A correctly, and it could actually be something else. So we won't trust it as much as other base calls that have higher qualities. In other words we use that score to weigh the evidence that we have for or against a variant allele existing at a particular site.
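As a simplified illustration of that weighing (this is a generic textbook formulation, not GATK's exact likelihood model), a caller can score a base call b against a candidate allele a using the error probability p implied by the base's quality score:

\[ P(b \mid a) = \begin{cases} 1 - p & \text{if } b = a \\ p/3 & \text{otherwise} \end{cases} \qquad \text{with } p = 10^{-Q/10} \]

Under this scheme a Q40 base (p = 0.0001) carries far more weight for or against a candidate allele than a Q10 base (p = 0.1).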
Okay, so what is base recalibration?
Unfortunately the scores produced by the machines are subject to various sources of systematic (non-random) technical error, leading to over- or under-estimated base quality scores in the data. Some of these errors are due to the physics or the chemistry of how the sequencing reaction works, and some are probably due to manufacturing flaws in the equipment.
Base quality score recalibration (BQSR) is a process in which we apply machine learning to model these errors empirically and adjust the quality scores accordingly. For example, we can identify that, for a given run, whenever we called two A nucleotides in a row, the next base we called had a 1% higher rate of error than its reported quality implied; so any base call that comes after AA in a read should have its quality score reduced to match that empirical error rate. We do that over several different covariates (mainly sequence context and position in read, or cycle) in a way that is additive. So the same base may have its quality score increased for one reason and decreased for another.
This allows us to get more accurate base qualities overall, which in turn improves the accuracy of our variant calls. To be clear, we can't correct the base calls themselves, i.e. we can't determine whether that low-quality A should actually have been a T -- but we can at least tell the variant caller more accurately how far it can trust that A. Note that in some cases we may find that some bases should have a higher quality score, which allows us to rescue observations that otherwise may have been given less consideration than they deserve. Anecdotally our impression is that sequencers are more often over-confident than under-confident, but we do occasionally see runs from sequencers that seemed to suffer from low self-esteem.
This procedure can be applied to BAM files containing data from any sequencing platform that outputs base quality scores on the expected scale. We have run it ourselves on data from several generations of Illumina, SOLiD, 454, Complete Genomics, and Pacific Biosciences sequencers.
That sounds great! How does it work?
The base recalibration process involves two key steps: first the BaseRecalibrator tool builds a model of covariation based on the input data and a set of known variants, producing a recalibration file; then the ApplyBQSR tool adjusts the base quality scores in the data based on the model, producing a new BAM file. The known variants are used to mask out bases at sites of real (expected) variation, to avoid counting real variants as errors. Outside of the masked sites, every mismatch is counted as an error. The rest is mostly accounting.
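In GATK4 terms, a minimal two-step invocation looks something like this (file names here are placeholders; substitute your own reference, known-sites resource, and BAM):

# Step 1: build the covariation model and write the recalibration file
gatk BaseRecalibrator \
    -I input.bam \
    -R reference.fasta \
    --known-sites dbsnp.vcf.gz \
    -O recal_data.table

# Step 2: rewrite the base quality scores using that model
gatk ApplyBQSR \
    -I input.bam \
    -R reference.fasta \
    --bqsr-recal-file recal_data.table \
    -O recalibrated.bam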
There is an optional but highly recommended step that involves building a second model and generating before/after plots to visualize the effects of the recalibration process. This is useful for quality control purposes.
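Here is a sketch of that optional step, continuing with the placeholder file names from above (AnalyzeCovariates needs R available locally to render the plots):

# Build a second model, this time from the recalibrated data
gatk BaseRecalibrator \
    -I recalibrated.bam \
    -R reference.fasta \
    --known-sites dbsnp.vcf.gz \
    -O recal_data.after.table

# Generate the before/after plots for quality control
gatk AnalyzeCovariates \
    -before recal_data.table \
    -after recal_data.after.table \
    -plots recalibration_plots.pdf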
2. Base recalibration procedure details
BaseRecalibrator builds the model
To build the recalibration model, this first tool goes through all of the reads in the input BAM file and tabulates data about the following features of the bases:
- read group the read belongs to
- quality score reported by the machine
- machine cycle producing this base (Nth cycle = Nth base from the start of the read)
- current base + previous base (dinucleotide)
For each bin, we count the number of bases within the bin and how often such bases mismatch the reference base, excluding loci known to vary in the population, according to the known variants resource (typically dbSNP). This information is output to a recalibration file in GATKReport format.
Note that the recalibrator applies a Yates-style correction for low-occupancy bins. Rather than inferring the true Q score from # mismatches / # bases, we actually infer it from (# mismatches + 1) / (# bases + 2). This deals very nicely with overfitting: the correction has only a minor impact on data sets with billions of bases, but it is critical for avoiding overconfidence in rare bins in sparse data.
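In other words, the empirical quality for a bin is just the Phred-scaled version of that smoothed error rate:

\[ \hat{Q} = -10 \log_{10} \left( \frac{\#\text{mismatches} + 1}{\#\text{bases} + 2} \right) \]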
ApplyBQSR adjusts the scores
This second tool goes through all the reads again, using the recalibration file to adjust each base's score based on which bins it falls in. So effectively the new quality score is (summarized in the formula after this list):
- the sum of the global difference between reported quality scores and the empirical quality
- plus the quality-bin-specific shift
- plus the cycle x qual and dinucleotide x qual effects
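Schematically (this summarizes the additive model in the list above; the implementation details differ), the adjustment is:

\[ Q_{\text{recal}} = Q_{\text{reported}} + \Delta Q_{\text{global}} + \Delta Q_{\text{qual}} + \Delta Q_{\text{cycle} \times \text{qual}} + \Delta Q_{\text{context} \times \text{qual}} \]

where each \( \Delta Q \) term is the difference between the empirical and reported quality for the corresponding bin, as tabulated in the recalibration file.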
Following recalibration, the read quality scores are much closer to their empirical scores than before. This means they can be used in a statistically robust manner for downstream processing, such as variant calling. In addition, by accounting for quality changes by cycle and sequence context, we can identify truly high quality bases in the reads, often finding a subset of bases that are Q30 even when no bases were originally labeled as such.
3. Important factors for successful recalibration
Read groups
The recalibration system is read-group aware, meaning it uses @RG tags to partition the data by read group. This allows it to perform the recalibration per read group, which reflects which library a read belongs to and which lane of the flowcell it was sequenced on. We know that systematic biases can occur in one lane but not another, or in one library but not another, so being able to recalibrate within each unit of sequence data makes the modeling process more accurate. As a corollary, that means it's okay to run BQSR on BAM files with multiple read groups. However, please note that the memory requirements scale linearly with the number of read groups in the file, so files with many read groups can require a significant amount of RAM to store all of the covariate data.
Amount of data
A critical determinant of the quality of the recalibration is the number of observed bases and mismatches in each bin. This procedure will not work well on a small number of aligned reads. We usually expect to see more than 100M bases per read group; as a rule of thumb, larger numbers will work better.
No excuses
You should almost always perform recalibration on your sequencing data. In human data, given the exhaustive databases of variation we have available, almost all of the remaining mismatches -- even in cancer -- will be errors, so it's super easy to ascertain an accurate error model for your data, which is essential for downstream analysis. For non-human data it can be a little bit more work since you may need to bootstrap your own set of variants if there are no such resources already available for your organism, but it's worth it.
Here's how you would bootstrap a set of known variants (see the command sketch after this list):
- First do an initial round of variant calling on your original, unrecalibrated data.
- Then take the variants that you have the highest confidence in and use that set as the database of known variants by feeding it as a VCF file to the BaseRecalibrator.
- Finally, do a real round of variant calling with the recalibrated data. These steps could be repeated several times until convergence.
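A minimal sketch of one bootstrap round in GATK4 commands (file names are placeholders and the filtering thresholds are purely illustrative, not recommendations):

# 1. Initial round of variant calling on the unrecalibrated data
gatk HaplotypeCaller \
    -R reference.fasta \
    -I input.bam \
    -O raw_variants.vcf.gz

# 2. Keep only the highest-confidence calls to serve as "known" sites
gatk VariantFiltration \
    -V raw_variants.vcf.gz \
    --filter-expression "QD < 2.0 || FS > 60.0" \
    --filter-name "low_confidence" \
    -O flagged_variants.vcf.gz
gatk SelectVariants \
    -V flagged_variants.vcf.gz \
    --exclude-filtered \
    -O confident_variants.vcf.gz

# 3. Recalibrate against the bootstrapped known sites, then call variants again
gatk BaseRecalibrator \
    -I input.bam \
    -R reference.fasta \
    --known-sites confident_variants.vcf.gz \
    -O recal_data.table
gatk ApplyBQSR \
    -I input.bam \
    -R reference.fasta \
    --bqsr-recal-file recal_data.table \
    -O recalibrated.bam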
The main cases where you really might need to skip BQSR are when you have too little data (some small gene panels have that problem), or when you're working with a really weird organism that displays insane amounts of variation.
4. Examples of pre- and post-recalibration metrics
This shows recalibration results from a lane sequenced at the Broad by an Illumina GA-II in February 2010. This is admittedly not very recent but the results are typical of what we still see on some more recent runs, even if the overall quality of sequencing has improved. You can see there is a significant improvement in the accuracy of the base quality scores after applying the recalibration procedure. Note that the plots shown below are not the same as the plots that are produced by the AnalyzeCovariates tool.
5. Recalibration report
The recalibration report contains the following 5 tables:
- Arguments Table -- a table with all the arguments and their values
- Quantization Table
- ReadGroup Table
- Quality Score Table
- Covariates Table
Arguments Table
This is the table that contains all the arguments used to run BQSR for this dataset.
#:GATKTable:true:1:17::;
#:GATKTable:Arguments:Recalibration argument collection values used in this run
Argument Value
covariate null
default_platform null
deletions_context_size 6
force_platform null
insertions_context_size 6
...
Quantization Table
The GATK offers native support for quantizing base qualities. The GATK quantization procedure uses a statistical approach to determine the best binning system that minimizes the error introduced by amalgamating the different qualities present in the specific dataset. When running BQSR, a table with the base counts for each base quality is generated, along with a 'default' quantization table. This table is a required parameter for any other GATK tool if you want to quantize your quality scores.
The default behavior (currently) is to use no quantization. You can override this with the engine argument -qq: with -qq 0 you don't quantize qualities, while with -qq N you recalculate the quantization bins using N bins.
#:GATKTable:true:2:94:::;
#:GATKTable:Quantized:Quality quantization map
QualityScore Count QuantizedScore
0 252 0
1 15972 1
2 553525 2
3 2190142 9
4 5369681 9
9 83645762 9
...
ReadGroup Table
This table contains the empirical quality scores for each read group, for mismatches, insertions, and deletions.
#:GATKTable:false:6:18:%s:%s:%.4f:%.4f:%d:%d:;
#:GATKTable:RecalTable0:
ReadGroup EventType EmpiricalQuality EstimatedQReported Observations Errors
SRR032768 D 40.7476 45.0000 2642683174 222475
SRR032766 D 40.9072 45.0000 2630282426 213441
SRR032764 D 40.5931 45.0000 2919572148 254687
SRR032769 D 40.7448 45.0000 2850110574 240094
SRR032767 D 40.6820 45.0000 2820040026 241020
SRR032765 D 40.9034 45.0000 2441035052 198258
SRR032766 M 23.2573 23.7733 2630282426 12424434
SRR032768 M 23.0281 23.5366 2642683174 13159514
SRR032769 M 23.2608 23.6920 2850110574 13451898
SRR032764 M 23.2302 23.6039 2919572148 13877177
SRR032765 M 23.0271 23.5527 2441035052 12158144
SRR032767 M 23.1195 23.5852 2820040026 13750197
SRR032766 I 41.7198 45.0000 2630282426 177017
SRR032768 I 41.5682 45.0000 2642683174 184172
SRR032769 I 41.5828 45.0000 2850110574 197959
SRR032764 I 41.2958 45.0000 2919572148 216637
SRR032765 I 41.5546 45.0000 2441035052 170651
SRR032767 I 41.5192 45.0000 2820040026 198762
Quality Score Table
This table contains the empirical quality scores for each read group and original quality score, for mismatches, insertions, and deletions.
#:GATKTable:false:6:274:%s:%s:%s:%.4f:%d:%d:;
#:GATKTable:RecalTable1:
ReadGroup QualityScore EventType EmpiricalQuality Observations Errors
SRR032767 49 M 33.7794 9549 3
SRR032769 49 M 36.9975 5008 0
SRR032764 49 M 39.2490 8411 0
SRR032766 18 M 17.7397 16330200 274803
SRR032768 18 M 17.7922 17707920 294405
SRR032764 45 I 41.2958 2919572148 216637
SRR032765 6 M 6.0600 3401801 842765
SRR032769 45 I 41.5828 2850110574 197959
SRR032764 6 M 6.0751 4220451 1041946
SRR032767 45 I 41.5192 2820040026 198762
SRR032769 6 M 6.3481 5045533 1169748
SRR032768 16 M 15.7681 12427549 329283
SRR032766 16 M 15.8173 11799056 309110
SRR032764 16 M 15.9033 13017244 334343
SRR032769 16 M 15.8042 13817386 363078
...
Covariates Table
This table has the empirical qualities for each covariate used in the dataset. The default covariates are cycle and context. In the current implementation, context is of a fixed size (default 6). Each context and each cycle will have an entry in this table, stratified by read group and original quality score.
#:GATKTable:false:8:1003738:%s:%s:%s:%s:%s:%.4f:%d:%d:;
#:GATKTable:RecalTable2:
ReadGroup QualityScore CovariateValue CovariateName EventType EmpiricalQuality Observations Errors
SRR032767 16 TACGGA Context M 14.2139 817 30
SRR032766 16 AACGGA Context M 14.9938 1420 44
SRR032765 16 TACGGA Context M 15.5145 711 19
SRR032768 16 AACGGA Context M 15.0133 1585 49
SRR032764 16 TACGGA Context M 14.5393 710 24
SRR032766 16 GACGGA Context M 17.9746 1379 21
SRR032768 45 CACCTC Context I 40.7907 575849 47
SRR032764 45 TACCTC Context I 43.8286 507088 20
SRR032769 45 TACGGC Context D 38.7536 37525 4
SRR032768 45 GACCTC Context I 46.0724 445275 10
SRR032766 45 CACCTC Context I 41.0696 575664 44
SRR032769 45 TACCTC Context I 43.4821 490491 21
SRR032766 45 CACGGC Context D 45.1471 65424 1
SRR032768 45 GACGGC Context D 45.3980 34657 0
SRR032767 45 TACGGC Context D 42.7663 37814 1
SRR032767 16 AACGGA Context M 15.9371 1647 41
SRR032764 16 GACGGA Context M 18.2642 1273 18
SRR032769 16 CACGGA Context M 13.0801 1442 70
SRR032765 16 GACGGA Context M 15.9934 1271 31
...