Assignment for Class 40

Your name:
Your email address:


Introduction to Bayesian Analyses

MrBayes 3.1.2 for the Mac can be downloaded from HERE. Save it to your Desktop. Double-click the "mrbayes-3.1.2.sit" archive to expand it. Inside the "mrbayes-3.1.2" folder you will find "MrBayes3.1.2" (not to be confused with "MrBayes3.1.2p", the parallel-processing version that runs on multiple CPUs).

When you are ready to begin (not now :-), you will double-click the "MrBayes3.1.2" program to launch MrBayes.

On the bioinformatics cluster, you can type mb to start the program.

(In case you work at home on a PC, go here and follow instructions to download and install MrBayes.)

Exercise 1:
The goal of this exercise is to learn how to use MrBayes to reconstruct phylogenies.

  1. Save testseq1c.nex into the folder that also contains the MrBayes program (folder mrbayes-3.1.2, probably on your Desktop). This is the dataset of vacuolar ATPase A subunit sequences you used earlier, but in the NEXUS format that MrBayes reads. (The NEXUS format is used by PAUP and many other programs; it allows trees and commands to be passed between programs. Clustal and many other programs write NEXUS-formatted files, but you usually need to open the file in a text editor and change things like the way gaps are treated, or the data type.) You might need to rename the file after downloading. Start MrBayes by double-clicking the MrBayes3.1.2 icon (do NOT use the one ending in "p"). Have a look at the file in a text editor and make sure you know the name of the file and the directory where it is stored (difficulties often arise because one program or another has added an extension to the file)!
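    To give you an idea of what MrBayes expects, here is a minimal sketch of a NEXUS file with made-up taxa and sequences (this is NOT the contents of testseq1c.nex, just an illustration of the format):

```
#NEXUS
begin data;
  dimensions ntax=4 nchar=8;
  format datatype=protein gap=-;
  matrix
    taxon1  MKVL-AAH
    taxon2  MKVLSAAH
    taxon3  MRVLSAGH
    taxon4  MRVLSTGH
  ;
end;

begin mrbayes;
  prset aamodelpr=fixed(jones);
  mcmcp ngen=20000 samplefreq=50;
end;
```

    The data block holds the alignment; an optional mrbayes block at the end holds commands that MrBayes executes automatically after loading the file.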

  2. At the MrBayes command prompt type "execute testseq1c.nex". This will load the data into the program.

  3. Here is an explanation of the commands at the end of testseq1c.nex in the MrBayes block:

    prset aamodelpr=fixed(jones) [this sets the substitution matrix to JTT, a modern version of PAM]
    mcmcp samplefreq=50 printfreq=50 nchains=2 startingtree=random [this sets the frequency with which the "robot" reports results to the screen and to the files (different files for parameters (.p) and trees (.t))]
    mcmcp savebrlens=yes [mcmcp sets parameters for the chain. savebrlens tells it to save the trees with branchlengths]
    [mcmcp filename=testseq1c] [if you use this command, it tells the program to save the files under a certain name. This is handy if you want to read in data from a previous run, but it usually is easier to go with the default, which in this case is testseq1c.nex]
    mcmcp ngen=20000 [this sets the number of generations to run the chain for to 20000 generations]

    "prset" sets the priors for the model used to calculate likelihoods. In this case we choose the substitution parameters from the JTT amino acid substitution model (Jones et al., 1992).
    "mcmcp" sets parameters for the Markov Chain Monte Carlo run: we set it to sample every 50th generation, print results to the screen every 50th generation, run 2 chains simultaneously, start with a random tree, and save branch lengths.

  4. At the MrBayes command prompt type "mcmc". This actually runs the chain, which we set to run for 20000 generations. The program runs two analyses in parallel (by default each with 4 chains, three of which are heated; we use only two to make things a little faster -- it definitely is a good idea to run mb on a fast, remote computer). The smaller the reported average standard deviation of split frequencies, the more reliable the result (i.e., your run is close enough to an infinitely long one). When the value is below .015, or when your patience is exhausted, terminate the run by typing no at the prompt.

  5. Make sure you have typed "no" at the Continue with analysis? (yes/no): prompt.

  6. After the run is finished, the "sump" command will plot the logL vs. generation number, which allows you to determine the necessary burnin (you want to discard as "burnin" those samples where the logL is still rising steadily).

    [Rather than using the sump command, you can also import the parameter file into Excel and plot the logL values as a chart in Excel. See below.]

    During the start of the run, the likelihood rapidly increases by orders of magnitude. If the first samples are included in the plot, one really cannot see whether the likelihood values fluctuate around a constant value. You can exclude the first couple of samples by specifying a burnin: sump burnin=20 excludes the first 20 samples. Note that the y-axis is rescaled automatically, which gives you a better spread to judge whether the chain is "mixing well" or still in its initial climb.

    "sumt burnin=20" or "sumt filename=testseq1c burnin=20", where you need to substitute '20' with the number you obtained in the previous step of the exercise (note: the burnin value is the number of generations divided by 50, since we sample every 50th generation). This command creates a consensus tree and shows the posterior probabilities of the clades. You can take a look at the tree on the screen, or in TreeView or njplot by loading the testseq1.con file.
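    The bookkeeping between generations and sampled rows can be sketched in a few lines of Python (ngen and samplefreq are the values set by the mcmcp commands of this exercise; the burnin of 1000 generations is just an assumed example read off a sump plot):

```python
# Converting between generations and sampled rows for sump/sumt.
# ngen and samplefreq come from the mcmcp commands in this exercise;
# burnin_generations is a hypothetical value read off the logL plot.
ngen = 20000        # generations the chain was run for
samplefreq = 50     # one sample written every 50th generation

total_samples = ngen // samplefreq             # rows in the .p/.t files
burnin_generations = 1000                      # assumed: plot levels off here
burnin_samples = burnin_generations // samplefreq

print(total_samples)   # 400
print(burnin_samples)  # 20 -> "sumt burnin=20"
```

    The value you pass to sump/sumt is always in units of samples, not generations.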

    Which branch in the tree is the longest?
    How long is it?
    What is the measure?
    Can you explain in a few words why it is important to exclude a 'burnin' from our analyses?

    Type "quit" at the prompt to exit MrBayes.


MrBayes by example: Identification of sites under positive selection in a protein


Professor Walter M. Fitch and assistant research biologist Robin M. Bush of UCI's Department of Ecology and Evolutionary Biology, working with researchers at the Centers for Disease Control and Prevention, studied the evolution of a prevalent form of the influenza A virus during an 11-year period from 1986 to 1997. They discovered that viruses having mutations in certain parts of an important viral surface protein were more likely than other strains to spawn future influenza lineages. Human susceptibility to infection depends on immunity gained during past bouts of influenza; thus, new viral mutations are required for new epidemics to occur. Knowing which currently circulating mutant strains are more likely to have successful offspring may help in vaccine strain selection. The researchers' findings appear in the Dec. 3 issue of Science magazine.

Fitch and his fellow researchers followed the evolutionary pattern of the influenza virus, one that involves a never-ending battle between the virus and its host. The human body fights the invading virus by making antibodies against it. The antibodies recognize the shape of proteins on the viral surface. Previous infections only prepare the body to fight viruses with recognizable shapes. Thus, only those viruses that have undergone mutations that change their shape can cause disease. Over time, new strains of the virus continually emerge, spread and produce offspring lineages that undergo further mutations. This process is called antigenic drift. "The cycle goes on and on: new antibodies, new mutants," Fitch said.

The research into the virus' genetic data focused on the evolution of the hemagglutinin gene, the gene that codes for the major influenza surface protein. Fitch and fellow researchers constructed "family trees" for viral strains from 11 consecutive flu seasons. Each branch on the tree represents a new mutant strain of the virus. They found that the viral strains undergoing the greatest number of amino acid changes in specified positions of the hemagglutinin gene were most closely related to future influenza lineages in nine of the 11 flu seasons tested.

By studying the family trees of various flu strains, Fitch said, researchers can attempt to predict the evolution of an influenza virus and thus potentially aid in the development of more effective influenza vaccines.

The research team is currently expanding its work to include all three groups of circulating influenza viruses, hoping that contrasting their evolutionary strategies may lend more insight into the evolution of influenza.

Along with Fitch and Bush, Catherine A. Bender, Kanta Subbarao and Nancy J. Cox of the Centers for Disease Control and Prevention participated in the study.

A talk by Walter Fitch (slides and sound) is here



The goal of this exercise is to detect sites in hemagglutinin that are under positive selection.

Since the analysis takes a very long time to run (several days), here are the saved results of the MrBayes run: Fitch_HA.nex.p, Fitch_HA.nex.t.

The original data file is flu_data.paup. The dataset is obtained from an article by Yang et al., 2000. The file used for MrBayes is here.

The MrBayes block used to obtain results above is:

begin mrbayes;
set autoclose=yes;
lset nst=2 rates=gamma nucmodel=codon omegavar=Ny98;
mcmcp samplefreq=500 printfreq=500;
mcmc ngen=500000;
sump burnin=50;
sumt burnin=50;
end;

Selecting nucmodel=codon with omegavar=Ny98 specifies a model in which, for every codon, the ratio of the rate of non-synonymous to synonymous substitutions is considered. This ratio is called OMEGA. The Ny98 model considers three different omegas: one equal to 1 (no selection, the site is neutral); a second with omega < 1 (these sites are under purifying selection); and a third with omega > 1 (these sites are under positive or diversifying selection). (The problem with this model is that only three distinct omegas are estimated, together with, for each site, the probability of falling into each of these three classes. If the omega > 1 value is estimated to be very large because one site has a large omega, the other sites might not have a high probability of having the same omega, even though they might also be under positive selection. This leads to the site with the largest omega being identified with confidence, while the others get more moderate probabilities of being under positive selection.)

Note: Version 2.0 of MrBayes had a model that estimates omega for each site individually; the new version only allows the Ny98 model as described above.

  1. First, you need to determine how many generations to discard as burnin (i.e., the number of samples you will have to discard). Import the file Fitch_HA.nex.p.txt into Excel (link is above; open a new spreadsheet, select Data, Get External Data, Import Text File (if your file does not have a txt extension, select enable all document types) -- select Windows format, otherwise accept the defaults; if you use an older version of Excel you need to import the file into two separate worksheets, since it has too many columns; to import the remaining columns of the spreadsheet see step 2 below) and plot the number of generations versus the -LnL values. Determine after how many generations the graph becomes "stationary". The burnin value is that number of generations divided by 500 (since only every 500th generation was sampled); i.e., the burnin value is roughly equal to the number of rows to discard (not quite, because there is a header). To determine the burnin more accurately, you need to rescale the Y-axis (click on the Y-axis -- if you aim accurately, you'll get a box that allows rescaling).
    The result (scatterplot of LogL versus generation) might look like this:
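    As an alternative to Excel, the generation and -LnL columns can be pulled out of the .p file with a short Python sketch. This assumes the usual MrBayes .p layout (one "[ID: ...]" comment line, then a tab-separated header, then numeric rows); the filename is the one from this exercise:

```python
def read_p_file(path):
    """Parse a MrBayes .p parameter file.

    Assumes the usual layout: one "[ID: ...]" comment line, a
    tab-separated header line, then one numeric row per sampled
    generation. Returns (header, rows).
    """
    with open(path) as fh:
        fh.readline()                                  # skip the [ID: ...] line
        header = fh.readline().rstrip("\n").split("\t")
        rows = []
        for line in fh:
            fields = line.split()
            if fields:
                rows.append([float(x) for x in fields])
    return header, rows

# Hypothetical usage:
#   header, rows = read_p_file("Fitch_HA.nex.p")
#   gen = [r[header.index("Gen")] for r in rows]
#   lnl = [r[header.index("LnL")] for r in rows]
# then plot gen vs. lnl with any charting tool and look for the
# generation after which the values fluctuate around a constant level.
```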

  2. Load Fitch_HA.nex.p into Excel. Go to Data->Get External Data->Import Text File. (In older versions of Excel, you need to load the data into 2 sheets, since old Excel cannot load more than 256 columns per sheet. To load the data, create a new Excel spreadsheet. Go to Data->Get External Data->Import Text File, and load the first 256 columns.

    Go to a separate sheet, and repeat the "Get External Data" command to load the rest of the data; you need to select (block out) and exclude the first 256 columns (the last imported codon ended on nucleotide 555) -- you need to click the radio button AFTER you have selected the columns to skip!) This file contains the posterior probabilities for each codon (columns) at each sampled generation (rows).

  3. Calculate the average posterior probability for each site of being under positive selection. (Do not forget to exclude the first N rows as burnin; you determined the value of N in the first question of this exercise. To be clear on where the burnin ends, you might want to highlight the rows representing the burnin and select a different font color.) Use the AVERAGE() function of Excel: enter the formula in a cell below the values for the individual trees, starting in column pr+(1,2,3), and copy the formula to all columns.
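    The same averaging can be done outside Excel. A minimal Python sketch (the row layout is assumed to match the pr+(...) columns of the .p file, one list of floats per sampled generation):

```python
def column_means(rows, burnin):
    """Average each column over the post-burnin samples.

    rows:   one list of floats per sampled generation
            (e.g. the pr+(1,2,3), pr+(4,5,6), ... columns).
    burnin: number of leading rows to discard, determined
            from the -LnL plot.
    """
    kept = rows[burnin:]
    if not kept:
        raise ValueError("burnin discards all samples")
    ncols = len(kept[0])
    return [sum(row[i] for row in kept) / len(kept) for i in range(ncols)]
```

    For example, column_means([[0.1, 0.9], [0.2, 0.8], [0.3, 0.7]], burnin=1) averages only the last two rows, exactly as excluding the burnin rows from AVERAGE() does in Excel.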

  4. Plot the average posterior probability vs. site number (select the row in which you calculated the averages, then click Graph, and select a bar graph). Write down the codon positions for a few sites with the highest posterior probability of being positively selected (the columns are named pr+(1,2,3), pr+(4,5,6), and so on; pr+(1,2,3) means the probability of codon #1 (nucleotides #1, #2 and #3) being under positive selection). (Examples are here and here; check all sheets. Note: these were calculated from only 30 000 generations.)

  5. Create a histogram for the omega value <1 (denoted omega(-)) and the omega>1 (denoted omega(+)).

    You can either use the histo.xls from HERE. The original source comes from this page.

    Copy the values from the relevant column (e.g., omega(-)) and paste them into the "Data" tab of histo.xls, under the "X".
    Change the "Bin Range" by making the first and last midpoints 0.2 and 0.7, respectively. Change the increment to 0.03.
    The histogram will now be displayed in the "Histogram" tab.

    OR you can use this R-script to generate the histogram.

    Save the R script (control-click, save...) in the same folder where you saved the Fitch_HA.nex.p file. When you execute the R script you need to tell it the name of your data file (Fitch_HA.nex.p), the column which contains the data to plot (5 for the omega < 1 values, or 7 for the omega > 1 values in this case), the number of generations to discard as burnin, and how many classes to divide the data into (50 works well here).

    In the terminal, change directory (cd) to where you saved the R script and data file. To plot a histogram of the omega < 1 data, inserting your chosen burnin value for [BURNIN], copy and paste the following into the terminal and press [enter]: R CMD BATCH '--args infile="Fitch_HA.nex.p" column=5 burnin=[BURNIN] classes=50' hist_from_table.R. This will plot the histogram into a PDF file called MB_plots_[column name]. Change the 5 to a 7 to plot the omega > 1 data.

Again, do not forget to discard the burnin from consideration. What is the shape of the distribution?
The histogram from omega <1 should look something like this:
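If neither Excel nor R is at hand, the binning itself is simple. This Python sketch mimics what histo.xls and the R script do (equal-width bins; the default of 50 bins matches the classes=50 suggestion above):

```python
def bin_counts(values, nbins=50):
    """Equal-width histogram bins over the post-burnin omega samples.

    Returns (midpoints, counts); feed these to any bar-chart tool
    to reproduce the histogram.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for v in values:
        i = min(int((v - lo) / width), nbins - 1)  # clamp the maximum value
        counts[i] += 1
    mids = [lo + (i + 0.5) * width for i in range(nbins)]
    return mids, counts
```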

  1. Determine the 95% credibility interval for the omega < 1 value. To do this, sort the omega(-) column in ascending order (select the data you want to sort, and go to Data->Sort...). Again, do not forget to discard the burnin; the easiest might be to actually delete it. After sorting, exclude 2.5% of the data at the top and at the bottom. The range of the remaining data gives you the 95% credibility interval. (The values for the 95% credibility interval should be approximately 0.31 to 0.52; using a burnin of 55500 generations, 2.5% of the remainder is about 7 rows.)
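    The sort-and-trim recipe can also be written as a short Python sketch (the 2.5% trimming at each tail is the same procedure as in the Excel steps; burnin is again in units of sampled rows):

```python
def credibility_interval(values, burnin, level=0.95):
    """Central credibility interval from MCMC samples.

    Discards the first `burnin` samples, sorts the rest, and trims
    (1 - level)/2 of the samples at each tail -- the same procedure
    as the Excel sort-and-exclude recipe above.
    """
    kept = sorted(values[burnin:])
    drop = int(round(len(kept) * (1 - level) / 2))
    return kept[drop], kept[-drop - 1]
```

    For 1000 post-burnin samples, 25 are dropped at each end and the interval runs from the 26th-smallest to the 26th-largest value.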

  2. The structure of hemagglutinin has been crystallized and is publicly available through the PDB. Download the PDB file here and examine it with SPDBV. Chain A of the PDB file corresponds to the sequences we did our analysis with (color the molecule according to chain). Below is a comparison of one of the sequences we used for the analyses with the sequence for which the structure was determined:

    Using this alignment as a guide, map the sites which have the highest probability of belonging to the class with omega > 1. Where are these sites located in the protein? (Reminders: position numbers in the NEXUS file refer to the nucleotide sequence, while the structure is based on the amino acid sequence -- take the third codon position and divide by 3 to find the amino acid. You only need to be concerned with chain A!)
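    The nucleotide-to-amino-acid bookkeeping in the reminder can be expressed as a one-line Python helper (1-based positions, matching the pr+(...) column labels):

```python
def codon_number(nuc_position):
    """Map a 1-based nucleotide position in the alignment to its
    1-based codon (= amino acid) number. The column pr+(4,5,6)
    covers nucleotides 4-6, i.e. codon 2 (6 / 3 = 2, as in the
    "divide the third codon position by 3" reminder).
    """
    return (nuc_position + 2) // 3
```

    For example, nucleotide 555 (the last position imported in the first Excel sheet) falls in codon 185.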

