You might also want to know a few practical tips when doing PCA:

Here’s the code (NOTE: If you can’t copy and paste the code below, try downloading the PDF):

```r
## In this example, the data is in a matrix called data.matrix
## columns are individual samples (i.e. cells)
## rows are measurements taken for all the samples (i.e. genes)
## Just for the sake of the example, here's some made up data...
data.matrix <- matrix(nrow=100, ncol=10)
colnames(data.matrix) <- c(
  paste("wt", 1:5, sep=""),
  paste("ko", 1:5, sep=""))
rownames(data.matrix) <- paste("gene", 1:100, sep="")
for (i in 1:100) {
  wt.values <- rpois(5, lambda=sample(x=10:1000, size=1))
  ko.values <- rpois(5, lambda=sample(x=10:1000, size=1))
  data.matrix[i,] <- c(wt.values, ko.values)
}
head(data.matrix)
dim(data.matrix)

pca <- prcomp(t(data.matrix), scale=TRUE)

## plot pc1 and pc2
plot(pca$x[,1], pca$x[,2])

## make a scree plot
pca.var <- pca$sdev^2
pca.var.per <- round(pca.var/sum(pca.var)*100, 1)
barplot(pca.var.per, main="Scree Plot",
        xlab="Principal Component", ylab="Percent Variation")

## now make a fancy looking plot that shows the PCs and the variation:
library(ggplot2)
pca.data <- data.frame(Sample=rownames(pca$x),
                       X=pca$x[,1],
                       Y=pca$x[,2])
pca.data

ggplot(data=pca.data, aes(x=X, y=Y, label=Sample)) +
  geom_text() +
  xlab(paste("PC1 - ", pca.var.per[1], "%", sep="")) +
  ylab(paste("PC2 - ", pca.var.per[2], "%", sep="")) +
  theme_bw() +
  ggtitle("My PCA Graph")

## get the names of the top 10 measurements (genes) that contribute
## most to pc1.
loading_scores <- pca$rotation[,1]
gene_scores <- abs(loading_scores) ## get the magnitudes
gene_score_ranked <- sort(gene_scores, decreasing=TRUE)
top_10_genes <- names(gene_score_ranked[1:10])

top_10_genes ## show the names of the top 10 genes

pca$rotation[top_10_genes,1] ## show the scores (and +/- sign)

#######
##
## NOTE: Everything that follows is just bonus stuff.
## It simply demonstrates how to get the same
## results using "svd()" (Singular Value Decomposition) or using "eigen()"
## (Eigen Decomposition).
##
#######

############################################
##
## Now let's do the same thing with svd()
##
## svd() returns three things
## v = the "rotation" that prcomp() returns, this is a matrix of eigenvectors
##     in other words, a matrix of loading scores
## u = this is similar to the "x" that prcomp() returns. In other words,
##     sum(the rotation * the original data), but compressed to the unit vector
##     You can spread it out by multiplying by "d"
## d = this is similar to the "sdev" value that prcomp() returns (and thus
##     related to the eigen values), but not
##     scaled by sample size in an unbiased way (ie. 1/(n-1)).
##     For prcomp(), sdev = sqrt(var) = sqrt(ss(fit)/(n-1))
##     For svd(), d = sqrt(ss(fit))
##
############################################

svd.stuff <- svd(scale(t(data.matrix), center=TRUE))

## calculate the PCs
svd.data <- data.frame(Sample=colnames(data.matrix),
                       X=(svd.stuff$u[,1] * svd.stuff$d[1]),
                       Y=(svd.stuff$u[,2] * svd.stuff$d[2]))
svd.data

## alternatively, we could compute the PCs with the eigen vectors and the
## original data
svd.pcs <- t(t(svd.stuff$v) %*% t(scale(t(data.matrix), center=TRUE)))
svd.pcs[,1:2] ## the first two principal components

svd.df <- ncol(data.matrix) - 1
svd.var <- svd.stuff$d^2 / svd.df
svd.var.per <- round(svd.var/sum(svd.var)*100, 1)

ggplot(data=svd.data, aes(x=X, y=Y, label=Sample)) +
  geom_text() +
  xlab(paste("PC1 - ", svd.var.per[1], "%", sep="")) +
  ylab(paste("PC2 - ", svd.var.per[2], "%", sep="")) +
  theme_bw() +
  ggtitle("svd(scale(t(data.matrix), center=TRUE))")

############################################
##
## Now let's do the same thing with eigen()
##
## eigen() returns two things...
## vectors = eigen vectors (vectors of loading scores)
##           NOTE: pcs = sum(loading scores * values for sample)
## values = eigen values
##
############################################

cov.mat <- cov(scale(t(data.matrix), center=TRUE))
dim(cov.mat)

## since the covariance matrix is symmetric, we can tell eigen() to just
## work on the lower triangle with "symmetric=TRUE"
eigen.stuff <- eigen(cov.mat, symmetric=TRUE)
dim(eigen.stuff$vectors)
head(eigen.stuff$vectors[,1:2])

eigen.pcs <- t(t(eigen.stuff$vectors) %*% t(scale(t(data.matrix), center=TRUE)))
eigen.pcs[,1:2]

eigen.data <- data.frame(Sample=rownames(eigen.pcs),
                         ## eigen() flips the X-axis in this case, so we flip it back
                         X=(-1 * eigen.pcs[,1]),
                         ## X axis will be PC1, Y axis will be PC2
                         Y=eigen.pcs[,2])
eigen.data

eigen.var.per <- round(eigen.stuff$values/sum(eigen.stuff$values)*100, 1)

ggplot(data=eigen.data, aes(x=X, y=Y, label=Sample)) +
  geom_text() +
  xlab(paste("PC1 - ", eigen.var.per[1], "%", sep="")) +
  ylab(paste("PC2 - ", eigen.var.per[2], "%", sep="")) +
  theme_bw() +
  ggtitle("eigen on cov(t(data.matrix))")
```


Hi Josh, really helpful code! I have a question about the loadings: the loading scores are extracted from pca$rotation, and the values are between -1 and 1. What is the difference between these loadings and the values given by prcomp(t(data.matrix), scale=TRUE)$x?


I’m pretty sure I covered this in detail in the video, but just in case you missed it: The “x” values are the locations for the samples in the PCA graph (should you choose to draw it). The loadings reflect how much influence each variable has on each axis in the PCA graph.
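To make that relationship concrete, here is a small check (a sketch that regenerates made-up data like the tutorial's, since the rpois() values vary between runs): the coordinates in pca$x are just the centered-and-scaled data multiplied by the loading scores in pca$rotation.

```r
## Sketch: pca$x equals the scaled data times the loadings (pca$rotation).
set.seed(1)
data.matrix <- matrix(rpois(1000, lambda=100), nrow=100, ncol=10)
pca <- prcomp(t(data.matrix), scale=TRUE)

## reconstruct the sample coordinates by hand from the loadings
manual.x <- scale(t(data.matrix)) %*% pca$rotation
all.equal(unname(pca$x), unname(manual.x))  ## TRUE
```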


Hi, Josh. Excellent and detailed presentation. But I am a little confused about whether read counts should be used for PCA, or RPKM/FPKM/TPM. After searching for answers, it seemed that some people use read counts and some use RPKM/FPKM. Thanks a lot!


You should use “normalized” counts (i.e. you should use RPKM or TPM etc.) for PCA. This was just an example and I didn’t want to get into the details of RPKM/TPM – especially for people that are not biologists. But yes, when you’re working with real sequencing data, use the normalized counts, not the raw counts.
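A common pattern (a sketch only; tpm.matrix here is a hypothetical genes-by-samples matrix of TPM values, filled with made-up numbers) is to log-transform the normalized counts before running prcomp():

```r
## Hypothetical TPM matrix: 100 genes x 10 samples (made-up values)
set.seed(1)
tpm.matrix <- matrix(rexp(1000, rate=0.01), nrow=100, ncol=10)

## log2(x + 1) tames the long right tail of expression values;
## the +1 avoids taking log(0)
log.tpm <- log2(tpm.matrix + 1)
pca <- prcomp(t(log.tpm), scale=TRUE)
nrow(pca$x)  ## 10 -- one row per sample, as before
```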


OK. Thanks again for your kind reply.


Hi Josh, really clear explanation. I am wondering what the argument “scale” is doing in prcomp(). I found that the PC values with “scale=TRUE” and “scale=FALSE” are very different. The “wt” cluster is still on the left of the x-axis and the “ko” cluster is on the right, but the relative positions of individual samples within each cluster differ.

My own data contains many 0 counts, so it cannot be scaled. In this case, I don’t know whether it is okay to use the “scale=FALSE” result.


I’m glad you like the explanation. I explain the “scale” option in another video. Here’s the link: https://youtu.be/oRvgq966yZg
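Related to the 0-counts issue: with scale=TRUE, prcomp() divides each variable by its standard deviation after centering, so a gene that never varies (e.g. 0 in every sample) causes a divide-by-zero error. One common workaround (a sketch, not the only option) is to drop zero-variance genes before running PCA:

```r
## made-up genes-x-samples matrix with one gene that is 0 everywhere
set.seed(1)
data.matrix <- matrix(rpois(1000, lambda=100), nrow=100, ncol=10)
data.matrix[1, ] <- 0  ## this gene never varies, so it can't be scaled

samples.by.genes <- t(data.matrix)
keep <- apply(samples.by.genes, 2, sd) > 0  ## TRUE for genes that vary
pca <- prcomp(samples.by.genes[, keep], scale=TRUE)
nrow(pca$x)  ## 10 -- the samples are unaffected; only the constant gene is gone
```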


In this video (https://statquest.org/2017/11/27/statquest-pca-in-r-clearly-explained/), why don’t we build a 10 x 100 matrix instead of a 100 x 10 matrix, so that we don’t need to use the t() function inside prcomp()?

Is there a particular reason?

Thank you very much.

Best Regards


Genetics data is almost always in the 100 x 10 format (rows = genes, columns = samples), and I’m a geneticist, so that’s what I see most of the time. However, other fields do it differently, so it’s important to know that when using the prcomp() function in R, the samples are supposed to be rows, regardless of how the original data is formatted.
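A quick illustration of that convention (a sketch with made-up data): prcomp() treats each row as one sample, which is why genes-by-samples data gets transposed first.

```r
## genes-by-samples data (100 genes, 10 samples), as in the tutorial
set.seed(1)
genes.by.samples <- matrix(rpois(1000, lambda=100), nrow=100, ncol=10)

## transpose so that prcomp() sees 10 rows = 10 samples
pca <- prcomp(t(genes.by.samples), scale=TRUE)
nrow(pca$x)  ## 10 -- one row of PC coordinates per sample
```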


What should we do if my data set contains “NA”s? Is there a way to still include those samples? (I have a huge dataset with 2000 samples and more than a hundred “genes” (I am working with traits, but whatever really), and most samples are missing at least 20 traits.) The 5 most important traits are present for all samples.


It’s possible that there are some PCA packages out there that can handle NAs. I don’t know of any, but that doesn’t mean they don’t exist, so look around. Another thing you can do is impute the missing values. One way to impute values is with a Random Forest (I mention this method specifically because I have a video that describes how to do it; however, there are other methods). Here’s the link on how to impute values with a Random Forest: https://youtu.be/6EXPYzbfLCE
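As a quick-and-dirty fallback (a sketch only; mean imputation is cruder than model-based approaches and can blur real structure), each NA can be replaced with the mean of that trait across the samples that do have it, after which prcomp() runs normally:

```r
## made-up 10-samples x 5-traits matrix with a few NAs scattered in
set.seed(1)
m <- matrix(rnorm(50), nrow=10, ncol=5)
m[sample(length(m), 5)] <- NA

## replace each NA with the mean of the observed values in its column (trait)
for (j in 1:ncol(m)) {
  m[is.na(m[, j]), j] <- mean(m[, j], na.rm=TRUE)
}
any(is.na(m))  ## FALSE -- the matrix is now complete
pca <- prcomp(m, scale=TRUE)
```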


Hi Josh!

Thanks a lot for a very informative and good tutorial.

Is it possible to plot the column names instead of the row names in the ggplot? I tried to change that in the code:

```r
pca.data <- data.frame(Sample=rownames(pca$x),
                       X=pca$x[,1],
                       Y=pca$x[,2])
```

using columns instead of rownames, but I got an error message.

Thanks in advance!


Since there are only 10 column names (since there are 10 different samples), but there are 100 row names (since there are 100 genes, or 100 variables that we measure per sample), it doesn’t make sense to plot 10 samples on a PCA plot and then try to label them with 100 names. However, you can re-do the PCA plot to use the samples as the variables and the variables as the samples. This would plot 100 genes on the PCA plot, clustered by sample. Then you could use the gene names as labels.
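Concretely (a sketch regenerating made-up data like the tutorial's): leaving out the t() makes the 100 genes the rows, so they become the points on the PCA plot and can be labeled by name.

```r
set.seed(1)
data.matrix <- matrix(rpois(1000, lambda=100), nrow=100, ncol=10)
rownames(data.matrix) <- paste("gene", 1:100, sep="")

## no t(): prcomp() now treats each gene as a "sample" (a point on the plot)
pca.genes <- prcomp(data.matrix, scale=TRUE)
nrow(pca.genes$x)  ## 100 -- one point per gene, labeled via rownames(pca.genes$x)
```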


By chance, would it be possible to extract gene names after performing t-SNE, similar to prcomp? Does t-SNE have an option similar to pca$rotation?


t-SNE has a fundamentally different approach to clustering samples and has no concept of a loading score or a way to rank variables based on importance to the clustering. Thus, I would be very surprised if t-SNE had something like pca$rotation, and I would be very skeptical of how to interpret it. Here’s a StatQuest that describes how t-SNE works: https://youtu.be/NEaUSP4YerM
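For reference, here is what a t-SNE run looks like in R (a sketch assuming the Rtsne package, which is one common implementation, is installed): the result is just a matrix of 2-D coordinates, with no rotation/loadings component to pull gene names out of.

```r
library(Rtsne)  ## assumption: the Rtsne package is installed

set.seed(1)
data.matrix <- matrix(rpois(1000, lambda=100), nrow=100, ncol=10)

## with only 10 samples, perplexity must be small (Rtsne requires
## 3 * perplexity < number of samples - 1)
tsne <- Rtsne(t(data.matrix), perplexity=2)
dim(tsne$Y)  ## 10 x 2 -- just coordinates, nothing like pca$rotation
```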


There is another question I would like to ask: how can I run a LASSO-modified PCA? Is there a special function for it?


Hi Josh, I am having trouble understanding what is going on in lines 34 to 37 of the code. I am trying to use ggplot on my data set. I have a gene expression dataset with transcript IDs as the first column, and every other column is a sample containing TPM values. In my preprocessing, I removed the first column, removed all rows that sum to zero, and added 1 to all values in the dataset (because I take the log later on). What do I need to do to have it ready for ggplot?

Thanks for all your great content.


Hello, thank you for this useful tutorial.

I’m trying to do a PCA analysis on my data, but I’m having some trouble constructing my matrix. I have 3 tests (I put them in columns) and 47 mice from 3 groups (13 Ctrl, 20 Pres, and 14 Psus, in rows). I also had to insert my data. Here is the matrix part of my code:

```r
data.matrix <- matrix(nrow=47, ncol=3)
colnames(data.matrix) <- c(
  paste("Avtes", 1:2, sep=""),
  paste("EP", 1:2, sep=""),
  paste("Sen", 1:2, sep="")
rownames(data.matrix) <- c(
  paste("Ctrl", 1:13, sep=""),
  paste("Pres", 14:34, sep="")),
  paste("Psus", 35:47, sep="")) {
Ctrl.values <- c('79','44','33','90','159','173','122','240','184','18','162','230','16','0.54','0.46','0.72','0.60','0.68','0.54','0.66','0.27','0.47','0.44','0.37','0.05','0.62''25','0','0','0','11','66','0','0','0','0','0','0','0')
Pres.values <- c('200','40','400','55','45','9','171','480','215','480','48','26','366','148','75','42.8','97.5','25.6','91.9','194','120','0.70','0.60','0.72','0.92','0.63','0.49','0.48','0.51','0.51','0.46','0.72','0.55','0.46','0.59','0.47','0.47','0.65','0.34','0.64','0.64','0.35','0','5','0','0','0','18','0','0','0','3,33','0','0','0','0','0','9','7,66','9','46,5','0','3,33')
Psus.values <- c('480','480','480','480','480','480','366','245','480','266.2','480','480','480','0.49','0.94','0.47','0.71','0.81','0.56','0.79','0.74','0.55','0.60','0.86','0.79','0.94','0','43,33','0','11,66','0','0','16','0','26,66','18','45','2,16','57,33','9,5')
data.matrix[i,] <- c(Avtes.values, EP.values, Sen.values)
}
head(data.matrix)
dim(data.matrix)
```

Can anyone review it please?

Thank you !

PS: I'm not used to programming in R (and in general), so, excuse my potentially dumb mistakes.
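A hedged sketch of how such a matrix might be built (the runif() values below are placeholders standing in for the real measurements, which should be entered as plain numbers: no quotes, decimal points rather than commas, one numeric vector of length 47 per test rather than one vector per mouse group):

```r
## Sketch: 47 mice (rows) x 3 tests (columns), with placeholder values
data.matrix <- matrix(nrow=47, ncol=3)
colnames(data.matrix) <- c("Avtes", "EP", "Sen")  ## one column per test
rownames(data.matrix) <- c(paste("Ctrl", 1:13, sep=""),
                           paste("Pres", 1:20, sep=""),
                           paste("Psus", 1:14, sep=""))

## fill one whole column (all 47 mice) per test; no for loop needed
data.matrix[, "Avtes"] <- runif(47, 0, 480)  ## placeholder measurements
data.matrix[, "EP"]    <- runif(47, 0, 1)    ## placeholder measurements
data.matrix[, "Sen"]   <- runif(47, 0, 60)   ## placeholder measurements

head(data.matrix)
dim(data.matrix)  ## 47 x 3

## the mice are already in the rows here, so no t() is needed
pca <- prcomp(data.matrix, scale=TRUE)
```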
