In the era of omics data, precision medicine has become the new paradigm of cancer treatment. Among the available omics techniques, gene expression profiling in particular has been increasingly used to classify tumors into subtypes with different biological behavior. Cancer subtype discovery is usually approached from two possible perspectives:
- Using the molecular data alone with unsupervised techniques such as clustering analysis.
- Using supervised techniques focusing entirely on survival data.
The problem of finding patient subgroups with survival differences while maintaining cluster consistency can be viewed as a bi-objective problem, with a trade-off between the separability of the different groups and the ability of a given signature to consistently distinguish patients with different clinical outcomes. This gives rise to a set of optimal solutions, also known as Pareto-optimal solutions. To address this trade-off, we combined the advantages of clustering methods for grouping heterogeneous omics data with the search properties of genetic algorithms in GSgalgoR: a flexible yet robust multi-objective meta-heuristic for disease subtype discovery based on an elitist non-dominated sorting genetic algorithm (NSGA-II), driven by the underlying premise of maximizing survival differences between groups while obtaining highly consistent and robust clusters.
In the GSgalgoR package, the NSGA-II framework is used to find multiple Pareto-optimal solutions that classify patients according to their gene expression patterns. Basically, NSGA-II starts with a population of competing individuals, which are evaluated under a set of fitness functions that estimate the survival differences and the cohesiveness of the different transcriptomic groups. Solutions are then ranked and sorted according to their non-domination level, which affects how they are chosen to undergo the so-called “evolutionary operators” such as crossover and mutation. Once a set of well-suited solutions has been selected and reproduced, a new offspring of individuals composed of a mixture of the “genetic information” of the parents is obtained. Parents and offspring are pooled, and the best-ranked solutions are selected and passed to the next generation, which starts the same process over again.
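To illustrate the ranking step described above, the following is a minimal, self-contained sketch of Pareto (non-dominated) ranking for a two-objective maximization problem. It is not GSgalgoR's internal implementation; the functions dominates() and pareto_rank() and the toy fitness values are invented here for illustration only.
# Sketch: a solution dominates another if it is at least as good in both
# objectives and strictly better in at least one (maximization)
dominates <- function(a, b) all(a >= b) && any(a > b)
# Assign Pareto ranks: rank 1 solutions are not dominated by any other,
# rank 2 solutions are dominated only by rank 1 solutions, and so on
pareto_rank <- function(fitness) {
    rank <- rep(NA_integer_, nrow(fitness))
    current <- 1L
    remaining <- seq_len(nrow(fitness))
    while (length(remaining) > 0) {
        front <- remaining[!sapply(remaining, function(i)
            any(sapply(remaining, function(j)
                j != i && dominates(fitness[j, ], fitness[i, ]))))]
        rank[front] <- current
        remaining <- setdiff(remaining, front)
        current <- current + 1L
    }
    rank
}
# Toy fitness values: cluster cohesiveness vs. survival difference
fit <- cbind(SC.Fit   = c(0.10, 0.20, 0.15, 0.05, 0.12, 0.18),
             Surv.Fit = c(800,  600,  700,  900,  650,  500))
pareto_rank(fit)
#> [1] 1 1 1 1 2 2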
To install the GSgalgoR package, start R and enter:
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("GSgalgoR")
library(GSgalgoR)
Alternatively, you can install GSgalgoR from GitHub using the devtools package:
devtools::install_github("https://github.com/harpomaxx/GSgalgoR")
library(GSgalgoR)
To standardize the structure of genomic data, we use the ExpressionSet structure for the examples given in this guide. ExpressionSet objects are formed mainly by an expression matrix (accessed with exprs()), feature annotation (accessed with fData()) and phenotypic/clinical data (accessed with pData()).
To start testing GSgalgoR, we will use two breast cancer datasets, namely the UPP and the TRANSBIG datasets. Additionally, we will use the PAM50 centroids to perform breast cancer sample classification. The datasets can be accessed from the following Bioconductor packages:
BiocManager::install("breastCancerUPP",version = "devel")
BiocManager::install("breastCancerTRANSBIG",version = "devel")
library(breastCancerTRANSBIG)
library(breastCancerUPP)
Also, some basic packages are needed to run the examples in this vignette:
library(GSgalgoR)
library(Biobase)
library(genefu)
library(survival)
library(survminer)
library(ggplot2)
data(pam50)
To access the ExpressionSets we use:
data(upp)
Train<- upp
rm(upp)
data(transbig)
Test<- transbig
rm(transbig)
#To access gene expression data
train_expr<- exprs(Train)
test_expr<- exprs(Test)
#To access feature data
train_features<- fData(Train)
test_features<- fData(Test)
#To access clinical data
train_clinic <- pData(Train)
test_clinic <- pData(Test)
Galgo can accept any numeric data, such as probe intensities from microarray experiments or normalized RNA-seq counts; nevertheless, features are expected to be scaled across the dataset before being plugged into the Galgo framework. For PAM50 classification, gene symbols are expected, so probesets are mapped to their respective gene symbols. Probesets mapping to multiple genes are expanded, while genes mapped by multiple probes are collapsed by keeping, for each duplicated gene, the probe with the highest variance.
# Custom function to drop duplicated genes (keep genes with highest variance)
DropDuplicates <- function(eset, map = "Gene.symbol"){
    # Drop NA's
    drop <- which(is.na(fData(eset)[, map]))
    eset <- eset[-drop, ]
    # Drop duplicates
    drop <- NULL
    Dup <- as.character(unique(fData(eset)[which(duplicated(
        fData(eset)[, map])), map]))
    Var <- apply(exprs(eset), 1, var)
    for(j in Dup){
        pos <- which(fData(eset)[, map] == j)
        drop <- c(drop, pos[-which.max(Var[pos])])
    }
    eset <- eset[-drop, ]
    featureNames(eset) <- fData(eset)[, map]
    return(eset)
}
# Custom function to expand probesets mapping to multiple genes
expandProbesets <- function(eset, sep = "///", map = "Gene.symbol"){
    x <- lapply(featureNames(eset), function(x) strsplit(x, sep)[[1]])
    y <- lapply(as.character(fData(eset)[, map]), function(x) strsplit(x, sep))
    eset <- eset[order(sapply(x, length)), ]
    x <- lapply(featureNames(eset), function(x) strsplit(x, sep)[[1]])
    y <- lapply(as.character(fData(eset)[, map]), function(x) strsplit(x, sep))
    idx <- unlist(sapply(1:length(x), function(i) rep(i, length(x[[i]]))))
    idy <- unlist(sapply(1:length(y), function(i) rep(i, length(y[[i]]))))
    xx <- !duplicated(unlist(x))
    idx <- idx[xx]
    idy <- idy[xx]
    x <- unlist(x)[xx]
    y <- unlist(y)[xx]
    eset <- eset[idx, ]
    featureNames(eset) <- x
    fData(eset)[, map] <- x
    fData(eset)$gene <- y
    return(eset)
}
Train=DropDuplicates(Train)
Train=expandProbesets(Train)
#Drop NAs in survival
Train <- Train[,!is.na(
survival::Surv(time=pData(Train)$t.rfs,event=pData(Train)$e.rfs))]
Test=DropDuplicates(Test)
Test=expandProbesets(Test)
#Drop NAs in survival
Test <-
Test[,!is.na(survival::Surv(
time=pData(Test)$t.rfs,event=pData(Test)$e.rfs))]
#Determine common probes (Genes)
Int= intersect(rownames(Train),rownames(Test))
Train= Train[Int,]
Test= Test[Int,]
identical(rownames(Train),rownames(Test))
#> [1] TRUE
For simplicity and speed, we will create a reduced expression matrix for the examples.
#First we will get PAM50 centroids from genefu package
PAM50Centroids <- pam50$centroids
PAM50Genes <- pam50$centroids.map$probe
PAM50Genes<- featureNames(Train)[ featureNames(Train) %in% PAM50Genes]
#Now we sample 200 random genes from expression matrix
Non_PAM50Genes<- featureNames(Train)[ !featureNames(Train) %in% PAM50Genes]
Non_PAM50Genes <- sample(Non_PAM50Genes,200, replace=FALSE)
reduced_set <- c(PAM50Genes, Non_PAM50Genes)
#Now we get the reduced training and test sets
Train<- Train[reduced_set,]
Test<- Test[reduced_set,]
We apply robust linear scaling as proposed in the referenced publication:
exprs(Train) <- t(apply(exprs(Train),1,genefu::rescale,na.rm=TRUE,q=0.05))
exprs(Test) <- t(apply(exprs(Test),1,genefu::rescale,na.rm=TRUE,q=0.05))
train_expr <- exprs(Train)
test_expr <- exprs(Test)
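As an optional sanity check (not part of the original workflow), the distribution of the scaled values can be inspected; with this quantile-based rescaling most values are expected to fall approximately within the [0, 1] range.
# Optional check: summary of the robustly rescaled training expression values
summary(as.vector(train_expr))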
The ‘Surv’ object is created with the Surv() function of the survival package, using the phenotypic data contained in the corresponding datasets and accessed with pData().
train_clinic <- pData(Train)
test_clinic <- pData(Test)
train_surv <- survival::Surv(time=train_clinic$t.rfs,event=train_clinic$e.rfs)
test_surv <- survival::Surv(time=test_clinic$t.rfs,event=test_clinic$e.rfs)
The main function in this package is galgo(). It accepts an expression matrix and a survival object and searches for robust gene expression signatures related to a given outcome. The function has several parameters that can be adjusted according to the characteristics of the analysis to be performed. The principal parameters, set in the chunk below, are:
- population: the number of candidate solutions evolved in each generation.
- generations: the number of iterations of the algorithm.
- nCV: the number of cross-validation folds used to assess the robustness of the partitions.
- distancetype: the distance measure used for clustering (here "pearson").
- TournamentSize: the number of solutions that compete in each round of tournament selection.
- period: the time period (in the same units as the survival data, here days) over which survival differences are evaluated.
# For testing reasons it is set to a low number but ideally should be above 100
population <- 30
# For testing reasons it is set to a low number but ideally should be above 150
generations <-15
nCV <- 5
distancetype <- "pearson"
TournamentSize <- 2
period <- 3650
set.seed(264)
output <- GSgalgoR::galgo(generations = generations,
population = population,
prob_matrix = train_expr,
OS = train_surv,
nCV = nCV,
distancetype = distancetype,
TournamentSize = TournamentSize,
period = period)
print(class(output))
#> [1] "galgo.Obj"
#> attr(,"package")
#> [1] "GSgalgoR"
The output of the galgo() function is an object of class galgo.Obj with two slots:
- Solutions: an l x (n + 5) matrix, where n is the number of features evaluated and l is the number of solutions obtained.
- ParetoFront: a list of length equal to the number of generations run in the algorithm. Each element is an l x 2 matrix, where l is the number of solutions obtained and the columns are the SC Fitness and the Survival Fitness values, respectively.
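As a brief sketch (assuming the slot names Solutions and ParetoFront given above), the two slots can also be inspected directly with the standard S4 @ accessor:
# Sketch: direct slot access on the galgo.Obj object created above
dim(output@Solutions)       # l rows (solutions) and n + 5 columns
length(output@ParetoFront)  # one element per generation run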
For easier interpretation of the galgo.Obj, the output can be transformed into a list or a data.frame object.
The to_list() function restructures a galgo.Obj into a list that is easier to understand and use. This output is particularly useful if one wants to select a given solution and use its outputs in a new classifier. The output of type list has a length equal to the number of solutions obtained by the Galgo algorithm. Basically, this output is a list of lists, where each element is named after the corresponding solution (Solution.n, where n is the number assigned to that solution) and contains all the constituents of that solution, with the following structure:
outputList <- to_list(output)
head(names(outputList))
#> [1] "Solution.1" "Solution.2" "Solution.3" "Solution.4" "Solution.5"
#> [6] "Solution.6"
To evaluate the structure of the first solution we can run:
outputList[["Solution.1"]]
#> $Genes
#> [1] "PHGDH" "KRT5" "RRM2" "SFRP1" "SLC39A6"
#> [6] "BIRC5" "CDC20" "CDH3" "BCL2" "MMP11"
#> [11] "EXO1" "MELK" "KRT17" "ESR1" "PGR"
#> [16] "KIF2C" "GRB7" "FGFR4" "FOXC1" "CCNE1"
#> [21] "NAT1" "TYMS" "MLPH" "CEP55" "ACTR3B"
#> [26] "CCNB1" "BAG1" "MDM2" "TNS1" "FRAT2"
#> [31] "FOXD1" "KRT23" "DPP4" "GJA9" "SOLH"
#> [36] "ACTR3P1" "ANG" "PMAIP1" "AARS" "WIPI2"
#> [41] "HARS2" "SC65" "ABCB6" "SIKE1" "RAPGEF3"
#> [46] "HPS5" "C9orf53" "STAG1" "NCKAP1" "ROCK2"
#> [51] "PRM1" "IMP3" "NBPF9" "TMEM66" "CTSC"
#> [56] "ZDHHC24" "GFPT2" "CEND1" "MUSK" "PPL"
#> [61] "EXOC7" "ZDHHC8P1" "TKT" "EGR4" "TBC1D3"
#> [66] "LOC100287697" "CRTC3" "SIRT5" "AIFM1" "RAB23"
#> [71] "HOXA6" "APOBEC3F" "DDX46" "SPCS3" "SNX19"
#> [76] "RPF1" "OR2A5" "TPSG1" "RUFY2" "HTR2C"
#> [81] "CHRNB3" "LRRC36" "UBASH3A" "GABRA1" "FOXN3"
#> [86] "MAPRE2" "PCOLCE" "SH3D19" "RHOBTB2" "ZNF193"
#> [91] "HSD17B11" "MYOM2" "FBXO28" "MBTPS2" "UCKL1"
#> [96] "KCNN1" "EIF4E2" "TMEM222" "DGCR11" "GNE"
#> [101] "HRH3" "TCL1A" "PPP1R1A" "GAGE12F" "PEX16"
#> [106] "CWF19L1" "SLC26A6" "SPPL2B" "BEST1" "ZNF460"
#> [111] "CYP2A7" "LOC92973" "TET3" "ORAI2" "TEAD4"
#> [116] "N4BP2L2" "TMEM49" "HJURP" "NFYC" "SEMA4A"
#> [121] "SRPX" "RPSA" "NARFL" "CROCCL1" "AP1AR"
#> [126] "MCF2L2" "DDX50" "RGS6" "TRHDE" "TREM2"
#> [131] "MYCN" "LOC652291" "RPLP0P6" "HOMER1" "C1orf114"
#> [136] "ZBTB16" "SH3TC2" "RPL23AP22" "ST7" "MCM7"
#> [141] "FAM64A" "NUDT21" "HPX" "IRGQ" "NOSIP"
#> [146] "KAP2.1B" "CDK10" "KIN" "NOTCH1" "VARS"
#> [151] "PRDX4" "NINL" "DDX24" "DYRK1B" "DEPDC1"
#> [156] "ALDOB" "OLIG2" "GREM2" "ACTR3" "ZNF81"
#> [161] "UTY" "IDH3A" "DCHS2" "SIPA1L1" "POFUT1"
#>
#> $k
#> [1] 6
#>
#> $SC.Fit
#> [1] 0.0452201
#>
#> $Surv.Fit
#> [1] 888.3443
#>
#> $rank
#> [1] 1
#>
#> $CrowD
#> [1] Inf
The to_dataframe() function restructures a galgo.Obj into a data.frame that is easier to understand and use. The output data frame has m x n dimensions, where the row names (m) are the solutions obtained by the Galgo algorithm. The columns have the following structure:
outputDF <- to_dataframe(output)
head(outputDF)
#> Genes k SC.Fit Surv.Fit Rank CrowD
#> Solutions.1 PHGDH, K.... 6 0.04522010 888.3443 1 Inf
#> Solutions.2 RRM2, SF.... 2 0.20154784 603.2835 1 Inf
#> Solutions.3 PHGDH, R.... 2 0.16897911 778.5902 1 0.6386015
#> Solutions.4 RRM2, CD.... 3 0.08083693 793.5661 1 0.6280816
#> Solutions.5 MYBL2, C.... 6 0.06638887 846.8008 1 0.3030152
#> Solutions.6 PHGDH, R.... 2 0.18224530 711.2830 1 0.2454359
Once we obtain the galgo.Obj from the output of galgo(), we can plot the obtained Pareto front and see how it evolved through the tested number of generations:
plot_pareto(output)
Breast cancer (BRCA) is the most common neoplasm in women to date and one of the best-studied cancer types. Currently, numerous molecular alterations are well known for this type of cancer, and many transcriptomic signatures have been developed for it. In this regard, Perou et al. proposed one of the first molecular subtype classifications based on the transcriptomic profile of the tumor, which recapitulates naturally occurring gene expression patterns that encompass different functional pathways and patient outcomes. These subtypes (LumA, LumB, Basal-like, HER2 and Normal-like) have a strong overlap with the classical histopathological classification of BRCA tumors and might affect decision making when used to decide on chemotherapy in certain cases.
To evaluate Galgo’s performance, we will use the two already scaled and reduced BRCA gene expression datasets and compare the Galgo results with the widely used intrinsic molecular subtype PAM50 classification. Galgo performs feature selection by design, so the previous feature reduction step is not strictly necessary to use GSgalgoR (although feature selection might speed up GSgalgoR runs); nevertheless, appropriate gene expression scaling is critical when running GSgalgoR.
The scaled expression values of each patient are compared with the prototypical centroids using a correlation coefficient (Spearman's rank correlation in the example below), and each patient is assigned the label of the closest centroid.
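To make the classification rule explicit, here is a minimal toy sketch of nearest-centroid assignment by correlation. The matrices toy_expr and toy_centroids are invented for illustration only; the actual classification below uses GSgalgoR's cluster_classify().
# Toy sketch: each sample gets the label of the centroid it correlates with best
# (genes in rows, samples/centroids in columns)
set.seed(1)
toy_expr <- matrix(rnorm(50 * 10), nrow = 50,
                   dimnames = list(paste0("g", 1:50), paste0("s", 1:10)))
toy_centroids <- matrix(rnorm(50 * 3), nrow = 50,
                        dimnames = list(paste0("g", 1:50), c("A", "B", "C")))
# Correlation between every sample and every centroid
cors <- stats::cor(toy_expr, toy_centroids, method = "spearman")
toy_labels <- colnames(toy_centroids)[apply(cors, 1, which.max)]
table(toy_labels)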
#The reduced UPP dataset will be used as training set
train_expression <- exprs(Train)
train_clinic<- pData(Train)
train_features<- fData(Train)
train_surv<- survival::Surv(time=train_clinic$t.rfs,event=train_clinic$e.rfs)
#The reduced TRANSBIG dataset will be used as test set
test_expression <- exprs(Test)
test_clinic<- pData(Test)
test_features<- fData(Test)
test_surv<- survival::Surv(time=test_clinic$t.rfs,event=test_clinic$e.rfs)
#PAM50 centroids
centroids<- pam50$centroids
#Extract features from both data.frames
inBoth<- Reduce(intersect, list(rownames(train_expression),rownames(centroids)))
#Classify samples
PAM50_train<- cluster_classify(train_expression[inBoth,],centroids[inBoth,],
method = "spearman")
table(PAM50_train)
#> PAM50_train
#> 1 2 3 4 5
#> 22 30 94 73 15
PAM50_test<- cluster_classify(test_expression[inBoth,],centroids[inBoth,],
method = "spearman")
table(PAM50_test)
#> PAM50_test
#> 1 2 3 4 5
#> 45 26 80 44 3
# Classify samples using genefu
#annot<- fData(Train)
#colnames(annot)[3]="Gene.Symbol"
#PAM50_train<- molecular.subtyping(sbt.model = "pam50",
# data = t(train_expression), annot = annot,do.mapping = TRUE)
Once the patients are classified according to their closest centroid, we can evaluate the survival curves for the different subtypes in each of the datasets:
surv_formula <-
as.formula("Surv(train_clinic$t.rfs,train_clinic$e.rfs)~ PAM50_train")
tumortotal1 <- surv_fit(surv_formula,data=train_clinic)
tumortotal1diff <- survdiff(surv_formula)
tumortotal1pval<- pchisq(tumortotal1diff$chisq, length(tumortotal1diff$n) - 1,
lower.tail = FALSE)
p<-ggsurvplot(tumortotal1,
data=train_clinic,
risk.table=TRUE,
pval=TRUE,
palette="dark2",
title="UPP breast cancer \n PAM50 subtypes survival",
surv.scale="percent",
conf.int=FALSE,
xlab="time (days)",
ylab="survival(%)",
xlim=c(0,3650),
break.time.by = 365,
ggtheme = theme_minimal(),
risk.table.y.text.col = TRUE,
risk.table.y.text = FALSE,censor=FALSE)
print(p)
surv_formula <-
as.formula("Surv(test_clinic$t.rfs,test_clinic$e.rfs)~ PAM50_test")
tumortotal2 <- surv_fit(surv_formula,data=test_clinic)
tumortotal2diff <- survdiff(surv_formula)
tumortotal2pval<- pchisq(tumortotal2diff$chisq, length(tumortotal2diff$n) - 1,
lower.tail = FALSE)
p<-ggsurvplot(tumortotal2,
data=test_clinic,
risk.table=TRUE,
pval=TRUE,
palette="dark2",
title="TRANSBIG breast cancer \n PAM50 subtypes survival",
surv.scale="percent",
conf.int=FALSE,
xlab="time (days)",
ylab="survival(%)",
xlim=c(0,3650),
break.time.by = 365,
ggtheme = theme_minimal(),
risk.table.y.text.col = TRUE,
risk.table.y.text = FALSE,
censor=FALSE)
print(p)
Now we run Galgo to find cohesive and clinically meaningful signatures for BRCA, using the UPP data as the training set and the TRANSBIG data as the test set:
population <- 15
generations <-5
nCV <- 5
distancetype <- "pearson"
TournamentSize <- 2
period <- 3650
Run Galgo on the training set
output= GSgalgoR::galgo(generations = generations,
population = population,
prob_matrix = train_expression,
OS=train_surv,
nCV= nCV,
distancetype=distancetype,
TournamentSize=TournamentSize,
period=period)
print(class(output))
plot_pareto(output)
output_df<- to_dataframe(output)
NonDom_solutions<- output_df[output_df$Rank==1,]
# N of non-dominated solutions
nrow(NonDom_solutions)
#> [1] 5
# N of partitions found
table(NonDom_solutions$k)
#>
#> 2 5
#> 4 1
#Average N of genes per signature
mean(unlist(lapply(NonDom_solutions$Genes,length)))
#> [1] 105.8
#SC range
range(NonDom_solutions$SC.Fit)
#> [1] 0.05485651 0.15461697
# Survival fitnesss range
range(NonDom_solutions$Surv.Fit)
#> [1] 592.2302 698.6729
Now we select the best-performing solution for each number of partitions (k) according to the C-index:
RESULT<- non_dominated_summary(output=output,
OS=train_surv,
prob_matrix= train_expression,
distancetype =distancetype
)
best_sol=NULL
for(i in unique(RESULT$k)){
best_sol=c(
best_sol,
RESULT[RESULT$k==i,"solution"][which.max(RESULT[RESULT$k==i,"C.Index"])])
}
print(best_sol)
#> [1] "Solutions.1" "Solutions.3"
Now we create the prototypic centroids of the selected solutions
CentroidsList <- create_centroids(output,
solution_names = best_sol,
trainset = train_expression)
We will test the Galgo signatures found with the UPP training set in an independent test set (TRANSBIG)
train_classes<- classify_multiple(prob_matrix=train_expression,
centroid_list= CentroidsList,
distancetype = distancetype)
test_classes<- classify_multiple(prob_matrix=test_expression,
centroid_list= CentroidsList,
distancetype = distancetype)
To calculate the train and test C-index, the risk coefficients are calculated for each subclass in the training set and then used to predict the risk of the different groups in the test set. This is particularly important for signatures with a high number of partitions, where the survival differences between groups might overlap and change their relative order, which is of great importance for the C-index calculation.
Prediction.models<- list()
for(i in best_sol){
OS<- train_surv
predicted_class<- as.factor(train_classes[,i])
predicted_classdf <- as.data.frame(predicted_class)
colnames(predicted_classdf)<- i
surv_formula <- as.formula(paste0("OS~ ",i))
coxsimple=coxph(surv_formula,data=predicted_classdf)
Prediction.models[[i]]<- coxsimple
}
C.indexes<- data.frame(train_CI=rep(NA,length(best_sol)),
test_CI=rep(NA,length(best_sol)))
rownames(C.indexes)<- best_sol
for(i in best_sol){
predicted_class_train<- as.factor(train_classes[,i])
predicted_class_train_df <- as.data.frame(predicted_class_train)
colnames(predicted_class_train_df)<- i
CI_train<-
concordance.index(predict(Prediction.models[[i]],
predicted_class_train_df),
surv.time=train_surv[,1],
surv.event=train_surv[,2],
outx=FALSE)$c.index
C.indexes[i,"train_CI"]<- CI_train
predicted_class_test<- as.factor(test_classes[,i])
predicted_class_test_df <- as.data.frame(predicted_class_test)
colnames(predicted_class_test_df)<- i
CI_test<-
concordance.index(predict(Prediction.models[[i]],
predicted_class_test_df),
surv.time=test_surv[,1],
surv.event=test_surv[,2],
outx=FALSE)$c.index
C.indexes[i,"test_CI"]<- CI_test
}
print(C.indexes)
#> train_CI test_CI
#> Solutions.1 0.6318177 0.5488363
#> Solutions.3 0.6461337 0.5697436
best_signature<- best_sol[which.max(C.indexes$test_CI)]
print(best_signature)
#> [1] "Solutions.3"
We then test the best Galgo signature on both the training and the test sets:
train_class <- train_classes[,best_signature]
surv_formula <-
as.formula("Surv(train_clinic$t.rfs,train_clinic$e.rfs)~ train_class")
tumortotal1 <- surv_fit(surv_formula,data=train_clinic)
tumortotal1diff <- survdiff(surv_formula)
tumortotal1pval<- pchisq(tumortotal1diff$chisq,
length(tumortotal1diff$n) - 1,
lower.tail = FALSE)
p<-ggsurvplot(tumortotal1,
data=train_clinic,
risk.table=TRUE,pval=TRUE,palette="dark2",
title="UPP breast cancer \n Galgo subtypes survival",
surv.scale="percent",
conf.int=FALSE, xlab="time (days)",
ylab="survival(%)", xlim=c(0,3650),
break.time.by = 365,
ggtheme = theme_minimal(),
risk.table.y.text.col = TRUE,
risk.table.y.text = FALSE,censor=FALSE)
print(p)
test_class <- test_classes[,best_signature]
surv_formula <-
as.formula("Surv(test_clinic$t.rfs,test_clinic$e.rfs)~ test_class")
tumortotal1 <- surv_fit(surv_formula,data=test_clinic)
tumortotal1diff <- survdiff(surv_formula)
tumortotal1pval<- pchisq(tumortotal1diff$chisq,
length(tumortotal1diff$n) - 1,
lower.tail = FALSE)
p<-ggsurvplot(tumortotal1,
data=test_clinic,
risk.table=TRUE,
pval=TRUE,palette="dark2",
title="TRANSBIG breast cancer \n Galgo subtypes survival",
surv.scale="percent",
conf.int=FALSE,
xlab="time (days)",
ylab="survival(%)",
xlim=c(0,3650),
break.time.by = 365,
ggtheme = theme_minimal(),
risk.table.y.text.col = TRUE,
risk.table.y.text = FALSE,
censor=FALSE)
print(p)
Finally, we compare the PAM50 classification against the Galgo classification in the TRANSBIG (test) dataset:
surv_formula1 <-
as.formula("Surv(test_clinic$t.rfs,test_clinic$e.rfs)~ test_class")
tumortotal1 <- surv_fit(surv_formula1,data=test_clinic)
tumortotal1diff <- survdiff(surv_formula1)
tumortotal1pval<- pchisq(tumortotal1diff$chisq,
length(tumortotal1diff$n) - 1,
lower.tail = FALSE)
surv_formula2 <-
as.formula("Surv(test_clinic$t.rfs,test_clinic$e.rfs)~ PAM50_test")
tumortotal2 <- surv_fit(surv_formula2,data=test_clinic)
tumortotal2diff <- survdiff(surv_formula2)
tumortotal2pval<- pchisq(tumortotal2diff$chisq,
length(tumortotal2diff$n) - 1,
lower.tail = FALSE)
SURV=list(GALGO=tumortotal1,PAM50=tumortotal2 )
COLS=c(1:8,10)
par(cex=1.35, mar=c(3.8, 3.8, 2.5, 2.5) + 0.1)
p=ggsurvplot(SURV,
combine=TRUE,
data=test_clinic,
risk.table=TRUE,
pval=TRUE,
palette="dark2",
title="Galgo vs. PAM50 subtypes \n BRCA survival comparison",
surv.scale="percent",
conf.int=FALSE,
xlab="time (days)",
ylab="survival(%)",
xlim=c(0,period),
break.time.by = 365,
ggtheme = theme_minimal(),
risk.table.y.text.col = TRUE,
risk.table.y.text = FALSE,
censor=FALSE)
print(p)
sessionInfo()
#> R version 4.2.1 (2022-06-23)
#> Platform: x86_64-pc-linux-gnu (64-bit)
#> Running under: Ubuntu 20.04.5 LTS
#>
#> Matrix products: default
#> BLAS: /home/biocbuild/bbs-3.16-bioc/R/lib/libRblas.so
#> LAPACK: /home/biocbuild/bbs-3.16-bioc/R/lib/libRlapack.so
#>
#> locale:
#> [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
#> [3] LC_TIME=en_GB LC_COLLATE=C
#> [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
#> [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
#> [9] LC_ADDRESS=C LC_TELEPHONE=C
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] survminer_0.4.9 ggpubr_0.4.0
#> [3] ggplot2_3.3.6 genefu_2.30.0
#> [5] AIMS_1.30.0 e1071_1.7-12
#> [7] iC10_1.5 iC10TrainingData_1.3.1
#> [9] impute_1.72.0 pamr_1.56.1
#> [11] cluster_2.1.4 biomaRt_2.54.0
#> [13] survcomp_1.48.0 prodlim_2019.11.13
#> [15] survival_3.4-0 Biobase_2.58.0
#> [17] BiocGenerics_0.44.0 GSgalgoR_1.8.0
#> [19] breastCancerUPP_1.35.0 breastCancerTRANSBIG_1.35.0
#> [21] BiocStyle_2.26.0
#>
#> loaded via a namespace (and not attached):
#> [1] backports_1.4.1 BiocFileCache_2.6.0 splines_4.2.1
#> [4] listenv_0.8.0 GenomeInfoDb_1.34.0 digest_0.6.30
#> [7] SuppDists_1.1-9.7 foreach_1.5.2 htmltools_0.5.3
#> [10] magick_2.7.3 fansi_1.0.3 magrittr_2.0.3
#> [13] memoise_2.0.1 doParallel_1.0.17 limma_3.54.0
#> [16] globals_0.16.1 Biostrings_2.66.0 prettyunits_1.1.1
#> [19] colorspace_2.0-3 blob_1.2.3 rappdirs_0.3.3
#> [22] xfun_0.34 dplyr_1.0.10 crayon_1.5.2
#> [25] RCurl_1.98-1.9 jsonlite_1.8.3 zoo_1.8-11
#> [28] iterators_1.0.14 glue_1.6.2 gtable_0.3.1
#> [31] zlibbioc_1.44.0 XVector_0.38.0 car_3.1-1
#> [34] future.apply_1.9.1 abind_1.4-5 scales_1.2.1
#> [37] DBI_1.1.3 rstatix_0.7.0 Rcpp_1.0.9
#> [40] gridtext_0.1.5 xtable_1.8-4 progress_1.2.2
#> [43] bit_4.0.4 proxy_0.4-27 mclust_6.0.0
#> [46] km.ci_0.5-6 stats4_4.2.1 lava_1.7.0
#> [49] httr_1.4.4 ellipsis_0.3.2 pkgconfig_2.0.3
#> [52] XML_3.99-0.12 farver_2.1.1 sass_0.4.2
#> [55] dbplyr_2.2.1 utf8_1.2.2 tidyselect_1.2.0
#> [58] labeling_0.4.2 rlang_1.0.6 AnnotationDbi_1.60.0
#> [61] munsell_0.5.0 tools_4.2.1 cachem_1.0.6
#> [64] cli_3.4.1 generics_0.1.3 RSQLite_2.2.18
#> [67] broom_1.0.1 evaluate_0.17 stringr_1.4.1
#> [70] fastmap_1.1.0 yaml_2.3.6 bootstrap_2019.6
#> [73] knitr_1.40 bit64_4.0.5 survMisc_0.5.6
#> [76] purrr_0.3.5 KEGGREST_1.38.0 future_1.28.0
#> [79] xml2_1.3.3 compiler_4.2.1 filelock_1.0.2
#> [82] curl_4.3.3 png_0.1-7 ggsignif_0.6.4
#> [85] tibble_3.1.8 bslib_0.4.0 stringi_1.7.8
#> [88] highr_0.9 lattice_0.20-45 Matrix_1.5-1
#> [91] commonmark_1.8.1 markdown_1.3 survivalROC_1.0.3
#> [94] KMsurv_0.1-5 vctrs_0.5.0 pillar_1.8.1
#> [97] lifecycle_1.0.3 BiocManager_1.30.19 jquerylib_0.1.4
#> [100] data.table_1.14.4 bitops_1.0-7 R6_2.5.1
#> [103] bookdown_0.29 KernSmooth_2.23-20 gridExtra_2.3
#> [106] IRanges_2.32.0 parallelly_1.32.1 codetools_0.2-18
#> [109] assertthat_0.2.1 withr_2.5.0 S4Vectors_0.36.0
#> [112] GenomeInfoDbData_1.2.9 ggtext_0.1.2 parallel_4.2.1
#> [115] hms_1.1.2 grid_4.2.1 nsga2R_1.1
#> [118] tidyr_1.2.1 class_7.3-20 rmarkdown_2.17
#> [121] mco_1.15.6 carData_3.0-5 rmeta_3.0