class: center, middle, inverse, title-slide .title[ # 12: Integration and Modularity ] .author[ ### ] --- ### Trait Covariation + Organisms are composed of recognizable parts + These parts are to some extent correlated + Why is this the case? <img src="LectureData/12.integr.mod/ConceptPic.png" width="70%" style="display: block; margin: auto;" /> --- ### Trait Covariation: Integration `\(^1\)` + Trait correlations arise when biological factors elicit concomitant changes in more than one trait + Factors at multiple overlapping levels affect trait covariation + Developmental + Ontogenetic + Functional/Biomechanical + Evolutionary + These associations lead to **integration** among different body parts .footnote[1: Olson and Miller (1958). *Morphological Integration.*] --- ### Integration + Integration describes how characters are correlated with each other + Correlations that are stronger among some subsets of traits than between others (Olson and Miller 1958) + Cohesion among traits that results from interactions of biological processes (Klingenberg 2008) <img src="LectureData/12.integr.mod/2014-ArmbrusterIntegration.png" width="40%" style="display: block; margin: auto;" /> --- ### Trait Covariation: Modularity + Trait covariation is sometimes unevenly dispersed across traits + This results in integration that is concentrated within subsets of traits + These subsets of traits are less correlated with other subsets + Such patterns are termed **Modularity** -- + **Other definitions:** + The relative degree of connectivity among traits (Klingenberg 2008) + A complex of characters that serve a functional role, are tightly integrated, and are relatively independent from other such units (Wagner 1996) + Maximal subset of traits for which pairs of traits within the subset are mutually informative, conditional on all other traits under consideration (Magwene 2001) --- ### Modularity + Modular structure in snakes <img src="LectureData/12.integr.mod/2021-Rhoda.png" width="40%" style="display: block; margin: auto;" /> --- ### Levels of Integration and Modularity + Both integration and modularity may be observed at different biological levels, and be explained by different biological processes + Levels mirror those evaluated for allometric patterns: static, ontogenetic, evolutionary <img src="LectureData/12.integr.mod/Klingenberg2014.png" width="70%" style="display: block; margin: auto;" /> --- ### Quantifying Integration and Modularity: Conceptual Considerations + Patterns of trait covariation (integration and modularity) have been explored in different ways + Different approaches are appropriate for different hypotheses + Some considerations when embarking on a trait covariation study are: + Does one evaluate overall patterns of integration or integration among subsets of traits? + What is the expected pattern when neither integration nor modularity is present? + Is modularity the 'contrary' of integration, or can both simultaneously be present? + What is the appropriate `\(H_0\)` for evaluating integration? Modularity, disintegration, random integration, other? + What is the appropriate `\(H_0\)` for evaluating modularity? Integration, random modularity, other? + Does `\(H_0\)` differ when testing overall integration versus integration among subsets?
`\(^1\)` .footnote[1: As we will see, the empirical answer to the last few questions is found via RRPP] --- ### Methods for Evaluating Integration and Modularity + Here we review some methods for evaluating patterns of integration and modularity + 1: Methods for evaluating and comparing overall integration + 2: Methods for evaluating and comparing integration among subsets of traits + 3: Methods for evaluating and comparing modularity --- ### 1: Overall Integration + Is there integration (covariation) in a set of traits? + Most approaches are based on exploring patterns in the trait covariance matrix: `$$\hat{\mathbf\Sigma} = \mathbf{Y_c^T}\mathbf{Y_c}/ (n-1)$$` -- Using our shape data notation: `$$\hat{\mathbf\Sigma} = \mathbf{Z^T}\mathbf{Z}/ (n-1)$$` where `\(\mathbf{Z}\)` is an `\(n \times pk\)` matrix of Procrustes coordinates -- + Identify large pairwise correlations in `\(\small{R}\)` (Van Valen 1965) + Identify clusters of traits using cluster analysis (Cheverud 1982) + Factor analysis for identifying sets of correlated traits (Zelditch 1987) + These methods attempted to identify whether integration was present, but usually without *a priori* hypotheses regarding integrated subsets --- ### 1: Quantifying Overall Integration + Integrated traits are correlated (covary) + Eigenvalues of `\(\hat{\mathbf{\Sigma}}\)` describe the degree of covariation + Thus, the dispersion of `\(\lambda_p\)` for a set of `\(p\)` traits is one way to describe their integration <img src="12-IntegrationModularity_files/figure-html/unnamed-chunk-5-1.png" width="35%" style="display: block; margin: auto;" /> ##### From Conaway and Adams 2022 (*Evol*) --- ### 1: Quantifying Overall Integration: Some Methods + Many measures for summarizing the dispersion of `\(\lambda_p\)` have been proposed Index | Equation | Source :------- | :----------- | :-------------- ICV | `\(\frac{\sigma_\lambda}{\overline{\lambda}}\)` | Shirai and Marroig, 2010 VE | `\(\sum(\lambda_i-\overline{\lambda})^2/p\)` | Wagner, 1984 `\(V_{rel}\)` | `\(\frac{\sum(\lambda_i-\overline{\lambda})^2}{p(p-1)\overline{\lambda}^2}\)` | Pavlicev et al., 2009 `\(T_1\)` | `\(1-\frac{\sum{\sqrt{\lambda_i}}}{p\sqrt{\lambda_1}}\)` | Van Valen, 1974 `\(T_2\)` | `\(1-\frac{\sum{\lambda_i}}{p(\lambda_1)}\)` | Van Valen, 1974 `\(D_r\)` | `\(\frac{\sqrt[2R_e]{\prod{\lambda_{R_e}}}}{\sqrt{1/\pi{R_e}}}\)` | O'Keefe et al., 2022 + Which one to use? --- ### 1: Overall Integration: Which Method to Use? + Only `\(V_{rel}\)` remains stable across `\(n\)` and `\(p\)` .pull-left[ <img src="LectureData/12.integr.mod/2022ConawayFig2a.png" width="95%" style="display: block; margin: auto;" /> ] .pull-right[ <img src="LectureData/12.integr.mod/2022ConawayFig2b.png" width="95%" style="display: block; margin: auto;" /> ] --- ### 1: An Effect Size for `\(V_{rel}\)` .pull-left[ + `\(V_{rel}\)` recovers known input levels of covariation + But its variance is unequal across input levels + So values cannot be compared across datasets <img src="LectureData/12.integr.mod/2022ConawayFig3.png" width="70%" style="display: block; margin: auto;" /> ] -- .pull-right[ + Conversion to an effect size alleviates the problem! + Rescale: `\(V_{rel}^*=2V_{rel}-1\)` + Convert to `\(Z\)`-score: `\(Z_{Vrel}=\frac{1}{2}\ln\left(\frac{1+V_{rel}^*}{1-V_{rel}^*}\right)\)` <img src="LectureData/12.integr.mod/2022ConawayFig4.png" width="70%" style="display: block; margin: auto;" /> ] ##### NOTE: redundant dimensions removed prior to estimating `\(V_{rel}\)`: See Conaway and Adams 2022 (Evol.)
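+ Below is a minimal by-hand sketch of this calculation on simulated data (variable names are illustrative); in `geomorph`, `integration.Vrel()` carries out the same steps, including the removal of redundant shape dimensions noted above

``` r
library(geomorph)

set.seed(1)
Y <- matrix(rnorm(50 * 8), nrow = 50)      # 50 specimens, 8 hypothetical traits
lambda <- eigen(cov(Y))$values             # eigenvalues of the trait covariance matrix
p <- length(lambda)

# Relative eigenvalue variance (Pavlicev et al., 2009)
Vrel <- sum((lambda - mean(lambda))^2) / (p * (p - 1) * mean(lambda)^2)

# Rescale to (-1, 1) and convert to an effect size (Conaway and Adams 2022)
Vrel.star <- 2 * Vrel - 1
Z.Vrel <- 0.5 * log((1 + Vrel.star) / (1 - Vrel.star))

# geomorph's implementation (matrix or 3D-array input)
integration.Vrel(Y)
```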
--- ### 1: Comparing Overall Integration + One can compare overall integration across datasets using `\(Z_{Vrel}\)` + Statistically compare integration levels as: `\(\hat{Z}_{12}=\frac{\lvert{Z_1-Z_2}\rvert}{\sqrt{\sigma^2_{Z_1}+\sigma^2_{Z_2}}}\)` <img src="LectureData/12.integr.mod/2022ConawayFig6.png" width="70%" style="display: block; margin: auto;" /> --- ### 1: Comparing Overall Integration: Example Compare overall integration in shape across *Plethodon* species .scrollable[ ``` r data("plethodon") Y.gpa <- gpagen(plethodon$land, print.progress = FALSE) #Separate data by species coords.gp <- coords.subset(Y.gpa$coords, plethodon$species) #Z_Vrel by species Vrel.gp <- Map(function(x) integration.Vrel(x), coords.gp) compare.ZVrel(Vrel.gp$Jord, Vrel.gp$Teyah) ``` ``` ## ## Effect sizes ## ## Vrel.gp$Jord Vrel.gp$Teyah ## -0.2978931 -0.2642648 ## ## Effect sizes for pairwise differences in rel.eig effect size ## ## Vrel.gp$Jord Vrel.gp$Teyah ## Vrel.gp$Jord 0.00000000 0.09804249 ## Vrel.gp$Teyah 0.09804249 0.00000000 ## ## P-values ## ## Vrel.gp$Jord Vrel.gp$Teyah ## Vrel.gp$Jord 1.0000000 0.9218986 ## Vrel.gp$Teyah 0.9218986 1.0000000 ``` ] --- ### 2: Integration Among Subsets of Traits + Sometimes, we wish to know whether there are associations among *sets* of traits (e.g., between limb traits and head traits) + This addresses whether these biological units (subsets of traits) are integrated with one another `\(^1\)` + One may evaluate such hypotheses using tests of **Multivariate Association** + Two approaches have been used: the RV coefficient and Partial Least Squares <sup>1: In the literature, subsets of traits are often referred to as 'blocks' or 'modules'</sup> --- ### 2: Integration Among Subsets of Traits (Cont.) + But first recall: + One can combine traits from the two subsets and estimate a combined covariance matrix: `\(\hat{\mathbf{\Sigma}}\)` + `\(\small\hat{\mathbf{\Sigma}}\)` can be considered a partitioned matrix, where different sub-components describe covariation within blocks or between blocks of variables .pull-left[ <img src="LectureData/06.covariation/CovMatParts2.png" width="80%" style="display: block; margin: auto;" /> ] .pull-right[ `\(\small\mathbf{S}_{11}\)`: covariation of variables in `\(\small\mathbf{Z}_{1}\)` `\(\small\mathbf{S}_{22}\)`: covariation of variables in `\(\small\mathbf{Z}_{2}\)` `\(\small\mathbf{S}_{21}=\mathbf{S}_{12}^{T}\)`: covariation between `\(\small\mathbf{Z}_{1}\)` and `\(\small\mathbf{Z}_{2}\)` `\(\small\mathbf{S}_{21}=\mathbf{S}_{12}^{T}\)` is the multivariate equivalent of `\(\small\sigma_{21}\)` ] --- ### 2: Integration Among Subsets: The RV Coefficient + Escoufier's RV coefficient characterizes covariation between subsets relative to covariation within subsets `$$RV=\frac{tr(\mathbf{S}_{12}\mathbf{S}_{21})}{\sqrt{tr(\mathbf{S}_{11}\mathbf{S}_{11})tr(\mathbf{S}_{22}\mathbf{S}_{22})}}$$` + The RV coefficient is *analogous* to `\(\small{r}^{2}\)` but it is not a strict mathematical generalization `\(^1\)` + `\(RV\)` (like `\(r^{2}\)`) is a ratio of between-block relative to within-block variation + Range of `\(RV\)`: `\(\small{0}\rightarrow{1}\)` + Significance is assessed via permutation <sup>1: Technically, `\(RV\)` is a ratio of squared covariances, not variances as in `\(r^2\)`: see Bookstein 2016</sup> --- ### 2: Integration Among Subsets: Partial Least Squares + Another way to summarize the covariation between blocks is via Partial Least Squares (PLS) + *Decomposing* the information in
`\(\small\mathbf{S}_{12}\)` to find rotational solution (direction) that describes greatest covariation between `\(\small\mathbf{Z}_{1}\)` and `\(\small\mathbf{Z}_{2}\)` `$$\small\mathbf{S}_{12}=\mathbf{UD{V}}^T$$` + Ordination scores found by projection of centered data on vectors `\(\small\mathbf{U}\)` and `\(\small\mathbf{V}\)` `$$\small\mathbf{P}_{1}=\mathbf{Z}_{1}\mathbf{U}$$` `$$\small\mathbf{P}_{2}=\mathbf{Z}_{2}\mathbf{V}$$` + The first columns of `\(\small\mathbf{P}_{1}\)` and `\(\small\mathbf{P}_{2}\)` describe the maximal covariation between `\(\small\mathbf{Z}_{1}\)` and `\(\small\mathbf{Z}_{2}\)` + The correlation between `\(\small\mathbf{P}_{11}\)` and `\(\small\mathbf{P}_{21}\)` is the PLS-correlation `$$\small{r}_{PLS}={cor}_{P_{11}P_{21}}$$` + Significance is assessed via permutation ###### Bookstein et al. (2003). *J. Hum. Evol.* --- ### 2: Integration using RV: Example .pull-left[ + *Pecos* pupfish + Is there an association between head shape and body shape? <img src="LectureData/06.covariation/Pupfish Motivation.png" width="80%" style="display: block; margin: auto;" /> ] .pull-right[ ``` r data(pupfish) Y.gpa <- gpagen(pupfish$coords, print.progress = FALSE) shape <- two.d.array(Y.gpa$coords) head <- c(4, 10:17, 39:56) all <- 1:56 body <- all[-head] land.gps<-rep('b',56); land.gps[c(4,10:17,39:56)]<-'a' # for PLS y <- two.d.array(Y.gpa$coords[head, , ]) x <- two.d.array(Y.gpa$coords[body, , ]) y<-scale(y,center=TRUE, scale=FALSE) x<-scale(x,center=TRUE, scale=FALSE) S12 <- crossprod(x,y)/(dim(x)[1] - 1) S11 <- var(x) S22 <- var(y) RV <- sum(colSums(S12^2))/sqrt(sum(S11^2)*sum(S22^2)) ``` `$$\small{RV}=\frac{tr(\mathbf{S}_{12}\mathbf{S}_{21})}{\sqrt{tr(\mathbf{S}_{11}\mathbf{S}_{11})tr(\mathbf{S}_{22}\mathbf{S}_{22})}}=0.607$$` `$$\small\sqrt{RV}=0.779$$` ] --- ### 2: Integration using PLS: Example .pull-left[ ``` r PLS <- two.b.pls(y,x, iter=999, print.progress = FALSE) summary(PLS) ``` ``` ## ## Call: ## two.b.pls(A1 = y, A2 = x, iter = 999, print.progress = FALSE) ## ## ## ## r-PLS: 0.917 ## ## Effect Size (Z): 5.4039 ## ## P-value: 0.001 ## ## Based on 1000 random permutations ``` `\(\tiny{RV}=\frac{tr(\mathbf{S}_{12}\mathbf{S}_{21})}{\sqrt{tr(\mathbf{S}_{11}\mathbf{S}_{11})tr(\mathbf{S}_{22}\mathbf{S}_{22})}}=0.607\)` and `\(\tiny\sqrt{RV}=0.779\)` `\(\small{r}_{PLS}={cor}_{P_{11}P_{21}}=0.917\)` ] .pull-right[ .scrollable[ ``` r plot(PLS) ``` <img src="12-IntegrationModularity_files/figure-html/unnamed-chunk-16-1.png" width="80%" style="display: block; margin: auto;" /> ] ] --- ### 2: Evaluating Multivariate Associations + We now have two potential test measures of multivariate correlation `$$\small{RV}=\frac{tr(\mathbf{S}_{12}\mathbf{S}_{21})}{\sqrt{tr(\mathbf{S}_{11}\mathbf{S}_{11})tr(\mathbf{S}_{22}\mathbf{S}_{22})}}$$` `$$\small{r}_{PLS}={cor}_{P_{11}P_{21}}$$` + Is one approach preferable over the other? 
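+ As a by-hand counterpart to the RV calculation above, the sketch below computes `\(r_{PLS}\)` directly from the singular value decomposition of `\(\mathbf{S}_{12}\)` and runs a simple row-permutation test; it reuses the centered blocks `x` and `y` from the RV example (in practice, `two.b.pls()` in `geomorph` handles all of this)

``` r
# Cross-block covariance matrix and its singular value decomposition
S12 <- crossprod(x, y) / (nrow(x) - 1)
svd.S12 <- svd(S12)

# Project each block onto its first singular vector and correlate the scores
P1 <- drop(x %*% svd.S12$u[, 1])
P2 <- drop(y %*% svd.S12$v[, 1])
r.pls <- cor(P1, P2)   # observed r_PLS; compare with two.b.pls() above

# Row-permutation test: shuffle specimens in one block and recompute r_PLS
r.rand <- replicate(999, {
  y.p <- y[sample(nrow(y)), ]
  sv <- svd(crossprod(x, y.p) / (nrow(x) - 1))
  cor(drop(x %*% sv$u[, 1]), drop(y.p %*% sv$v[, 1]))
})
p.value <- (sum(r.rand >= r.pls) + 1) / (length(r.rand) + 1)
```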
--- ### 2: Permutation Tests for Multivariate Association + Test statistics: `\(\small\hat\rho=\sqrt{RV}\)` and `\(\small\hat\rho={r}_{PLS}\)` + H~0~: `\(\small\rho=0\)` + H~1~: `\(\small\rho>0\)` + Use RRPP to generate empirical sampling distribution for each (note: row-permutation) <img src="12-IntegrationModularity_files/figure-html/unnamed-chunk-18-1.png" width="30%" style="display: block; margin: auto;" /> + For the pupfish dataset, both are significant at p = 0.001 --- ### 2: Permutation Tests for RV and r~PLS~: Example Compare permutation distributions with one another (observed values excluded in this case) <img src="12-IntegrationModularity_files/figure-html/unnamed-chunk-19-1.png" width="40%" style="display: block; margin: auto;" /> + All things considered, *r~PLS~* performs better --- ### 2: Integration using PLS: Example 2 + Cranial integration for pairs of modules in *Homo* <img src="LectureData/12.integr.mod/PLS-Bookstein03.png" width="70%" style="display: block; margin: auto;" /> --- ### 2B: Comparing Integration Across Datasets + One may wish to compare integration among subsets across datasets + Cannot do so directly with `\(RV\)` or `\(r_{PLS}\)` as both vary with `\(n\)` and `\(p\)` <img src="LectureData/12.integr.mod/RV.PLS.n.p.png" width="50%" style="display: block; margin: auto;" /> + We require appropriate effect sizes for comparison --- ### 2B: Comparing Integration Across Datasets (Cont.) `\(^1\)` + Conversion of `\(r_{PLS}\)` to an effect size alleviates the concern `$$\mathbf{Z}=\frac{r_{PLS_{obs}}-\mu_{r_{PLS_{rand}}}}{\sigma_{r_{PLS_{rand}}}}$$` <img src="LectureData/12.integr.mod/Z-PLS-WithN-P.png" width="50%" style="display: block; margin: auto;" /> + Statistical comparisons of effect sizes are then possible: `$$\hat{Z}_{12}=\frac{\lvert{Z_1-Z_2}\rvert}{\sqrt{\sigma^2_{Z_1}+\sigma^2_{Z_2}}}$$` .footnote[1: Adams and Collyer (2016). *Evol.*] --- ### 2B: Comparing Integration Across Datasets: Example + Are the modules of lizard heads equally integrated across environments? + Does this integration change ontogenetically? <img src="LectureData/12.integr.mod/CompareIntegr_example.png" width="50%" style="display: block; margin: auto;" /> + Yes it does! --- ### 2B: Comparing Integration Across Datasets: Example 2 + Example using the pupfish data ``` r # Compare morphological integration between pupfish head and body shapes data(pupfish) # GPA previously performed group <- factor(paste(pupfish$Pop, pupfish$Sex, sep = ".")) # Subset 3D array by group, returning a list of 3D arrays tail.LM <- c(1:3, 5:9, 18:38) head.LM <- (1:56)[-tail.LM] tail.coords <- pupfish$coords[tail.LM,,] head.coords <- pupfish$coords[head.LM,,] tail.coords.gp <- coords.subset(tail.coords, group) head.coords.gp <- coords.subset(head.coords, group) ``` --- ### 2B: Comparing Integration Across Datasets: Example 2 (Cont.)
.scrollable[ ``` r # Obtain Integration for groups integ.tests <- Map(function(x,y) integration.test(x, y, iter=499, print.progress = FALSE), head.coords.gp, tail.coords.gp) # Compare Integration compare.pls(integ.tests) ``` ``` ## ## Effect sizes ## ## Marsh.F Marsh.M Sinkhole.F Sinkhole.M ## 3.1410345 1.5130737 0.7429882 2.7156556 ## ## Effect sizes for pairwise differences in PLS effect size ## ## Marsh.F Marsh.M Sinkhole.F Sinkhole.M ## Marsh.F 0.0000000 0.6586628 1.9027247 0.7518767 ## Marsh.M 0.6586628 0.0000000 0.8778808 1.1927859 ## Sinkhole.F 1.9027247 0.8778808 0.0000000 2.0927712 ## Sinkhole.M 0.7518767 1.1927859 2.0927712 0.0000000 ## ## P-values ## ## Marsh.F Marsh.M Sinkhole.F Sinkhole.M ## Marsh.F 1.00000000 0.5101123 0.05707647 0.45212519 ## Marsh.M 0.51011230 1.0000000 0.38000838 0.23295324 ## Sinkhole.F 0.05707647 0.3800084 1.00000000 0.03636958 ## Sinkhole.M 0.45212519 0.2329532 0.03636958 1.00000000 ``` ] --- ### 3: From Integration to Modularity + Sometimes, patterns of integration are not uniform across an organism + Instead, integration is 'concentrated' in subsets of traits + In turn, these traits are relatively independent of other sets of traits that are also inter-correlated + This pattern is termed 'Modularity' -- + The question is: How does one identify (and then statistically evaluate!) modular structure? --- ### 3: Identifying Modules: Conditional Independence + Detect significant correlations while accounting for correlations with other traits + Procedure + Calculate `\(\small{R}\)` for a set of traits + Find inverse `\(\small{R}^{-1}\)` (elements of which are `\(\small{\Omega_{ij}}\)`) + Rescale `\(\small{R}^{-1}\)` to **partial** correlations: `\(\small{\rho_{ij}=\frac{-\Omega_{ij}}{\sqrt{\Omega_{ii}\Omega_{jj}}}}\)` + Evaluate partial correlations: `\(\small{-n\ln(1-\rho^2_{ij})\approx\chi^2}\)` for all `\(\small{\rho_{ij}}\)`. Set non-significant values to zero + Remaining `\(\small{\rho_{ij}}\)` describe correlations among integrated traits + Graphically, this is equivalent to ‘pruning’ links between traits + The method is ‘exploratory’ in that modules are not known *a priori* --- ### 3: Identifying Modules: Conditional Independence: Example + Sewall Wright's 'chickenbone' dataset <img src="LectureData/12.integr.mod/Magwene-Chickenbone.png" width="70%" style="display: block; margin: auto;" /> --- ### 3: Identifying Modularity + Modularity addresses a question complementary to that of integration + Modules: tightly integrated sets of traits, which are relatively independent from other such sets <img src="LectureData/12.integr.mod/ModulCovBtwn.png" width="70%" /> --- ### 3: Quantifying Modularity: `\(CR\)` Coefficient `\(^1\)` + As shown previously, the `\(RV\)` coefficient (though frequently used) is not constant across `\(n\)` and `\(p\)` + Instead use the covariance ratio: `$$CR=\frac{tr(\mathbf{S}_{12}\mathbf{S}_{21})}{\sqrt{tr(\mathbf{S}^*_{11}\mathbf{S}^*_{11})tr(\mathbf{S}^*_{22}\mathbf{S}^*_{22})}}$$` + where `\(\mathbf{S}^*_{11}\)` & `\(\mathbf{S}^*_{22}\)` represent the within-module covariance matrices with `\(0\)` along the diagonal + The `\(CR\)` coefficient does *NOT* vary with *n* and *p*: <img src="LectureData/12.integr.mod/CRPattern.png" width="50%" style="display: block; margin: auto;" /> .footnote[1: Adams (2016). *Methods Ecol.
Evol.*] --- ### 3: `\(CR\)` Coefficient: Statistical Properties + The `\(CR\)` coefficient displays appropriate statistical properties <img src="LectureData/12.integr.mod/CRStatProp.png" width="60%" style="display: block; margin: auto;" /> --- ### 3: `\(CR\)` Coefficient: Examples <img src="LectureData/12.integr.mod/CRExamples.png" width="80%" style="display: block; margin: auto;" /> --- ### 3: Evaluating Modularity: Example 2 ``` r data(pupfish) Y.gpa<-gpagen(pupfish$coords, print.progress = FALSE) #GPA-alignment # landmarks on the body vs. operculum land.gps<-rep('a',56); land.gps[39:48]<-'b' modularity.test(Y.gpa$coords,land.gps,CI=FALSE,print.progress = FALSE) ``` ``` ## ## Call: ## modularity.test(A = Y.gpa$coords, partition.gp = land.gps, CI = FALSE, ## print.progress = FALSE) ## ## ## ## CR: 0.9075 ## ## P-value: 0.021 ## ## Effect Size: -2.3097 ## ## Based on 1000 random permutations ``` --- ### 3B: Comparing Modularity Across Datasets `\(^1\)` + One might be interested in evaluating alternative modular hypotheses for the same dataset... + ... or ask whether one group exhibits higher modular signal than another + Again, an effect size (*Z*-score) is useful for this purpose .footnote[1: Adams and Collyer (2019). *Evol.*] -- + Convert `\(CR\)` to an effect size: `$$\mathbf{Z}=\frac{r_{CR_{obs}}-\mu_{r_{CR_{rand}}}}{\sigma_{r_{CR_{rand}}}}$$` + Compare effect sizes as: `$$\hat{Z}_{12}=\frac{\lvert{Z_1-Z_2}\rvert}{\sqrt{\sigma^2_{Z_1}+\sigma^2_{Z_2}}}$$` --- ### 3B: Comparing Modularity Across Datasets: example + Which is the most supported modular division for the mouse mandible? + Do some species exhibit higher modularity than others? <img src="LectureData/12.integr.mod/Fig6.png" width="35%" style="display: block; margin: auto;" /> --- ### 3B: Comparing Modularity Across Datasets: Example 2 + Example using the pupfish data ``` r # Compare modularity between pupfish head and body shapes data(pupfish) Y.gpa<-gpagen(pupfish$coords, print.progress = FALSE) #GPA-alignment # landmarks on the body vs. operculum land.gps<-rep('a',56); land.gps[39:48]<-'b' # Pupfish groups (of observations) group <- factor(paste(pupfish$Pop, pupfish$Sex, sep = ".")) coords.gp <- coords.subset(Y.gpa$coords, group) ``` --- ### 3B: Comparing Modularity Across Datasets: Example 2 (Cont.) .scrollable[ ``` r # Modularity tests per group modul.tests <- Map(function(x) modularity.test(x, land.gps,print.progress = FALSE), coords.gp) # Compare modularity compare.CR(modul.tests, CR.null = FALSE) ``` ``` ## ## NOTE: more negative effects represent stronger modular signal! 
## ## ## Effect sizes ## ## Marsh.F Marsh.M Sinkhole.F Sinkhole.M ## -0.7265087 -4.1650804 -2.4916997 -2.1320218 ## ## Effect sizes for pairwise differences in CR effect size ## ## Marsh.F Marsh.M Sinkhole.F Sinkhole.M ## Marsh.F 0.000000 2.45689940 1.74666288 1.0170828 ## Marsh.M 2.456899 0.00000000 0.07932252 1.4191065 ## Sinkhole.F 1.746663 0.07932252 0.00000000 0.9811416 ## Sinkhole.M 1.017083 1.41910651 0.98114159 0.0000000 ## ## P-values ## ## Marsh.F Marsh.M Sinkhole.F Sinkhole.M ## Marsh.F 1.00000000 0.01401419 0.08069583 0.3091141 ## Marsh.M 0.01401419 1.00000000 0.93677609 0.1558680 ## Sinkhole.F 0.08069583 0.93677609 1.00000000 0.3265229 ## Sinkhole.M 0.30911405 0.15586797 0.32652292 1.0000000 ``` ] --- ### Integration and Modularity: Perspectives + Integration and modularity are relevant to many E&E questions + All approaches decompose information in `\(\hat{\mathbf{\Sigma}}\)` + Require methods that are robust to `\(n\)` and `\(p\)` + Effect sizes of test statistics are most useful and can be compared + Overall integration for a set of traits: `\(Z_{Vrel}\)` + Integration among subsets: `\(Z_{r_{PLS}}\)` + Modularity among subsets: `\(Z_{CR}\)` + RRPP provides an analytical tool for statistical evaluation and comparison of patterns
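+ As a closing illustration, here is a minimal by-hand sketch of the `\(CR\)` statistic underlying `\(Z_{CR}\)`, for the body-vs-operculum partition used earlier (in `geomorph`, `modularity.test()` performs this calculation and adds the RRPP permutation test and effect size)

``` r
library(geomorph)

data(pupfish)
Y.gpa <- gpagen(pupfish$coords, print.progress = FALSE)   # GPA-alignment
Z <- two.d.array(Y.gpa$coords)                            # n x (p*k) matrix of coordinates

land.gps <- rep('a', 56); land.gps[39:48] <- 'b'          # landmark partition (as above)
col.gps <- rep(land.gps, each = 2)                        # 2D data: x and y columns per landmark

S <- cov(Z)
S12 <- S[col.gps == 'a', col.gps == 'b']                  # between-module covariances
S11 <- S[col.gps == 'a', col.gps == 'a']; diag(S11) <- 0  # within-module, zeroed diagonals
S22 <- S[col.gps == 'b', col.gps == 'b']; diag(S22) <- 0

CR <- sum(S12^2) / sqrt(sum(S11^2) * sum(S22^2))
CR   # compare with the CR reported by modularity.test() earlier
```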