This is the complete, downloadable Solution Manual for Using Multivariate Statistics, 7th Edition, by Barbara G. Tabachnick and Linda S. Fidell.
Product Details:
- ISBN-10: 0134790545
- ISBN-13: 978-0134790541
- Authors: Barbara G. Tabachnick, Linda S. Fidell
Using Multivariate Statistics, 7th Edition presents complex statistical procedures in a way that is maximally useful and accessible to researchers who may not be statisticians. The authors focus on the benefits and limitations of applying a technique to a data set – when, why, and how to do it. Only a limited knowledge of higher-level mathematics is assumed.
Students using this text will learn to conduct numerous types of multivariate statistical analyses; select the best technique for a given problem; understand the limitations of each application; and learn how to use SPSS and SAS syntax and output.
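For a flavor of the analyses the text covers, here is a minimal sketch (not from the book) of a standard multiple regression preceded by two of the screening steps discussed in Chapters 4 and 5: flagging univariate outliers with the |z| > 3.29 criterion and multivariate outliers with Mahalanobis distance against a chi-square cutoff at alpha = .001. It uses Python (statsmodels and SciPy) rather than the SPSS and SAS syntax the book itself presents, and all data and variable names are invented for illustration.

```python
# A minimal sketch (not from the book): standard multiple regression with
# two screening steps in the spirit of Chapters 4-5. Data and variable
# names (iv1, iv2, iv3, dv) are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(42)
n = 200
data = pd.DataFrame({
    "iv1": rng.normal(size=n),
    "iv2": rng.normal(size=n),
    "iv3": rng.normal(size=n),
})
data["dv"] = 0.5 * data["iv1"] + 0.3 * data["iv2"] + rng.normal(size=n)

# Univariate outliers: |z| > 3.29 (p < .001), the cutoff the book
# recommends for large samples.
z = (data - data.mean()) / data.std()
clean = data.loc[~(z.abs() > 3.29).any(axis=1)]

# Multivariate outliers: Mahalanobis distance tested against a
# chi-square critical value at alpha = .001, as the book describes.
ivs = clean[["iv1", "iv2", "iv3"]].to_numpy()
diff = ivs - ivs.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(ivs, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
clean = clean.loc[d2 <= chi2.ppf(0.999, df=ivs.shape[1])]

# Standard multiple regression: all IVs entered simultaneously.
X = sm.add_constant(clean[["iv1", "iv2", "iv3"]])
fit = sm.OLS(clean["dv"], X).fit()
print(fit.summary())  # R^2, F test of multiple R, per-IV t tests and CIs
```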
Table of Contents:
- Chapter 1 Introduction
- Learning Objectives
- 1.1 Multivariate Statistics: Why?
- 1.1.1 The Domain of Multivariate Statistics: Numbers of IVs and DVs
- 1.1.2 Experimental and Nonexperimental Research
- 1.1.3 Computers and Multivariate Statistics
- 1.1.4 Garbage In, Roses Out?
- 1.2 Some Useful Definitions
- 1.2.1 Continuous, Discrete, and Dichotomous Data
- 1.2.2 Samples and Populations
- 1.2.3 Descriptive and Inferential Statistics
- 1.2.4 Orthogonality: Standard and Sequential Analyses
- 1.3 Linear Combinations of Variables
- 1.4 Number and Nature of Variables to Include
- 1.5 Statistical Power
- 1.6 Data Appropriate for Multivariate Statistics
- 1.6.1 The Data Matrix
- 1.6.2 The Correlation Matrix
- 1.6.3 The Variance–Covariance Matrix
- 1.6.4 The Sum-of-Squares and Cross-Products Matrix
- 1.6.5 Residuals
- 1.7 Organization of the Book
- Chapter 2 A Guide to Statistical Techniques: Using the Book
- Learning Objectives
- 2.1 Research Questions and Associated Techniques
- 2.1.1 Degree of Relationship Among Variables
- 2.1.1.1 Bivariate r
- 2.1.1.2 Multiple R
- 2.1.1.3 Sequential R
- 2.1.1.4 Canonical R
- 2.1.1.5 Multiway Frequency Analysis
- 2.1.1.6 Multilevel Modeling
- 2.1.2 Significance of Group Differences
- 2.1.2.1 One-Way ANOVA and t Test
- 2.1.2.2 One-Way ANCOVA
- 2.1.2.3 Factorial ANOVA
- 2.1.2.4 Factorial ANCOVA
- 2.1.2.5 Hotelling’s T²
- 2.1.2.6 One-Way MANOVA
- 2.1.2.7 One-Way MANCOVA
- 2.1.2.8 Factorial MANOVA
- 2.1.2.9 Factorial MANCOVA
- 2.1.2.10 Profile Analysis of Repeated Measures
- 2.1.3 Prediction of Group Membership
- 2.1.3.1 One-Way Discriminant Analysis
- 2.1.3.2 Sequential One-Way Discriminant Analysis
- 2.1.3.3 Multiway Frequency Analysis (Logit)
- 2.1.3.4 Logistic Regression
- 2.1.3.5 Sequential Logistic Regression
- 2.1.3.6 Factorial Discriminant Analysis
- 2.1.3.7 Sequential Factorial Discriminant Analysis
- 2.1.4 Structure
- 2.1.4.1 Principal Components
- 2.1.4.2 Factor Analysis
- 2.1.4.3 Structural Equation Modeling
- 2.1.5 Time Course of Events
- 2.1.5.1 Survival/Failure Analysis
- 2.1.5.2 Time-Series Analysis
- 2.2 Some Further Comparisons
- 2.3 A Decision Tree
- 2.4 Technique Chapters
- 2.5 Preliminary Check of the Data
- Chapter 3 Review of Univariate and Bivariate Statistics
- Learning Objectives
- 3.1 Hypothesis Testing
- 3.1.1 One-Sample z Test as Prototype
- 3.1.2 Power
- 3.1.3 Extensions of the Model
- 3.1.4 Controversy Surrounding Significance Testing
- 3.2 Analysis of Variance
- 3.2.1 One-Way Between-Subjects ANOVA
- 3.2.2 Factorial Between-Subjects ANOVA
- 3.2.3 Within-Subjects ANOVA
- 3.2.4 Mixed Between-Within-Subjects ANOVA
- 3.2.5 Design Complexity
- 3.2.5.1 Nesting
- 3.2.5.2 Latin-Square Designs
- 3.2.5.3 Unequal n and Nonorthogonality
- 3.2.5.4 Fixed and Random Effects
- 3.2.6 Specific Comparisons
- 3.2.6.1 Weighting Coefficients for Comparisons
- 3.2.6.2 Orthogonality of Weighting Coefficients
- 3.2.6.3 Obtained F for Comparisons
- 3.2.6.4 Critical F for Planned Comparisons
- 3.2.6.5 Critical F for Post Hoc Comparisons
- 3.3 Parameter Estimation
- 3.4 Effect Size
- 3.5 Bivariate Statistics: Correlation and Regression
- 3.5.1 Correlation
- 3.5.2 Regression
- 3.6 Chi-Square Analysis
- Chapter 4 Cleaning Up Your Act: Screening Data Prior to Analysis
- Learning Objectives
- 4.1 Important Issues in Data Screening
- 4.1.1 Accuracy of Data File
- 4.1.2 Honest Correlations
- 4.1.2.1 Inflated Correlation
- 4.1.2.2 Deflated Correlation
- 4.1.3 Missing Data
- 4.1.3.1 Deleting Cases or Variables
- 4.1.3.2 Estimating Missing Data
- 4.1.3.3 Using a Missing Data Correlation Matrix
- 4.1.3.4 Treating Missing Data as Data
- 4.1.3.5 Repeating Analyses With and Without Missing Data
- 4.1.3.6 Choosing Among Methods for Dealing With Missing Data
- 4.1.4 Outliers
- 4.1.4.1 Detecting Univariate and Multivariate Outliers
- 4.1.4.2 Describing Outliers
- 4.1.4.3 Reducing the Influence of Outliers
- 4.1.4.4 Outliers in a Solution
- 4.1.5 Normality, Linearity, and Homoscedasticity
- 4.1.5.1 Normality
- 4.1.5.2 Linearity
- 4.1.5.3 Homoscedasticity, Homogeneity of Variance, and Homogeneity of Variance–Covariance Matrices
- 4.1.6 Common Data Transformations
- 4.1.7 Multicollinearity and Singularity
- 4.1.8 A Checklist and Some Practical Recommendations
- 4.2 Complete Examples of Data Screening
- 4.2.1 Screening Ungrouped Data
- 4.2.1.1 Accuracy of Input, Missing Data, Distributions, and Univariate Outliers
- 4.2.1.2 Linearity and Homoscedasticity
- 4.2.1.3 Transformation
- 4.2.1.4 Detecting Multivariate Outliers
- 4.2.1.5 Variables Causing Cases to Be Outliers
- 4.2.1.6 Multicollinearity
- 4.2.2 Screening Grouped Data
- 4.2.2.1 Accuracy of Input, Missing Data, Distributions, Homogeneity of Variance, and Univariate Outliers
- 4.2.2.2 Linearity
- 4.2.2.3 Multivariate Outliers
- 4.2.2.4 Variables Causing Cases to Be Outliers
- 4.2.2.5 Multicollinearity
- Chapter 5 Multiple Regression
- Learning Objectives
- 5.1 General Purpose and Description
- 5.2 Kinds of Research Questions
- 5.2.1 Degree of Relationship
- 5.2.2 Importance of IVs
- 5.2.3 Adding IVs
- 5.2.4 Changing IVs
- 5.2.5 Contingencies Among IVs
- 5.2.6 Comparing Sets of IVs
- 5.2.7 Predicting DV Scores for Members of a New Sample
- 5.2.8 Parameter Estimates
- 5.3 Limitations to Regression Analyses
- 5.3.1 Theoretical Issues
- 5.3.2 Practical Issues
- 5.3.2.1 Ratio of Cases to IVs
- 5.3.2.2 Absence of Outliers Among the IVs and on the DV
- 5.3.2.3 Absence of Multicollinearity and Singularity
- 5.3.2.4 Normality, Linearity, and Homoscedasticity of Residuals
- 5.3.2.5 Independence of Errors
- 5.3.2.6 Absence of Outliers in the Solution
- 5.4 Fundamental Equations for Multiple Regression
- 5.4.1 General Linear Equations
- 5.4.2 Matrix Equations
- 5.4.3 Computer Analyses of Small-Sample Example
- 5.5 Major Types of Multiple Regression
- 5.5.1 Standard Multiple Regression
- 5.5.2 Sequential Multiple Regression
- 5.5.3 Statistical (Stepwise) Regression
- 5.5.4 Choosing Among Regression Strategies
- 5.6 Some Important Issues
- 5.6.1 Importance of IVs
- 5.6.1.1 Standard Multiple Regression
- 5.6.1.2 Sequential or Statistical Regression
- 5.6.1.3 Commonality Analysis
- 5.6.1.4 Relative Importance Analysis
- 5.6.2 Statistical Inference
- 5.6.2.1 Test for Multiple R
- 5.6.2.2 Test of Regression Components
- 5.6.2.3 Test of Added Subset of IVs
- 5.6.2.4 Confidence Limits
- 5.6.2.5 Comparing Two Sets of Predictors
- 5.6.3 Adjustment of R2
- 5.6.4 Suppressor Variables
- 5.6.5 Regression Approach to ANOVA
- 5.6.6 Centering When Interactions and Powers of IVs Are Included
- 5.6.7 Mediation in Causal Sequence
- 5.7 Complete Examples of Regression Analysis
- 5.7.1 Evaluation of Assumptions
- 5.7.1.1 Ratio of Cases to IVs
- 5.7.1.2 Normality, Linearity, Homoscedasticity, and Independence of Residuals
- 5.7.1.3 Outliers
- 5.7.1.4 Multicollinearity and Singularity
- 5.7.2 Standard Multiple Regression
- 5.7.3 Sequential Regression
- 5.7.4 Example of Standard Multiple Regression with Missing Values Multiply Imputed
- 5.8 Comparison of Programs
- 5.8.1 IBM SPSS Package
- 5.8.2 SAS System
- 5.8.3 SYSTAT System
- Chapter 6 Analysis of Covariance
- Learning Objectives
- 6.1 General Purpose and Description
- 6.2 Kinds of Research Questions
- 6.2.1 Main Effects of IVs
- 6.2.2 Interactions Among IVs
- 6.2.3 Specific Comparisons and Trend Analysis
- 6.2.4 Effects of Covariates
- 6.2.5 Effect Size
- 6.2.6 Parameter Estimates
- 6.3 Limitations to Analysis of Covariance
- 6.3.1 Theoretical Issues
- 6.3.2 Practical Issues
- 6.3.2.1 Unequal Sample Sizes, Missing Data, and Ratio of Cases to IVs
- 6.3.2.2 Absence of Outliers
- 6.3.2.3 Absence of Multicollinearity and Singularity
- 6.3.2.4 Normality of Sampling Distributions
- 6.3.2.5 Homogeneity of Variance
- 6.3.2.6 Linearity
- 6.3.2.7 Homogeneity of Regression
- 6.3.2.8 Reliability of Covariates
- 6.4 Fundamental Equations for Analysis of Covariance
- 6.4.1 Sums of Squares and Cross-Products
- 6.4.2 Significance Test and Effect Size
- 6.4.3 Computer Analyses of Small-Sample Example
- 6.5 Some Important Issues
- 6.5.1 Choosing Covariates
- 6.5.2 Evaluation of Covariates
- 6.5.3 Test for Homogeneity of Regression
- 6.5.4 Design Complexity
- 6.5.4.1 Within-Subjects and Mixed Within-Between Designs
- 6.5.4.1.1 Same Covariate(s) for All Cells
- 6.5.4.1.2 Varying Covariate(s) Over Cells
- 6.5.4.2 Unequal Sample Sizes
- 6.5.4.3 Specific Comparisons and Trend Analysis
- 6.5.4.4 Effect Size
- 6.5.5 Alternatives to ANCOVA
- 6.6 Complete Example of Analysis of Covariance
- 6.6.1 Evaluation of Assumptions
- 6.6.1.1 Unequal n and Missing Data
- 6.6.1.2 Normality
- 6.6.1.3 Linearity
- 6.6.1.4 Outliers
- 6.6.1.5 Multicollinearity and Singularity
- 6.6.1.6 Homogeneity of Variance
- 6.6.1.7 Homogeneity of Regression
- 6.6.1.8 Reliability of Covariates
- 6.6.2 Analysis of Covariance
- 6.6.2.1 Main Analysis
- 6.6.2.2 Evaluation of Covariates
- 6.6.2.3 Homogeneity of Regression Run
- 6.7 Comparison of Programs
- 6.7.1 IBM SPSS Package
- 6.7.2 SAS System
- 6.7.3 SYSTAT System
- Chapter 7 Multivariate Analysis of Variance and Covariance
- Learning Objectives
- 7.1 General Purpose and Description
- 7.2 Kinds of Research Questions
- 7.2.1 Main Effects of IVs
- 7.2.2 Interactions Among IVs
- 7.2.3 Importance of DVs
- 7.2.4 Parameter Estimates
- 7.2.5 Specific Comparisons and Trend Analysis
- 7.2.6 Effect Size
- 7.2.7 Effects of Covariates
- 7.2.8 Repeated-Measures Analysis of Variance
- 7.3 Limitations to Multivariate Analysis of Variance and Covariance
- 7.3.1 Theoretical Issues
- 7.3.2 Practical Issues
- 7.3.2.1 Unequal Sample Sizes, Missing Data, and Power
- 7.3.2.2 Multivariate Normality
- 7.3.2.3 Absence of Outliers
- 7.3.2.4 Homogeneity of Variance–Covariance Matrices
- 7.3.2.5 Linearity
- 7.3.2.6 Homogeneity of Regression
- 7.3.2.7 Reliability of Covariates
- 7.3.2.8 Absence of Multicollinearity and Singularity
- 7.4 Fundamental Equations for Multivariate Analysis of Variance and Covariance
- 7.4.1 Multivariate Analysis of Variance
- 7.4.2 Computer Analyses of Small-Sample Example
- 7.4.3 Multivariate Analysis of Covariance
- 7.5 Some Important Issues
- 7.5.1 MANOVA Versus ANOVAs
- 7.5.2 Criteria for Statistical Inference
- 7.5.3 Assessing DVs
- 7.5.3.1 Univariate F
- 7.5.3.2 Roy–Bargmann Stepdown Analysis
- 7.5.3.3 Using Discriminant Analysis
- 7.5.3.4 Choosing Among Strategies for Assessing DVs
- 7.5.4 Specific Comparisons and Trend Analysis
- 7.5.5 Design Complexity
- 7.5.5.1 Within-Subjects and Between-Within Designs
- 7.5.5.2 Unequal Sample Sizes
- 7.6 Complete Examples of Multivariate Analysis of Variance and Covariance
- 7.6.1 Evaluation of Assumptions
- 7.6.1.1 Unequal Sample Sizes and Missing Data
- 7.6.1.2 Multivariate Normality
- 7.6.1.3 Linearity
- 7.6.1.4 Outliers
- 7.6.1.5 Homogeneity of Variance–Covariance Matrices
- 7.6.1.6 Homogeneity of Regression
- 7.6.1.7 Reliability of Covariates
- 7.6.1.8 Multicollinearity and Singularity
- 7.6.2 Multivariate Analysis of Variance
- 7.6.3 Multivariate Analysis of Covariance
- 7.6.3.1 Assessing Covariates
- 7.6.3.2 Assessing DVs
- 7.7 Comparison of Programs
- 7.7.1 IBM SPSS Package
- 7.7.2 SAS System
- 7.7.3 SYSTAT System
- Chapter 8 Profile Analysis: The Multivariate Approach to Repeated Measures
- Learning Objectives
- 8.1 General Purpose and Description
- 8.2 Kinds of Research Questions
- 8.2.1 Parallelism of Profiles
- 8.2.2 Overall Difference Among Groups
- 8.2.3 Flatness of Profiles
- 8.2.4 Contrasts Following Profile Analysis
- 8.2.5 Parameter Estimates
- 8.2.6 Effect Size
- 8.3 Limitations to Profile Analysis
- 8.3.1 Theoretical Issues
- 8.3.2 Practical Issues
- 8.3.2.1 Sample Size, Missing Data, and Power
- 8.3.2.2 Multivariate Normality
- 8.3.2.3 Absence of Outliers
- 8.3.2.4 Homogeneity of Variance–Covariance Matrices
- 8.3.2.5 Linearity
- 8.3.2.6 Absence of Multicollinearity and Singularity
- 8.4 Fundamental Equations for Profile Analysis
- 8.4.1 Differences in Levels
- 8.4.2 Parallelism
- 8.4.3 Flatness
- 8.4.4 Computer Analyses of Small-Sample Example
- 8.5 Some Important Issues
- 8.5.1 Univariate Versus Multivariate Approach to Repeated Measures
- 8.5.2 Contrasts in Profile Analysis
- 8.5.2.1 Parallelism and Flatness Significant, Levels Not Significant (Simple-Effects Analysis)
- 8.5.2.2 Parallelism and Levels Significant, Flatness Not Significant (Simple-Effects Analysis)
- 8.5.2.3 Parallelism, Levels, and Flatness Significant (Interaction Contrasts)
- 8.5.2.4 Only Parallelism Significant
- 8.5.3 Doubly Multivariate Designs
- 8.5.4 Classifying Profiles
- 8.5.5 Imputation of Missing Values
- 8.6 Complete Examples of Profile Analysis
- 8.6.1 Profile Analysis of Subscales of the WISC
- 8.6.1.1 Evaluation of Assumptions
- 8.6.1.1.1 Unequal Sample Sizes and Missing Data
- 8.6.1.1.2 Multivariate Normality
- 8.6.1.1.3 Linearity
- 8.6.1.1.4 Outliers
- 8.6.1.1.5 Homogeneity of Variance–Covariance Matrices
- 8.6.1.1.6 Multicollinearity and Singularity
- 8.6.1.2 Profile Analysis
- 8.6.2 Doubly Multivariate Analysis of Reaction Time
- 8.6.2.1 Evaluation of Assumptions
- 8.6.2.1.1 Unequal Sample Sizes, Missing Data, Multivariate Normality, and Linearity
- 8.6.2.1.2 Outliers
- 8.6.2.1.3 Homogeneity of Variance–Covariance Matrices
- 8.6.2.1.4 Homogeneity of Regression
- 8.6.2.1.5 Reliability of DVs
- 8.6.2.1.6 Multicollinearity and Singularity
- 8.6.2.2 Doubly Multivariate Analysis of Slope and Intercept
- 8.7 Comparison of Programs
- 8.7.1 IBM SPSS Package
- 8.7.2 SAS System
- 8.7.3 SYSTAT System
- Chapter 9 Discriminant Analysis
- Learning Objectives
- 9.1 General Purpose and Description
- 9.2 Kinds of Research Questions
- 9.2.1 Significance of Prediction
- 9.2.2 Number of Significant Discriminant Functions
- 9.2.3 Dimensions of Discrimination
- 9.2.4 Classification Functions
- 9.2.5 Adequacy of Classification
- 9.2.6 Effect Size
- 9.2.7 Importance of Predictor Variables
- 9.2.8 Significance of Prediction with Covariates
- 9.2.9 Estimation of Group Means
- 9.3 Limitations to Discriminant Analysis
- 9.3.1 Theoretical Issues
- 9.3.2 Practical Issues
- 9.3.2.1 Unequal Sample Sizes, Missing Data, and Power
- 9.3.2.2 Multivariate Normality
- 9.3.2.3 Absence of Outliers
- 9.3.2.4 Homogeneity of Variance–Covariance Matrices
- 9.3.2.5 Linearity
- 9.3.2.6 Absence of Multicollinearity and Singularity
- 9.4 Fundamental Equations for Discriminant Analysis
- 9.4.1 Derivation and Test of Discriminant Functions
- 9.4.2 Classification
- 9.4.3 Computer Analyses of Small-Sample Example
- 9.5 Types of Discriminant Analyses
- 9.5.1 Direct Discriminant Analysis
- 9.5.2 Sequential Discriminant Analysis
- 9.5.3 Stepwise (Statistical) Discriminant Analysis
- 9.6 Some Important Issues
- 9.6.1 Statistical Inference
- 9.6.1.1 Criteria for Overall Statistical Significance
- 9.6.1.2 Stepping Methods
- 9.6.2 Number of Discriminant Functions
- 9.6.3 Interpreting Discriminant Functions
- 9.6.3.1 Discriminant Function Plots
- 9.6.3.2 Structure Matrix of Loadings
- 9.6.4 Evaluating Predictor Variables
- 9.6.5 Effect Size
- 9.6.6 Design Complexity: Factorial Designs
- 9.6.7 Use of Classification Procedures
- 9.6.7.1 Cross-Validation and New Cases
- 9.6.7.2 Jackknifed Classification
- 9.6.7.3 Evaluating Improvement in Classification
- 9.7 Complete Example of Discriminant Analysis
- 9.7.1 Evaluation of Assumptions
- 9.7.1.1 Unequal Sample Sizes and Missing Data
- 9.7.1.2 Multivariate Normality
- 9.7.1.3 Linearity
- 9.7.1.4 Outliers
- 9.7.1.5 Homogeneity of Variance–Covariance Matrices
- 9.7.1.6 Multicollinearity and Singularity
- 9.7.2 Direct Discriminant Analysis
- 9.8 Comparison of Programs
- 9.8.1 IBM SPSS Package
- 9.8.2 SAS System
- 9.8.3 SYSTAT System
- Chapter 10 Logistic Regression
- Learning Objectives
- 10.1 General Purpose and Description
- 10.2 Kinds of Research Questions
- 10.2.1 Prediction of Group Membership or Outcome
- 10.2.2 Importance of Predictors
- 10.2.3 Interactions Among Predictors
- 10.2.4 Parameter Estimates
- 10.2.5 Classification of Cases
- 10.2.6 Significance of Prediction with Covariates
- 10.2.7 Effect Size
- 10.3 Limitations to Logistic Regression Analysis
- 10.3.1 Theoretical Issues
- 10.3.2 Practical Issues
- 10.3.2.1 Ratio of Cases to Variables
- 10.3.2.2 Adequacy of Expected Frequencies and Power
- 10.3.2.3 Linearity in the Logit
- 10.3.2.4 Absence of Multicollinearity
- 10.3.2.5 Absence of Outliers in the Solution
- 10.3.2.6 Independence of Errors
- 10.4 Fundamental Equations for Logistic Regression
- 10.4.1 Testing and Interpreting Coefficients
- 10.4.2 Goodness of Fit
- 10.4.3 Comparing Models
- 10.4.4 Interpretation and Analysis of Residuals
- 10.4.5 Computer Analyses of Small-Sample Example
- 10.5 Types of Logistic Regression
- 10.5.1 Direct Logistic Regression
- 10.5.2 Sequential Logistic Regression
- 10.5.3 Statistical (Stepwise) Logistic Regression
- 10.5.4 Probit and Other Analyses
- 10.6 Some Important Issues
- 10.6.1 Statistical Inference
- 10.6.1.1 Assessing Goodness of Fit of Models
- 10.6.1.1.1 Constant-Only Versus Full Model
- 10.6.1.1.2 Comparison with a Perfect (Hypothetical) Model
- 10.6.1.1.3 Deciles of Risk
- 10.6.1.2 Tests of Individual Predictors
- 10.6.2 Effect Sizes
- 10.6.2.1 Effect Size for a Model
- 10.6.2.2 Effect Sizes for Predictors
- 10.6.3 Interpretation of Coefficients Using Odds
- 10.6.4 Coding Outcome and Predictor Categories
- 10.6.5 Number and Type of Outcome Categories
- 10.6.6 Classification of Cases
- 10.6.7 Hierarchical and Nonhierarchical Analysis
- 10.6.8 Importance of Predictors
- 10.6.9 Logistic Regression for Matched Groups
- 10.7 Complete Examples of Logistic Regression
- 10.7.1 Evaluation of Limitations
- 10.7.1.1 Ratio of Cases to Variables and Missing Data
- 10.7.1.2 Multicollinearity
- 10.7.1.3 Outliers in the Solution
- 10.7.2 Direct Logistic Regression with Two-Category Outcome and Continuous Predictors
- 10.7.2.1 Limitation: Linearity in the Logit
- 10.7.2.2 Direct Logistic Regression with Two-Category Outcome
- 10.7.3 Sequential Logistic Regression with Three Categories of Outcome
- 10.7.3.1 Limitations of Multinomial Logistic Regression
- 10.7.3.1.1 Adequacy of Expected Frequencies
- 10.7.3.1.2 Linearity in the Logit
- 10.7.3.2 Sequential Multinomial Logistic Regression
- 10.8 Comparison of Programs
- 10.8.1 IBM SPSS Package
- 10.8.2 SAS System
- 10.8.3 SYSTAT System
- Chapter 11 Survival/Failure Analysis
- Learning Objectives
- 11.1 General Purpose and Description
- 11.2 Kinds of Research Questions
- 11.2.1 Proportions Surviving at Various Times
- 11.2.2 Group Differences in Survival
- 11.2.3 Survival Time with Covariates
- 11.2.3.1 Treatment Effects
- 11.2.3.2 Importance of Covariates
- 11.2.3.3 Parameter Estimates
- 11.2.3.4 Contingencies Among Covariates
- 11.2.3.5 Effect Size and Power
- 11.3 Limitations to Survival Analysis
- 11.3.1 Theoretical Issues
- 11.3.2 Practical Issues
- 11.3.2.1 Sample Size and Missing Data
- 11.3.2.2 Normality of Sampling Distributions, Linearity, and Homoscedasticity
- 11.3.2.3 Absence of Outliers
- 11.3.2.4 Differences Between Withdrawn and Remaining Cases
- 11.3.2.5 Change in Survival Conditions over Time
- 11.3.2.6 Proportionality of Hazards
- 11.3.2.7 Absence of Multicollinearity
- 11.4 Fundamental Equations for Survival Analysis
- 11.4.1 Life Tables
- 11.4.2 Standard Error of Cumulative Proportion Surviving
- 11.4.3 Hazard and Density Functions
- 11.4.4 Plot of Life Tables
- 11.4.5 Test for Group Differences
- 11.4.6 Computer Analyses of Small-Sample Example
- 11.5 Types of Survival Analyses
- 11.5.1 Actuarial and Product-Limit Life Tables and Survivor Functions
- 11.5.2 Prediction of Group Survival Times from Covariates
- 11.5.2.1 Direct, Sequential, and Statistical Analysis
- 11.5.2.2 Cox Proportional-Hazards Model
- 11.5.2.3 Accelerated Failure-Time Models
- 11.5.2.4 Choosing a Method
- 11.6 Some Important Issues
- 11.6.1 Proportionality of Hazards
- 11.6.2 Censored Data
- 11.6.2.1 Right-Censored Data
- 11.6.2.2 Other Forms of Censoring
- 11.6.3 Effect Size and Power
- 11.6.4 Statistical Criteria
- 11.6.4.1 Test Statistics for Group Differences in Survival Functions
- 11.6.4.2 Test Statistics for Prediction From Covariates
- 11.6.5 Predicting Survival Rate
- 11.6.5.1 Regression Coefficients (Parameter Estimates)
- 11.6.5.2 Hazard Ratios
- 11.6.5.3 Expected Survival Rates
- 11.7 Complete Example of Survival Analysis
- 11.7.1 Evaluation of Assumptions
- 11.7.1.1 Accuracy of Input, Adequacy of Sample Size, Missing Data, and Distributions
- 11.7.1.2 Outliers
- 11.7.1.3 Differences Between Withdrawn and Remaining Cases
- 11.7.1.4 Change in Survival Experience over Time
- 11.7.1.5 Proportionality of Hazards
- 11.7.1.6 Multicollinearity
- 11.7.2 Cox Regression Survival Analysis
- 11.7.2.1 Effect of Drug Treatment
- 11.7.2.2 Evaluation of Other Covariates
- 11.8 Comparison of Programs
- 11.8.1 SAS System
- 11.8.2 IBM SPSS Package
- 11.8.3 SYSTAT System
- Chapter 12 Canonical Correlation
- Learning Objectives
- 12.1 General Purpose and Description
- 12.2 Kinds of Research Questions
- 12.2.1 Number of Canonical Variate Pairs
- 12.2.2 Interpretation of Canonical Variates
- 12.2.3 Importance of Canonical Variates and Predictors
- 12.2.4 Canonical Variate Scores
- 12.3 Limitations
- 12.3.1 Theoretical Limitations
- 12.3.2 Practical Issues
- 12.3.2.1 Ratio of Cases to IVs
- 12.3.2.2 Normality, Linearity, and Homoscedasticity
- 12.3.2.3 Missing Data
- 12.3.2.4 Absence of Outliers
- 12.3.2.5 Absence of Multicollinearity and Singularity
- 12.4 Fundamental Equations for Canonical Correlation
- 12.4.1 Eigenvalues and Eigenvectors
- 12.4.2 Matrix Equations
- 12.4.3 Proportions of Variance Extracted
- 12.4.4 Computer Analyses of Small-Sample Example
- 12.5 Some Important Issues
- 12.5.1 Importance of Canonical Variates
- 12.5.2 Interpretation of Canonical Variates
- 12.6 Complete Example of Canonical Correlation
- 12.6.1 Evaluation of Assumptions
- 12.6.1.1 Missing Data
- 12.6.1.2 Normality, Linearity, and Homoscedasticity
- 12.6.1.3 Outliers
- 12.6.1.4 Multicollinearity and Singularity
- 12.6.2 Canonical Correlation
- 12.7 Comparison of Programs
- 12.7.1 SAS System
- 12.7.2 IBM SPSS Package
- 12.7.3 SYSTAT System
- Chapter 13 Principal Components and Factor Analysis
- Learning Objectives
- 13.1 General Purpose and Description
- 13.2 Kinds of Research Questions
- 13.2.1 Number of Factors
- 13.2.2 Nature of Factors
- 13.2.3 Importance of Solutions and Factors
- 13.2.4 Testing Theory in FA
- 13.2.5 Estimating Scores on Factors
- 13.3 Limitations
- 13.3.1 Theoretical Issues
- 13.3.2 Practical Issues
- 13.3.2.1 Sample Size and Missing Data
- 13.3.2.2 Normality
- 13.3.2.3 Linearity
- 13.3.2.4 Absence of Outliers Among Cases
- 13.3.2.5 Absence of Multicollinearity and Singularity
- 13.3.2.6 Factorability of R
- 13.3.2.7 Absence of Outliers Among Variables
- 13.4 Fundamental Equations for Factor Analysis
- 13.4.1 Extraction
- 13.4.2 Orthogonal Rotation
- 13.4.3 Communalities, Variance, and Covariance
- 13.4.4 Factor Scores
- 13.4.5 Oblique Rotation
- 13.4.6 Computer Analyses of Small-Sample Example
- 13.5 Major Types of Factor Analyses
- 13.5.1 Factor Extraction Techniques
- 13.5.1.1 PCA Versus FA
- 13.5.1.2 Principal Components
- 13.5.1.3 Principal Factors
- 13.5.1.4 Image Factor Extraction
- 13.5.1.5 Maximum Likelihood Factor Extraction
- 13.5.1.6 Unweighted Least Squares Factoring
- 13.5.1.7 Generalized (Weighted) Least Squares Factoring
- 13.5.1.8 Alpha Factoring
- 13.5.2 Rotation
- 13.5.2.1 Orthogonal Rotation
- 13.5.2.2 Oblique Rotation
- 13.5.2.3 Geometric Interpretation
- 13.5.3 Some Practical Recommendations
- 13.6 Some Important Issues
- 13.6.1 Estimates of Communalities
- 13.6.2 Adequacy of Extraction and Number of Factors
- 13.6.3 Adequacy of Rotation and Simple Structure
- 13.6.4 Importance and Internal Consistency of Factors
- 13.6.5 Interpretation of Factors
- 13.6.6 Factor Scores
- 13.6.7 Comparisons Among Solutions and Groups
- 13.7 Complete Example of FA
- 13.7.1 Evaluation of Limitations
- 13.7.1.1 Sample Size and Missing Data
- 13.7.1.2 Normality
- 13.7.1.3 Linearity
- 13.7.1.4 Outliers
- 13.7.1.5 Multicollinearity and Singularity
- 13.7.1.6 Factorability of R
- 13.7.1.7 Outliers Among Variables
- 13.7.2 Principal Factors Extraction with Varimax Rotation
- 13.8 Comparison of Programs
- 13.8.1 IBM SPSS Package
- 13.8.2 SAS System
- 13.8.3 SYSTAT System
- Chapter 14 Structural Equation Modeling
- Learning Objectives
- 14.1 General Purpose and Description
- 14.2 Kinds of Research Questions
- 14.2.1 Adequacy of the Model
- 14.2.2 Testing Theory
- 14.2.3 Amount of Variance in the Variables Accounted for by the Factors
- 14.2.4 Reliability of the Indicators
- 14.2.5 Parameter Estimates
- 14.2.6 Intervening Variables
- 14.2.7 Group Differences
- 14.2.8 Longitudinal Differences
- 14.2.9 Multilevel Modeling
- 14.2.10 Latent Class Analysis
- 14.3 Limitations to Structural Equation Modeling
- 14.3.1 Theoretical Issues
- 14.3.2 Practical Issues
- 14.3.2.1 Sample Size and Missing Data
- 14.3.2.2 Multivariate Normality and Outliers
- 14.3.2.3 Linearity
- 14.3.2.4 Absence of Multicollinearity and Singularity
- 14.3.2.5 Residuals
- 14.4 Fundamental Equations for Structural Equation Modeling
- 14.4.1 Covariance Algebra
- 14.4.2 Model Hypotheses
- 14.4.3 Model Specification
- 14.4.4 Model Estimation
- 14.4.5 Model Evaluation
- 14.4.6 Computer Analysis of Small-Sample Example
- 14.5 Some Important Issues
- 14.5.1 Model Identification
- 14.5.2 Estimation Techniques
- 14.5.2.1 Estimation Methods and Sample Size
- 14.5.2.2 Estimation Methods and Nonnormality
- 14.5.2.3 Estimation Methods and Dependence
- 14.5.2.4 Some Recommendations for Choice of Estimation Method
- 14.5.3 Assessing the Fit of the Model
- 14.5.3.1 Comparative Fit Indices
- 14.5.3.2 Absolute Fit Index
- 14.5.3.3 Indices of Proportion of Variance Accounted For
- 14.5.3.4 Degree of Parsimony Fit Indices
- 14.5.3.5 Residual-Based Fit Indices
- 14.5.3.6 Choosing Among Fit Indices
- 14.5.4 Model Modification
- 14.5.4.1 Chi-Square Difference Test
- 14.5.4.2 Lagrange Multiplier (LM) Test
- 14.5.4.3 Wald Test
- 14.5.4.4 Some Caveats and Hints on Model Modification
- 14.5.5 Reliability and Proportion of Variance
- 14.5.6 Discrete and Ordinal Data
- 14.5.7 Multiple Group Models
- 14.5.8 Mean and Covariance Structure Models
- 14.6 Complete Examples of Structural Equation Modeling Analysis
- 14.6.1 Confirmatory Factor Analysis of the WISC
- 14.6.1.1 Model Specification for CFA
- 14.6.1.2 Evaluation of Assumptions for CFA
- 14.6.1.2.1 Sample Size and Missing Data
- 14.6.1.2.2 Normality and Linearity
- 14.6.1.2.3 Outliers
- 14.6.1.2.4 Multicollinearity and Singularity
- 14.6.1.2.5 Residuals
- 14.6.1.3 CFA Model Estimation and Preliminary Evaluation
- 14.6.1.4 Model Modification
- The Hypothesized Model
- Assumptions
- Model Estimation
- 14.6.2 SEM of Health Data
- 14.6.2.1 SEM Model Specification
- 14.6.2.2 Evaluation of Assumptions for SEM
- 14.6.2.2.1 Sample Size and Missing Data
- 14.6.2.2.2 Normality and Linearity
- 14.6.2.2.3 Outliers
- 14.6.2.2.4 Multicollinearity and Singularity
- 14.6.2.2.5 Adequacy of Covariances
- 14.6.2.2.6 Residuals
- 14.6.2.3 SEM Model Estimation and Preliminary Evaluation
- 14.6.2.4 Model Modification
- The Hypothesized Model
- Assumptions
- Model Estimation
- Direct Effects
- Indirect Effects
- 14.7 Comparison of Programs
- 14.7.1 EQS
- 14.7.2 LISREL
- 14.7.3 AMOS
- 14.7.4 SAS System
- Chapter 15 Multilevel Linear Modeling
- Learning Objectives
- 15.1 General Purpose and Description
- 15.2 Kinds of Research Questions
- 15.2.1 Group Differences in Means
- 15.2.2 Group Differences in Slopes
- 15.2.3 Cross-Level Interactions
- 15.2.4 Meta-Analysis
- 15.2.5 Relative Strength of Predictors at Various Levels
- 15.2.6 Individual and Group Structure
- 15.2.7 Effect Size
- 15.2.8 Path Analysis at Individual and Group Levels
- 15.2.9 Analysis of Longitudinal Data
- 15.2.10 Multilevel Logistic Regression
- 15.2.11 Multiple Response Analysis
- 15.3 Limitations to Multilevel Linear Modeling
- 15.3.1 Theoretical Issues
- 15.3.2 Practical Issues
- 15.3.2.1 Sample Size, Unequal n, and Missing Data
- 15.3.2.2 Independence of Errors
- 15.3.2.3 Absence of Multicollinearity and Singularity
- 15.4 Fundamental Equations
- 15.4.1 Intercepts-Only Model
- 15.4.1.1 The Intercepts-Only Model: Level-1 Equation
- 15.4.1.2 The Intercepts-Only Model: Level-2 Equation
- 15.4.1.3 Computer Analyses of Intercepts-Only Model
- 15.4.2 Model with a First-Level Predictor
- 15.4.2.1 Level-1 Equation for a Model With a Level-1 Predictor
- 15.4.2.2 Level-2 Equations for a Model With a Level-1 Predictor
- 15.4.2.3 Computer Analysis of a Model With a Level-1 Predictor
- 15.4.3 Model with Predictors at First and Second Levels
- 15.4.3.1 Level-1 Equation for Model with Predictors at Both Levels
- 15.4.3.2 Level-2 Equations for Model with Predictors at Both Levels
- 15.4.3.3 Computer Analyses of Model With Predictors at First and Second Levels
- 15.5 Types of MLM
- 15.5.1 Repeated Measures
- 15.5.2 Higher-Order MLM
- 15.5.3 Latent Variables
- 15.5.4 Nonnormal Outcome Variables
- 15.5.5 Multiple Response Models
- 15.6 Some Important Issues
- 15.6.1 Intraclass Correlation
- 15.6.2 Centering Predictors and Changes in Their Interpretations
- 15.6.3 Interactions
- 15.6.4 Random and Fixed Intercepts and Slopes
- 15.6.5 Statistical Inference
- 15.6.5.1 Assessing Models
- 15.6.5.2 Tests of Individual Effects
- 15.6.6 Effect Size
- 15.6.7 Estimation Techniques and Convergence Problems
- 15.6.8 Exploratory Model Building
- 15.7 Complete Example of MLM
- 15.7.1 Evaluation of Assumptions
- 15.7.1.1 Sample Sizes, Missing Data, and Distributions
- 15.7.1.2 Outliers
- 15.7.1.3 Multicollinearity and Singularity
- 15.7.1.4 Independence of Errors: Intraclass Correlations
- 15.7.2 Multilevel Modeling
- Hypothesized Model
- Assumptions
- Multilevel Modeling
- 15.8 Comparison of Programs
- 15.8.1 SAS System
- 15.8.2 IBM SPSS Package
- 15.8.3 HLM Program
- 15.8.4 MLwiN Program
- 15.8.5 SYSTAT System
- Chapter 16 Multiway Frequency Analysis
- Learning Objectives
- 16.1 General Purpose and Description
- 16.2 Kinds of Research Questions
- 16.2.1 Associations Among Variables
- 16.2.2 Effect on a Dependent Variable
- 16.2.3 Parameter Estimates
- 16.2.4 Importance of Effects
- 16.2.5 Effect Size
- 16.2.6 Specific Comparisons and Trend Analysis
- 16.3 Limitations to Multiway Frequency Analysis
- 16.3.1 Theoretical Issues
- 16.3.2 Practical Issues
- 16.3.2.1 Independence
- 16.3.2.2 Ratio of Cases to Variables
- 16.3.2.3 Adequacy of Expected Frequencies
- 16.3.2.4 Absence of Outliers in the Solution
- 16.4 Fundamental Equations for Multiway Frequency Analysis
- 16.4.1 Screening for Effects
- 16.4.1.1 Total Effect
- 16.4.1.2 First-Order Effects
- 16.4.1.3 Second-Order Effects
- 16.4.1.4 Third-Order Effect
- 16.4.2 Modeling
- 16.4.3 Evaluation and Interpretation
- 16.4.3.1 Residuals
- 16.4.3.2 Parameter Estimates
- 16.4.4 Computer Analyses of Small-Sample Example
- 16.5 Some Important Issues
- 16.5.1 Hierarchical and Nonhierarchical Models
- 16.5.2 Statistical Criteria
- 16.5.2.1 Tests of Models
- 16.5.2.2 Tests of Individual Effects
- 16.5.3 Strategies for Choosing a Model
- 16.5.3.1 IBM SPSS HILOGLINEAR (Hierarchical)
- 16.5.3.2 IBM SPSS GENLOG (General Log-Linear)
- 16.5.3.3 SAS CATMOD and IBM SPSS LOGLINEAR (General Log-Linear)
- 16.6 Complete Example of Multiway Frequency Analysis
- 16.6.1 Evaluation of Assumptions: Adequacy of Expected Frequencies
- 16.6.2 Hierarchical Log-Linear Analysis
- 16.6.2.1 Preliminary Model Screening
- 16.6.2.2 Stepwise Model Selection
- 16.6.2.3 Adequacy of Fit
- 16.6.2.4 Interpretation of the Selected Model
- 16.7 Comparison of Programs
- 16.7.1 IBM SPSS Package
- 16.7.2 SAS System
- 16.7.3 SYSTAT System
- Chapter 17 Time-Series Analysis
- Learning Objectives
- 17.1 General Purpose and Description
- 17.2 Kinds of Research Questions
- 17.2.1 Pattern of Autocorrelation
- 17.2.2 Seasonal Cycles and Trends
- 17.2.3 Forecasting
- 17.2.4 Effect of an Intervention
- 17.2.5 Comparing Time Series
- 17.2.6 Time Series with Covariates
- 17.2.7 Effect Size and Power
- 17.3 Assumptions of Time-Series Analysis
- 17.3.1 Theoretical Issues
- 17.3.2 Practical Issues
- 17.3.2.1 Normality of Distributions of Residuals
- 17.3.2.2 Homogeneity of Variance and Zero Mean of Residuals
- 17.3.2.3 Independence of Residuals
- 17.3.2.4 Absence of Outliers
- 17.3.2.5 Sample Size and Missing Data
- 17.4 Fundamental Equations for Time-Series ARIMA Models
- 17.4.1 Identification of ARIMA (p, d, q) Models
- 17.4.1.1 Trend Components, d: Making the Process Stationary
- 17.4.1.2 Auto-Regressive Components
- 17.4.1.3 Moving Average Components
- 17.4.1.4 Mixed Models
- 17.4.1.5 ACFs and PACFs
- 17.4.2 Estimating Model Parameters
- 17.4.3 Diagnosing a Model
- 17.4.4 Computer Analysis of Small-Sample Time-Series Example
- 17.5 Types of Time-Series Analyses
- 17.5.1 Models with Seasonal Components
- 17.5.2 Models with Interventions
- 17.5.2.1 Abrupt, Permanent Effects
- 17.5.2.2 Abrupt, Temporary Effects
- 17.5.2.3 Gradual, Permanent Effects
- 17.5.2.4 Models with Multiple Interventions
- 17.5.3 Adding Continuous Variables
- 17.6 Some Important Issues
- 17.6.1 Patterns of ACFs and PACFs
- 17.6.2 Effect Size
- 17.6.3 Forecasting
- 17.6.4 Statistical Methods for Comparing Two Models
- 17.7 Complete Examples of Time-Series Analysis
- 17.7.1 Time-Series Analysis of Introduction of Seat Belt Law
- 17.7.1.1 Evaluation of Assumptions
- 17.7.1.1.1 Normality of Sampling Distributions
- 17.7.1.1.2 Homogeneity of Variance
- 17.7.1.1.3 Outliers
- 17.7.1.2 Baseline Model Identification and Estimation
- 17.7.1.3 Baseline Model Diagnosis
- 17.7.1.4 Intervention Analysis
- 17.7.1.4.1 Model Diagnosis
- 17.7.1.4.2 Model Interpretation
- 17.7.2 Time-Series Analysis of Introduction of a Dashboard to an Educational Computer Game
- 17.7.2.1 Evaluation of Assumptions
- 17.7.2.1.1 Normality of Sampling Distributions and Homogeneity of Variance
- 17.7.2.1.2 Outliers
- 17.7.2.2 Baseline Model Identification and Diagnosis
- 17.7.2.3 Intervention Analysis
- 17.7.2.3.1 Model Diagnosis
- 17.7.2.3.2 Model Interpretation
- 17.8 Comparison of Programs
- 17.8.1 IBM SPSS Package
- 17.8.2 SAS System
- 17.8.3 SYSTAT System
- Chapter 18 An Overview of the General Linear Model
- Learning Objectives
- 18.1 Linearity and the General Linear Model
- 18.2 Bivariate to Multivariate Statistics and Overview of Techniques
- 18.2.1 Bivariate Form
- 18.2.2 Simple Multivariate Form
- 18.2.3 Full Multivariate Form
- 18.3 Alternative Research Strategies
- Appendix A A Skimpy Introduction to Matrix Algebra
- A.1 The Trace of a Matrix
- A.2 Addition or Subtraction of a Constant to a Matrix
- A.3 Multiplication or Division of a Matrix by a Constant
- A.4 Addition and Subtraction of Two Matrices
- A.5 Multiplication, Transposes, and Square Roots of Matrices
- A.6 Matrix “Division” (Inverses and Determinants)
- A.7 Eigenvalues and Eigenvectors: Procedures for Consolidating Variance from a Matrix
- Appendix B Research Designs for Complete Examples
- B.1 Women’s Health and Drug Study
- Method
- B.2 Sexual Attraction Study
- Method
- B.3 Learning Disabilities Data Bank
- B.4 Reaction Time to Identify Figures
- B.5 Field Studies of Noise-Induced Sleep Disturbance
- B.6 Clinical Trial for Primary Biliary Cirrhosis
- B.7 Impact of Seat Belt Law
- B.8 The Selene Online Educational Game
- Appendix C Statistical Tables
- References
- Index