### Glossary Index

###### 2

- 2D Bar/Column Plots
- 2D Box Plots
- 2D Box Plots - Box Whiskers
- 2D Box Plots - Boxes
- 2D Box Plots - Columns
- 2D Box Plots - Error Bars
- 2D Box Plots - Whiskers
- 2D Categorized Detrended Probability Plots
- 2D Categorized Half-Norm. Probability Plots
- 2D Categorized Normal Probability Plots
- 2D Detrended Probability Plots
- 2D Histograms
- 2D Histograms - Double-Y
- 2D Histograms - Hanging Bars
- 2D Line Plots
- 2D Line Plots - Aggregated
- 2D Line Plots - Double-Y
- 2D Line Plots - Multiple
- 2D Line Plots - Regular
- 2D Line Plots - XY Trace
- 2D Range Plots - Error Bars
- 2D Matrix Plots
- 2D Matrix Plots - Columns
- 2D Matrix Plots - Lines
- 2D Matrix Plots - Scatterplot
- 2D Normal Probability Plots
- 2D Probability-Probability Plots
- 2D Probability-Probability Plots-Categorized
- 2D Quantile-Quantile Plots
- 2D Quantile-Quantile Plots - Categorized
- 2D Scatterplot
- 2D Scatterplot - Categorized Ternary Graph
- 2D Scatterplot - Double-Y
- 2D Scatterplot - Frequency
- 2D Scatterplot - Multiple
- 2D Scatterplot - Regular
- 2D Scatterplot - Voronoi
- 2D Sequential/Stacked Plots
- 2D Sequential/Stacked Plots - Area
- 2D Sequential/Stacked Plots - Column
- 2D Sequential/Stacked Plots - Lines
- 2D Sequential/Stacked Plots - Mixed Line
- 2D Sequential/Stacked Plots - Mixed Step
- 2D Sequential/Stacked Plots - Step
- 2D Sequential/Stacked Plots - Step Area
- 2D Ternary Plots - Scatterplot

###### 3

- 3D Bivariate Histogram
- 3D Box Plots
- 3D Box Plots - Border-style Ranges
- 3D Box Plots - Double Ribbon Ranges
- 3D Box Plots - Error Bars
- 3D Box Plots - Flying Blocks
- 3D Box Plots - Flying Boxes
- 3D Box Plots - Points
- 3D Categorized Plots - Contour Plot
- 3D Categorized Plots - Deviation Plot
- 3D Categorized Plots - Scatterplot
- 3D Categorized Plots - Space Plot
- 3D Categorized Plots - Spectral Plot
- 3D Categorized Plots - Surface Plot
- 3D Deviation Plots
- 3D Range Plot - Error Bars
- 3D Raw Data Plots - Contour/Discrete
- 3D Scatterplots
- 3D Scatterplots - Ternary Graph
- 3D Space Plots
- 3D Ternary Plots
- 3D Ternary Plots - Categorized Scatterplot
- 3D Ternary Plots - Categorized Space
- 3D Ternary Plots - Categorized Surface
- 3D Ternary Plots - Categorized Trace
- 3D Ternary Plots - Contour/Areas
- 3D Ternary Plots - Contour/Lines
- 3D Ternary Plots - Deviation
- 3D Ternary Plots - Space
- 3D Trace Plots

###### A

- Aberration, Minimum
- Abrupt Permanent Impact
- Abrupt Temporary Impact
- Accept-Support Testing
- Accept Threshold
- Activation Function (in Neural Networks)
- Additive Models
- Additive Season, Damped Trend
- Additive Season, Exponential Trend
- Additive Season, Linear Trend
- Additive Season, No Trend
- Adjusted means
- Aggregation
- AID
- Akaike Information Criterion (AIC)
- Algorithm
- Alpha
- Anderson-Darling Test
- ANOVA
- Append a Network
- Append Cases and/or Variables
- Application Programming Interface (API)
- Arrow
- Assignable Causes and Actions
- Association Rules
- Asymmetrical Distribution
- AT&T Runs Rules
- Attribute (attribute variable)
- Augmented Product Moment Matrix
- Autoassociative Network
- Automatic Network Designer

###### B

- B Coefficients
- Back Propagation
- Bagging (Voting, Averaging)
- Balanced ANOVA Design
- Banner Tables
- Bar/Column Plots, 2D
- Bar Dev Plot
- Bar Left Y Plot
- Bar Right Y Plot
- Bar Top Plot
- Bar X Plot
- Bartlett Window
- Basis Functions
- Batch algorithms in *STATISTICA Neural Net*
- Bayesian Information Criterion (BIC)
- Bayesian Networks
- Bayesian Statistics
- Bernoulli Distribution
- Best Network Retention
- Best Subset Regression
- Beta Coefficients
- Beta Distribution
- Bimodal Distribution
- Binomial Distribution
- Bivariate Normal Distribution
- Blocking
- Bonferroni Adjustment
- Bonferroni Test
- Boosting
- Boundary Case
- Box Plot/Medians (Block Stats Graphs)
- Box Plot/Means (Block Stats Graphs)
- Box Plots, 2D
- Box Plots, 2D - Box Whiskers
- Box Plots, 2D - Boxes
- Box Plots, 2D - Whiskers
- Box Plots, 3D
- Box Plots, 3D - Border-Style Ranges
- Box Plots, 3D - Double Ribbon Ranges
- Box Plots, 3D - Error Bars
- Box Plots, 3D - Flying Blocks
- Box Plots, 3D - Flying Boxes
- Box Plots, 3D - Points
- Box-Ljung Q Statistic
- Breakdowns
- Breaking Down (Categorizing)
- Brown-Forsythe Homogeneity of Variances
- Brushing
- Burt Table

###### C

- Canonical Correlation
- Cartesian Coordinates
- Casewise Missing Data Deletion
- Categorical Dependent Variable
- Categorical Predictor
- Categorized Graphs
- Categorized Plots, 2D-Detrended Prob. Plots
- Categorized Plots, 2D-Half-Normal Prob. Plots
- Categorized Plots, 2D - Normal Prob. Plots
- Categorized Plots, 2D - Prob.-Prob. Plots
- Categorized Plots, 2D - Quantile Plots
- Categorized Plots, 3D - Contour Plot
- Categorized Plots, 3D - Deviation Plot
- Categorized Plots, 3D - Scatterplot
- Categorized Plots, 3D - Space Plot
- Categorized Plots, 3D - Spectral Plot
- Categorized Plots, 3D - Surface Plot
- Categorized 3D Scatterplot (Ternary graph)
- Categorized Contour/Areas (Ternary graph)
- Categorized Contour/Lines (Ternary graph)
- Categorizing
- Cauchy Distribution
- Cause-and-Effect Diagram
- Censoring (Censored Observations)
- Censoring, Left
- Censoring, Multiple
- Censoring, Right
- Censoring, Single
- Censoring, Type I
- Censoring, Type II
- CHAID
- Characteristic Life
- Chernoff Faces (Icon Plots)
- *Chi*-square Distribution
- Circumplex
- City-Block (Manhattan) Distance
- Classification
- Classification (in Neural Networks)
- Classification and Regression Trees
- Classification by Labeled Exemplars (in NN)
- Classification Statistics (in Neural Networks)
- Classification Thresholds (in Neural Networks)
- Classification Trees
- Class Labeling (in Neural Networks)
- Cluster Analysis
- Cluster Diagram (in Neural Networks)
- Cluster Networks (in Neural Networks)
- Coarse Coding
- Codes
- Coding Variable
- Coefficient of Determination
- Coefficient of Variation
- Column Sequential/Stacked Plot
- Columns (Box Plot)
- Columns (Icon Plot)
- Common Causes
- Communality
- Complex Numbers
- Conditional Probability
- Conditioning (Categorizing)
- Confidence Interval
- Confidence Interval for the Mean
- Confidence Interval vs. Prediction Interval
- Confidence Limits
- Confidence Value (Association Rules)
- Confusion Matrix (in Neural Networks)
- Conjugate Gradient Descent (in Neural Net)
- Continuous Dependent Variable
- Contour/Discrete Raw Data Plot
- Contour Plot
- Control, Quality
- Cook's Distance
- Correlation
- Correlation, Intraclass
- Correlation (Pearson r)
- Correlation Value (Association Rules)
- Correspondence Analysis
- Cox-Snell Gen. Coefficient Determination
- Cpk, Cp, Cr
- CRISP
- Cross Entropy (in Neural Networks)
- Cross Verification (in Neural Networks)
- Cross-Validation
- Crossed Factors
- Crosstabulations
- C-SVM Classification
- Cubic Spline Smoother
- "Curse" of Dimensionality

###### D

- Daniell (or Equal Weight) Window
- Data Mining
- Data Preparation Phase
- Data Reduction
- Data Rotation (in 3D space)
- Data Warehousing
- Decision Trees
- Degrees of Freedom
- Deleted Residual
- Denominator Synthesis
- Dependent t-test
- Dependent vs. Independent Variables
- Deployment
- Derivative-Free Funct. Min. Algorithms
- Design, Experimental
- Design Matrix
- Desirability Profiles
- Detrended Probability Plots
- Deviance
- Deviance Residuals
- Deviation
- Deviation Assign. Algorithms (in Neural Net)
- Deviation Plot (Ternary Graph)
- Deviation Plots, 3D
- DFFITS
- DIEHARD Suite of Tests & Randm. Num. Gen.
- Differencing (in Time Series)
- Dimensionality Reduction
- Discrepancy Function
- Discriminant Function Analysis
- Distribution Function
- DOE
- Document Frequency
- Double-Y Histograms
- Double-Y Line Plots
- Double-Y Scatterplot
- Drill-Down Analysis
- Drilling-down (Categorizing)
- Duncan's test
- Dunnett's test
- DV

###### E

- Effective Hypothesis Decomposition
- Efficient Score Statistic
- Eigenvalues
- Ellipse, Prediction Area and Range
- EM Clustering
- Endogenous Variable
- Ensembles (in Neural Networks)
- Enterprise Resource Planning (ERP)
- Enterprise SPC
- Enterprise-Wide Software Systems
- Entropy
- Epoch in (Neural Networks)
- Eps
- EPSEM Samples
- ERP
- Error Bars (2D Box Plots)
- Error Bars (2D Range Plots)
- Error Bars (3D Box Plots)
- Error Bars (3D Range Plots)
- Error Function (in Neural Networks)
- Estimable Functions
- Euclidean Distance
- Euler's e
- Exogenous Variable
- Experimental Design
- Explained Variance
- Exploratory Data Analysis
- Exponential Distribution
- Exponential Family of Distributions
- Exponential Function
- Exponentially Weighted Moving Avg. Line
- Extrapolation
- Extreme Values (in Box Plots)
- Extreme Value Distribution

###### F

- F Distribution
- FACT
- Factor Analysis
- Fast Analysis Shared Multidimensional Info. FASMI
- Feature Extraction (vs. Feature Selection)
- Feature Selection
- Feedforward Networks
- Fisher LSD
- Fixed Effects (in ANOVA)
- Free Parameter
- Frequencies, Marginal
- Frequency Scatterplot
- Frequency Tables
- Function Minimization Algorithms

###### G

- g2 Inverse
- Gains Chart
- Gamma Coefficient
- Gamma Distribution
- Gaussian Distribution
- Gauss-Newton Method
- General ANOVA/MANOVA
- General Linear Model
- Generalization (in Neural Networks)
- Generalized Additive Models
- Generalized Inverse
- Generalized Linear Model
- Genetic Algorithm
- Genetic Algorithm Input Selection
- Geometric Distribution
- Geometric Mean
- Gibbs Sampler
- Gini Measure of Node Impurity
- Gompertz Distribution
- Goodness of Fit
- Gradient
- Gradient Descent
- Gradual Permanent Impact
- Group Charts
- Grouping (Categorizing)
- Grouping Variable
- Groupware

###### H

- Half-Normal Probability Plots
- Half-Normal Probability Plots - Categorized
- Hamming Window
- Hanging Bars Histogram
- Harmonic Mean
- Hazard
- Hazard Rate
- Heuristic
- Heywood Case
- Hidden Layers (in Neural Networks)
- High-Low Close
- Histograms, 2D
- Histograms, 2D - Double-Y
- Histograms, 2D - Hanging Bars
- Histograms, 2D - Multiple
- Histograms, 2D - Regular
- Histograms, 3D Bivariate
- Histograms, 3D - Box Plots
- Histograms, 3D - Contour/Discrete
- Histograms, 3D - Contour Plot
- Histograms, 3D - Spikes
- Histograms, 3D - Surface Plot
- Hollander-Proschan Test
- Hooke-Jeeves Pattern Moves
- Hosmer-Lemeshow Test
- HTM
- HTML
- Hyperbolic Tangent (tanh)
- Hyperplane
- Hypersphere

###### I

- Icon Plots
- Icon Plots - Chernoff Faces
- Icon Plots - Columns
- Icon Plots - Lines
- Icon Plots - Pies
- Icon Plots - Polygons
- Icon Plots - Profiles
- Icon Plots - Stars
- Icon Plots - Sun Rays
- Increment vs Non-Increment Learning Algr.
- Independent Events
- Independent t-test
- Independent vs. Dependent Variables
- Industrial Experimental Design
- Inertia
- Inlier
- In-Place Database Processing (IDP)
- Interactions
- Interpolation
- Interval Scale
- Intraclass Correlation Coefficient
- Invariance Const. Scale Factor ICSF
- Invariance Under Change of Scale (ICS)
- Inverse Document Frequency
- Ishikawa Chart
- Isotropic Deviation Assignment
- Item and Reliability Analysis
- IV

###### J

###### K

###### L

- Lack of Fit
- Lambda Prime
- Laplace Distribution
- Latent Semantic Indexing
- Latent Variable
- Layered Compression
- Learned Vector Quantization (in Neural Net)
- Learning Rate (in Neural Networks)
- Least Squares (2D graphs)
- Least Squares (3D graphs)
- Least Squares Estimator
- Least Squares Means
- Left and Right Censoring
- Levenberg-Marquardt Algorithm (in Neural Net)
- Levene's Test for Homogeneity of Variances
- Leverage values
- Life Table
- Life, Characteristic
- Lift Charts
- Likelihood
- Lilliefors test
- Line Plots, 2D
- Line Plots, 2D - Aggregated
- Line Plots, 2D (Case Profiles)
- Line Plots, 2D - Double-Y
- Line Plots, 2D - Multiple
- Line Plots, 2D - Regular
- Line Plots, 2D - XY Trace
- Linear (2D graphs)
- Linear (3D graphs)
- Linear Activation function
- Linear Modeling
- Linear Units
- Lines (Icon Plot)
- Lines (Matrix Plot)
- Lines Sequential/Stacked Plot
- Link Function
- Local Minima
- Locally Weighted (Robust) Regression
- Logarithmic Function
- Logistic Distribution
- Logistic Function
- Logit Regression and Transformation
- Log-Linear Analysis
- Log-Normal Distribution
- Lookahead (in Neural Networks)
- Loss Function
- LOWESS Smoothing

###### M

- Machine Learning
- Mahalanobis Distance
- Mallow's CP
- Manifest Variable
- Mann-Scheuer-Fertig Test
- MANOVA
- Marginal Frequencies
- Markov Chain Monte Carlo (MCMC)
- Mass
- Matching Moments Method
- Matrix Collinearity
- Matrix Ill-Conditioning
- Matrix Inverse
- Matrix Plots
- Matrix Plots - Columns
- Matrix Plots - Lines
- Matrix Plots - Scatterplot
- Matrix Rank
- Matrix Singularity
- Maximum Likelihood Loss Function
- Maximum Likelihood Method
- Maximum Unconfounding
- MD (Missing data)
- Mean
- Mean/S.D. Algorithm (in Neural Networks)
- Mean, Geometric
- Mean, Harmonic
- Mean Substitution of Missing Data
- Means, Adjusted
- Means, Unweighted
- Median
- Meta-Learning
- Method of Matching Moments
- Minimax
- Minimum Aberration
- Mining, Data
- Missing values
- Mixed Line Sequential/Stacked Plot
- Mixed Step Sequential/Stacked Plot
- Mode
- Model Profiles (in Neural Networks)
- Models for Data Mining
- Monte Carlo
- Multi-Pattern Bar
- Multicollinearity
- Multidimensional Scaling
- Multilayer Perceptrons
- Multimodal Distribution
- Multinomial Distribution
- Multinomial Logit and Probit Regression
- Multiple Axes in Graphs
- Multiple Censoring
- Multiple Dichotomies
- Multiple Histogram
- Multiple Line Plots
- Multiple Scatterplot
- Multiple R
- Multiple Regression
- Multiple Response Variables
- Multiple-Response Tables
- Multiple Stream Group Charts
- Multiplicative Season, Damped Trend
- Multiplicative Season, Exponential Trend
- Multiplicative Season, Linear Trend
- Multiplicative Season, No Trend
- Multivar. Adapt. Regres. Splines MARSplines
- Multi-way Tables

###### N

- Nagelkerke Gen. Coefficient Determination
- Naive Bayes
- Neat Scaling of Intervals
- Negative Correlation
- Negative Exponential (2D graphs)
- Negative Exponential (3D graphs)
- Neighborhood (in Neural Networks)
- Nested Factors
- Nested Sequence of Models
- Neural Networks
- Neuron
- Newman-Keuls Test
- N-in-One Encoding
- Noise Addition (in Neural Networks)
- Nominal Scale
- Nominal Variables
- Nonlinear Estimation
- Nonparametrics
- Non-Outlier Range
- Nonseasonal, Damped Trend
- Nonseasonal, Exponential Trend
- Nonseasonal, Linear Trend
- Nonseasonal, No Trend
- Normal Distribution
- Normal Distribution, Bivariate
- Normal Fit
- Normality Tests
- Normalization
- Normal Probability Plots
- Normal Probability Plots (Computation Note)
- n Point Moving Average Line

###### O

- ODBC
- Odds Ratio
- OLE DB
- On-Line Analytic Processing (OLAP)
- One-Off (in Neural Networks)
- One-of-N Encoding (in Neural Networks)
- One-Sample t-Test
- One-Sided Ranges Error Bars Range Plots
- One-Way Tables
- Operating Characteristic Curves
- Ordinal Multinomial Distribution
- Ordinal Scale
- Outer Arrays
- Outliers
- Outliers (in Box Plots)
- Overdispersion
- Overfitting
- Overlearning (in Neural Networks)
- Overparameterized Model

###### P

- Pairwise Del. Missing Data vs Mean Subst.
- Pairwise MD Deletion
- Parametric Curve
- Pareto Chart Analysis
- Pareto Distribution
- Part Correlation
- Partial Correlation
- Partial Least Squares Regression
- Partial Residuals
- Parzen Window
- Pearson Correlation
- Pearson Curves
- Pearson Residuals
- Penalty Functions
- Percentiles
- Perceptrons (in Neural Networks)
- Pie Chart
- Pie Chart - Counts
- Pie Chart - Multi-Pattern Bar
- Pie Chart - Values
- Pies (Icon Plots)
- PMML (Predictive Model Markup Language)
- PNG Files
- Poisson Distribution
- Polar Coordinates
- Polygons (Icon Plots)
- Polynomial
- Population Stability Report
- Portable Network Graphics Files
- Positive Correlation
- Post hoc Comparisons
- Post Synaptic Potential (PSP) Function
- Posterior Probability
- Power (Statistical)
- Power Goal
- Ppk, Pp, Pr
- Prediction Interval Ellipse
- Prediction Profiles
- Predictive Data Mining
- Predictive Mapping
- Predictive Model Markup Language (PMML)
- Predictors
- PRESS Statistic
- Principal Components Analysis
- Prior Probabilities
- Probability
- Probability Plots - Detrended
- Probability Plots - Half-Normal
- Probability Plots - Normal
- Probability-Probability Plots
- Probability-Probability Plots - Categorized
- Probability Sampling
- Probit Regression and Transformation
- PROCEED
- Process Analysis
- Process Capability Indices
- Process Performance Indices
- Profiles, Desirability
- Profiles, Prediction
- Profiles (Icon Plots)
- Pruning (in Classification Trees)
- Pseudo-Components
- Pseudo-Inverse Algorithm
- Pseudo-Inverse-Singular Val. Decomp. NN
- PSP (Post Synaptic Potential) Function
- Pure Error
- p-Value (Statistical Significance)

###### Q

###### R

- R Programming Language
- Radial Basis Functions
- Radial Sampling (in Neural Networks)
- Random Effects (in Mixed Model ANOVA)
- Random Forests
- Random Num. from Arbitrary Distributions
- Random Numbers (Uniform)
- Random Sub-Sampling in Data Mining
- Range Ellipse
- Range Plots - Boxes
- Range Plots - Columns
- Range Plots - Whiskers
- Rank
- Rank Correlation
- Ratio Scale
- Raw Data, 3D Scatterplot
- Raw Data Plots, 3D - Contour/Discrete
- Raw Data Plots, 3D - Spikes
- Raw Data Plots, 3D - Surface Plot
- Rayleigh Distribution
- Receiver Oper. Characteristic Curve
- Receiver Oper. Characteristic (in Neural Net)
- Rectangular Distribution
- Regression
- Regression (in Neural Networks)
- Regression, Multiple
- Regression Summary Statistics (in Neural Net)
- Regular Histogram
- Regular Line Plots
- Regular Scatterplot
- Regularization (in Neural Networks)
- Reject Inference
- Reject Threshold
- Relative Function Change Criterion
- Reliability
- Reliability and Item Analysis
- Representative Sample
- Resampling (in Neural Networks)
- Residual
- Resolution
- Response Surface
- Right Censoring
- RMS (Root Mean Squared) Error
- Robust Locally Weighted Regression
- ROC Curve
- ROC Curve (in Neural Networks)
- Root Cause Analysis
- Root Mean Square Stand. Effect RMSSE
- Rosenbrock Pattern Search
- Rotating Coordinates, Method of
- r (Pearson Correlation Coefficient)
- Runs Tests (in Quality Control)

###### S

- Sampling Fraction
- Scalable Software Systems
- Scaling
- Scatterplot, 2D
- Scatterplot, 2D-Categorized Ternary Graph
- Scatterplot, 2D - Double-Y
- Scatterplot, 2D - Frequency
- Scatterplot, 2D - Multiple
- Scatterplot, 2D - Regular
- Scatterplot, 2D - Voronoi
- Scatterplot, 3D
- Scatterplot, 3D - Raw Data
- Scatterplot, 3D - Ternary Graph
- Scatterplot Smoothers
- Scheffe's Test
- Score Statistic
- Scree Plot, Scree Test
- S.D. Ratio
- Semi-Partial Correlation
- SEMMA
- Sensitivity Analysis (in Neural Networks)
- Sequential Contour Plot, 3D
- Sequential/Stacked Plots, 2D
- Sequential/Stacked Plots, 2D - Area
- Sequential/Stacked Plots, 2D - Column
- Sequential/Stacked Plots, 2D - Lines
- Sequential/Stacked Plots, 2D - Mixed Line
- Sequential/Stacked Plots, 2D - Mixed Step
- Sequential/Stacked Plots, 2D - Step
- Sequential/Stacked Plots, 2D - Step Area
- Sequential Surface Plot, 3D
- Sets of Samples in Quality Control Charts
- Shapiro-Wilks' W test
- Shewhart Control Charts
- Short Run Control Charts
- Shuffle, Back Propagation (in Neural Net)
- Shuffle Data (in Neural Networks)
- Sigma Restricted Model
- Sigmoid Function
- Signal Detection Theory
- Simple Random Sampling (SRS)
- Simplex Algorithm
- Single and Multiple Censoring
- Singular Value Decomposition
- Six Sigma (DMAIC)
- Six Sigma Process
- Skewness
- Slicing (Categorizing)
- Smoothing
- SOFMs Self-Organizing Maps Kohonen Net
- Softmax
- Space Plots 3D
- SPC
- Spearman R
- Special Causes
- Spectral Plot
- Spikes (3D graphs)
- Spinning Data (in 3D space)
- Spline (2D graphs)
- Spline (3D graphs)
- Split Selection (for Classification Trees)
- Splitting (Categorizing)
- Spurious Correlations
- SQL
- Square Root of the Signal to Noise Ratio (f)
- Stacked Generalization
- Stacking (Stacked Generalization)
- Standard Deviation
- Standard Error
- Standard Error of the Mean
- Standard Error of the Proportion
- Standardization
- Standardized DFFITS
- Standardized Effect (Es)
- Standard Residual Value
- Stars (Icon Plots)
- Stationary Series (in Time Series)
- STATISTICA Advanced Linear/Nonlinear
- STATISTICA Automated Neural Networks
- STATISTICA Base
- STATISTICA Data Miner
- STATISTICA Data Warehouse
- STATISTICA Document Management System
- STATISTICA Enterprise
- STATISTICA Enterprise/QC
- STATISTICA Enterprise Server
- STATISTICA Enterprise SPC
- STATISTICA Monitoring and Alerting Server
- STATISTICA MultiStream
- STATISTICA Multivariate Stat. Process Ctrl
- STATISTICA PI Connector
- STATISTICA PowerSolutions
- STATISTICA Process Optimization
- STATISTICA Quality Control Charts
- STATISTICA Sequence Assoc. Link Analysis
- STATISTICA Text Miner
- STATISTICA Variance Estimation Precision
- Statistical Power
- Statistical Process Control (SPC)
- Statistical Significance (p-value)
- Steepest Descent Iterations
- Stemming
- Steps
- Stepwise Regression
- Stiffness Parameter (in Fitting Options)
- Stopping Conditions
- Stopping Conditions (in Neural Networks)
- Stopping Rule (in Classification Trees)
- Stratified Random Sampling
- Stub and Banner Tables
- Studentized Deleted Residuals
- Studentized Residuals
- Student's t Distribution
- Sum-Squared Error Function
- Sums of Squares (Type I, II, III (IV, V, VI))
- Sun Rays (Icon Plots)
- Supervised Learning (in Neural Networks)
- Support Value (Association Rules)
- Support Vector
- Support Vector Machine (SVM)
- Suppressor Variable
- Surface Plot (from Raw Data)
- Survival Analysis
- Survivorship Function
- Sweeping
- Symmetrical Distribution
- Symmetric Matrix
- Synaptic Functions (in Neural Networks)

###### T

- Tables
- Tapering
- t Distribution (Student's)
- Tau, Kendall
- Ternary Plots, 2D - Scatterplot
- Ternary Plots, 3D
- Ternary Plots, 3D - Categorized Scatterplot
- Ternary Plots, 3D - Categorized Space
- Ternary Plots, 3D - Categorized Surface
- Ternary Plots, 3D - Categorized Trace
- Ternary Plots, 3D - Contour/Areas
- Ternary Plots, 3D - Contour/Lines
- Ternary Plots, 3D - Deviation
- Ternary Plots, 3D - Space
- Text Mining
- THAID
- Threshold
- Time Series
- Time Series (in Neural Networks)
- Time-Dependent Covariates
- Tolerance (in Multiple Regression)
- Topological Map
- Trace Plots, 3D
- Trace Plot, Categorized (Ternary Graph)
- Training/Test Error/Classification Accuracy
- Transformation (Probit Regression)
- Trellis Graphs
- Trimmed Means
- t-Test (independent & dependent samples)
- Tukey HSD
- Tukey Window
- Two-State (in Neural Networks)
- Type I, II, III (IV, V, VI) Sums of Squares
- Type I Censoring
- Type II Censoring
- Type I Error Rate

###### U

###### V

###### W

###### X

###### Y

###### Z

ODBC. *ODBC (Open DataBase Connectivity)* is a set of conventions introduced by Microsoft that allows access to information from a wide range of databases (e.g., MS Access, Oracle) and allows queries to be performed via SQL.

Odds Ratio. The odds ratio is useful in the interpretation of the results of Logistic regression (see Neter, Wasserman, and Kutner, 1989) and is computed from a 2x2 classification table that displays the predicted and observed classification of cases for a binary dependent variable:

(f_{11} * f_{22})/(f_{12} * f_{21})

where *f_{ij}* represents the respective frequencies in the 2x2 table.
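
As a minimal sketch (the 2x2 frequencies below are made-up), the computation can be carried out directly:

```python
# Hypothetical 2x2 classification table: rows = observed class,
# columns = predicted class; the cell counts are illustrative data.
f = [[50, 10],
     [15, 25]]

# Odds ratio: (f11 * f22) / (f12 * f21)
odds_ratio = (f[0][0] * f[1][1]) / (f[0][1] * f[1][0])
print(odds_ratio)  # (50*25)/(10*15) ≈ 8.33
```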

OLE DB. *OLE DB (Object Linking and Embedding Database)* is a set of conventions introduced by Microsoft that allows access to information from a wide range of databases (e.g., MS Access, Oracle). *OLE DB* is a database architecture that provides universal data integration over an enterprise's network, from mainframe to desktop, regardless of the data type. *OLE DB* is a more generalized and more efficient strategy for data access than ODBC because it allows access to more types of data and is based on the Component Object Model (COM).

On-Line Analytic Processing (OLAP) (or Fast Analysis of Shared Multidimensional Information - FASMI). The term On-Line Analytic Processing refers to technology that allows users of multidimensional databases to generate on-line descriptive or comparative summaries ("views") of data and other analytic queries.

For more information, see On-Line Analytic Processing (OLAP); see also, Data Warehousing and Data Mining techniques.

One-Off (in Neural Networks). A case typed in and submitted to the neural network as a one-off procedure (not part of a data set, and not used in training). See, Neural Networks.

One-of-N Encoding (in Neural Networks). Representing a nominal variable using a set of input or output units, one unit for each possible nominal value. During training, one of the units will be on and the others off. See, Neural Networks.

One-Sample t-Test. See, t-Test (for Independent and Dependent Samples).

"One-Sided" Ranges or Error Bars in Range Plots. In order to display a *"one-sided" range* (relative to the mid-point) or an *error bar* that extends in only one direction, set the respective values of the variable defining the range boundary to 0 (when the *Relative to the Mid-point* style is selected) or the mid-point (when the *Absolute* style is selected).

Operating Characteristic Curves, for Quality Control Charts. A common supplementary plot to standard quality control charts is the so-called operating characteristic or OC curve. One question that arises when using standard variable or attribute charts is how sensitive the current quality control procedure is. Put in more specific terms, how likely is it that you will not find a sample (e.g., a mean in an X-bar chart) outside the control limits (i.e., accept the production process as "in control") when, in fact, it has shifted by a certain amount? This probability is usually referred to as the β (beta) error probability, that is, the probability of erroneously accepting a process (mean, mean proportion, mean rate of defectives, etc.) as being "in control."

Operating characteristic curves are extremely useful for exploring the power of the quality control procedure. The actual decision concerning sample sizes should depend not only on the cost of implementing the plan (e.g., cost per item sampled), but also on the costs resulting from not detecting quality problems. The OC curve allows the engineer to estimate the probabilities of not detecting shifts of certain sizes in the production quality.
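
As an illustration, the β probability for an X-bar chart with conventional 3-sigma control limits can be computed from the standard normal CDF; the sample size and shift magnitude below are illustrative, not defaults from any particular procedure:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def beta_error(shift_sigmas, n, k=3.0):
    """Probability that the sample mean stays inside the k-sigma
    control limits after the process mean shifts by shift_sigmas
    process standard deviations (i.e., the shift goes undetected
    on a given sample of size n)."""
    d = shift_sigmas * math.sqrt(n)
    return phi(k - d) - phi(-k - d)

# A 1-sigma shift with samples of size 5 is missed fairly often:
print(round(beta_error(1.0, 5), 3))
```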

For more information, see also Operating Characteristic Curves.

Ordinal Multinomial Distribution. If the categories for a multinomial response variable can be ordered, then the distribution of that variable is referred to as *ordinal multinomial*. For example, if in a survey the responses to a question are recorded such that respondents have to choose from the pre-arranged categories "Strongly agree," "Agree," "Neither agree nor disagree," "Disagree," and "Strongly disagree," then the counts (number of respondents) that endorsed the different categories would follow an ordinal multinomial distribution (since the response categories are ordered with respect to increasing degrees of disagreement).

Specialized methods for analyzing multinomial and ordinal multinomial response variables can be found in *Generalized Linear Models*.

Ordinal Scale. The ordinal scale of measurement represents the ranks of a variable's values. Values measured on an ordinal scale contain information about their relationship to other values only in terms of whether they are "greater than" or "less than" other values but not in terms of "how much greater" or "how much smaller."

See also, Measurement scales.

Outer Arrays. In Taguchi experimental design methodology, the repeated measurements of the response variable are often taken in a systematic fashion, with the goal of manipulating noise factors. The levels of those factors are then arranged in a so-called outer array, i.e., an (orthogonal) experimental design. However, usually the repeated measurements are placed in separate columns in the data spreadsheet (i.e., each is a different variable); thus the index i (in the formulas for smaller-the-better, larger-the-better, and signed target) runs across the columns or variables in the data spreadsheet, or the levels of the factors in the outer array.

See Signal-to-Noise (S/N) Ratios for more details.

Outliers. Outliers are atypical (by definition), infrequent observations; data points that do not appear to follow the characteristic distribution of the rest of the data. These may reflect genuine properties of the underlying phenomenon (variable), or be due to measurement errors or other anomalies that should not be modeled. In contrast, an inlier is an observation that does follow the characteristic distribution of the rest of the data, but is an error. See Inlier.

Because of the way in which the regression line is determined in *Multiple Regression* (especially the fact that it is based on minimizing not the sum of simple distances but the sum of *squares of distances* of data points from the line), outliers have a profound influence on the slope of the regression line (see the animation below) and consequently on the value of the correlation coefficient. A single outlier is capable of considerably changing the slope of the regression line and, consequently, the value of the correlation. Note that, as shown in the illustration, just one outlier can be entirely responsible for a high value of the correlation that otherwise (without the outlier) would be close to zero. Needless to say, we should never base important conclusions on the value of the correlation coefficient alone (i.e., examining the respective scatterplot is always recommended).

Note that if the sample size is relatively small, then including or excluding specific data points that are not as clearly "outliers" as the one shown in the previous example may have a profound influence on the regression line (and the correlation coefficient). This is illustrated in the following example where we call the points being excluded "outliers"; we may argue, however, that they are not outliers but rather extreme values.

Typically, we believe that outliers represent a random error that we want to be able to control. Needless to say, outliers may not only artificially increase the value of a correlation coefficient, but they can also decrease the value of a "legitimate" correlation.

See also Confidence Ellipse.

Outliers (in Box Plots). Values that are "far" from the middle of the distribution are referred to as *outliers* or *extreme values* if they meet certain conditions.

A data point is deemed to be an *outlier* if the following conditions hold:

data point value > UBV + *o.c.*(UBV - LBV)

or

data point value < LBV - *o.c.*(UBV - LBV)

where

UBV is the upper value of the box in the box plot (e.g., the mean + standard error or the 75th percentile).

LBV is the lower value of the box in the box plot (e.g., the mean - standard error or the 25th percentile).

o.c. is the outlier coefficient.

For example, the following diagram illustrates the ranges of outliers and extremes in the "classic" box and whisker plot (for more information about box plots, see Tukey, 1977).
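
A minimal sketch of this rule, assuming a box spanning the 25th/75th percentiles, the common default outlier coefficient of 1.5, and (by the usual box-plot convention, an assumption here) extreme values defined by twice the outlier coefficient:

```python
def classify(value, lbv, ubv, oc=1.5):
    """Label a data point relative to the box [LBV, UBV] using the
    outlier coefficient oc. Extremes at 2*oc is an assumed, though
    conventional, choice."""
    spread = ubv - lbv
    if value > ubv + 2 * oc * spread or value < lbv - 2 * oc * spread:
        return "extreme"
    if value > ubv + oc * spread or value < lbv - oc * spread:
        return "outlier"
    return "inside"

# With a box from 10 (LBV) to 20 (UBV), the outlier cutoffs fall at
# -5 and 35, and the extreme cutoffs at -20 and 50:
print(classify(36, 10, 20))  # outlier
print(classify(55, 10, 20))  # extreme
print(classify(18, 10, 20))  # inside
```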

Overdispersion. The term *overdispersion* refers to the condition when the variance of an observed dependent (response) variable exceeds the nominal variance, given the respective assumed distribution. This condition occurs frequently when fitting generalized linear models to categorical response variables, and the assumed distribution is binomial, multinomial, ordinal multinomial, or Poisson. When overdispersion occurs, the standard errors of the parameter estimates and related statistics (e.g., standard errors of predicted and residual statistics) must be computed taking into account the overdispersion.
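
For a Poisson response, for example, the nominal variance equals the mean, so a dispersion ratio (sample variance over sample mean) well above 1 is a rough warning sign; the counts below are made-up data:

```python
# Rough check for overdispersion under a Poisson assumption,
# where the nominal variance equals the mean.
counts = [0, 1, 1, 2, 2, 3, 5, 8, 12, 20]

n = len(counts)
mean = sum(counts) / n
variance = sum((c - mean) ** 2 for c in counts) / (n - 1)

# A dispersion ratio well above 1 suggests overdispersion.
print(variance / mean)
```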

For details, see Agresti (1996); see also *Generalized Linear/Nonlinear Models*.

Overfitting. When attempting to fit a curve to a set of data points, the production of a curve that fits the data points well but, because its shape is distorted by the noise inherent in the data, does not model the underlying function well.

See also, Neural Networks.

Overlearning (in Neural Networks). Overfitting that occurs when an iterative training algorithm is run for too long, and the network is too complex for the problem or for the available quantity of data.

See also, Neural Networks.

Overparameterized Model. An *overparameterized model* uses the indicator variable approach to represent effects for categorical predictor variables in *general linear models* and *generalized linear/nonlinear models*. To illustrate indicator variable coding, suppose that a categorical predictor variable called *Gender* has two levels (i.e., *Male* and *Female*). A separate continuous predictor variable would be coded for each group identified by the categorical predictor variable. *Females* might be assigned a value of 1 and *Males* a value of 0 on a first predictor variable identifying membership in the female *Gender* group, and males would then be assigned a value of 1 and females a value of 0 on a second predictor variable identifying membership in the male *Gender* group.

Note that this method of coding for *categorical predictor variables* will almost always lead to design matrices with redundant columns in *general linear models* and *generalized linear/nonlinear models*, and thus requires a generalized inverse for solving the normal equations. As such, this method is often called the *overparameterized* model for representing categorical predictor variables, because it results in more columns in the design matrix than are necessary for determining the relationships of the categorical predictor variables to responses on the dependent variables.
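
The redundancy can be seen by building the indicator columns directly; the four cases below are illustrative:

```python
# Indicator (one column per level) coding for a two-level Gender
# predictor, with an intercept column. The cases are made-up.
cases = ["Female", "Male", "Female", "Male"]

design = [[1,                              # intercept
           1 if g == "Female" else 0,      # Female indicator
           1 if g == "Male" else 0]        # Male indicator
          for g in cases]

for row in design:
    print(row)  # e.g. [1, 1, 0] for a female case

# In every row the two indicator columns sum to the intercept
# column, so the design matrix is rank deficient and the normal
# equations cannot be solved with an ordinary matrix inverse.
```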

See also categorical predictor variable, design matrix; or *General Linear Models*.