g2 Inverse. A g2 inverse is a generalized inverse of a rectangular matrix of values A that satisfies both:

AA⁻A = A   and   A⁻AA⁻ = A⁻

Gains Chart. The gains chart provides a visual summary of the usefulness of the information provided by one or more statistical models for predicting a binomial (categorical) outcome variable (dependent variable); for multinomial (multiple-category) outcome variables, gains charts can be computed for each category. Specifically, the chart summarizes the utility that one can expect by using the respective predictive models, as compared to using baseline information only.

The gains chart is applicable to most statistical methods that compute predictions (predicted classifications) for binomial or multinomial responses. This and similar summary charts (see Lift Chart) are commonly used in data mining projects when the dependent or outcome variable of interest is binomial or multinomial in nature.

Example. To illustrate how the gains chart is constructed, consider this example. Suppose you have a mailing list of previous customers of your business, and you want to offer those customers an additional service by mailing an elaborate brochure and other materials describing the service. During previous similar mail-out campaigns, you collected useful information about your customers (e.g., demographic information, previous purchasing patterns) that you could relate to the response rate, i.e., whether the respective customers responded to your mail solicitation and the type of order they placed.

Given the baseline response rate and the cost of the mail-out, sending the offer to all customers would result in a net loss. Hence, you want to use statistical analyses to help you identify the customers who are most likely to respond. Suppose you build such a model based on the data collected in the previous mail-out campaign. You can now select only the 10 percent of customers from the mailing list who, according to the model's predictions, are most likely to respond. Next you can compute the number of accurately predicted responses relative to the total number of responses in the sample; this percentage is the gain due to using the model. Put another way, of those customers likely to respond in the current sample, you can accurately identify ("capture") y percent by selecting from the customer list the top 10% who were predicted by the model with the greatest certainty to respond (where y is the gains value).

Analogous values can be computed for each percentile of the population (customers on the mailing list). You could compute separate gains values for selecting the top 20% of customers who are predicted to be among the likely responders, the top 30%, and so on. Hence, the gains values for different percentiles can be connected by a line that will typically ascend steeply at first, then more slowly, and merge with the baseline when all customers (100%) are selected.
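
Computationally, the gains values amount to sorting customers by the model's predicted score and counting the cumulative share of responders captured. A minimal sketch in Python (the function name and the score/responded arrays are hypothetical; NumPy is assumed):

    import numpy as np

    def gains(score, responded, percentiles=range(10, 101, 10)):
        order = np.argsort(score)[::-1]          # best prospects first
        responded = np.asarray(responded)[order]
        total = responded.sum()
        values = []
        for p in percentiles:
            k = int(len(responded) * p / 100)
            # percent of all responders captured in the top p% of the list
            values.append(100.0 * responded[:k].sum() / total)
        return list(zip(percentiles, values))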

If more than one predictive model is used, multiple gains charts can be overlaid to provide a graphical summary of the utility of the different models.

Gamma Coefficient. The Gamma statistic is preferable to Spearman R or Kendall tau when the data contain many tied observations. In terms of the underlying assumptions, Gamma is equivalent to Spearman R or Kendall tau; in terms of its interpretation and computation, it is more similar to Kendall tau than to Spearman R. In short, Gamma is also based on probabilities; specifically, it is computed as the difference between the probability that the rank ordering of the two variables agrees and the probability that it disagrees, divided by 1 minus the probability of ties. Thus, Gamma is basically equivalent to Kendall tau, except that ties are explicitly taken into account. Detailed discussions of the Gamma statistic can be found in Goodman and Kruskal (1954, 1959, 1963, 1972), Siegel (1956), and Siegel and Castellan (1988).
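
In code, Gamma reduces to counting concordant and discordant pairs while ignoring ties. A minimal sketch (the function name is hypothetical):

    def goodman_kruskal_gamma(x, y):
        concordant = discordant = 0
        for i in range(len(x)):
            for j in range(i + 1, len(x)):
                s = (x[i] - x[j]) * (y[i] - y[j])
                if s > 0:
                    concordant += 1    # pair ordered the same way on both variables
                elif s < 0:
                    discordant += 1    # pair ordered in opposite ways
                # pairs tied on either variable (s == 0) are ignored
        return (concordant - discordant) / (concordant + discordant)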

Gamma Distribution. The Gamma distribution (the term first used by Weatherburn, 1946) is defined as:

f(x) = (x/b)^(c-1) * e^(-x/b) * [1/(b*Γ(c))]
0 ≤ x, b > 0, c > 0

where
Γ     is the Gamma function
b     is the scale parameter
c     is the so-called shape parameter
e     is the base of the natural logarithm, sometimes called Euler's e (2.71...)

[Animation: the Gamma distribution as the shape parameter c changes from 1 to 6.]

Gaussian Distribution. The normal distribution; a bell-shaped probability distribution.

Gauss-Newton Method. The Gauss-Newton method is a class of methods for solving nonlinear least-squares problems. In general, this method makes use of the Jacobian matrix J of first-order derivatives of a function F to find the vector of parameter values x that minimizes the residual sum of squares (sum of squared deviations of predicted values from observed values). An improved and efficient version of the method is the so-called Levenberg-Marquardt algorithm. For a detailed discussion of this class of methods, see Dennis & Schnabel (1983).
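
A minimal sketch of the basic Gauss-Newton iteration (without the Levenberg-Marquardt refinement), fitting a hypothetical exponential model y ≈ a*exp(b*t); the data and starting values are assumptions:

    import numpy as np

    def gauss_newton(t, y, params, n_iter=20):
        for _ in range(n_iter):
            a, b = params
            pred = a * np.exp(b * t)
            r = y - pred                                   # residuals
            # Jacobian of the predictions with respect to (a, b)
            J = np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])
            # solve the normal equations J'J delta = J'r for the update
            delta = np.linalg.solve(J.T @ J, J.T @ r)
            params = params + delta
        return params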

General ANOVA/MANOVA. The purpose of analysis of variance (ANOVA) is to test for significant differences between means by comparing (i.e., analyzing) variances. More specifically, by partitioning the total variation into different sources (associated with the different effects in the design), we are able to compare the variance due to the between-groups (or treatments) variability with that due to the within-group (treatment) variability. Under the null hypothesis (that there are no mean differences between groups or treatments in the population), the variance estimated from the within-group (treatment) variability should be about the same as the variance estimated from between-groups (treatments) variability. For more information, see ANOVA/MANOVA.
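
As a quick illustration, the F ratio behind a one-way ANOVA can be computed with SciPy's f_oneway (assuming SciPy is available; the group samples are made up):

    from scipy import stats

    group1 = [4.1, 5.0, 4.8, 5.2]
    group2 = [5.9, 6.3, 6.1, 5.7]
    group3 = [4.9, 5.1, 5.3, 4.7]
    # F compares the between-group variance estimate to the within-group estimate
    F, p = stats.f_oneway(group1, group2, group3)
    print(F, p)    # a large F (small p) suggests the group means differ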

General Linear Model. The general linear model is a generalization of the linear regression model, such that effects can be tested (1) for categorical predictor variables as well as for effects for continuous predictor variables and (2) in designs with multiple dependent variables as well as in designs with a single dependent variable. For an overview of the general linear model, see the General Linear Models overview.

Generalization (in Neural Networks). The ability of a neural network to make accurate predictions when faced with data not drawn from the original training set (but drawn from the same source as the training set).

Generalized Additive Models. Generalized additive models are generalizations of generalized linear models. In generalized linear models, the transformed dependent variable values are predicted from (linked to) a linear combination of predictor variables; the transformation is referred to as the link function, and different distributions can be assumed for the dependent variable values. An example of a generalized linear model is the Logit Regression model, where the dependent variable is assumed to be binomial and the link function is the logit transformation. In generalized additive models, the linear function of the predictor values is replaced by an unspecified (non-parametric) function, obtained by applying a scatterplot smoother to the scatterplot of partial residuals (for the transformed dependent variable values). See also, Hastie and Tibshirani, 1990, or Schimek, 2000.
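
Schematically, writing g for the link function and µ for the expected value of the dependent variable, the two model families can be contrasted as follows (a notational sketch):

    g(µ) = b0 + b1*x1 + ... + bp*xp      (generalized linear model)
    g(µ) = b0 + f1(x1) + ... + fp(xp)    (generalized additive model)

where each fj is an unspecified smooth function estimated from the data (e.g., via a scatterplot smoother).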

Generalized Inverse. A generalized inverse (denoted by a superscript minus, A⁻) of a rectangular matrix of values A is any matrix that satisfies

AA⁻A = A

A generalized inverse of a nonsingular matrix is unique and is called the regular matrix inverse. See also, matrix singularity, matrix inverse.
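
For a quick numerical check, NumPy's pinv computes the Moore-Penrose pseudoinverse, one particular generalized inverse (it also satisfies the g2 conditions above); a sketch assuming NumPy is available:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
    A_minus = np.linalg.pinv(A)
    print(np.allclose(A @ A_minus @ A, A))               # AA⁻A = A
    print(np.allclose(A_minus @ A @ A_minus, A_minus))   # A⁻AA⁻ = A⁻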

Generalized Linear Model. The generalized linear model is a generalization of the linear regression model such that (1) nonlinear, as well as linear, effects can be tested, (2) for categorical predictor variables as well as for continuous predictor variables, using (3) any dependent variable whose distribution follows one of several special members of the exponential family of distributions (e.g., gamma, Poisson, binomial), as well as any normally distributed dependent variable. For an overview of the generalized linear model, see Generalized Linear Models.
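
As an illustration, a Poisson regression (one member of the generalized linear model family, with a log link) can be fit with statsmodels, assuming that library is available; the data are synthetic:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 2, size=200)
    y = rng.poisson(np.exp(0.5 + 0.8 * x))    # counts from a log-linear model
    X = sm.add_constant(x)
    result = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(result.params)                       # estimates near (0.5, 0.8)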

Genetic Algorithm. A search algorithm which locates optimal binary strings by processing an initially random population of strings using artificial mutation, crossover and selection operators, in an analogy with the process of natural selection (Goldberg, 1989). See also, Neural Networks.
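
A minimal genetic-algorithm sketch on binary strings; the fitness function (the count of 1 bits, the toy "OneMax" problem) is a stand-in for whatever quantity is actually being optimized:

    import random

    def evolve(n_bits=20, pop_size=30, generations=50, p_mut=0.02):
        def fitness(s):                      # toy objective: count of 1 bits
            return sum(s)
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]

        def select():                        # tournament selection of size 2
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        for _ in range(generations):
            new_pop = []
            for _ in range(pop_size):
                p1, p2 = select(), select()
                cut = random.randrange(1, n_bits)    # one-point crossover
                child = p1[:cut] + p2[cut:]
                # mutation: flip each bit with a small probability
                child = [1 - g if random.random() < p_mut else g
                         for g in child]
                new_pop.append(child)
            pop = new_pop
        return max(pop, key=fitness)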

Genetic Algorithm Input Selection. Application of a genetic algorithm to determine an "optimal" set of input variables, by constructing binary masks which indicate which inputs to retain and which to discard (Goldberg, 1989). This method is implemented in STATISTICA Neural Networks and can be used as part of a model building process where variables identified as the most "relevant" (in STATISTICA Neural Networks) are then used in a traditional model building stage of the analysis (e.g., using a linear regression or nonlinear estimation method).
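
The key ingredient is a fitness function that scores a candidate binary mask by the quality of a model fit on the retained inputs. A hedged sketch (assuming scikit-learn is available; such a function could serve as the objective for a genetic algorithm like the one sketched above):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def mask_fitness(mask, X, y):
        cols = np.flatnonzero(mask)          # indices of retained inputs
        if cols.size == 0:
            return -np.inf                   # an empty mask is the worst case
        # higher mean cross-validated score = better input subset
        return cross_val_score(LinearRegression(), X[:, cols], y, cv=5).mean()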

Geometric Distribution. The geometric distribution (the term first used by Feller, 1950) is defined as:

f(x) = p*(1-p)^x,    for x = 0, 1, 2, ...

where
p     is the probability that a particular event (e.g., success) will occur

Geometric Mean. The Geometric Mean is a "summary" statistic useful when the measurement scale is not linear; it is computed as:

G = (x1*x2*...*xn)^(1/n)

where
n     is the sample size.
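
In code, the geometric mean is usually computed via logarithms, which avoids overflow in the raw product for large samples (scipy.stats.gmean works the same way); a sketch with hypothetical values:

    import numpy as np

    x = np.array([2.0, 8.0])             # hypothetical positive values
    G = np.exp(np.mean(np.log(x)))       # equivalent to (2*8)**(1/2)
    print(G)                             # 4.0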

Gibbs Sampler. The Gibbs sampler is a popular method used for MCMC (Markov chain Monte Carlo) analyses. It provides an elegant way of sampling from the joint distribution of multiple variables, based on the notion that, to sample from a joint distribution, you can sample repeatedly from its one-dimensional conditional distributions, each time conditioning on the most recent values drawn for the other variables.

For example, values from the joint distribution of two random variables, X and Y, can easily be simulated by a Gibbs sampler that uses their conditional distributions rather than their joint distribution. Starting with an arbitrary choice of X and Y, X is simulated from the conditional distribution of X given Y, and Y is simulated from the conditional distribution of Y given X. Alternating between the two conditional distributions in subsequent steps generates a sample from the correct joint distribution of X and Y; the approximation gets better and better as the length of the Gibbs sampler path increases.
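
A minimal sketch of this two-variable scheme for a standard bivariate normal with correlation rho, where each full conditional is a one-dimensional normal (the function name is hypothetical; NumPy is assumed):

    import numpy as np

    def gibbs_bivariate_normal(rho=0.8, n_samples=10000, seed=0):
        rng = np.random.default_rng(seed)
        x, y = 0.0, 0.0                       # arbitrary starting point
        sd = np.sqrt(1.0 - rho ** 2)          # conditional standard deviation
        samples = np.empty((n_samples, 2))
        for i in range(n_samples):
            x = rng.normal(rho * y, sd)       # draw from X | Y = y
            y = rng.normal(rho * x, sd)       # draw from Y | X = x
            samples[i] = (x, y)
        return samples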

Gini Measure of Node Impurity. According to Breiman, Friedman, Olshen, & Stone (1984), the Gini measure of node impurity at node t (which STATISTICA uses by default in GC&RT and, therefore, Boosted Trees) is defined to be (pp. 28 & 38)

i(t) = 1 - Σj p(j|t)²

where

p(j|t) = p(j,t)/p(t)

and

p(j,t) = π(j)*Nj(t)/Nj

such that

p(t) = Σj p(j,t)

p(j|t) is the estimated probability that an observation belongs to group j given that it is in node t,

p(j,t) is the estimated probability that an observation is in group j and at node t,

p(t) is the estimated probability that an observation is at node t,

π(j) is the prior probability for group j,

Nj(t) is the number of group j members at node t,

and Nj is the size of group j.

Therefore, the prior probabilities play a role in every Gini measure computation at every node. However, Breiman et al. also note that, when the prior probabilities are estimated from the data (i.e., π(j) = Nj/N), p(j|t) reduces to the simple relative frequency Nj(t)/N(t) of group j at node t. This fact can cause higher misclassification rates in under-represented groups.
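
A minimal sketch of the impurity computation itself, taking the within-node class probabilities p(j|t) as given (the example probabilities are hypothetical):

    import numpy as np

    def gini(p):
        # Gini impurity from a vector of class probabilities p(j|t)
        p = np.asarray(p, dtype=float)
        return 1.0 - np.sum(p ** 2)

    print(gini([0.5, 0.5]))    # 0.5 -> maximally impure two-class node
    print(gini([1.0, 0.0]))    # 0.0 -> pure node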

Gompertz Distribution. The Gompertz distribution is a theoretical distribution of survival times. Gompertz (1825) proposed a probability model for human mortality, based on the assumption that the "average exhaustion of a man's power to avoid death to be such that at the end of equal infinitely small intervals of time he lost equal portions of his remaining power to oppose destruction which he had at the commencement of these intervals" (Johnson, Kotz, & Balakrishnan, 1995, p. 25). The resultant hazard function:

r(x) = B*c^x,    for x ≥ 0, B > 0, c ≥ 1

is often used in survival analysis. See Johnson, Kotz, and Balakrishnan (1995) for additional details.
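
A small sketch of the hazard and the survival function it implies (integrating the hazard gives S(x) = exp(-B*(c^x - 1)/ln c) for c > 1; the parameter values are arbitrary):

    import numpy as np

    def gompertz_hazard(x, B=0.01, c=1.1):
        return B * c ** x                    # r(x) = B*c^x

    def gompertz_survival(x, B=0.01, c=1.1):
        # survival from the cumulative hazard: integral of B*c^t from 0 to x
        return np.exp(-B * (c ** x - 1.0) / np.log(c))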

Goodness of Fit. Various goodness-of-fit summary statistics can be computed for continuous and categorical dependent variables. Most of these statistics are discussed in greater detail in Witten and Frank (2000); in the context of forecasting, different statistics are discussed in Makridakis and Wheelwright (1983). Goodness of fit statistics for regression problems (for continuous variables) include the following (a computational sketch follows the list):

  • Least squares deviation (LSD), mean square error
  • Average deviation, mean absolute error
  • Relative squared error, mean relative squared error
  • Correlation coefficient (Pearson product moment correlation)
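
A computational sketch of these regression fit statistics for hypothetical observed and predicted vectors (NumPy is assumed):

    import numpy as np

    observed = np.array([2.0, 4.0, 6.0, 8.0])
    predicted = np.array([2.5, 3.5, 6.5, 7.5])
    mse = np.mean((observed - predicted) ** 2)     # mean square error
    mae = np.mean(np.abs(observed - predicted))    # mean absolute error
    r = np.corrcoef(observed, predicted)[0, 1]     # Pearson correlation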

Analogous goodness of fit statistics are available for classification problems (for categorical dependent variables).

Gradient. In Structural Equation Modeling, the gradient is the vector of first partial derivatives of the discrepancy function with respect to the parameter values. At a local or global minimum, the discrepancy function should be at the bottom of a "valley," where all first partial derivatives are zero, so the elements of the gradient should all be near zero when a minimum is obtained.

The elements of the gradient, by themselves, can, on occasion, be somewhat unreliable as indicators of when convergence has occurred, especially when the model fit is not good, and the discrepancy function value itself is quite large. For this reason, the gradient is not employed as a convergence criterion by this program.

Gradient Descent. Optimization techniques for nonlinear functions (e.g., the error function of a neural network as the weights are varied) that attempt to move incrementally to successively lower points in search space in order to locate a minimum.
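
A minimal sketch on a toy one-dimensional error surface f(w) = (w - 3)^2, whose gradient is 2*(w - 3); the learning rate and starting point are arbitrary:

    def gradient_descent(lr=0.1, n_steps=50, w=0.0):
        for _ in range(n_steps):
            grad = 2.0 * (w - 3.0)    # derivative of f at the current point
            w -= lr * grad            # step downhill, against the gradient
        return w                      # approaches the minimum at w = 3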

Gradual Permanent Impact. In Time Series, the gradual permanent impact pattern implies that the increase or decrease due to the intervention is gradual, and that the final permanent impact becomes evident only after some time. This type of intervention can be summarized by the expression:

Impact(t) = δ*Impact(t-1) + ω
(for all t ≥ time of impact, else = 0).

Note that this impact pattern is defined by the two parameters δ (delta) and ω (omega). If δ is near 0 (zero), then the final permanent amount of impact will be evident after only a few more observations; if δ is close to 1, then the final permanent amount of impact will only be evident after many more observations. As long as the δ parameter is greater than 0 and less than 1 (the bounds of system stability), the impact will be gradual and result in an asymptotic change (shift) in the overall mean by the quantity:

Asymptotic change in level = ω/(1-δ)
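
The recursion is easy to simulate; a sketch with hypothetical parameter values (delta = 0.6, omega = 2.0, so the asymptotic change is 2.0/(1 - 0.6) = 5.0):

    import numpy as np

    def gradual_impact(delta=0.6, omega=2.0, n=20):
        impact = np.zeros(n)                   # zero before the intervention
        for t in range(1, n):                  # intervention starts at t = 1
            impact[t] = delta * impact[t - 1] + omega
        return impact                          # levels off near omega/(1-delta)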

Group Charts. See Multiple Stream Group Charts.

Grouping (or Coding) Variable. A grouping (or coding) variable is used to identify group membership for individual cases in the data file. Typically, the grouping variable is categorical (i.e., contains either discrete values, e.g., 1, 2, 3, ...,

Group   Score 1   Score 2
1        383.5    4568.4
3        726.4    6752.3
2        843.7    5384.7
2        729.9    6216.9

or a few text values, e.g., MALE, FEMALE)

Group    Score 1   Score 2
MALE      383.5    4568.4
FEMALE    726.4    6752.3
FEMALE    843.7    5384.7
MALE      729.9    6216.9

and the values are referred to as codes (they can be integer values or integer values with text value equivalents).

Groupware. Software intended to enable a group of users on a network to collaborate on specific projects. Groupware can provide services for communication (such as e-mail), collaborative document development, analysis, reporting, statistical data analysis, scheduling, or tracking. Documents can include text, images, or any other forms of information (e.g., multimedia).