1 @c PSPP - a program for statistical analysis.
2 @c Copyright (C) 2017, 2020 Free Software Foundation, Inc.
3 @c Permission is granted to copy, distribute and/or modify this document
4 @c under the terms of the GNU Free Documentation License, Version 1.3
5 @c or any later version published by the Free Software Foundation;
6 @c with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
7 @c A copy of the license is included in the section entitled "GNU
8 @c Free Documentation License".
This chapter documents the statistical procedures that @pspp{} supports so far.
17 * DESCRIPTIVES:: Descriptive statistics.
18 * FREQUENCIES:: Frequency tables.
19 * EXAMINE:: Testing data for normality.
21 * CORRELATIONS:: Correlation tables.
22 * CROSSTABS:: Crosstabulation tables.
23 * CTABLES:: Custom tables.
24 * FACTOR:: Factor analysis and Principal Components analysis.
25 * GLM:: Univariate Linear Models.
26 * LOGISTIC REGRESSION:: Bivariate Logistic Regression.
27 * MEANS:: Average values and other statistics.
28 * NPAR TESTS:: Nonparametric tests.
29 * T-TEST:: Test hypotheses about means.
30 * ONEWAY:: One way analysis of variance.
31 * QUICK CLUSTER:: K-Means clustering.
32 * RANK:: Compute rank scores.
33 * REGRESSION:: Linear regression.
34 * RELIABILITY:: Reliability analysis.
35 * ROC:: Receiver Operating Characteristic.
44 /VARIABLES=@var{var_list}
45 /MISSING=@{VARIABLE,LISTWISE@} @{INCLUDE,NOINCLUDE@}
46 /FORMAT=@{LABELS,NOLABELS@} @{NOINDEX,INDEX@} @{LINE,SERIAL@}
48 /STATISTICS=@{ALL,MEAN,SEMEAN,STDDEV,VARIANCE,KURTOSIS,
49 SKEWNESS,RANGE,MINIMUM,MAXIMUM,SUM,DEFAULT,
50 SESKEWNESS,SEKURTOSIS@}
51 /SORT=@{NONE,MEAN,SEMEAN,STDDEV,VARIANCE,KURTOSIS,SKEWNESS,
52 RANGE,MINIMUM,MAXIMUM,SUM,SESKEWNESS,SEKURTOSIS,NAME@}
56 The @cmd{DESCRIPTIVES} procedure reads the active dataset and outputs
linear descriptive statistics requested by the user. In addition, it can optionally compute Z-scores.
60 The @subcmd{VARIABLES} subcommand, which is required, specifies the list of
61 variables to be analyzed. Keyword @subcmd{VARIABLES} is optional.
63 All other subcommands are optional:
The @subcmd{MISSING} subcommand determines the handling of missing values. If
66 @subcmd{INCLUDE} is set, then user-missing values are included in the
67 calculations. If @subcmd{NOINCLUDE} is set, which is the default, user-missing
68 values are excluded. If @subcmd{VARIABLE} is set, then missing values are
69 excluded on a variable by variable basis; if @subcmd{LISTWISE} is set, then
70 the entire case is excluded whenever any value in that case has a
71 system-missing or, if @subcmd{INCLUDE} is set, user-missing value.
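A sketch of how these settings combine (the variable names @var{v1} and @var{v2} are hypothetical, not taken from the manual's example data):

@example
DESCRIPTIVES /VARIABLES=@var{v1} @var{v2}
        /MISSING=LISTWISE INCLUDE.
@end example

Here user-missing values enter the calculations, and a case is dropped from every statistic whenever one of its values is treated as missing under the rules above.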
73 The @subcmd{FORMAT} subcommand has no effect. It is accepted for
74 backward compatibility.
76 The @subcmd{SAVE} subcommand causes @cmd{DESCRIPTIVES} to calculate Z scores for all
77 the specified variables. The Z scores are saved to new variables.
78 Variable names are generated by trying first the original variable name
79 with Z prepended and truncated to a maximum of 8 characters, then the
80 names ZSC000 through ZSC999, STDZ00 through STDZ09, ZZZZ00 through
81 ZZZZ09, ZQZQ00 through ZQZQ09, in that sequence. In addition, Z score
82 variable names can be specified explicitly on @subcmd{VARIABLES} in the variable
83 list by enclosing them in parentheses after each variable.
84 When Z scores are calculated, @pspp{} ignores @cmd{TEMPORARY},
85 treating temporary transformations as permanent.
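For instance, an explicit Z-score name can be given in parentheses (a sketch; the variable names are hypothetical):

@example
DESCRIPTIVES /VARIABLES=@var{height} (@var{zhgt}) @var{weight}
        /SAVE.
@end example

The Z scores of @var{height} are stored in @var{zhgt}, while @var{weight} receives a name generated by the rules above, such as @var{zweight}.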
87 The @subcmd{STATISTICS} subcommand specifies the statistics to be displayed:
91 All of the statistics below.
95 Standard error of the mean.
98 @item @subcmd{VARIANCE}
100 @item @subcmd{KURTOSIS}
101 Kurtosis and standard error of the kurtosis.
102 @item @subcmd{SKEWNESS}
103 Skewness and standard error of the skewness.
Mean, standard deviation, minimum, maximum.
115 Standard error of the kurtosis.
117 Standard error of the skewness.
120 The @subcmd{SORT} subcommand specifies how the statistics should be sorted. Most
121 of the possible values should be self-explanatory. @subcmd{NAME} causes the
122 statistics to be sorted by name. By default, the statistics are listed
123 in the order that they are specified on the @subcmd{VARIABLES} subcommand.
124 The @subcmd{A} and @subcmd{D} settings request an ascending or descending
125 sort order, respectively.
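As an illustration, the following sketch (with hypothetical variable names) lists the statistics in descending order of standard deviation:

@example
DESCRIPTIVES /VARIABLES=@var{v1} @var{v2} @var{v3}
        /SORT=STDDEV (D).
@end example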
127 @subsection Descriptives Example
129 The @file{physiology.sav} file contains various physiological data for a sample
130 of persons. Running the @cmd{DESCRIPTIVES} command on the variables @exvar{height}
131 and @exvar{temperature} with the default options allows one to see simple linear
132 statistics for these two variables. In @ref{descriptives:ex}, these variables
are specified on the @subcmd{VARIABLES} subcommand and the @subcmd{SAVE} option
has been used to request that Z scores be calculated.
136 After the command has completed, this example runs @cmd{DESCRIPTIVES} again, this
137 time on the @exvar{zheight} and @exvar{ztemperature} variables,
138 which are the two normalized (Z-score) variables generated by the
139 first @cmd{DESCRIPTIVES} command.
141 @float Example, descriptives:ex
142 @psppsyntax {descriptives.sps}
143 @caption {Running two @cmd{DESCRIPTIVES} commands, one with the @subcmd{SAVE} subcommand}
146 @float Screenshot, descriptives:scr
147 @psppimage {descriptives}
148 @caption {The Descriptives dialog box with two variables and Z-Scores option selected}
In @ref{descriptives:res}, we can see that there are 40 valid cases for each of the variables
and no missing values. The mean height and temperature are 16677.12
and 37.02 respectively. The descriptive statistics for temperature seem reasonable.
154 However there is a very high standard deviation for @exvar{height} and a suspiciously
155 low minimum. This is due to a data entry error in the
156 data (@pxref{Identifying incorrect data}).
In the second Descriptive Statistics command, one can see that the mean and standard
deviation of both Z score variables are 0 and 1 respectively. All Z score statistics
should have these properties since they are normalized versions of the original scores.
162 @float Result, descriptives:res
163 @psppoutput {descriptives}
164 @caption {Descriptives statistics including two normalized variables (Z-scores)}
173 /VARIABLES=@var{var_list}
174 /FORMAT=@{TABLE,NOTABLE,LIMIT(@var{limit})@}
175 @{AVALUE,DVALUE,AFREQ,DFREQ@}
176 /MISSING=@{EXCLUDE,INCLUDE@}
177 /STATISTICS=@{DEFAULT,MEAN,SEMEAN,MEDIAN,MODE,STDDEV,VARIANCE,
178 KURTOSIS,SKEWNESS,RANGE,MINIMUM,MAXIMUM,SUM,
179 SESKEWNESS,SEKURTOSIS,ALL,NONE@}
181 /PERCENTILES=percent@dots{}
182 /HISTOGRAM=[MINIMUM(@var{x_min})] [MAXIMUM(@var{x_max})]
183 [@{FREQ[(@var{y_max})],PERCENT[(@var{y_max})]@}] [@{NONORMAL,NORMAL@}]
184 /PIECHART=[MINIMUM(@var{x_min})] [MAXIMUM(@var{x_max})]
185 [@{FREQ,PERCENT@}] [@{NOMISSING,MISSING@}]
186 /BARCHART=[MINIMUM(@var{x_min})] [MAXIMUM(@var{x_max})]
188 /ORDER=@{ANALYSIS,VARIABLE@}
191 (These options are not currently implemented.)
The @cmd{FREQUENCIES} procedure outputs frequency tables for the specified variables.
198 @cmd{FREQUENCIES} can also calculate and display descriptive statistics
199 (including median and mode) and percentiles, and various graphical representations
200 of the frequency distribution.
202 The @subcmd{VARIABLES} subcommand is the only required subcommand. Specify the
203 variables to be analyzed.
The @subcmd{FORMAT} subcommand controls the output format. It has several possible settings:
210 @subcmd{TABLE}, the default, causes a frequency table to be output for every
211 variable specified. @subcmd{NOTABLE} prevents them from being output. @subcmd{LIMIT}
212 with a numeric argument causes them to be output except when there are
213 more than the specified number of values in the table.
216 Normally frequency tables are sorted in ascending order by value. This
217 is @subcmd{AVALUE}. @subcmd{DVALUE} tables are sorted in descending order by value.
218 @subcmd{AFREQ} and @subcmd{DFREQ} tables are sorted in ascending and descending order,
219 respectively, by frequency count.
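These settings might be combined as follows (a sketch; @var{v1} is hypothetical):

@example
FREQUENCIES /VARIABLES=@var{v1}
        /FORMAT=LIMIT(10) DFREQ.
@end example

This suppresses the frequency table when @var{v1} takes more than 10 distinct values, and otherwise sorts it in descending order of frequency.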
222 The @subcmd{MISSING} subcommand controls the handling of user-missing values.
223 When @subcmd{EXCLUDE}, the default, is set, user-missing values are not included
in frequency tables or statistics. When @subcmd{INCLUDE} is set, user-missing
values are included. System-missing values are never included in statistics,
226 but are listed in frequency tables.
228 The available @subcmd{STATISTICS} are the same as available
229 in @cmd{DESCRIPTIVES} (@pxref{DESCRIPTIVES}), with the addition
230 of @subcmd{MEDIAN}, the data's median
value, and @subcmd{MODE}, the mode. (If there are multiple modes, the smallest
value is reported.) By default, the mean, standard deviation, minimum, and
maximum are reported for each variable.
236 @subcmd{PERCENTILES} causes the specified percentiles to be reported.
The percentiles should be specified as a list of numbers between 0 and 100.
239 The @subcmd{NTILES} subcommand causes the percentiles to be reported at the
240 boundaries of the data set divided into the specified number of ranges.
241 For instance, @subcmd{/NTILES=4} would cause quartiles to be reported.
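For example (a sketch with a hypothetical variable):

@example
FREQUENCIES /VARIABLES=@var{v1}
        /PERCENTILES=25 50 75
        /NTILES=4.
@end example

Both subcommands here request the quartiles, so either one alone would have the same effect.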
244 The @subcmd{HISTOGRAM} subcommand causes the output to include a histogram for
245 each specified numeric variable. The X axis by default ranges from
246 the minimum to the maximum value observed in the data, but the @subcmd{MINIMUM}
247 and @subcmd{MAXIMUM} keywords can set an explicit range.
@footnote{The bin width is chosen according to the Freedman-Diaconis rule:
@math{2 \times IQR(x) n^{-1/3}}, where @math{IQR(x)} is the interquartile range of @math{x}
and @math{n} is the number of samples. Note that
@cmd{EXAMINE} uses a different algorithm to determine bin sizes.}
253 Histograms are not created for string variables.
Specify @subcmd{NORMAL} to superimpose a normal curve on the histogram.
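Combining these options (a sketch; the variable and limits are hypothetical):

@example
FREQUENCIES /VARIABLES=@var{v1}
        /HISTOGRAM=MINIMUM(0) MAXIMUM(100) NORMAL.
@end example

This restricts the X axis to the range 0 through 100 and superimposes a normal curve.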
The @subcmd{PIECHART} subcommand adds a pie chart for each variable to the output. Each
slice represents one value, with the size of the slice proportional to
the value's frequency. By default, all non-missing values are given a slice.
263 The @subcmd{MINIMUM} and @subcmd{MAXIMUM} keywords can be used to limit the
264 displayed slices to a given range of values.
The keyword @subcmd{NOMISSING}, the default, causes missing values to be omitted from the
pie chart.
If instead @subcmd{MISSING} is specified, then the pie chart includes
a single slice representing all system-missing and user-missing cases.
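For instance, to include a slice for missing cases (a sketch; @var{v1} is hypothetical):

@example
FREQUENCIES /VARIABLES=@var{v1}
        /PIECHART=MISSING.
@end example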
271 The @subcmd{BARCHART} subcommand produces a bar chart for each variable.
272 The @subcmd{MINIMUM} and @subcmd{MAXIMUM} keywords can be used to omit
categories whose counts lie outside the specified limits.
The @subcmd{FREQ} option (the default) causes the ordinate to display the frequency
of each category, whereas the @subcmd{PERCENT} option displays relative
percentages.
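A bar chart restricted to a range of categories and labelled with percentages might be requested thus (a sketch; the variable and limits are hypothetical):

@example
FREQUENCIES /VARIABLES=@var{v1}
        /BARCHART=MINIMUM(1) MAXIMUM(5) PERCENT.
@end example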
278 The @subcmd{FREQ} and @subcmd{PERCENT} options on @subcmd{HISTOGRAM} and
279 @subcmd{PIECHART} are accepted but not currently honoured.
281 The @subcmd{ORDER} subcommand is accepted but ignored.
283 @subsection Frequencies Example
285 @ref{frequencies:ex} runs a frequency analysis on the @exvar{sex}
286 and @exvar{occupation} variables from the @file{personnel.sav} file.
This is useful to get a general idea of the way in which these nominal
288 variables are distributed.
290 @float Example, frequencies:ex
291 @psppsyntax {frequencies.sps}
292 @caption {Running frequencies on the @exvar{sex} and @exvar{occupation} variables}
If you are using the graphical user interface, the dialog box is set up such that
296 by default, several statistics are calculated. Some are not particularly useful
297 for categorical variables, so you may want to disable those.
299 @float Screenshot, frequencies:scr
300 @psppimage {frequencies}
301 @caption {The frequencies dialog box with the @exvar{sex} and @exvar{occupation} variables selected}
From @ref{frequencies:res} it is evident that there are 33 males, 21 females and
2 persons whose sex has not been entered.
307 One can also see how many of each occupation there are in the data.
When dealing with string variables used as nominal values, running a frequency
analysis is useful to detect data entry errors. Notice that
310 one @exvar{occupation} value has been mistyped as ``Scrientist''. This entry should
311 be corrected, or marked as missing before using the data.
313 @float Result, frequencies:res
314 @psppoutput {frequencies}
315 @caption {The relative frequencies of @exvar{sex} and @exvar{occupation}}
322 @cindex Exploratory data analysis
323 @cindex normality, testing
327 VARIABLES= @var{var1} [@var{var2}] @dots{} [@var{varN}]
328 [BY @var{factor1} [BY @var{subfactor1}]
329 [ @var{factor2} [BY @var{subfactor2}]]
331 [ @var{factor3} [BY @var{subfactor3}]]
333 /STATISTICS=@{DESCRIPTIVES, EXTREME[(@var{n})], ALL, NONE@}
334 /PLOT=@{BOXPLOT, NPPLOT, HISTOGRAM, SPREADLEVEL[(@var{t})], ALL, NONE@}
336 /COMPARE=@{GROUPS,VARIABLES@}
337 /ID=@var{identity_variable}
339 /PERCENTILE=[@var{percentiles}]=@{HAVERAGE, WAVERAGE, ROUND, AEMPIRICAL, EMPIRICAL @}
340 /MISSING=@{LISTWISE, PAIRWISE@} [@{EXCLUDE, INCLUDE@}]
341 [@{NOREPORT,REPORT@}]
345 The @cmd{EXAMINE} command is used to perform exploratory data analysis.
346 In particular, it is useful for testing how closely a distribution follows a
347 normal distribution, and for finding outliers and extreme values.
349 The @subcmd{VARIABLES} subcommand is mandatory.
350 It specifies the dependent variables and optionally variables to use as
351 factors for the analysis.
Variables listed before the first @subcmd{BY} keyword (if any) are the dependent variables.
354 The dependent variables may optionally be followed by a list of
factors which tell @pspp{} how to break down the analysis for each dependent variable.
358 Following the dependent variables, factors may be specified.
359 The factors (if desired) should be preceded by a single @subcmd{BY} keyword.
360 The format for each factor is
362 @var{factorvar} [BY @var{subfactorvar}].
Each unique combination of the values of @var{factorvar} and
@var{subfactorvar} divides the dataset into @dfn{cells}.
366 Statistics are calculated for each cell
367 and for the entire dataset (unless @subcmd{NOTOTAL} is given).
369 The @subcmd{STATISTICS} subcommand specifies which statistics to show.
370 @subcmd{DESCRIPTIVES} produces a table showing some parametric and
non-parametric statistics.
372 @subcmd{EXTREME} produces a table showing the extremities of each cell.
A number in parentheses, @var{n}, determines
374 how many upper and lower extremities to show.
375 The default number is 5.
377 The subcommands @subcmd{TOTAL} and @subcmd{NOTOTAL} are mutually exclusive.
378 If @subcmd{TOTAL} appears, then statistics for the entire dataset
379 as well as for each cell are produced.
380 If @subcmd{NOTOTAL} appears, then statistics are produced only for the cells
381 (unless no factor variables have been given).
These subcommands have no effect if no factor variables have been specified.
388 @cindex spreadlevel plot
389 The @subcmd{PLOT} subcommand specifies which plots are to be produced if any.
390 Available plots are @subcmd{HISTOGRAM}, @subcmd{NPPLOT}, @subcmd{BOXPLOT} and
391 @subcmd{SPREADLEVEL}.
392 The first three can be used to visualise how closely each cell conforms to a
393 normal distribution, whilst the spread vs.@: level plot can be useful to visualise
394 how the variance differs between factors.
395 Boxplots show you the outliers and extreme values.
396 @footnote{@subcmd{HISTOGRAM} uses Sturges' rule to determine the number of
bins, as approximately @math{1 + \log_2(n)}, where @math{n} is the number of samples.
398 Note that @cmd{FREQUENCIES} uses a different algorithm to find the bin size.}
400 The @subcmd{SPREADLEVEL} plot displays the interquartile range versus the
401 median. It takes an optional parameter @var{t}, which specifies how the data
402 should be transformed prior to plotting.
403 The given value @var{t} is a power to which the data are raised. For example, if
404 @var{t} is given as 2, then the square of the data is used.
Zero, however, is a special value: if @var{t} is 0 or
is omitted, then the data are transformed by taking their natural logarithm instead of
being raised to the power of @var{t}.
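For example, a square-root transformation before plotting could be requested as follows (a sketch; the variable names are hypothetical):

@example
EXAMINE @var{v1} BY @var{factor}
        /PLOT = SPREADLEVEL(0.5).
@end example

Omitting the parameter, as in @subcmd{/PLOT = SPREADLEVEL}, would instead plot the natural logarithm of the data.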
410 When one or more plots are requested, @subcmd{EXAMINE} also performs the
411 Shapiro-Wilk test for each category.
412 There are however a number of provisos:
414 @item All weight values must be integer.
@item The cumulative weight value must be in the range [3, 5000].
The @subcmd{COMPARE} subcommand is only relevant if producing boxplots, and it is only
useful if there is more than one dependent variable and at least one factor.
If @subcmd{/COMPARE=GROUPS} is specified, then one plot per dependent variable is produced,
each of which contains boxplots for all the cells.
423 If @subcmd{/COMPARE=VARIABLES} is specified, then one plot per cell is produced,
424 each containing one boxplot per dependent variable.
425 If the @subcmd{/COMPARE} subcommand is omitted, then @pspp{} behaves as if
426 @subcmd{/COMPARE=GROUPS} were given.
428 The @subcmd{ID} subcommand is relevant only if @subcmd{/PLOT=BOXPLOT} or
429 @subcmd{/STATISTICS=EXTREME} has been given.
430 If given, it should provide the name of a variable which is to be used
to label extreme values and outliers.
432 Numeric or string variables are permissible.
If the @subcmd{ID} subcommand is not given, then the case number is used for labelling.
436 The @subcmd{CINTERVAL} subcommand specifies the confidence interval to use in
calculation of the descriptive statistics. The default is 95%.
440 The @subcmd{PERCENTILES} subcommand specifies which percentiles are to be calculated,
441 and which algorithm to use for calculating them. The default is to
442 calculate the 5, 10, 25, 50, 75, 90, 95 percentiles using the
443 @subcmd{HAVERAGE} algorithm.
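Following the synopsis above, only the 10th and 90th percentiles, computed with the @subcmd{ROUND} algorithm, might be requested like this (a sketch; @var{v1} is hypothetical):

@example
EXAMINE @var{v1}
        /PERCENTILES = [10 90] = ROUND.
@end example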
The @subcmd{TOTAL} and @subcmd{NOTOTAL} subcommands are mutually exclusive. If @subcmd{TOTAL}
is given and factors have been specified in the @subcmd{VARIABLES} subcommand,
then statistics for the unfactored dependent variables are
produced in addition to the factored variables. If there are no
factors specified then @subcmd{TOTAL} and @subcmd{NOTOTAL} have no effect.
452 The following example generates descriptive statistics and histograms for
453 two variables @var{score1} and @var{score2}.
Two factors are given, @i{viz}: @var{gender} and @var{gender} BY @var{culture}.
Therefore, the descriptives and histograms are generated for each distinct
value of @var{gender} @emph{and} for each distinct combination of the values
of @var{gender} and @var{culture}.
459 Since the @subcmd{NOTOTAL} keyword is given, statistics and histograms for
460 @var{score1} and @var{score2} covering the whole dataset are not produced.
EXAMINE @var{score1} @var{score2} BY
        @var{gender}
        @var{gender} BY @var{culture}
        /STATISTICS = DESCRIPTIVES
        /PLOT = HISTOGRAM
        /NOTOTAL.
Here is a second example showing how the @cmd{EXAMINE} command can be used to find extremities.
EXAMINE @var{height} @var{weight} BY
        @var{gender}
        /STATISTICS = EXTREME (3)
        /PLOT = BOXPLOT
        /COMPARE = GROUPS
        /ID = @var{name}.
479 In this example, we look at the height and weight of a sample of individuals and
480 how they differ between male and female.
A table showing the 3 largest and the 3 smallest values of @exvar{height} and
@exvar{weight} for each gender, and for the whole dataset, is shown.
483 In addition, the @subcmd{/PLOT} subcommand requests boxplots.
Because @subcmd{/COMPARE = GROUPS} was specified, boxplots for male and female are
shown juxtaposed in the same graphic, allowing us to easily see the difference between
the genders.
487 Since the variable @var{name} was specified on the @subcmd{ID} subcommand,
488 values of the @var{name} variable are used to label the extreme values.
491 If you specify many dependent variables or factor variables
492 for which there are many distinct values, then @cmd{EXAMINE} will produce a very
493 large quantity of output.
499 @cindex Exploratory data analysis
500 @cindex normality, testing
504 /HISTOGRAM [(NORMAL)]= @var{var}
505 /SCATTERPLOT [(BIVARIATE)] = @var{var1} WITH @var{var2} [BY @var{var3}]
506 /BAR = @{@var{summary-function}(@var{var1}) | @var{count-function}@} BY @var{var2} [BY @var{var3}]
507 [ /MISSING=@{LISTWISE, VARIABLE@} [@{EXCLUDE, INCLUDE@}] ]
508 [@{NOREPORT,REPORT@}]
512 The @cmd{GRAPH} command produces graphical plots of data. Only one of the subcommands
@subcmd{HISTOGRAM}, @subcmd{BAR} or @subcmd{SCATTERPLOT} can be specified, @i{i.e.} only one plot
can be produced per call of @cmd{GRAPH}. The @subcmd{MISSING} subcommand is optional.
517 * SCATTERPLOT:: Cartesian Plots
518 * HISTOGRAM:: Histograms
519 * BAR CHART:: Bar Charts
523 @subsection Scatterplot
The subcommand @subcmd{SCATTERPLOT} produces an xy plot of the data.
528 @cmd{GRAPH} uses the third variable @var{var3}, if specified, to determine
529 the colours and/or markers for the plot.
530 The following is an example for producing a scatterplot.
534 /SCATTERPLOT = @var{height} WITH @var{weight} BY @var{gender}.
537 This example produces a scatterplot where @var{height} is plotted versus @var{weight}. Depending
on the value of the @var{gender} variable, the colour of the datapoint is different. With
this plot it is possible to analyze gender differences in the @var{height} versus @var{weight} relation.
542 @subsection Histogram
The subcommand @subcmd{HISTOGRAM} produces a histogram. Only one variable is allowed for the histogram plot.
547 The keyword @subcmd{NORMAL} may be specified in parentheses, to indicate that the ideal normal curve
548 should be superimposed over the histogram.
549 For an alternative method to produce histograms @pxref{EXAMINE}. The
550 following example produces a histogram plot for the variable @var{weight}.
554 /HISTOGRAM = @var{weight}.
558 @subsection Bar Chart
561 The subcommand @subcmd{BAR} produces a bar chart.
562 This subcommand requires that a @var{count-function} be specified (with no arguments) or a @var{summary-function} with a variable @var{var1} in parentheses.
Following the summary or count function, the keyword @subcmd{BY} should be specified and then a categorical variable, @var{var2}.
564 The values of the variable @var{var2} determine the labels of the bars to be plotted.
565 Optionally a second categorical variable @var{var3} may be specified in which case a clustered (grouped) bar chart is produced.
Valid count functions are
@table @subcmd
@item COUNT
The weighted counts of the cases in each category.
@item PCT
The weighted counts of the cases in each category expressed as a percentage of the total weights of the cases.
@item CUFREQ
The cumulative weighted counts of the cases in each category.
@item CUPCT
The cumulative weighted counts of the cases in each category expressed as a percentage of the total weights of the cases.
@end table
579 The summary function is applied to @var{var1} across all cases in each category.
580 The recognised summary functions are:
592 The following examples assume a dataset which is the results of a survey.
593 Each respondent has indicated annual income, their sex and city of residence.
One could create a bar chart showing how the mean income varies between residents of different cities, thus:
596 GRAPH /BAR = MEAN(@var{income}) BY @var{city}.
599 This can be extended to also indicate how income in each city differs between the sexes.
601 GRAPH /BAR = MEAN(@var{income}) BY @var{city} BY @var{sex}.
604 One might also want to see how many respondents there are from each city. This can be achieved as follows:
606 GRAPH /BAR = COUNT BY @var{city}.
609 Bar charts can also be produced using the @ref{FREQUENCIES} and @ref{CROSSTABS} commands.
612 @section CORRELATIONS
617 /VARIABLES = @var{var_list} [ WITH @var{var_list} ]
622 /VARIABLES = @var{var_list} [ WITH @var{var_list} ]
623 /VARIABLES = @var{var_list} [ WITH @var{var_list} ]
626 [ /PRINT=@{TWOTAIL, ONETAIL@} @{SIG, NOSIG@} ]
627 [ /STATISTICS=DESCRIPTIVES XPROD ALL]
628 [ /MISSING=@{PAIRWISE, LISTWISE@} @{INCLUDE, EXCLUDE@} ]
632 The @cmd{CORRELATIONS} procedure produces tables of the Pearson correlation coefficient
for a set of variables. The significance of each coefficient is also given.
635 At least one @subcmd{VARIABLES} subcommand is required. If you specify the @subcmd{WITH}
636 keyword, then a non-square correlation table is produced.
The variables preceding @subcmd{WITH} are used as the rows of the table,
638 and the variables following @subcmd{WITH} are used as the columns of the table.
639 If no @subcmd{WITH} subcommand is specified, then @cmd{CORRELATIONS} produces a
640 square, symmetrical table using all variables.
The @subcmd{MISSING} subcommand determines the handling of missing values.
643 If @subcmd{INCLUDE} is set, then user-missing values are included in the
644 calculations, but system-missing values are not.
645 If @subcmd{EXCLUDE} is set, which is the default, user-missing
646 values are excluded as well as system-missing values.
648 If @subcmd{LISTWISE} is set, then the entire case is excluded from analysis
649 whenever any variable specified in any @cmd{/VARIABLES} subcommand
650 contains a missing value.
If @subcmd{PAIRWISE} is set, then a case is considered missing only if either of the
values for the particular coefficient is missing.
653 The default is @subcmd{PAIRWISE}.
655 The @subcmd{PRINT} subcommand is used to control how the reported significance values are printed.
656 If the @subcmd{TWOTAIL} option is used, then a two-tailed test of significance is
657 printed. If the @subcmd{ONETAIL} option is given, then a one-tailed test is used.
658 The default is @subcmd{TWOTAIL}.
660 If the @subcmd{NOSIG} option is specified, then correlation coefficients with significance less than
661 0.05 are highlighted.
662 If @subcmd{SIG} is specified, then no highlighting is performed. This is the default.
665 The @subcmd{STATISTICS} subcommand requests additional statistics to be displayed. The keyword
666 @subcmd{DESCRIPTIVES} requests that the mean, number of non-missing cases, and the non-biased
estimator of the standard deviation are displayed.
These statistics are displayed in a separate table, for all the variables listed
669 in any @subcmd{/VARIABLES} subcommand.
670 The @subcmd{XPROD} keyword requests cross-product deviations and covariance estimators to
671 be displayed for each pair of variables.
672 The keyword @subcmd{ALL} is the union of @subcmd{DESCRIPTIVES} and @subcmd{XPROD}.
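Putting these subcommands together, a non-square table with descriptive statistics might be requested as follows (a sketch; the variable names are hypothetical):

@example
CORRELATIONS
        /VARIABLES = @var{v1} @var{v2} WITH @var{v3}
        /PRINT = TWOTAIL NOSIG
        /STATISTICS = DESCRIPTIVES.
@end example

Here @var{v1} and @var{v2} form the rows of the table and @var{v3} its single column, and coefficients with two-tailed significance below 0.05 are highlighted.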
680 /TABLES=@var{var_list} BY @var{var_list} [BY @var{var_list}]@dots{}
681 /MISSING=@{TABLE,INCLUDE,REPORT@}
682 /FORMAT=@{TABLES,NOTABLES@}
684 /CELLS=@{COUNT,ROW,COLUMN,TOTAL,EXPECTED,RESIDUAL,SRESIDUAL,
685 ASRESIDUAL,ALL,NONE@}
686 /COUNT=@{ASIS,CASE,CELL@}
688 /STATISTICS=@{CHISQ,PHI,CC,LAMBDA,UC,BTAU,CTAU,RISK,GAMMA,D,
689 KAPPA,ETA,CORR,ALL,NONE@}
693 /VARIABLES=@var{var_list} (@var{low},@var{high})@dots{}
696 The @cmd{CROSSTABS} procedure displays crosstabulation
697 tables requested by the user. It can calculate several statistics for
698 each cell in the crosstabulation tables. In addition, a number of
699 statistics can be calculated for each table itself.
701 The @subcmd{TABLES} subcommand is used to specify the tables to be reported. Any
702 number of dimensions is permitted, and any number of variables per
703 dimension is allowed. The @subcmd{TABLES} subcommand may be repeated as many
times as needed. This is the only required subcommand in @dfn{general mode}.
707 Occasionally, one may want to invoke a special mode called @dfn{integer
708 mode}. Normally, in general mode, @pspp{} automatically determines
709 what values occur in the data. In integer mode, the user specifies the
710 range of values that the data assumes. To invoke this mode, specify the
711 @subcmd{VARIABLES} subcommand, giving a range of data values in parentheses for
712 each variable to be used on the @subcmd{TABLES} subcommand. Data values inside
713 the range are truncated to the nearest integer, then assigned to that
714 value. If values occur outside this range, they are discarded. When it
715 is present, the @subcmd{VARIABLES} subcommand must precede the @subcmd{TABLES}
In general mode, numeric and string variables may be specified on
@subcmd{TABLES}. In integer mode, only numeric variables are allowed.
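Integer mode might be invoked like this (a sketch; the variables and ranges are hypothetical):

@example
CROSSTABS /VARIABLES=@var{v1} (1,4) @var{v2} (1,2)
        /TABLES=@var{v1} BY @var{v2}.
@end example

Values of @var{v1} outside the range 1 to 4, and values of @var{v2} outside 1 to 2, are discarded.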
721 The @subcmd{MISSING} subcommand determines the handling of user-missing values.
722 When set to @subcmd{TABLE}, the default, missing values are dropped on a table by
723 table basis. When set to @subcmd{INCLUDE}, user-missing values are included in
724 tables and statistics. When set to @subcmd{REPORT}, which is allowed only in
725 integer mode, user-missing values are included in tables but marked with
726 a footnote and excluded from statistical calculations.
728 The @subcmd{FORMAT} subcommand controls the characteristics of the
crosstabulation tables to be displayed. It has a number of possible settings:
734 @subcmd{TABLES}, the default, causes crosstabulation tables to be output.
735 @subcmd{NOTABLES}, which is equivalent to @code{CELLS=NONE}, suppresses them.
738 @subcmd{AVALUE}, the default, causes values to be sorted in ascending order.
739 @subcmd{DVALUE} asserts a descending sort order.
742 The @subcmd{CELLS} subcommand controls the contents of each cell in the displayed
743 crosstabulation table. The possible settings are:
759 Standardized residual.
761 Adjusted standardized residual.
765 Suppress cells entirely.
768 @samp{/CELLS} without any settings specified requests @subcmd{COUNT}, @subcmd{ROW},
769 @subcmd{COLUMN}, and @subcmd{TOTAL}.
If @subcmd{CELLS} is not specified at all then only @subcmd{COUNT} is selected.
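For example, observed and expected counts alone could be requested thus (a sketch; the variable names are hypothetical):

@example
CROSSTABS /TABLES=@var{v1} BY @var{v2}
        /CELLS=COUNT EXPECTED.
@end example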
773 By default, crosstabulation and statistics use raw case weights,
without rounding. Use the @subcmd{/COUNT} subcommand to perform
rounding: @subcmd{CASE} rounds the weights of individual cases as they are
read, @subcmd{CELL} rounds the weights of cells within each crosstabulation
table after it has been constructed, and @subcmd{ASIS} explicitly specifies the
default non-rounding behavior. When rounding is requested, @subcmd{ROUND}, the
default, rounds to the nearest integer and @subcmd{TRUNCATE} rounds toward
zero.
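For instance, cell-level rounding toward zero could be requested like this (a sketch; the variable names are hypothetical):

@example
CROSSTABS /TABLES=@var{v1} BY @var{v2}
        /COUNT=CELL TRUNCATE.
@end example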
782 The @subcmd{STATISTICS} subcommand selects statistics for computation:
788 Pearson chi-square, likelihood ratio, Fisher's exact test, continuity
789 correction, linear-by-linear association.
793 Contingency coefficient.
797 Uncertainty coefficient.
813 Spearman correlation, Pearson's r.
820 Selected statistics are only calculated when appropriate for the
821 statistic. Certain statistics require tables of a particular size, and
822 some statistics are calculated only in integer mode.
@samp{/STATISTICS} without any settings selects @subcmd{CHISQ}. If the
825 @subcmd{STATISTICS} subcommand is not given, no statistics are calculated.
828 The @samp{/BARCHART} subcommand produces a clustered bar chart for the first two
829 variables on each table.
830 If a table has more than two variables, the counts for the third and subsequent levels
831 are aggregated and the chart is produced as if there were only two variables.
834 @strong{Please note:} Currently the implementation of @cmd{CROSSTABS} has the
835 following limitations:
839 Significance of some symmetric and directional measures is not calculated.
841 Asymptotic standard error is not calculated for
842 Goodman and Kruskal's tau or symmetric Somers' d.
844 Approximate T is not calculated for symmetric uncertainty coefficient.
847 Fixes for any of these deficiencies would be welcomed.
849 @subsection Crosstabs Example
851 @cindex chi-square test of independence
853 A researcher wishes to know if, in an industry, a person's sex is related to
854 the person's occupation. To investigate this, she has determined that the
@file{personnel.sav} file is a representative, randomly selected sample of persons.
856 The researcher's null hypothesis is that a person's sex has no relation to a
857 person's occupation. She uses a chi-squared test of independence to investigate
860 @float Example, crosstabs:ex
861 @psppsyntax {crosstabs.sps}
862 @caption {Running crosstabs on the @exvar{sex} and @exvar{occupation} variables}
865 The syntax in @ref{crosstabs:ex} conducts a chi-squared test of independence.
866 The line @code{/tables = occupation by sex} indicates that @exvar{occupation}
867 and @exvar{sex} are the variables to be tabulated. To do this using the @gui{}
868 you must place these variable names respectively in the @samp{Row} and
869 @samp{Column} fields as shown in @ref{crosstabs:scr}.
871 @float Screenshot, crosstabs:scr
872 @psppimage {crosstabs}
873 @caption {The Crosstabs dialog box with the @exvar{sex} and @exvar{occupation} variables selected}
876 Similarly, the @samp{Cells} button shows a dialog box to select the @code{count}
877 and @code{expected} options. All other cell options can be deselected for this
880 You would use the @samp{Format} and @samp{Statistics} buttons to select options
881 for the @subcmd{FORMAT} and @subcmd{STATISTICS} subcommands. In this example,
the @samp{Statistics} dialog requires only the @samp{Chisq} option to be checked. All
883 other options should be unchecked. No special settings are required from the
884 @samp{Format} dialog.
As shown in @ref{crosstabs:res}, @cmd{CROSSTABS} generates a contingency table
887 containing the observed count and the expected count of each sex and each
888 occupation. The expected count is the count which would be observed if the
889 null hypothesis were true.
891 The significance of the Pearson Chi-Square value is very much larger than the
892 normally accepted value of 0.05 and so one cannot reject the null hypothesis.
893 Thus the researcher must conclude that a person's sex has no relation to the
896 @float Results, crosstabs:res
897 @psppoutput {crosstabs}
898 @caption {The results of a test of independence between @exvar{sex} and @exvar{occupation}}
905 @cindex custom tables
906 @cindex tables, custom
908 @code{CTABLES} has the following overall syntax. At least one
909 @code{TABLE} subcommand is required:
913 @dots{}@i{global subcommands}@dots{}
914 [@t{/TABLE} @i{rows} @t{BY} @i{columns} @t{BY} @i{layers}
915 @dots{}@i{per-table subcommands}@dots{}]@dots{}
918 The following subcommands precede the first @code{TABLE} subcommand
919 and apply to all of the output tables. All of these subcommands are
924 [@t{MINCOLWIDTH=}@{@t{DEFAULT} @math{|} @i{width}@}]
925 [@t{MAXCOLWIDTH=}@{@t{DEFAULT} @math{|} @i{width}@}]
926 [@t{UNITS=}@{@t{POINTS} @math{|} @t{INCHES} @math{|} @t{CM}@}]
927 [@t{EMPTY=}@{@t{ZERO} @math{|} @t{BLANK} @math{|} @i{string}@}]
928 [@t{MISSING=}@i{string}]
930 @t{VARIABLES=}@i{variables}
931 @t{DISPLAY}=@{@t{DEFAULT} @math{|} @t{NAME} @math{|} @t{LABEL} @math{|} @t{BOTH} @math{|} @t{NONE}@}
932 @t{/MRSETS COUNTDUPLICATES=}@{@t{YES} @math{|} @t{NO}@}
933 @t{/SMISSING} @{@t{VARIABLE} @math{|} @t{LISTWISE}@}
934 @t{/PCOMPUTE} @t{&}@i{category}@t{=EXPR(}@i{expression}@t{)}
935 @t{/PPROPERTIES} @t{&}@i{category}@dots{}
936 [@t{LABEL=}@i{string}]
937 [@t{FORMAT=}[@i{summary} @i{format}]@dots{}]
[@t{HIDESOURCECATS=}@{@t{NO} @math{|} @t{YES}@}]
939 @t{/WEIGHT VARIABLE=}@i{variable}
940 @t{/HIDESMALLCOUNTS COUNT=@i{count}}
943 The following subcommands follow @code{TABLE} and apply only to the
944 previous @code{TABLE}. All of these subcommands are optional:
948 [@t{POSITION=}@{@t{COLUMN} @math{|} @t{ROW} @math{|} @t{LAYER}@}]
949 [@t{VISIBLE=}@{@t{YES} @math{|} @t{NO}@}]
950 @t{/CLABELS} @{@t{AUTO} @math{|} @{@t{ROWLABELS}@math{|}@t{COLLABELS}@}@t{=}@{@t{OPPOSITE}@math{|}@t{LAYER}@}@}
951 @t{/CRITERIA CILEVEL=}@i{percentage}
952 @t{/CATEGORIES} @t{VARIABLES=}@i{variables}
953 @{@t{[}@i{value}@t{,} @i{value}@dots{}@t{]}
954 @math{|} [@t{ORDER=}@{@t{A} @math{|} @t{D}@}]
955 [@t{KEY=}@{@t{VALUE} @math{|} @t{LABEL} @math{|} @i{summary}@t{(}@i{variable}@t{)}@}]
956 [@t{MISSING=}@{@t{EXCLUDE} @math{|} @t{INCLUDE}@}]@}
957 [@t{TOTAL=}@{@t{NO} @math{|} @t{YES}@} [@t{LABEL=}@i{string}] [@t{POSITION=}@{@t{AFTER} @math{|} @t{BEFORE}@}]]
958 [@t{EMPTY=}@{@t{INCLUDE} @math{|} @t{EXCLUDE}@}]
960 [@t{TITLE=}@i{string}@dots{}]
961 [@t{CAPTION=}@i{string}@dots{}]
962 [@t{CORNER=}@i{string}@dots{}]
963 @t{/SIGTEST TYPE=CHISQUARE}
964 [@t{ALPHA=}@i{siglevel}]
965 [@t{INCLUDEMRSETS=}@{@t{YES} @math{|} @t{NO}@}]
966 [@t{CATEGORIES=}@{@t{ALLVISIBLE} @math{|} @t{SUBTOTALS}@}]
967 @t{/COMPARETEST TYPE=}@{@t{PROP} @math{|} @t{MEAN}@}
968 [@t{ALPHA=}@i{value}[@t{,} @i{value}]]
969 [@t{ADJUST=}@{@t{BONFERRONI} @math{|} @t{BH} @math{|} @t{NONE}@}]
970 [@t{INCLUDEMRSETS=}@{@t{YES} @math{|} @t{NO}@}]
971 [@t{MEANSVARIANCE=}@{@t{ALLCATS} @math{|} @t{TESTEDCATS}@}]
972 [@t{CATEGORIES=}@{@t{ALLVISIBLE} @math{|} @t{SUBTOTALS}@}]
973 [@t{MERGE=}@{@t{NO} @math{|} @t{YES}@}]
974 [@t{STYLE=}@{@t{APA} @math{|} @t{SIMPLE}@}]
975 [@t{SHOWSIG=}@{@t{NO} @math{|} @t{YES}@}]
982 @cindex factor analysis
983 @cindex principal components analysis
984 @cindex principal axis factoring
985 @cindex data reduction
989 VARIABLES=@var{var_list},
990 MATRIX IN (@{CORR,COV@}=@{*,@var{file_spec}@})
993 [ /METHOD = @{CORRELATION, COVARIANCE@} ]
995 [ /ANALYSIS=@var{var_list} ]
997 [ /EXTRACTION=@{PC, PAF@}]
999 [ /ROTATION=@{VARIMAX, EQUAMAX, QUARTIMAX, PROMAX[(@var{k})], NOROTATE@}]
1001 [ /PRINT=[INITIAL] [EXTRACTION] [ROTATION] [UNIVARIATE] [CORRELATION] [COVARIANCE] [DET] [KMO] [AIC] [SIG] [ALL] [DEFAULT] ]
1005 [ /FORMAT=[SORT] [BLANK(@var{n})] [DEFAULT] ]
1007 [ /CRITERIA=[FACTORS(@var{n})] [MINEIGEN(@var{l})] [ITERATE(@var{m})] [ECONVERGE (@var{delta})] [DEFAULT] ]
1009 [ /MISSING=[@{LISTWISE, PAIRWISE@}] [@{INCLUDE, EXCLUDE@}] ]
The @cmd{FACTOR} command performs Principal Components Analysis or Principal Axis Factoring on a dataset.  It may be used to find
1013 common factors in the data or for data reduction purposes.
1015 The @subcmd{VARIABLES} subcommand is required (unless the @subcmd{MATRIX IN}
1016 subcommand is used).
1017 It lists the variables which are to partake in the analysis. (The @subcmd{ANALYSIS}
1018 subcommand may optionally further limit the variables that
1019 participate; it is useful primarily in conjunction with @subcmd{MATRIX IN}.)
1021 If @subcmd{MATRIX IN} instead of @subcmd{VARIABLES} is specified, then the analysis
1022 is performed on a pre-prepared correlation or covariance matrix file instead of on
1023 individual data cases. Typically the matrix file will have been generated by
1024 @cmd{MATRIX DATA} (@pxref{MATRIX DATA}) or provided by a third party.
1025 If specified, @subcmd{MATRIX IN} must be followed by @samp{COV} or @samp{CORR},
1026 then by @samp{=} and @var{file_spec} all in parentheses.
1027 @var{file_spec} may either be an asterisk, which indicates the currently loaded
1028 dataset, or it may be a file name to be loaded. @xref{MATRIX DATA}, for the expected
1031 The @subcmd{/EXTRACTION} subcommand is used to specify the way in which factors
1032 (components) are extracted from the data.
1033 If @subcmd{PC} is specified, then Principal Components Analysis is used.
1034 If @subcmd{PAF} is specified, then Principal Axis Factoring is
1035 used. By default Principal Components Analysis is used.
1037 The @subcmd{/ROTATION} subcommand is used to specify the method by which the
1038 extracted solution is rotated. Three orthogonal rotation methods are available:
1039 @subcmd{VARIMAX} (which is the default), @subcmd{EQUAMAX}, and @subcmd{QUARTIMAX}.
1040 There is one oblique rotation method, @i{viz}: @subcmd{PROMAX}.
1041 Optionally you may enter the power of the promax rotation @var{k}, which must be enclosed in parentheses.
1042 The default value of @var{k} is 5.
If you don't want any rotation to be performed, specify
@subcmd{NOROTATE}.
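For example, a Principal Axis Factoring extraction followed by a promax
rotation with @var{k} set to 3 might be requested as follows (the
variable names are illustrative):

@example
FACTOR /VARIABLES = @var{v1} @var{v2} @var{v3} @var{v4}
       /EXTRACTION = PAF
       /ROTATION = PROMAX(3).
@end example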
1046 The @subcmd{/METHOD} subcommand should be used to determine whether the
1047 covariance matrix or the correlation matrix of the data is
1048 to be analysed. By default, the correlation matrix is analysed.
1050 The @subcmd{/PRINT} subcommand may be used to select which features of the analysis are reported:
1053 @item @subcmd{UNIVARIATE}
A table of mean values, standard deviations and total weights is printed.
1055 @item @subcmd{INITIAL}
1056 Initial communalities and eigenvalues are printed.
1057 @item @subcmd{EXTRACTION}
1058 Extracted communalities and eigenvalues are printed.
1059 @item @subcmd{ROTATION}
1060 Rotated communalities and eigenvalues are printed.
1061 @item @subcmd{CORRELATION}
1062 The correlation matrix is printed.
1063 @item @subcmd{COVARIANCE}
1064 The covariance matrix is printed.
1066 The determinant of the correlation or covariance matrix is printed.
1068 The anti-image covariance and anti-image correlation matrices are printed.
The Kaiser-Meyer-Olkin measure of sampling adequacy and the Bartlett test of sphericity are printed.
The significance of the elements of the correlation matrix is printed.
1074 All of the above are printed.
1075 @item @subcmd{DEFAULT}
1076 Identical to @subcmd{INITIAL} and @subcmd{EXTRACTION}.
1079 If @subcmd{/PLOT=EIGEN} is given, then a ``Scree'' plot of the eigenvalues is
1080 printed. This can be useful for visualizing the factors and deciding
1081 which factors (components) should be retained.
The @subcmd{/FORMAT} subcommand determines how data are to be
1084 displayed in loading matrices. If @subcmd{SORT} is specified, then
1085 the variables are sorted in descending order of significance. If
1086 @subcmd{BLANK(@var{n})} is specified, then coefficients whose absolute
1087 value is less than @var{n} are not printed. If the keyword
1088 @subcmd{DEFAULT} is specified, or if no @subcmd{/FORMAT} subcommand is
1089 specified, then no sorting is performed, and all coefficients are printed.
You can use the @subcmd{/CRITERIA} subcommand to specify how the number of
extracted factors (components) is chosen.  If @subcmd{FACTORS(@var{n})} is
1093 specified, where @var{n} is an integer, then @var{n} factors are
1094 extracted. Otherwise, the @subcmd{MINEIGEN} setting is used.
1095 @subcmd{MINEIGEN(@var{l})} requests that all factors whose eigenvalues
1096 are greater than or equal to @var{l} are extracted. The default value
1097 of @var{l} is 1. The @subcmd{ECONVERGE} setting has effect only when
1098 using iterative algorithms for factor extraction (such as Principal Axis
1099 Factoring). @subcmd{ECONVERGE(@var{delta})} specifies that
iteration should cease when the maximum absolute change in any
communality estimate between one iteration and the next is less
than @var{delta}.  The default value of @var{delta} is 0.001.
The @subcmd{ITERATE(@var{m})} setting may appear any number of times and is
1105 used for two different purposes. It is used to set the maximum number
1106 of iterations (@var{m}) for convergence and also to set the maximum
1107 number of iterations for rotation.
1108 Whether it affects convergence or rotation depends upon which
1109 subcommand follows the @subcmd{ITERATE} subcommand.
1110 If @subcmd{EXTRACTION} follows, it affects convergence.
1111 If @subcmd{ROTATION} follows, it affects rotation.
If neither @subcmd{ROTATION} nor @subcmd{EXTRACTION} follows an
@subcmd{ITERATE} setting, then the entire setting is ignored.
1114 The default value of @var{m} is 25.
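To illustrate, the following sketch limits extraction to 50 iterations
and rotation to 30, because each @subcmd{ITERATE} applies to the
subcommand which follows it (the variable names are illustrative):

@example
FACTOR /VARIABLES = @var{v1} @var{v2} @var{v3}
       /CRITERIA = ITERATE(50)
       /EXTRACTION = PAF
       /CRITERIA = ITERATE(30)
       /ROTATION = VARIMAX.
@end example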
The @cmd{MISSING} subcommand determines the handling of missing
values.  If @subcmd{INCLUDE} is set, then user-missing values are
included in the calculations, but system-missing values are not.
If @subcmd{EXCLUDE} is set, which is the default, user-missing
values are excluded as well as system-missing values.
If @subcmd{LISTWISE} is set, then the entire case is excluded
1122 from analysis whenever any variable specified in the @cmd{VARIABLES}
1123 subcommand contains a missing value.
1125 If @subcmd{PAIRWISE} is set, then a case is considered missing only if
either of the values for the particular coefficient is missing.
1127 The default is @subcmd{LISTWISE}.
1133 @cindex univariate analysis of variance
1134 @cindex fixed effects
1135 @cindex factorial anova
1136 @cindex analysis of variance
1141 GLM @var{dependent_vars} BY @var{fixed_factors}
1142 [/METHOD = SSTYPE(@var{type})]
1143 [/DESIGN = @var{interaction_0} [@var{interaction_1} [... @var{interaction_n}]]]
1144 [/INTERCEPT = @{INCLUDE|EXCLUDE@}]
1145 [/MISSING = @{INCLUDE|EXCLUDE@}]
The @cmd{GLM} procedure can be used for fixed-effects factorial ANOVA.
1150 The @var{dependent_vars} are the variables to be analysed.
1151 You may analyse several variables in the same command in which case they should all
1152 appear before the @code{BY} keyword.
1154 The @var{fixed_factors} list must be one or more categorical variables. Normally it
1155 does not make sense to enter a scalar variable in the @var{fixed_factors} and doing
1156 so may cause @pspp{} to do a lot of unnecessary processing.
1158 The @subcmd{METHOD} subcommand is used to change the method for producing the sums of
1159 squares. Available values of @var{type} are 1, 2 and 3. The default is type 3.
1161 You may specify a custom design using the @subcmd{DESIGN} subcommand.
1162 The design comprises a list of interactions where each interaction is a
1163 list of variables separated by a @samp{*}. For example the command
1165 GLM subject BY sex age_group race
/DESIGN = age_group sex race age_group*sex age_group*race
1168 @noindent specifies the model @math{subject = age_group + sex + race + age_group*sex + age_group*race}.
1169 If no @subcmd{DESIGN} subcommand is specified, then the default is all possible combinations
1170 of the fixed factors. That is to say
1172 GLM subject BY sex age_group race
1175 @math{subject = age_group + sex + race + age_group*sex + age_group*race + sex*race + age_group*sex*race}.
1178 The @subcmd{MISSING} subcommand determines the handling of missing
1180 If @subcmd{INCLUDE} is set then, for the purposes of GLM analysis,
1181 only system-missing values are considered
1182 to be missing; user-missing values are not regarded as missing.
1183 If @subcmd{EXCLUDE} is set, which is the default, then user-missing
1184 values are considered to be missing as well as system-missing values.
1185 A case for which any dependent variable or any factor
1186 variable has a missing value is excluded from the analysis.
1188 @node LOGISTIC REGRESSION
1189 @section LOGISTIC REGRESSION
1191 @vindex LOGISTIC REGRESSION
1192 @cindex logistic regression
1193 @cindex bivariate logistic regression
1196 LOGISTIC REGRESSION [VARIABLES =] @var{dependent_var} WITH @var{predictors}
1198 [/CATEGORICAL = @var{categorical_predictors}]
1200 [@{/NOCONST | /ORIGIN | /NOORIGIN @}]
1202 [/PRINT = [SUMMARY] [DEFAULT] [CI(@var{confidence})] [ALL]]
[/CRITERIA = [BCON(@var{min_delta})] [ITERATE(@var{max_iterations})]
1205 [LCON(@var{min_likelihood_delta})] [EPS(@var{min_epsilon})]
1206 [CUT(@var{cut_point})]]
1208 [/MISSING = @{INCLUDE|EXCLUDE@}]
1211 Bivariate Logistic Regression is used when you want to explain a dichotomous dependent
1212 variable in terms of one or more predictor variables.
1214 The minimum command is
1216 LOGISTIC REGRESSION @var{y} WITH @var{x1} @var{x2} @dots{} @var{xn}.
1218 Here, @var{y} is the dependent variable, which must be dichotomous and @var{x1} @dots{} @var{xn}
1219 are the predictor variables whose coefficients the procedure estimates.
1221 By default, a constant term is included in the model.
1222 Hence, the full model is
1225 = b_0 + b_1 {\bf x_1}
1231 Predictor variables which are categorical in nature should be listed on the @subcmd{/CATEGORICAL} subcommand.
1232 Simple variables as well as interactions between variables may be listed here.
1234 If you want a model without the constant term @math{b_0}, use the keyword @subcmd{/ORIGIN}.
1235 @subcmd{/NOCONST} is a synonym for @subcmd{/ORIGIN}.
1237 An iterative Newton-Raphson procedure is used to fit the model.
1238 The @subcmd{/CRITERIA} subcommand is used to specify the stopping criteria of the procedure,
1239 and other parameters.
1240 The value of @var{cut_point} is used in the classification table. It is the
1241 threshold above which predicted values are considered to be 1. Values
1242 of @var{cut_point} must lie in the range [0,1].
During iterations, if any one of the stopping criteria is satisfied, the procedure is
1244 considered complete.
1245 The stopping criteria are:
1247 @item The number of iterations exceeds @var{max_iterations}.
1248 The default value of @var{max_iterations} is 20.
@item The changes in all the coefficient estimates are less than @var{min_delta}.
1250 The default value of @var{min_delta} is 0.001.
1251 @item The magnitude of change in the likelihood estimate is less than @var{min_likelihood_delta}.
The default value of @var{min_likelihood_delta} is zero.
1253 This means that this criterion is disabled.
1254 @item The differential of the estimated probability for all cases is less than @var{min_epsilon}.
1255 In other words, the probabilities are close to zero or one.
1256 The default value of @var{min_epsilon} is 0.00000001.
1260 The @subcmd{PRINT} subcommand controls the display of optional statistics.
1261 Currently there is one such option, @subcmd{CI}, which indicates that the
1262 confidence interval of the odds ratio should be displayed as well as its value.
1263 @subcmd{CI} should be followed by an integer in parentheses, to indicate the
1264 confidence level of the desired confidence interval.
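For example, a model with one categorical predictor and a 95% confidence
interval for the odds ratios might be specified as follows (the variable
names are hypothetical):

@example
LOGISTIC REGRESSION @var{purchased} WITH @var{income} @var{region}
        /CATEGORICAL = @var{region}
        /PRINT = CI(95).
@end example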
1266 The @subcmd{MISSING} subcommand determines the handling of missing
1268 If @subcmd{INCLUDE} is set, then user-missing values are included in the
1269 calculations, but system-missing values are not.
1270 If @subcmd{EXCLUDE} is set, which is the default, user-missing
1271 values are excluded as well as system-missing values.
1283 [ BY @{@var{var_list}@} [BY @{@var{var_list}@} [BY @{@var{var_list}@} @dots{} ]]]
1285 [ /@{@var{var_list}@}
1286 [ BY @{@var{var_list}@} [BY @{@var{var_list}@} [BY @{@var{var_list}@} @dots{} ]]] ]
1288 [/CELLS = [MEAN] [COUNT] [STDDEV] [SEMEAN] [SUM] [MIN] [MAX] [RANGE]
1289 [VARIANCE] [KURT] [SEKURT]
1290 [SKEW] [SESKEW] [FIRST] [LAST]
1291 [HARMONIC] [GEOMETRIC]
1296 [/MISSING = [INCLUDE] [DEPENDENT]]
1299 You can use the @cmd{MEANS} command to calculate the arithmetic mean and similar
1300 statistics, either for the dataset as a whole or for categories of data.
1302 The simplest form of the command is
1306 @noindent which calculates the mean, count and standard deviation for @var{v}.
1307 If you specify a grouping variable, for example
1309 MEANS @var{v} BY @var{g}.
1311 @noindent then the means, counts and standard deviations for @var{v} after having
1312 been grouped by @var{g} are calculated.
1313 Instead of the mean, count and standard deviation, you could specify the statistics
1314 in which you are interested:
1316 MEANS @var{x} @var{y} BY @var{g}
1317 /CELLS = HARMONIC SUM MIN.
1319 This example calculates the harmonic mean, the sum and the minimum values of @var{x} and @var{y}
1322 The @subcmd{CELLS} subcommand specifies which statistics to calculate. The available statistics
1326 @cindex arithmetic mean
1327 The arithmetic mean.
1328 @item @subcmd{COUNT}
1329 The count of the values.
1330 @item @subcmd{STDDEV}
1331 The standard deviation.
1332 @item @subcmd{SEMEAN}
1333 The standard error of the mean.
1335 The sum of the values.
1340 @item @subcmd{RANGE}
1341 The difference between the maximum and minimum values.
1342 @item @subcmd{VARIANCE}
1344 @item @subcmd{FIRST}
1345 The first value in the category.
1347 The last value in the category.
1350 @item @subcmd{SESKEW}
1351 The standard error of the skewness.
1354 @item @subcmd{SEKURT}
1355 The standard error of the kurtosis.
1356 @item @subcmd{HARMONIC}
1357 @cindex harmonic mean
1359 @item @subcmd{GEOMETRIC}
1360 @cindex geometric mean
1364 In addition, three special keywords are recognized:
1366 @item @subcmd{DEFAULT}
1367 This is the same as @subcmd{MEAN} @subcmd{COUNT} @subcmd{STDDEV}.
1369 All of the above statistics are calculated.
1371 No statistics are calculated (only a summary is shown).
1375 More than one @dfn{table} can be specified in a single command.
1376 Each table is separated by a @samp{/}. For
1380 @var{c} @var{d} @var{e} BY @var{x}
1381 /@var{a} @var{b} BY @var{x} @var{y}
1382 /@var{f} BY @var{y} BY @var{z}.
1384 has three tables (the @samp{TABLE =} is optional).
1385 The first table has three dependent variables @var{c}, @var{d} and @var{e}
1386 and a single categorical variable @var{x}.
1387 The second table has two dependent variables @var{a} and @var{b},
1388 and two categorical variables @var{x} and @var{y}.
The third table has a single dependent variable @var{f}
1390 and a categorical variable formed by the combination of @var{y} and @var{z}.
1393 By default values are omitted from the analysis only if missing values
1394 (either system missing or user missing)
1395 for any of the variables directly involved in their calculation are
1397 This behaviour can be modified with the @subcmd{/MISSING} subcommand.
1398 Three options are possible: @subcmd{TABLE}, @subcmd{INCLUDE} and @subcmd{DEPENDENT}.
@subcmd{/MISSING = INCLUDE} says that user-missing values, either in the
dependent variables or in the categorical variables, should be taken at
their face value and not excluded.
@subcmd{/MISSING = DEPENDENT} says that user-missing values in the
dependent variables should be taken at their face value; however, cases
which have user-missing values for the categorical variables are omitted
from the calculation.
1409 @subsection Example Means
1411 The dataset in @file{repairs.sav} contains the mean time between failures (@exvar{mtbf})
1412 for a sample of artifacts produced by different factories and trialed under
1413 different operating conditions.
1414 Since there are four combinations of categorical variables, by simply looking
at the list of data, it would be hard to see how the scores vary for each category.
1416 @ref{means:ex} shows one way of tabulating the @exvar{mtbf} in a way which is
1417 easier to understand.
1419 @float Example, means:ex
1420 @psppsyntax {means.sps}
1421 @caption {Running @cmd{MEANS} on the @exvar{mtbf} score with categories @exvar{factory} and @exvar{environment}}
1424 The results are shown in @ref{means:res}. The figures shown indicate the mean,
1425 standard deviation and number of samples in each category.
1426 These figures however do not indicate whether the results are statistically
1427 significant. For that, you would need to use the procedures @cmd{ONEWAY}, @cmd{GLM} or
1428 @cmd{T-TEST} depending on the hypothesis being tested.
1430 @float Result, means:res
1432 @caption {The @exvar{mtbf} categorised by @exvar{factory} and @exvar{environment}}
1435 Note that there is no limit to the number of variables for which you can calculate
1436 statistics, nor to the number of categorical variables per layer, nor the number
However, running @cmd{MEANS} on a large number of variables, or with categorical variables
1439 containing a large number of distinct values may result in an extremely large output, which
1440 will not be easy to interpret.
1441 So you should consider carefully which variables to select for participation in the analysis.
1447 @cindex nonparametric tests
1452 nonparametric test subcommands
1457 [ /STATISTICS=@{DESCRIPTIVES@} ]
1459 [ /MISSING=@{ANALYSIS, LISTWISE@} @{INCLUDE, EXCLUDE@} ]
1461 [ /METHOD=EXACT [ TIMER [(@var{n})] ] ]
1464 @cmd{NPAR TESTS} performs nonparametric tests.
Nonparametric tests make very few assumptions about the distribution of the
1467 One or more tests may be specified by using the corresponding subcommand.
1468 If the @subcmd{/STATISTICS} subcommand is also specified, then summary statistics are
produced for each variable that is the subject of any test.
Certain tests may take a long time to execute if exact results are required.
1472 Therefore, by default asymptotic approximations are used unless the
1473 subcommand @subcmd{/METHOD=EXACT} is specified.
1474 Exact tests give more accurate results, but may take an unacceptably long
1475 time to perform. If the @subcmd{TIMER} keyword is used, it sets a maximum time,
1476 after which the test is abandoned, and a warning message printed.
1477 The time, in minutes, should be specified in parentheses after the @subcmd{TIMER} keyword.
1478 If the @subcmd{TIMER} keyword is given without this figure, then a default value of 5 minutes
1483 * BINOMIAL:: Binomial Test
1484 * CHISQUARE:: Chi-square Test
1485 * COCHRAN:: Cochran Q Test
1486 * FRIEDMAN:: Friedman Test
1487 * KENDALL:: Kendall's W Test
1488 * KOLMOGOROV-SMIRNOV:: Kolmogorov Smirnov Test
1489 * KRUSKAL-WALLIS:: Kruskal-Wallis Test
1490 * MANN-WHITNEY:: Mann Whitney U Test
1491 * MCNEMAR:: McNemar Test
1492 * MEDIAN:: Median Test
1494 * SIGN:: The Sign Test
1495 * WILCOXON:: Wilcoxon Signed Ranks Test
1500 @subsection Binomial test
1502 @cindex binomial test
1505 [ /BINOMIAL[(@var{p})]=@var{var_list}[(@var{value1}[, @var{value2})] ] ]
1508 The @subcmd{/BINOMIAL} subcommand compares the observed distribution of a dichotomous
1509 variable with that of a binomial distribution.
The parameter @var{p} specifies the test proportion of the binomial
1512 The default value of 0.5 is assumed if @var{p} is omitted.
1514 If a single value appears after the variable list, then that value is
1515 used as the threshold to partition the observed values. Values less
1516 than or equal to the threshold value form the first category. Values
1517 greater than the threshold form the second category.
1519 If two values appear after the variable list, then they are used
1520 as the values which a variable must take to be in the respective
1522 Cases for which a variable takes a value equal to neither of the specified
values take no part in the test for that variable.
1525 If no values appear, then the variable must assume dichotomous
1527 If more than two distinct, non-missing values for a variable
1528 under test are encountered then an error occurs.
If the test proportion is equal to 0.5, then a two-tailed test is
reported.  For any other test proportion, a one-tailed test is
For one-tailed tests, if the test proportion is less than
1534 or equal to the observed proportion, then the significance of
1535 observing the observed proportion or more is reported.
1536 If the test proportion is more than the observed proportion, then the
1537 significance of observing the observed proportion or less is reported.
1538 That is to say, the test is always performed in the observed
1541 @pspp{} uses a very precise approximation to the gamma function to
1542 compute the binomial significance. Thus, exact results are reported
1543 even for very large sample sizes.
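For example, the following tests whether the proportion of cases taking
the value 1, as opposed to 2, in a hypothetical variable @var{outcome}
is 0.6:

@example
NPAR TESTS /BINOMIAL(0.6) = @var{outcome}(1, 2).
@end example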
1547 @subsection Chi-square Test
1549 @cindex chi-square test
1553 [ /CHISQUARE=@var{var_list}[(@var{lo},@var{hi})] [/EXPECTED=@{EQUAL|@var{f1}, @var{f2} @dots{} @var{fn}@}] ]
1557 The @subcmd{/CHISQUARE} subcommand produces a chi-square statistic for the differences
1558 between the expected and observed frequencies of the categories of a variable.
1559 Optionally, a range of values may appear after the variable list.
If a range is given, then non-integer values are truncated, and values
1561 outside the specified range are excluded from the analysis.
1563 The @subcmd{/EXPECTED} subcommand specifies the expected values of each
There must be exactly one non-zero expected value for each observed
1566 category, or the @subcmd{EQUAL} keyword must be specified.
1567 You may use the notation @subcmd{@var{n}*@var{f}} to specify @var{n}
1568 consecutive expected categories all taking a frequency of @var{f}.
1569 The frequencies given are proportions, not absolute frequencies. The
1570 sum of the frequencies need not be 1.
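For example, the following tests a hypothetical four-category variable
@var{grade} against expected proportions of 1:1:1:2, using the
@subcmd{@var{n}*@var{f}} notation to abbreviate the first three
categories:

@example
NPAR TESTS /CHISQUARE = @var{grade}
           /EXPECTED = 3*1, 2.
@end example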
1571 If no @subcmd{/EXPECTED} subcommand is given, then equal frequencies
1574 @subsubsection Chi-square Example
1576 A researcher wishes to investigate whether there are an equal number of
persons of each sex in a population.  The sample chosen for investigation
is that from the @file{physiology.sav} dataset.  The null hypothesis for
1579 the test is that the population comprises an equal number of males and females.
1580 The analysis is performed as shown in @ref{chisquare:ex}.
1582 @float Example, chisquare:ex
1583 @psppsyntax {chisquare.sps}
1584 @caption {Performing a chi-square test to check for equal distribution of sexes}
1587 There is only one test variable, @i{viz:} @exvar{sex}. The other variables in the dataset
1590 @float Screenshot, chisquare:scr
1591 @psppimage {chisquare}
1592 @caption {Performing a chi-square test using the graphic user interface}
1595 In @ref{chisquare:res} the summary box shows that in the sample, there are more males
than females.  However, the significance of the chi-square result is greater than 0.05
1597 --- the most commonly accepted p-value --- and therefore
1598 there is not enough evidence to reject the null hypothesis and one must conclude
1599 that the evidence does not indicate that there is an imbalance of the sexes
1602 @float Result, chisquare:res
1603 @psppoutput {chisquare}
1604 @caption {The results of running a chi-square test on @exvar{sex}}
1609 @subsection Cochran Q Test
1611 @cindex Cochran Q test
1612 @cindex Q, Cochran Q
1615 [ /COCHRAN = @var{var_list} ]
1618 The Cochran Q test is used to test for differences between three or more groups.
1619 The data for @var{var_list} in all cases must assume exactly two
1620 distinct values (other than missing values).
The value of Q is displayed along with its asymptotic significance
1623 based on a chi-square distribution.
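For example, the following compares three hypothetical dichotomous
variables, each recording success or failure under a different
condition:

@example
NPAR TESTS /COCHRAN = @var{cond1} @var{cond2} @var{cond3}.
@end example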
1626 @subsection Friedman Test
1628 @cindex Friedman test
1631 [ /FRIEDMAN = @var{var_list} ]
1634 The Friedman test is used to test for differences between repeated measures when
1635 there is no indication that the distributions are normally distributed.
1637 A list of variables which contain the measured data must be given. The procedure
1638 prints the sum of ranks for each variable, the test statistic and its significance.
1641 @subsection Kendall's W Test
1643 @cindex Kendall's W test
1644 @cindex coefficient of concordance
1647 [ /KENDALL = @var{var_list} ]
1650 The Kendall test investigates whether an arbitrary number of related samples come from the
It is identical to the Friedman test except that the additional statistic W, Kendall's Coefficient of Concordance, is printed.
1653 It has the range [0,1] --- a value of zero indicates no agreement between the samples whereas a value of
1654 unity indicates complete agreement.
1657 @node KOLMOGOROV-SMIRNOV
1658 @subsection Kolmogorov-Smirnov Test
1659 @vindex KOLMOGOROV-SMIRNOV
1661 @cindex Kolmogorov-Smirnov test
1664 [ /KOLMOGOROV-SMIRNOV (@{NORMAL [@var{mu}, @var{sigma}], UNIFORM [@var{min}, @var{max}], POISSON [@var{lambda}], EXPONENTIAL [@var{scale}] @}) = @var{var_list} ]
1667 The one sample Kolmogorov-Smirnov subcommand is used to test whether or not a dataset is
1668 drawn from a particular distribution. Four distributions are supported, @i{viz:}
1669 Normal, Uniform, Poisson and Exponential.
1671 Ideally you should provide the parameters of the distribution against
1672 which you wish to test the data. For example, with the normal
distribution, the mean (@var{mu}) and standard deviation (@var{sigma})
1674 should be given; with the uniform distribution, the minimum
(@var{min}) and maximum (@var{max}) values should be provided.
1676 However, if the parameters are omitted they are imputed from the
data.  Imputing the parameters reduces the power of the test, so it
should be avoided if possible.
In the following example, two variables @var{score} and @var{age} are
tested to see if they follow a normal distribution with a mean of 3.5
and a standard deviation of 2.0.

@example
NPAR TESTS
        /KOLMOGOROV-SMIRNOV (normal 3.5 2.0) = @var{score} @var{age}.
@end example
If the variables need to be tested against different distributions, then a separate
subcommand must be used.  For example the following syntax tests @var{score} against
a normal distribution with a mean of 3.5 and a standard deviation of 2.0 whilst @var{age}
is tested against a normal distribution with a mean of 40 and a standard deviation of 1.5.

@example
NPAR TESTS
        /KOLMOGOROV-SMIRNOV (normal 3.5 2.0) = @var{score}
        /KOLMOGOROV-SMIRNOV (normal 40 1.5) = @var{age}.
@end example

The abbreviated subcommand @subcmd{K-S} may be used in place of @subcmd{KOLMOGOROV-SMIRNOV}.
@node KRUSKAL-WALLIS
@subsection Kruskal-Wallis Test
@vindex KRUSKAL-WALLIS
@cindex Kruskal-Wallis test

@example
[ /KRUSKAL-WALLIS = @var{var_list} BY @var{var} (@var{lower}, @var{upper}) ]
@end example
The Kruskal-Wallis test is used to compare data from an
arbitrary number of populations.  It does not assume normality.
The data to be compared are specified by @var{var_list}.
The categorical variable determining the groups to which the
data belong is given by @var{var}.  The limits @var{lower} and
@var{upper} specify the valid range of @var{var}.
If @var{upper} is smaller than @var{lower}, then @pspp{} assumes their values
to be reversed.  Any cases for which @var{var} falls outside
[@var{lower}, @var{upper}] are ignored.
The mean rank of each group as well as the chi-squared value and
significance of the test are printed.
The abbreviated subcommand @subcmd{K-W} may be used in place of
@subcmd{KRUSKAL-WALLIS}.
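For example, the following syntax (the variable names are illustrative) compares
@var{score} across groups 1 through 3 of @var{group}:

@example
NPAR TESTS
        /K-W = @var{score} BY @var{group} (1, 3).
@end example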
@subsection Mann-Whitney U Test
@vindex MANN-WHITNEY
@cindex Mann-Whitney U test
@cindex U, Mann-Whitney U

@example
[ /MANN-WHITNEY = @var{var_list} BY @var{var} (@var{group1}, @var{group2}) ]
@end example
The Mann-Whitney subcommand is used to test whether two groups of data
come from different populations.  The variables to be tested should be
specified in @var{var_list} and the grouping variable, that determines
to which group the test variables belong, in @var{var}.
@var{Var} may be either a numeric or a string variable.
@var{Group1} and @var{group2} specify the
two values of @var{var} which determine the groups of the test data.
Cases for which the @var{var} value is neither @var{group1} nor
@var{group2} are ignored.
The value of the Mann-Whitney U statistic, the Wilcoxon W, and the
significance are printed.
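For example, the following syntax (the variable names are illustrative) compares
@var{height} between the two groups of @var{sex} coded 0 and 1:

@example
NPAR TESTS
        /MANN-WHITNEY = @var{height} BY @var{sex} (0, 1).
@end example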
You may abbreviate the subcommand @subcmd{MANN-WHITNEY} to
@subcmd{M-W}.
@subsection McNemar Test
@cindex McNemar test

@example
[ /MCNEMAR @var{var_list} [ WITH @var{var_list} [ (PAIRED) ]]]
@end example

Use McNemar's test to analyse the significance of the difference between
pairs of correlated proportions.
If the @code{WITH} keyword is omitted, then tests for all
combinations of the listed variables are performed.
If the @code{WITH} keyword is given, and the @code{(PAIRED)} keyword
is also given, then the number of variables preceding @code{WITH}
must be the same as the number following it.
In this case, tests for each respective pair of variables are
performed.
If the @code{WITH} keyword is given, but the
@code{(PAIRED)} keyword is omitted, then tests for each combination
of variable preceding @code{WITH} against variable following
@code{WITH} are performed.
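For example, the following syntax (with illustrative variable names) tests
@var{before1} against @var{after1} and @var{before2} against @var{after2}:

@example
NPAR TESTS
        /MCNEMAR @var{before1} @var{before2} WITH @var{after1} @var{after2} (PAIRED).
@end example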
The data in each variable must be dichotomous.  If there are more
than two distinct values in a variable under test, an error will occur and the test will
not be run.
@subsection Median Test

@example
[ /MEDIAN [(@var{value})] = @var{var_list} BY @var{variable} (@var{value1}, @var{value2}) ]
@end example
The median test is used to test whether independent samples come from
populations with a common median.
The median of the populations against which the samples are to be tested
may be given in parentheses immediately after the
@subcmd{/MEDIAN} subcommand.  If it is not given, the median is imputed from the
union of all the samples.

The variables of the samples to be tested should immediately follow the @samp{=} sign.  The
keyword @code{BY} must come next, and then the grouping variable.  Two values
in parentheses should follow.  If the first value is greater than the second,
then a two-sample test is performed using these two values to determine the groups.
If however, the first value is less than the second, then a @i{k}-sample test is
conducted and the group values used are all values encountered which lie in the
range [@var{value1},@var{value2}].
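For example, the following syntax (the variable names are illustrative) tests
whether the groups 1 through 3 of @var{group} share a common median of 50 for
@var{score}; because the first value is less than the second, a @i{k}-sample
test is performed:

@example
NPAR TESTS
        /MEDIAN (50) = @var{score} BY @var{group} (1, 3).
@end example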
@subsection Runs Test

@example
[ /RUNS (@{MEAN, MEDIAN, MODE, @var{value}@}) = @var{var_list} ]
@end example

The @subcmd{/RUNS} subcommand tests whether a data sequence is randomly ordered.

It works by examining the number of times a variable's value crosses a given threshold.
The desired threshold must be specified within parentheses.
It may either be specified as a number or as one of @subcmd{MEAN}, @subcmd{MEDIAN} or @subcmd{MODE}.
Following the threshold specification comes the list of variables whose values are to be
tested.

The subcommand shows the number of runs and the asymptotic significance based on the
normal distribution.
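For example, the following syntax (with an illustrative variable name) tests
whether the sequence of values of @var{score} is random with respect to its
median:

@example
NPAR TESTS
        /RUNS (MEDIAN) = @var{score}.
@end example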
@subsection Sign Test

@example
[ /SIGN @var{var_list} [ WITH @var{var_list} [ (PAIRED) ]]]
@end example

The @subcmd{/SIGN} subcommand tests for differences between medians of the
variables listed.
The test does not make any assumptions about the
distribution of the data.
If the @code{WITH} keyword is omitted, then tests for all
combinations of the listed variables are performed.
If the @code{WITH} keyword is given, and the @code{(PAIRED)} keyword
is also given, then the number of variables preceding @code{WITH}
must be the same as the number following it.
In this case, tests for each respective pair of variables are
performed.
If the @code{WITH} keyword is given, but the
@code{(PAIRED)} keyword is omitted, then tests for each combination
of variable preceding @code{WITH} against variable following
@code{WITH} are performed.
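For example, the following syntax (the variable names are illustrative) performs
a sign test on the paired measurements @var{before} and @var{after}:

@example
NPAR TESTS
        /SIGN @var{before} WITH @var{after} (PAIRED).
@end example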
@subsection Wilcoxon Matched Pairs Signed Ranks Test
@cindex Wilcoxon matched pairs signed ranks test

@example
[ /WILCOXON @var{var_list} [ WITH @var{var_list} [ (PAIRED) ]]]
@end example

The @subcmd{/WILCOXON} subcommand tests for differences between medians of the
variables listed.
The test does not make any assumptions about the variances of the samples.
It does however assume that the distribution is symmetrical.
If the @subcmd{WITH} keyword is omitted, then tests for all
combinations of the listed variables are performed.
If the @subcmd{WITH} keyword is given, and the @subcmd{(PAIRED)} keyword
is also given, then the number of variables preceding @subcmd{WITH}
must be the same as the number following it.
In this case, tests for each respective pair of variables are
performed.
If the @subcmd{WITH} keyword is given, but the
@subcmd{(PAIRED)} keyword is omitted, then tests for each combination
of variable preceding @subcmd{WITH} against variable following
@subcmd{WITH} are performed.
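For example, the following syntax (the variable names are illustrative) performs
a Wilcoxon signed ranks test on the paired measurements @var{before} and
@var{after}:

@example
NPAR TESTS
        /WILCOXON @var{before} WITH @var{after} (PAIRED).
@end example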
@node T-TEST
@section T-TEST
@vindex T-TEST

@example
T-TEST
        /MISSING=@{ANALYSIS,LISTWISE@} @{EXCLUDE,INCLUDE@}
        /CRITERIA=CI(@var{confidence})

(One Sample mode.)
        TESTVAL=@var{test_value}
        /VARIABLES=@var{var_list}

(Independent Samples mode.)
        GROUPS=@var{var}(@var{value1} [, @var{value2}])
        /VARIABLES=@var{var_list}

(Paired Samples mode.)
        PAIRS=@var{var_list} [WITH @var{var_list} [(PAIRED)] ]
@end example
The @cmd{T-TEST} procedure outputs tables used in testing hypotheses about
means.
It operates in one of three modes:
@itemize
@item One Sample mode.
@item Independent Groups mode.
@item Paired Samples mode.
@end itemize

Each of these modes is described in more detail below.
There are two optional subcommands which are common to all modes.
The @subcmd{/CRITERIA} subcommand tells @pspp{} the confidence interval used
in the tests.  The default value is 0.95.

The @subcmd{/MISSING} subcommand determines the handling of missing
values.
If @subcmd{INCLUDE} is set, then user-missing values are included in the
calculations, but system-missing values are not.
If @subcmd{EXCLUDE} is set, which is the default, user-missing
values are excluded as well as system-missing values.
If @subcmd{LISTWISE} is set, then the entire case is excluded from analysis
whenever any variable specified in the @subcmd{/VARIABLES}, @subcmd{/PAIRS} or
@subcmd{/GROUPS} subcommands contains a missing value.
If @subcmd{ANALYSIS} is set, then missing values are excluded only in the analysis for
which they would be needed.  This is the default.

@menu
* One Sample Mode::             Testing against a hypothesized mean
* Independent Samples Mode::    Testing two independent groups for equal mean
* Paired Samples Mode::         Testing two interdependent groups for equal mean
@end menu
@node One Sample Mode
@subsection One Sample Mode

The @subcmd{TESTVAL} subcommand invokes the One Sample mode.
This mode is used to test a population mean against a hypothesized
value.
The value given to the @subcmd{TESTVAL} subcommand is the value against
which you wish to test.
In this mode, you must also use the @subcmd{/VARIABLES} subcommand to
tell @pspp{} which variables you wish to test.
@subsubsection Example - One Sample T-test

A researcher wishes to know whether the weight of persons in a population
is different from the national average.
The samples are drawn from the population under investigation and recorded
in the file @file{physiology.sav}.
From the Department of Health, she
knows that the national average weight of healthy adults is 76.8kg.
Accordingly the @subcmd{TESTVAL} is set to 76.8.
The null hypothesis therefore is that the mean weight of the
population from which the sample was drawn is 76.8kg.

As previously noted (@pxref{Identifying incorrect data}), one
sample in the dataset contains a weight value
which is clearly incorrect.  So this is excluded from the analysis
using the @cmd{SELECT} command.
@float Example, one-sample-t:ex
@psppsyntax {one-sample-t.sps}
@caption {Running a one sample T-Test after excluding all non-positive values}
@end float

@float Screenshot, one-sample-t:scr
@psppimage {one-sample-t}
@caption {Using the One Sample T-Test dialog box to test @exvar{weight} for a mean of 76.8kg}
@end float

@ref{one-sample-t:res} shows that the mean of our sample differs from the test value
by -1.40kg.  However the significance is very high (0.610).  So one cannot
reject the null hypothesis, and must conclude there is not enough evidence
to suggest that the mean weight of the persons in our population is different
from 76.8kg.

@float Results, one-sample-t:res
@psppoutput {one-sample-t}
@caption {The results of a one sample T-test of @exvar{weight} using a test value of 76.8kg}
@end float
@node Independent Samples Mode
@subsection Independent Samples Mode

The @subcmd{GROUPS} subcommand invokes Independent Samples mode or
``Groups'' mode.
This mode is used to test whether two groups of values have the
same population mean.
In this mode, you must also use the @subcmd{/VARIABLES} subcommand to
tell @pspp{} the dependent variables you wish to test.

The variable given in the @subcmd{GROUPS} subcommand is the independent
variable which determines to which group the samples belong.
The values in parentheses are the specific values of the independent
variable for each group.
If the parentheses are omitted and no values are given, the default values
of 1.0 and 2.0 are assumed.
If the independent variable is numeric,
it is acceptable to specify only one value inside the parentheses.
If you do this, cases where the independent variable is
greater than or equal to this value belong to the first group, and cases
less than this value belong to the second group.
When using this form of the @subcmd{GROUPS} subcommand, missing values in
the independent variable are excluded on a listwise basis, regardless
of whether @subcmd{/MISSING=LISTWISE} was specified.
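A sketch of this cut-point form, using illustrative variable names:

@example
T-TEST /GROUPS=@var{age}(18)
       /VARIABLES=@var{score}.
@end example

Here, cases with @var{age} greater than or equal to 18 form the first group and
all other cases form the second group.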
@subsubsection Example - Independent Samples T-test

A researcher wishes to know whether within a population, adult males
are taller than adult females.
The samples are drawn from the population under investigation and recorded
in the file @file{physiology.sav}.

As previously noted (@pxref{Identifying incorrect data}), one
sample in the dataset contains a height value
which is clearly incorrect.  So this is excluded from the analysis
using the @cmd{SELECT} command.

@float Example, indepdendent-samples-t:ex
@psppsyntax {independent-samples-t.sps}
@caption {Running an independent samples T-Test after excluding all observations less than 200kg}
@end float
The null hypothesis is that both males and females are on average
of equal height.

@float Screenshot, independent-samples-t:scr
@psppimage {independent-samples-t}
@caption {Using the Independent Sample T-test dialog, to test for differences of @exvar{height} between values of @exvar{sex}}
@end float

In this case, the grouping variable is @exvar{sex}, so this is entered
as the variable for the @subcmd{GROUPS} subcommand.  The group values are 0 (male) and
1 (female).

If you are running the procedure using syntax, then you need to enter
the values corresponding to each group within parentheses.
If you are using the graphical user interface, then you have to open
the ``Define Groups'' dialog box and enter the values corresponding
to each group as shown in @ref{define-groups-t:scr}.  If, as in this case, the dataset has defined value
labels for the group variable, then you can enter them by label
instead of their values.

@float Screenshot, define-groups-t:scr
@psppimage {define-groups-t}
@caption {Setting the values of the grouping variable for an Independent Samples T-test}
@end float
From @ref{independent-samples-t:res}, one can clearly see that the @emph{sample} mean height
is greater for males than for females.  However, in order to see if this
is a significant result, one must consult the T-Test table.

The T-Test table contains two rows; one for use if the variance of the samples
in each group may be safely assumed to be equal, and the second row
if the variances in each group may not be safely assumed to be equal.

In this case however, both rows show a 2-tailed significance less than 0.001 and
one must therefore reject the null hypothesis and conclude that within
the population the mean heights of males and of females are unequal.

@float Result, independent-samples-t:res
@psppoutput {independent-samples-t}
@caption {The results of an independent samples T-test of @exvar{height} by @exvar{sex}}
@end float
@node Paired Samples Mode
@subsection Paired Samples Mode

The @subcmd{PAIRS} subcommand introduces Paired Samples mode.
Use this mode when repeated measures have been taken from the same
samples.
If the @subcmd{WITH} keyword is omitted, then tables for all
combinations of variables given in the @subcmd{PAIRS} subcommand are
generated.
If the @subcmd{WITH} keyword is given, and the @subcmd{(PAIRED)} keyword
is also given, then the number of variables preceding @subcmd{WITH}
must be the same as the number following it.
In this case, tables for each respective pair of variables are
generated.
In the event that the @subcmd{WITH} keyword is given, but the
@subcmd{(PAIRED)} keyword is omitted, then tables for each combination
of variable preceding @subcmd{WITH} against variable following
@subcmd{WITH} are generated.
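For example, the following syntax (the variable names are illustrative) pairs
@var{before1} with @var{after1} and @var{before2} with @var{after2}:

@example
T-TEST /PAIRS = @var{before1} @var{before2} WITH @var{after1} @var{after2} (PAIRED).
@end example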
@node ONEWAY
@section ONEWAY
@vindex ONEWAY
@cindex analysis of variance

@example
ONEWAY
        [/VARIABLES = ] @var{var_list} BY @var{var}
        /MISSING=@{ANALYSIS,LISTWISE@} @{EXCLUDE,INCLUDE@}
        /CONTRAST= @var{value1} [, @var{value2}] ... [,@var{valueN}]
        /STATISTICS=@{DESCRIPTIVES,HOMOGENEITY@}
        /POSTHOC=@{BONFERRONI, GH, LSD, SCHEFFE, SIDAK, TUKEY, ALPHA ([@var{value}])@}
@end example
The @cmd{ONEWAY} procedure performs a one-way analysis of variance of
variables factored by a single independent variable.
It is used to compare the means of a population
divided into more than two groups.

The dependent variables to be analysed should be given in the @subcmd{VARIABLES}
subcommand.
The list of variables must be followed by the @subcmd{BY} keyword and
the name of the independent (or factor) variable.
You can use the @subcmd{STATISTICS} subcommand to tell @pspp{} to display
ancillary information.  The options accepted are:
@table @asis
@item @subcmd{DESCRIPTIVES}
Displays descriptive statistics about the groups factored by the independent
variable.
@item @subcmd{HOMOGENEITY}
Displays the Levene test of Homogeneity of Variance for the
variables and their groups.
@end table
The @subcmd{CONTRAST} subcommand is used when you anticipate certain
differences between the groups.
The subcommand must be followed by a list of numerals which are the
coefficients of the groups to be tested.
The number of coefficients must correspond to the number of distinct
groups (or values of the independent variable).
If the total sum of the coefficients is not zero, then @pspp{} will
display a warning, but will proceed with the analysis.
The @subcmd{CONTRAST} subcommand may be given up to 10 times in order
to specify different contrast tests.
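For example, with three groups, the following syntax (the variable names are
illustrative) tests whether the first group differs from the mean of the other
two; note that the coefficients sum to zero:

@example
ONEWAY /VARIABLES = @var{score} BY @var{group}
       /CONTRAST = -2, 1, 1.
@end example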
The @subcmd{MISSING} subcommand defines how missing values are handled.
If @subcmd{LISTWISE} is specified then cases which have missing values for
the independent variable or any dependent variable are ignored.
If @subcmd{ANALYSIS} is specified, then cases are ignored if the independent
variable is missing or if the dependent variable currently being
analysed is missing.  The default is @subcmd{ANALYSIS}.
A setting of @subcmd{EXCLUDE} means that variables whose values are
user-missing are to be excluded from the analysis.  A setting of
@subcmd{INCLUDE} means they are to be included.  The default is @subcmd{EXCLUDE}.
Using the @code{POSTHOC} subcommand you can perform multiple
pairwise comparisons on the data.  The following comparison methods
are supported:
@table @asis
@item @subcmd{LSD}
Least Significant Difference.
@item @subcmd{TUKEY}
Tukey Honestly Significant Difference.
@item @subcmd{BONFERRONI}
The Bonferroni test.
@item @subcmd{SCHEFFE}
The Scheff@'e test.
@item @subcmd{SIDAK}
The Sidak test.
@item @subcmd{GH}
The Games-Howell test.
@end table
Use the optional syntax @code{ALPHA(@var{value})} to indicate that
@cmd{ONEWAY} should perform the posthoc tests at a significance level of
@var{value}.  If @code{ALPHA(@var{value})} is not specified, then the
significance level used is 0.05.
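For example, the following syntax (with illustrative variable names) requests
Tukey and Games-Howell posthoc comparisons at the 0.01 level:

@example
ONEWAY /VARIABLES = @var{score} BY @var{group}
       /POSTHOC = TUKEY GH ALPHA(0.01).
@end example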
@node QUICK CLUSTER
@section QUICK CLUSTER
@vindex QUICK CLUSTER
@cindex K-means clustering

@example
QUICK CLUSTER @var{var_list}
      [/CRITERIA=CLUSTERS(@var{k}) [MXITER(@var{max_iter})] CONVERGE(@var{epsilon}) [NOINITIAL]]
      [/MISSING=@{EXCLUDE,INCLUDE@} @{LISTWISE, PAIRWISE@}]
      [/PRINT=@{INITIAL@} @{CLUSTER@}]
      [/SAVE[=[CLUSTER[(@var{membership_var})]] [DISTANCE[(@var{distance_var})]]]]
@end example
The @cmd{QUICK CLUSTER} command performs k-means clustering on the
dataset.  This is useful when you wish to allocate cases into clusters
of similar values and you already know the number of clusters.

The minimum specification is @samp{QUICK CLUSTER} followed by the names
of the variables which contain the cluster data.  Normally you will also
want to specify @subcmd{/CRITERIA=CLUSTERS(@var{k})} where @var{k} is the
number of clusters.  If this is not specified, then @var{k} defaults to 2.
If you use @subcmd{/CRITERIA=NOINITIAL} then a naive algorithm to select
the initial clusters is used.  This will provide for faster execution but
less well separated initial clusters and hence possibly an inferior final
result.

@cmd{QUICK CLUSTER} uses an iterative algorithm to select the cluster centers.
The subcommand @subcmd{/CRITERIA=MXITER(@var{max_iter})} sets the maximum number of iterations.
During classification, @pspp{} will continue iterating until @var{max_iter}
iterations have been done or the convergence criterion (see below) is fulfilled.
The default value of @var{max_iter} is 2.
If however, you specify @subcmd{/CRITERIA=NOUPDATE} then after selecting the initial centers,
no further update to the cluster centers is done.  In this case, @var{max_iter}, if specified,
is ignored.
The subcommand @subcmd{/CRITERIA=CONVERGE(@var{epsilon})} is used
to set the convergence criterion.  The value of the convergence criterion is @var{epsilon}
times the minimum distance between the @emph{initial} cluster centers.  Iteration stops when
the mean cluster distance between one iteration and the next
is less than the convergence criterion.  The default value of @var{epsilon} is zero.
The @subcmd{MISSING} subcommand determines the handling of missing values.
If @subcmd{INCLUDE} is set, then user-missing values are considered at their face
value and not as missing values.
If @subcmd{EXCLUDE} is set, which is the default, user-missing
values are excluded as well as system-missing values.

If @subcmd{LISTWISE} is set, then the entire case is excluded from the analysis
whenever any of the clustering variables contains a missing value.
If @subcmd{PAIRWISE} is set, then a case is considered missing only if all the
clustering variables contain missing values.  Otherwise it is clustered
on the basis of the non-missing values.
The default is @subcmd{LISTWISE}.
The @subcmd{PRINT} subcommand requests additional output to be printed.
If @subcmd{INITIAL} is set, then the initial cluster memberships will
be printed.
If @subcmd{CLUSTER} is set, the cluster memberships of the individual
cases are displayed (potentially generating lengthy output).
You can specify the subcommand @subcmd{SAVE} to ask that each case's cluster membership
and the Euclidean distance between the case and its cluster center be saved to
a new variable in the active dataset.  To save the cluster membership use the
@subcmd{CLUSTER} keyword and to save the distance use the @subcmd{DISTANCE} keyword.
Each keyword may optionally be followed by a variable name in parentheses to specify
the new variable which is to contain the saved parameter.  If no variable name is specified,
then @pspp{} will create one.
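For example, the following syntax (the variable names are illustrative) allocates
the cases into three clusters and saves each case's membership and distance into
the new variables @var{grp} and @var{dist}:

@example
QUICK CLUSTER @var{x} @var{y}
        /CRITERIA=CLUSTERS(3) MXITER(20)
        /SAVE=CLUSTER(@var{grp}) DISTANCE(@var{dist}).
@end example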
@node RANK
@section RANK
@vindex RANK

@example
RANK
        [VARIABLES=] @var{var_list} [@{A,D@}] [BY @var{var_list}]
        /TIES=@{MEAN,LOW,HIGH,CONDENSE@}
        /FRACTION=@{BLOM,TUKEY,VW,RANKIT@}
        /PRINT[=@{YES,NO@}]
        /MISSING=@{EXCLUDE,INCLUDE@}

        /RANK [INTO @var{var_list}]
        /NTILES(@var{k}) [INTO @var{var_list}]
        /NORMAL [INTO @var{var_list}]
        /PERCENT [INTO @var{var_list}]
        /RFRACTION [INTO @var{var_list}]
        /PROPORTION [INTO @var{var_list}]
        /N [INTO @var{var_list}]
        /SAVAGE [INTO @var{var_list}]
@end example
The @cmd{RANK} command ranks variables and stores the results into new
variables.

The @subcmd{VARIABLES} subcommand, which is mandatory, specifies one or
more variables whose values are to be ranked.
After each variable, @samp{A} or @samp{D} may appear, indicating that
the variable is to be ranked in ascending or descending order.
Ascending is the default.
If a @subcmd{BY} keyword appears, it should be followed by a list of variables
which are to serve as group variables.
In this case, the cases are gathered into groups, and ranks calculated
for each group.
The @subcmd{TIES} subcommand specifies how tied values are to be treated.  The
default is to take the mean value of all the tied cases.

The @subcmd{FRACTION} subcommand specifies how proportional ranks are to be
calculated.  This only has any effect if the @subcmd{NORMAL} or @subcmd{PROPORTION} rank
functions are requested.

The @subcmd{PRINT} subcommand may be used to specify that a summary of the rank
variables created should appear in the output.

The function subcommands are @subcmd{RANK}, @subcmd{NTILES}, @subcmd{NORMAL}, @subcmd{PERCENT}, @subcmd{RFRACTION},
@subcmd{PROPORTION} and @subcmd{SAVAGE}.  Any number of function subcommands may appear.
If none are given, then the default is @subcmd{RANK}.
The @subcmd{NTILES} subcommand must take an integer specifying the number of
partitions into which values should be ranked.
Each subcommand may be followed by the @subcmd{INTO} keyword and a list of
variables which are the variables to be created and receive the rank
scores.  There may be as many variables specified as there are
variables named on the @subcmd{VARIABLES} subcommand.  If fewer are specified,
then the variable names are automatically created.

The @subcmd{MISSING} subcommand determines how user missing values are to be
treated.  A setting of @subcmd{EXCLUDE} means that variables whose values are
user-missing are to be excluded from the rank scores.  A setting of
@subcmd{INCLUDE} means they are to be included.  The default is @subcmd{EXCLUDE}.
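For example, the following syntax (the variable names are illustrative) ranks
@var{score} in descending order within each group of @var{group}, and stores the
quartile membership of each case in the new variable @var{quartile}:

@example
RANK VARIABLES = @var{score} (D) BY @var{group}
        /NTILES(4) INTO @var{quartile}.
@end example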
@include regression.texi

@node RELIABILITY
@section RELIABILITY
@vindex RELIABILITY

@example
RELIABILITY
        /VARIABLES=@var{var_list}
        /SCALE (@var{name}) = @{@var{var_list}, ALL@}
        /MODEL=@{ALPHA, SPLIT[(@var{n})]@}
        /SUMMARY=@{TOTAL,ALL@}
        /MISSING=@{EXCLUDE,INCLUDE@}
@end example
@cindex Cronbach's Alpha
The @cmd{RELIABILITY} command performs reliability analysis on the data.

The @subcmd{VARIABLES} subcommand is required.  It determines the set of variables
upon which analysis is to be performed.

The @subcmd{SCALE} subcommand determines the variables for which
reliability is to be calculated.  If @subcmd{SCALE} is omitted, then analysis is
performed on all variables named in the @subcmd{VARIABLES} subcommand.
Optionally, the @var{name} parameter may be specified to set a string name
for the scale.
The @subcmd{MODEL} subcommand determines the type of analysis.  If @subcmd{ALPHA} is specified,
then Cronbach's Alpha is calculated for the scale.  If the model is @subcmd{SPLIT},
then the variables are divided into two subsets.  An optional parameter
@var{n} may be given to specify how many variables are to be in the first subset.
If @var{n} is omitted, then it defaults to one half of the variables in the
scale, or one half minus one if there is an odd number of variables.
The default model is @subcmd{ALPHA}.
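For example, the following syntax (the variable and scale names are illustrative)
performs a split-half analysis in which the first two variables form one subset
and the remaining three form the other:

@example
RELIABILITY
        /VARIABLES=@var{v1} @var{v2} @var{v3} @var{v4} @var{v5}
        /SCALE (@var{responses}) = ALL
        /MODEL=SPLIT(2).
@end example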
By default, any cases with user missing, or system missing values for
any variables given in the @subcmd{VARIABLES} subcommand are omitted
from the analysis.  The @subcmd{MISSING} subcommand determines whether
user missing values are included or excluded in the analysis.

The @subcmd{SUMMARY} subcommand determines the type of summary analysis to be performed.
Currently there is only one type: @subcmd{SUMMARY=TOTAL}, which displays per-item
analysis tested against the totals.
@subsection Example - Reliability

Before analysing the results of a survey -- particularly for a multiple choice survey --
it is desirable to know whether the respondents have considered their answers
or simply provided random answers.

In the following example the survey results from the file @file{hotel.sav} are used.
All five survey questions are included in the reliability analysis.
However, before running the analysis, the data must be preprocessed.
An examination of the survey questions reveals that two questions, @i{viz:} v3 and v5
are negatively worded, whereas the others are positively worded.
All questions must be based upon the same scale for the analysis to be meaningful.
One could use the @cmd{RECODE} command (@pxref{RECODE}), however a simpler way is
to use @cmd{COMPUTE} (@pxref{COMPUTE}) and this is what is done in @ref{reliability:ex}.
@float Example, reliability:ex
@psppsyntax {reliability.sps}
@caption {Investigating the reliability of survey responses}
@end float

In this case, all variables in the data set are used.  So we can use the special
keyword @samp{ALL} (@pxref{BNF}).

@float Screenshot, reliability:src
@psppimage {reliability}
@caption {Reliability dialog box with all variables selected}
@end float
@ref{reliability:res} shows that Cronbach's Alpha is 0.11, which is a value normally considered too
low to indicate consistency within the data.  This is possibly due to the small number of
survey questions.  The survey should be redesigned before serious use of the results is
considered.

@float Result, reliability:res
@psppoutput {reliability}
@caption {The results of the reliability command on @file{hotel.sav}}
@end float
@node ROC
@section ROC
@vindex ROC
@cindex Receiver Operating Characteristic
@cindex Area under curve

@example
ROC     @var{var_list} BY @var{state_var} (@var{state_value})
        /PLOT = @{ CURVE [(REFERENCE)], NONE @}
        /PRINT = [ SE ] [ COORDINATES ]
        /CRITERIA = [ CUTOFF(@{INCLUDE,EXCLUDE@}) ]
          [ TESTPOS (@{LARGE,SMALL@}) ]
          [ CI (@var{confidence}) ]
          [ DISTRIBUTION (@{FREE, NEGEXPO @}) ]
        /MISSING=@{EXCLUDE,INCLUDE@}
@end example
The @cmd{ROC} command is used to plot the receiver operating characteristic curve
of a dataset, and to estimate the area under the curve.
This is useful for analysing the efficacy of a variable as a predictor of a state of nature.

The mandatory @var{var_list} is the list of predictor variables.
The variable @var{state_var} is the variable whose values represent the actual states,
and @var{state_value} is the value of this variable which represents the positive state.
The optional subcommand @subcmd{PLOT} is used to determine if and how the ROC curve is drawn.
The keyword @subcmd{CURVE} means that the ROC curve should be drawn, and the optional keyword @subcmd{REFERENCE},
which should be enclosed in parentheses, says that the diagonal reference line should be drawn.
If the keyword @subcmd{NONE} is given, then no ROC curve is drawn.
By default, the curve is drawn with no reference line.
The optional subcommand @subcmd{PRINT} determines which additional
tables should be printed.  Two additional tables are available.  The
@subcmd{SE} keyword says that the standard error of the area under the
curve should be printed as well as the area itself.  In addition, a
p-value for the null hypothesis that the area under the curve equals
0.5 is printed.  The @subcmd{COORDINATES} keyword says that a
table of coordinates of the ROC curve should be printed.
The @subcmd{CRITERIA} subcommand has four optional parameters:
@itemize
@item The @subcmd{TESTPOS} parameter may be @subcmd{LARGE} or @subcmd{SMALL}.
@subcmd{LARGE} is the default, and says that larger values in the predictor variables are to be
considered positive.  @subcmd{SMALL} indicates that smaller values should be considered positive.

@item The @subcmd{CI} parameter specifies the confidence interval that should be printed.
It has no effect if the @subcmd{SE} keyword in the @subcmd{PRINT} subcommand has not been given.

@item The @subcmd{DISTRIBUTION} parameter determines the method to be used when estimating the area
under the curve.
There are two possibilities, @i{viz}: @subcmd{FREE} and @subcmd{NEGEXPO}.
The @subcmd{FREE} method uses a non-parametric estimate, and the @subcmd{NEGEXPO} method a bi-negative
exponential distribution estimate.
The @subcmd{NEGEXPO} method should only be used when the number of positive actual states is
equal to the number of negative actual states.
The default is @subcmd{FREE}.

@item The @subcmd{CUTOFF} parameter is for compatibility and is ignored.
@end itemize
The @subcmd{MISSING} subcommand determines whether user missing values are to
be included or excluded in the analysis.  The default behaviour is to
exclude them.
Cases are excluded on a listwise basis; if any of the variables in @var{var_list}
or if the variable @var{state_var} is missing, then the entire case is
excluded.
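For example, the following syntax (the variable names are illustrative) plots
the ROC curve with a reference line for the predictor @var{score}, treating the
value 1 of @var{disease} as the positive state, and prints the standard error
and the coordinates table:

@example
ROC @var{score} BY @var{disease} (1)
        /PLOT = CURVE(REFERENCE)
        /PRINT = SE COORDINATES.
@end example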
@c  LocalWords:  subcmd subcommand