1 @c PSPP - a program for statistical analysis.
2 @c Copyright (C) 2017, 2020 Free Software Foundation, Inc.
3 @c Permission is granted to copy, distribute and/or modify this document
4 @c under the terms of the GNU Free Documentation License, Version 1.3
5 @c or any later version published by the Free Software Foundation;
6 @c with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
7 @c A copy of the license is included in the section entitled "GNU
8 @c Free Documentation License".
This chapter documents the statistical procedures that @pspp{} supports so
far.
@menu
* DESCRIPTIVES::                Descriptive statistics.
* FREQUENCIES::                 Frequency tables.
* EXAMINE::                     Testing data for normality.
* GRAPH::                       Plot data.
* CORRELATIONS::                Correlation tables.
* CROSSTABS::                   Crosstabulation tables.
* FACTOR::                      Factor analysis and Principal Components analysis.
* GLM::                         Univariate Linear Models.
* LOGISTIC REGRESSION::         Bivariate Logistic Regression.
* MEANS::                       Average values and other statistics.
* NPAR TESTS::                  Nonparametric tests.
* T-TEST::                      Test hypotheses about means.
* ONEWAY::                      One way analysis of variance.
* QUICK CLUSTER::               K-Means clustering.
* RANK::                        Compute rank scores.
* REGRESSION::                  Linear regression.
* RELIABILITY::                 Reliability analysis.
* ROC::                         Receiver Operating Characteristic.
@end menu
DESCRIPTIVES
        /VARIABLES=@var{var_list}
        /MISSING=@{VARIABLE,LISTWISE@} @{INCLUDE,NOINCLUDE@}
        /FORMAT=@{LABELS,NOLABELS@} @{NOINDEX,INDEX@} @{LINE,SERIAL@}
        /SAVE
        /STATISTICS=@{ALL,MEAN,SEMEAN,STDDEV,VARIANCE,KURTOSIS,
                     SKEWNESS,RANGE,MINIMUM,MAXIMUM,SUM,DEFAULT,
                     SESKEWNESS,SEKURTOSIS@}
        /SORT=@{NONE,MEAN,SEMEAN,STDDEV,VARIANCE,KURTOSIS,SKEWNESS,
               RANGE,MINIMUM,MAXIMUM,SUM,SESKEWNESS,SEKURTOSIS,NAME@}
              @{A,D@}
The @cmd{DESCRIPTIVES} procedure reads the active dataset and outputs
linear descriptive statistics requested by the user. In addition, it can
optionally compute Z-scores.
59 The @subcmd{VARIABLES} subcommand, which is required, specifies the list of
60 variables to be analyzed. Keyword @subcmd{VARIABLES} is optional.
62 All other subcommands are optional:
The @subcmd{MISSING} subcommand determines the handling of missing values. If
@subcmd{INCLUDE} is set, then user-missing values are included in the
calculations. If @subcmd{NOINCLUDE} is set, which is the default, user-missing
values are excluded. If @subcmd{VARIABLE} is set, then missing values are
excluded on a variable by variable basis; if @subcmd{LISTWISE} is set, then
the entire case is excluded whenever any value in that case is
system-missing or, if @subcmd{INCLUDE} is set, user-missing.
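
For example, a command like the following sketch (the variables
@var{weight} and @var{height} are illustrative) excludes a case entirely
whenever any listed variable is missing, while still counting
user-missing values in the calculations:

@example
DESCRIPTIVES
        /VARIABLES=@var{weight} @var{height}
        /MISSING=LISTWISE INCLUDE.
@end example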
72 The @subcmd{FORMAT} subcommand has no effect. It is accepted for
73 backward compatibility.
75 The @subcmd{SAVE} subcommand causes @cmd{DESCRIPTIVES} to calculate Z scores for all
76 the specified variables. The Z scores are saved to new variables.
77 Variable names are generated by trying first the original variable name
78 with Z prepended and truncated to a maximum of 8 characters, then the
79 names ZSC000 through ZSC999, STDZ00 through STDZ09, ZZZZ00 through
80 ZZZZ09, ZQZQ00 through ZQZQ09, in that sequence. In addition, Z score
81 variable names can be specified explicitly on @subcmd{VARIABLES} in the variable
82 list by enclosing them in parentheses after each variable.
83 When Z scores are calculated, @pspp{} ignores @cmd{TEMPORARY},
84 treating temporary transformations as permanent.
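
For example, the following hypothetical command computes Z scores for
@var{salary} and @var{age}, explicitly naming the first Z-score variable
@var{zsal} and letting @pspp{} generate a name for the second:

@example
DESCRIPTIVES
        /VARIABLES=@var{salary} (@var{zsal}) @var{age}
        /SAVE.
@end example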
86 The @subcmd{STATISTICS} subcommand specifies the statistics to be displayed:
@table @code
@item @subcmd{ALL}
All of the statistics below.
@item @subcmd{MEAN}
Mean.
@item @subcmd{SEMEAN}
Standard error of the mean.
@item @subcmd{STDDEV}
Standard deviation.
@item @subcmd{VARIANCE}
Variance.
@item @subcmd{KURTOSIS}
Kurtosis and standard error of the kurtosis.
@item @subcmd{SKEWNESS}
Skewness and standard error of the skewness.
@item @subcmd{RANGE}
Range.
@item @subcmd{MINIMUM}
Minimum value.
@item @subcmd{MAXIMUM}
Maximum value.
@item @subcmd{SUM}
Sum.
@item @subcmd{DEFAULT}
Mean, standard deviation of the mean, minimum, maximum.
@item @subcmd{SEKURTOSIS}
Standard error of the kurtosis.
@item @subcmd{SESKEWNESS}
Standard error of the skewness.
@end table
119 The @subcmd{SORT} subcommand specifies how the statistics should be sorted. Most
120 of the possible values should be self-explanatory. @subcmd{NAME} causes the
121 statistics to be sorted by name. By default, the statistics are listed
122 in the order that they are specified on the @subcmd{VARIABLES} subcommand.
123 The @subcmd{A} and @subcmd{D} settings request an ascending or descending
124 sort order, respectively.
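
For example, the following sketch (variable names are illustrative)
lists the variables sorted by their standard deviation rather than in
the order in which they were specified:

@example
DESCRIPTIVES
        /VARIABLES=@var{height} @var{weight} @var{age}
        /SORT=STDDEV.
@end example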
126 @subsection Descriptives Example
128 The @file{physiology.sav} file contains various physiological data for a sample
129 of persons. Running the @cmd{DESCRIPTIVES} command on the variables @exvar{height}
130 and @exvar{temperature} with the default options allows one to see simple linear
131 statistics for these two variables. In @ref{descriptives:ex}, these variables
are specified on the @subcmd{VARIABLES} subcommand and the @subcmd{SAVE} option
has been used to request that Z scores be calculated.
135 After the command has completed, this example runs @cmd{DESCRIPTIVES} again, this
136 time on the @exvar{zheight} and @exvar{ztemperature} variables,
137 which are the two normalized (Z-score) variables generated by the
138 first @cmd{DESCRIPTIVES} command.
140 @float Example, descriptives:ex
141 @psppsyntax {descriptives.sps}
142 @caption {Running two @cmd{DESCRIPTIVES} commands, one with the @subcmd{SAVE} subcommand}
145 @float Screenshot, descriptives:scr
146 @psppimage {descriptives}
147 @caption {The Descriptives dialog box with two variables and Z-Scores option selected}
In @ref{descriptives:res}, we can see that there are 40 valid cases for each of the variables
and no missing values. The mean of @exvar{height} and @exvar{temperature} is 16677.12
and 37.02 respectively. The descriptive statistics for temperature seem reasonable.
153 However there is a very high standard deviation for @exvar{height} and a suspiciously
154 low minimum. This is due to a data entry error in the
155 data (@pxref{Identifying incorrect data}).
In the second Descriptive Statistics command, one can see that the mean and standard
deviation of both Z score variables are 0 and 1 respectively. All Z score variables
should have these properties, since they are normalized versions of the original scores.
161 @float Result, descriptives:res
162 @psppoutput {descriptives}
163 @caption {Descriptives statistics including two normalized variables (Z-scores)}
FREQUENCIES
        /VARIABLES=@var{var_list}
        /FORMAT=@{TABLE,NOTABLE,LIMIT(@var{limit})@}
                @{AVALUE,DVALUE,AFREQ,DFREQ@}
        /MISSING=@{EXCLUDE,INCLUDE@}
        /STATISTICS=@{DEFAULT,MEAN,SEMEAN,MEDIAN,MODE,STDDEV,VARIANCE,
                     KURTOSIS,SKEWNESS,RANGE,MINIMUM,MAXIMUM,SUM,
                     SESKEWNESS,SEKURTOSIS,ALL,NONE@}
        /NTILES=@var{ntiles}
        /PERCENTILES=percent@dots{}
        /HISTOGRAM=[MINIMUM(@var{x_min})] [MAXIMUM(@var{x_max})]
                   [@{FREQ[(@var{y_max})],PERCENT[(@var{y_max})]@}] [@{NONORMAL,NORMAL@}]
        /PIECHART=[MINIMUM(@var{x_min})] [MAXIMUM(@var{x_max})]
                  [@{FREQ,PERCENT@}] [@{NOMISSING,MISSING@}]
        /BARCHART=[MINIMUM(@var{x_min})] [MAXIMUM(@var{x_max})]
                  [@{FREQ,PERCENT@}]
        /ORDER=@{ANALYSIS,VARIABLE@}
190 (These options are not currently implemented.)
The @cmd{FREQUENCIES} procedure outputs frequency tables for specified
variables.
197 @cmd{FREQUENCIES} can also calculate and display descriptive statistics
198 (including median and mode) and percentiles, and various graphical representations
199 of the frequency distribution.
201 The @subcmd{VARIABLES} subcommand is the only required subcommand. Specify the
202 variables to be analyzed.
The @subcmd{FORMAT} subcommand controls the output format. It has several
possible settings:
209 @subcmd{TABLE}, the default, causes a frequency table to be output for every
210 variable specified. @subcmd{NOTABLE} prevents them from being output. @subcmd{LIMIT}
211 with a numeric argument causes them to be output except when there are
212 more than the specified number of values in the table.
215 Normally frequency tables are sorted in ascending order by value. This
216 is @subcmd{AVALUE}. @subcmd{DVALUE} tables are sorted in descending order by value.
217 @subcmd{AFREQ} and @subcmd{DFREQ} tables are sorted in ascending and descending order,
218 respectively, by frequency count.
221 The @subcmd{MISSING} subcommand controls the handling of user-missing values.
222 When @subcmd{EXCLUDE}, the default, is set, user-missing values are not included
in frequency tables or statistics. When @subcmd{INCLUDE} is set, user-missing
values are included. System-missing values are never included in statistics,
225 but are listed in frequency tables.
227 The available @subcmd{STATISTICS} are the same as available
228 in @cmd{DESCRIPTIVES} (@pxref{DESCRIPTIVES}), with the addition
229 of @subcmd{MEDIAN}, the data's median
value, and @subcmd{MODE}, the mode. (If there are multiple modes, the smallest
231 value is reported.) By default, the mean, standard deviation of the
232 mean, minimum, and maximum are reported for each variable.
235 @subcmd{PERCENTILES} causes the specified percentiles to be reported.
The percentiles should be specified as a list of numbers between 0
and 100 inclusive.
238 The @subcmd{NTILES} subcommand causes the percentiles to be reported at the
239 boundaries of the data set divided into the specified number of ranges.
240 For instance, @subcmd{/NTILES=4} would cause quartiles to be reported.
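
For instance, the following hypothetical command reports the quartiles
of @var{age} as well as its 5th and 95th percentiles:

@example
FREQUENCIES
        /VARIABLES=@var{age}
        /PERCENTILES=5 95
        /NTILES=4.
@end example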
243 The @subcmd{HISTOGRAM} subcommand causes the output to include a histogram for
244 each specified numeric variable. The X axis by default ranges from
245 the minimum to the maximum value observed in the data, but the @subcmd{MINIMUM}
246 and @subcmd{MAXIMUM} keywords can set an explicit range.
@footnote{The
bin width is chosen according to the Freedman-Diaconis rule:
249 @math{2 \times IQR(x)n^{-1/3}}, where @math{IQR(x)} is the interquartile range of @math{x}
250 and @math{n} is the number of samples. Note that
251 @cmd{EXAMINE} uses a different algorithm to determine bin sizes.}
252 Histograms are not created for string variables.
Specify @subcmd{NORMAL} to superimpose a normal curve on the
histogram.
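
For example, the following sketch (the range 140 to 200 is illustrative)
draws a histogram of @var{height} restricted to an explicit X axis
range, with a normal curve superimposed:

@example
FREQUENCIES
        /VARIABLES=@var{height}
        /HISTOGRAM=MINIMUM(140) MAXIMUM(200) NORMAL.
@end example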
The @subcmd{PIECHART} subcommand adds a pie chart for each variable to the output. Each
slice represents one value, with the size of the slice proportional to
the value's frequency. By default, all non-missing values are given slices.
262 The @subcmd{MINIMUM} and @subcmd{MAXIMUM} keywords can be used to limit the
263 displayed slices to a given range of values.
The keyword @subcmd{NOMISSING} causes missing values to be omitted from the
pie chart. This is the default.
266 If instead, @subcmd{MISSING} is specified, then the pie chart includes
267 a single slice representing all system missing and user-missing cases.
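
For example, the following hypothetical command draws a pie chart of
@var{occupation} with an additional slice for the missing cases:

@example
FREQUENCIES
        /VARIABLES=@var{occupation}
        /PIECHART=MISSING.
@end example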
270 The @subcmd{BARCHART} subcommand produces a bar chart for each variable.
271 The @subcmd{MINIMUM} and @subcmd{MAXIMUM} keywords can be used to omit
categories whose counts lie outside the specified limits.
The @subcmd{FREQ} option (default) causes the ordinate to display the frequency
of each category, whereas the @subcmd{PERCENT} option displays relative
percentages.
277 The @subcmd{FREQ} and @subcmd{PERCENT} options on @subcmd{HISTOGRAM} and
278 @subcmd{PIECHART} are accepted but not currently honoured.
280 The @subcmd{ORDER} subcommand is accepted but ignored.
282 @subsection Frequencies Example
284 @ref{frequencies:ex} runs a frequency analysis on the @exvar{sex}
285 and @exvar{occupation} variables from the @file{personnel.sav} file.
This is useful to get a general idea of the way in which these nominal
287 variables are distributed.
289 @float Example, frequencies:ex
290 @psppsyntax {frequencies.sps}
291 @caption {Running frequencies on the @exvar{sex} and @exvar{occupation} variables}
If you are using the graphical user interface, the dialog box is set up such that
295 by default, several statistics are calculated. Some are not particularly useful
296 for categorical variables, so you may want to disable those.
298 @float Screenshot, frequencies:scr
299 @psppimage {frequencies}
300 @caption {The frequencies dialog box with the @exvar{sex} and @exvar{occupation} variables selected}
From @ref{frequencies:res} it is evident that there are 33 males, 21 females and
2 persons whose sex has not been entered.
306 One can also see how many of each occupation there are in the data.
When dealing with string variables used as nominal values, running a frequency
analysis is useful for detecting data entry errors. Notice that
one @exvar{occupation} value has been mistyped as ``Scrientist''. This entry should
310 be corrected, or marked as missing before using the data.
312 @float Result, frequencies:res
313 @psppoutput {frequencies}
314 @caption {The relative frequencies of @exvar{sex} and @exvar{occupation}}
321 @cindex Exploratory data analysis
322 @cindex normality, testing
EXAMINE
        VARIABLES= @var{var1} [@var{var2}] @dots{} [@var{varN}]
   [BY @var{factor1} [BY @var{subfactor1}]
     [ @var{factor2} [BY @var{subfactor2}]]
     @dots{}
     [ @var{factor3} [BY @var{subfactor3}]]
   ]
332 /STATISTICS=@{DESCRIPTIVES, EXTREME[(@var{n})], ALL, NONE@}
333 /PLOT=@{BOXPLOT, NPPLOT, HISTOGRAM, SPREADLEVEL[(@var{t})], ALL, NONE@}
335 /COMPARE=@{GROUPS,VARIABLES@}
336 /ID=@var{identity_variable}
338 /PERCENTILE=[@var{percentiles}]=@{HAVERAGE, WAVERAGE, ROUND, AEMPIRICAL, EMPIRICAL @}
339 /MISSING=@{LISTWISE, PAIRWISE@} [@{EXCLUDE, INCLUDE@}]
340 [@{NOREPORT,REPORT@}]
344 The @cmd{EXAMINE} command is used to perform exploratory data analysis.
345 In particular, it is useful for testing how closely a distribution follows a
346 normal distribution, and for finding outliers and extreme values.
348 The @subcmd{VARIABLES} subcommand is mandatory.
349 It specifies the dependent variables and optionally variables to use as
350 factors for the analysis.
Variables listed before the first @subcmd{BY} keyword (if any) are the
dependent variables.
353 The dependent variables may optionally be followed by a list of
factors which tell @pspp{} how to break down the analysis for each
cell.
357 Following the dependent variables, factors may be specified.
358 The factors (if desired) should be preceded by a single @subcmd{BY} keyword.
359 The format for each factor is
361 @var{factorvar} [BY @var{subfactorvar}].
Each unique combination of the values of @var{factorvar} and
@var{subfactorvar} divides the dataset into @dfn{cells}.
365 Statistics are calculated for each cell
366 and for the entire dataset (unless @subcmd{NOTOTAL} is given).
368 The @subcmd{STATISTICS} subcommand specifies which statistics to show.
@subcmd{DESCRIPTIVES} produces a table showing some parametric and
non-parametric statistics.
371 @subcmd{EXTREME} produces a table showing the extremities of each cell.
A number in parentheses, @var{n}, determines
373 how many upper and lower extremities to show.
374 The default number is 5.
376 The subcommands @subcmd{TOTAL} and @subcmd{NOTOTAL} are mutually exclusive.
377 If @subcmd{TOTAL} appears, then statistics for the entire dataset
378 as well as for each cell are produced.
379 If @subcmd{NOTOTAL} appears, then statistics are produced only for the cells
380 (unless no factor variables have been given).
These subcommands have no effect if there have been no factor variables
given.
387 @cindex spreadlevel plot
388 The @subcmd{PLOT} subcommand specifies which plots are to be produced if any.
389 Available plots are @subcmd{HISTOGRAM}, @subcmd{NPPLOT}, @subcmd{BOXPLOT} and
390 @subcmd{SPREADLEVEL}.
391 The first three can be used to visualise how closely each cell conforms to a
392 normal distribution, whilst the spread vs.@: level plot can be useful to visualise
393 how the variance differs between factors.
394 Boxplots show you the outliers and extreme values.
395 @footnote{@subcmd{HISTOGRAM} uses Sturges' rule to determine the number of
bins, as approximately @math{1 + \log_2(n)}, where @math{n} is the number of samples.
397 Note that @cmd{FREQUENCIES} uses a different algorithm to find the bin size.}
399 The @subcmd{SPREADLEVEL} plot displays the interquartile range versus the
400 median. It takes an optional parameter @var{t}, which specifies how the data
401 should be transformed prior to plotting.
402 The given value @var{t} is a power to which the data are raised. For example, if
403 @var{t} is given as 2, then the square of the data is used.
Zero, however, is a special value. If @var{t} is 0 or
is omitted, then the data are transformed by taking their natural logarithm instead of
being raised to the power of @var{t}.
409 When one or more plots are requested, @subcmd{EXAMINE} also performs the
410 Shapiro-Wilk test for each category.
411 There are however a number of provisos:
@itemize
@item All weight values must be integer.
@item The cumulative weight value must be in the range [3, 5000].
@end itemize
417 The @subcmd{COMPARE} subcommand is only relevant if producing boxplots, and it is only
useful if there is more than one dependent variable and at least one factor.
If @subcmd{/COMPARE=GROUPS} is specified, then one plot per dependent variable is produced,
each of which contains boxplots for all the cells.
422 If @subcmd{/COMPARE=VARIABLES} is specified, then one plot per cell is produced,
423 each containing one boxplot per dependent variable.
424 If the @subcmd{/COMPARE} subcommand is omitted, then @pspp{} behaves as if
425 @subcmd{/COMPARE=GROUPS} were given.
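
For example, the following sketch (variable names are illustrative)
produces one plot for each value of @var{gender}, each containing a
boxplot of @var{height} and a boxplot of @var{weight}:

@example
EXAMINE @var{height} @var{weight} BY @var{gender}
        /PLOT = BOXPLOT
        /COMPARE = VARIABLES.
@end example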
427 The @subcmd{ID} subcommand is relevant only if @subcmd{/PLOT=BOXPLOT} or
428 @subcmd{/STATISTICS=EXTREME} has been given.
429 If given, it should provide the name of a variable which is to be used
to label extreme values and outliers.
431 Numeric or string variables are permissible.
If the @subcmd{ID} subcommand is not given, then the case number is used for
labelling.
The @subcmd{CINTERVAL} subcommand specifies the confidence interval to use in the
calculation of the descriptive statistics. The default is 95%.
439 The @subcmd{PERCENTILES} subcommand specifies which percentiles are to be calculated,
440 and which algorithm to use for calculating them. The default is to
calculate the 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentiles using the
442 @subcmd{HAVERAGE} algorithm.
The @subcmd{TOTAL} and @subcmd{NOTOTAL} subcommands are mutually exclusive. If @subcmd{TOTAL}
is given and factors have been specified in the @subcmd{VARIABLES} subcommand,
446 then statistics for the unfactored dependent variables are
447 produced in addition to the factored variables. If there are no
448 factors specified then @subcmd{TOTAL} and @subcmd{NOTOTAL} have no effect.
451 The following example generates descriptive statistics and histograms for
452 two variables @var{score1} and @var{score2}.
453 Two factors are given, @i{viz}: @var{gender} and @var{gender} BY @var{culture}.
Therefore, the descriptives and histograms are generated for each distinct value
of @var{gender} @emph{and} for each distinct combination of the values
of @var{gender} and @var{culture}.
458 Since the @subcmd{NOTOTAL} keyword is given, statistics and histograms for
459 @var{score1} and @var{score2} covering the whole dataset are not produced.
@example
EXAMINE @var{score1} @var{score2} BY
        @var{gender}
        @var{gender} BY @var{culture}
        /STATISTICS = DESCRIPTIVES
        /PLOT = HISTOGRAM
        /NOTOTAL.
@end example
Here is a second example showing how the @cmd{EXAMINE} command can be used to find extremities.
@example
EXAMINE @var{height} @var{weight} BY
        @var{gender}
        /STATISTICS = EXTREME (3)
        /PLOT = BOXPLOT
        /COMPARE = GROUPS
        /ID = @var{name}.
@end example
478 In this example, we look at the height and weight of a sample of individuals and
479 how they differ between male and female.
A table showing the 3 largest and the 3 smallest values of @exvar{height} and
@exvar{weight} for each gender, and for the whole dataset, is shown.
482 In addition, the @subcmd{/PLOT} subcommand requests boxplots.
Because @subcmd{/COMPARE = GROUPS} was specified, boxplots for male and female are
juxtaposed in the same graphic, allowing us to easily see the difference between
the genders.
486 Since the variable @var{name} was specified on the @subcmd{ID} subcommand,
487 values of the @var{name} variable are used to label the extreme values.
490 If you specify many dependent variables or factor variables
491 for which there are many distinct values, then @cmd{EXAMINE} will produce a very
492 large quantity of output.
498 @cindex Exploratory data analysis
499 @cindex normality, testing
GRAPH
        /HISTOGRAM [(NORMAL)]= @var{var}
504 /SCATTERPLOT [(BIVARIATE)] = @var{var1} WITH @var{var2} [BY @var{var3}]
505 /BAR = @{@var{summary-function}(@var{var1}) | @var{count-function}@} BY @var{var2} [BY @var{var3}]
506 [ /MISSING=@{LISTWISE, VARIABLE@} [@{EXCLUDE, INCLUDE@}] ]
507 [@{NOREPORT,REPORT@}]
511 The @cmd{GRAPH} command produces graphical plots of data. Only one of the subcommands
512 @subcmd{HISTOGRAM}, @subcmd{BAR} or @subcmd{SCATTERPLOT} can be specified, @i{i.e.} only one plot
can be produced per call of @cmd{GRAPH}. The @subcmd{MISSING} subcommand is optional.
516 * SCATTERPLOT:: Cartesian Plots
517 * HISTOGRAM:: Histograms
518 * BAR CHART:: Bar Charts
522 @subsection Scatterplot
The subcommand @subcmd{SCATTERPLOT} produces an xy plot of the
data.
527 @cmd{GRAPH} uses the third variable @var{var3}, if specified, to determine
528 the colours and/or markers for the plot.
529 The following is an example for producing a scatterplot.
533 /SCATTERPLOT = @var{height} WITH @var{weight} BY @var{gender}.
536 This example produces a scatterplot where @var{height} is plotted versus @var{weight}. Depending
on the value of the @var{gender} variable, the colour of the datapoint is different. With
this plot it is possible to analyze gender differences in the @var{height} versus @var{weight} relation.
541 @subsection Histogram
The subcommand @subcmd{HISTOGRAM} produces a histogram. Only one variable is allowed for
the histogram.
546 The keyword @subcmd{NORMAL} may be specified in parentheses, to indicate that the ideal normal curve
547 should be superimposed over the histogram.
For an alternative method to produce histograms, @pxref{EXAMINE}. The
549 following example produces a histogram plot for the variable @var{weight}.
553 /HISTOGRAM = @var{weight}.
557 @subsection Bar Chart
560 The subcommand @subcmd{BAR} produces a bar chart.
561 This subcommand requires that a @var{count-function} be specified (with no arguments) or a @var{summary-function} with a variable @var{var1} in parentheses.
Following the summary or count function, the keyword @subcmd{BY} should be specified and then a categorical variable, @var{var2}.
563 The values of the variable @var{var2} determine the labels of the bars to be plotted.
564 Optionally a second categorical variable @var{var3} may be specified in which case a clustered (grouped) bar chart is produced.
Valid count functions are
@table @code
@item COUNT
The weighted counts of the cases in each category.
@item PCT
The weighted counts of the cases in each category expressed as a percentage of the total weights of the cases.
@item CUFREQ
The cumulative weighted counts of the cases in each category.
@item CUPCT
The cumulative weighted counts of the cases in each category expressed as a percentage of the total weights of the cases.
@end table
578 The summary function is applied to @var{var1} across all cases in each category.
579 The recognised summary functions are:
591 The following examples assume a dataset which is the results of a survey.
592 Each respondent has indicated annual income, their sex and city of residence.
One could create a bar chart showing how the mean income varies between residents of different cities, thus:
595 GRAPH /BAR = MEAN(@var{income}) BY @var{city}.
598 This can be extended to also indicate how income in each city differs between the sexes.
600 GRAPH /BAR = MEAN(@var{income}) BY @var{city} BY @var{sex}.
603 One might also want to see how many respondents there are from each city. This can be achieved as follows:
605 GRAPH /BAR = COUNT BY @var{city}.
608 Bar charts can also be produced using the @ref{FREQUENCIES} and @ref{CROSSTABS} commands.
611 @section CORRELATIONS
CORRELATIONS
     /VARIABLES = @var{var_list} [ WITH @var{var_list} ]
     [
      .
      .
      .
      /VARIABLES = @var{var_list} [ WITH @var{var_list} ]
      /VARIABLES = @var{var_list} [ WITH @var{var_list} ]
     ]
625 [ /PRINT=@{TWOTAIL, ONETAIL@} @{SIG, NOSIG@} ]
626 [ /STATISTICS=DESCRIPTIVES XPROD ALL]
627 [ /MISSING=@{PAIRWISE, LISTWISE@} @{INCLUDE, EXCLUDE@} ]
631 The @cmd{CORRELATIONS} procedure produces tables of the Pearson correlation coefficient
632 for a set of variables. The significance of the coefficients are also given.
634 At least one @subcmd{VARIABLES} subcommand is required. If you specify the @subcmd{WITH}
635 keyword, then a non-square correlation table is produced.
The variables preceding @subcmd{WITH} are used as the rows of the table,
637 and the variables following @subcmd{WITH} are used as the columns of the table.
If no @subcmd{WITH} keyword is specified, then @cmd{CORRELATIONS} produces a
square, symmetrical table using all variables.
The @subcmd{MISSING} subcommand determines the handling of missing values.
642 If @subcmd{INCLUDE} is set, then user-missing values are included in the
643 calculations, but system-missing values are not.
644 If @subcmd{EXCLUDE} is set, which is the default, user-missing
645 values are excluded as well as system-missing values.
647 If @subcmd{LISTWISE} is set, then the entire case is excluded from analysis
648 whenever any variable specified in any @cmd{/VARIABLES} subcommand
649 contains a missing value.
If @subcmd{PAIRWISE} is set, then a case is considered missing only if either of the
values for the particular coefficient is missing.
652 The default is @subcmd{PAIRWISE}.
654 The @subcmd{PRINT} subcommand is used to control how the reported significance values are printed.
655 If the @subcmd{TWOTAIL} option is used, then a two-tailed test of significance is
656 printed. If the @subcmd{ONETAIL} option is given, then a one-tailed test is used.
657 The default is @subcmd{TWOTAIL}.
659 If the @subcmd{NOSIG} option is specified, then correlation coefficients with significance less than
660 0.05 are highlighted.
661 If @subcmd{SIG} is specified, then no highlighting is performed. This is the default.
664 The @subcmd{STATISTICS} subcommand requests additional statistics to be displayed. The keyword
@subcmd{DESCRIPTIVES} requests that the mean, number of non-missing cases, and the non-biased
estimator of the standard deviation be displayed.
These statistics are displayed in a separate table, for all the variables listed
668 in any @subcmd{/VARIABLES} subcommand.
669 The @subcmd{XPROD} keyword requests cross-product deviations and covariance estimators to
670 be displayed for each pair of variables.
671 The keyword @subcmd{ALL} is the union of @subcmd{DESCRIPTIVES} and @subcmd{XPROD}.
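
Putting these subcommands together, a command like the following
hypothetical example (variable names are illustrative) produces a
non-square table of @var{height} and @var{weight} against @var{age},
with descriptive statistics and listwise deletion of missing values:

@example
CORRELATIONS
        /VARIABLES = @var{height} @var{weight} WITH @var{age}
        /STATISTICS = DESCRIPTIVES
        /MISSING = LISTWISE.
@end example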
CROSSTABS
        /TABLES=@var{var_list} BY @var{var_list} [BY @var{var_list}]@dots{}
680 /MISSING=@{TABLE,INCLUDE,REPORT@}
681 /WRITE=@{NONE,CELLS,ALL@}
682 /FORMAT=@{TABLES,NOTABLES@}
687 /CELLS=@{COUNT,ROW,COLUMN,TOTAL,EXPECTED,RESIDUAL,SRESIDUAL,
688 ASRESIDUAL,ALL,NONE@}
        /COUNT=@{ASIS,CASE,CELL@}
               @{ROUND,TRUNCATE@}
691 /STATISTICS=@{CHISQ,PHI,CC,LAMBDA,UC,BTAU,CTAU,RISK,GAMMA,D,
692 KAPPA,ETA,CORR,ALL,NONE@}
(Integer mode.)
        /VARIABLES=@var{var_list} (@var{low},@var{high})@dots{}
699 The @cmd{CROSSTABS} procedure displays crosstabulation
700 tables requested by the user. It can calculate several statistics for
701 each cell in the crosstabulation tables. In addition, a number of
702 statistics can be calculated for each table itself.
704 The @subcmd{TABLES} subcommand is used to specify the tables to be reported. Any
705 number of dimensions is permitted, and any number of variables per
706 dimension is allowed. The @subcmd{TABLES} subcommand may be repeated as many
times as needed. This is the only required subcommand in @dfn{general
mode}.
710 Occasionally, one may want to invoke a special mode called @dfn{integer
711 mode}. Normally, in general mode, @pspp{} automatically determines
712 what values occur in the data. In integer mode, the user specifies the
713 range of values that the data assumes. To invoke this mode, specify the
714 @subcmd{VARIABLES} subcommand, giving a range of data values in parentheses for
715 each variable to be used on the @subcmd{TABLES} subcommand. Data values inside
716 the range are truncated to the nearest integer, then assigned to that
717 value. If values occur outside this range, they are discarded. When it
718 is present, the @subcmd{VARIABLES} subcommand must precede the @subcmd{TABLES}
In general mode, numeric and string variables may be specified on
@subcmd{TABLES}. In integer mode, only numeric variables are allowed.
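
For example, the following sketch (variables and ranges are
illustrative) runs in integer mode, tabulating @var{a} against @var{b}
and considering only values within the given ranges:

@example
CROSSTABS
        /VARIABLES=@var{a} (1,2) @var{b} (1,5)
        /TABLES=@var{a} BY @var{b}.
@end example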
724 The @subcmd{MISSING} subcommand determines the handling of user-missing values.
725 When set to @subcmd{TABLE}, the default, missing values are dropped on a table by
726 table basis. When set to @subcmd{INCLUDE}, user-missing values are included in
727 tables and statistics. When set to @subcmd{REPORT}, which is allowed only in
728 integer mode, user-missing values are included in tables but marked with
729 a footnote and excluded from statistical calculations.
731 Currently the @subcmd{WRITE} subcommand is ignored.
733 The @subcmd{FORMAT} subcommand controls the characteristics of the
734 crosstabulation tables to be displayed. It has a number of possible
739 @subcmd{TABLES}, the default, causes crosstabulation tables to be output.
740 @subcmd{NOTABLES} suppresses them.
743 @subcmd{PIVOT}, the default, causes each @subcmd{TABLES} subcommand to be displayed in a
pivot table format. @subcmd{NOPIVOT} causes the old-style crosstabulation format
to be used.
748 @subcmd{AVALUE}, the default, causes values to be sorted in ascending order.
749 @subcmd{DVALUE} asserts a descending sort order.
752 @subcmd{INDEX} and @subcmd{NOINDEX} are currently ignored.
@subcmd{BOX} and @subcmd{NOBOX} are currently ignored.
758 The @subcmd{CELLS} subcommand controls the contents of each cell in the displayed
759 crosstabulation table. The possible settings are:
@table @code
@item COUNT
Frequency count.
@item ROW
Row percent.
@item COLUMN
Column percent.
@item TOTAL
Table percent.
@item EXPECTED
Expected value.
@item RESIDUAL
Residual.
@item SRESIDUAL
Standardized residual.
@item ASRESIDUAL
Adjusted standardized residual.
@item ALL
All of the above.
@item NONE
Suppress cells entirely.
@end table
784 @samp{/CELLS} without any settings specified requests @subcmd{COUNT}, @subcmd{ROW},
785 @subcmd{COLUMN}, and @subcmd{TOTAL}.
If @subcmd{CELLS} is not specified at all then only @subcmd{COUNT}
is selected.
789 By default, crosstabulation and statistics use raw case weights,
790 without rounding. Use the @subcmd{/COUNT} subcommand to perform
rounding: @subcmd{CASE} rounds the weights of individual cases as they are
read, @subcmd{CELL} rounds the weights of cells within each crosstabulation
table after it has been constructed, and @subcmd{ASIS} explicitly specifies the
default non-rounding behavior. When rounding is requested, @subcmd{ROUND}, the
default, rounds to the nearest integer and @subcmd{TRUNCATE} rounds toward
zero.
798 The @subcmd{STATISTICS} subcommand selects statistics for computation:
@table @code
@item CHISQ
Pearson chi-square, likelihood ratio, Fisher's exact test, continuity
correction, linear-by-linear association.
@item PHI
Phi.
@item CC
Contingency coefficient.
@item LAMBDA
Lambda.
@item UC
Uncertainty coefficient.
@item BTAU
Tau-b.
@item CTAU
Tau-c.
@item RISK
Risk estimate.
@item GAMMA
Gamma.
@item D
Somers' D.
@item KAPPA
Cohen's Kappa.
@item ETA
Eta.
@item CORR
Spearman correlation, Pearson's r.
@end table
836 Selected statistics are only calculated when appropriate for the
837 statistic. Certain statistics require tables of a particular size, and
838 some statistics are calculated only in integer mode.
@samp{/STATISTICS} without any settings selects @subcmd{CHISQ}. If the
841 @subcmd{STATISTICS} subcommand is not given, no statistics are calculated.
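
For example, the following hypothetical command displays a
crosstabulation of @var{sex} by @var{occupation} with row percentages
and a chi-square test:

@example
CROSSTABS
        /TABLES=@var{sex} BY @var{occupation}
        /CELLS=COUNT ROW
        /STATISTICS=CHISQ.
@end example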
844 The @samp{/BARCHART} subcommand produces a clustered bar chart for the first two
845 variables on each table.
846 If a table has more than two variables, the counts for the third and subsequent levels
847 are aggregated and the chart is produced as if there were only two variables.
850 @strong{Please note:} Currently the implementation of @cmd{CROSSTABS} has the
851 following limitations:
@itemize @bullet
@item
Significance of some symmetric and directional measures is not calculated.
@item
Asymptotic standard error is not calculated for
Goodman and Kruskal's tau or symmetric Somers' d.
@item
Approximate T is not calculated for symmetric uncertainty coefficient.
@end itemize
863 Fixes for any of these deficiencies would be welcomed.
@node FACTOR
@section FACTOR

@vindex FACTOR
@cindex factor analysis
@cindex principal components analysis
@cindex principal axis factoring
@cindex data reduction

@display
FACTOR  @{
         VARIABLES=@var{var_list},
         MATRIX IN (@{CORR,COV@}=@{*,@var{file_spec}@})
        @}

        [ /METHOD = @{CORRELATION, COVARIANCE@} ]

        [ /ANALYSIS=@var{var_list} ]

        [ /EXTRACTION=@{PC, PAF@}]

        [ /ROTATION=@{VARIMAX, EQUAMAX, QUARTIMAX, PROMAX[(@var{k})], NOROTATE@}]

        [ /PRINT=[INITIAL] [EXTRACTION] [ROTATION] [UNIVARIATE] [CORRELATION] [COVARIANCE] [DET] [KMO] [AIC] [SIG] [ALL] [DEFAULT] ]

        [ /FORMAT=[SORT] [BLANK(@var{n})] [DEFAULT] ]

        [ /CRITERIA=[FACTORS(@var{n})] [MINEIGEN(@var{l})] [ITERATE(@var{m})] [ECONVERGE (@var{delta})] [DEFAULT] ]

        [ /MISSING=[@{LISTWISE, PAIRWISE@}] [@{INCLUDE, EXCLUDE@}] ]
@end display
The @cmd{FACTOR} command performs Factor Analysis (Principal Components
Analysis or Principal Axis Factoring) on a dataset.  It may be used to find
common factors in the data or for data reduction purposes.
The @subcmd{VARIABLES} subcommand is required (unless the @subcmd{MATRIX IN}
subcommand is used instead).
It lists the variables which are to participate in the analysis.  (The @subcmd{ANALYSIS}
subcommand may optionally further limit the variables that
participate; it is useful primarily in conjunction with @subcmd{MATRIX IN}.)
If @subcmd{MATRIX IN} instead of @subcmd{VARIABLES} is specified, then the analysis
is performed on a pre-prepared correlation or covariance matrix file instead of on
individual data cases.  Typically the matrix file will have been generated by
@cmd{MATRIX DATA} (@pxref{MATRIX DATA}) or provided by a third party.
If specified, @subcmd{MATRIX IN} must be followed by @samp{COV} or @samp{CORR},
then by @samp{=} and @var{file_spec} all in parentheses.
@var{file_spec} may either be an asterisk, which indicates the currently loaded
dataset, or it may be a file name to be loaded.  @xref{MATRIX DATA}, for the expected
format of the matrix file.
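For example, the following sketch factor-analyzes a correlation matrix held in
the currently loaded dataset (such as one produced by a preceding
@cmd{MATRIX DATA} command):

@example
FACTOR MATRIX IN (CORR=*)
       /EXTRACTION=PC.
@end example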
The @subcmd{/EXTRACTION} subcommand is used to specify the way in which factors
(components) are extracted from the data.
If @subcmd{PC} is specified, then Principal Components Analysis is used.
If @subcmd{PAF} is specified, then Principal Axis Factoring is
used.  By default Principal Components Analysis is used.
The @subcmd{/ROTATION} subcommand is used to specify the method by which the
extracted solution is rotated.  Three orthogonal rotation methods are available:
@subcmd{VARIMAX} (which is the default), @subcmd{EQUAMAX}, and @subcmd{QUARTIMAX}.
There is one oblique rotation method, @i{viz}: @subcmd{PROMAX}.
Optionally you may enter the power of the promax rotation @var{k}, which must be enclosed in parentheses.
The default value of @var{k} is 5.
If you don't want any rotation to be performed, specify @subcmd{NOROTATE}.
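For instance, assuming illustrative variables @var{x1} to @var{x4}, an oblique
promax rotation with power 4 could be requested as:

@example
FACTOR VARIABLES=x1 x2 x3 x4
       /ROTATION=PROMAX(4).
@end example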
The @subcmd{/METHOD} subcommand should be used to determine whether the
covariance matrix or the correlation matrix of the data is
to be analysed.  By default, the correlation matrix is analysed.
The @subcmd{/PRINT} subcommand may be used to select which features of the analysis are reported:

@table @asis
@item @subcmd{UNIVARIATE}
A table of mean values, standard deviations and total weights is printed.
@item @subcmd{INITIAL}
Initial communalities and eigenvalues are printed.
@item @subcmd{EXTRACTION}
Extracted communalities and eigenvalues are printed.
@item @subcmd{ROTATION}
Rotated communalities and eigenvalues are printed.
@item @subcmd{CORRELATION}
The correlation matrix is printed.
@item @subcmd{COVARIANCE}
The covariance matrix is printed.
@item @subcmd{DET}
The determinant of the correlation or covariance matrix is printed.
@item @subcmd{AIC}
The anti-image covariance and anti-image correlation matrices are printed.
@item @subcmd{KMO}
The Kaiser-Meyer-Olkin measure of sampling adequacy and the Bartlett test of sphericity are printed.
@item @subcmd{SIG}
The significance of the elements of the correlation matrix is printed.
@item @subcmd{ALL}
All of the above are printed.
@item @subcmd{DEFAULT}
Identical to @subcmd{INITIAL} and @subcmd{EXTRACTION}.
@end table
If @subcmd{/PLOT=EIGEN} is given, then a ``Scree'' plot of the eigenvalues is
printed.  This can be useful for visualizing the factors and deciding
which factors (components) should be retained.
The @subcmd{/FORMAT} subcommand determines how data are to be
displayed in loading matrices.  If @subcmd{SORT} is specified, then
the variables are sorted in descending order of significance.  If
@subcmd{BLANK(@var{n})} is specified, then coefficients whose absolute
value is less than @var{n} are not printed.  If the keyword
@subcmd{DEFAULT} is specified, or if no @subcmd{/FORMAT} subcommand is
specified, then no sorting is performed, and all coefficients are printed.
You can use the @subcmd{/CRITERIA} subcommand to specify how the number of
extracted factors (components) is chosen.  If @subcmd{FACTORS(@var{n})} is
specified, where @var{n} is an integer, then @var{n} factors are
extracted.  Otherwise, the @subcmd{MINEIGEN} setting is used.
@subcmd{MINEIGEN(@var{l})} requests that all factors whose eigenvalues
are greater than or equal to @var{l} are extracted.  The default value
of @var{l} is 1.  The @subcmd{ECONVERGE} setting has effect only when
using iterative algorithms for factor extraction (such as Principal Axis
Factoring).  @subcmd{ECONVERGE(@var{delta})} specifies that
iteration should cease when the maximum absolute change in any
communality estimate from one iteration to the next is less
than @var{delta}.  The default value of @var{delta} is 0.001.
The @subcmd{ITERATE(@var{m})} setting may appear any number of times and is
used for two different purposes.  It is used to set the maximum number
of iterations (@var{m}) for convergence and also to set the maximum
number of iterations for rotation.
Whether it affects convergence or rotation depends upon which
subcommand follows the @subcmd{ITERATE} setting.
If @subcmd{EXTRACTION} follows, it affects convergence.
If @subcmd{ROTATION} follows, it affects rotation.
If neither @subcmd{ROTATION} nor @subcmd{EXTRACTION} follows an
@subcmd{ITERATE} setting, then that setting is ignored.
The default value of @var{m} is 25.
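Putting these settings together, the following sketch (variable names are
illustrative) extracts exactly three factors and allows up to 50 iterations
for the extraction to converge; because @subcmd{/EXTRACTION} follows
@subcmd{/CRITERIA}, the @subcmd{ITERATE} setting applies to convergence:

@example
FACTOR VARIABLES=x1 x2 x3 x4 x5 x6
       /CRITERIA=FACTORS(3) ITERATE(50)
       /EXTRACTION=PAF.
@end example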
The @subcmd{MISSING} subcommand determines the handling of missing
values.  If @subcmd{INCLUDE} is set, then user-missing values are
included in the calculations, but system-missing values are not.
If @subcmd{EXCLUDE} is set, which is the default, user-missing
values are excluded as well as system-missing values.
If @subcmd{LISTWISE} is set, then the entire case is excluded
from analysis whenever any variable specified in the @subcmd{VARIABLES}
subcommand contains a missing value.

If @subcmd{PAIRWISE} is set, then a case is considered missing only if
either of the values for the particular coefficient are missing.
The default is @subcmd{LISTWISE}.
@node GLM
@section GLM

@vindex GLM
@cindex univariate analysis of variance
@cindex fixed effects
@cindex factorial anova
@cindex analysis of variance

@display
GLM @var{dependent_vars} BY @var{fixed_factors}
     [/METHOD = SSTYPE(@var{type})]
     [/DESIGN = @var{interaction_0} [@var{interaction_1} [... @var{interaction_n}]]]
     [/INTERCEPT = @{INCLUDE|EXCLUDE@}]
     [/MISSING = @{INCLUDE|EXCLUDE@}]
@end display
The @cmd{GLM} procedure can be used for fixed effects factorial ANOVA.

The @var{dependent_vars} are the variables to be analysed.
You may analyse several variables in the same command in which case they should all
appear before the @code{BY} keyword.

The @var{fixed_factors} list must be one or more categorical variables.  Normally it
does not make sense to enter a scalar variable in the @var{fixed_factors} and doing
so may cause @pspp{} to do a lot of unnecessary processing.
The @subcmd{METHOD} subcommand is used to change the method for producing the sums of
squares.  Available values of @var{type} are 1, 2 and 3.  The default is type 3.
You may specify a custom design using the @subcmd{DESIGN} subcommand.
The design comprises a list of interactions where each interaction is a
list of variables separated by a @samp{*}.  For example the command
@example
GLM subject BY sex age_group race
    /DESIGN = age_group sex race age_group*sex age_group*race
@end example
@noindent specifies the model @math{subject = age_group + sex + race + age_group*sex + age_group*race}.
If no @subcmd{DESIGN} subcommand is specified, then the default is all possible combinations
of the fixed factors.  That is to say
@example
GLM subject BY sex age_group race
@end example
@noindent implies the model
@math{subject = age_group + sex + race + age_group*sex + age_group*race + sex*race + age_group*sex*race}.
The @subcmd{MISSING} subcommand determines the handling of missing
values.
If @subcmd{INCLUDE} is set then, for the purposes of GLM analysis,
only system-missing values are considered
to be missing; user-missing values are not regarded as missing.
If @subcmd{EXCLUDE} is set, which is the default, then user-missing
values are considered to be missing as well as system-missing values.
A case for which any dependent variable or any factor
variable has a missing value is excluded from the analysis.
@node LOGISTIC REGRESSION
@section LOGISTIC REGRESSION

@vindex LOGISTIC REGRESSION
@cindex logistic regression
@cindex bivariate logistic regression

@display
LOGISTIC REGRESSION [VARIABLES =] @var{dependent_var} WITH @var{predictors}

     [/CATEGORICAL = @var{categorical_predictors}]

     [@{/NOCONST | /ORIGIN | /NOORIGIN @}]

     [/PRINT = [SUMMARY] [DEFAULT] [CI(@var{confidence})] [ALL]]

     [/CRITERIA = [BCON(@var{min_delta})] [ITERATE(@var{max_iterations})]
                  [LCON(@var{min_likelihood_delta})] [EPS(@var{min_epsilon})]
                  [CUT(@var{cut_point})]]

     [/MISSING = @{INCLUDE|EXCLUDE@}]
@end display
Bivariate Logistic Regression is used when you want to explain a dichotomous dependent
variable in terms of one or more predictor variables.

The minimum command is
@example
LOGISTIC REGRESSION @var{y} WITH @var{x1} @var{x2} @dots{} @var{xn}.
@end example
Here, @var{y} is the dependent variable, which must be dichotomous and @var{x1} @dots{} @var{xn}
are the predictor variables whose coefficients the procedure estimates.
By default, a constant term is included in the model.
Hence, the full model is
@math{y = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_n x_n}
Predictor variables which are categorical in nature should be listed on the @subcmd{/CATEGORICAL} subcommand.
Simple variables as well as interactions between variables may be listed here.
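For example, in the following sketch (with illustrative variable names),
@var{region} is treated as a categorical predictor rather than as a continuous
one:

@example
LOGISTIC REGRESSION outcome WITH age income region
        /CATEGORICAL = region.
@end example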
If you want a model without the constant term @math{b_0}, use the keyword @subcmd{/ORIGIN}.
@subcmd{/NOCONST} is a synonym for @subcmd{/ORIGIN}.
An iterative Newton-Raphson procedure is used to fit the model.
The @subcmd{/CRITERIA} subcommand is used to specify the stopping criteria of the procedure,
and other parameters.
The value of @var{cut_point} is used in the classification table.  It is the
threshold above which predicted values are considered to be 1.  Values
of @var{cut_point} must lie in the range [0,1].
During iterations, if any one of the stopping criteria are satisfied, the procedure is
considered complete.
The stopping criteria are:
@itemize @bullet
@item The number of iterations exceeds @var{max_iterations}.
The default value of @var{max_iterations} is 20.
@item The changes in all the coefficient estimates are less than @var{min_delta}.
The default value of @var{min_delta} is 0.001.
@item The magnitude of change in the likelihood estimate is less than @var{min_likelihood_delta}.
The default value of @var{min_likelihood_delta} is zero.
This means that this criterion is disabled.
@item The differential of the estimated probability for all cases is less than @var{min_epsilon}.
In other words, the probabilities are close to zero or one.
The default value of @var{min_epsilon} is 0.00000001.
@end itemize
The @subcmd{PRINT} subcommand controls the display of optional statistics.
Currently there is one such option, @subcmd{CI}, which indicates that the
confidence interval of the odds ratio should be displayed as well as its value.
@subcmd{CI} should be followed by an integer in parentheses, to indicate the
confidence level of the desired confidence interval.
The @subcmd{MISSING} subcommand determines the handling of missing
values.
If @subcmd{INCLUDE} is set, then user-missing values are included in the
calculations, but system-missing values are not.
If @subcmd{EXCLUDE} is set, which is the default, user-missing
values are excluded as well as system-missing values.
@node MEANS
@section MEANS

@vindex MEANS
@cindex means

@display
MEANS [TABLES =]
      @{@var{var_list}@}
        [ BY @{@var{var_list}@} [BY @{@var{var_list}@} [BY @{@var{var_list}@} @dots{} ]]]

      [ /@{@var{var_list}@}
        [ BY @{@var{var_list}@} [BY @{@var{var_list}@} [BY @{@var{var_list}@} @dots{} ]]] ]

      [/CELLS = [MEAN] [COUNT] [STDDEV] [SEMEAN] [SUM] [MIN] [MAX] [RANGE]
                [VARIANCE] [KURT] [SEKURT]
                [SKEW] [SESKEW] [FIRST] [LAST]
                [HARMONIC] [GEOMETRIC]
                [DEFAULT] [ALL] [NONE] ]

      [/MISSING = [INCLUDE] [DEPENDENT]]
@end display
You can use the @cmd{MEANS} command to calculate the arithmetic mean and similar
statistics, either for the dataset as a whole or for categories of data.

The simplest form of the command is
@example
MEANS @var{v}.
@end example
@noindent which calculates the mean, count and standard deviation for @var{v}.
If you specify a grouping variable, for example
@example
MEANS @var{v} BY @var{g}.
@end example
@noindent then the means, counts and standard deviations for @var{v} after having
been grouped by @var{g} are calculated.
Instead of the mean, count and standard deviation, you could specify the statistics
in which you are interested:
@example
MEANS @var{x} @var{y} BY @var{g}
      /CELLS = HARMONIC SUM MIN.
@end example
This example calculates the harmonic mean, the sum and the minimum values of @var{x} and @var{y}
after they have been grouped by @var{g}.
The @subcmd{CELLS} subcommand specifies which statistics to calculate.  The available statistics
are:
@table @asis
@item @subcmd{MEAN}
@cindex arithmetic mean
The arithmetic mean.
@item @subcmd{COUNT}
The count of the values.
@item @subcmd{STDDEV}
The standard deviation.
@item @subcmd{SEMEAN}
The standard error of the mean.
@item @subcmd{SUM}
The sum of the values.
@item @subcmd{MIN}
The minimum value.
@item @subcmd{MAX}
The maximum value.
@item @subcmd{RANGE}
The difference between the maximum and minimum values.
@item @subcmd{VARIANCE}
The variance.
@item @subcmd{FIRST}
The first value in the category.
@item @subcmd{LAST}
The last value in the category.
@item @subcmd{SKEW}
The skewness.
@item @subcmd{SESKEW}
The standard error of the skewness.
@item @subcmd{KURT}
The kurtosis.
@item @subcmd{SEKURT}
The standard error of the kurtosis.
@item @subcmd{HARMONIC}
@cindex harmonic mean
The harmonic mean.
@item @subcmd{GEOMETRIC}
@cindex geometric mean
The geometric mean.
@end table

In addition, three special keywords are recognized:
@table @asis
@item @subcmd{DEFAULT}
This is the same as @subcmd{MEAN} @subcmd{COUNT} @subcmd{STDDEV}.
@item @subcmd{ALL}
All of the above statistics are calculated.
@item @subcmd{NONE}
No statistics are calculated (only a summary is shown).
@end table
More than one @dfn{table} can be specified in a single command.
Each table is separated by a @samp{/}.  For
example:
@example
MEANS TABLES =
      @var{c} @var{d} @var{e} BY @var{x}
     /@var{a} @var{b} BY @var{x} @var{y}
     /@var{f} BY @var{y} BY @var{z}.
@end example
This command has three tables (the @samp{TABLE =} is optional).
The first table has three dependent variables @var{c}, @var{d} and @var{e}
and a single categorical variable @var{x}.
The second table has two dependent variables @var{a} and @var{b},
and two categorical variables @var{x} and @var{y}.
The third table has a single dependent variable @var{f}
and a categorical variable formed by the combination of @var{y} and @var{z}.
By default values are omitted from the analysis only if missing values
(either system missing or user missing)
for any of the variables directly involved in their calculation are
encountered.
This behaviour can be modified with the @subcmd{/MISSING} subcommand.
Three options are possible: @subcmd{TABLE}, @subcmd{INCLUDE} and @subcmd{DEPENDENT}.

@subcmd{/MISSING = INCLUDE} says that user missing values, either in the dependent
variables or in the categorical variables should be taken at their face
value, and not excluded.

@subcmd{/MISSING = DEPENDENT} says that user missing values, in the dependent
variables should be taken at their face value, however cases which
have user missing values for the categorical variables should be omitted
from the calculation.
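For example, the following sketch (variable names are illustrative) keeps
cases whose dependent variable value is user-missing, but omits cases with
user-missing values of the grouping variable:

@example
MEANS score BY group
      /MISSING = DEPENDENT.
@end example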
@subsection Example Means

The dataset in @file{repairs.sav} contains the mean time between failures (@exvar{mtbf})
for a sample of artifacts produced by different factories and trialed under
different operating conditions.
Since there are four combinations of categorical variables, by simply looking
at the list of data, it would be hard to see how the scores vary for each category.
@ref{means:ex} shows one way of tabulating the @exvar{mtbf} in a way which is
easier to understand.

@float Example, means:ex
@psppsyntax {means.sps}
@caption {Running @cmd{MEANS} on the @exvar{mtbf} score with categories @exvar{factory} and @exvar{environment}}
@end float

The results are shown in @ref{means:res}.  The figures shown indicate the mean,
standard deviation and number of samples in each category.
These figures however do not indicate whether the results are statistically
significant.  For that, you would need to use the procedures @cmd{ONEWAY}, @cmd{GLM} or
@cmd{T-TEST} depending on the hypothesis being tested.

@float Result, means:res
@psppoutput {means}
@caption {The @exvar{mtbf} categorised by @exvar{factory} and @exvar{environment}}
@end float
Note that there is no limit to the number of variables for which you can calculate
statistics, nor to the number of categorical variables per layer, nor the number
of layers.
However, running @cmd{MEANS} on a large number of variables, or with categorical variables
containing a large number of distinct values may result in an extremely large output, which
will not be easy to interpret.
So you should consider carefully which variables to select for participation in the analysis.
@node NPAR TESTS
@section NPAR TESTS

@vindex NPAR TESTS
@cindex nonparametric tests

@display
NPAR TESTS

     nonparametric test subcommands
     .
     .
     .

     [ /STATISTICS=@{DESCRIPTIVES@} ]

     [ /MISSING=@{ANALYSIS, LISTWISE@} @{INCLUDE, EXCLUDE@} ]

     [ /METHOD=EXACT [ TIMER [(@var{n})] ] ]
@end display
@cmd{NPAR TESTS} performs nonparametric tests.
Nonparametric tests make very few assumptions about the distribution of the
data.
One or more tests may be specified by using the corresponding subcommand.
If the @subcmd{/STATISTICS} subcommand is also specified, then summary statistics are
produced for each variable that is the subject of any test.

Certain tests may take a long time to execute, if an exact figure is required.
Therefore, by default asymptotic approximations are used unless the
subcommand @subcmd{/METHOD=EXACT} is specified.
Exact tests give more accurate results, but may take an unacceptably long
time to perform.  If the @subcmd{TIMER} keyword is used, it sets a maximum time,
after which the test is abandoned, and a warning message is printed.
The time, in minutes, should be specified in parentheses after the @subcmd{TIMER} keyword.
If the @subcmd{TIMER} keyword is given without this figure, then a default value of 5 minutes
is used.
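For example, the following sketch (with an illustrative variable @var{v})
requests an exact test, abandoning it with a warning if it has not completed
within 10 minutes:

@example
NPAR TESTS
     /CHISQUARE = v
     /METHOD=EXACT TIMER(10).
@end example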
@menu
* BINOMIAL::            Binomial Test
* CHISQUARE::           Chi-square Test
* COCHRAN::             Cochran Q Test
* FRIEDMAN::            Friedman Test
* KENDALL::             Kendall's W Test
* KOLMOGOROV-SMIRNOV::  Kolmogorov Smirnov Test
* KRUSKAL-WALLIS::      Kruskal-Wallis Test
* MANN-WHITNEY::        Mann Whitney U Test
* MCNEMAR::             McNemar Test
* MEDIAN::              Median Test
* RUNS::                Runs Test
* SIGN::                The Sign Test
* WILCOXON::            Wilcoxon Signed Ranks Test
@end menu
@node BINOMIAL
@subsection Binomial test
@vindex BINOMIAL
@cindex binomial test

@display
[ /BINOMIAL[(@var{p})]=@var{var_list}[(@var{value1}[, @var{value2}])] ]
@end display

The @subcmd{/BINOMIAL} subcommand compares the observed distribution of a dichotomous
variable with that of a binomial distribution.
The parameter @var{p} specifies the test proportion of the binomial
distribution.
The default value of 0.5 is assumed if @var{p} is omitted.
If a single value appears after the variable list, then that value is
used as the threshold to partition the observed values.  Values less
than or equal to the threshold value form the first category.  Values
greater than the threshold form the second category.

If two values appear after the variable list, then they are used
as the values which a variable must take to be in the respective
category.
Cases for which a variable takes a value equal to neither of the specified
values, take no part in the test for that variable.

If no values appear, then the variable must assume dichotomous
values.
If more than two distinct, non-missing values for a variable
under test are encountered then an error occurs.
If the test proportion is equal to 0.5, then a two tailed test is
reported.  For any other test proportion, a one tailed test is
reported.
For one tailed tests, if the test proportion is less than
or equal to the observed proportion, then the significance of
observing the observed proportion or more is reported.
If the test proportion is more than the observed proportion, then the
significance of observing the observed proportion or less is reported.
That is to say, the test is always performed in the observed
direction.

@pspp{} uses a very precise approximation to the gamma function to
compute the binomial significance.  Thus, exact results are reported
even for very large sample sizes.
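For example, assuming an illustrative dichotomous variable @var{coin} coded 0
and 1, the following tests the observed proportion against a test proportion
of 0.6:

@example
NPAR TESTS
     /BINOMIAL(0.6) = coin.
@end example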
@node CHISQUARE
@subsection Chi-square Test
@vindex CHISQUARE
@cindex chi-square test

@display
[ /CHISQUARE=@var{var_list}[(@var{lo},@var{hi})] [/EXPECTED=@{EQUAL|@var{f1}, @var{f2} @dots{} @var{fn}@}] ]
@end display
The @subcmd{/CHISQUARE} subcommand produces a chi-square statistic for the differences
between the expected and observed frequencies of the categories of a variable.
Optionally, a range of values may appear after the variable list.
If a range is given, then non-integer values are truncated, and values
outside the specified range are excluded from the analysis.

The @subcmd{/EXPECTED} subcommand specifies the expected values of each
category.
There must be exactly one non-zero expected value for each observed
category, or the @subcmd{EQUAL} keyword must be specified.
You may use the notation @subcmd{@var{n}*@var{f}} to specify @var{n}
consecutive expected categories all taking a frequency of @var{f}.
The frequencies given are proportions, not absolute frequencies.  The
sum of the frequencies need not be 1.
If no @subcmd{/EXPECTED} subcommand is given, then equal frequencies
are expected.
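For instance, with an illustrative five-category variable @var{grade}, the
following expects the middle category to occur twice as often as each of the
others, using the @subcmd{@var{n}*@var{f}} notation for the repeated values:

@example
NPAR TESTS
     /CHISQUARE = grade
     /EXPECTED = 2*1 2 2*1.
@end example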
@subsubsection Chi-square Example

A researcher wishes to investigate whether there are an equal number of
persons of each sex in a population.  The sample chosen for investigation
is that from the @file{physiology.sav} dataset.  The null hypothesis for
the test is that the population comprises an equal number of males and females.

The analysis is performed as shown in @ref{chisquare:ex}.

@float Example, chisquare:ex
@psppsyntax {chisquare.sps}
@caption {Performing a chi-square test to check for equal distribution of sexes}
@end float
There is only one test variable, @i{viz:} @exvar{sex}.  The other variables in the dataset
are ignored.

@float Screenshot, chisquare:scr
@psppimage {chisquare}
@caption {Performing a chi-square test using the graphic user interface}
@end float

In @ref{chisquare:res} the summary box shows that in the sample, there are more males
than females.  However the significance of the chi-square result is greater than 0.05
--- the most commonly accepted p-value --- and therefore
there is not enough evidence to reject the null hypothesis and one must conclude
that the evidence does not indicate that there is an imbalance of the sexes
in the population.

@float Result, chisquare:res
@psppoutput {chisquare}
@caption {The results of running a chi-square test on @exvar{sex}}
@end float
@node COCHRAN
@subsection Cochran Q Test
@vindex COCHRAN
@cindex Cochran Q test
@cindex Q, Cochran Q

@display
[ /COCHRAN = @var{var_list} ]
@end display

The Cochran Q test is used to test for differences between three or more groups.
The data for @var{var_list} in all cases must assume exactly two
distinct values (other than missing values).

The value of Q is displayed along with its asymptotic significance
based on a chi-square distribution.
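For example, assuming three illustrative dichotomous variables recording
success (1) or failure (0) of the same subjects under three treatments:

@example
NPAR TESTS
     /COCHRAN = treat1 treat2 treat3.
@end example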
@node FRIEDMAN
@subsection Friedman Test
@vindex FRIEDMAN
@cindex Friedman test

@display
[ /FRIEDMAN = @var{var_list} ]
@end display

The Friedman test is used to test for differences between repeated measures when
there is no indication that the distributions are normally distributed.

A list of variables which contain the measured data must be given.  The procedure
prints the sum of ranks for each variable, the test statistic and its significance.
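For example, with three illustrative variables holding repeated measurements of
the same subjects:

@example
NPAR TESTS
     /FRIEDMAN = trial1 trial2 trial3.
@end example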
@node KENDALL
@subsection Kendall's W Test
@vindex KENDALL
@cindex Kendall's W test
@cindex coefficient of concordance

@display
[ /KENDALL = @var{var_list} ]
@end display

The Kendall test investigates whether an arbitrary number of related samples come from the
same population.
It is identical to the Friedman test except that the additional statistic W, Kendall's Coefficient of Concordance, is printed.
It has the range [0,1] --- a value of zero indicates no agreement between the samples whereas a value of
unity indicates complete agreement.
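For example, the following sketch (variable names are illustrative) computes W
for the ratings given by three judges:

@example
NPAR TESTS
     /KENDALL = judge1 judge2 judge3.
@end example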
@node KOLMOGOROV-SMIRNOV
@subsection Kolmogorov-Smirnov Test
@vindex KOLMOGOROV-SMIRNOV
@vindex K-S
@cindex Kolmogorov-Smirnov test

@display
[ /KOLMOGOROV-SMIRNOV (@{NORMAL [@var{mu}, @var{sigma}], UNIFORM [@var{min}, @var{max}], POISSON [@var{lambda}], EXPONENTIAL [@var{scale}] @}) = @var{var_list} ]
@end display
The one sample Kolmogorov-Smirnov subcommand is used to test whether or not a dataset is
drawn from a particular distribution.  Four distributions are supported, @i{viz:}
Normal, Uniform, Poisson and Exponential.

Ideally you should provide the parameters of the distribution against
which you wish to test the data.  For example, with the normal
distribution the mean (@var{mu}) and standard deviation (@var{sigma})
should be given; with the uniform distribution, the minimum
(@var{min}) and maximum (@var{max}) value should be provided.
However, if the parameters are omitted they are imputed from the
data.  Imputing the parameters reduces the power of the test so should
be avoided if possible.
In the following example, two variables @var{score} and @var{age} are
tested to see if they follow a normal distribution with a mean of 3.5
and a standard deviation of 2.0.
@example
NPAR TESTS
     /KOLMOGOROV-SMIRNOV (normal 3.5 2.0) = @var{score} @var{age}.
@end example
If the variables need to be tested against different distributions, then a separate
subcommand must be used.  For example the following syntax tests @var{score} against
a normal distribution with mean of 3.5 and standard deviation of 2.0 whilst @var{age}
is tested against a normal distribution of mean 40 and standard deviation 1.5.
@example
NPAR TESTS
     /KOLMOGOROV-SMIRNOV (normal 3.5 2.0) = @var{score}
     /KOLMOGOROV-SMIRNOV (normal 40 1.5) = @var{age}.
@end example

The abbreviated subcommand @subcmd{K-S} may be used in place of @subcmd{KOLMOGOROV-SMIRNOV}.
@node KRUSKAL-WALLIS
@subsection Kruskal-Wallis Test
@vindex KRUSKAL-WALLIS
@vindex K-W
@cindex Kruskal-Wallis test

@display
[ /KRUSKAL-WALLIS = @var{var_list} BY @var{var} (@var{lower}, @var{upper}) ]
@end display

The Kruskal-Wallis test is used to compare data from an
arbitrary number of populations.  It does not assume normality.
The data to be compared are specified by @var{var_list}.
The categorical variable determining the groups to which the
data belongs is given by @var{var}.  The limits @var{lower} and
@var{upper} specify the valid range of @var{var}.  Any cases for
which @var{var} falls outside [@var{lower}, @var{upper}] are
ignored.

The mean rank of each group as well as the chi-squared value and
significance of the test are printed.
The abbreviated subcommand @subcmd{K-W} may be used in place of
@subcmd{KRUSKAL-WALLIS}.
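For example, assuming an illustrative test variable @var{score} and a grouping
variable @var{group} coded 1, 2 and 3, and using the abbreviated form of the
subcommand:

@example
NPAR TESTS
     /K-W = score BY group (1, 3).
@end example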
@node MANN-WHITNEY
@subsection Mann-Whitney U Test
@vindex MANN-WHITNEY
@vindex M-W
@cindex Mann-Whitney U test
@cindex U, Mann-Whitney U

@display
[ /MANN-WHITNEY = @var{var_list} BY @var{var} (@var{group1}, @var{group2}) ]
@end display

The Mann-Whitney subcommand is used to test whether two groups of data
come from different populations.  The variables to be tested should be
specified in @var{var_list} and the grouping variable, that determines
to which group the test variables belong, in @var{var}.
@var{Var} may be either a numeric or a string variable.
@var{Group1} and @var{group2} specify the
two values of @var{var} which determine the groups of the test data.
Cases for which the @var{var} value is neither @var{group1} nor
@var{group2} are ignored.

The value of the Mann-Whitney U statistic, the Wilcoxon W, and the
significance are printed.
You may abbreviate the subcommand @subcmd{MANN-WHITNEY} to
@subcmd{M-W}.
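For example, assuming an illustrative test variable @var{height} and a grouping
variable @var{sex} coded 1 and 2:

@example
NPAR TESTS
     /MANN-WHITNEY = height BY sex (1, 2).
@end example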
@node MCNEMAR
@subsection McNemar Test
@vindex MCNEMAR
@cindex McNemar test

@display
[ /MCNEMAR @var{var_list} [ WITH @var{var_list} [ (PAIRED) ]]]
@end display

Use McNemar's test to analyse the significance of the difference between
pairs of correlated proportions.

If the @code{WITH} keyword is omitted, then tests for all
combinations of the listed variables are performed.
If the @code{WITH} keyword is given, and the @code{(PAIRED)} keyword
is also given, then the number of variables preceding @code{WITH}
must be the same as the number following it.
In this case, tests for each respective pair of variables are
performed.
If the @code{WITH} keyword is given, but the
@code{(PAIRED)} keyword is omitted, then tests for each combination
of variable preceding @code{WITH} against variable following
@code{WITH} are performed.

The data in each variable must be dichotomous.  If there are more
than two distinct values for a variable, an error occurs and the
test is not performed.
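For example, the following sketch (variable names are illustrative) tests
@var{before1} against @var{after1} and @var{before2} against @var{after2}:

@example
NPAR TESTS
     /MCNEMAR before1 before2 WITH after1 after2 (PAIRED).
@end example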
@node MEDIAN
@subsection Median Test
@vindex MEDIAN
@cindex median test

@display
[ /MEDIAN [(@var{value})] = @var{var_list} BY @var{variable} (@var{value1}, @var{value2}) ]
@end display

The median test is used to test whether independent samples come from
populations with a common median.
The median of the populations against which the samples are to be tested
may be given in parentheses immediately after the
@subcmd{/MEDIAN} subcommand.  If it is not given, the median is imputed from the
union of all the samples.

The variables of the samples to be tested should immediately follow the @samp{=} sign.  The
keyword @code{BY} must come next, and then the grouping variable.  Two values
in parentheses should follow.  If the first value is greater than the second,
then a 2 sample test is performed using these two values to determine the groups.
If however, the first value is less than the second, then a @i{k} sample test is
conducted and the group values used are all values encountered which lie in the
range [@var{value1},@var{value2}].
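For example, since 1 is less than 3 in the sketch below (variable names are
illustrative), a @i{k} sample test is performed over all group values in the
range 1 to 3:

@example
NPAR TESTS
     /MEDIAN = score BY group (1, 3).
@end example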
1692 @subsection Runs Test
1697 [ /RUNS (@{MEAN, MEDIAN, MODE, @var{value}@}) = @var{var_list} ]
1700 The @subcmd{/RUNS} subcommand tests whether a data sequence is randomly ordered.
1702 It works by examining the number of times a variable's value crosses a given threshold.
1703 The desired threshold must be specified within parentheses.
1704 It may either be specified as a number or as one of @subcmd{MEAN}, @subcmd{MEDIAN} or @subcmd{MODE}.
Following the threshold specification comes the list of variables whose values are to be
tested.

The output shows the number of runs and the asymptotic significance based on the
length of the data.
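As a sketch, a runs test of a hypothetical variable @code{score},
using its median as the threshold, could be written:

@example
NPAR TESTS
        /RUNS (MEDIAN) = score.
@end example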
1712 @subsection Sign Test
1717 [ /SIGN @var{var_list} [ WITH @var{var_list} [ (PAIRED) ]]]
The @subcmd{/SIGN} subcommand tests for differences between medians of the
variables listed.
1722 The test does not make any assumptions about the
1723 distribution of the data.
1725 If the @code{WITH} keyword is omitted, then tests for all
1726 combinations of the listed variables are performed.
1727 If the @code{WITH} keyword is given, and the @code{(PAIRED)} keyword
1728 is also given, then the number of variables preceding @code{WITH}
1729 must be the same as the number following it.
In this case, tests for each respective pair of variables are performed.
1732 If the @code{WITH} keyword is given, but the
1733 @code{(PAIRED)} keyword is omitted, then tests for each combination
1734 of variable preceding @code{WITH} against variable following
1735 @code{WITH} are performed.
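For example, a paired sign test of hypothetical variables
@code{before} and @code{after} might be requested with:

@example
NPAR TESTS
        /SIGN before WITH after (PAIRED).
@end example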
1738 @subsection Wilcoxon Matched Pairs Signed Ranks Test
1740 @cindex wilcoxon matched pairs signed ranks test
1743 [ /WILCOXON @var{var_list} [ WITH @var{var_list} [ (PAIRED) ]]]
The @subcmd{/WILCOXON} subcommand tests for differences between medians of the
variables listed.
1748 The test does not make any assumptions about the variances of the samples.
1749 It does however assume that the distribution is symmetrical.
1751 If the @subcmd{WITH} keyword is omitted, then tests for all
1752 combinations of the listed variables are performed.
1753 If the @subcmd{WITH} keyword is given, and the @subcmd{(PAIRED)} keyword
1754 is also given, then the number of variables preceding @subcmd{WITH}
1755 must be the same as the number following it.
In this case, tests for each respective pair of variables are performed.
1758 If the @subcmd{WITH} keyword is given, but the
1759 @subcmd{(PAIRED)} keyword is omitted, then tests for each combination
1760 of variable preceding @subcmd{WITH} against variable following
1761 @subcmd{WITH} are performed.
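A similar sketch for the Wilcoxon test, again using the hypothetical
variables @code{before} and @code{after}:

@example
NPAR TESTS
        /WILCOXON before WITH after (PAIRED).
@end example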
1770 /MISSING=@{ANALYSIS,LISTWISE@} @{EXCLUDE,INCLUDE@}
1771 /CRITERIA=CI(@var{confidence})
1775 TESTVAL=@var{test_value}
1776 /VARIABLES=@var{var_list}
1779 (Independent Samples mode.)
1780 GROUPS=var(@var{value1} [, @var{value2}])
1781 /VARIABLES=@var{var_list}
1784 (Paired Samples mode.)
1785 PAIRS=@var{var_list} [WITH @var{var_list} [(PAIRED)] ]
The @cmd{T-TEST} procedure outputs tables used in testing hypotheses about
means.
1792 It operates in one of three modes:
1794 @item One Sample mode.
1795 @item Independent Groups mode.
Each of these modes is described in more detail below.
1801 There are two optional subcommands which are common to all modes.
The @subcmd{CRITERIA} subcommand tells @pspp{} the confidence interval used
1804 in the tests. The default value is 0.95.
The @subcmd{MISSING} subcommand determines the handling of missing values.
1809 If @subcmd{INCLUDE} is set, then user-missing values are included in the
1810 calculations, but system-missing values are not.
If @subcmd{EXCLUDE} is set, which is the default, user-missing
values are excluded as well as system-missing values.
1815 If @subcmd{LISTWISE} is set, then the entire case is excluded from analysis
1816 whenever any variable specified in the @subcmd{/VARIABLES}, @subcmd{/PAIRS} or
1817 @subcmd{/GROUPS} subcommands contains a missing value.
1818 If @subcmd{ANALYSIS} is set, then missing values are excluded only in the analysis for
1819 which they would be needed. This is the default.
1823 * One Sample Mode:: Testing against a hypothesized mean
1824 * Independent Samples Mode:: Testing two independent groups for equal mean
1825 * Paired Samples Mode:: Testing two interdependent groups for equal mean
1828 @node One Sample Mode
1829 @subsection One Sample Mode
1831 The @subcmd{TESTVAL} subcommand invokes the One Sample mode.
This mode is used to test a population mean against a hypothesized value.
1834 The value given to the @subcmd{TESTVAL} subcommand is the value against
1835 which you wish to test.
1836 In this mode, you must also use the @subcmd{/VARIABLES} subcommand to
1837 tell @pspp{} which variables you wish to test.
1839 @subsubsection Example - One Sample T-test
1841 A researcher wishes to know whether the weight of persons in a population
1842 is different from the national average.
1843 The samples are drawn from the population under investigation and recorded
1844 in the file @file{physiology.sav}.
1845 From the Department of Health, she
1846 knows that the national average weight of healthy adults is 76.8kg.
1847 Accordingly the @subcmd{TESTVAL} is set to 76.8.
1848 The null hypothesis therefore is that the mean average weight of the
1849 population from which the sample was drawn is 76.8kg.
1851 As previously noted (@pxref{Identifying incorrect data}), one
1852 sample in the dataset contains a weight value
1853 which is clearly incorrect. So this is excluded from the analysis
1854 using the @cmd{SELECT} command.
1856 @float Example, one-sample-t:ex
1857 @psppsyntax {one-sample-t.sps}
1858 @caption {Running a one sample T-Test after excluding all non-positive values}
1861 @float Screenshot, one-sample-t:scr
1862 @psppimage {one-sample-t}
1863 @caption {Using the One Sample T-Test dialog box to test @exvar{weight} for a mean of 76.8kg}
@ref{one-sample-t:res} shows that the mean of our sample differs from the test value
by -1.40kg. However the significance is very high (0.610), so one cannot
reject the null hypothesis, and must conclude there is not enough evidence
to suggest that the mean weight of the persons in our population is different
from 76.8kg.
1873 @float Results, one-sample-t:res
1874 @psppoutput {one-sample-t}
1875 @caption {The results of a one sample T-test of @exvar{weight} using a test value of 76.8kg}
1878 @node Independent Samples Mode
1879 @subsection Independent Samples Mode
The @subcmd{GROUPS} subcommand invokes Independent Samples mode or
``Groups'' mode.
1883 This mode is used to test whether two groups of values have the
1884 same population mean.
1885 In this mode, you must also use the @subcmd{/VARIABLES} subcommand to
1886 tell @pspp{} the dependent variables you wish to test.
1888 The variable given in the @subcmd{GROUPS} subcommand is the independent
1889 variable which determines to which group the samples belong.
1890 The values in parentheses are the specific values of the independent
1891 variable for each group.
1892 If the parentheses are omitted and no values are given, the default values
1893 of 1.0 and 2.0 are assumed.
1895 If the independent variable is numeric,
1896 it is acceptable to specify only one value inside the parentheses.
1897 If you do this, cases where the independent variable is
1898 greater than or equal to this value belong to the first group, and cases
1899 less than this value belong to the second group.
1900 When using this form of the @subcmd{GROUPS} subcommand, missing values in
1901 the independent variable are excluded on a listwise basis, regardless
1902 of whether @subcmd{/MISSING=LISTWISE} was specified.
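A sketch of this single-value form, using hypothetical variables
@code{score} and @code{age}, where cases with @code{age} of 18 or more
form the first group and the remainder form the second:

@example
T-TEST /GROUPS=age(18)
        /VARIABLES=score.
@end example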
1904 @subsubsection Example - Independent Samples T-test
1906 A researcher wishes to know whether within a population, adult males
1907 are taller than adult females.
1908 The samples are drawn from the population under investigation and recorded
1909 in the file @file{physiology.sav}.
1911 As previously noted (@pxref{Identifying incorrect data}), one
1912 sample in the dataset contains a height value
1913 which is clearly incorrect. So this is excluded from the analysis
1914 using the @cmd{SELECT} command.
@float Example, independent-samples-t:ex
1918 @psppsyntax {independent-samples-t.sps}
@caption {Running an independent samples T-Test after excluding all observations less than 200kg}
The null hypothesis is that both males and females are on average
of equal height.
1926 @float Screenshot, independent-samples-t:scr
1927 @psppimage {independent-samples-t}
1928 @caption {Using the Independent Sample T-test dialog, to test for differences of @exvar{height} between values of @exvar{sex}}
1932 In this case, the grouping variable is @exvar{sex}, so this is entered
as the variable for the @subcmd{GROUPS} subcommand. The group values are 0 (male) and
1 (female).
If you are running the procedure using syntax, then you need to enter
1937 the values corresponding to each group within parentheses.
If you are using the graphical user interface, then you must open
1939 the ``Define Groups'' dialog box and enter the values corresponding
1940 to each group as shown in @ref{define-groups-t:scr}. If, as in this case, the dataset has defined value
labels for the group variable, then you can enter them by label
instead of by value.
1944 @float Screenshot, define-groups-t:scr
1945 @psppimage {define-groups-t}
1946 @caption {Setting the values of the grouping variable for an Independent Samples T-test}
1949 From @ref{independent-samples-t:res}, one can clearly see that the @emph{sample} mean height
1950 is greater for males than for females. However in order to see if this
1951 is a significant result, one must consult the T-Test table.
1953 The T-Test table contains two rows; one for use if the variance of the samples
1954 in each group may be safely assumed to be equal, and the second row
1955 if the variances in each group may not be safely assumed to be equal.
1957 In this case however, both rows show a 2-tailed significance less than 0.001 and
one must therefore reject the null hypothesis and conclude that within
the population the mean heights of males and females are unequal.
1961 @float Result, independent-samples-t:res
1962 @psppoutput {independent-samples-t}
1963 @caption {The results of an independent samples T-test of @exvar{height} by @exvar{sex}}
1966 @node Paired Samples Mode
1967 @subsection Paired Samples Mode
The @subcmd{PAIRS} subcommand introduces Paired Samples mode.
Use this mode when repeated measures have been taken from the same
samples.
1972 If the @subcmd{WITH} keyword is omitted, then tables for all
combinations of variables given in the @subcmd{PAIRS} subcommand are
generated.
1975 If the @subcmd{WITH} keyword is given, and the @subcmd{(PAIRED)} keyword
1976 is also given, then the number of variables preceding @subcmd{WITH}
1977 must be the same as the number following it.
In this case, tables for each respective pair of variables are
generated.
1980 In the event that the @subcmd{WITH} keyword is given, but the
1981 @subcmd{(PAIRED)} keyword is omitted, then tables for each combination
1982 of variable preceding @subcmd{WITH} against variable following
1983 @subcmd{WITH} are generated.
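For instance, a paired test of hypothetical variables @code{before}
and @code{after} could be written:

@example
T-TEST /PAIRS = before WITH after (PAIRED).
@end example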
1990 @cindex analysis of variance
1995 [/VARIABLES = ] @var{var_list} BY @var{var}
1996 /MISSING=@{ANALYSIS,LISTWISE@} @{EXCLUDE,INCLUDE@}
1997 /CONTRAST= @var{value1} [, @var{value2}] ... [,@var{valueN}]
1998 /STATISTICS=@{DESCRIPTIVES,HOMOGENEITY@}
1999 /POSTHOC=@{BONFERRONI, GH, LSD, SCHEFFE, SIDAK, TUKEY, ALPHA ([@var{value}])@}
2002 The @cmd{ONEWAY} procedure performs a one-way analysis of variance of
2003 variables factored by a single independent variable.
2004 It is used to compare the means of a population
2005 divided into more than two groups.
The dependent variables to be analysed should be given in the @subcmd{VARIABLES}
subcommand.
2009 The list of variables must be followed by the @subcmd{BY} keyword and
2010 the name of the independent (or factor) variable.
2012 You can use the @subcmd{STATISTICS} subcommand to tell @pspp{} to display
2013 ancillary information. The options accepted are:
Displays descriptive statistics about the groups factored by the independent
variable.
2019 Displays the Levene test of Homogeneity of Variance for the
2020 variables and their groups.
2023 The @subcmd{CONTRAST} subcommand is used when you anticipate certain
2024 differences between the groups.
2025 The subcommand must be followed by a list of numerals which are the
2026 coefficients of the groups to be tested.
2027 The number of coefficients must correspond to the number of distinct
2028 groups (or values of the independent variable).
If the total sum of the coefficients is not zero, then @pspp{} will
2030 display a warning, but will proceed with the analysis.
2031 The @subcmd{CONTRAST} subcommand may be given up to 10 times in order
2032 to specify different contrast tests.
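As an illustration, for an independent variable with three distinct
groups, two contrast tests (each with coefficients summing to zero)
might be specified as follows; the variable names @code{score} and
@code{group} are hypothetical:

@example
ONEWAY score BY group
        /CONTRAST = -1, 1, 0
        /CONTRAST = -2, 1, 1.
@end example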
2033 The @subcmd{MISSING} subcommand defines how missing values are handled.
2034 If @subcmd{LISTWISE} is specified then cases which have missing values for
2035 the independent variable or any dependent variable are ignored.
2036 If @subcmd{ANALYSIS} is specified, then cases are ignored if the independent
2037 variable is missing or if the dependent variable currently being
2038 analysed is missing. The default is @subcmd{ANALYSIS}.
2039 A setting of @subcmd{EXCLUDE} means that variables whose values are
2040 user-missing are to be excluded from the analysis. A setting of
2041 @subcmd{INCLUDE} means they are to be included. The default is @subcmd{EXCLUDE}.
Using the @subcmd{POSTHOC} subcommand you can perform multiple
pairwise comparisons on the data. The following comparison methods
are available:
2048 Least Significant Difference.
2049 @item @subcmd{TUKEY}
2050 Tukey Honestly Significant Difference.
@item @subcmd{BONFERRONI}
Bonferroni test.
@item @subcmd{SCHEFFE}
Scheff@'e test.
@item @subcmd{SIDAK}
Sidak test.
@item @subcmd{GH}
The Games-Howell test.
Use the optional syntax @code{ALPHA(@var{value})} to indicate that
@cmd{ONEWAY} should perform the posthoc tests at a significance level of
@var{value}. If @code{ALPHA(@var{value})} is not specified, then the
significance level used is 0.05.
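For example, a Tukey HSD post-hoc test at a significance level of
0.01 might be requested as follows (variable names hypothetical):

@example
ONEWAY score BY group
        /POSTHOC = TUKEY ALPHA (0.01).
@end example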
2068 @section QUICK CLUSTER
2069 @vindex QUICK CLUSTER
2071 @cindex K-means clustering
2075 QUICK CLUSTER @var{var_list}
2076 [/CRITERIA=CLUSTERS(@var{k}) [MXITER(@var{max_iter})] CONVERGE(@var{epsilon}) [NOINITIAL]]
2077 [/MISSING=@{EXCLUDE,INCLUDE@} @{LISTWISE, PAIRWISE@}]
2078 [/PRINT=@{INITIAL@} @{CLUSTER@}]
2079 [/SAVE[=[CLUSTER[(@var{membership_var})]] [DISTANCE[(@var{distance_var})]]]
2082 The @cmd{QUICK CLUSTER} command performs k-means clustering on the
2083 dataset. This is useful when you wish to allocate cases into clusters
2084 of similar values and you already know the number of clusters.
2086 The minimum specification is @samp{QUICK CLUSTER} followed by the names
2087 of the variables which contain the cluster data. Normally you will also
2088 want to specify @subcmd{/CRITERIA=CLUSTERS(@var{k})} where @var{k} is the
2089 number of clusters. If this is not specified, then @var{k} defaults to 2.
2091 If you use @subcmd{/CRITERIA=NOINITIAL} then a naive algorithm to select
2092 the initial clusters is used. This will provide for faster execution but
2093 less well separated initial clusters and hence possibly an inferior final
@cmd{QUICK CLUSTER} uses an iterative algorithm to select the cluster centers.
2098 The subcommand @subcmd{/CRITERIA=MXITER(@var{max_iter})} sets the maximum number of iterations.
During classification, @pspp{} will continue iterating until @var{max_iter}
2100 iterations have been done or the convergence criterion (see below) is fulfilled.
2101 The default value of @var{max_iter} is 2.
If, however, you specify @subcmd{/CRITERIA=NOUPDATE} then after selecting the initial centers,
no further update to the cluster centers is done. In this case, @var{max_iter},
if specified, is ignored.
2107 The subcommand @subcmd{/CRITERIA=CONVERGE(@var{epsilon})} is used
2108 to set the convergence criterion. The value of convergence criterion is @var{epsilon}
2109 times the minimum distance between the @emph{initial} cluster centers. Iteration stops when
2110 the mean cluster distance between one iteration and the next
2111 is less than the convergence criterion. The default value of @var{epsilon} is zero.
The @subcmd{MISSING} subcommand determines the handling of missing values.
2114 If @subcmd{INCLUDE} is set, then user-missing values are considered at their face
2115 value and not as missing values.
2116 If @subcmd{EXCLUDE} is set, which is the default, user-missing
2117 values are excluded as well as system-missing values.
2119 If @subcmd{LISTWISE} is set, then the entire case is excluded from the analysis
2120 whenever any of the clustering variables contains a missing value.
2121 If @subcmd{PAIRWISE} is set, then a case is considered missing only if all the
2122 clustering variables contain missing values. Otherwise it is clustered
2123 on the basis of the non-missing values.
2124 The default is @subcmd{LISTWISE}.
2126 The @subcmd{PRINT} subcommand requests additional output to be printed.
If @subcmd{INITIAL} is set, then the initial cluster memberships will
be printed.
2129 If @subcmd{CLUSTER} is set, the cluster memberships of the individual
2130 cases are displayed (potentially generating lengthy output).
2132 You can specify the subcommand @subcmd{SAVE} to ask that each case's cluster membership
and the Euclidean distance between the case and its cluster center be saved to
2134 a new variable in the active dataset. To save the cluster membership use the
2135 @subcmd{CLUSTER} keyword and to save the distance use the @subcmd{DISTANCE} keyword.
2136 Each keyword may optionally be followed by a variable name in parentheses to specify
2137 the new variable which is to contain the saved parameter. If no variable name is specified,
then @pspp{} will create one.
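Putting these subcommands together, a sketch which clusters two
hypothetical variables @code{x} and @code{y} into three clusters and
saves each case's membership in a new variable @code{grp}:

@example
QUICK CLUSTER x y
        /CRITERIA=CLUSTERS(3) MXITER(20)
        /SAVE=CLUSTER(grp).
@end example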
2146 [VARIABLES=] @var{var_list} [@{A,D@}] [BY @var{var_list}]
2147 /TIES=@{MEAN,LOW,HIGH,CONDENSE@}
2148 /FRACTION=@{BLOM,TUKEY,VW,RANKIT@}
2150 /MISSING=@{EXCLUDE,INCLUDE@}
2152 /RANK [INTO @var{var_list}]
/NTILES(@var{k}) [INTO @var{var_list}]
2154 /NORMAL [INTO @var{var_list}]
2155 /PERCENT [INTO @var{var_list}]
2156 /RFRACTION [INTO @var{var_list}]
2157 /PROPORTION [INTO @var{var_list}]
2158 /N [INTO @var{var_list}]
2159 /SAVAGE [INTO @var{var_list}]
2162 The @cmd{RANK} command ranks variables and stores the results into new
2165 The @subcmd{VARIABLES} subcommand, which is mandatory, specifies one or
2166 more variables whose values are to be ranked.
2167 After each variable, @samp{A} or @samp{D} may appear, indicating that
2168 the variable is to be ranked in ascending or descending order.
2169 Ascending is the default.
2170 If a @subcmd{BY} keyword appears, it should be followed by a list of variables
2171 which are to serve as group variables.
In this case, the cases are gathered into groups, and ranks are calculated
within each group.
2175 The @subcmd{TIES} subcommand specifies how tied values are to be treated. The
2176 default is to take the mean value of all the tied cases.
2178 The @subcmd{FRACTION} subcommand specifies how proportional ranks are to be
2179 calculated. This only has any effect if @subcmd{NORMAL} or @subcmd{PROPORTIONAL} rank
2180 functions are requested.
2182 The @subcmd{PRINT} subcommand may be used to specify that a summary of the rank
2183 variables created should appear in the output.
2185 The function subcommands are @subcmd{RANK}, @subcmd{NTILES}, @subcmd{NORMAL}, @subcmd{PERCENT}, @subcmd{RFRACTION},
2186 @subcmd{PROPORTION} and @subcmd{SAVAGE}. Any number of function subcommands may appear.
If none are given, then the default is @subcmd{RANK}.
2188 The @subcmd{NTILES} subcommand must take an integer specifying the number of
2189 partitions into which values should be ranked.
2190 Each subcommand may be followed by the @subcmd{INTO} keyword and a list of
2191 variables which are the variables to be created and receive the rank
2192 scores. There may be as many variables specified as there are
2193 variables named on the @subcmd{VARIABLES} subcommand. If fewer are specified,
2194 then the variable names are automatically created.
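For example, to rank a hypothetical variable @code{score} and also
assign quartiles into a named variable:

@example
RANK score
        /RANK
        /NTILES(4) INTO score_quart.
@end example

Here @code{score_quart} is a hypothetical name for the created
variable; if the @subcmd{INTO} clause were omitted, a name would be
generated automatically.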
2196 The @subcmd{MISSING} subcommand determines how user missing values are to be
2197 treated. A setting of @subcmd{EXCLUDE} means that variables whose values are
2198 user-missing are to be excluded from the rank scores. A setting of
2199 @subcmd{INCLUDE} means they are to be included. The default is @subcmd{EXCLUDE}.
2201 @include regression.texi
2205 @section RELIABILITY
2210 /VARIABLES=@var{var_list}
2211 /SCALE (@var{name}) = @{@var{var_list}, ALL@}
2212 /MODEL=@{ALPHA, SPLIT[(@var{n})]@}
2213 /SUMMARY=@{TOTAL,ALL@}
2214 /MISSING=@{EXCLUDE,INCLUDE@}
2217 @cindex Cronbach's Alpha
2218 The @cmd{RELIABILITY} command performs reliability analysis on the data.
2220 The @subcmd{VARIABLES} subcommand is required. It determines the set of variables
2221 upon which analysis is to be performed.
2223 The @subcmd{SCALE} subcommand determines the variables for which
reliability is to be calculated. If @subcmd{SCALE} is omitted, then analysis
is performed on all variables named in the @subcmd{VARIABLES} subcommand.
Optionally, the @var{name} parameter may be specified to set a string name
for the scale.
2229 The @subcmd{MODEL} subcommand determines the type of analysis. If @subcmd{ALPHA} is specified,
2230 then Cronbach's Alpha is calculated for the scale. If the model is @subcmd{SPLIT},
2231 then the variables are divided into 2 subsets. An optional parameter
@var{n} may be given to specify how many variables should be in the first subset.
2233 If @var{n} is omitted, then it defaults to one half of the variables in the
2234 scale, or one half minus one if there are an odd number of variables.
2235 The default model is @subcmd{ALPHA}.
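For instance, a split-half analysis placing two of five hypothetical
variables in the first subset could be sketched as:

@example
RELIABILITY
        /VARIABLES=v1 v2 v3 v4 v5
        /MODEL=SPLIT(2).
@end example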
2237 By default, any cases with user missing, or system missing values for
2238 any variables given in the @subcmd{VARIABLES} subcommand are omitted
2239 from the analysis. The @subcmd{MISSING} subcommand determines whether
2240 user missing values are included or excluded in the analysis.
2242 The @subcmd{SUMMARY} subcommand determines the type of summary analysis to be performed.
2243 Currently there is only one type: @subcmd{SUMMARY=TOTAL}, which displays per-item
2244 analysis tested against the totals.
2246 @subsection Example - Reliability
2248 Before analysing the results of a survey -- particularly for a multiple choice survey --
it is desirable to know whether the respondents have considered their answers
2250 or simply provided random answers.
2252 In the following example the survey results from the file @file{hotel.sav} are used.
2253 All five survey questions are included in the reliability analysis.
2254 However, before running the analysis, the data must be preprocessed.
An examination of the survey questions reveals that two questions, @i{viz:} v3 and v5,
are negatively worded, whereas the others are positively worded.
2257 All questions must be based upon the same scale for the analysis to be meaningful.
One could use the @cmd{RECODE} command (@pxref{RECODE}); however, a simpler way is
to use @cmd{COMPUTE} (@pxref{COMPUTE}), and this is what is done in @ref{reliability:ex}.
2261 @float Example, reliability:ex
2262 @psppsyntax {reliability.sps}
2263 @caption {Investigating the reliability of survey responses}
2266 In this case, all variables in the data set are used. So we can use the special
2267 keyword @samp{ALL} (@pxref{BNF}).
@float Screenshot, reliability:scr
2270 @psppimage {reliability}
2271 @caption {Reliability dialog box with all variables selected}
@ref{reliability:res} shows that Cronbach's Alpha is 0.11, a value normally considered too
low to indicate consistency within the data. This is possibly due to the small number of
survey questions. The survey should be redesigned before serious use of the results
is attempted.
2279 @float Result, reliability:res
2280 @psppoutput {reliability}
2281 @caption {The results of the reliability command on @file{hotel.sav}}
2289 @cindex Receiver Operating Characteristic
2290 @cindex Area under curve
2293 ROC @var{var_list} BY @var{state_var} (@var{state_value})
2294 /PLOT = @{ CURVE [(REFERENCE)], NONE @}
2295 /PRINT = [ SE ] [ COORDINATES ]
2296 /CRITERIA = [ CUTOFF(@{INCLUDE,EXCLUDE@}) ]
2297 [ TESTPOS (@{LARGE,SMALL@}) ]
2298 [ CI (@var{confidence}) ]
2299 [ DISTRIBUTION (@{FREE, NEGEXPO @}) ]
2300 /MISSING=@{EXCLUDE,INCLUDE@}
2304 The @cmd{ROC} command is used to plot the receiver operating characteristic curve
2305 of a dataset, and to estimate the area under the curve.
2306 This is useful for analysing the efficacy of a variable as a predictor of a state of nature.
2308 The mandatory @var{var_list} is the list of predictor variables.
2309 The variable @var{state_var} is the variable whose values represent the actual states,
2310 and @var{state_value} is the value of this variable which represents the positive state.
2312 The optional subcommand @subcmd{PLOT} is used to determine if and how the @subcmd{ROC} curve is drawn.
2313 The keyword @subcmd{CURVE} means that the @subcmd{ROC} curve should be drawn, and the optional keyword @subcmd{REFERENCE},
2314 which should be enclosed in parentheses, says that the diagonal reference line should be drawn.
2315 If the keyword @subcmd{NONE} is given, then no @subcmd{ROC} curve is drawn.
2316 By default, the curve is drawn with no reference line.
2318 The optional subcommand @subcmd{PRINT} determines which additional
2319 tables should be printed. Two additional tables are available. The
2320 @subcmd{SE} keyword says that standard error of the area under the
2321 curve should be printed as well as the area itself. In addition, a
2322 p-value for the null hypothesis that the area under the curve equals
2323 0.5 is printed. The @subcmd{COORDINATES} keyword says that a
2324 table of coordinates of the @subcmd{ROC} curve should be printed.
2326 The @subcmd{CRITERIA} subcommand has four optional parameters:
2328 @item The @subcmd{TESTPOS} parameter may be @subcmd{LARGE} or @subcmd{SMALL}.
2329 @subcmd{LARGE} is the default, and says that larger values in the predictor variables are to be
2330 considered positive. @subcmd{SMALL} indicates that smaller values should be considered positive.
2332 @item The @subcmd{CI} parameter specifies the confidence interval that should be printed.
2333 It has no effect if the @subcmd{SE} keyword in the @subcmd{PRINT} subcommand has not been given.
2335 @item The @subcmd{DISTRIBUTION} parameter determines the method to be used when estimating the area
2337 There are two possibilities, @i{viz}: @subcmd{FREE} and @subcmd{NEGEXPO}.
2338 The @subcmd{FREE} method uses a non-parametric estimate, and the @subcmd{NEGEXPO} method a bi-negative
2339 exponential distribution estimate.
2340 The @subcmd{NEGEXPO} method should only be used when the number of positive actual states is
2341 equal to the number of negative actual states.
2342 The default is @subcmd{FREE}.
2344 @item The @subcmd{CUTOFF} parameter is for compatibility and is ignored.
2347 The @subcmd{MISSING} subcommand determines whether user missing values are to
be included or excluded in the analysis. The default behaviour is to
exclude them.
2350 Cases are excluded on a listwise basis; if any of the variables in @var{var_list}
or if the variable @var{state_var} is missing, then the entire case is
excluded.
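Putting the subcommands together, a sketch using hypothetical
variables @code{test_score} (the predictor) and @code{disease} (the
state variable, with 1 indicating the positive state):

@example
ROC test_score BY disease (1)
        /PLOT = CURVE(REFERENCE)
        /PRINT = SE COORDINATES.
@end example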
2354 @c LocalWords: subcmd subcommand