Time-stamp: <2006-12-17 18:45:35 blp>
Get rid of need for GNU diff in `make check'.
CROSSTABS needs to be re-examined.
Scratch variables should not be available for use following TEMPORARY.
Check our results against the NIST StRD benchmark results at
strd.itl.nist.gov/div898/strd
Storage of value labels on disk is inefficient.  Invent a new data structure.
Fix spanned joint cells, i.e., EDLEVEL on crosstabs.stat.
SELECT IF should be moved before other transformations whenever possible.  It
should only be impossible when one of the variables referred to in SELECT IF is
created or modified by a previous transformation.
Figure out a stylesheet for messages displayed by PSPP: e.g., what quotation
marks to use around file names, etc.
From Zvi Grauer <z.grauer@csuohio.edu> and <zvi@mail.ohio.net>:
1. design of experiments software, specifically factorial, response surface
methodology and mixture design.
These would be EXTREMELY USEFUL for chemists, engineers, and anyone
involved in the production of chemicals or formulations.
2. Multidimensional Scaling analysis (for market analysis)
3. Preference mapping software for market analysis
4. Hierarchical clustering (as well as partition clustering)
6. Categorical data analysis?
Sometimes very wide (or very tall) columns can occur in tables.  What is a good
way to truncate them?  It doesn't seem to cause problems for the ascii or
postscript drivers, but it's not good in the general case.  Should they be
split somehow?  (One way that wide columns can occur is through user request,
for instance through a wide PRINT request--try time-date.stat with a narrow
ascii page or with the postscript driver on letter size paper.)
From Moshe Braner <mbraner@nessie.vdh.state.vt.us>: An idea regarding MATCH
FILES, again getting BEYOND the state of SPSS: it always bothered me that if I
have a large data file and I want to match it to a small lookup table, via
MATCH FILES FILE= /TABLE= /BY key, I need to SORT the large file on key, do the
match, then (usually) re-sort back into the order I really want it.  There is
no reason to do this, when the lookup table is small.  Even a dumb sequential
search through the table, for every case in the big file, is better, in some
cases, than the sort.  So here's my idea: first look at the /TABLE file, if it
is "small enough", read it into memory, and create an index (or hash table,
whatever) for it.  Then read the /FILE and use the index to match to each case.
OTOH, if the /TABLE is too large, then do it the old way, complaining if either
file is not sorted on key.