+++ /dev/null
-Time-stamp: <2006-12-17 18:45:35 blp>
-
-Get rid of need for GNU diff in `make check'.
-
-CROSSTABS needs to be re-examined.
-
-Scratch variables should not be available for use following TEMPORARY.
-
-Check our results against the NIST StRD benchmark results at
-strd.itl.nist.gov/div898/strd
-
-Storage of value labels on disk is inefficient. Invent new data structure.
-
-Fix spanned joint cells, i.e., EDLEVEL on crosstabs.stat.
-
-SELECT IF should be moved before other transformations whenever possible. It
-should only be impossible when one of the variables referred to in SELECT IF is
-created or modified by a previous transformation.
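The hoisting rule above can be sketched as follows. This is an illustrative model only, not PSPP's transformation representation: each earlier transformation is reduced to the set of variables it creates or modifies, and the SELECT IF may move ahead of any transformation whose set is disjoint from the variables its condition references.

```python
def earliest_position(transformations, select_if_vars):
    """Earliest index at which the SELECT IF may be placed.

    `transformations` is a list of sets, in program order: the variables
    each transformation creates or modifies.  `select_if_vars` is the set
    of variables the SELECT IF condition refers to.
    """
    pos = 0
    for i, touched in enumerate(transformations):
        if touched & select_if_vars:
            # The SELECT IF depends on this transformation's output,
            # so it must stay after it.
            pos = i + 1
    return pos

# Example: a RECODE touching y, then a COMPUTE touching x.  A SELECT IF
# referring to y can be hoisted before the COMPUTE but not the RECODE.
print(earliest_position([{"y"}, {"x"}], {"y"}))
```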
-
-Figure out a stylesheet for messages displayed by PSPP: e.g., what quotation
-marks to use around filenames, etc.
-
-From Zvi Grauer <z.grauer@csuohio.edu> and <zvi@mail.ohio.net>:
-
- 1. design of experiments software, specifically Factorial, response surface
- methodology and mixture design.
-
- These would be EXTREMELY USEFUL for chemists, engineers, and anyone
- involved in the production of chemicals or formulations.
-
- 2. Multidimensional Scaling analysis (for market analysis)
-
- 3. Preference mapping software for market analysis
-
- 4. Hierarchical clustering (as well as partition clustering)
-
- 5. Conjoint analysis
-
- 6. Categorical data analysis?
-
-Sometimes very wide (or very tall) columns can occur in tables. What is a good
-way to truncate them? It doesn't seem to cause problems for the ascii or
-postscript drivers, but it's not good in the general case. Should they be
-split somehow? (One way that wide columns can occur is through user request,
-for instance through a wide PRINT request--try time-date.stat with a narrow
-ascii page or with the postscript driver on letter size paper.)
-
-From Moshe Braner <mbraner@nessie.vdh.state.vt.us>: An idea regarding MATCH
-FILES, again getting BEYOND the state of SPSS: it always bothered me that if I
-have a large data file and I want to match it to a small lookup table, via
-MATCH FILES FILE= /TABLE= /BY key, I need to SORT the large file on key, do the
-match, then (usually) re-sort back into the order I really want it. There is
-no reason to do this when the lookup table is small. Even a dumb sequential
-search through the table, for every case in the big file, is better, in some
-cases, than the sort. So here's my idea: first look at the /TABLE file, if it
-is "small enough", read it into memory, and create an index (or hash table,
-whatever) for it. Then read the /FILE and use the index to match to each case.
-OTOH, if the /TABLE is too large, then do it the old way, complaining if either
-file is not sorted on key.
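The in-memory lookup described above is essentially a hash join. A minimal sketch, assuming dict-per-case data and a single BY variable (names are illustrative, not PSPP internals): index the /TABLE file by key, then stream the /FILE cases in their original order, so no sort or re-sort is needed.

```python
def match_files(big_file_cases, table_cases, key):
    """Yield each big-file case merged with its /TABLE match.

    Each case is a dict; `key` names the BY variable.  Unmatched cases
    pass through unchanged, mirroring a failed table lookup.
    """
    # Build the in-memory index over the small /TABLE file.
    index = {row[key]: row for row in table_cases}
    for case in big_file_cases:
        merged = dict(index.get(case[key], {}))
        merged.update(case)  # big-file values win on any collision
        yield merged

# The big file stays in its original order; only the table is indexed.
table = [{"k": 1, "label": "one"}, {"k": 2, "label": "two"}]
big = [{"k": 2, "v": 10}, {"k": 1, "v": 20}, {"k": 3, "v": 30}]
for case in match_files(big, table, "k"):
    print(case)
```

The fallback for a /TABLE too large for memory would be the old sorted-merge path, as the note says.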
-
-Local Variables:
-mode: text
-fill-column: 79
-End: