The rewards of patience

Editor’s note: Steven Struhl is senior vice president, senior methodologist at Harris Interactive, Chicago.

They say good things come to those who wait. Well, SPSS must have had that old saw in mind when readying Version 11.5 of its flagship software, because we now have an update that was worth waiting for.

The program includes some excellent features that have been notably absent throughout SPSS’ fast-paced schedule of releasing both major program revisions and new products. This article will discuss these enhancements, as well as some features that have remained the same. We also will cover a remarkable integrated data analytic application called SegmentSolve that arrives as a mature and feature-laden application from an experienced Chicago-area software developer. Now that we have all of you trembling with anticipation, let’s proceed to the reviews.

A wealth of valuable new features in SPSS Version 11.5

We will leap directly into what’s new in the program. (Those of you who want a review of some SPSS basics might wish to skip ahead to the next major section before reading this.) SPSS has boosted the basic capabilities of its software package, brought an entire procedure into the realm of usefulness, and added a new dimension to a familiar set of routines. Beyond this, it has again appended a list of individual new features (most likely of interest to specialists) that would fill several pages. This is quite remarkable for a product given only a “fractional” (actually decimal) increment to 11.5. We can only wonder what they might have in store when they again reach whole numbers, with Version 12.

SPSS output now entirely readable by other programs

In all earlier SPSS versions, the program’s output could be read only by SPSS itself or its companion Smart Viewer. Now you can export everything in an analytical session so that it can be read and used by either Microsoft Excel or Microsoft Word - even if not exactly in one step. All the tables in the output will go out either in “rtf” format - which Word opens intact - or in the “xls” format used by Excel. All formatting is retained, with practically no hitches. (Only on some occasions with a highly complex table did some cells that were merged originally need to be re-merged by hand after the transition to Excel.)

Here is the small catch in this. Any graphs or other non-table objects (including, for instance, the character-based territorial maps produced by discriminant analysis) need to be exported separately into another format. Which format you choose matters. Some of the export options, like extended metafiles (“emf” format), can go into another program and be edited there, element by element. PowerPoint, for instance, does particularly well in allowing you to customize any part of a chart after “ungrouping” it (an option offered with a click of the right mouse button). Some chart export formats, though, like JPEG and “bmp,” remain collections of dots, and can be touched up only by using a photo-editing program.

Graphs created in SPSS cannot be manipulated as “live” objects in other programs. That is, basic properties like the scale used on a chart axis remain as they were in SPSS. Unfortunately, SPSS still gives less control over many charting options than does a program such as Excel. For instance, your reviewer remains frustrated in his efforts to change the starting and stopping values on the axis of an SPSS graph.

To get complex charts to appear very much as you would like them, you will find the SPSS companion product, DeltaGraph, a much better choice. (We reviewed DeltaGraph in an article here last year.) Oddly enough, DeltaGraph has been engineered to work inside Microsoft Office programs, like Word and Excel - you can call it up without leaving these programs and create charts with all of DeltaGraph’s features - but it does not work inside SPSS itself.

Even with these limitations, the new export capabilities are a most welcome addition to SPSS. Now all parts of an analysis can go into files that the ubiquitous Office programs can use. This is a far wiser and more useful strategy on the part of SPSS than their former approach - which seemed to include the implicit assumption that anybody wanting to review all the output from an analysis also would want to buy either SPSS or the Smart Viewer program as well. This move by SPSS to more interconnection with other programs marks an important step toward true integration of analytical results with other documents.

TwoStep Clustering handles more variable types

SPSS has included a major new capability in its clustering routines. With the new TwoStep Cluster, you can now include categorical data such as job titles or regions (or yes/no responses) along with the usual scalar or continuous data that you have always used in clustering.

TwoStep Clustering is an entirely new application, and this shows in positive ways. It communicates more fully about the solution than any other SPSS clustering procedure, giving an estimate of how important each variable is in the clustering solution, and providing charts that help show which variables provide the strongest differentiation among the groups. These charts (Figure 1) could serve as impressive additions to a report. The procedure also includes some advanced options for handling outliers or unusual cases - which can spoil clustering done with a more traditional approach. In addition, you can set the program to locate what it deems the optimal number of clusters - although here, information on how the program defines what is best remains somewhat sketchy (as we will discuss below).

Figure 1

The program’s interface is clear and straightforward. It is easy to get the basics of this program working. Following the guidelines provided should lead to good solutions.
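For the syntax-curious, the pasted command for a TwoStep run looks something like the sketch below. The variable names are invented for illustration, and the finer subcommands are best checked against the dialog box or the syntax guide rather than taken on faith here.

* Hedged sketch of a TwoStep run; variable names are hypothetical.
TWOSTEP CLUSTER
  /CATEGORICAL VARIABLES=region jobtitle owns_product
  /CONTINUOUS VARIABLES=satisfaction usage_freq spend
  /DISTANCE LIKELIHOOD
  /NUMCLUSTERS AUTO 10
  /HANDLENOISE 0
  /PRINT COUNT SUMMARY.

Here the program would mix three categorical and three continuous variables, hunt for the best solution with up to 10 clusters, and, with HANDLENOISE set to zero, leave the outlier handling switched off.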

However, this new application underscores the weakness that this reviewer perceives in the SPSS help system. TwoStep Clustering involves several concepts that will be new to most users. The SPSS help system does nearly nothing to explain these and provides no pointers to references that might do so. Comparing SPSS with the much less expensive NCSS, which provides both extensive tutorials and lists of references, we can see that SPSS has ample room for improvement in this area. Now that manuals no longer come with SPSS (this alone is worth a whole section of the review later), this lack is particularly salient.

Nonetheless, quibbles about the help system aside, the many new features that display and help interpret results in TwoStep Cluster represent a major improvement over all other clustering methods in SPSS. Your reviewer hopes that this new program foreshadows enhancements that might appear in these related procedures.

Tables come to the land of the living

Tables in SPSS now provide a useful addition to the analytical and presentation-related capabilities of the program. You now can create tables in real time, seeing how multiple column and row definitions work together before you give the OK to the final version. This uses a true graphical user interface (or GUI, pronounced “gooey” - in the infamous tradition of computer acronyms that has given us SCSI or “scuzzy”). You push and pull variables to form columns and rows, and you can run several variables across the page like a banner, or several variables in the rows, as you could in a large-scale tabulation program. Once you have your GUI output looking the way you like, you can paste the SPSS commands corresponding to its creation into the SPSS syntax window, and recycle the table format by substituting other variable names. This feature can save considerable time, especially when working with more complex tables.

The new tables module has several other useful features. You can display or exclude categories with no counts (or responses) - leading to clearer output. The module even allows statistical testing, with some adjustment for drawing comparisons among three or more columns. Even though this is a relatively simple Bonferroni correction (which can be overly demanding in declaring differences significant), it is far better than doing nothing, and about as good as anything produced in large, commercial tabulation packages.
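To give a taste of what the pasted table syntax looks like, here is a hedged sketch of a simple banner-style table with empty categories suppressed and column proportion tests requested. The variable names are invented, and the exact spellings of the summary statistics and subcommands are best confirmed by pasting from the table builder itself.

* Hedged sketch of pasted Tables syntax; variable names are hypothetical.
CTABLES
  /TABLE satisfaction [COUNT COLPCT.COUNT] BY region
  /CATEGORIES VARIABLES=satisfaction region EMPTY=EXCLUDE
  /COMPARETEST TYPE=PROP ALPHA=0.05.

Once a skeleton like this sits in the syntax window, recycling the format for another table is a matter of substituting variable names.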

Especially when compared with the frustrations of making tables in the old-fashioned way, this module has made enormous progress, and now takes its rightful place among the many other useful routines in SPSS.

Data restructure wizard makes data more flexible

In version 11, SPSS added a strong new capability to rearrange data files. Starting with that version, you could change data between the so-called univariate layout - several records per respondent - and the multivariate layout - one record per respondent - in either direction. This capability can be handy if you have data arranged in ways that make certain analyses impossible. For example, repeated measures analysis of variance requires data in the multivariate layout, with the repeated measurements all recorded on one line per respondent. From version 11 on, even if the data came in the alternative many-records-per-respondent structure, you could rearrange and use it.

To this ability, SPSS has added a new data wizard that allows further restructuring of your data. You can either restructure selected variables into cases or restructure selected cases into variables. Alternatively, you can transpose all the data in the file: All rows will become columns and all columns will become rows in the new data. The use of a wizard to guide this process makes it possible for users at all levels of expertise to get it done correctly. This is another valuable program feature - more so as we deal increasingly with data from a wide variety of sources with varying levels of eccentricity in their layouts.
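For the syntax-minded, the wizard ultimately pastes commands along these lines; the variable names below are made up, and the wizard itself generates the exact specification for your file.

* Hedged sketch of the restructuring commands; names are hypothetical.
* Turn several variables per respondent into several records per respondent.
VARSTOCASES /MAKE rating FROM rate_w1 rate_w2 rate_w3 /INDEX=wave(3).
* And back again: collapse several records per respondent into one wide record.
CASESTOVARS /ID=resp_id /INDEX=wave.
* Or simply transpose the whole file, turning rows into columns.
FLIP.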

More connectivity: talk to the SAS and IBM OS communities

SPSS now allows direct export of SPSS data files to seven types of SAS data files (including Windows and UNIX versions and a transport file). You can also save SPSS value labels to SAS .sas syntax files - thus eliminating one of the main frustrations in making transitions between these two statistical heavyweights; no more loss of detailed labeling information going from SPSS to SAS.
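In syntax terms, the export looks something like the hedged sketch below, with invented file names; the VERSION keyword and the full list of SAS file types deserve a check against the dialog or the syntax guide.

* Hedged sketch: write a SAS data file plus a .sas syntax file carrying the value labels.
SAVE TRANSLATE OUTFILE='c:\data\study1.sas7bdat'
  /TYPE=SAS
  /VERSION=7
  /VALFILE='c:\data\study1_formats.sas'.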

Those needing to interchange files with IBM database users will be pleased to find that SPSS now has connectivity to DB2 UDB v. 7 for OS/390 and the DB2 driver for AS/400, even adding the ability to read z/OS, Oracle 9i, and Sybase 12.5. SPSS also can directly access mainframe data sources on OS/390, including sources such as Adabas, Datacom, IDS, OS/390 Sequential Files, IDMS, VSAM and ISAM. (This section will not be on the quiz at the end of the article.)

There are many other enhancements to the product, some small and others doubtless important to various readers. A full list can be found on the SPSS Web site (www.spss.com).

Reviewing the basics about SPSS

With all its new features, SPSS retains its basic program structure (a base program with added modules that do more specialized or advanced tasks). The base covers many basic tasks and some more advanced ones. To have the full range of capabilities in SPSS, though, you would need to purchase not just this but also several add-on modules. Most important among these are the advanced and regression models modules, conjoint (which both generates the required fractional factorial designs and does the analysis), trends (for time series analysis), categories (for correspondence analysis and related procedures), and perhaps the special module for missing values analysis. If you work with small samples, you might also want the SPSS exact tests module, which returns exact statistical test results even with limited amounts of data.

SPSS still works with three basic windows - each free-floating and given its own space in the task bar at the edge of the Windows screen (that’s the bar on the bottom for most users). One window contains the data, and looks much like a spreadsheet, but one with no limits on columns and rows and with an extra pane showing the characteristics of each variable. Another window handles the output from the analyses you run - and we will have more about this shortly. A third window - one that many users may never see - can accept typed commands, which still work as an alternative to making choices from the numerous menus and dialog boxes to structure an analysis. This syntax window opens only if you request that it do so, or if you use the SPSS option to paste a command instead of running it directly from the menus.

Once you paste a command into the syntax window, you need to select it and tell the program to run it. You also can modify pasted commands, recycle them by substituting new variable names, and keep them as a record of the session or to run another time.
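By way of a small, hypothetical example, pasting a frequencies run from the menus produces something like the lines below, which you can then select and run, edit, or keep for another session.

* A pasted command, much as the menus generate it (variable names invented).
FREQUENCIES VARIABLES=q1 q2 q3
  /ORDER=ANALYSIS.
* The same command recycled by hand for a different set of variables.
FREQUENCIES VARIABLES=q10 q11 q12
  /ORDER=ANALYSIS.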

The syntax window is essentially a plain text editor, something like the Notepad program that comes with Windows, and all SPSS command files are made up of plain text. Its main special feature is a button that calls up a help panel showing all the options that can be typed into the command. You also can paste these options directly into the command from the help panel. SPSS syntax files can be easily reviewed and read with any program that handles plain text. No exporting is needed for these.

As we mentioned earlier, you need to go to companion programs to get some types of tasks done. In addition to DeltaGraph for charting, one notably useful companion program is AnswerTree for classification tree analysis. It includes routines for analysis using CHAID, QUEST and C&RT - formerly CART before some geniuses managed to stick an ® mark on CART, which was the name of an analytical procedure. DeltaGraph does not rely on SPSS for any part of its charting, and has a fairly extensive set of tools for transforming and manipulating data it charts. AnswerTree, though, relies on you having a copy of SPSS to work best. That is, any data transformations or rearrangements that you need to do for an analysis to run as smoothly as possible in AnswerTree require SPSS to massage the data. (You could theoretically work on the data in another statistics program, and then try to export the results to AnswerTree - but this type of importing/exporting almost always is fraught with hidden problems.)

Some not overly modest suggestions

SPSS still lacks modules handling increasingly common tasks, such as generating so-called d-optimal (nearly optimal) experimental designs, or performing forms of regression that handle highly collinear variables, such as ridge regression or principal components regression. (SPSS has a sort of super-sized macro, or script, to do ridge regression that you can feed to the program if you are very, very good with SPSS syntax. This is hardly the same as a full-featured routine, though, and it does not help you choose the optimal values for the procedure to give the most accurate results.)

The basic “tree and output section” structure in the output window, used for organizing the results of statistical procedures, still has limitations. Unfortunately, in spite of temperate yet direct hints from this reviewer, the titles in the tree remain highly non-specific, doing little to guide you to the portion of a long analysis that you need. If you know enough SPSS syntax to type in the command specifying a title for a section of the output (and this is simple), SPSS does not put that text into the tree window where you can find it easily. Rather, it inserts the supremely uninformative notation “page title,” as shown in Figure 2 (a small example follows the figure). It would be far better if SPSS inserted the requested title text into the tree. Better still, SPSS might even consider adding a space in the dialog box for each procedure where you would be prompted to insert a title. Is anybody at SPSS listening?

Figure 2
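For reference, the command in question is as simple as the line below (the wording is an invented example). The title duly appears in the output itself, but the outline tree still shows only the generic “page title” entry.

TITLE 'Cluster profiles - wave 2 data'.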

SPSS has kept several other eccentricities from earlier versions. Some useful commands remain unavailable from its menus, and so must be typed into the syntax window. One example of a missing menu command is the option to rotate discriminant analysis solutions. Rotation of these solutions has much the same effect as rotation of factor analysis solutions, leading to clearer, more easily explainable results. To do this you must perform some careful surgery on the commands pasted from the menus, or just type everything from scratch. Similarly, the entire conjoint analysis procedure still requires use of the syntax window, with no menu equivalents.
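To give a flavor of the surgery involved, here is a hedged sketch of both commands, with invented variable and file names; the finer points (the rotation keyword in particular) deserve a check against the syntax guide before you rely on them.

* Hedged sketch: a rotated discriminant solution, typed or patched by hand.
DISCRIMINANT GROUPS=segment(1,4)
  /VARIABLES=attr1 TO attr12
  /METHOD=DIRECT
  /ROTATE=STRUCTURE
  /STATISTICS=ALL.

* Hedged sketch: conjoint analysis, which has no menu equivalent at all.
CONJOINT PLAN='c:\data\cardplan.sav'
  /DATA=*
  /SEQUENCE=pref1 TO pref18
  /SUBJECT=resp_id
  /FACTORS=price (LINEAR LESS) brand (DISCRETE)
  /PRINT=SUMMARYONLY.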

We all know that typing syntax is good for you, with a particularly purifying influence on the soul. Nonetheless, newer users most likely will find going to the syntax window vexatious. Getting used to SPSS syntax and all the ways in which the program can act very picky about it - such as its strict rules about the use of periods or slashes - can be a challenge to those not yet comfortable with the SPSS system.

The menus themselves can be somewhat unsettling until they become familiar, as mentioned in the last review. The grouping of the menu commands is not entirely intuitive - for instance, both clustering and discriminant analysis fall under the entry “classify.” The more complex forms of ANOVA, like MANOVA, repeated measures, and factorial ANOVA, are listed under general linear model (or GLM), but regression, which also is part of GLM, has its own main menu item - and discriminant analysis (also GLM) is found, as mentioned, grouped with clustering. Some procedures never appear directly in the menus. For instance, you get to MANOVA by selecting “general linear model” and then “multivariate.” In any event, your reviewer would like to give SPSS another polite but firm hint that the program’s interface has not yet reached the pinnacle of earthly perfection, especially for neophytes. We can only hope that somewhere in the not-too-distant future some alterations to the sticky areas discussed will wend their ways into the program.

Still less is still less

One unfortunate change in SPSS is the continuing elimination of program manuals. In the last version, all manuals except for one covering the base product disappeared from the package sent by SPSS, even if you bought many optional modules along with the base. Now even the base manual is gone, replaced by a slender, perhaps even emaciated, operations manual.

All more serious documentation for advanced procedures now is provided in PDF form on the installation CD-ROM. You also can install these electronic manuals on your computer’s hard drive. SPSS has also added some heft to its online help system and its tutorials. However, for some users (including your reviewer), the paper manuals remain indispensable.

If you want these, you now need to order them and pay for them separately. No doubt this saves something in production costs for SPSS, and it seems to fit with the apparent drive by the software industry to reduce all shipping products to no more than a bare CD packed inside a used shoebox. Your reviewer, though, wonders if all this economy is much of a service to the user. The extremely useful syntax guidebook has long been an extra-cost item, so perhaps this change for the manuals was inevitable. I cannot recommend the syntax guide too strongly, even as an extra purchase. At times, it will provide an answer to a problem that does not seem to be addressed anywhere else. Again, a genuine book is handier (especially since you can hold it open on your desk as you work) than the corresponding pop-up screens in the help system of the SPSS program.

Also, with the elimination of manuals as part of the standard SPSS package, the program now effectively hides many of its most advanced capabilities. In particular, it has a relatively rich language for creating scripts or macros that could extend the range of its analytical procedures - as a look at its included ridge regression macro hints. However, your reviewer senses that this language gets little use, as the near absence of user-submitted macros on the SPSS site strongly suggests. In this, the program lags far behind SAS, for which users have created all sorts of intricate routines that perform truly remarkable analytical exploits.

At the very least, SPSS could have devoted some special sections of the help system to describe the new procedures instituted in this release and how to use them. Your reviewer did not find full descriptions of these and how they work in one location.

SPSS 11.5 overall

Although no review would reach its true state of completion without some grumbles, we need to put these aside when summing up this release. The basic functioning of the program has changed with the elimination of the need for an SPSS “spo” (output) file to share all results with others. Clustering has undergone a major upgrade as well, with the ability of the new two-step procedure to handle both continuous and categorical data. “Tables” has become a useful application and far easier to use in the bargain. The program has improved its already formidable abilities to manipulate data. It communicates with other programs better, now becoming capable of creating files usable with several versions of SAS, while adding new capabilities for communicating with the world of IBM data files. These are all impressive accomplishments, even if SPSS has yet to see the true way and follow all of your unassuming reviewer’s ideas.

SegmentSolve from Market Advantage Software

SegmentSolve is a mature and remarkably feature-rich application, as its version number (8.0) would suggest. Its creator, Market Advantage, has been developing software for many years, although primarily for the use of clients with whom they work as consultants. The firm’s Web site (www.marketadvantageconsult.com) gives a clear indication of this part of their business interests. This firm has released a few commercial products in the past, perhaps most notably, about 10 years ago, a highly creative brand mapping program developed in a partnership with SPSS. In those old DOS days (anybody else remember them?), this program allowed the user to manipulate the intensity of various descriptors related to a product and see in real time how its position vs. other products shifted. Unfortunately, SPSS then had not yet hit its stride in marketing programs developed by outside companies, and this inventive product somehow disappeared. Market Advantage nonetheless kept developing its software products, moving forward into the Windows era. SegmentSolve is the first of their products available for more general release.

In brief, SegmentSolve does much of the really hard work in choosing a “best” segmentation solution - understanding that it makes some basic assumptions about the definition of best.

Your reviewer apologizes here for continuing to qualify the words “best” and “optimal” - understanding that this could well sound something like Bill Clinton talking about what “is” is. Still, we need to understand that any segmentation solution declared “optimal” reaches this state when compared with some pre-defined notion of what counts as best. Segmentation must start with some way of grouping people, and then proceed to determining whether these groups can be found and reached selectively.

Much attention has been focused on the first part of this problem - ways of clustering people into groups. Arguments about which clustering algorithm works best raged furiously, at one time, among the more academically inclined. Those arguments seem to have subsided now, replaced by a realization that each method has tendencies peculiar to itself, and that in any event, all methods behave somewhat unpredictably.

The second half of the segmentation problem has received less systematic attention: namely, how to form groups which can be reached selectively. This problem goes outside the neat boundaries of any mathematical procedure, and has strong practical implications as well, so perhaps these considerations explain why academics have not spilled as much ink about it. A simpler answer may be that this is just a much harder problem than resolving how a mathematical procedure tends to group objects, including people.

SegmentSolve makes an earnest attempt to take some arbitrariness out of the first part of the process. It also takes a brave run at the second part of the problem, but finally addresses this in a more cursory way.

SegmentSolve is set up as one enormous guided procedure or wizard. In fact, you see a rather whimsical introduction to this wizard when you open the program, as shown in Figure 3. A handsome logo appears in the program’s help screen, rather than at the beginning of the program (although it may just blink on for an instant on a very fast PC). You can choose to follow the advanced wizard, presumably if you feel something like Einstein, or the standard wizard (perhaps if it has gotten to the time of day when you have serious questions about the true meaning of the question, “What is 2 + 2?”).

Figure 3

The program will simultaneously run and compare results from up to 13 standard clustering algorithms, trying to settle on one that works best for each number of clusters in a range that you specify (say, anywhere from two to 10 groups), and then settle on an overall “winner.” The program also gives some help to the user by identifying the main mathematical tendency of each method in its formation of clusters. (For instance, you can see whether you are picking a distance-based or density-based method - as you might more or less discern in Figure 4.) The brief descriptions do not give all the tendencies of these methods, but at least a sense of how their biases work when “finding” groups in a data set.

Figure 4

I would like to give a more detailed report on the operation of this program under all types of adverse conditions, but this is not possible. While the copy I received for review worked, it worked only with pre-selected datasets. I could not, for instance, feed the program a dataset that gave me a great deal of trouble with other clustering routines to see how it performed — or how the program handled typically sloppy data, or data with extreme values intact.

The best advice I can give about use of SegmentSolve is just good general counsel for any clustering routine - check all data carefully for anomalies before feeding it to this program. It does not appear to be more forgiving of data irregularities - and certainly not well-hidden problem areas - than a regular clustering routine.

For instance, it will either drop a variable entirely if it has missing values, or fill up to a prescribed percentage of a variable’s values with means (and if over that percentage, drop it) - and will treat all variables in an analysis the same way. It does not appear to have the ability to ignore missing values on a pairwise basis, as SPSS can, for instance, and has no more sophisticated routines for imputation of missing data. The program also apparently does not have any built-in procedures for examining the data you are trying to cluster, aside from a simple preview of the values in a variable. Therefore, even though the program has import routines for taking data directly from a simple ASCII database, it seems quite unwise to use this, rather than examining the data carefully with a statistics program before clustering.

The program will allow some data manipulation: you can standardize data (either across variables or within each respondent, or both) and you can specify that it accept values only within a certain range. Again, though, you must know what the acceptable range is from some other source or examination of the data.

Supposing you find your data ready for clustering, SegmentSolve will do tremendous amounts of work for you that can help you reach a good solution quickly. It will compare the various solutions it generates on a wide range of mathematical criteria, and even allow you to give more or less weight to these criteria in deciding which solution is “best.” If you believe, for instance, that balance among the clusters should be accorded more weight in the final solution than the mean F-ratio among the groups, you can give each the precise proportion of the weight in picking “best” solutions that you want. SegmentSolve will consider only those criteria to which you give some weight.

Oddly enough, with its wealth of criteria for screening groups formed, it does not include a cut-off for the minimum acceptable group size. While we do encounter incredibly large datasets more often now than ever before, clients also continue to ask for segments from smaller samples. Having the program automatically eliminate any solution that led to a group with fewer than whatever you deem an adequate group size would help weed out useless solutions with small samples. It also could act as a safeguard against finding a mathematically good-looking solution with a larger sample that creates small splinter groups. Perhaps SegmentSolve can include this feature in upcoming releases.

The program automatically identifies variables as either continuous or categorical, with the categorical variables set aside from the variables going into the clustering itself. These instead are reserved for use later in crosstabulation against the selected solution. You can tell the program to change its default (or “best guess”) definitions of the variable types, but those finally labeled as categorical must be used in the crosstabulations only.

That is, SegmentSolve’s clustering routines all are traditional methods that can handle only data treated as continuous. The program does not include newer algorithms that can handle all types of data like the two-step method now part of SPSS or the fuzzy clustering found in NCSS.

However, once the program is done, you have quite a neat package of traditional clustering solutions, including a report in Excel format showing all “basis” variables (used in the clustering solution) crosstabulated against the categorical variables (such as demographics) that you specified during the analysis. This is as far as many organizations go with clustering before they decide that they have reached a segmentation solution. If this fits the practices and goals of your organization, then SegmentSolve will save you a great deal of time, and most likely do quite well compared with the solutions your organization has used in earlier efforts.

To reach a truly “optimal” solution, though, you will need to go still further. SegmentSolve will not look ahead to the demographic (or other categorical) variables set aside for crosstabulation and choose a solution that provides the most differences based on these. That is, the ease with which groups can be characterized and selectively reached is not part of its evaluation of the solution. You, or your lucky data analyst, must look at the results and make this determination. It is entirely possible that the best mathematical solution (or even the several best) does not have groups that differ from each other strongly on the criteria that can be used to define and find them.

Beyond this, crosstabulations do not give the full picture of how groups differ. In nearly every segmentation solution that your author has reviewed in detail, variables such as demographics and media habits interact in meaningful ways. Simple crosstabulations will not directly show these interactions. Rather, you need to use a procedure such as CHAID or CART (or C&RT) that has been designed to tease out these interactions.

With this part of SegmentSolve we in fact reach one of the primary frustrations in segmentation. Namely, you can find mathematically pleasing solutions, but then discover in the later portion of the analysis that these numerically separated groups are not clearly differentiated in ways that help you reach each one (or just the most important ones) selectively. When this happens, you must go back to another solution, perhaps even one that is not mathematically optimal, to get groups that can be located in the real world.

If you are willing to look carefully at SegmentSolve’s output for a wide range of solutions, going all the way through the crosstabulations, you could well make some steps toward addressing this problem.

However, as suggested just a few paragraphs ago, we cannot argue that the solution producing the most differences on the level of crosstabulation is necessarily the best. Similarly, we cannot argue that crosstabs would reliably point us toward a highly useful solution that a more sophisticated method, such as CHAID, would uncover. (If, for instance, the target segment has an extremely high incidence among women who are age 25 to 44 AND who live near the center of urban areas AND who have incomes of $35,000 and up, this combination of characteristics may get lost underneath a wealth of other information in simple crosstabulations.) The only way to find information like this seems to include taking the time to use the best analytical approaches thoroughly.

If you allow the program to choose based on mathematically optimal criteria only, or if you let it find segments on auto-pilot (and please never do this), you will have simply stepped around this problem area. If using tactics like these, your finding the most useful solution would mainly become a matter of getting very lucky.

In short, SegmentSolve can serve as a highly useful tool for screening a large number of alternative segmentation solutions, and for eliminating many that clearly make no sense. However, you are well advised not to treat this software, as sophisticated as it is (or in fact any other piece of extant software), as capable of finding the “best” segmentation solution. That, then, is today’s talk about what “is” is.

In conclusion

SPSS has added many useful features with this release. While it still does not have all the depth of its ultra-heavyweight competitor SAS - or even some of the amenities in the much less expensive NCSS - it still combines an impressive range of features and a good level of ease of use. As mentioned, with the elimination of manuals as part of the standard SPSS package, though, the program makes many of its most advanced abilities difficult to find and apply. Yet SPSS still seems hard to use for novices, so the program can extend its reach in both directions.

Although it is fun (more or less) for your reviewer to continue in this vein, overall there is much to like about SPSS in its most recent release. Those of you who have been waiting for an important reason to upgrade now have several. We can only hope that SPSS will hit upon as many key improvements in upcoming releases. And of course, if they are wondering just how they could possibly do this - well, modesty forbids me from saying much about how they need only look at the rest of this review.

In SegmentSolve, you have a remarkable tool to help you sift through a large number of alternative clustering solutions, which, if used with discretion, can lead to a highly useful segmentation scheme. Here is a program that automates tasks that would take hours or days of analytical time, and so will help ensure that you have adequately considered many alternatives before settling upon a solution.

Those of you who have not yet fallen off the edges of your seats with the excitement of reading this, and who have contrasting points of view, are welcome to send rejoinders. Please be advised that we have carefully constructed spam filters for all the rudest words and phrases, and for expletives both common and uncommon (thus proving again that practice makes perfect). Therefore, recalling all that any maiden aunts may have told you about politeness, please feel free to send any comments to the reviewer at the e-mail address listed.