Editor’s note: Raymond Raud is chief engineer of Smart Software Company. Michael A. Fallig is vice president of Audits & Surveys. The authors are particularly grateful to Joel Dorfman of Audits & Surveys for introducing R. Raud to the problems of open-ended coding and for his continuing patronage of the project, to their colleagues at Smart Software Company for their help in preparing the article, to Irv Roshwalb for his numerous suggestions for improvement, and to Robert Ruppe and his team in C.T.I.S. for their patience and diligent work in testing the program.

Abstract: The cost and accuracy disadvantages of manually coding open-end questions can be overcome by applying computer algorithms based on neural networks, an aspect of artificial intelligence that simulates the human brain’s ability to learn. This article describes such a program and the results of a field test.
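To make the abstract's central idea concrete, the following is a minimal, purely illustrative sketch of how a learning algorithm might assign codes to open-ended responses from coder-labeled examples. It uses a toy single-layer perceptron over bag-of-words features; the category labels and responses are hypothetical and the code does not represent the authors' actual program.

```python
# Toy sketch (not the authors' system): a single-layer perceptron that
# "learns" to assign a code to an open-ended response from labeled examples.

def tokenize(text):
    """Split a response into lowercase word tokens."""
    return text.lower().split()

class PerceptronCoder:
    def __init__(self):
        self.weights = {}  # maps (code, token) -> weight
        self.codes = []

    def score(self, code, tokens):
        return sum(self.weights.get((code, t), 0.0) for t in tokens)

    def predict(self, tokens):
        # Choose the code with the highest total token weight.
        return max(self.codes, key=lambda c: self.score(c, tokens))

    def train(self, examples, epochs=10):
        self.codes = sorted({code for _, code in examples})
        for _ in range(epochs):
            for text, code in examples:
                tokens = tokenize(text)
                guess = self.predict(tokens)
                if guess != code:
                    # Learn from the mistake: strengthen the correct code's
                    # weights on these tokens, weaken the wrong guess's.
                    for t in tokens:
                        self.weights[(code, t)] = self.weights.get((code, t), 0.0) + 1.0
                        self.weights[(guess, t)] = self.weights.get((guess, t), 0.0) - 1.0

# Hypothetical coder-labeled training responses.
examples = [
    ("too expensive for what you get", "PRICE"),
    ("the price was too high", "PRICE"),
    ("great taste and flavor", "TASTE"),
    ("i liked the flavor a lot", "TASTE"),
]
coder = PerceptronCoder()
coder.train(examples)
print(coder.predict(tokenize("the flavor was great")))  # → TASTE
```

The point of the sketch is only that, like a human coder in training, the program improves by correcting its own misclassifications rather than by following hand-written rules.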

For nearly 50 years researchers have been debating the advantages and disadvantages of eliciting survey responses with open versus closed-end questions (e.g., Blair, Sudman, Bradburn, and Stocking 1977; Bradburn 1983; Bradburn, Sudman, and Associates 1979; Dohrenwend 1965; Dohrenwend and Richardson 1963; Lazarsfeld 1944; Schuman and Presser 1981; Sheatsley 1983; Sudman and Bradburn 1982). Perhaps because the body of research suggests that one form of question is not clearly superior to the other in every situation, most investigators conclude that both forms have their place in survey research.

Findings from their nationwide field experiment led Blair, Sudman, Bradburn, and Stocking (1977) to conclude that open questions reduce underreporting of the frequency with which respondents engage in threatening or socially sensitive behaviors (e.g., alcohol consumption, drug use, masturbation, sexual intercourse). But as Bradburn (1983) and Bradburn, Sudman, and Associates (1979) note, question form (i.e., open versus closed-end) did not appear to affect reports of...