Editor's note: Murray Simon is president of D/R/S HealthCare Consultants, Charlotte, N.C.

While conducting marketing research on technical products or complex issues can be difficult, it's even more challenging when health care providers are involved.

On the quantitative side, it is difficult to generate sufficient valid responses. Unless there is an attractive incentive, health care professionals are generally too busy and inundated with outside communications to respond. Even in those projects that offer an effective incentive, if a mailed questionnaire relates to factors such as office management procedures or product-buying/usage patterns, the doctor frequently will have a staff person fill it out and then not review it before it is returned. And when the provider does respond personally, there remains the nagging question: Just how representative are these particular respondents?

On the qualitative side, the cost of recruitment and incentives is often higher than in comparable technical studies outside of health care. The prevailing provider attitude toward marketing research is: If you want my learned input, you're going to have to pay for it.

At the end of qualitative projects, the client often is left wondering whether the findings are representative of the universe as a whole. During the debriefing, we emphasize that the only way to gain insights regarding potential predictability is through a quantitative study. But again, budgets are limited and costs are high - further research may not be feasible.

Both quantitative and qualitative factors tend to add cost and complexity to the development of research projects in the health care arena. As a result, clients are often looking for answers from studies that are underbudgeted (in time and money) and limited in scope - a common problem to which we applied an uncommon solution.

Are assumptions correct?

In 1993, our company was contacted by a major non-health care corporation with a familiar problem. They had a technology that they felt had potential medical applications, particularly in the area of patient-information management for psychiatrists. They wanted to know if their assumptions were correct.

Of course, the company had limited amounts of time and money it could spend to get an answer. We were conducting preliminary discussions late in October, and a go/no-go decision had to be made by the beginning of the year. In addition, this particular technological application had only recently surfaced, and market research dollars had not been budgeted for it - funds had to be begged and borrowed.

The project had the potential to establish a long-term working relationship with a major new client, so we decided not only to go ahead with a study despite its significant up-front limitations and problems, but also to try to maximize the return on investment. In other words, we decided to show them what we could do.

One of our biggest concerns was the fact that we would have to interview psychiatrists. Costs would be high because of the expense of a relatively difficult recruit and the size of the incentive needed to generate interest.

Does it have potential?

Our first objective was to see if psychiatrists felt the technology had potential benefits. Through some networking with medical colleagues, we gained access to staff members at a major psychiatric institution for a series of on-site interviews. After two days of interviewing we came away with two conclusions:

  • The technology conceivably could have significant applications in psychiatry.
  • It would be important to interview psychiatrists practicing outside of the institutional setting.

We came to a client/supplier consensus that the next phase of the study had to go beyond concept confirmation/rejection, and should involve some respondent brainstorming on potential applications, which dictated a group format. Although we do a lot of face-to-face focus groups, we decided to use a telephone focus group for a number of reasons:

  • We would get substantially better geographic diversity than we would with face-to-face groups.
  • We would be able to hear from the small-town doctor as well as the urban practitioner.
  • Respondents would not know each other - posturing among psychiatrists can be a problem.
  • The recruit would be easier; respondents could participate from their home or office.
  • We have a lot of experience with, and confidence in, the telephone focus group.
  • Overall costs would be lower compared to traditional face-to-face groups.

The groups were done, and following an analysis of the results and several client conferences, we decided that, based on what we had heard, there was strong business potential in the technology and that a number of specific applications were possible. It's always nice to be able to tell the client that their baby is beautiful.

Unfortunately, an all too familiar question reared its head: How representative are these findings?

We were convinced that our client's technology was promising, but some broader questions had to be answered:

  • How will the president's mandate for universal health care, and the laws enacted in response to it, affect the way psychiatry is practiced?
  • How quickly will psychiatrists adopt new technologies?
  • Which applications of this technology will have the greatest potential for success?
  • How big a business can our client expect to develop from the various applications?

It wasn't possible to conduct a full-scale quantitative study - time and money were rapidly running out. But the need to know was strong, and we had a research idea that we wanted to test. We all know statistical validity can't be developed from interviews and focus groups with a relatively small number of respondents, but what if you interview specialists who routinely interact with thousands of their colleagues on a nationwide basis every year? Certain providers practice, do research, publish and give lectures at major professional conventions and seminars. Given their ongoing professional interaction, wouldn't they tend to represent a unique global perspective on their profession?

Leading edge

After lengthy discussions with our client - which included repeated warnings about the difference between statistical validity and educated inferences - the decision was made to proceed. Once again, we decided to use a telephone focus group because our potential respondents were spread all over the map. Since these people are leading-edge specialists, we decided to pay a higher-than-normal incentive.

To begin the recruitment process, random calls were made to psychiatrists who had attended regional or national seminars within the previous two years. The doctors were asked for the names of prominent colleagues who often give presentations at professional meetings. Some were uncooperative, perhaps even a bit annoyed at having been called, but enough positive responses were received to generate a list of approximately 20 names. It was our hope that networking this preliminary list would not only result in the recruitment of qualified respondents, but would also expand the list.

We decided to screen for psychiatrists who:

  • were currently practicing an average of 32 hours per week;
  • had given presentations at regional (not state) or national psychiatric meetings at least three times per year over the past three years; and/or
  • had been on one of the American Psychiatric Association's national committees within the past three years.

We also asked the yes-no screener question: Would you classify yourself as someone who is very much in tune with the changes taking place throughout the country in the practice of psychiatry?

Ego factor

Although the process required calling those who didn't qualify to get the names of those who did, once we had a starter list the recruit went quite smoothly and quickly. The groups were to be conducted by telephone in the evening, which made participating easy and convenient for the respondents; the financial incentive was attractive; and the ego factor kicked in immediately - the psychiatrists considered their input essential to any forum on anticipated changes in the field. They were also eager to hear what their colleagues had to say about the future of their profession; several gave us the names of others to call. Included in the groups were a consultant with the National Institutes of Health, three heads of major psychiatric institutions, a former president of the American Psychiatric Association and several practitioners with teaching institution affiliations. All were currently active in lecturing on a national and international basis.

The groups went well, and the client was pleased and excited with the results. The discussions were lively and interactive, participants answered questions in a very self-assured manner and, with minor exceptions, there was a strong consensus among the respondents with regard to the major issues discussed. An analysis of the transcripts convinced us that we had a good grasp of what was happening in the field of psychiatry, and the directions the client should take with the technology became quite clear.

It's true that the study's findings cannot be validated without a parallel quantitative study. And to some degree, only time will tell if the assumptions made from this study are valid. We do not advocate the use of this approach as a bargain-basement substitute for good quantitative research. Then again, as market researchers we do have an obligation to provide our clients with as much usable information as possible within the limits established up-front. We hope the ideas and thoughts used to solve the problem we faced help others do just that.