Top 7 Best Practices for DIY Research 

Editor's note: This article is an automated speech-to-text transcription, edited lightly for clarity.   

Sago sponsored and presented a session during the June 11, 2025, Quirk’s Virtual Sessions – DIY Research series. 

In this session Amanda Ilnisky, VP, product management, and Justin Perkins, VP, qualitative product, at Sago gave their top seven tips for conducting DIY research. The tips covered a variety of topics, including time management, using AI, consistency and more.

Session transcript 

Joe Rydholm 

Hi everybody and welcome to our session “Top 7 Best Practices for DIY Research.” 

I’m Quirk’s Editor, Joe Rydholm, and before we get started let’s quickly go over the ways you can participate in today’s discussion. You can use the chat tab to interact with other attendees during the session, and you can use the Q&A tab to submit questions for the presenters. We will answer as many questions as we have time for during the Q&A portion.

Our session is presented by Sago. Amanda, take it away.

Amanda Ilnisky 

Thank you. So, I'll start with some introductions.  

I'm Amanda Ilnisky. I'm the VP of Product Management for Sago's Quantitative Research Technologies.  

To tell you a bit about myself, I've spent 15 years in the market research industry, primarily focused on panel and survey authoring technologies. I've had the privilege of working with all sorts of DIY research clients in several different verticals. Plus, I've been a DIY research client myself and acted as a decision maker and user of such technologies.  

Justin.

Justin Perkins 

Hey, thanks.  

I'm Justin Perkins. I am Amanda's counterpart on the qualitative side of the business, so Vice President of Product on the qualitative side. I sit over some of our products like QualBoard and some other things. I've been in the industry since 2013, which I guess is now about 12 years. So, it's been a little bit.

I love talking about this and have a passion. I'm in the Nashville area. We have some people in the chat from Nashville, so hello fellow Nashvillians.

Amanda Ilnisky 

Before we get started on our best practices, I think it's worth discussing why people choose to work DIY. And the fact that you're attending this session tells me maybe you either are a DIY research user or perhaps someone who's interested in working more DIY or you just feel that your business is shifting in that direction. So, you're trying to prepare.  

In my experience, clients are motivated to work DIY for a few key reasons. The first is cost control: you need to spend less money, so you reduce the amount of work you're doing with external vendors and take more of it on yourself.

Another reason is that you have the in-house resources or expertise to do some of that work yourself. So, instead of outsourcing it to someone who could do the exact same thing you could do, you want to do it yourself. 

Then the third reason I've seen come up a lot is negative experiences working with vendors. Maybe the quality you've gotten from the team isn't quite what you expected. You've submitted a questionnaire for programming and it contains errors, and you're kind of like, ‘well, if I had just programmed it myself, that wouldn't have happened.’ Or the timing isn't quite what you were hoping for and you want more control over that.

Anything to add to that? Justin, have you seen any other motivators?

Justin Perkins 

Yeah, I mean on the qualitative side, obviously resource constraints are hitting just as hard. I think that's everyone these days. It's a pain point that a lot of us feel in different avenues. Amanda and I both sit in product development, so we see and feel those things too. And when you're on the research side of it, you can feel that pressure.

But on the qualitative side, I think there's historically been a lot of gaps, and we'll definitely dive into this more later, but it's a little teaser, right?  

AI, to the shock of no one here, is the big disruptor. Before, the crazy amounts of data you get in qualitative meant going DIY often just wasn't possible. This is where AI really comes into play. What couldn't have been done before, when it was somebody sitting out on the back porch with a highlighter and a bottle of wine going through the important insights from that day's focus group, is now something you've got tools to help with. That has really changed things on the qualitative side.

I think you see more of these things where people are looking for DIY research there.

Amanda Ilnisky 

Yeah, that's a great point. I totally agree that AI is making DIY more possible than ever before. 

From there, let's move into some of our top favorite, most loved tips for working DIY. The first one I've got is to be realistic about time and complexity. DIY is a great choice for many, but it's really easy to stumble into a situation where you find you don't have enough time for testing, or maybe you go to program your survey and find its complexity now surpasses your own skill. Or maybe the complexity exceeds the capabilities of whatever platform you're using. Those are things that, once you encounter them, are blockers.

So, you want to leave yourself plenty of time and try to evaluate things like complexity early so that you're not putting your deadlines at risk. And at times you may find it's actually more time efficient and less frustrating to have someone else do the work for you versus trying to do everything yourself.

Justin Perkins 

On the qualitative side, I mean we see the same thing. We've got the asynchronous side of research and the live side. And they differ a little bit in this. Obviously both sides you want to go through and test. 

So, QualBoard, one of the products I sit over: historically, you would load in discussion guides that people would then go in and answer. And you see all of these desires to route people to different segments. The reality is, the more complex the logic you put into these surveys and into your digital asynchronous discussion boards, all of those different paths you create start to compound the complexity of programming a guide and then testing it. You're thinking, ‘well, I want to make sure all these scenarios play out so that I'm not on day two realizing, oh, I've missed a whole part of what we needed to evaluate at the beginning.’ I think that's where having a partner can sometimes help.

But when you have realistic expectations of yourself, that's where we see DIY research become really successful. For example, when you're looking at it ahead of time and you're not thinking ‘I can go do things that are going to evaluate how we can fly to the moon’ but instead looking at things that are in a realistic scope. That's where we see a lot of success.  

Then on the live side of things, it's a lot about actually looking at the flow. There are tools that exist in the live space. For example, on this webinar we're on currently, there's a timer so you can see where you're at. We've also added some things on our side, like a timer for each section, so we make sure we're not accidentally spending 15 minutes on intros when we need to get to the rest of the discussion.

So, go through and actually make sure that flow is realistic, not just for your own sake but for your respondents' sake too. A lot of times they're going to start tuning out. You want to make sure you hit the important things, and that's where those project stakes matter. You want to set yourself up for success there too.

Amanda Ilnisky 

Yeah, and we'll talk a little bit more about respondent experience in a little bit.  

So, we'll move on to the next tip, which is creating your project to suit your style.  

A good example of this is authoring your survey in a way that works best for you. For some clients I've worked with, that means starting with a Word document where you've written a questionnaire. For others it means they are kind of freeform designing the questionnaire directly in the survey platform. Then there are some who take a mixed approach where maybe you start with a questionnaire document, but then the revisions are made directly in the program.  

In my experience, most researchers want to end up with a questionnaire document that's complete, correct and reflects what's been programmed. It can be a huge headache to track down every little change that's been made and then go back and update the questionnaire document.

A lot of DIY quant platforms, like the one I oversee, Methodify, support this. We have the capability to export a questionnaire as a Word document. So, if you make a whole bunch of changes to the programming and your questionnaire and your survey have become a bit disconnected, you can just export the questionnaire from the platform, and it will include not only the questions but all the logic. Anything you've added to the survey that would be relevant to your questionnaire design is included in that document. It's super handy.

Another tip is to know your system before you design your survey. There's a few different ways you can do that.  

Some systems will allow you to run a simulated report, or offer some way of capturing dummy data, so that you can understand what your charts, tables and answer coding are going to look like, and how you can recode or build filters to arrive at the banners you need for analysis. Knowing that upfront might change your approach to how you program the survey, how you assign pre-codes or how you do certain things.

If you can preview that ahead of time it can save you a lot of frustration later.
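For readers who want a feel for this outside any particular platform, here is a minimal sketch, assuming Python with pandas and NumPy, of simulating dummy responses and previewing a banner table before fielding. The question, scale and column names are all hypothetical.

```python
# Illustrative sketch only: simulate dummy survey data to preview
# what a banner table might look like before fielding.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500  # number of simulated respondents

dummy = pd.DataFrame({
    "age_group": rng.choice(["18-34", "35-54", "55+"], size=n),
    "satisfaction": rng.choice(
        ["Very dissatisfied", "Dissatisfied", "Neutral",
         "Satisfied", "Very satisfied"], size=n),
})

# Banner: satisfaction by age group, as column percentages.
banner = pd.crosstab(dummy["satisfaction"], dummy["age_group"],
                     normalize="columns").round(2) * 100
print(banner)
```

Even this toy version surfaces the kinds of decisions Amanda mentions, such as how answer options are coded and which cuts you will actually want in the banner.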

Justin Perkins 

Yeah, for sure. With qualitative, there's a huge fuzzy box of possibilities for getting answers to your research questions. And there's certainly a comfort level at play. I do a lot of demos with clients, and there are things you're used to that you want to make sure you're incorporating into your study design.

Amanda, you touched on it, but understanding the system is super important for qualitative too. The people we see come in and really succeed on the DIY side have that understanding of what the capabilities are going to look like when they're doing study design. Even if it's a new system, make sure you go to trainings and do those kinds of things before you have a guide that your client has signed off on.

For example, we had someone once who wanted to essentially program a video game into one of our systems, and we had to say, ‘okay, well, we can't quite do that; here's what we can do.’

So, with study design and creating the project, you as a moderator or researcher want to make sure those things can actually happen and that it's something you're comfortable going through.

Even if you're not the moderator per se, making sure someone is comfortable actually moderating is where we really see the value, especially on the qualitative side. If you just launch a bunch of questions at people, plan to go back and read the answers in a few days and never dig in to find those additional insights, you're missing where the huge value is.

So, you want to build something you're comfortable going into, moderating and engaging with people in. The more you engage with your respondents, the better you're going to get, which comes back, again, to the respondent experience we'll talk about in a minute.

Amanda Ilnisky 

That minute is now. Our next tip is to keep the respondent's experience top of mind.  

As a researcher, it's easy to lose sight of the respondent experience because you know what your research objective is. There are specific questions you want to ask to arrive at the conclusion you're looking for, and sometimes it's easy to forget there's going to be a person on the other side of the survey who, for example, may have a different level of language fluency. Maybe there are some words or terms that just won't mean anything to them. Or maybe they have differences in ability: maybe there's colorblindness, or maybe they're unable to see at all, or maybe there's a hearing impairment.

So, when you're designing your survey, you need to keep accessibility in mind. You want to make sure that you're looking out for people who have different needs in that regard.  

But then also engagement. You want to use a variety of question types, different representations.  

So, you're not serving 10 grids in a row because I can guarantee that by the 10th grid your respondents are going to be phoning it in and just straight lining to make the experience end. 

Then after you've programmed your survey, review it, take a look at it, preview it. Consider things like length and clarity: are there any terms you need to define, any descriptions they might find confusing? Make sure you haven't added any bias into how you're asking different questions.

And going back again to those excessively long answer lists or grids: they're going to really drive low data quality and drop-off. Same with programming errors. If there are issues with skip logic, or if respondents aren't able to complete your survey for some reason, obviously that's going to be hugely problematic for your data.
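As an illustration of the straight-lining problem Amanda describes, here is a minimal sketch, assuming a flat data export with one column per grid item (the column names and values are hypothetical), of flagging respondents who give identical answers across an entire grid.

```python
# Illustrative sketch: flag likely straight-liners in a grid,
# i.e., respondents who give the same answer to every grid item.
import pandas as pd

# Hypothetical export: one column per grid item, coded 1-5.
grid_cols = ["q5_brand_a", "q5_brand_b", "q5_brand_c", "q5_brand_d"]
df = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "q5_brand_a": [3, 5, 2],
    "q5_brand_b": [3, 4, 2],
    "q5_brand_c": [3, 5, 2],
    "q5_brand_d": [3, 2, 2],
})

# A respondent straight-lines if every grid item has the same value.
df["straightlined"] = df[grid_cols].nunique(axis=1) == 1
print(df[["respondent_id", "straightlined"]])
```

A check like this is a blunt instrument (some honest respondents really do feel the same about every item), so it is best used to flag cases for review rather than to auto-remove them.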

Justin Perkins

And I think people want to be entertained. They need to be engaged for you to get accurate answers. That certainly is the same thing that we see in qualitative.

Sometimes you're doing these longitudinal communities that take place over the course of months, and people are coming back and answering these things again and again. We get good answers out of respondents when we show three concepts and ask the same questions over and over. But we get great answers once you start digging into a concept and asking questions with projective techniques. Engage the creative brain of a respondent, because you're not just looking to go ask ChatGPT a question. You've engaged these people because you want an answer from a real person.

So, let's get at these projective techniques. Let's think a little bit outside of the box. That's where we see real success.

We talked earlier a little bit about length and clarity and that is so important. If you ask a respondent to come spend 30 minutes a day answering some questions and then it takes two hours, that's where things start to fall apart. 

If you haven't done it yourself, run through it yourself to see how long it takes; don't just click through testing everything to make sure the logic works. If you haven't gotten a good feel for it, if you haven't done these kinds of things before, that's always a great starting place: ‘okay, how long would this take me?’ Then assume it's going to take someone else longer, because you're familiar with it and they're looking at all of this for the first time. You don't want to launch a study and then find people just can't complete it because things are taking too long.

These are just some checks to run beforehand, and ways to put out an estimated timeframe. Those things are great, those things are helpful tools, but at the end of the day, you still have to have that sanity check for yourself of, ‘okay, is this actually going to be what I expect it to be?’

Amanda Ilnisky 

Great points.  

Moving on to our next tip: keep things consistent. I'm going to tell a story from my past to make my point here.

I once worked with a client who was worried about quality, as we all are. They were worried about attentiveness, worried about all these different things. So, they had this great idea: they were going to have questions with scales in them, but to make sure people were paying attention, they were going to keep flipping the scales around. One time the negative pole and the positive pole pointed one way, and the next time they were reversed. That's actually a really terrible idea. There is a fine line between watching for attentiveness and just being annoying and frustrating.

My advice is always to design your survey consistently. If you are using scales, you want to keep your scale directions the same throughout. If you're using different ranking orders or things like that, you want to be consistent.  

Some platforms will even give you templated scales. So, you don't have to program the scale yourself, you can just select which one you're using from a list and then it will always be implemented in the same way. 

Speaking of templates, they are really great. Templates are amazing. They're a huge time saver. So if you do find there are specific questions that you're asking frequently, maybe even a block of questions or maybe an entire survey that you intend to use again with just a few changes here or there, saving it as a template can save you a ton of time.  

Now you have a survey that you've tested, you've probably fielded and you've been able to achieve results with it. So, if you use it again, it takes away some of the unknowns because it's a tested thing, and it's a lot easier to make minor edits to an existing survey than it is to program one from scratch.

Justin Perkins 

I honestly don't have a ton to add because Amanda makes such good points that apply to both qualitative and quantitative.  

The thing I will say is that this is where value comes in when you're looking at these different systems. The more you reuse a system, the more it will make your life easier when you're DIYing, because you're going to be able to copy some of those questions, and you'll know things like how long it takes someone to answer them.

Your first new project in the system is probably going to be the one that takes the longest, and then it will get better as you continue. That's what we see with async projects: the very first time in the system is when a lot of people realize, ‘okay, I actually do want help. I may not want DIY quite yet.’ They want to make sure they have somebody guiding them.

Then you get that comfort level to go through and do it yourself. And you've got your first project in there that you can pull questions from, so, like we talked about on the last slide, you can keep variety for engagement with your respondents while also staying consistent.

I think as you get more into this, and a lot of you are probably way more experienced at it than I am, you start to get that feel for consistency, and that's where you see success as well.

Amanda Ilnisky 

Which brings us to our next tip, something we've touched on in our previous discussion but worth calling out on its own: always do a final review before you launch.

Sometimes this is tough because you might be running out of time and it's really tempting to be like, ‘I tested the survey when I was halfway done and it worked fine and I didn't really add that many things, so I'm just going to go.’  

It's really important that you do a final review to ensure that your logic is working, your screen outs are working, everything is working the way you intended. If you're working alone, obviously you'll have to do this yourself. If you have some colleagues, you can share a preview link with them, have them run through the survey. Sometimes when you've worked on something for so long you can't even see what you're looking at anymore. So, it helps to get a different perspective on what is actually happening.  

Or if you have a platform that can generate a simulated report or test data, you can generate the test data and by viewing the data file, you'll see right away if there are terrible problems with the survey. Maybe the data can only advance to a certain point and then half of your file is blank. Obviously, that's a big problem. But you can test for lots of things that way if you don't have an appetite for clicking through a survey.
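For readers who prefer inspecting a data file to clicking through a survey, here is a minimal sketch, assuming a CSV export of simulated responses (the filename and drop threshold are hypothetical), of scanning per-question completion for the kind of mid-survey blank-out Amanda describes.

```python
# Illustrative sketch: scan a test-data export for a sudden
# drop-off point, which often signals broken skip logic.
import pandas as pd

df = pd.read_csv("simulated_responses.csv")  # hypothetical export

# Share of non-blank answers per question, in questionnaire order.
completion = df.notna().mean().round(2)
print(completion)

# Flag any question where completion falls sharply versus the prior one.
drops = completion.diff() < -0.5  # 50-point drop; tune to taste
if drops.any():
    print("Possible logic break at:", list(completion[drops].index))
```

Note that legitimate skip logic also produces partial columns, so a flagged question is a prompt to check the routing, not proof of a bug.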

Justin Perkins 

The thing we run into is these logic engines, and I think this is even trickier on the quantitative side than the qualitative side. You have so many different scenarios that it can be hard to see how they all play out. This is where having something like a support help desk can be really valuable. Having someone else who can be a sanity check: ‘hey, why is this thing not working?’

Use those tools when they're available. Use your support, use your help articles, use those things instead of feeling like you're on an island. Even here at Sago, if you're DIYing, we're not going to leave you high and dry, and I don't think many vendors would. As a vendor, we want you to be successful in your research.

I think that's why it's really helpful to have that second set of eyes in those final reviews where you might need it, especially when you're running into problems, because problems will happen.

I think that's something we don't directly call out here, but in DIY research, and when you're working with a vendor, there are going to be problems. Are they part of every project? No, but at some point in research you're going to hit something on a platform that you didn't expect. That's why we have experts. You've got tools and teams around to help support you where you need it.

Outside of that, the biggest thing comes back to the simplicity point from earlier. The more complexity you add, the more time these kinds of things can take. That's why you want to leave time budgeted for this in your project launch and estimations, to make sure you get the research you need.

Amanda Ilnisky 

Absolutely. Which brings us to our next tip.  

When you're working DIY, you need to be realistic about data quality. When you're choosing a sample partner, don't hesitate to ask questions. You can ask about their security technologies and measures if they operate a panel themselves.

You can ask about things like verification processes and recruitment approaches. If you're using your own sample, maybe you've got an e-mail list from a customer database and you're sending a survey out to those contacts. You've got to be mindful there as well.

You want to make sure that whatever list you're using is clean and has some sort of recent contact history, so it's not full of dead addresses where you go to mail it and encounter massive deliverability issues because of the number of hard bounces.
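As a rough illustration of what ‘clean, with recent contact history’ can mean in practice, here is a minimal sketch assuming a CSV with hypothetical email and last_contact columns; it is not a substitute for proper list management, suppression handling or consent checks.

```python
# Illustrative sketch: basic hygiene checks on a customer e-mail list
# before inviting it to a survey. Column names are hypothetical.
import pandas as pd

df = pd.read_csv("customer_list.csv")  # hypothetical export

# Drop duplicates and obviously malformed addresses.
df = df.drop_duplicates(subset="email")
df = df[df["email"].str.contains(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=False)]

# Keep only contacts with recent activity, to limit hard bounces.
df["last_contact"] = pd.to_datetime(df["last_contact"])
cutoff = pd.Timestamp.today() - pd.DateOffset(months=18)
fresh = df[df["last_contact"] >= cutoff]

print(f"{len(fresh)} of {len(df)} addresses pass basic hygiene checks")
```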

I have another life story to share with you here.  

When you're using a list, those e-mails were captured with some understanding on the part of whoever owns them. They believe they've provided their e-mail for a specific purpose. And yes, there's usually fine print saying, ‘I agree to be contacted for marketing and other stuff,’ but nobody reads it. So, however those addresses have been captured, you want to make sure the owners aren't under the impression that their e-mails will only ever be used in one very specific way.

The story I have is from a client who had an e-mail list and wanted to invite those people to a survey. They ran it and found that the response rate was really low.

Then they were looking at some of the open-ends and people were leaving really negative comments about things and not answering in relevant ways. They came back and they were like, ‘well, what's going on? What's wrong with my survey? Why is this happening?’ 

The first thing I did was ask questions about the list. Are these people who have been engaged with marketing before, or have you only ever contacted them in one specific context?  

They did a bit of digging and they're like, “oh yeah, these e-mails were from our service department. They thought that they had submitted their e-mails so that they could be in touch with service people about servicing, whatever the thing is that they bought. We just kind of borrowed it to use for this.” 

I said, “well, there's your problem. These people have probably never been contacted outside of e-mails to book service appointments or whatever the situation was. So, when they start getting survey invites, they're like, hold on, this isn't what I thought was going to happen.” 

You want to be transparent with your list members about how their details are going to be used.  

Another thing you want to do related to data quality is reserve time to review your data before you jump into your analysis and reporting.  

If you had previously worked with a vendor, you had the luxury of a project manager on their side who would do that for you. They're going to look through the data, they're going to clean out all the obviously bad responses and they're going to provide you with a data set that is in reasonably good shape.  

When you're working DIY, it can be alarming to see the raw data without any of that cleaning. So, be prepared for that. Be prepared to see some bad stuff, and leave yourself time to check it.

Some platforms, like my platform Methodify, will allow you to check the data in real time. So, as soon as you start getting responses, you can be right in there exporting data, looking at the in-platform reporting, seeing what you're getting and being able to flag and remove those responses early on so that you're not stuck reviewing it all at the end.  

But even if you're left reviewing it at the end, that's okay. Just leave yourself time to do a check of your data before you start doing your analysis.

Justin Perkins 

On the qualitative side, I think the only thing to add here is about discussion guides. Asynchronous is the most extreme case, though you see it some in live too: you ask these questions and you may get paragraphs of responses back. Be realistic about all of this data you're collecting. What can you actually do with it? Think that through before you start, because you are going to get this data.

If you are asking people to give you War and Peace with every single answer, then by the end they're going to give you a sentence. So, you may be missing the mark by the end on what you're trying to get at, because people are getting fatigued. We're all human; that's not a shock to anyone.

So, really on the qualitative side, it's just thinking through like, ‘okay, what's the most important thing I want to get from this research and how do I want to get there?’  

How do we make sure you're not over-segmenting your guides, so that your answers are so fractured you have to spend time bringing them back together at the end when maybe you didn't need to?

Just asking those questions, that back and forth internally of like, ‘okay, do I actually need to do this?’ Even if it's a simple process to merge back together, it's another thing to do. So, how do we make sure that you can go through and design all of these things with like, ‘okay, this is what I want to have at the end of the day.’

Amanda Ilnisky 

Good tip.  

Our next tip is make the most of AI features. 

Justin, you mentioned earlier, AI has been a huge game changer for DIY researchers. It's really important that you're leveraging AI to assist with time-consuming, repetitive tasks.  

A good example is reading hundreds of verbatim responses to open-end questions in a quant survey. Now, this isn't to say you shouldn't read those responses; you absolutely should.

But if you can leverage AI to assist you, it might help you focus on the questions in your survey that produced particularly interesting, surprising or relevant themes, so you know where it's worth digging versus the questions that maybe didn't yield anything particularly insightful.

Justin Perkins 

Amanda's a hundred percent right. I've talked about this with a colleague named Nile who goes through and does these long reports. The reality is, with the time crunches we've talked about, he feels that without AI tooling at this point it's near impossible to really get at those golden nuggets of research.

That is what all the people on this call specialize in. That is what you are excellent at doing. And so when we at Sago, and I think most tools in the industry, look at AI functionality, it's not to replace people by any means. It's the reality that you need help.

I can think back to calls I had in 2014, 2015, where people were like, ‘I just got 30,000 responses and I don't know what to do.’ 

That's where AI has come in as a bit of a life raft: a way to take this data and start to summarize it.

I think the other important thing with AI functionality is that you do want to make sure that it is digging into your data.  

I think a lot of people here are familiar with general-purpose tools like ChatGPT, but that's where market research-specific tools really help: they make sure you're digging into your own data. They close the walls in, so you're not going to get quotes back from Monty Python on the things you're asking questions about. The AI is going to focus on your content. You can set it up to look at your particular data, analyze that and reduce those hallucinations.

AI as a whole is getting better at this day in and day out, but one of the most important things is to make sure it's looking at your data.

In addition to that, you want citations showing why an insight is there. That's why Amanda mentioned it's important to be familiar with your open-ends, to be familiar with that content, because I don't think it's ever a great idea to just completely take AI at its word and assume it will never make a mistake. That is where, as humans, we still bring value: we can look at it and use our human brains to say, ‘okay, yeah, that does track with what I read.’ It's an assistive tool.

My feeling on AI functionality is that it really opens the floodgates. It's an incredible assistive tool for expanding these capabilities and giving you time to actually find the things that are important for your clients.
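As an illustration of the grounding pattern Justin describes, and not any vendor's implementation, here is a minimal sketch using the OpenAI Python SDK. The prompt restricts the model to the supplied verbatims and asks for respondent-ID citations, which is one simple way to keep summaries tied to your own data; the model name and sample responses are hypothetical.

```python
# Illustrative sketch (not any vendor's implementation): ground an
# LLM in your own verbatims and ask for cited themes, so the model
# summarizes only the data you supply rather than inventing content.
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

verbatims = [
    "R001: The checkout flow was confusing on mobile.",
    "R002: Loved the new packaging, very easy to open.",
    "R003: Couldn't find the size chart anywhere.",
]

prompt = (
    "Summarize the main themes in the survey responses below. "
    "Use ONLY these responses; cite the respondent ID for every "
    "theme, and say so if a theme has weak support.\n\n"
    + "\n".join(verbatims)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The citations are what make the output checkable: as both speakers stress, you still go back to the quoted verbatims and confirm the summary tracks with what respondents actually said.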

Amanda Ilnisky 

Absolutely. Assistive tool, that's exactly the right term. It's here to help, not replace.

So, those are our tips, our seven favorites. We're happy to open up the floor to some Q&A now if any of you have questions about the content we've shared or anything else you'd like to ask.