Questions for today, so your insights tool stack is ready for tomorrow
Editor's note: This article is an automated speech-to-text transcription, edited lightly for clarity. To view the full session recording click here.
As researchers are asked to do more, faster, with less budget and fewer personnel, it is important to have tools that work for your team.
Ben Wiedmaier, director of research relations at User Interviews, and Liz White, managing director of Studio by Buzzback, presented a framework that helps narrow the search for the right tools. The two know it works because they have used it themselves.
Session transcript
Emily Koenig Hapka
Hello and welcome to the session, “Questions for today, so your insights tool stack is ready for tomorrow.” My name is Emily and I'm an Editor here at Quirk’s.
Before we get started, let's quickly go over the ways you can participate in today's discussion. First, you can use the chat tab to interact with attendees during the session. Second, you can use the Q&A tab to submit questions for our speakers during the session.
This session is brought to you by User Interviews. Please enjoy the presentation.
Ben Wiedmaier
Thanks Emily, and thanks everybody for spending a bit of your day with us.
We're very excited to share, hopefully, some practical and tactical ways that you can build a better insights stack, whether you're a team of two or 200. I don't know if there are any teams of 2,000. If there are, chat me. I would love to know more about where you're working, where there are that many insights professionals.
But I'm very excited to share with you a framework that my colleague and friend Liz and her team have been working on.
So, again, do use the chat or Q&A tabs. We are here to engage with you, not just share the content.
So, let's get into it. Oh boy. There we go.
I'm Ben. I'm the Research Relations Director at User Interviews. That just means I spend a lot of my time thinking about what research and insights professionals are doing, and this is part of that work.
Thankfully, you won't be hearing very much from me. You'll be hearing from my colleague and friend, Liz.
Liz, welcome.
Liz White
Thanks, Ben.
Hi everyone. Good morning, good afternoon. I'm Liz White. I am Managing Director of Studio by Buzzback. I have spent the better part of 20 years in insights. A large portion of that was spent designing and leading custom research programs and studies for our client partners.
The last 10 years, though, I've spent immersed in qualitative research: first forming the qualitative research function at Buzzback, building that team and its capabilities and vetting partners, and then most recently translating that digitally to Studio.
Ben Wiedmaier
Awesome, and you're going to hear more about Studio and Buzzback, and certainly User Interviews. If you have any questions about those, Liz will be sharing a bit more here in a moment.
I think it's useful for us to not only share what we're doing but why we're doing it. So, here's a bit of what you can expect.
Today we're going to talk about the high-level state of tools, platforms and the tech that helps us do our jobs. We're not going to linger too much, as you'll see there's quite a bit to linger on.
We're going to share an evaluative framework: a set of questions and steps that you and your team can use to make better decisions about the tools and platforms that help you do your work. We're going to share two stories from research team leaders that help reinforce this.
Then we will reveal what ABR means and how it relates to the work that you and your team are doing.
Why us?
Liz and I, as she said, both have many decades in the research and insights world.
We're both on the practitioner side. That is to say, we're doing research both in the field and digitally, and we partner with lots of clients. That gives us broad experience across sectors. So, whether you are on the agency side, a solopreneur or working in-house at a large enterprise, we have likely partnered with a team like you before.
We're very hands-on. And so the things that we're sharing today, we have actually done.
And on the partner and vendor side, we are working with a lot of customers to think about the workflows that we're advocating.
So, we hope that that experience and exposure from our side gives us a bit of expertise and hopefully gives you some confidence in the things that we're sharing with you today.
Let's start with the state of tools.
As I said, it's going to be brief only because there's so much that we could possibly say. I have two images though that I want to share with you that I think really drive home the state of things.
Don't worry about squinting. You probably won't be able to see everything. The purpose of this image is to show you just how many tools there are.
This is simply in the user experience research space. This is where I spend a lot of my time helping product UX and design teams build better experiences across digital and in-person channels.
My colleague, Jackie, will share the link to our report that we used to generate this.
We had so many tools that we decided to create, as you can see in the middle there, a map kind of organizing it into a kingdom.
Again, the idea here, or the thing to take away, is that there are so many tools, from collecting and recruiting all the way to analyzing, sharing and storing in repositories. But of course that's just one part of the insights world.
There's also the ResTech landscape shown here; this is their 2024 landscape. And so, we have hundreds more tools.
I was going to try and count all of them between this one and this one. I couldn't. Suffice to say, if you are like Liz and me, you've probably felt a bit, or more than a bit, of overwhelm in trying to identify the constellation of tools on offer, parse what they actually do and then make sense of it all for your team.
If you're feeling that, well then stay tuned because we're going to be sharing a bit of a framework for helping you parse through that.
Now, Liz, you have been thinking through these various tool stacks and the ecosystem therein in four layers. Can you walk us through what these layers are?
Liz White
Yeah, so those maps are a really good starting point when we think about the state of the landscape today.
One of the things that is interesting: if you had done a market landscape audit maybe 10 years ago, 90% of what you would've seen would've been agencies. The difference among them would just have been the size of the agency.
So, you would have the big four, you would have anything down to small boutique shops and then kind of a smattering in between. But the lion's share of the market was really clustered around strategic agencies.
Then we had this tech boom that kind of created an overcorrection to that. We had a flood of platforms, some of which are represented in those maps, come out that, on the pro side, really put research in the hands of the user and the practitioner. And so, from that standpoint, it was wonderful.
It allowed you to really be in control of your research and not be beholden to agencies. But it also created some friction. That onus of being in control also caused a bit of burden. You were responsible for everything and, depending on who you are, sometimes you were given the keys to a car that you didn't really know how to drive.
And then supporting that were these specialized tools that emerged that were really fit for purpose and that would solve for specific workflows that you needed in order to get your research done.
But what that meant, and it's still kind of the case today, is that we have this big bifurcation of solutions within our space. We have agencies on one end, these platforms on the other and a big blue ocean in between.
One of the things that isn't really represented, and isn't talked about enough in my opinion, is this insights enablement option that allows somebody to tap into something that has strung together all three of those layers.

So, it strings together the strategic agency benefit and what you would be getting from that, the platforms and the specialized tools, bringing them into one comprehensive solution and experience. That takes it right up the middle and bridges the gap between the two ends of the spectrum that currently exist in the landscape.
Ben Wiedmaier
And I'm glad that you ended on the middle ground, Liz, because from our perspective, we're a SaaS tool, and yet so many companies are trying to democratize, enable or empower, whatever the term is, their “non-researchers.” And I use quotes here to broadly define stakeholder partners, whether those are product and engineering folks or junior researchers. But the tool can only go so far. So, I'm really glad that you talked about expert support, because we are hearing from our clients and customers that the tech is good and it's getting them between 60 and 75% of the way there, but they still want that other set of eyes.
And so, I'm wondering if you might walk us through the Studio, or rather Studio by Buzzback, model and how that combination of solid tech with expert oversight and support comes together.
Liz White
So, that's really what we set out to do when we came out with Studio.
It really was built to orchestrate all of these different elements in the qualitative space that historically have been very fragmented and have created a very laborious end-to-end experience when you are standing up a piece of qual. We wanted to bring together all of the pieces and go through a pretty rigorous vetting process, which is what ties us to today's topic: the process that we in fact went through in order to create this toolkit specifically for qualitative research.
So, within Studio we have vetted different fieldwork platforms, recruiting partners, User Interviews being one of them, and also expert qualitative talent, moderators, facilitators and strategists, bringing all of that together in one seamless platform for that end-to-end experience.
One of the things that we had to do when we were starting out was take a look at that map that existed and say, ‘okay, where do we start? How do we go through this process and how do we vet all of the potential platform solutions that could be a part of Studio? How do we build the right tech stack for what we were trying to achieve?’
I actually went back to my notes last night to take a look. We vetted 66 different fieldwork hosting platforms and 63 different recruiting and panel partners to integrate into Studio. With those types of numbers, you have to have some kind of framework; otherwise, it's really hard to distill all of that and make concrete, sound decisions based on what you're trying to achieve and who the right partner is to help you get there.
So, hopefully, for whoever is listening today, like you said, no matter what size of business you are or where you sit in the spectrum of insights, the framework that we applied as we built out Studio is also helpful and relevant as you think about your own tool stack and insights toolkit for 2026.
Ben Wiedmaier
It’s a good time to be doing it, even if you're watching this on-demand. Budgets may still be a little squishy, so it's a good time for you to at the very least be thinking about the questions that you're asking and the frameworks that you have, or don't have, in place to guide those questions.
So, Liz, let's jump into that vetting framework since that's the big takeaway for today.
I'm wondering if you could start with some of the things you experienced when you were meeting with sales development reps or playing around with the tool itself: whether the vibes you were getting suggested they knew their stuff. What did you mean when you shared this “Questions that Matter” slide here?
Liz White
Yeah, so actually before we get into the questions, if I take a step back and think about my starting point, where I was mentally when we got going, I do think there is a lot to be said for mindset and for embarking on this journey in the right mindset.
One of the things that sometimes I even need to remind myself when we are going out and we're exploring and we're vetting is that at the end of the day we are all researchers.
We ask questions, we think analytically, we think critically. We are always in the pursuit of what will enable us to get what we need at the end of the day.
And so, applying those same skill sets, looking at all of these platforms through the lens of a researcher and orienting myself there first allows me to kick off the journey, to start asking my questions and to listen with that analytic and critical mindset that's so important when you have hundreds and hundreds of platforms to weed through. So, I always start with mindset.
Then after that, if I was to look back and distill the questions that we would be consistently asking of the vendor partners as we were going through that process, you can really boil it all down to three key questions.
Some of those questions are external-facing and some are internal-facing, and that's okay too. I think you want a little bit of both as you're going through your evaluation framework and thinking about your solutions.
The first one is, “Are they speaking your language?” We'll talk about what that really means and what that meant to us when we were going through that process.
The second is looking at it and seeing, “Are they trying to be all things to all people or is there a pursuit of excellence in a certain space and a certain function?”
The third is the inward-facing one: “What is your evaluation process?”
Thinking through the different components of that then allows you to not just go off of vibes but to really critically assess whether the fit is there.
Ben Wiedmaier
Yeah, I'm so glad that you had that last one, because what Liz and I are going to be sharing in anecdotes might be what worked for us or our teams or our clients, but it is really important that you start with your own needs, whether you're, again, working internally or supporting clients externally.
We can't give you the answers. We can give you ways to get answers.
But Liz, let's start with speaking the language.
Liz White
First, what that meant for me, and for us as we were setting out: I always look at the founders. I look at the leadership team. I look at the folks setting the strategic direction for the platform to see what they are bringing to the table.
What I want to see is subject matter experts, especially in research.
All of us show up bringing our past experiences, education and understanding to the roles that we have today. So, I want to see somebody with deep expertise in insights and analytics, depending on the platform, guiding the strategic direction.
Secondary, but I think also important, is the service element of the platform and that solution.
We all just came off a year of the conference circuit, where we listened to presentations that often talked about the tech but then said, ‘Hey, don't worry, there's still a person there. There's still somebody there to support you.’
And to me, I never want to see service as an afterthought. I actually want to see that embedded into the fiber of the platform because no matter how much we want to DIY or self-serve, service is still critical in our line of work.
So, to me that's the way I was looking at ‘Do they speak your language?’ I think it is reflected in the platform, the features and what's available to you. Once you start there, these kinds of red flags and green flags emerge.
To me, red flags were always things like pitches that hyper-focused on speed, or this oversimplification of our space. It's not simple, and I think there is a lot of watering down that makes it seem as if insights is just this volley of question and answer, but it really is more nuanced and complex than that.
And so, to me that just shows that you don't understand the space in the deep way that I need you to. Another red flag is missing features, things that as researchers we would expect to be able to do because they're table stakes. The worst is when you get in there to try it, you expect to be able to do something really basic and you can't.
So, those things tend to then emerge that all fall under that umbrella of, ‘Do they speak your language?’
To me, green flags are that this is being built and led by people who understand my world, who understand insights, research and analytics, and who are baking in methodological guidance and service along the way. Because no matter who you are, you'll hit a point at which you need that guidance and expertise to ensure the success of whatever research you're trying to launch and kick off.
Ben Wiedmaier
So, those are the questions and the ways you can get a sense of the extent to which the tool, or the founders and builders behind it, are invested in and know about research. Something else that teams often bump into, at least from where we sit on the User Interviews side of the space, is the “all-in-one flaw,” or “flaw-in-one” as I've written here.
That is essentially where a tool tries to be the Swiss Army knife of insights research, broadly defined; it tries to support every phase of the research. Now that AI is being sprinkled into more tools, I'm seeing all-in-ones say they can use AI to design your interview guide and recruit from a panel they don't really describe much. They have some sort of fieldwork capability, and then of course there's an analysis and a share-out.
What I have found working about a decade on the vendor and research tool side of things, and I am grateful to Liz for letting me keep this in, is that it's ‘meh-in-most’ as opposed to ‘all-in-one.’
They may start out as a tool that focuses on testing, or maybe they're an in-app monitoring tool, and they sort of glue on all these other things, either because they think that's the way they can raise their subscription price or because they're hearing from current customers about other pain points they don't currently solve. And instead of just getting better at what they started doing, instead of really drilling down and trying to become best in class, they try to become more and more and more, and it just reduces quality across the board.
So, for example, at User Interviews we just focus on what our founders call the ‘Participant Layer.’ And so, that's recruitment, screening, participant scheduling, incentives and that's it.
We do use artificial intelligence and large language models, but we use it in service of that mission. And so, we have an algorithm that matches participants with projects on the backend. We don't need to bother you, the researcher, with what's going on. If you want to know, you can ask us.
We also are focusing a lot on fraud detection because, again, that is a big part of making sure that the participant layer is as smooth and as positive as possible. And so you need to ask yourself, or your team, what are your core research jobs? Think of building a tech stack through a jobs-to-be-done sort of framework. We'll get to that in a moment.
What do you need to do? Are you on a more quantitatively oriented team, where you really need a best-in-class survey tool or a very robust analytics platform? Or are you more qualitatively minded, needing something that can help organize interviews or moderated sessions? Then start asking the questions that Liz featured: ‘Do they know their stuff, and do they know the stuff that you need them to know?’
I would really caution you against thinking that you can put all of your chips, all of your budget, into a single tool. I've heard from so many research leaders that even though the marketing makes it seem like it works out, it often doesn't.
Liz, I don't know if there's anything you want to add there.
Liz White
I think that's what we appreciated about you all when we were building our partner ecosystem within Studio.
For us, our mindset and approach was that instead of aligning with one partner who is everything to everyone, we would rather create a unique ecosystem that allows a user to access best-in-class tools or experts for qualitative research. That's what we've done, and it's what our users appreciate about Studio.
We wanted to really home in on that, to specialize in qual and in building a toolkit that brings in who we consider best in class in their respective spaces, and to build that partner ecosystem layer within Studio versus trying to be all things to all people.
The same was true as we were building out Studio. We could have built our own panel and our own hosting framework, and we intentionally didn't. We felt that there was more power and value in aligning strategically with partners that thought like we did, enabling our users to access these best-in-class solutions at their fingertips, all within our platform.
Ben Wiedmaier
So that's the second part of our vetting framework, ‘What do they know? What do they try to do?’
Then the last one, the last sort of bucket here, is your own internal process. Take us through this, Liz.
Liz White
Yeah, so I'll do this one quickly. I want to get to the sound bites of our research leaders because I think it'll be helpful to the folks listening in.
But one of the things I'm always a big fan of is doing for yourself what you do for others. What that means in this context: often we are supporting our client partners, stakeholders or brand and marketing partners to understand the landscape of the category they serve. There are different needs-based frameworks that we often apply in research, jobs-to-be-done being one of them. The idea is to take a page out of the playbooks you use in service of the business and actually apply it to this vetting process.
One of the things that I really like about jobs-to-be-done is that it's not just about the features.
So, while the functionality is one layer to it all, there are also the emotional and benefit outcomes. The reality is that those are also components you have to care for.
So, we did build out, and we're sharing this with everyone who's listening today, a handout that really breaks down these different components. It's what we use as we go through and build out our toolkit. It is a page out of the jobs-to-be-done framework.
Hopefully this is helpful for everyone, but I would encourage you to make it your own, especially when it comes to how you're weighing each of the elements. Those were the things that were important to us, but there may be different things that are important to you.
This is a leave behind that hopefully will be helpful to everyone, but again, I encourage you to kind of make it your own.
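[Editor's note: For readers who want to turn the handout's weighting into something computable, below is a minimal, hypothetical sketch of a weighted jobs-to-be-done scorecard in Python. The criteria names, weights and 1-5 scores are illustrative assumptions, not values from the Studio handout; substitute your own.]

```python
# Hypothetical weighted jobs-to-be-done scorecard. Criteria, weights and
# 1-5 scores below are illustrative only, not from the Studio handout.

CRITERIA_WEIGHTS = {           # weights should sum to 1.0
    "functional_fit": 0.4,     # does the tool do the core research job?
    "emotional_outcome": 0.2,  # e.g., confidence in the output, less burden
    "benefit_outcome": 0.4,    # the business result the tool is "hired" for
}

vendors = {
    "Vendor A": {"functional_fit": 4, "emotional_outcome": 3, "benefit_outcome": 5},
    "Vendor B": {"functional_fit": 5, "emotional_outcome": 2, "benefit_outcome": 3},
}

def weighted_score(scores):
    """Combine 1-5 criterion scores into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[criterion] * score
               for criterion, score in scores.items())

# Rank candidate vendors from best fit to worst.
for name, scores in sorted(vendors.items(), key=lambda v: -weighted_score(v[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```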
Ben Wiedmaier
Yeah, and I'm so grateful to the Studio team for sharing that with us.

It is in the handout section. It's an Excel sheet, so you should be able to edit it as you want. We'll share it with anyone who's watching this on-demand so that you have it as well.
Okay, with our remaining time, wow, the countdown clock has started. That'll get you going.
We've got two executive stories, we'll make them quick.
The first one is from Calendly; maybe you use it to schedule. Gordon is their research manager. They're a two-person team that supports all of Calendly, mostly their product, design and engineering teams. They're at a $3 billion valuation, and to get there, they have a lot of questions.
So, recently I sat down with Gordon to ask how he's building his own tool stack. Here are the questions he's asking as he builds it.
So, let me see if I can get this on the screen and play this.
[Audio plays of Gordon Toon, research manager at Calendly.]
We are at the stage of not incredibly scrappy, but also not the most mature, biggest, Google-type research team at Calendly. And that's why we've started to go a bit more towards the BYO [build your own] approach.
Because we have found that we're getting a greater variety and depth of questions across the organization, across the business, which requires a richer set of methodologies, and being able to actually do those methodologies very well. And so sometimes it's better for us to get that specialized tool that does that methodology particularly well. And we as a research team can manage which tool to access.
Ben Wiedmaier
For Gordon, his team is more analytics driven because they have so many people visiting their homepage. And so, the product questions, or the questions that his leadership is asking, require a different set of tools. For him, no one tool could do all of it.
They use a combination of an in-app survey and then they use our tool, User Interviews, to build their own panel.
So again, an example of how a scrappier team with a large variety of questions needs to build its own stack, because they're just not quite sure what they're going to get the next day.
Liz, do you want to set up our next executive stakeholder example? I'm going to pull up the audio here whenever you're ready.
Liz White
This kind of represents the other end of the spectrum.
So, there may be some listeners who identify with Gordon: you're a team of one or two, and you have to be lean and scrappy as you build out your toolkit and think about how that happens in a really agile way.
The other end of the spectrum is Jeff at Verizon, who has a different world and a different reality. There are different resources available to him, but with those come different challenges and different problems he needs to solve for, based on the enormous learning plan they have and the enormous amount of data they sit on: how do you actually do something meaningful with that data and design your tool stack around it?
Ben Wiedmaier
So, here's Jeff.
[Audio plays of Jeff Ulmes, senior director of customer and product insights at Verizon.]
For me, I start by looking at defining the ends of the spectrum of insight.
On one end, it's a hundred percent quantitative data and analytics. Because more and more data is becoming available, the tools are becoming more sophisticated in terms of how we can just interpret data and extract insight. And on the other end, I want to balance that with the truth of real people.
So, for me, I define the two ends of the spectrum first, which on one hand is real interviews with real people and on the other hand is getting as massive and quantitative and predictive as I can. Then I fill in between there.
The other thing is everyone is just kind of being asked to do things faster and leaner. What that means for me is standardization is actually our friend, not only for speed but also for greater insight. So, I'm really leaning towards more of that insights engine. We look at trends, and we plan for really robust integrations.
So, it's being able to do really lean survey research that connects into large data sets, where that same lean survey triggers the need to go off and do really rich qual to bring it to life.
Ben Wiedmaier
So, Jeff, as you can hear there, is talking about not just specific tools but an ecosystem, because he's enabling a whole host of people to do a whole host of things. And you'll get that audio in our follow-along.
But, I just want to make sure, Liz, that we have time for you to walk through our, oh gosh, sorry, the ABR, which is Always Be Researching.
Liz White
Always Be Researching. It's my favorite acronym. I'm coining that.
For me, it is something that I really do try to live by in my professional life. I would encourage everyone to be doing that as well.
So, as we wrap today, the thing we would love to leave you with is that mindset of Always Be Researching: as you embark on your journey, start by defining the toolkit that is right for your needs and your environment, and ask a core set of questions:
What job am I hiring this tool for?
What would success look like when we map that out, map out the new tool integration and actually define those KPIs? Is my hypothesis right?
How will I test that hypothesis?
And what could prove me wrong? Because we're all researchers, and sometimes you test a hypothesis and you're proven wrong. So, how do you pivot when that happens?
So, the final call I would leave everyone with is: ‘Always Be Researching.’
Ben Wiedmaier
Yeah. Thank you so much, Liz. It's great to peel back the layers a bit on how you did it.
Here is where you can connect with us and learn more about both Studio by Buzzback and User Interviews.
Thanks to Quirk’s. Thanks to you and hopefully you'll be building tool stacks that meet your team in 2026 and beyond. Thanks everybody.
Liz White
Thanks.