AI in 2025: Industry Adoption, Practical Tips and What’s Ahead
Editor's note: This article is an automated speech-to-text transcription, edited lightly for clarity.
On January 30, 2025, Rival Technologies sponsored a session during the Quirk’s Virtual Sessions – AI and Innovation series. The session focused on tips, industry adoption and what to look forward to when it comes to AI in the market research and insights industry.
Andrew Reid, CEO and founder, and Dale Evernden, head of UX and innovation at Rival Technologies, walked through some tips on using AI, with real-world examples from clients they have worked with, like Oura Ring.
Session transcript
Joe Rydholm
Hi everybody and welcome to our presentation, “AI in 2025: Insights Industry Adoption, Practical Tips and What’s Ahead.”
I’m Quirk’s editor Joe Rydholm – thanks for joining us today. Just a quick reminder that you can use the chat tab if you’d like to interact with other attendees during today’s discussion. And you can use the Q&A tab to submit questions to the presenters and we will get to as many as we have time for at the end.
Our session today is presented by Rival Technologies. Enjoy the presentation!
Andrew Reid
Hey Dale, you there? Are you having a good day?
Dale Evernden
I'm here. I'm here.
Andrew Reid
Awesome. Here we are for 2.0 of Quirk’s and us, for the second year, getting a chance to do an AI presentation.
So, if you're ready to rock, I'm going to get going here.
Dale Evernden
Nothing to talk about.
Andrew Reid
There's so much to talk about.
Beautiful. Alright, let's just get going here.
Alright, so our talk today, “Insights Industry Adoption, Practical Tips and What's Ahead in AI.”
So, we are going to cover a whole bunch of different things, which is why we've got a bit of a general topic and description. I promise everyone will get some great value out of this.
My name is Andrew Reid. I'm the CEO and founder of Rival Technologies. I also previously founded a company called Vision Critical, now called Alida, in the insight community space.
I'm the co-CEO and founder of the Rival Group, which consists of Rival Technologies, which we're going to spend a lot of time on today, and our sister company Reach3, which is a full-service consulting agency headquartered out of Chicago.
Dale, welcome to the party. Maybe say a couple of things about yourself.
Dale Evernden
Yeah, hi.
I've been working with Andrew since the beginning on this Rival trip. My particular responsibility is design and innovation. As a result of that and spending a lot of time working on AI, I am looking forward to talking about it.
Andrew Reid
Yeah, it's going to be great. Dale was employee number one here and we always have a great time collaborating.
So, just a little bit about Rival: we've built a conversational data collection platform. We really focus on insight communities, on voice of market, on what we call action-triggered insights. So, integrating with Salesforce and other databases and other customer systems of record to be able to make sure we're talking to people in the moments that matter most, whenever possible. We also work over SMS. We deploy over e-mail and very soon over WhatsApp.
Just one other point is we're very proud of the fact that we were number nine on the GRIT Report last year and we continue to get great rankings in G2.
For us, we really do love conversational research. That was the real idea around Rival and the kernel that really got us going, and it was: why can't we reimagine quantitative research? There's been so much innovation in our space, but not a lot of innovation in the survey itself.
Does it have to feel like taking a test? Does it have to be that experience, and can we evolve into something that still lets you do very sophisticated research but do it in a much more innovative way?
AI is an area we've been focused on for quite a while. We actually have four live AI integrations on the platform. We've got four more in the works right now. We've been presenting at conferences. I have been fortunate to get invited to present on the client side many times around AI. We've done a handful of webinars as well.
So, it's a space that we know well and we're very comfortable talking in and we're really excited to be a part of.
We're going to cover all these areas. We're going to get into a bit of this awkward relationship we have with AI. We're going to talk about value. We've got an Oura Ring case study. We're going to talk about some tips that we have and some rules of engagement. Then hopefully some questions.
If you have questions, please make sure you're putting them in the chat. We're writing those down to be able to ask them later on.
Okay. Dale, I'm going to pass it over to you to start to talk about this awkward relationship that everyone does have with AI.
Dale Evernden
Thanks.
To start with a quick joke: why did the market researcher break up with their AI assistant? Because it kept saying, "It's not you, it's my algorithms."
If anything, that's an interesting transition into the general sentiment out there right now. I think we can all agree that we're right in the middle of a very large paradigm change in the way we use software and computers across society. It's a little bit awkward, quite frankly.
We're moving really, really fast. The rate of change and the scope of change is enormous. I think these systems can behave in ways that are foreign to most users and so this sets the stage for a little bit of social anxiety. I think it's worth calling that out at the front end to just work through it a little bit.
Our big recommendation is to lean in, but we want to start by just recognizing that it's not always easy to lean in.
A couple data points. In the spirit of our industry, we're seeing that AI is growing at an extremely rapid pace. 78% of Americans are now familiar with AI. That's a huge change from last year.
So, the topic and the way it applies to the work we do is growing incredibly fast, but so is our uncertainty and a little bit of our anxiousness around it.
Andrew Reid
I think I'll just add that it's funny; for any of us having conversations, this is coming up in day-to-day conversations now with people, and you can tell there are a handful of people that seem to be really switched on, leaning in, curious. There are a lot of people that are kind of closing their eyes and hoping that this is maybe a nightmare that's going to go away.
So, that anxiety is real and is definitely something that we're going to have to continue to manage and to deal with.
Dale Evernden
I mean, in some cases there's this dialogue that emerges, if only in our heads but also just amongst ourselves, which is like, "wow, are these new computing tools going to replace us?"
We're arguing that if we lean in and embrace it, no.
We've all seen this quote floating around (or maybe you haven't): is AI going to replace us? Probably not, if you embrace AI. It's the humans who don't embrace AI that might have some trouble.
So, we're here to advocate for embracing AI and leaning in.
Andrew Reid
Exactly, exactly.
Dale Evernden
Really quickly, just to fortify that point, put a period on the sentence so to speak.
The goal here is to stay relevant, competitive and empowered. We've been running a lot of experiments with this technology and the outcomes have been really, really impressive. We're extremely excited about it and one of the goals today is to talk through and share some of those outcomes.
But again, the net here is that we want you to really want to invest in this new reality, and hopefully you will.
Andrew Reid
It also seems like we're just at the very beginning, even though we've come so far in the last year, year and a half with technology like ChatGPT. These three words, agentic, quantum, AGI: these are things that we're hearing more and more, and we're seeing articles around them.
Maybe, Dale, you can just kind of quickly unpack a little bit of what's coming at us.
Dale Evernden
Yeah, I mean if you take agentic, that's kind of what we're seeing emerge at the moment in the industry: you take large language models and AI tech, and then you give them a workflow and put them to task. What's interesting is that these agents can use their reasoning to work autonomously, and then they evolve their capacity to deliver value. When you add that to next-generation computing like quantum computing, that's where you really see a path to AGI.
Quantum computing drastically accelerates the ability for computation to work through problems. So, when you take agentic approaches and you take quantum, that's where we get to AGI.
We could talk at length about these topics, but to your point, Andrew, the big takeaway is it's not slowing down. If anything, it's getting a lot faster, and so the sooner we embrace it, the better.
Andrew Reid
Yeah, and I guess one very little, small comment for researchers out there is this is going to allow you to dream a lot bigger.
You used to think that what you do in volumetric forecasting or discrete choice modeling was really complicated. I mean, the ability to get even more sophisticated now with analysis, and with the design of some of these interesting survey experiences, I think becomes that much more limitless.
In the spirit of time, let's just keep moving forward here.
We're going to talk a bit about three foundational truths that we think help set the stage and line us up for this next section.
Dale Evernden
When I talk about embracing AI with stakeholders, partners and collaborators, I always like to circle back on a couple concepts. And so, these are the ones I want to talk about quickly.
The first one that not everybody connects the dots on is the fact that these LLMs and agents are probabilistic systems. We're used to deterministic systems which are much more rigid and predetermined than an AI system.
There's a tremendous amount of power that comes from the fact that these systems are probabilistic: they're much more flexible, they're adaptive, they're autonomous and of course intelligent, but they can behave in different ways. You can come to the same system three different times and get three different outputs, and that can be kind of alienating.
But in fact, what you find over time and through adoption is it's really empowering. It's just a matter of getting used to the fact that we've got a different human computer interaction model there and so that's really important.
These tools hallucinate. As market researchers responsible for delivering insights that drive business, we really need to keep this in mind. We don't have to spend too much time on this, we all get this, but this is really critical to success in our business.
Finally, related to engaging with these systems, they really require direct guidance and direction. The one thing I would like to point out is that most real-world applications of AI require an iterative engagement. You can't just say "do this" and then it'll do it. It'll come back with a first draft and you've got to tell it and shape it a bit. So, get used to that iterative engagement, which is different from the systems we've used in the past.
The fourth one is really to pull those ideas together. We're really champions of this idea of humans in the loop.
In our industry, I think this is incredibly important as we adopt this technology. LLMs and agents, they're not expert systems that are independently reliable. They really thrive and do their best work when we're in control.
We champion this idea of augmentation over automation. We really want to make sure that the human remains in the loop. And so that's a big part of how we've been building value and engaging with this technology.
That sits on top of those other three points, and they all come together to set a context for adoption.
Andrew Reid
I think you're going to see technology in our space come out that's trying to be a replacement, where it's, you know, "can I get some inputs and bypass research and be able to run research without having to have a researcher?" And that will be great. There'll be some interesting technologies out there.
For us, really, the MO of how we think is: how do we give you that exoskeleton? How do we take smart, journeyman market researchers that have been in the space for a long time and really equip them to do great work and to increase the velocity, the output and the quality of what they're doing, using technologies like these?
So, to move into some practicality now, we want to show you some technology and just lay out how we think about the world.
First of all, where are you using AI? What are the different areas to use it in market research?
Well, I think about these five areas. You've got inputs: the inputs you're getting to help determine the studies you're going to run, the methodologies you're going to use, the suppliers you're going to pick, the technology you're going to use. Then you have authoring, or all the things you do to pull together that data collection instrument; the fielding of it; and the analysis of it.
Then once you've done that, what are you doing from a knowledge management perspective? How are you managing all those reams of data that you've collected in an intelligent way to benefit your organization?
The middle three are areas that we've actually shipped solutions for. All of these areas we have thoughts, plans and roadmaps around, and we really look forward in 2025 to leaning in and delivering value across the core three, but ideally across all five as we look at the year here.
We're doing this through this lens that we call “Rival Labs.” And what's interesting is we're a pretty new company. We really had our dev team pushing forward since 2018 and we found that we needed to even disrupt ourselves.
We're a fairly modern, not massive company; between Rival and Reach3 we're about a 150-to-160-person organization. We're not a really large organization, nor are we overly tiny. But we built this motion just to make sure that we could go as fast as possible and deliver value as quickly as possible. We tried to remove any handcuffs that you may have if you take a traditional approach to thinking about innovation.
Dale, maybe you could take us through some of the principles that we're using when we think about Rival Labs.
Dale Evernden
It's all about speed, really trying to keep up. The whole goal with Rival Labs was to create a safe place to run experiments and take some risks.
The foundational principles that we've got in place to really make Labs successful are rapid ideation (coming up with ideas and exploring them on top of the technology that's available to us) and agile iteration. Those are the two operational principles we have.
Then we use collaboration to really hone and execute on those two pieces and we've done some great work with partners and clients to that end.
These three principles define the lab's mission, and they've helped us really develop an innovation program that meets the opportunity, so to speak.
Andrew Reid
We have something I know that we're cooking right now, that's going to be released soon. Where two years ago it was T-shirt sized as a $400,000 or $500,000 initiative minimum to get it out the door. And now we're living in a world where in a week or two you can actually accomplish pretty much the same thing. I've been blown away by it.
The other thing that's interesting is this interplay with security and compliance. We all have to make sure we're also maintaining that security and compliance posture. That makes things very interesting, but those are challenges that are all solvable.
Dale Evernden
Yeah, I'll just say one thing on this. We often recommend as folks engage with this technology to set up a separate experimentation channel so that you can move quickly. We don't recommend experimenting with the tech on real business problems until you've made the investment and learned how to use it properly.
That's the other piece to this. Give yourself some space to play, so to speak.
Andrew Reid
Yeah, for sure.
Okay, so the first one we're going to get into is actually live in our product. This has been live for maybe about two years. This is called AI Tone Refinement.
This is the backend of the Rival platform where you're authoring a conversational research experience or conversational survey.
And what we did here is we thought about the fact that a lot of researchers are fantastic at building surveys. They're great practitioners. They're not necessarily always the best at the language and how they engage their target, who they're talking to.
If I'm trying to talk to a specific group of people that's way out of my age group (I'm 48), say guys that are 18 that maybe live in a different area than I do, I will likely be way out of my depth. Tone Refinement allows us to take a whole list of tone descriptors and customize that experience.
You've got a slider that lets you decide whether you want a lot of emojis or a few emojis. You can do it on the entire survey, or you can change those tone descriptors on a card-by-card basis. Then when you click apply, that apply is actually training our LLM; it's showing the system it got something right.
So, what's actually happened is that over the course of the hundreds and hundreds of surveys that have gone through AI Tone Refinement, it has really helped to tune our LLM. Its recommendations are way better now than they were on day one.
Anything you want to add in here Dale, before we move along?
Dale Evernden
Just one quick thing. This is an interesting example of how we've designed for Human in the Loop.
We could have just automated this process with a single button that updates the whole chat, but we've built it so each card has affordances beside it: a refresh control and a slider to turn the volume up and down on the emojis. You can choose to apply tones at the card level or for the whole chat.
So, we really built in Human in the Loop affordances on this, and I just wanted to mention that.
Andrew Reid
Yeah, it's been really, really well received.
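Rival hasn't published the prompting behind Tone Refinement, but the mechanics described above (a list of tone descriptors plus an emoji slider, applied per card or to the whole survey) can be sketched roughly as follows. The function name, descriptor list and prompt wording here are illustrative assumptions, not Rival's actual implementation:

```python
# Hypothetical sketch of how tone descriptors and an emoji slider might be
# assembled into an LLM rewrite instruction for one survey card. The
# descriptor names and prompt wording are illustrative only.

def build_tone_prompt(card_text, descriptors, emoji_level):
    """Compose a rewrite instruction for one survey card.

    descriptors: e.g. ["casual", "playful"]
    emoji_level: 0 (none) .. 5 (lots), mirroring the slider in the UI
    """
    emoji_hint = {
        0: "Do not use any emojis.",
        5: "Use emojis liberally.",
    }.get(emoji_level, f"Use a moderate number of emojis (level {emoji_level}/5).")
    tone = ", ".join(descriptors)
    return (
        f"Rewrite the survey question below in a {tone} tone, "
        f"preserving its meaning and any answer options. {emoji_hint}\n\n"
        f"Question: {card_text}"
    )

prompt = build_tone_prompt(
    "How satisfied were you with your check-in experience?",
    ["casual", "friendly"],
    emoji_level=5,
)
```

The "apply" step Andrew describes would then feed the accepted rewrite back as a positive training example, which this sketch does not cover.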
Okay, I'm just going to play a video of Dale talking that's a little less than a minute, which goes through AI-generated insights. This is specifically dealing with unstructured data. Unstructured open-ends are sort of the low-hanging fruit of our industry, I would say. It's the safe space for everyone to run towards because there's a lot of work involved in massaging unstructured data into something that's usable.
Dale Evernden
Alright, here's a quick look at our insights summarizer feature now available for open-ended text and open-ended video questions.
We've selected an open-ended text question here, and to the right we've got a table showing all the responses we've captured for that question. The question in this case was asking participants to respond on a recent check-in and onboarding experience with an airline.
If I select the insights tab, positioned beside the 'all responses' tab here, I have a 'generate insights' button available in the UI, and I select that. This sends all the verbatims to the AI to process and run thematic analysis.
The AI responds by summarizing the key insights and then organizing those insights by a confidence score. You can see our top two are presented here with fairly high confidence scores; this one's got a 95. Each of these insights can be clicked on and expanded, and I can make sure that my source verbatims, which are presented here by relevance score, are actually fortifying and justifying the insight. That gives me, the researcher, the ability to make sure that the quality is where I want it on these insights.
The insights themselves, as I said, can be run multiple times. I can generate another set of insights here and it'll produce another thematic run. So, if I'm collecting insights over a period of time, I can see how those insights evolve as the research is fielding.
That's our insights summarizer feature. Thanks for taking the time to watch this demo.
Andrew Reid
A couple of things to note here.
One, if you click on those summaries, those are all editable. Again, same thing with Human in the Loop where now we've given you a great head start.
If you start to look at the verbatims that made up that summary, you may want to slightly change that language to fit with the culture of your organization or some of the things that you've noticed yourself.
So that's one thing we think that's really important. Dale, maybe just explain really quickly the confidence score and how that works.
Dale Evernden
Yeah, another example of us building a purposeful affordance to really facilitate the human element.
The confidence score is our way of telling the researcher that they may or may not need to engage with that particular insight and ensure that the quality is where they want it.
You can, at a glance, see which of the insights are pulled together by the AI with more or less confidence. For the less confident ones, you should probably dig in, look at the source verbatims, look at the relevance scores and just double- and triple-check that the insights are actually valid.
That's the spirit of those features really pulled together to try to facilitate a QA cycle.
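As a rough illustration of the confidence-score triage Dale describes, here is a minimal sketch. The threshold, field names and data shape are assumptions made for illustration, not Rival's actual schema:

```python
# Hypothetical sketch of confidence-based triage for AI-generated insights.
# The data shape and threshold are illustrative; Rival's schema may differ.

REVIEW_THRESHOLD = 80  # insights scoring below this get flagged for human review

def triage_insights(insights):
    """Sort insights by confidence (highest first) and flag low-confidence
    ones so the researcher digs into their source verbatims."""
    ranked = sorted(insights, key=lambda i: i["confidence"], reverse=True)
    for insight in ranked:
        insight["needs_review"] = insight["confidence"] < REVIEW_THRESHOLD
    return ranked

insights = [
    {"summary": "Kiosks were frequently broken", "confidence": 95},
    {"summary": "Staff were hard to find", "confidence": 72},
    {"summary": "Signage was confusing", "confidence": 88},
]
ranked = triage_insights(insights)
```

The point of the flag is exactly the human-in-the-loop QA cycle described above: high-confidence insights can be skimmed, low-confidence ones get their verbatims checked.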
Andrew Reid
Yeah, exactly.
The summarizer works for both text and video, which we find is great. This one, you can see, is pulling in source videos that we've transcribed in real time and tagged with sentiment. This is just demonstrating that you've got the ability to use this either with just video, with just text, or with both in concert for questions.
We're really big fans at Rival of, every time you ask an open-ended question, giving someone the choice to open up their front-facing camera and respond that way. You can be more verbose, you can show emotion; there's a whole bunch of things you can do. We find right now around 10% of our open-end completes are video. We're seeing that number very slowly trend up, and I think that trend is going to continue.
Next one is video reels. This is really, really exciting. This is something that's very fresh that we're just coming out with right now. Dale, do you want to talk a little about video reels?
Dale Evernden
Yeah, so what's cool about video reels is we were able to take the AI summarizer feature that we just showed off and build on top of it. What video reels does is it takes open-ended responses, whether that's a text response or a video response, and we run the thematic analysis using insight summarizer. Then we ask AI to go and review all of the clips that support a particular insight and then find specific pieces of that verbatim that are evidence of the insight. Then it clips those and then pulls those together into a single video, a highlight reel, we call that an evidence video.
You can see there, in the previous example when you clicked on an insight, it opened up and showed you the source verbatims. In this articulation, we can click on each of the insights, and it shows you your evidence video.
Now I will say this, we've also built in affordances for the researcher to go in and editorialize those videos themselves. There's a timeline, you can move the clips around, you can expand and contract the clips, you can add other clips, you can create a video from scratch and not use the AI to bootstrap it.
This is something we're really excited about. It represents stacked innovation and how we've taken a bunch of different AI value stories and pulled them together into one offering.
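The clipping step Dale describes can be sketched in simplified form. The padding value and the idea of pre-computed evidence timestamps are assumptions for illustration; a real system would also have to cut and concatenate the actual video files, which this does not do:

```python
# Hypothetical sketch: given the (start, end) timestamps of the evidence
# spans the AI flagged for one insight, compute padded, chronologically
# ordered clip segments for a highlight reel.

PAD = 0.5  # seconds of breathing room around each evidence span (illustrative)

def build_reel(evidence_spans, video_duration):
    """evidence_spans: list of (start, end) seconds where the verbatim
    supports the insight. Returns padded clips clamped to the video length."""
    clips = []
    for start, end in sorted(evidence_spans):
        clips.append((max(0.0, start - PAD), min(video_duration, end + PAD)))
    return clips

clips = build_reel([(12.0, 15.5), (3.5, 6.0)], video_duration=30.0)
```

Editing affordances like the timeline Dale mentions would then operate on this clip list before rendering.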
Andrew Reid
Yeah, very exciting. I think that a lot of people are very interested in creating these show reels from video. And if you can have the technology do all the heavy lifting for you, why wouldn't you want to include this in really all of your quant studies?
We're going to jump in on one of our clients, Oura Ring, a ring-based fitness tracking company that's been around for a while now. They've got some really interesting technology and a very loyal following. Those of you watching that have an Oura Ring, my guess is you love it.
I have one and it's great. It tracks my sleep, it tracks my activities, it tracks a lot and once you have those things, it's hard to live without them.
I'm just going to play this quick video here and then we'll chat about it.
Dale Evernden
Alright, let's look at the AI probing feature with this Royal Airlines brand that's asking its customers to provide feedback on a recent check-in experience at the airport.
We've got a simple Likert-style question here. We're going to say that the check-in experience was not good, and then it's going to ask me my first open-ended question: 'tell us a bit more about specific pain points.' I'm going to say the check-in kiosk was broken and submit that.
The AI is going to take that response, process it and come back with a conversationally aware probing question. So, “thanks for sharing that detail. How did the broken check-in kiosk affect your overall experience?”
I can say, “well, I had to line up and was almost late for my flight.” Submit that and then we'll do another follow up question.
In this case it is going to say, "given that you were almost late, can you describe how the staff or any supporting systems helped the situation?" And I'm going to say, "there were very few staff to help process the long lines."
So, what we've got here is an initial open-ended question. I've responded to three probing questions and that thread is then taken and summarized by the AI to finalize the verbatim. And I get a final verbatim here, which is then included in my data for the chat.
That's just a quick demo of our AI probing feature. Thanks for taking the time to watch this.
Andrew Reid
A few points to note here that I think are interesting.
One is the flow: being able to get that fresh verbatim from someone and immediately react to it.
Then one thing that's interesting that's happening on the backend is that we're stitching this back into one verbatim. So, instead of it being three separate verbatims, now we've created one verbatim that you can analyze on your own.
You're getting a unique experience on the responding side, but on the reporting side you're equally avoiding the pain-in-the-butt factor of having three separate responses. You really just want one spot, one response.
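In the real feature an LLM summarizes the probing thread into a polished verbatim. As a purely illustrative, deterministic stand-in, the stitching can be sketched as joining the participant's responses into a single record:

```python
# Hypothetical sketch of collapsing a probing thread into one verbatim.
# In the actual feature an LLM summarizes the thread; here we simply join
# the participant's responses so there is a single record to analyze.

def stitch_verbatim(thread):
    """thread: list of (question, response) pairs from the probing flow."""
    return " ".join(response for _question, response in thread)

thread = [
    ("Tell us a bit more about specific pain points.",
     "The check-in kiosk was broken."),
    ("How did the broken check-in kiosk affect your overall experience?",
     "I had to line up and was almost late for my flight."),
    ("Did the staff or any supporting systems help the situation?",
     "There were very few staff to help process the long lines."),
]
verbatim = stitch_verbatim(thread)
```

The payoff is the one Andrew names: downstream analysis sees one verbatim per respondent rather than three fragments.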
The natural place for this whole idea of iterative innovation to go is what we call thoughtfulness scores.
So, if we had just asked Dale how his experience was and he said "bad," well, "bad" is not a very thoughtful response. If he had said, "The kiosk is broken, I was almost late for my flight, I'm really frustrated, I'd appreciate if someone would call me," the system may not need to probe much further. It may say, "we just have one last question," or it may say, you know what? That's enough.
This idea of having probing for the sake of probing isn't a very smart way to use the technology. What's much more intelligent is to think about the thoughtfulness of the response in real time and decide how much you should probe and how far you should probe.
In some instances, maybe I've got to probe five levels deep to get at the issue, and in some, one level may be just fine. And so this is a great example for you to see how we started with the summarizer, then we added in video, then we moved to the ability to create these reels off the backs of those, matching up with the summaries we have. Then we started moving into probing. And then very quickly we realized that probing on its own is not great; thoughtful probing is much better.
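A toy version of thoughtfulness-gated probing might look like the following. The word-count-and-cue heuristic is a crude stand-in for however Rival actually scores responses (most likely with an LLM), and the cue list, thresholds and probe cap are invented for illustration:

```python
# Hypothetical sketch of thoughtfulness-gated probing. The scoring heuristic
# is a crude illustration; a production system would score the response with
# an LLM rather than by word count and detail cues.

DETAIL_CUES = ("because", "broken", "late", "frustrated", "call me")
MAX_PROBES = 5
THOUGHTFUL_ENOUGH = 6  # on a 0-10 scale

def thoughtfulness(response):
    """Very rough 0-10 score: longer, more specific answers score higher."""
    words = len(response.split())
    cues = sum(cue in response.lower() for cue in DETAIL_CUES)
    return min(10, words // 5 + 2 * cues)

def should_probe(response, probes_so_far):
    """Probe again only while the response is thin and we're under the cap."""
    return probes_so_far < MAX_PROBES and thoughtfulness(response) < THOUGHTFUL_ENOUGH
```

With this gating, "bad" triggers a follow-up while a rich first response like the kiosk example above ends the thread, which is the behavior Andrew describes.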
Dale, if you want to add a bit more in here, but this is super exciting and something that we're thinking through in a thoughtful way and looking forward to releasing soon.
Dale Evernden
Yeah, I'll just say that when you try to land a plane with this technology, sometimes the first idea doesn't always hit the way you want it to. This is where iterations are important.
In this case, we learned that without the Thoughtfulness Scoring on the responses, the AI was a little bit repetitive in its questions. And so, to your point, Andrew, by adding the thoughtfulness scores we were able to get a better user experience that was a little bit more appropriate.
The other thing I'll mention is that we also used Thoughtfulness Scores to evaluate the delta between the initial response to the open-end and then the probed response, which is that generated verbatim.
You can see in the slide here we've got a four out of 10 and then a nine out of 10. And this represents kind of an interesting way to visualize deeper, richer insights.
And so over there on the polar graph, you can see how through probing we were able to get a much more thorough and meaningful response through the engagement.
One of the takeaways here is iteration. Again, we didn't have thoughtfulness scores front of mind when we first started to explore AI probing; they emerged as an observation from our investigations with the technology.
This is why iteration is so important and rapid concept development is so important. If you can head down a path, you're ultimately going to end up somewhere interesting. And sometimes it can be way more interesting than you originally thought.
Andrew Reid
I think the increasing conversation and this awareness we're seeing with the brand side is every customer interaction is a brand interaction.
So, if you give me an awful survey experience that's on you, you don't need to give me an awful survey experience. You can give me an intelligent experience that gets all the information that you need, but that treats me respectfully and interacts with me in a way that seems very intelligent. That's part of what we're trying to get at here with the Thoughtfulness Score.
We did try this with Oura Ring. We were able to present at Quirk's in New York in the summer. The presentation was very well received. Oura was very interested. They got a great uptick in engagement from their community of around 5,000 people currently sitting on their Rival insight community.
They're putting up their hand to continue to do some of that important co-innovation with us because it's fun to do this. If you can do it when you've got a client that wants to come along for the ride, that's great too.
To Dale's point earlier, we obviously ran this a few times ourselves to make sure we felt confident about it before we brought Oura into the mix.
In this case, the findings were that people liked the experience. They felt the interaction was relevant and appropriate, and they felt it was easy to understand.
What's better is that we had a much higher ranking in the thoughtfulness of responses and the verbosity of those responses. And so that's a great outcome. If you can get better quality and more volume of data from your customers, then you're really, I think, achieving some of those goals that we all have as insights professionals.
We have a bunch of other clients that we've been able to continue to work with and they tried out AI probing and really had a great experience with it. We very much thank these clients that have been really helpful in helping to drive this innovation.
Those are a few examples of things that are practical, that are real, that started off as experiments and that are either in the product or coming into the product very soon.
We really think that in today's world, in 2025, these are not special things you turn on. This is technology that should be part and parcel of the platform that you use for data collection. These are all just enhancements to the overall platform that we're putting in. Dale, I know we've got some tips here. So, I'm going to pass it to you, to maybe go through some of these tips.
Dale Evernden
So, you got to see some real-world innovation outcomes there and some of the sausage making that went into it. The goal is that hopefully that motivates more engagement.
So, just as you're beginning to think about working with AI, I just want to double click on this idea of keeping the human in the loop. You don't want too much automation without some oversight. That's really important as you're delivering insights to your stakeholders, you really need to maintain trust. Once that's gone, it's hard to get back. If you're using AI tools, make sure you're referencing the fact that you're using AI tools. All these images, for example, were made with AI.
It's really important to consider indirect stakeholders. You can deliver outcomes from market research that's been facilitated with AI to your direct stakeholders, but there are usually people downstream as well who are going to need to know that you used AI to do it.
Keep that in mind.
Then data privacy and compliance always need to be front of mind. From an enterprise readiness perspective, we certainly keep it front of mind, encourage everyone to do the same.
You need to be your own data privacy and compliance officer. Be smart about how you use these tools. There's different tiers.
For instance, ChatGPT has different tiers that you can purchase. I don't recommend using the free tier to do any kind of work. You want to pay for the Pro or Team tier because you get much more security and compliance built in as a result. So, just be careful with that.
Andrew Reid
I'll just add that we're finding that some of the even specific states are coming out with regulations.
Go look at what Colorado has put into some of their AI regulations. They have mandates that you actually have to tell people upfront that they're going to be interacting with AI before they do.
Some of this is moving in different directions, so just making sure you're doing that homework and talking to the right people. For us, we try and make sure we've got some great trade associations in our industry and some really smart people that are watching those laws and regulations. So, make sure that you're paying attention to that.
Then finally, this is a fun little thing, so feel free to scan that QR code there. That QR code is going to take you through a very quick little quiz or segmentation, which based on the questions that you answer will tell you whether you are an enthusiastic adopter, a cautious integrator or a skeptical traditionalist.
We are really seeing a mix of all of these in the industry right now. There are a bunch of people up front trying to really be part of innovation, there are some that are waiting cautiously until they see a lot of implementations, a lot of research-on-research and a lot of proof points, and then everywhere in between. And that's okay.
So, just for fun, we thought we would do this as a way to give back a little bit and to curiously see how people do rank. I'm looking forward to looking at some of these responses on the backend. This is all anonymized, so you'll get a ranking that gets sent to you.
So, that's our presentation. I hope that we provided some value for everybody here. We do have time for questions now and we'll be ready to take those.
Dale Evernden
Thanks again.