The B2B Insights Podcast Channel was created to help marketing and insights professionals navigate the rapidly changing world of B2B markets and develop the strategies that will propel their brand to the top.
Subscribe today for your dose of exclusive insights from the B2B market experts.
In this episode of the B2B Insights Podcast, B2B International’s Kim Arend, Research Manager, and Pete Mullarkey, Insights Network Manager, discuss the steps researchers can take to ensure B2B research respondents are who they say they are and provide reliable and accurate insights.
They define what world-class quality actually means, explore the tools and technologies used at B2B International to ensure we're only working with the highest-quality data, and discuss the important role of manual checks and gut instinct.
Read the full transcript:
Jump to section:
- Introduction
- What does world-class research mean?
- How much online panel data do we typically remove?
- Data and AI-related spending increases
- Is world-class quality different in B2B than B2C?
- Which data collection methods do we use?
- How do we approach data checking?
- Which tools and technologies do we use to enhance quality?
- Concluding thoughts
Introduction
Kim: Hello and welcome back to the B2B Insights Podcast. Today, we’ll be talking about how to achieve world-class quality in B2B research. You are hosted today by myself, Kim Arend, and Pete Mullarkey.
Pete: It should be really good fun. There’s so much to unpack today in the world of quality. Let’s see where we get to.
Kim: I don’t know how we’re ever going to fit it all in. If we start with both of us introducing ourselves, I can go first. So, as you already heard, my name’s Kim and I’ve worked at B2B International for quite some time now. I don’t like to admit it, but I guess at this stage you’d call me a seasoned researcher.
I’m a research manager now and I’m originally from Germany and came to B2B International as a young student and an interviewer, just like Pete as well. I did a lot of interviews and therefore know a lot about what good and bad quality can look like firsthand and have then moved through to the analysis part of market research. What about you, Pete? What’s your journey?
Pete: Oh, it’s been very B2B focused. So, I came out of university and joined B2B International as a telephone interviewer like yourself, Kim. And since then, I’ve worked on the fieldwork operations side and in the real depths of looking at research on the analysis side as well. But now my role really straddles both the operations and the research side, because I look at all our partners. For example, who we engage with, how do we look at who can work in which regions of the world. And I’ve seen the good and the bad like you have. So, it would be really good to talk about that.
Kim: I’ve also worked quite a bit on B2C research in the last few years because we do venture into that field sometimes and it’s quite exciting to see the difference.
Pete: It really is. And I think my background is much more on the quant side of things. And I know you are a qual expert in our organization. So, we will come at it from two different angles. But in terms of today's conversation, we've got a number of things we want to touch on in terms of what world-class research is.
What does world-class research actually mean?
Pete: So, Kim, do you want to tell us what world-class research means to you?
Kim: Such a hard, hard question to answer. It's such a broad question. If I were to play devil's advocate, I always say it's not really quality, but qualities, if you excuse the pun. Because especially if you look at quantitative measures and you're looking at a spreadsheet of data in Excel, I often find that if you're only looking at one score, that doesn't really tell you enough. You need a little bit of context around it. You need to look at all the different qualities from one interview with a respondent. And so, world-class quality is all about seeing that bigger picture. What about you? How do you define it for yourself?
Pete: Within our role we have to gather the data from somewhere and decide how we go about doing that and how the data will make an impact in the boardroom. So, for me it’s all about that quality that we build through trust. So, talking to the right people, making sure they give us as much information as possible so that we’re then able to push through and find these nuggets of insights that really help drive companies forward. The joy at the end of a project is when you hand something over that you’re really proud of because you’ve got full trust and clarity in what has been put forward.
Kim: It’s important to think about all the different stages of a project. When you think about data quality, you tend to think just about the data collection, but really it starts much earlier, which I’m sure we’ll get into a little bit later.
What percentage of online panel data do we remove on an average project?
Kim: I also have a little surprise for you, Pete. I’ve got some quick-fire questions. My first question to you is: when we talk about online panel data, what percentage do you think we remove on an average project?
Pete: This is such an interesting question, and just to preface it, I’ll touch on why we are asking about online panel data. I think the reason we focus on this is that it’s such a great method for collecting lots of voices quite quickly, but also because, due to PII and lots of other reasons, we now cannot share who has completed those surveys. So, there’s this veil of trust, and this is why I talk about trust and quality so closely together. Who are the people that are actually doing these interviews, and how do we prove who they are?
So, I do have a bit of an insight in terms of the answer to your question, because we are given a lot of feedback from the partner networks that we work with, the major online panel providers. And with B2B, we have such a target on our back because our surveys incentivize far higher than in our B2C cousin’s world. So, we have to keep in mind all the time that if someone’s going to fraudulently try to complete a survey, it’s much more likely to be a B2B one than a B2C one. So, looking at the data, and we’ll come and talk about manual checks later, there are lots of things that we can do on the technology side up front.
So, the long-winded way of answering this question, back in 2023, I believe it was around about 50% and above, which is really shocking because I’ve heard at some conferences recently, people in the industry talk about a 30% kick out from a B2B perspective. And in B2C, that’s down around 8% to 12%. And there’s different reasons for that. And we’ll touch on what the differences might be there. But if I’m going to put a dart in the dartboard, I’m going to go for 50% for this particular answer. But I know that’s changing this year for some of the reasons which we’ll talk about later.
Kim: So, it used to be 50% a few years ago, like you said. But now we’re working with tools such as Research Defender, which we’ll talk about later, we’re at around 30% now.
How much is the IT industry projected to invest in data and AI-related spending, in terms of percentage increase, in the next year?
Kim: Hopefully my second question is going to be a little bit more of a surprise, maybe one that you don’t know the answer to. So obviously we all know that generative AI is taking root in lots of different industries and certainly research is no exception to that. And experts now predict that IT spending is going to go up significantly in 2024 and half of this increase, which I’d like you to guess in a second in terms of percentage, comes down to investments into data centers, which of course affects us.
We need to think about how the data that we collect is being stored and how does that affect certain needs like electricity, data safety and all those things that sometimes we forget about because a project is done and then we don’t really think about what happens with the data next. So how much do you think the IT industry is going to have to invest, in terms of percentage increase, in the next year?
Pete: It’s got to be in the three digits. I think I could be completely wrong here, but if you look at what has been happening before, IT investment is so big, but every boardroom now is talking about what AI to invest in, what to put together. The biggest company in the world now is a chip manufacturer. It’s not the software or the technology players behind it. So, I’m going to put my neck out. I might look completely daft and say 300%.
Kim: I’m sure there are always different numbers out there, so we can debate that point on a separate podcast about data alignment, if you want. But the number that I found, which was a global number, was 8%.
Pete: That sounds really low.
Kim: What I thought was interesting was that the article spoke about the fact that in the US, the growth of that is, of course, going to be largest. They’re going to really invest in their large data centers, microchip development, and manufacturing companies. And, of course, that links to electricity needs. In the US already, they struggle with their grid capacity. So, at a certain point within research, you’ve also got to think about the other areas and industries that you affect and the ethics behind it to a certain extent. And whether that’s electricity or sustainability. I thought that’s why it was quite an interesting angle to look at. If we want good data quality, it means we have to collect more interviews and it impacts other things.
Is world-class data quality in B2B research different to B2C research?
Pete: We really want to touch on the B2B breakdown versus B2C. Really, I’d love to know, Kim, because you’ve got feet in both camps, how do you see that difference of what quality means in different spaces?
Kim: I think the initial thought that people always have is that it’s easier to do B2C research, and so surely the quality will be easier to achieve because you’ve got so many more respondents to sample from. In our day-to-day lives, we are all consumers. Even a B2B decision maker is a consumer for large parts of their life. And we are still consumers in the B2B world, purchasing products, purchasing services, whatever it might be in your industry. But I think a switch happens in your brain when you make B2B decisions. You are a little bit more deliberate in the way that you make decisions. Journeys take a lot longer. We know this from doing research ourselves.
And so, there are also more people involved, more factors. You’ve got to think about your company’s reputation and that you’re sticking your neck out for that. And I think that’s the real difference here in quality. I would argue from having done both types of research that in B2B, once we find the right people to answer our surveys, whether that’s online or over the phone, I would argue that we get better quality because people are more careful in what they say. They think about it a lot more. And they are more deliberate, as I said. So, I think there is a difference, but it’s maybe not the one that you would expect. How do you feel about it?
Pete: I agree. I honestly feel that in B2B, we have to make so much more of the individuals because we don’t speak to as many. The target audience is not as broad as GenPop for example. Whereas in B2B, if we’re looking for, I don’t know, an IT decision maker who works for a FTSE 500 company, who in the last two years has introduced a new cloud platform, which is a typical B2B brief we see, that’s going to be quite a small population of people. And then they’re quite senior.
So, we’ll talk about methods shortly, but we really need to get the most from those voices. Because we don’t speak to them very often, we need to make sure that we get all the information from them that we require, so usually we’re squeezing them like a sponge. But because we’re only going to interview 100 people on a study like that, every one of those 100 voices will count, whereas if we look at a B2C breakfast cereal study, they might interview 10,000 people. So, even though we’ll still be looking at a quant methodology, that quant volume is less, and each voice carries a higher weight in the different cuts that we might have.
And that’s where our quality really comes in. For every brief we take, we really break it down and look at who is the right person to speak to and how we get to them. And that ‘how do we get to them’ is critical, because it leads straight into the next section we’re going to talk about, which is all about data collection methods.
Kim: You raised an interesting point there that I think a lot of B2B companies think about. If we only talk to a B2B respondent at a certain time and place, so let’s say, for example, after the COVID pandemic or during it, a lot of people said, “how much can we trust in this data?” Not from an individual level, because we know that these people have been well recruited, as you said, and we’ve built a really nice questionnaire. They are clearly the right people. But how much of their answers are influenced by current trends, by current challenges, and how much should we be guided by that?
How do you get around that? Because that’s also part of data quality. You do have trackers where obviously you go to customers or maybe the market every year and you see how things change. That obviously gives you that long-term, trackable, quantifiable data that then shows what was maybe a trend and what wasn’t.
Pete: And that’s fine when you’ve got the sample size to do that, but sometimes you don’t. We’ve done customer satisfaction surveys where the client wanted to track things every month, but there are only so many of their customers, and those customers only want to give their views every so often unless they see lots of changes happening. So, you really have to think about the detail in that sample process to make sure that the quality really sings when the sales managers are looking at a dashboard to find out, “have I gone up or down?” If their sample size is only based on four or five voices, it will give you a small indication of trends, but they’re looking more at the individual story.
Whereas if we’re doing a brand tracking study of 1,000 or 2,000 interviews, it’s easier to look at the wider context. It might be marketing managers, HR directors in certain responsibilities, who want to see the trend data that’s happening at an overall level. And we’ve always got to break everything down and look at each project from an individual basis and say, actually, you need volume on this or you need voices and you need depth and you need to really get into the weeds of the information.
Which data collection methods are used in B2B research?
Kim: So, let’s talk about the actual data collection then, how do we gather it? Let’s get into the detail. What are the channels that we use and the methods? Do you want to tell us a bit about that?
Pete: I talk to so many of our partners about who B2B International are and what makes us stand out. And our only specialism is B2B research. We need to know every single method that exists out there because if we’re looking to speak to tradespeople, you’ve got to get those on the phone. If you want to do a C-suite level interview, you’ve got to go to an exec recruitment company to really understand how you get 10 or so voices there, and then there is the panel world, the easy drop-in or the lower price points that we see in that space.
And so, with all of those different factors, we want to have a full toolkit because as we go back to that initial brief or when a client comes to us with an objective, we don’t want to just pick something off the shelf. We want to make it work for them. And that’s where you need to know, okay, we want focus groups, we want face to face focus groups in four different countries, we want to really look at a branding study at the right time and then a pre and post campaign. Everything needs to be put together in a way that sings to make sure that the boardroom is lit up with excitement when we present that finding back to them. And that definitely stems from that data collection piece.
But just going back to the target audience part, you’ve always got to think in B2B specifically, how do you get to that target audience? And there’s lots of lists out there, there’s lots of databases that people have got, and I would say that having good, relevant, real, and timely data leads to good quality. A good contact list is the new gold. You look at how strong LinkedIn is in terms of its whole network in the B2B space. They’ve created something that they can lucratively tap into because that’s where people go to find business decision makers. So, I would definitely say lists and databases are the new gold. It’s fascinating. Do you agree?
Kim: Oh, absolutely. I was laughing along there because I thought about a conversation I had only a few days ago about how, especially these days when people remove their phone numbers from their email signatures, you can’t really gather data that way anymore, which even before then was a bit of an ethical conundrum. And people are just harder to reach, harder to send an email to. I still remember finding it hard even back when I was an interviewer, and that’s now five years ago. If I had to go back on the phones now, I wouldn’t know how I ever did it. It’s absolutely incredible. And I think another good point that you touched on was that there is no one size fits all. For one project, you’ve got to go down the online route because you’re going to have a greater chance of hitting enough numbers, whereas for other projects, you want to be much more refined in your approach.
And I’m also a very, very big fan of a mixed methodology. If I have a project that has two different stages, for example, so maybe a qual stage first, to explore a few themes that we then want to deep dive into in a quantitative online survey afterwards, I always feel much more confident with the study as a whole, because it is about that whole picture that you draw at the end and how you bring everything together and bring it to life. If you only have one stage, for example, an online stage, then sometimes you can be a little bit worried and it can take some time to get your questionnaire right as well, because you don’t really have a natural starting point all the time. Or even within one questionnaire, I would always say it’s worth, even if it is a full online study, it’s worth including a couple of open questions, just for context, just to put yourself at ease and to be sure that you can look at something that feels a little bit more real than numbers.
Pete: You’re not wrong. I’ve just come back from the IIEX conference in Amsterdam and it was a key point because you’ve got companies that are saying qual is dead and I’m sat there going “no, it’s not.” It’s never going away. People said this years ago when online became such a key method, but actually the return has been so strong and particularly during the pandemic where everything was so unclear in terms of what people really wanted. Everything changed and you can’t come up with a solution if you don’t understand the problem and you don’t hear the narrative in what people are saying.
And that’s me from a quant background. So, I love qual in terms of what it can give you to get further down the chain and what’s coming next in that space. I’m really keen and excited to talk about some new technologies that we’ve invested in, because actually what is happening in the online space is it’s becoming more popular, but it’s a bit of a race to the bottom, so everyone’s trying to make it cheaper and cheaper, which is a really big worry for me, because I really feel that you have to pay people for their time in the right way, and if we’re really racing towards a £20 online interview with a decision maker, you’ve got to wonder, if we’ve got a 15-minute questionnaire that we’ve put together, how much money is that decision maker seeing? And would they really spend their time to complete it?
I look at a brief and I look at the cost, and I think: if I were that decision maker, would I do it? And sometimes I’m thinking “absolutely”. We really have some great incentives behind things that would encourage me to do it. And I think the incentive world is very interesting because cash is great, but in B2B there have also been some different things that we want to provide. For example, we want to give people some information about the industry that they might not have been able to get before. And that’s really helpful.
And it all leads into a bigger question: why does a respondent take a survey? It has to be enjoyable from the individual’s perspective as well. I think we always appease our clients, but there’s also a real need to turn the lens on ourselves and ask whether we would do the survey if we were responding ourselves. And we’re pretty good at that, but I’d love to know your views on that, Kim.
Kim: I would agree with that. You said earlier on that the only specialism that we have is B2B, but I think we do have one other specialism that we often don’t realize, which is that we come from the bottom up. A lot of us, like you and me, have come from interviewing or maybe quality control or interview checking. And I think that really helps us to question “is this questionnaire or study designed to be enjoyable and to be an easy-going conversation between an interviewer and a respondent?” I think having been an interviewer before helps you so much to think logically about a questionnaire in terms of structure, of flow, of length, although that’s always something that’s really tricky to adhere to.
If we say an interview is going to last 30 minutes, it’s not always going to be the case because sometimes you do get people who actually enjoy it a little bit more and want to talk a little bit more. So, it’s not always the worst thing in the world, but I think it also helps you when you are designing the questionnaire and when you are talking to B2B organizations that you’re designing these projects for. It helps you feel in control of the quality of the data and also it helps you to let go of control because somebody else is going to do these interviews, but I’ve done them before and I know which kind of respondents I’m looking for and how I can make this exciting for everybody along the journey. Because it’s not just about the respondent themselves. It’s making it enjoyable for the people that have to conduct the interview or making it enjoyable for our partners that we work with to pitch these questionnaires to their panels, to their respondents that they’re working with, their experts. So, you need to think about every single person that’s involved in the process.
Pete: It starts with sitting down with a piece of paper and writing a really good questionnaire. And nowadays we not only have to write a good questionnaire to make sure that the client gets all the answers they require, but also include secret questions to weed the wrong people out, if we’re doing online. And so, our screening criteria have changed massively in the last three years. All the checks that we put in place, the open-enders, the fact that we work with partners to make sure that if they’ve got information about people already, we mirror that up in the background, we ask questions at the start and questions at the back and make sure that they match up and that they haven’t changed their age or something else about themselves in the 15 minutes we’ve been speaking to them.
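As a purely illustrative sketch of the start-versus-end consistency check Pete describes here, a minimal Python example follows; the field names and answers are assumptions made for the illustration, not B2B International's actual screening logic.

```python
# Minimal sketch of re-asking key screener facts at the end of a survey and
# flagging respondents whose answers no longer match. Field names and values
# are illustrative assumptions only.
SCREENER_FIELDS = ["age_band", "job_title", "company_size"]

def consistency_flags(start_answers: dict, end_answers: dict) -> list[str]:
    """Return the screener fields whose start and end answers disagree."""
    return [f for f in SCREENER_FIELDS if start_answers.get(f) != end_answers.get(f)]

start = {"age_band": "35-44", "job_title": "IT Director", "company_size": "1000+"}
end = {"age_band": "25-34", "job_title": "IT Director", "company_size": "1000+"}

if mismatches := consistency_flags(start, end):
    print(f"Flag for review: inconsistent answers on {', '.join(mismatches)}")
```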
And thinking about that initial questionnaire and screening is critical from a quality point of view. And as for the quality monitoring that we do, we very rarely talk about it, but we win awards for it and we should be so proud of it. The work that happens in that space is to go through and listen to interviews. For a CATI interview or a depth interview that we’re doing via our in-house team, we are listening to 10 to 20%. The MRS guideline is 10%. We really try and push for 20% to listen to those interviews to help guide our moderation team. We need to give them feedback, and it’s all about making sure that we’re getting value from that whole process, from the conversations we have at the start, through to the questionnaire that we’ve designed, through to briefing our interviewers, and then them starting on the telephones. We’ve learned so much in terms of what works and what doesn’t with interviews, how we approach it and how it has to be a friendly introduction and it has to have a carrot somewhere to give someone a reason to want to participate.
The hardest studies we have are when we can’t give anything away at the start, we can’t tell them why they’re doing it. Nobody wants to enter into something without a bit of clarity on what the benefits are for them.
Pete: What are you doing in questionnaire design at the moment to really improve the quality?
Kim: One thing that’s always been important to me from the start is to run through it with somebody else. I think that’s the best check you can ever have, so almost do a trial interview with somebody else and read the questions to them. They pretend that they’re answering. Now, they might not know what the biggest trend is in oil and gas right now, but they can give it a try. And then you play it out and see where it takes you, and that helps you with your length, it helps you to spot things that are worded weirdly, that don’t sit well with each other, or that repeat. You also want to make sure that you’ve only got the best killer questions in there.
How do we approach data checking?
Kim: We mentioned earlier using tools like Research Defender to minimize the amount of online panel data that needs removing. We’ve gone from 50% a few years ago to around 30% now. So, tell us, Pete, how do we do that magic?
Pete: Well, it’s a long process. The challenges that we face are not just in online research, which we can talk about in full detail, but also in some of the recruitment-to-web processes that we do, and even some of the CATI interviews. We are always checking every single thing that we want to put forward into our data set. Our data set has to be really strong and robust and have all the insights available.
But those checks need to be done before someone’s entering into an interview, during the interview, and then at the back end of the interview. So how do we make sure? If we really focus on the online side of things: if we don’t know who somebody is and they come into a survey from an IP address, then even though they can falsify that IP address, we can first and foremost check where they’re based. We can also check how many surveys they’ve done in the last 24 hours. And there are survey farms of people who are constantly trying to do surveys to game the system, to fraudulently get in behind systems.
I was at a talk where Kantar was saying they are very much aware that, even on their own panel, they’re trying to weed out people who are possibly children in Bangladesh who have watched a YouTube video from someone who’s put together a really sophisticated method of falsifying where you’re based, or who have learnt how to use LinkedIn to pretend that they are the CEO of an American company. And those people can make 20, 30, 40 dollars a day if they complete a number of surveys. And the average wage in Bangladesh is around about £8.26 a day. So as soon as they take a number of these surveys, they’re making so much money.
So, we’re in this system where we really have to look at what is happening at that upfront stage to weed that element out. So, IP checks, making sure we understand how many surveys people have done, but also other things that we know about them, such as whether that IP has ever come up on a different platform before and been rejected.
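To make the front-end checks Pete lists here (geolocation, survey frequency, prior rejections) concrete, here is a minimal, hypothetical Python sketch; the field names and thresholds are assumptions for illustration, not the actual checks run by any panel provider.

```python
# Minimal sketch of front-end panel pre-screening: country/IP match, recent
# survey volume, and prior rejections elsewhere. Fields and thresholds are
# illustrative assumptions only.
from dataclasses import dataclass
from typing import List

@dataclass
class PanelEntrant:
    respondent_id: str
    claimed_country: str        # country the respondent says they are in
    ip_country: str             # country resolved from their IP address
    surveys_last_24h: int       # completes recorded across partner platforms
    rejected_elsewhere: bool    # IP/device previously flagged on another platform

def prescreen(entrant: PanelEntrant, max_surveys_per_day: int = 10) -> List[str]:
    """Return a list of reasons to reject the entrant before the survey starts."""
    reasons = []
    if entrant.ip_country != entrant.claimed_country:
        reasons.append("geolocation mismatch")
    if entrant.surveys_last_24h > max_surveys_per_day:
        reasons.append("unusually high survey volume in 24h")
    if entrant.rejected_elsewhere:
        reasons.append("previously rejected on another platform")
    return reasons

entrant = PanelEntrant("r-001", "US", "BD", surveys_last_24h=27, rejected_elsewhere=False)
if reasons := prescreen(entrant):
    print(f"Reject {entrant.respondent_id}: {', '.join(reasons)}")
```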
But there’s only so much that these companies can do in the background to really address this. When we get to our questionnaire, it’s then on us. One of the things which we try and do is add branding lists in with some fake brand names. And the really worrying thing, and the reason we know this kind of fraudulent process is happening and why we’re trying to stop it and get ahead of it, is that we’ve seen that if we put a fake brand name in a brand list, about two or three days later that fake brand starts to appear as a spontaneous text box answer. And not just within that project: I’ve seen a fake brand from one of my colleagues’ projects pop up in my own project that has nothing to do with it.
And it really is a huge worry from our perspective, which is why so much investment was made last year to put this at the forefront of everything we do, because we’re not in the banking sector. We are not protected by the kind of criminal prosecution that can be brought there. We are in the market research industry. If someone’s fraudulently entering surveys, there’s no crime that they’re going to be convicted of.
So, we’ve got this huge issue that we’re trying to fight internally. We’re speaking with all the big partners. They’re well aware of the challenges that are faced. And we can’t shy away from it. And we’re not. We’re tackling this. And from the conversations and the numbers that we’ve talked about, we put forward 50% where others are putting forward 30%, so we know we’re already removing 20 percentage points more than anyone else. And now we’ve got these tools in place to really help us: at the front end, trying to stop people coming in, and when they’re in, trying to spot that they’re not the right person for the target audience by having lots of tricks and traps in the questionnaire.
And then the back end is now, I think, the really important one. If you really want to hit fraud where it hurts, you don’t pay people. And there are a lot of partners that would pay in vouchers. We see that as a bit of a worry because it’s a lot easier to trade vouchers internationally and get financial remuneration for it. But what we’re asking our partners in the online space to do is to start to pay people into an individual bank account that matches up with the IP address. And if they can’t, then they should be taken off the system. But it’s a problem we have in our industry because not many of the panel companies that I’ve spoken to want to do this, because they’re talking about big numbers, millions of people on their panels. If they actually implemented some of these processes, there could be seismic repercussions in terms of feasibility and costs that can have a knock-on effect.
But we’re aware of this. We’re addressing it. We’re challenging the industry. We’re not the only ones. We’re working with the MRS and ESOMAR to really put forward a lot of work in the background to help us start to win that battle. And it’s been going on for years. This is not the first major issue the industry’s had. We just need to know that B2B International is doing more than most in the industry to get through them.
Kim: I always have three golden rules for how, once the data has been checked by the various tools that we use, we check the data manually. The first one is to set up a set of rules. If you’re looking at a spreadsheet of 2,000 records, it can be a little bit overwhelming. Where do I start? What do I want to look at? So even when you’re designing your questionnaire and setting up your data checking files, think about rules, whether that’s flatlining or the time respondents spend on certain questions. If we’ve presented them with a list of, say, 20 options and they’ve only spent two seconds on that question, that’s almost a guarantee that somebody hasn’t fully read it.
The second thing for me is that if something looks weird and that mistake keeps happening time and time again, so maybe we see a spontaneous brand that we wouldn’t expect to pop up, have we forgotten to take something into account in the design stage? Is there something that we can go back to? It’s never too late to make questionnaire changes. We should never be afraid of that. And I think that’s also an important thing: not to be scared of trusting your gut. If something seems off, then you should be acting on it and doing something about it. If we do have a project with a lot of bad data, I almost get a little bit scared to touch it again because I don’t know what to do with it, and that’s where more heads should be put together to think about it.
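As a rough illustration of the kind of rule-based manual checks Kim describes (speeding on a long list question, flatlining, and the fake-brand traps Pete mentioned earlier), here is a minimal pandas sketch; the column names, thresholds, and example records are assumptions for the illustration, not B2B International's actual data checking files.

```python
# Illustrative rule-based data checks: speeding on a 20-option list question,
# flatlining across a rating grid, and selection of a fake "trap" brand.
# Column names, thresholds, and data are assumptions for the example.
import pandas as pd

FAKE_BRANDS = {"Brandex (fictitious)"}          # trap brands seeded into the brand list
GRID_COLS = ["q5_1", "q5_2", "q5_3", "q5_4"]    # a rating grid asked of everyone

df = pd.DataFrame({
    "respondent_id": ["r1", "r2", "r3"],
    "q4_seconds":    [2, 45, 30],                # time spent on the long list question
    "q5_1": [5, 4, 3], "q5_2": [5, 2, 3], "q5_3": [5, 5, 3], "q5_4": [5, 3, 3],
    "brands_selected": [["Acme"], ["Acme", "Brandex (fictitious)"], ["Acme"]],
})

checks = pd.DataFrame({
    "respondent_id": df["respondent_id"],
    "speeder":    df["q4_seconds"] < 5,                      # too fast to have read the options
    "flatliner":  df[GRID_COLS].nunique(axis=1) == 1,        # identical answer across the grid
    "trap_brand": df["brands_selected"].apply(lambda b: bool(FAKE_BRANDS & set(b))),
})
checks["flag_for_review"] = checks[["speeder", "flatliner", "trap_brand"]].any(axis=1)
print(checks)
```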
I know this is a little bit of a strange example, but it always makes me think of the Post Office Horizon scandal. It comes down to bad data at the end of the day. That was a data scandal. A lot of people felt that something was wrong, they trusted their gut, they went against the grain. You have to trust your instinct.
Pete: There’s an example which always sits with me. It was a tracker project we’d run for many years, and I got a phone call from one of our colleagues who said, “Pete, I don’t trust the data that we’ve got back from this company. I don’t think they’ve spoken to the right individuals.” I said, “Okay, tell me, and we can get in touch with this company and ask them to remove those participants.” But I always want the proof. I always need some evidence of why we want to remove somebody, because we need to talk it through. Anyway, the reason they wanted to remove the data came down to the brands the respondents had mentioned. The brands weren’t wrong as such, but this was a medical devices survey and we were talking to healthcare professionals, and the brands they were all mentioning tended to be used within the veterinary industry, not the human healthcare industry that we were looking to survey. And because we’ve done this research so many times before, we could spot that so quickly. Those brands had never appeared before.
Our manual checks are phenomenal because our team is so dedicated to that particular process of making sure we’ve got the right individual within our survey. That is world-class quality. That example sets us apart from the competition, because I don’t know anybody else who would have the knowledge to look at a brand list and spot that those respondents should be removed.
So that manual data process matters. We’ll come and talk about technology shortly, and there have been so many technology advances and booms; the noise I’ve heard about AI at IIEX in the last couple of days has been seismic. But sometimes you need to understand what’s come before to be able to know what you’re going to do with the next step. And that’s why we put so much time and dedication into that manual process at the back end, to really look through every individual and make that sense check.
Which tools and technologies are used to enhance data quality?
Pete: But I’m going to talk now about the kind of technology that I think is really starting to come through, because we’ve started to use a few different pieces. And I want to know which ones you’ve tested already, Kim, in the new portfolio of B2B International’s technology tools. Have any come through as good for you?
Kim: One tool that I’ve really enjoyed using is an AI program that we use to create translations quicker. So of course, we have our interviewers that are great at doing translations once they’ve conducted interviews in local languages, which is absolutely fantastic. We have a lot of language capabilities in-house. But of course, if we want them to stay on the phone and do these interviews, they will fall behind with their translations and with their transcripts. And so, what we use now is a voice recognition system that can translate directly from a recording. And that then gives you a lovely transcript with breaks and everything. And honestly, the speed that introduces to a project, you can have everything instantly. And that helps you to reconsider your questionnaires because you don’t have to wait two days until you see the first interview data and you can act a lot faster if there are things you want to improve. So, I absolutely love that.
Pete: That’s been revolutionary because even when I started on the telephone, the biggest bane of my life was finishing a call and thinking it had gone really well and then having to type it all up. And if you were doing that from your own notes, or if it was longer and you really needed the detail, you had to listen to the recording, and it can take 400% longer to write up than the interview itself took. So this is making such an impact on your time as a researcher, but also on the interviewer’s time.
Can we talk about how good our interviewing team is? We do so many checks and other things at the back of our surveys. So once a project’s finished with a client, we’ll review how good our partners have been, and they’re good. But B2B International’s in-house team scores so highly in our internal evaluations because they get what we’re after.
Kim: Was it 9.5 out of 10? That’s a great score.
Pete: It’s magic, because it goes back to what is quality, it is asking a question, listening to the answer, and getting that information, and then probing maybe a little bit as well, depending on the answer that comes back.
That actually leads us to another piece of software that we’re using at the moment, particularly for our online surveys. One of the big challenges is how we get more from an online interview, because respondents might be filling in a survey while they’re watching TV or doing something else. They might be at their desk, but they might be rushed. We might ask, “what is the most important thing about your business relationship with our client?” and they just say, “they give us a good price.”
But what we really need is a probing tool that can go way further than that, because a good price is never really the reason why somebody is making a business-to-business decision. It’s important, but that’s where this tool comes in, because it will read through that answer. It uses an AI solution that effectively says: right, I know what that person has said, and I can now offer another probe or another question that B2B International’s interviewers or researchers haven’t written themselves. And it’s straight away in the next window for them to fill in. It just means that there’s so much more context that we can get from the important voices that are out there, without us having to hold their hand all the time.
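As a purely hypothetical sketch of how an automated probing step like the one Pete describes might work, here is a short Python example; `ask_llm` is a placeholder for whichever language-model service such a tool would use, and the prompt wording is an assumption, not the actual product.

```python
# Hypothetical sketch of an automated probe: take a shallow open-end answer
# and ask a language model for one follow-up question. `ask_llm` is a
# placeholder, not a specific product or real API.
def build_probe_prompt(question: str, answer: str) -> str:
    return (
        "You are assisting with a B2B research survey.\n"
        f'The respondent was asked: "{question}"\n'
        f'They answered: "{answer}"\n'
        "Write one short, neutral follow-up question asking them to explain "
        "the reasoning or context behind their answer. Do not suggest answers."
    )

def ask_llm(prompt: str) -> str:
    # Placeholder: in a real tool this call would go to the survey platform's
    # language-model service and return its generated follow-up question.
    return "Could you tell me more about how their pricing supports your business?"

question = "What is the most important thing about your business relationship with our client?"
answer = "They give us a good price."
probe = ask_llm(build_probe_prompt(question, answer))
print(probe)  # shown to the respondent in the next survey window
```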
And there are more advances coming in this space, because we think AI interviews will exist in the future. So, you have voice recognition that will read the questions, and then you can have a conversation with it and gather data that way. That could be revolutionary, but it’s not tested. It’s not something we’re looking at.
And a lot of the new tech is focused on making improvements in fraud validation. We’re also really excited about text analytics programs that can read what someone has said and generate another question. There’s so much happening. I’ve been to three conferences this year, and AI is spoken about at every single one to the nth degree. That’s fine, but ultimately it’s a machine learning algorithm and we do need to be really cautious, because what you put in determines how good what you get out will be.
And from our 25 years of experience in B2B, we know that a lot of businesses have similar spokes on the wheel even though they’re not all the same in how they operate; there are things that run through every single company. Whether we’re talking to someone in the oil and gas sector, someone in the technology space, or someone in the industrial lubricants area, we know exactly what the pain points might be. But we need to put the context on top of it. And I think that’s where you need a human to take all that expert insight in the B2B space and understand it before we can run it through a program and expect all the answers. I think it will summarize things nicely, but we still need to spend the time to look into it in more detail as humans. Do you agree on that one, Kim? Are you a big AI advocate? Do you think the industry will go full AI, or do you still think there are roles for people?
Kim: I’d really hope so. I don’t want to paint too black a future for myself, but I agree, we’re in this space right now where we’re trying to find the right balance in how we collaborate with AI. It’s almost like a new coworker has entered the room and now we need to learn how to adjust to that, just as we would to any new person.
We’re all going to find our own ways. And I think that’s what’s important, that we find the value in AI, but still the value that we can add as well, which is absolutely non-negotiable.
Concluding thoughts
Kim: So, what’s our closing statement on how to achieve the best quality in B2B research?
Pete: It’s by being really diligent. I think it’s still, from my perspective, all about making sure that we eliminate fraud as far as possible, using technologies from the banking sector to take that off the table. If we can just have the right people in an online space, that’s perfect. That’s why I’m a much bigger advocate for telephone interviewing or recruitment via telephone, because you’re actually speaking to a real person, and you know that they are who they say they are. It just really helps.
I think there are advances that will come in face recognition and different technologies to help in that space but ultimately for me I want to make sure that the tools and platforms and partners we have just give you that confidence as soon as you come out of field that you’ve got exactly what you need to be able to take it to the next stage. And I’ve been at the company long enough to know that quality has always been our number one factor in everything we do. It has to be the best.
Kim: I think nobody would ever say, “I don’t want good data quality.” That’s like saying, “I don’t want a nice UK summer.” And I would agree with everything you said. The other thing that I would add is sometimes, and this is maybe a little bit contrary to what you’ve just said, we obsess maybe a little bit too much with the quality of the data without asking ourselves why we want good data quality. And I don’t mean why would we want it as such, but what is the aim that we’re trying to achieve with this project? Because we can gather all the great data in the world, world-class quality data. But if nothing comes from it, if it’s not digested in an actionable way or we don’t have someone ready to implement the findings, it’s all for nothing. So, I think sometimes we need to think about why we want it to be the best quality. And then we also care about it a little bit more as well.
Pete: And just as the final point, we wouldn’t have such a good client list who keep coming back time and time again if we didn’t have quality running through everything that we do. And it’s what makes me proud to work for the organization for so long. It’s the right thing to focus on, the right thing to really put forward.
Kim: Absolutely. That was a very lovely closing remark. So, with that, thank you so much for listening today. And if you’ve enjoyed today’s episode, make sure to subscribe on all the usual podcast platforms so that you don’t miss any future episodes. And you can also explore more strategic content around all different types of topics on our www.b2binternational.com website. And you can also subscribe to our newsletter there. Thank you so much, Pete.