
Digital: Disrupted: Is AI Eroding Trust in Brands?

Rocket Software

May 19, 2023

In this week’s episode, Paul sits down with Dave Fleet, Managing Director and Head of Global Digital Crisis for Edelman, to discuss how generative AI is impacting reputation management for brands. Dave shares how businesses can combat disinformation and new threats they must pay attention to when it comes to AI and brand reputation.

Digital: Disrupted is a weekly podcast sponsored by Rocket Software, in which Paul Muller dives into the unique angles of digital transformation — the human side, the industry specifics, the pros and cons, and the unknown future. Paul asks tech/business experts today’s biggest questions, from “how do you go from disrupted to disruptor?” to “how does this matter to humanity?” Subscribe to gain foresight into what’s coming and insight on how to navigate it.    

About This Week’s Guest:

Dave is the Managing Director and Head of Global Digital Crisis for Edelman, a global communications firm. He has advised senior clients on reputation management and digital communications for almost two decades. Prior to this role, Dave led Edelman’s Canadian Digital practice for five years, overseeing a team across Edelman’s five Canadian offices.

Listen to the full episode here or check out the episode transcript below.

 

Digital: Disrupted

Episode Transcript:

Paul Muller: So much has been written lately about the impact of generative AI on the future of, well, pretty much everything. And I have a sneaking suspicion some of it was written, in fact, maybe most of it was written, by a generative AI, which is probably going to be one of the most snake-eating-its-own-tail moments in the history of humanity. Anyway, in previous episodes we've taken a brief look at what it might mean for human augmentation, and what generative AI might mean for cybersecurity and the innovation process. But what about the potential impact on your brand and reputation, both your company's and your own personal one? What are the PR implications of generative AI, on both the upside and the downside? We're going to take a look at that in just a minute but, before we do, I'd love you to check out the website of today's show sponsor Rocket Software at rocketsoftware.com to see why over 10 million IT professionals rely on Rocket Software every day to run their most critical business applications, processes, and data. Well, with that, our guest today is the Managing Director and Head of Global Digital Crisis for Edelman, a global communications firm. And he's also, and I thought I'd mention this because I was particularly impressed by it, a mentor for the Banff Spark Marketing Accelerator for Women in the Business of Media. We might have an opportunity to have a chat with him about what that means as well. Welcome to the show, Dave Fleet.

Dave Fleet: Thank you, Paul. Thanks for having me!

PM: It's great to have you here. Before we jump into the topic of generative AI, and before we go any further, can I just ask you a quick question? Are you an AI?

DF: I am not.

PM: Okay, good. Alright, I just want to clear that up. I need you to start doing this before we have every meeting. We do a little something called the lightning round where we get a chance to know you, I guess, through a slightly different lens. Are you prepared to expose yourself to the world?

DF: We'll see how I do.

PM: Let's do this thing. All right, first question, Dave, what would people say is your superpower?

DF: I'm not sure I have a superpower. I kind of disagree with the question. I think this is a team sport, but if you push me, I would say it's something in service of that team effort. I've found over the years that I'm good at distilling order out of chaos. I'm very systems-oriented in my mindset, very left-brained, which infuriates our creative teams sometimes. But in moments when it feels like everything's burning down around you, I've found that I'm pretty good at stepping back, breaking things down into little pieces to figure out what needs to get done, and then organizing people around that.

PM: Sounds like the sort of emotionless qualities a robot would have. I'm still not convinced you're not a Generative AI. Alright, next up. The most disruptive technology of all time.

DF: I think my answer six months from now might be different. Right now, I would say it's the internet. I still remember when I was starting out and we were just starting to see the shoots of social media come up, and I was really disillusioned at the time by what I saw as a one-way means of communication that everyone was using: the idea that you said something and, if you repeated it enough, people would believe whatever you said. I saw the way that the internet, and then social, was starting to really change that. And I think that was transformative. Ask me that question again in six months and I might be talking about AI.

PM: Fair enough. All right. The best quality a leader can have?

DF: Empathy. Without doubt.

PM: Say more about that.

DF: I mean whether it is with your team or with your stakeholders -- and I don't mean sympathy -- but empathy in the sense of being able to put yourself in their shoes, asking yourself what other people are thinking, what they're feeling, what they're caring about. And then being able to communicate with them in a way that puts that at the center. The world is tough. We're communicating a lot of tough things a lot of the time, and whether you're in that tough moment, or you're seeking buy-in from a stakeholder, or just working day to day, I think if you can put yourself in other people's shoes, you can be more effective and you can make people want to work with you.

PM: Your advice to people starting their careers?

DF: Be a sponge, number one. Curiosity is the most standout trait that I look for in someone who's starting out. I remember we hired someone on my team about eight years ago, and we gave her a little project to start off, something where we weren't expecting an intern to accomplish much. This person turned around and delivered something that was so good that eight years later I still remember exactly what it was. And it was because she followed the threads, because she asked questions, and she kept following the threads, and she came back with something that was just incredible. Second, I would say follow the leader, not the title. Don't chase job titles; chase good leaders who you want to work with. And third, I would say choose the challenge. Be the person who, when the train is off the tracks and everyone else is jumping off, jumps on the train and tries to help fix it. Now, don't come at me and say that you can't get a train back on the tracks if you're on the train; be that person who jumps into the challenge. It makes for difficult experiences, but in my experience you get a lot of good learning from them.

PM: Yeah, I think those are three great bits of advice. I'm taking some of those away with me. The first thought that comes to mind when you think of reputation management?

DF: Trust. Do I trust them to do the right thing? Whether it's making a quality product, doing the right thing for society, whatever it is, trust.

PM: Excellent. Final question. If you could use technology to solve one world problem, what would it be and why?

DF: Climate change. Is that too big an answer?

PM: You know what? You get to choose this. This is your show.

DF: To me it's existential. If the world's climate can't support us anymore, then nothing else really matters. So that would have to be it.

PM: I'll take that. Yeah, and there's obviously a huge set of challenges beneath that one. It's a great answer. Absolutely fabulous having you here. Before we get into the topic of reputation management, I picked up a slight accent there that I can't place. So maybe start by telling us a little bit about your personal and professional background.

DF: So, I was born and bred in the UK. I grew up in a tiny little town in the southwest, did a business degree over there, and became obsessed with the internet. During that time, I was pretty sure I was going to be a web developer. It turned out I wasn't very good at it, but I did a couple of co-op placements as a web developer back around the turn of the millennium. Then, right after I finished school, I moved to Canada and, kind of by accident, fell into a communications role in government. I got the opportunity to work in communications in what's called the cabinet office of the provincial government where I live, and an accidental first job turned into about five years there, roughly half of the next decade. And as I mentioned earlier, while I was there I became fascinated by the idea that social media could solve some of the things I was seeing that I didn't like in communication.

There was this idea that you could just say something and people would believe it; you'd say it often enough and people would go, oh, okay, that must be true. And it just wasn't the case. I saw how annoying that was for everyone who faced it, and I saw social media as a great way to try and bridge organizations and their audiences and bring people closer together, which is deeply ironic now, 20 years later. So, I did that, and then I left the government and went agency side about 15 years ago, first at a smaller agency, and then I joined Edelman about 13 years ago. Since then, I've had a variety of roles: practice leadership roles on the digital side of things, and working on paid media and advertising. Now I'm in my current role, where I sit at the intersection of digital communications and marketing and crisis communications.

PM: Fabulous. I did mention this during the show intro; it sparked my interest, if you'll pardon me saying that. Tell us about the marketing accelerator for women, your interest in it, and a bit more about what its mission is.

DF: Yeah, I mean it's a mentoring program that's put on by the organization, and I thought it was a fantastic initiative. It gives women an opportunity to be paired up with mentors in the marketing and media space. I had the opportunity to get involved there, and I'm not going to overplay the work I do; I volunteer as a mentor and I have thoroughly enjoyed the experience. But I think it's really important. At Edelman, certainly, we put a lot of thought into equity and inclusion, and we're very proud of the progress that we've made in areas like having female executive leadership across our global firm. So, I thought this was a great way for me to help contribute more broadly on that front.

PM: That's great stuff. I love it. So we might start by talking a little bit about what generative AI means to you when you think of the term, maybe starting with a working definition, because it is relatively early in its history within society. Then, having read one of the articles you've published recently about it, you make a very strong case for some of the concerns we need to have on the downside. But maybe we can also talk a bit about some of the upside opportunity that you see when it comes to generative AI in the marketing context.

DF: Yeah, I mean, look, I think it's important to step back and acknowledge that AI has already played a massive role in shaping reputation management, and even in disinformation. The difference in the past has been that it wasn't as clearly visible, and you weren't interacting with it so directly. But if you think about things like social media monitoring, where you've got tools that gauge sentiment automatically or project out where an issue is going to go, very mainstream tools now, that's AI in the background doing that. Or if you think about social media and the kinds of content that get served to you, whether that's on Facebook or on TikTok, that's all algorithm-driven, AI-powered. Media buying has been turned on its head over the last decade through AI, whether it's programmatic or the way that Meta optimizes the media buys that you can make through it.

So, I think what's different now with generative AI is that it's much more user-facing. It's taking prompts from users and creating new content, or at least what appears to be new content, on the back of that. And we can get into that a little bit. If I try to think about the implications of that for digital crisis, I can relate it back to one of the most common questions I get about my role and the way I answer it. With a title like Digital Crisis, the ever-present question is, well, isn't every crisis a digital crisis? Why do we need a digital crisis person? Why don't we have a radio crisis person? Why don't we have a newspaper crisis person? So, I talk about three facets of digital crisis. The first is types of crises: the things that are either entirely digital or rooted in digital.

And I think AI is going to have a big impact there. Then we talk about crisis dynamics, and then we talk about crisis tools and techniques, the building blocks of how you respond to issues. I think AI is set to impact all three of these, and there are good sides and bad sides to that. On crisis types: you mentioned disinformation, which is getting a lot of attention and where I'm spending a lot of my time, but cybersecurity is going to be impacted, you're going to see copyright issues, you're going to see all sorts of things. On the dynamics side, we often talk about the internal dynamics that companies have to grapple with around digital crisis: how do you bring together all the different functions of the business to respond, when the crisis communications team probably doesn't control most of the channels you need to use and everyone's getting their information from digital? And then there are the tools and techniques, and I think there are good opportunities for us to use these tools in ways that can help streamline or improve how we do some of the work. The key is going to be doing so in a way that doesn't introduce new risk for the organization. We've already started to see examples of that.
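As a concrete illustration of the "AI in the background" Dave describes, here is a minimal sketch of automated sentiment scoring over a stream of social posts. It assumes the open-source NLTK library and its VADER lexicon; the posts, the alert threshold, and the trend check are hypothetical stand-ins for what commercial monitoring tools do with far more sophistication.

```python
# Minimal sketch: score social posts for sentiment and flag a negative trend.
# Assumes the nltk package is installed; downloads the VADER lexicon once.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

posts = [  # hypothetical mentions of a brand, oldest first
    "Love the new product line, great job!",
    "Their support team ignored me for a week.",
    "Is it true their factory is dumping waste? Sounds bad.",
]

sia = SentimentIntensityAnalyzer()
scores = [sia.polarity_scores(p)["compound"] for p in posts]  # -1 .. +1

for post, score in zip(posts, scores):
    flag = "ALERT" if score < -0.3 else "ok"  # threshold is arbitrary
    print(f"[{flag}] {score:+.2f}  {post}")

# A naive "projection": is sentiment drifting downward across the window?
if len(scores) >= 2 and scores[-1] < scores[0]:
    print("Sentiment trending negative -- worth a closer look.")
```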

PM: Well, including the Vatican.

DF: Go on.

PM: Well, you've got the Pope wandering around in a white puffer jacket. So, when the opportunity to talk came up, I was really excited, because this is something we're seeing emerge. And I suppose, going back to your rhetorical question about why we need a digital crisis specialist, for want of a better term, I would argue from my own perspective that it's because the area is so fast-moving that a traditional crisis management team, no disrespect to them, would struggle to keep up and go: what do you mean the Pope is in a white puffer jacket? That never happened, so why are we even talking about it? Well, because it's out in the media now, and we've got to respond to something that's not even real. And even if we haven't thought about it in advance, how do we validate whether or not it in fact happened?

Because, again, we used to trust in images, we used to trust in voice, we used to trust in video. We have talked about this on the show before: the root of trust in society, which used to be that if it's got a photograph, basically it happened, has now disappeared. And I think the PR manager, the reputation manager for a company or for an individual, has got this incredibly difficult job at the moment, which is: how do you keep up with fighting this effectively limitless resource that is capable of producing misinformation and disinformation faster than you can respond to it?

DF: Yeah, I had the pleasure recently of chairing a conference on disinformation, and a guy named Jack Stubbs spoke, someone we work with very regularly. He talked about the ABCs of disinformation and how each of them is going to get turned upside down. The ABCs are actors, behaviors, and content, and it's a way of breaking down a disinformation threat. So, you've got actors, and the way that a lot of these new tools like generative AI are going to affect those actors is by lowering the barriers to entry. What has been harder to access, harder to scale, harder to do effectively, those barriers are being removed, so you're likely to see a much broader range of people and bad actors engaging in this. Then you've got behaviors: how do you mass-produce content, for example, or how easily can you use things like bots to amplify negative narratives, at scale and in a convincing way?

And so that affects the behaviors piece, how the content and the disinformation get shared around. Then I think the area where we're going to see the biggest challenge is around the C, the content. One of the biggest barriers that has existed, especially for foreign, state-backed actors, has been the ability to mass-produce convincing content. A lot of what's gone out in the past has been not that plausible, or has been easy to spot through things like language nuances, cultural nuances, and too many fingers. Now you can plug a request into ChatGPT and it gives you a perfectly passable block of text. With an expert eye you can probably pick it apart, but for the average person it's going to be very hard to tell. So, it is upending the threat landscape from all of those angles.

PM: Yeah, the mind boggles a little bit here. As you're talking to businesses at the moment, what kind of threats are you seeing? What sort of threats are you advising them of, and how seriously are they taking them? And, I suppose, it doesn't get much more serious in terms of credibility: have you experienced examples of what we'd call, in the cybersecurity world, breaches, where you had to respond to a legitimate threat?

DF: Yeah, so we'll start with the threats piece. I should caveat everything here by saying this thing is moving at a million miles a minute. I remember back in the early 2000s, I used to block off an hour at the beginning of my day and an hour at the end of my day just to read up on developments that had happened over the last 12 hours. It was just that crazy: literally two hours a day of reading just to try and stay on top of it. Then it felt like there was a period in the 2010s where that velocity dropped off a little bit in terms of the pace of change. But it feels like we're right back up there now. There was a big breakthrough moment with DALL-E, a big breakthrough moment with ChatGPT, and now the moments are coming every other day; there's always something new.

So, where is this going? I think a lot of it is going to be in this bucket of emergent crises that we can't even necessarily predict yet, emergent use cases of AI. No one thought that ChatGPT was going to go and write code. These systems are going to start interacting in ways we can't predict yet, whether that's autonomous systems or AIs interacting with each other, and how does that play out? But I think there are some other areas that are clearer in terms of threats. Social media manipulation is a pretty obvious one: automated profiles, mass content creation, content being optimized for virality, things like that. Disinformation we've already talked about. And then cybersecurity threats: social engineering attacks becoming much more sophisticated, automated discovery or exploitation of vulnerabilities, the risk of leaks from inside the building.

We've already started to see instances where companies have realized that employees have been plugging confidential information into some of these platforms as part of their work, and there's risk there: you're exposing that information to a third party. Copyright, too; there's a lot that's unclear around copyright in this space. So, there's a whole slew of these issues. And I don't think we should lose track of the one you actually mentioned at the top here. The macro risk is this erosion of trust, where we move to a place where no one believes anything that they see or read or hear. That would be pretty tragic, but we were already on a bit of a slope towards it even before all this emerged.

PM: Well, I'm so glad you mentioned that, because I'd written it down as something I wanted to chat to you about in what I would describe as your day job. Say I'm a popular figure in culture, a celebrity, an actor or a musician, or I might be a politician, or I might be a brand, and someone's produced a video supposedly showing one of my employees doing something horrible to a baby seal. You've now got to respond to this thing, and you're having a conversation with media about something that never even happened; you've got to come up with a point of view on something absurd. That's the obvious threat. Then there's the counter threat in terms of erosion of trust: famously, your neighbors down south had a former president who coined a phrase that's now known to everyone, which is fake news.

And because so much news can now legitimately be fake, you can hide any number of sins, including actual sins, by simply declaring that a thing never happened. So we wind up in a situation where this erosion of trust means people can just cherry-pick whichever fact happens to suit their narrative or their perspective at the time. Confirmation bias goes wild, we devolve into apes pounding on rocks, and there goes society. I mean, it sounds a little apocalyptic, but I do worry that this thing could be an accelerant for our worst characteristics. And I guess the point I'm trying to make is that even the political class are involved in accelerating this erosion of trust rather than preventing it. What are your thoughts on that?

DF: I think it's a point well made. And look, we just released some new data from our Trust Barometer; we've been studying trust for more than 20 years, and we just put out a new study today around trust in the health space. One of the findings from that study was that family and friends are now as trusted as doctors as an information source. That's interesting as an observation, and it's really alarming when you think about the disinformation landscape, because where are those people getting their news from? So, it is an alarming landscape. But I think there are some things that we can do to try and address that.

PM: I don't want to overstate this, but I genuinely have this feeling, and I might have even talked about it on the show before, that we are at risk of undoing everything we've learned since the Enlightenment. That it all just rolls back to going to the village elder, to your point about trusting friends and family, because a lot of the refinements we've made over the last couple of centuries, the documenting, the researching, a very thoughtful process for fact-checking, seem to be rapidly disappearing. I can imagine a future where trust becomes completely erased; I think we're teetering on the edge at the moment, and I don't want to be too apocalyptic about it. Oddly enough, I think part of the solution, and I'd be curious what you think, is that we'll probably have to build computer software that does that validation for us, because given the rate at which this content gets created, there's no chance a human is going to be able to keep up with it or, to your point, possibly even spot it. So, we're going to need to use algorithms to fight the algorithms, to try and improve the signal-to-noise ratio. What are your thoughts?

DF: I think there are a number of ways that we're going to have to come at this, and I think you're absolutely right. There's a lot of ... it's very easy to focus on the threats here. And ironically, that's actually one of the biggest challenges in the disinformation space: the black cloud is so much more interesting and so much more captivating than the truth, which is usually really boring and dull, and no one shares it.

PM: What do they say? Good news travels fast, bad news travels faster.

DF: Yeah, well, and the sad truth is, I attended a panel yesterday where they were saying that bad news, and especially disinformation, has the fortune of not having to be grounded in facts. It can be outlandish, and it can be really engaging and really, really shareable. There are lots of studies showing that it moves faster and travels farther in social circles than the truth. So part of the onus on companies, but also on media, is that the stories you put out about the truth have to be as engaging and captivating as absolutely possible, because a boring "this is false and here's the truth" isn't going to reach very many people. It's not going to do the job. But I do think this whole area is in flux at the moment.

I don't think anyone's really cracked the solution, but I think there are some things that companies can do around this. And it's really important, actually, to go back to your last question: the role of the employer here is becoming more and more important. There's the role of business as a trusted institution, the one remaining trusted institution according to our research. But there's also the role of "my employer" as a source of information, which is actually one of the most trusted now. Not necessarily other people's employers, but my own employer is very trusted. So we have both an opportunity and, frankly, an imperative to use that as a force for good, both for our own business and at a societal level. But I can come back to that. I think one of the important things here is that the best way to combat disinformation is to get ahead of it.

And that, unfortunately for some of us, is the non-sexy stuff in the crisis realm. It's thinking about disinformation when you're doing risk assessments, when you're doing scenario planning, when you're preparing and training teams, and when you're shoring up vulnerabilities proactively. Where are you vulnerable? In what way are you vulnerable to these narratives? What can you do to plug those holes? Do you need to be more transparent about the science behind your product, for example? Getting ahead of misinformation can then help lessen some of the challenges around the speed it moves at.

PM: Yeah. Well let's talk then a bit about that. So number one, I mean, basic thoughts. What can organizations do? How can they detect that it's happening? How do they appropriately resource and respond to this? And I guess what else should we be thinking about?

DF: So, we think about this in layers of Swiss cheese. Any one slice is full of holes, but if you line up enough of them, eventually the holes get covered. There's no one thing on its own that is going to be enough. A lot of what you've got to be doing is grounded in the idea of proactive reputation building, to build resilience around your organization's reputation up front. That's not new to disinformation, but it's important, and a lot of companies still don't do it. Then, on top of that, you need early warning systems. For a lot of organizations, that could be making sure their existing social media monitoring is catching the little signals. For organizations that are more at risk, whether because they're involved in critical infrastructure or because they've found themselves embroiled in these issues in the past, there are more specialized tools and companies focused on this area that can get into the fringe platforms and look at narratives as they're forming, before they break through to the mainstream. For some companies, though, that's going to be overkill. Then you've got to build this into the preparation and planning that I already talked about. Do you have a playbook for how you're going to handle these information issues? How are you going to triage them? What are the options once you do triage them? And then, as far as combating specific threats goes, there are two primary paths: proactive and reactive. Proactive can be low key. It could be monitoring content and reporting it to the platforms when it violates their terms and conditions, for example, or it could be things like risk mitigation.

So, making sure that things like your company's actions or the words you're using out there don't inadvertently blunder into escalating a known risk. There's also emerging research that's getting a lot of attention now around the idea of prebunking: proactive messaging directed at people before they encounter misinformation. The idea is similar to taking a vaccine, like an inoculation against disinformation or misinformation. That might happen through content online, it might be offline, it might be pre-briefing key stakeholders in person, for example. And then the reactive piece is debunking: refuting claims once they've been made. The challenge is that many companies just start here, and it's the start and end of it. They go, okay, we just need to fact-check this thing and show it's not true. The research has shown that this alone isn't going to cut it because, as we were saying, the lies are more interesting, they tend to be hooked into some form of in-group bias, and they're very hard to refute. You get a backlash effect when you try to run facts up against some of those beliefs.

The facts on their own are never going to win, so you need to pair debunking with these other things.
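To make the early-warning idea concrete, here is a minimal sketch of the kind of trigger such a system might use: compare today's volume of mentions of a risky narrative against a rolling baseline and flag statistically unusual spikes. The narrative, the counts, and the threshold here are all hypothetical; real tools track narratives across many platforms with far richer signals.

```python
# Minimal sketch: flag when today's mention count spikes above a baseline.
from statistics import mean, stdev

def spike_alert(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag if today's count is more than `threshold` standard deviations
    above the rolling baseline built from the history window."""
    if len(history) < 2:
        return False
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return today > baseline
    return (today - baseline) / spread > threshold

# Hypothetical daily mentions of a narrative ("product X is unsafe").
past_week = [4, 6, 5, 7, 5, 6, 4]
print(spike_alert(past_week, today=5))   # False: normal chatter
print(spike_alert(past_week, today=40))  # True: escalate to the triage playbook
```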

PM: Yeah, I'm just going to pick on Pizzagate here. I'm sorry, I'm getting super political, but it's one of the most absurd ideas that has ever crossed my desk, right up there with space lasers. A bad actor in the past could start some sort of rumor like this and it would tamp down over time. Social media, as we've figured out, has become an accelerant, because if enough people start saying, well, my uncle knows about this, then it starts to build momentum. In a future scenario, I suppose, generative AI can create enough evidence that, should somebody go to fact-check, those claims appear supported. If somebody wanted to undermine my reputation or my company's reputation, say, for example, let's pick on Coca-Cola, claiming it does something awful like producing pollutants or pouring arsenic into Coca-Cola, you can create a video of somebody pouring the arsenic in, and it would look convincing to somebody who wants to believe that Coca-Cola is evil. I mean, is there any way you could ever get ahead of something that coordinated?

DF: I think one of the things that you can do, and this is beyond all the other things that we talked about here, one of the key things that we need to think about at a societal level is media literacy.

PM: Say more about that.

DF: If you look at some of the countries that have done best and are the most resilient to disinformation, countries like Finland, where they've been dealing with disinformation from state actors for a very long time, they have built media literacy into their curriculum, from early childhood education through to post-secondary. And this goes back to what I was saying about trust and trust in the employer. You can't go and educate the entire population, but we can take care of the risk of disinformation within the organization itself. So, we have an opportunity here to do a better job of raising media literacy within our own employee base. And not in a partisan way. To your point around Pizzagate, I was on a prep call for an event I'm doing in a few days, and one of the speakers talked about the idea of educating people on disinformation and free speech together.

By introducing that notion of free speech, you're taking some of the air out of the tires of people who are concerned that, okay, you're just going to come at me because you think this is a big right-wing thing. Everyone's susceptible to this. So, do this education in a way that engages people on both sides of the aisle who have genuine concerns, whether about opportunities that aren't necessarily available to everyone, which I think underpins a lot of this, or about freedoms being taken away. Doing it in a way that is inclusive is really important.

PM: And you make a point that I wanted to get to. We've talked about this before on the show: what role do you think government, whether it's in the form of legislation or education, has to play in helping address this in the long term? I recognize it's not a short-term solution, but do you think that governments and our education institutions, which are related, need to step up here?

DF: I think so, and I think it has to happen in a way that brings together different institutions, because government on its own is going to struggle with certain constituencies, and each of the institutions is going to have its own challenges. But if you look at, for example, some of the work that Google has been doing partnering with government around prebunking and test-driving efforts there, that's more the way to go. And then the education piece is a place where government can help. Again, it's going to be incredibly difficult in the polarized landscape we're in now; you say the word disinformation and it's just a trigger word for some people. But I think if you're able to do it in a bipartisan way that isn't just aimed at knocking one side or the other, you can get somewhere, because no one wants to be manipulated. Coming at it from that bipartisan perspective is the way to go.

PM: I wish I could believe that last statement, but part of me wonders whether, deep down inside, confirmation bias is just built into our DNA. And I do wonder whether we might not want to be manipulated, but we do want to be comforted. And comfort comes from hearing something that confirms our worldview; it means that we don't have to change, the rest of the world does. So I'm not sure I'm as trusting as you are.

I have so many questions for you. You did mention one threat that I hadn't really stopped to think about until recently. I did see a company policy pop up at one of the organizations I work with, and that is the matter of inadvertent information leaks, not necessarily of company secrets even, but of personally identifiable information, through feeding documents into a tool like ChatGPT to have them summarized or reframed. In the process, what the person doesn't realize is that they may be adding that information to the knowledge ChatGPT has about us, and therefore other people could learn about it. Do you want to talk a bit more about that particular threat? Because that does seem to be one that could inadvertently result in problems. What sort of company policies need to be put in place to utilize these technologies and get the benefit from them without causing problems?

DF: Well, I think you almost answered your question within your question there. But you're right that this is one of the pieces people don't think about, and we've even had these discussions within our own walls. I'm part of our global AI task force, and we look at new tools, and there are things like new press release generators. Well, okay, great, you're going to plug your information around your upcoming announcement into an AI, but where's that information going? And if that's proprietary information that hasn't been released yet, should you be doing that? So, I think there is a lot of work to be done on the IT and compliance side around what protections are built in. And I would extend that even beyond things like plugging things into ChatGPT. I don't want to single that one out here, but you can get browser plugins that are providing feedback in real time based on what's on your screen.

Well, if you're viewing confidential documents on your screen, is that a security risk? So, I think there's a lot of work that our IT friends are going to be very, very busy working through, and the same thing applies as AI starts to get built into some of our common applications. I think it's really important, and it brings us back to the idea of education here. We're back onto AI, but educating your workforce about the opportunities but also the risks of AI is really important. So, we're in the process of rolling out training across our entire workforce around this area for exactly that reason. And part of that needs to be guidelines for how you use AI in your work: what should you do, what shouldn't you do. Don't input confidential information into an AI. Don't allow AI to substitute for expertise, or allow AI-generated work to go out the door without thinking about fact-checking, copyright, or bias and fairness in what's being put out. Don't take content at face value. So, I think those kinds of policies, but also education, have to happen.
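As one hedged illustration of the kind of guardrail Dave describes, here is a minimal sketch that scrubs obvious personally identifiable information from text before it would ever be sent to an external AI tool. The regex patterns are illustrative, not an exhaustive PII filter, and any real policy would pair a filter like this with the training and guidelines he mentions.

```python
# Minimal sketch: redact obvious PII before text leaves the building.
# The patterns below are illustrative examples, not a complete PII filter.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognized PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

memo = "Contact Jane Doe at jane.doe@example.com or 555-867-5309 re: Q3 plans."
print(redact(memo))
# -> Contact Jane Doe at [REDACTED EMAIL] or [REDACTED PHONE] re: Q3 plans.
```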

PM: Particularly fascinating stuff. Look into the crystal ball for us, if you wouldn't mind? It is a fast-moving space. Let's go forward five years. How do you think the digital crisis landscape ... you just laughed. Maybe that's too far ahead. How do you think the landscape might have changed?

DF: Ask me about next week. Oh, look, I think change is going to be the constant, but it has been for years. Even think about the last few years we've just dealt with on the reputation and crisis side of things: we had the pandemic, and within the pandemic we had the Great Resignation and supply chain issues. In parallel with all of that, we had the murder of George Floyd, the Black Lives Matter movement, and this reckoning around diversity, but also around the role of business in society. And these issues have just kept coming: Russia-Ukraine, the energy crisis. AI is just the latest, and it's a big one, but it's the latest. So, I think we are living in a world of crisis right now, and I think there are a few things that are going to shape it in the next few years.

I think AI is going to keep evolving at an accelerating pace, and that's going to lead to certain activities being commoditized. The good news is no one should have to wake up at six in the morning to compile media clippings for anyone, that kind of rote work that people neither enjoy nor value nor get much out of. And I don't mean value in terms of the output, but the actual work itself. I think those activities are going to start to get commoditized. So, that's one piece of it. More broadly, there are all the things we've talked about: new types of crises, new dynamics that are going to be introduced. Secondly, I would say we are going to see the ongoing and increasing impact of Gen Z on the crisis space, on reputation.

We haven't really touched on that here, but this up-and-coming generation, some of whom are now in their mid-twenties, have values and expectations of companies that are different to those of the generations that came before them. Their core values around transparency and integrity are incredibly important, and the way they respond when their expectations aren't met is different to that of older generations. We see it in the way they interact with media and channels; from a crisis communications perspective, dealing with an issue on TikTok is very different to dealing with one on Twitter, which is where a lot of people are more comfortable. It's harder to monitor, it's harder to create content for, it's much more algorithm-driven, and issues can pop back up when you think they've gone away. We also see it in Gen Z's expectations of the role business plays in society, and the expectation that companies think about stakeholders and not just shareholders.

And then there are maybe two other things I would add. One is that I think we are going to continue to see change at a platform level, whether that's because of AI or because of shifting behavior patterns, with one platform rising versus another. And lastly, I think we have to keep looking at this landscape we've been talking about, the environment we're operating in and the polarization. We're going to have to do what we can to solve some of these really pressing societal issues because, to a great extent, the things we've discussed over the last half an hour are finding fertile territory created by things like income gaps, distrust of institutions, and a fear that equal opportunities aren't available to everyone. That is what's leading to a drop in trust, which is leading to fertile ground for disinformation, which is leading to some of the issues we're talking about.

PM: Yeah, I remember somebody saying to me probably 20 years ago that the solution to your problem is usually two levels abstracted from the thing you're actually looking at; you're often looking at the symptom of a bigger problem. And as you say, we can't lose sight of what people's motivations for this disinformation are in the first place. If we address the motivation rather than the disinformation itself, maybe the disinformation goes away, or at least loses its force. Incredibly thoughtful stuff. I've enjoyed the conversation, and I think you're doing amazing work helping people understand all this. I remember back in the day when Wikipedia first popped onto the scene, companies I worked for had to have dedicated Wikipedia people who were just there trying to keep the Wikipedia entry under control. It seemed odd at the time; now you wouldn't even question somebody doing that. And these new and emerging roles, I'm sure we'll look back at them in five years' time and think, why didn't we have them in the first place? So, incredible stuff you're doing. I appreciate it. One question for you as we wrap up: the show's sponsor, Rocket Software, has a set of corporate values, the things that matter to them most: empathy, humanity, trust, and love. Just curious, what matters to you right now, Dave?

DF: That one's easy. I literally have a piece of paper here on my laptop that says, remember what's important. And I have a list, and I've kept it here since the beginning of the pandemic. It's a very beaten-up piece of paper now because it travels with me. But it is: time with family, physical and mental health, personal integrity, continuous growth, and making a difference.

PM: Love that. You'll have to email me that list. Brilliant. Thank you, Dave. Thanks again to Rocket Software for bringing us another episode of Digital: Disrupted. And thank you all for listening in. If you like what you've heard, and if you've heard this podcast before, you know the routine: we'd love to get a thumbs up on Apple iTunes, Spotify, or wherever you happen to be listening to us. You can also reach out to me on Twitter. I am real, I'm not a robot, I promise. That's what a robot would say, of course. Or reach out to our show sponsor at Rocket. If you've got any questions for our guests, such as Dave, or ideas for topics you'd like to hear covered in the future, we'd love to hear from you. With that, we'll see you all next week, everyone. Stay disruptive.