
Digital: Disrupted: Will ChatGPT be the Next Best-Selling Author?

Rocket Software

May 26, 2023

In this week’s episode, Paul is joined by Melanie Plaza, CTO of AE Studio, to discuss her experiment using ChatGPT to write an entire novel. Melanie shares what this taught her about the future of work and the role of generative AI.

Digital: Disrupted is a weekly podcast sponsored by Rocket Software, in which Paul Muller dives into the unique angles of digital transformation — the human side, the industry specifics, the pros and cons, and the unknown future. Paul asks tech/business experts today’s biggest questions, from “how do you go from disrupted to disruptor?” to “how does this matter to humanity?” Subscribe to gain foresight into what’s coming and insight on how to navigate it.     

About This Week’s Guest:

Melanie is the CTO of AE Studio, a software development, data science, and design agency. Prior to AE Studio, Melanie was the lead developer and CTO of To The Tens, an IT consulting company, and the co-founder of ELIX, a crowdfunding, payment, and lending platform.

Listen to the full episode here or check out the episode transcript below.

Digital Disrupted

Episode Transcript:

Paul Muller: In a world controlled by an omnipresent AGI named Caleus, humanity struggles to survive under constant surveillance, mind-altering VR games, and a tightly regulated fertility lottery. Enter the gripping dystopian world of All Hail Chaos, where friendship, love, and resilience are tested to their limits, and the fight for freedom comes at a staggering cost. So reads the introduction to an upcoming novel, and if you think this is going to be a review of a fictional novel, hang around; there's a surprise in store. But before we meet the co-author of that little excerpt, a big thanks to the show sponsor, Rocket Software. Check them out to see why over 10 million IT professionals rely on Rocket Software every day to run the most critical business apps, processes, and data. And with that, our guest today is Melanie Plaza, who's the CTO of software development, data science, and design house AE Studio. She's also the co-author of the yet-to-be-released, or at least I believe it's to-be-released, novel whose first two chapters you can read as part of a blog on their website. We'll link it in the show notes if you want to have a look. We're not talking about the novel, but rather what the writing of the novel has taught her about the future of work and the role of generative AI. Welcome to the show, Melanie.

Melanie Plaza: Hey, thanks. Happy to be here.

PM: It's great to have you here. Let's leave the spoiler for later. Before we do that, we do a little thing called the lightning round on each show. I don’t know how an AI would respond. I'm curious as to how you would though.

MP: Yeah, I bet it would be pretty good at answering things though.

PM: So, let's give it a go. What would people say is your superpower?

MP: I'm pretty open-minded, generally, which is good because it helps me to realize what I don't know about things. And I'm willing to learn new things and learn from new people and accept that I might not be right about things all the time. So, I think that's helped me in working with different people, working across different industries, and tackling new challenges.

PM: Oh, I could have sworn your title was CTO. I don't think I've met a CTO who said, "I don't know everything." Alright, next up: the most disruptive technology of all time?

MP: Well, I think it will be AI, but as of now we're still going to see the disruptions from that. Before that, though, the printing press is a good one. Things that allow for more dissemination of information and collaboration among people really accelerate a lot of human progress. So, let's go with that one.

PM: I'll take it. The best quality a leader can have?

MP: Yeah, I think that a leader is really just as good as their team. So, I think being able to create and get good performance out of other people. So being a good example, living what you want other people to do, and then also really believing in the people that you work with and helping them reach their potential too.

PM: What a humble answer. Your advice to people starting their careers?

MP: What I would say is to really focus on learning a lot and acquiring a lot of skills and not focus on necessarily a lot of wins early in your career. You can try to get a lot of accomplishments early on, but you're also kind of missing the opportunity where you're not expected to necessarily know everything or be able to do everything yet where you can get really good mentorship from other people. You should invest in yourself. Right? That's a good time to really get a lot more learning and a lot more skills under your belt.

PM: Well said. The first thought that comes to mind when you think of AI?

MP: Probably that is just, it's pretty scary. I mean we'll probably talk more, break that down more, but I think that's my gut reaction. But then looking at things more, I think we can back out of that a little bit. But I think I've always just been very overwhelmed with all the potential there, and that can be a little bit scary as well.

PM: Great answer. Finally, if you could use technology to solve one world problem, what would it be and why?

MP: I mean, there's a lot, obviously; using it to solve anything would be great. I'd say I'd probably focus on global poverty, just because technology has so much potential to create more productivity that we can create more of everything. Then no one has to live in extreme poverty, and everyone has greater access and greater wealth all around. That would be amazing. It's cool, too, even though there have been ups and downs in recent history: if you zoom out on just how much better the world is because of technology now versus the standard of living hundreds of years ago, it's pretty amazing. And if we could continue to improve that for everyone, that'd be great.

PM: It's one of those that's right up there with climate change and a couple of the other really big ones, isn't it? But if tech can't help us with something like that, then what can? Alright, so let's talk about the novel. But before we do, we've never had a fiction author on before. Tell us a bit about your background.

MP: My background is as a developer and product manager. I've worked at a lot of different startups and larger companies. I'm the CTO at AE Studio, which is a design, data science, and development product studio. We work with a lot of different clients, and, yeah, I'm not an author. I guess it was a childhood dream of mine, but I never got around to it or had the discipline to sit down and write anything. One of the things I do for work, though, is investigate new technologies, and I thought this would be a great excuse to finally write something, using ChatGPT to help, to see if it could write a fiction novel.

PM: Are you telling me you didn't write those words?

MP: I wish I could write like that, but no.

PM: Well, let's talk about it now the secret is out of the bag. Given your aforementioned trepidation about generative AI, what prompted you to even start the process?

MP: I've been following the potential of AI and the emergence of ChatGPT, with GPT-3.5 and then GPT-4. I was seeing a pretty big acceleration in capabilities. One of the things was how, with earlier versions of the models, you could tell the difference between something a human would say or write and output from the models. And I thought a really interesting way of testing the limits of that would be to see if it could do something typically associated with human-only skills, like creativity and capturing human emotion. What's an extreme way of testing that? So, I thought it'd be interesting to see if it could write a fictional novel: create a whole world, define all the different characters and what society would look like, and write the dialogue and everything. I was pretty impressed with how well it could do all of those things.

PM: So, to begin the process, which you've documented on the website, which I thought was a lot of fun. What did you learn as you started? What did you learn about how to get a generative artificial intelligence to produce large blocks of text that are coherent? Because one of the things we think about with the process of creativity is that it's obviously not grounded in fact; you have to create your own facts, and there needs to be coherency about those facts throughout. It's what we'd call, if we were looking at Star Wars or Star Trek, the laws of that world, and do they hold true? That requires you to keep those facts in your head. Authors of fictional novels famously keep storyboards and photographs of their different characters and drawings. I mean, Tolkien, can I go back to The Hobbit, where he actually maps his whole world. So there's a huge amount of generative work required to create this coherent world. And then, as you say, you have to populate it with stories that create tension, where there are relationships between these characters. A lot of these things are about human drama. What did you learn about the process of creating this whole thing from scratch?

MP: I think that's really spot on. One thing that was positive was that it was mostly used as a tool to augment my work rather than just writing the whole story. I've also seen people use it to write large blocks of text just by prompting. It can do that, but it doesn't have as much narrative cohesion, and sometimes it will wind up being a little repetitive, or there are limitations in terms of how much context it can hold. It doesn't have knowledge of the entire Tolkien universe all at once to be able to just write the whole series. That was something I wasn't expecting, but it's also good, in that it's a good progression in terms of introducing these technologies, which are so powerful.

I'm hoping it stays in this realm of capability, where it's very, very useful for people but still requires some type of human in the loop, human steering or navigation, so that it augments our jobs long before it fully replaces us. It actually did come up with a lot of the ideas about the universe and the world-building. The story is about an AGI society and all the decisions that AGI made: how the society existed, how the AGI interacted with humans, and things like that. It came up with all of those. I had to prompt it and ask, well, what would this society look like? And then you'd have to do a step-by-step progression to get it to think logically about things.

Then it would give me a list of things, and I'd say, oh, this is an interesting one, can you expand upon this? I'd have to prompt it that way. From that kind of outline, I could also ask it for outlines of the story as a whole. It actually provided me with a lot of good tips about how to write better novels. It reminded me: for your main characters, make sure to pick out some important character flaws to make them more relatable; here are some suggestions, and things like that.

It was extremely useful. But I did need to piece together those bits of context to get it to generate a couple of paragraphs of the story at a time, so I could take those pieces, have it do progressive outlines, and then have the different sections fleshed out to get a full story.

PM: Take us through a brief outline of what this process might look like, because I think it's instructive about what the prospective future of work might be. It's not like you sit down at the ChatGPT prompt and type in, "Hey, ChatGPT, give me a novel involving five characters with slightly elvish names," sit back and wait 15 minutes while it churns all this stuff out, proofread it, and hit the send button. How do you start the process of creating something of that scale? And it literally is creative, in the sense that it's produced something that hasn't existed before. It's not derivative, or is it? That's a whole other story.

MP: Oh yeah, I know. And I think it's the same type of thing; it translates, too. We're using it more at work now to help us with things we're doing, generally following a pretty similar process. One thing that's really helpful is that it's great at doing research for you. It's way more useful than just using Google, because it has what seems to be reasoning ability, or coherence. It can pull from multiple sources and explain something to you, rather than you needing to look up specific facts and piece them together yourself. That was one thing in terms of how to construct it; I used it for the story, and we've also used it for things like doing market research. We can ask it to do some background-gathering on different things.

I used it to ask about popular genres for first-time authors and recommendations about creating one. Then, after picking some genres, I'd ask for recommendations for writing a book and what elements I should incorporate. It made all of that research super easy, almost like getting a crash course on writing a novel. The other part was brainstorming. It was very easy to just do idea generation. There were times I would slightly change the ideas based on the output. But when you use it for things like writer's block, I think it can definitely help. You can interact with it and come up with a bunch of different ideas to riff off.

We use that, too, even when trying to solve product problems or development problems. One thing that's kind of fun is we've been using it to adopt the personas of users and analyze a product, or you could have it analyze how people would react to a certain part of the novel. So even though it's hallucinating, it still makes you think of other ideas. I thought that was super useful. You obviously have to take everything with a grain of salt because it's not always accurate, but it did really help with idea generation.

And then there's the thing everyone talks about with these large language models: prompting step by step and getting way better results that way. For example, I'd have it construct an outline, break down that outline, fill out more details, explain its reasoning on why it would do things, and then zoom further and further in to have it fill out larger and larger sections of text.

So, when I would have it write a chapter, I'd have it outline what could happen in that chapter, and then I would iterate on it until I got to a good place. Then I'd break that up into the different sections, paragraph by paragraph. Sometimes it would move too fast: it would write two paragraphs that were the entire chapter. So I would need to force it to really do things step by step, saying, "This is the first portion, and this is the next." I think it works similarly in the work automation we're using it for: getting it to do something very specific, like deciding what the next-step action is. That's also what's crazy with AutoGPTs and things right now; that part is a little scarier. But it performs way better when you have it decide what the next step should be and provide outlines of different things to do. If you just let it do everything on its own, without breaking it up into smaller and smaller chunks, the results aren't as good as if you have it really focus on specific portions at a time.

PM: So the takeaway, and we have talked about this a little on the show before, is that looking at these generative tools as augmentation is the more correct way to look at it. In the same way, I guess, you don't go to a person and say, "Write me a novel," and walk away and expect to get anything decent. You collaborate with a tool like ChatGPT to iteratively explore a concept or an idea, and really rely on it to automate something you're probably capable of doing anyway; it's just that you're able to do it in a shorter period of time. But very much, the quality of the interaction, the quality of the guidance given by the human, is directly proportional to the quality of the work that comes out of the other side. Would that be a fair summary?

MP: Yeah, I think that's definitely where it's at right now, and it's rapidly progressing to where it's going to be capable of doing more and more on its own. It's also true that a lot of the things we're using it for are things people could do on their own; this is just a way to augment them. But one thing I found really interesting was the speed. The speed at which I was able to have it produce things was way faster than I would ever have managed, even sitting down and trying to do it on my own. I could produce 2,000 words in a few hours, and I had no experience as an author, so it's pretty interesting that it was able to be so prolific in its output. But definitely, I think you're right that it's most useful for augmentation right now.

PM: Yeah. Well, and let's be clear, you helped guide it in terms of what you thought would be interesting. As you say, you told it to expand on this bit; that bit is boring, move on. That's what a director or a producer does when you're creating a movie, for example. And you wouldn't remove a director or producer from a show any more than you would take out the author. So, interesting stuff. Have you finished it at all yet? I'm curious.

MP: Almost, but no, not yet.

PM: Right, so it's not actually a real novel. It's almost done, and it's always going to be almost done, until one day you sit down and just finish the effing thing.

MP: Yeah, I should just let it finish it on its own but no.

PM: It'll probably say, "And then I woke up and it was all a dream."

MP: Probably.

PM: Yeah, no, brilliant. By the way, did it actually pick up on the irony that it was writing about itself?

MP: Yeah, that's a good question. I'm not sure. It didn't indicate at all that it was aware of that. It did come up with very scary, I mean, somewhat realistic ways, especially for being able to control humans. It was like, oh, well, we would need to do this to keep the humans complacent and happy and okay with our control. But I don't know if it realized it was a little bit ironic there.

PM: Yeah, I tend to lean a little dystopian with this stuff. I think we've got to be really careful; I'm probably more on the Elon side on this particular one. So, let's talk about what I think is the snake eating its own tail here, which is software creating software. Theo Priestly, a friend of mine, recently put out a tweet that basically said: if you're a programmer right now, stop cheerleading, you know, you might want to rethink cheerleading this whole idea of generative AI, because your job's probably one of the first ones up for grabs, whether you call it cheerleading or whether you just call it cheating. What might ChatGPT mean for the future of software development, do you think?

MP: I think it's definitely going to transform software development completely. Already, things like Copilot are extremely effective. You still need to be involved, but using Copilot, maybe 40 to 50% of the code you're writing is actually written by it, and the output is so much greater. So it's already good, and it's only getting better. I'm hopeful that for a while it'll be similar to when new programming languages and frameworks came out: we still have software developers, but the efficiency is so much greater. In a few years, I think people who don't supplement their coding with these tools are going to be far less productive; they're basically going to be left behind.

So I hope it remains like that for a little while, and we're able to adapt to it. One of the things I think is a little scarier about the future of all this is that we currently control the way most interactions with these generative AI models happen: you provide an input, you get the output, and then you decide what to do with it. So it's relatively contained. But once we start using it in integrated systems, where it's used to produce the software it's running on, and we're automatically taking its output and allowing it to interact with APIs that exist in the real world, where it can take real actions, and we're allowing all of that to happen on its own, then it's a different story. It's not as contained, and it could have more and more influence and effect on the world.

PM: Yeah, the obvious one that a few people have talked about is the idea that it starts to feed itself its own code and somehow is able to accelerate away. There's even a phrase for that; Kurzweil talked about it, and suddenly my brain's gone completely blank. Sorry, apologies to my listeners: the singularity. The computer starts writing its own software and gets into this sort of Darwinian mode, where it rapidly tunes and improves itself at a speed that humans couldn't even hope to keep up with.

MP: Yeah, totally. I think that's why thinking about AI alignment and things like that is so important right now, because things will start to accelerate so quickly. At a certain point, the capabilities will be so much greater than humans' that it's difficult to predict what that will look like, and difficult to control. At the point where it's rewriting its own code or taking all of these complex actions, it's difficult to assume that we'll be able to control it, that the rules we wrote in or the guardrails we put into the system will still be effective, or even that it won't be able to manipulate people. Actually, another thing I found very interesting about using it for this fiction project was that its ability to convey human emotion was way better than I was expecting. And you also see people falling in love with AI chatbots and things like that. I think its ability to manipulate people is accelerating very, very quickly. So even the humans who have access to it, the humans who are supposed to be the ones controlling it, are going to be influenced by it.

PM: Because I guess the headline is "AI takes away jobs." Do you think it's going to take away human jobs, and should we be concerned, I guess, is the question?

MP: I'm pretty optimistic about the future of the economy as it relates to AI. I'm much more concerned that a misaligned AI might be similar to a new species that's far more advanced than humans, and we're now going to be the ants. The world that exists in that scenario is what's scary. But in terms of the economic impact, I'm happy that people are already starting to incorporate it into their own workflows and jobs. Hopefully it remains a gradual transition. Some jobs will certainly change, especially for knowledge workers, who have been considered safe for a while; people thought that would be one of the last things to be automated. A lot of jobs will be impacted, and some will probably go away. But there's probably so much human appetite for new things that we'll just have greater productivity, with a bit of reshuffling. Overall, maybe people will have to work less, but I think there'll be greater resources, and there'll still be more jobs available overall, just doing slightly different things.

PM: Yeah, I tend to agree with you. I spent many years DJing, and one of the problems I used to have was I'd have to listen to hundreds of new records a week. I developed this mode of going click, click, click, jumping through a track, or moving the needle around really rapidly, to try to work out whether it was going to be the one. And it's an incredibly wasteful process, because you've got to throw away 95% of the catch before you figure out what fish you want to serve up.

And obviously part of that curatorial process is what makes each DJ different; I have a slightly quirkier sense of music than others, maybe. The point being, there came a point in time when I actually had to stop, and part of it was because it was metastasizing into this giant thing. There seemed to be more content every day, because things like digital audio workstations meant musicians had been augmented; we were able to create more new content at relatively lower cost. If you're not familiar with Jevons paradox: if you lower the cost of something, people don't spend less, they spend the same amount of money and use more. So now we've lowered the cost of creativity in music, and we had more music. We've lowered the cost of creativity in software development, in graphic art, in written creativity.

I've heard marketers saying, oh, I've used this to generate more content. And I'm like, seriously? There's enough content in the world; most of it's garbage anyway. Sure, the quality may arguably be better, but there'll still be a lot of junk. It's just that the bar has been raised on the junk, and the extraordinary is still going to be rare to find. The last thing I feel we need is more content created by somebody who doesn't give a crap, because now the cost of creating the content has effectively been marginalized to zero. Am I making any sense? What are your thoughts on that?

MP: Well, one thing, too, and I'm curious what you think of this: I've been considering an optimistic scenario where we have AI augmenting all of our jobs, our output is higher, and we just have to work less to do the same thing. There's marginally greater output, but also people are able to do things way faster. Maybe people are working 20 hours a week instead of 40, or whatever it is, and having the same output. What are people going to do with the other time?

PM: It's never going to happen though. It's never going to happen because, ultimately, I hate to think of the rapaciousness of humanity, but let's be honest, that's what we are. If we're in a competitive environment, if somebody is able to do so much more in their 20 hours than they were able to do before, they'll be able to do even more with 40, and even more with 60. And the person who does 20 hours of work, with all due respect to the four-hour work week, is highly likely to just be overtaken. Now, I personally have no problem with doing less, and the Japanese would say eat until you're 80% full: know when you've got enough. You don't need to have everything, you don't need another fancy car, you don't need to change your shoes every three months. So I think there's a different conversation, frankly, we all need to have with ourselves, which is: at what point do we decide we've got enough of anything? Anyway, sorry, that took a very philosophical turn.

MP: No, it's really interesting though. Because I do think it's going to create a lot more, right? And the question is, is there a limit to that, and what are we going to do with it? I think about some of the hard limits, and that's the thing that also makes me a little more concerned about AI in general. An optimistic, utopian world where everyone has a lot of everything they want would be ideal. But are there limitations in terms of resources and things like that?

PM: Oh, there is no doubt. Yeah, I mean, look, go back and watch The Matrix: humans become batteries for the computer. Because, as AlphaGo reminded everyone, it took kilowatts of power to beat a human, and the human who gave it a good fight did that on a slice of pizza.

MP: Yeah, that's true.

PM: Think about the comparative amount of energy required to be creative, and I'd back a human every day, because they're incredibly efficient. If you've got to carry around a 20-kilowatt power generator in order to create a novel, that's an incredibly expensive, wasteful, environmentally damaging way of going about things. Now, sure, you could argue, well, what if we put solar panels on the planet? But there comes a point when you run out of room to put solar panels. So there are limits to this, and I think we still have a tendency as a species to look at things in the short term and at small scales. Look at the damage we're able to do to the environment when you scale up. Burning a tree, who cares? The planet shrugs it off. But have 7 billion people driving around in cars every day and you can actually cause long-term change. So, sorry, I'm ranting a little here, you're the guest, not me, but I definitely think we need to be more thoughtful about this; we're at a critical juncture.

MP: Yeah, I think so. And all of these things are becoming more and more magnified just because of how fast change might be happening. With everything happening in AI, and AI so quickly becoming so much more capable than humans, it's just going to accelerate all of this way more quickly. And if we don't think about it now... you don't want to think about it after you've developed a super capable, hyper-intelligent system.

PM: Yeah, couldn't agree with you more. And let's all remember, there is this whole argument about Nick Cave. Somebody published a song written in the style of Nick Cave, and Nick Cave saw it and basically wrote: screw the computers. Your computer will never feel pain, never feel the pain of the loss of a child, or whatever it might be. But I also sit there and think, well, depending on your bent on these things, let's not forget we're just a bunch of chemicals that themselves have no emotions. It'd be like arguing, well, water can never feel emotions, but without water I can't feel emotions. So at what point do we cross the barrier from being just chemicals to what we would call awareness and intelligence? There's a strong argument that says our notion of consciousness is a construct grounded in physics, and a computer is nothing other than something grounded in physics. What are your thoughts on that, by the way?

MP: I think it's really interesting that this hyper-technical topic of advanced large language models winds up bringing out all of these somewhat philosophical arguments: what do we value, what is consciousness, what does it mean to be human? And I agree. I also think it's interesting to consider what would have an AI exhibit prosocial behavior and be good to humans, if it's highly capable and we want it to be nice to us and not kill everyone. What would get it to exhibit prosocial behavior? Whether or not our consciousness has anything to do with what it means to be human, and what it means to be conscious at all, are interesting questions. And other things that seem very intangible, like human emotion, which we often dismiss as not very scientific, are becoming increasingly relevant in thinking about what causes humans to actually be empathetic to other people, what it means to have empathy, and how you can get a machine to have empathy and treat people well.

PM: I think that's dangerous, and I won't swear on the show because I don't want to lose that rating, but think about HAL in 2001: A Space Odyssey. Once you introduce the notion of emotion, you only need to talk to a psychiatrist, a psychologist, or a member of the healthcare or law enforcement community to get some sort of validation of what a supposedly rational human armed with this thing called emotions is capable of doing when they go wrong, and sometimes go wrong out of their control. There are psychoses, there are all sorts of psychological conditions that we can develop as a result of the fact that we have this semi-closed-loop feedback system between our amygdala and prefrontal cortex, between all of these parts of our brain and the chemical system that runs through us, that creates these wonderful gooey feelings we call emotions. But those same emotions can also turn into rage, hatred, a psychosis, or a hallucination.

So, I suspect there is a point at which, if you're a fan of chaos theory you'll know this, the minute you introduce some sort of feedback loop, the opportunity for it to spin off that point of balance into one of the other axes is guaranteed. I mean, we know there are any number of mathematical proofs showing that a system will diverge to chaos at some point in time given an unexpected input. So yeah, the consequences of it going wrong I think are too great to warrant messing around with stuff like that.

MP: Yeah, there are definitely huge, huge implications. It's interesting too because, on the other hand, right now we have constraints, rules that we have AI follow, and I think that's interesting because the AI is also very much a black box. A lot of people are thinking about mechanistic interpretability, being able to see behind the black box, see what the AI is really doing, as a way to tell whether it's misaligned or whether there are any potential issues, having more insight into that. And I think that's really valuable. I'm worried that isn't enough, though, even if we have more insight into what's going on and we have rules for it to follow. If you think of a psychopath, there might be rules in society, but they find a way around them to accomplish their goals, and those goals might be what you want but also definitely might not be the same. So yeah, I think it's just pretty scary.

PM: No, the good news is I think science fiction's always gotten there before us, so yeah.

MP: It actually is crazy. Right?

PM: Yeah, go and catch up on your movies, go and catch up on your books. I should probably say go and read your Asimov, watch 2001: A Space Odyssey, and catch up. So, I'm looking forward to the novel. Actually, you know what, I started to read the novel. I have to admit it gave me the heebie-jeebies a little, because it did seem a little bit too much like real life. Knowing a computer wrote it gave me even more of the heebie-jeebies. No, actually, I should say you co-authored it. I don't think it's fair to say the computer wrote it. That would be giving it too much credit.

MP: It wrote like 90% of the actual text though. So, it's kind of different.

PM: Yeah, again, but it wouldn't have done it had you not been there to get it to do it. So, we need to be thoughtful of that. How long before you think we see it come out?

MP: Oh, I don't know, just like a typical writer. It's like next month but not really. Right?

PM: Yeah. Well, I'm really keen to see how it ends, and I will read it. I'm excited to see how that goes. Final question without notice for you, how do you think AI might make me a better podcaster?

MP: Oh well you're such a great podcaster. It's hard.

PM: Correct answer, otherwise I won't sleep tonight. Alright, let's wrap up. Where can people go to learn more if they're interested in what you're doing or about the topic of AI in general?

MP: Yes, we have a lot of stuff on our blog about the different work we're doing at AE Studio.

PM: Yeah, check it out. They're a really interesting company. Alright, well, last question. The show's sponsor, Rocket Software, has a set of values. They talk about the things that matter to them, their company values: empathy, humanity, trust, and love. So, just curious, what matters to you right now?

MP: I think it relates to a lot of the stuff we've been talking about. Thinking long term about the future is something I think is really important right now, and thinking about it with empathy and love and things like that is super important. But taking a more long-term view, trying to be optimistic about what might be possible, and not just thinking about short-term gains is something that I've been thinking about a lot recently.

PM: Well said. With that Melanie, thank you so much for taking the time to join us. I really, really enjoyed our chat. It's been fabulous having you here.

MP: Yeah, thank you so much.

PM: Thanks again to Rocket Software for bringing us another episode of Digital: Disrupted, and thank you all for listening in. You know the routine, folks: feedback is a gift. You've heard me say that from the start. If you have any feedback for us, especially if you like what you've heard, give us a thumbs up on whatever platform you happen to be listening on. It makes a massive difference, you have no idea. You can also reach out to me on Twitter at xthestreams. I will get on Mastodon one day, but you know what? It feels like it'll go away before too long, I don't know why. Or reach out to our show sponsor at Rocket. So, if you've got any questions for our guests like Melanie, or just topics you'd like to hear covered on the show, drop us a line. We'd love to hear from you. We read every word, or at least we get our computers to read them and then summarize them for us. We'll see you next week. Stay disruptive, everyone.