Brief summary
Thoughtworks recently established a new role — Chief AI Officer. Taking up the position is Mike Mason, a Thoughtworks veteran with over 20 years at the company in technology roles spanning developer to technology strategist and author (and occasional Technology Podcast host). Mike will help guide Thoughtworks' AI strategy and ensure that we're equipped to support clients trying to leverage AI.
In this episode of the Technology Podcast, Mike talks with hosts Neal Ford and Prem Chandrasekaran about his new role and explains why it's important that the company has someone leading on AI. He also discusses the hype, opportunities and risks of generative AI — high on everyone's agenda at the moment — and explores how it might change knowledge work in general and software engineering more specifically. Listen as Mike talks through some of his own experiments with ChatGPT and offers his perspective on its likely impact on jobs in the months and years to come.
Episode transcript
[MUSIC PLAYING]
Neal Ford: Hello, and welcome to the Thoughtworks Technology Podcast. I'm one of your regular hosts, Neal Ford, and I'm joined today by another of our regular hosts, Prem. Good morning, Prem.
Prem Chandrasekaran: Thank you, Neal. It's wonderful to be here again.
Neal: And we are joined today by another voice that will be familiar to you, because he's a regular podcast host. But not today: today he's sitting in the much more comfortable guest chair here in our luxurious podcast studio [laughs] — Mike Mason. So welcome, Mike.
Mike Mason: Thanks, Neal. Thanks, Prem, nice to be here. I'm not sure the guest chair is actually more comfortable, but I guess we'll see…
Neal: We shall see; it depends on how hot the seat gets, I suppose, as we start talking to you. So we are here today to talk to Mike because he has been newly appointed, or anointed, I guess either one of those would work, as the new chief AI officer at Thoughtworks. And we thought it would be an interesting conversation to have: first of all, what does that mean? And more importantly, why is it important for organizations to start thinking about this as a particular role? So we'll let you kick it off, Mike: what does it mean to be the chief AI officer of a consulting company?
Mike: Well, primarily it means that we think we need to strongly signal, both internally within the company and also to the outside world, how important we think the current AI revolution is, by putting someone in a C-level role in order to accelerate it. So my goals are, firstly, to accelerate generative AI into the work that we do and how we do it for clients.
Secondly, to make sure that we embed that into how we build software. So the craft of building software, how that changes with generative AI — and I'm sure we'll get more into that.
And then thirdly, to encourage the use of generative AI within our business. We're a company like any other company: we have a finance department, and an HR department, and all of those kinds of functions, which potentially could all gain a benefit from generative AI. And I've said 'generative' a lot; I might shorten that to gen AI as an easier thing to say. We debated whether the role should be chief generative AI officer, because that's really the immediate focus, but that got a bit of a mouthful, a bit too long. And also, frankly, it might be that the generative portion becomes old hat after not too long; we don't know where this is all going. And so we thought that just leaving it at chief AI officer was simplest for now.
Neal: That sounds like a pretty reasonable thing, certainly given such a fast-changing ecosystem. And I think one of the interesting things about AI and software development is just the explosion of interest we've had: certainly in the wider world, but internally at Thoughtworks, too, we have seen this massive explosion of interest.
And I think it's a good idea to start corralling some of that, because if you don't have some sort of central point of contact, then you end up with a lot of duplicated effort, and a lot of misalignment, and a lot of that sort of stuff. So I suspect a lot of what you're doing right now is wrangling all of the enthusiasm.
Mike: Well, definitely, we do have a lot of enthusiasm. Being named in the role does mean that I'm at the top of everybody's list to forward an email to or ping for things, so, yeah, I'm getting a good view of what's going on. I was already fairly closely involved because I was part of the Office of the CTO, our chief technology officer, so I was fairly plugged into the technology communities that we have. But now, even more people are coming out of the woodwork and doing this stuff.
It is actually an interesting question: how do you balance the 'thousand flowers blooming' energy — where people who are enthusiastic can get involved and experiment with these tools — against the need to direct people? Some efforts end up being more important and more impactful than others, and some are maybe a great learning experience but ultimately don't go anywhere. I think that's a question of company culture in a lot of places: how do you balance experimentation with backing the winning ideas? I don't think we've fully figured that out yet. One strategy that Thoughtworks takes is to publish stuff, because once we've put it on the website, that becomes the official line on something — even though we're happy to have differences of opinion, even in the things that we publish. So yeah, definitely a question.
I think another thing that's important is providing access to tools for people who want to experiment with these things. And once you get into tool access, there's a whole question around licensing, intellectual property, all of those kinds of things. As a technology firm that builds a lot of software for our customers, we have to be able to make certain guarantees to them about the code that we write on their behalf — and obviously, if you're using code generation tools, then there's an impact there. So we need to tread this line between enthusiasm and responsibility. I think that's a difficult line to tread, but we're working on it.
Prem: So, yeah, there's been lots of hype around generative AI and related technologies in the last few months, thanks to ChatGPT. What do you think it's useful for? And what are some potential applications that you're thinking of?
Mike: So I am super excited about it, because I think this generation of AI tools actually applies to all knowledge work — and knowledge work is stuff that people do at a screen and keyboard these days. In the short term, we're going to see a lot of what I would call the low-hanging fruit use cases being quickly tackled by generative AI tools.
Those are things that, when you first see the tools doing them, are quite remarkable: summarize this long web page for me; or, given these brand guidelines, write me a marketing blurb for this new product based on some bullet points. But I think we're going to get used to those use cases fairly quickly, actually, and it's going to become normal to see AI tooling doing the textual manipulation stuff that used to be the realm of humans alone.
And when it comes to writing, and image creation, and stuff like that, humans are becoming editors of that content rather than producers of it. For me, avoiding the curse of the blank page is actually really useful. Even if I don't like the output from some of those tools at all, it does give me something to get started with — like, Prem, I know you asked a couple of tools: what kinds of questions would you ask a new chief AI officer, right? And I think that's a great example of avoiding the blank page.
So there's going to be a ton of that stuff: productivity for people who are doing knowledge work. And I think we're going to see a ton of tooling in the next six months from the Microsofts and the Googles; the Office applications, and Zoom, and all the other comms tools are, I'm sure, going to start to provide these things. And I think that's going to be useful for individual and company productivity. So that's one thing.
I think the next thing that we're going to start seeing is AI co-piloting. And I mean that as a concept, not specifically the code generation tool from GitHub: co-piloting as in the co-creation of stuff. One of our enthusiastic Thoughtworkers out of our Toronto office has built a tool called Boba AI, which is a structured research, strategy and ideation tool where you give the AI prompts around the company that you want to do some research on, or the strategic idea that you want to develop. It actually uses OpenAI APIs in the back end, with LangChain to tie all this stuff together, and it produces a whole bunch of ideas for you, so that you as a user can figure out: OK, I like this idea, or that's given me some new stuff over there. And you can use it for your job — maybe you're a product manager or a corporate strategist — and that can give you a boost.
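Boba's internals aren't spelled out in the conversation, but a minimal sketch of the pattern Mike describes (an OpenAI chat model behind LangChain, prompted to produce structured ideas) might look like the following; the model name, prompt wording and input fields are illustrative assumptions, not Boba's actual implementation.

```python
# A minimal ideation sketch in the style Mike describes: LangChain tying a
# prompt template to an OpenAI chat model. Illustrative only; not Boba's code.
# Assumes OPENAI_API_KEY is set in the environment.
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(model_name="gpt-4", temperature=0.8)  # higher temperature favors idea diversity

prompt = ChatPromptTemplate.from_template(
    "You are a corporate strategy researcher.\n"
    "Company: {company}\n"
    "Question: {question}\n"
    "Generate {n} distinct ideas, one per line, each with a one-sentence rationale."
)

chain = LLMChain(llm=llm, prompt=prompt)
ideas = chain.run(company="Acme Corp",
                  question="Where could generative AI improve our customer service?",
                  n="15")
print(ideas)  # the human's job starts here: picking the two or three ideas worth pursuing
```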
I think we're going to see that across a whole bunch of use cases where ideation is a part of your job, and AI tools are going to help you. And we're going to build fairly lightweight UIs that go beyond just asking the bot a question in a chat interface, to a more structured interface. So I'm excited to see those kinds of tools kicking off.
And then, obviously, because these things can generate code, that's going to change the way that we create software. So I think the craft of software development is eventually going to be radically altered by this stuff, and in the short term, altered somewhat.
Neal: So the analogy that I've been using for this, to exactly your point, is the spreadsheet. Because in 1970, accountants spent most of their time recalculating paper spreadsheets by hand. Then the electronic spreadsheet came along, and it was massive for them. It didn't make them any better accountants; it just got rid of a lot of the busywork.
And two things happened. One is they stopped using adding machines so much. But also, spreadsheets instantly got much more complex than they had been before, because you could make them more complex. And I think you're exactly right: a lot of these tools right now are in what I call the talking dog phase, where the remarkable thing is that the dog can talk. It's not what the dog says, it's that it can talk.
And you quickly get past that phase to: is it actually useful? Scott Galloway, another famous podcaster, I think says it right: ChatGPT is not going to take your job, but somebody who understands how to use it effectively will take your job. And I think that augmentation is where we're really going, at least in the near term.
Mike: Yeah, I completely agree. And I actually heard of — I was listening to another podcast, and the guest was saying that in their department, I can't remember which department of a business it was, they are deliberately putting every task they have to do into ChatGPT, just to see whether it can do it and what the output would be. They might not even use the output, because maybe the tool can't actually do anything useful there yet, but they're getting an idea of how you actually use this stuff.
We did a workshop last week where we had some clients along who were interested in learning about generative AI, and one of the VP-level people who's making decisions on this stuff was a part of it. Part of the workshop is this experiential thing where you use Boba to generate some strategy and do some strategic thinking. And in the feedback after the workshop, it turned out this was the first time that this person had actually been hands-on with one of these generative AI tools.
So it's tricky: we're all moving really fast, and we all have a lot on our plates in our day jobs. But even some decision-makers, when it comes to this stuff, have not found enough time to experiment with it. So I think even just that, trying to do a few things with these tools, is a really important first step.
Prem: Well, I mean, look, a lot of excitement, obviously. But with that excitement also comes a little bit of apprehension: ethical concerns, privacy, security and what have you, right? So, firstly, do you think of these as risks? And if you do, then how do you plan to mitigate them?
Mike: So I definitely do think of them as risks. I think Thoughtworks thinks of them as risks.
A lot of this is an extension of responsible data. Over the last, I don't know, five or ten years, every company has been slurping up as much data as they can and then running ML on top of it, even before the current wave of generative AI. And that has always raised questions: is it OK to collect this data? Are we going too far in processing it? Are we respecting people's privacy as we do so?
There's also the legal landscape around that — GDPR is the headline legislation, with the right to be forgotten and all that kind of stuff. So I would say that this is just an extension of that. The other thing to mention, of course, is bias in data, right? If you have been doing any kind of machine learning over the last few years, you should have been thinking about the bias present in the data, because it's bias in, bias out. That actually gets worse with generative AI tools, because it's often not clear what data these things were trained on; so there might be bias in the training data, and then there might be bias in the outputs.
I actually saw a piece of analysis that some academics had done, looking at — I think it was ChatGPT — and asking questions about how the kind of person from one particular university versus another might perform in a job. And the answers had this obvious baked-in bias to them: people from this university are analytical in this way, and people from that university have some other characteristic. That was obvious bias; I think there's a whole bunch more less obvious stuff baked in.
So, yes, we do need to worry about those things, in the same way that we always have with data. In addition, there are other problems, like overreliance on AI. That's the situation you get into if you have a Tesla and the self-driving thing works 95% of the time. You could argue that's worse than it working 0% of the time, because it lulls you into a false sense of security: you might take your hands off the wheel, and then you have a problem.
It's a similar thing with these AI tools, right? You still need a human in the loop on almost all of this stuff today — and this is the beginning of July 2023, just to timestamp this comment — you still need a human in the loop for the majority of these things. And you need the human to be paying attention, right? Because the problem, from the text, image and code generation perspectives, is that these models produce stuff that is very plausible and very authoritatively expressed, but might be wrong.
I've certainly seen it do this in code, right? Like, a block of code that looks great. Confident-looking code, lovely. Lovely little off-by-one error lurking in the middle of it. Lovely little divide-by-zero if I give it a certain set of inputs, all those kinds of things. So you need to be paying attention to avoid that. And I think that's one of the things we're going to have a problem with: people taking AI output for granted once it gets to a certain level of quality.
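To make that failure mode concrete, here is a hypothetical illustration (the function is invented for this example): a confident-looking, AI-style suggestion hiding exactly the off-by-one and divide-by-zero bugs Mike mentions, followed by what a human in the loop should insist on.

```python
# Plausible AI-suggested version: reads fine at a glance, but hides two bugs.
def moving_average(values, window):
    averages = []
    for i in range(len(values) - window):        # off-by-one: drops the final window
        chunk = values[i:i + window]
        averages.append(sum(chunk) / window)     # divides by zero if window == 0
    return averages

# What a human in the loop should insist on:
def moving_average_checked(values, window):
    if window <= 0:
        raise ValueError("window must be positive")
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)  # the + 1 keeps the last window
    ]
```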
Neal: So you touched a little bit on something I think is important: we tend to think of AI tools as tools strictly for hardcore technologists — and something like GitHub Copilot is a great example of that, because business analysts don't care about Copilot, but developers love it because it helps fill in code. But the latest generation of generative AI is actually broadly useful across the entire organization. Like you said, you've got people who are trying it on every problem: let's just see what it says and see what it…
So how much of a challenge do you see in getting non-technologists, people who aren't already wired into technology, interested in using this effectively? Like you said, a lot of C-level executives haven't even played with these yet because they're busy, and it's just yet another thing in the hype cycle. But this one feels different, given how powerful it is and how fast it's moving. So how much do you see this seeping into every nook and cranny of organizations in the near term?
Mike: I think the adoption can be brought about by people who are able to be hands-on with this stuff. I mean, there's a bunch of business cases here as well, for ways that you could make people more efficient and all that kind of thing. So once you've got a money or revenue tag associated with a potential use case, I think you can get people's attention for it.
In terms of getting people comfortable with this, we do some hands-on workshops where you just put the tools in front of people and say: it's your job to be playing with this right at this moment. So people have the time, and they do play with it, and I think that's good. I think going in with an open mind is important as well. I've seen a few folks who call out the failure modes, right? Like, this thing can't do arithmetic, or it can't do this particular kind of logic problem — word-game-style logic problems.
And that's fair enough as a limitation. But how often do we actually need to solve word problems for business, right? There are many more things that we really want to do. And is it useful? Not is it perfect — is it useful? I think asking that question, and getting people to try things out, will get the ball rolling.
I think it was Prem who said, or maybe it was you, Neal: AI is not going to take your job, but somebody using AI better than you is going to take your job. I think that's true for businesses as well, right? AI is not going to kill your business, but a competitor who is more effective at using AI than you is going to be a threat.
And that's why I think this stuff is so interesting: you can imagine these use cases across a business, both for individual and team productivity, because we're all living in an email-deficit world where we're never up to date on everything, right? I'm still waiting for my personal AI assistant, where I get up in the morning and it tells me the email threads I need to bother to look at, and the chats that I need to respond to, and all those kinds of things. They are coming, right?
We work with one of the big cloud providers and have some early access to some of their tooling. And that stuff is coming, right?
And I think the potential adoption problem is that the first version of these tools is going to be rough, right? You're going to look at it and go, ehh… that was only 30% good. But we're going to go from 30% good to 90% good fairly quickly, as you can see from the pace of iteration that's going on.
Prem: Excellent. I mean, look, obviously, with all of the excitement, this is also a pretty rapidly evolving, changing ecosystem. So how do you keep up?
Mike: [Laughs] I've been failing to keep up for the last couple of weeks, actually! So I think people just need to carve out the time, right? It is important to stay on top of this stuff at a deeper level than just reading the newsletters or the headlines.
I think we're all subscribed to 'what's hot in tech'-style newsletters of various sorts. Those are good, but you do need to be able to go to the next level below that.
One of the things that I've been very happy to be able to do is use Copilot for the last six months, so I understand pretty deeply what it can and can't do. I use ChatGPT extensively, including in some writing experiments. So I feel like I'm on top of what it can do and produce there.
Beyond finding time for it yourself, the usual thing is finding people that you trust who do have time, and then leaning on their experience. I think that goes across the board, because there's so much stuff here that you need some kind of filtering mechanism. So if people who you trust have seen a thing and said this is good, this is worth you spending time on and learning about, I would pay lots of attention to that.
Neal: Yeah, especially as things keep coming out so rapidly — people are going to discover things that you didn't discover yet. So be open-minded about that, and play with a lot of these things. I just recently discovered something that's immensely useful: an AI tool called Rask AI that will take a video and translate the words on it into different languages. That's massively useful, and it's really nice for technical videos on YouTube that you can translate into different languages. But you have to hear about it first. And that's part of the problem here: it's happening so fast, and there's so much churn, that even hearing about these things is tricky. So I think you have to keep your ears open and extend your normal circle of places where you hear interesting things, because it's going to be coming from a wider field, and all sorts of interesting things keep popping up like mushrooms after a rainstorm.
Back to something that you said earlier about learning how to use this effectively: I think that's going to be really key for people going forward. Because, back to my earlier analogy, spreadsheets didn't make you an accountant. But by 1980, if you were an accountant and didn't know how to use a spreadsheet, you were in deep trouble. And I think that's going to be true not just for developers, but for business analysts and all these technical roles that can be augmented.
I think we'll quickly find what the common augmentation modes for this are. And just like the spreadsheet example, the kinds of problems that we're solving can get more complicated if we have assistants to help us with some of the busywork and grunt work, particularly for knowledge-based things, not wisdom-based things. The tools can produce the knowledge, and then the human has to apply the wisdom to interpret that knowledge and use it effectively. But these things can generate knowledge at a breakneck pace.
Mike: Yeah. And I have certainly found it very useful, because it has read every web page on the internet, right? Somewhere in the guts of these systems, all of that knowledge actually exists. If you can prompt it in the right way, it can craft a response to help you out. You can get a long way.
So you can get these things to produce 15 ideas in a particular area, or 15 pieces of a requirement when you're doing requirements analysis for a feature, or something like that. It doesn't have to be right to be useful. It doesn't have to produce content that you then, as an analyst, use as the entirety of the story that you're writing for a developer to implement.
It only has to generate two or three things that you hadn't thought of, or express something in a different way that expands your own thinking around the topic, to be useful. So I think for the best professionals in the field, this is a boost, a leveling up: I can put my thinking into a generative AI tool, get an expanded set of things back, and make a decision about which ones to include.
I think that's the other thing we're going to start seeing: the wisdom is in making the decision between some options. I mean, that's actually what happens when you use Copilot to autocomplete a block of code. You are deciding: do I like that one? You can page through different versions. Am I going to accept that and then edit half of it? What am I going to do? So you are already doing fewer keystrokes and more decision-making.
I think this actually speaks well to pair programming, because we've always said: if you think the most complicated thing about writing software is typing on the keyboard, then pair programming doesn't make any sense. But the most complex things about software are deciding which software to write, how to write it, whether to refactor a piece of code, how the thing that we're doing relates to the rest of the system, whether this is going to cause a performance problem — all of those kinds of things are high-level thinking and decisions that you're making.
So I don't think we're there yet in terms of just handing systems over to AI to write all the code; I think you are still very much going to have humans assembling all of this stuff. We might eventually get to some interim state where… I don't know — if you had a strong microservices architecture and you could specify a microservice well, could you give that to an AI to produce the entire microservice? Maybe; I can see us getting to that point. But anyone who's spent any amount of time working with a large microservices-based system will tell you that writing the individual services is not the difficult problem; it's the orchestration and choreography of bringing all those things together, and the complexity of that. And you still need methods to create abstractions and break down complexity, all of those kinds of things, which is the core job of doing software. So I'm actually really excited to see how all this plays out.
We've got people who are working on wildly different things, from 'how do I just generate code in the file that I'm working on more efficiently? What's the right way to use it for that?' all the way through to folks who are saying: how can I make a super-leveraged team, with a few experts who understand all of the ways the system needs to be built, and then boil that knowledge down into a reusable prompt that's context for the AI? Then I can have a much more junior team using that knowledge of how the system is supposed to hang together, as well as the AI generation, to actually produce the software and wire it all together. And that's a vastly different style of coding. So I think that's plausible.
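A minimal sketch of that reusable prompt idea, assuming the OpenAI chat API (pre-1.0 Python client, current as of mid-2023) as the back end; the team conventions, model name and helper function here are invented for illustration, not any particular team's setup.

```python
# Sketch: experts encode how the system should hang together once, and every
# generation request from the wider team reuses that context. Illustrative only.
# Assumes OPENAI_API_KEY is set in the environment.
import openai

TEAM_CONTEXT = """You are generating code for our order-management system.
Conventions the senior engineers have agreed on:
- Services are Python/FastAPI, one bounded context per service.
- Inter-service communication goes through the event bus; never read
  another service's database directly.
- Every public function gets type hints and a matching pytest test.
"""

def generate(task: str) -> str:
    """Send one coding task to the model along with the team's reusable context."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": TEAM_CONTEXT},  # the reusable expert prompt
            {"role": "user", "content": task},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(generate("Write the handler for cancelling an order."))
```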
And then if you think about how we do CPU design these days: we don't think about where the transistors are going to fit on the silicon, right? We have higher-level abstractions for thinking about hardware design, and then a whole bunch of software layers to burn that down into an actual silicon wafer that you would then fabricate. It might be that with software we end up doing a similar thing: if we can think about componentizing in a way that lets AI implement a lot of those components, maybe that also radically changes the way that we build software. So, to me, I've actually never been more excited to be in the software industry.
Like, OK, when I originally graduated from university, I was pretty excited to be in the software industry. And I'm super excited to be in the software industry now because I think this is a step change. And we don't know exactly where it's going to end up.
Neal: Well, and to your point, you can tell some AI, at some point: generate this microservice architecture for me. But it can't tell you whether you should be using a microservice architecture to solve this problem. That's the wisdom-versus-knowledge part of this. And I'm with you. I'm an architect mostly by role now, so I don't get a chance to dig into a lot of implementation details, because I'm at a higher level. But this is great for me, because now the tools can handle all those fiddly details about how to implement stuff, and I can focus more on why you're implementing something versus how you actually implement it. I think that's really nice.
But you tiptoed right onto a slippery slope there when you mentioned pair programming. This was actually a huge conversation at our last TAB (Technology Advisory Board) meeting: can you pair program with an AI? Is it a useful pair if you're doing pair programming?
Mike: Not by my definition of pairing. And that starts to get into the depths of what useful pairing is, why you do it, all that kind of thing. I like the name Copilot, right? I think that's useful as a name for the thing. I actually think if you're pairing and you're using an AI tool to help the pair, that can also be a useful thing to do. I don't think they yet replace the other mind that you're bouncing ideas off and having that creative dialogue with. That said, I use ChatGPT, and I would recommend GPT-4 over 3.5 to anyone; 4 is significantly better than 3.5, especially for coding stuff.
But I implemented something in a weekend project, on a technology stack that I didn't know, using ChatGPT as an assistant, and I actually wrote some of that up. One of the things I found was that it's more useful if you treat the AI like a useful, intelligent colleague than as some subservient minion or lesser being of some sort. I don't want to anthropomorphize the AI too much, but I've had to go back into conversations, because you run out of context room, right? So you have to start a conversation again.
But I'll open with: 'we have been working on blah, blah, blah,' and actually treat it as though I were refreshing a colleague on what we were working on, and then ask it about next steps and all that kind of stuff. If you put a little bit more effort into the way you're interacting with the AI, you get much better output from it, and much more useful, complete things. I've used it to talk through technology and architecture options; it's helpful. Again, in the spirit of generating ideas, it would come up with four or five architectural options or technology stack options, and I would go: oh, hey, I've heard of this one — I'm pretty sure we talked about that at a TAB meeting one time, maybe we put it on the Radar; so let me dig in and ask the AI more about that particular technology. And that was pretty successful: I went from zero knowledge of this particular tech stack to a working application in a weekend.
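A small sketch of that 'refresh a colleague' opener for starting a fresh conversation once the context runs out; the project details below are invented for illustration.

```python
# Sketch: re-establishing context at the top of a new chat, the way you'd
# brief a colleague back from a week away. All project details are invented.
OPENER = """We have been working on a small recipe-sharing web app together.
Where we got to last time:
- Stack: Flask backend, SQLite, vanilla JavaScript frontend.
- Done so far: user accounts, posting recipes, a search page.
- Open problem: image uploads over 5 MB fail silently.

Picking up where we left off, what would you suggest as next steps for
debugging the upload problem?"""
```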
Neal: That could be a programming Turing test: when the AI gets good enough to actually act as a pair, and not just a good encyclopedia. Because that's basically what it is now: a really interactive encyclopedia that you and your pair can rely on. So I have one last question here.
Prem, as you would do if you were interviewing a chief AI officer, submitted some of these questions to AI tools to see what kinds of things they could come up with. And as Mike said, these are really good for filling in gaps and things that you might have missed. One of the things that both of them came up with, in some form or another, is what I think is a depressing subject around generative AI: the ethics question. Because, as we've seen, there are so many intrinsic advantages to using these things that nobody is ever going to put the brakes on this, because of nation states, and competitive advantage, and business, et cetera. So I think it's contingent upon the technologists who are actually using this to understand some of the ethical implications of what they're using. You were talking about bias before, and hallucinations, and that kind of stuff.
I think this is one of those places where we cannot afford, as technologists, to be quite so blind to ethics concerns, because they're going to start coming up on a regular basis as our daily work puts them in front of us. So I think it's important to be aware of that. Do you see that as part of your role as chief AI officer: to make sure that people understand the ethical implications of the choices they're making as they use these tools?
Mike: I think definitely, yes. This is a branch of the thinking that everyone needs to bring to bear. There's always the question: is technology neutral? No, usually, I would say; it absolutely depends how you use it. And it's tricky, because there's ethics around the data that was used to train models, and the legality of the output. We've seen with some image generators, for example, that they will happily spit out images that are very much in the style of a specific artist.
And the question is: is that ethically OK or not, right? One of the things we're starting to see is that the hyperscaler platforms are starting to provide generative AI model marketplaces. These big foundation models are expensive to build and need lots of training time and training data, so the platforms are starting to offer marketplaces of them, so that you can select a model that you might want to use.
And in at least one of them, each model comes with a very clear data card that says, broadly, this is what the model was trained on (not the whole lot, but broadly) and this is the intellectual property situation of the output and the input. So, at a minimum, we need to be very aware of the intellectual property concerns.
I think on ethics as well, some of this stuff is very tricky, right? If you're going for an efficiency play, are you putting people out of work? Some people might have an ethical problem with that; other people would say, well, that's the nature of technology and automation anyway, this inexorable march towards the machines doing more things for us. And if you look at previous technology revolutions, they've reduced the cost of things, which has increased the demand for those things. I think what's going to happen here is we're going to reduce the cost of software, which is going to increase the demand for software, right?
I don't know any CTOs who are saying: I can get all 10 things done this year that the business wants me to do and that I have budget for. They're not saying that. They're saying: I have to choose four things that I can do. And it might be that with an efficiency play here… I don't think you're going to see fewer programmers employed, for example; I think you're going to see the same number, or possibly more, because instead of doing four out of the 10 things, you're going to want to do five or six of them. Instead of working slowly through a three-year transformation program, you're going to want to accelerate the timeline, because you can do things faster.
I was talking to the CIO of a wine company, and he said his marketing team can now go 10 times faster, because they have their brand guidelines baked into an AI prompt that they can put in at the start of a conversation with an AI, and then add specifics about the exact wine they want to create some marketing content around. And then it generates that marketing content for them.
And I said, "So you're going to lay off 90% of your team." And he just laughed. He was like, "No, we're going to do 10 times more stuff." Like there's a whole list of wines that we would love to do more specific marketing for, and we can't. So if we can go faster, we're going to do more.
So I remain a tech optimist here. But I do think we have to be mindful of the disruption this might cause in the short term. Because, as I said, part of the excitement is that this affects all knowledge work, and that's a ton of people, right? Lots of industries, lots of jobs could be impacted by this stuff. And the pace at which it's coming at us, combined with, frankly, the current economic malaise the world seems to be in… I don't know, that's not a great combination in the short term. So certainly, I think we need to be mindful of that.
Neal: Yeah. I've always had this theory that as soon as software developers showed business people that you can use software to produce reports, and graphs, and that stuff, the balance of supply and demand got massively skewed, because the business always wanted way more software than we could produce. In the software world, we've always been struggling to keep up with how much software the world wants. So maybe this will finally start equalizing some of that, and allow us to move faster than the market demands new things, rather than always trying to catch up. I think that's useful.
And to your point about accelerating trends, et cetera: blacksmith used to be a really, really big job, and then cars came along and blacksmiths had to retrain. I think that's the key thing here: I don't think it'll eliminate jobs, but it will change some jobs and change the way they're done. So it's important not to get caught out in one of these jobs that transforms. Make sure you understand the implications of how it's going to transform, or you'll find yourself as a blacksmith lamenting the arrival of cars.
Mike: Well, and that's especially true for people in the software industry, which is largely our audience here. I would encourage everybody to try the stuff, right? GitHub Copilot is a big brand in the gen-AI-for-software world, but there are also open source versions; the open-source world is iterating incredibly quickly on tooling. You can get FauxPilot, which you can run entirely on your own machine if you want, so you don't have to send any data anywhere.
So, yeah, I would encourage people to shelve the skepticism for a minute, and shelve the desire for perfection, and simply ask: is this useful? Can I use this? Does this help me, even a little bit, to do what we all want to do, which is build great software and great value for our customers, all of that stuff? I think that can be accelerated through these tools, and I would encourage people to look at that.
Prem: Any parting thoughts that you would like to share with our listeners and Thoughtworkers alike?
Mike: I would encourage everybody to use this stuff and to experiment with it, and to do so responsibly. Personally, I've ended up doing a lot of it on personal projects, because the intellectual property situation is much clearer and I don't have to use confidential data or anything like that. So: getting an idea of how to use the tools; doing it responsibly; working with your company or your client to figure out how to use these things, to get permission to do so, and to do so in a way where everyone's comfortable with the intellectual property. I think that's the first step for everybody. And I think we're going to see this stuff accelerate, so the time to start is absolutely now.
Neal: Have you seen anything in the software development world in the last two decades that has taken off with as much enthusiasm, and as fast, as this interest in generative AI? Because I don't think I have. This is so far beyond the other fads and trends that we've seen in the software ecosystem. This one seems special.
Mike: It does seem special. The other thing to note, though, is that, in the same way we would talk about, I don't know, misinformation, and politics, and stuff like that, we are also much more hyper-connected than we have ever been. So I think this idea is a big deal, but it's also moving quickly because we are connected on social media and everybody can tell what everybody's doing, whereas the shift to cloud, let's say, was slower partly for lack of that. There's also the barrier to adoption: anyone can just open the website and start using these things, and we are already seeing this baked into corporate email tools and the like. So the accessibility, I think, is the other thing that's really accelerating this. Yeah, I've never seen anything move as fast as this. I've never seen us at a point where every one of our clients, every one of our old clients who we haven't spoken to for a while, every one of our new prospects — everybody wants to at least touch on this subject. So I really do think it's a big deal. I don't know whether things are going to calm down, right? Are we on an S-curve, where we're on the upward slope and we will figure out, hey, there's this box in which these AI tools can play, and that's really the sweet spot for them? Or are we on some exponential hockey stick where the sky is the limit, and it might be several years before things even slow down?
Neal: So thank you very much, Mike; this has been a fascinating conversation. I would like to go ahead and have your AI assistant book an appointment to re-record this podcast a year from now so we can catch up and see what has changed, because you've only been in this role a short while. I'm guessing the role is going to change and evolve at the same breakneck pace as the ecosystem and the technology, so it'd be fascinating to check in a year from now to see where things stand at that point.
Mike: We should definitely do that.
Neal: All right. Thank you so much, Mike, and thanks, Prem. A fascinating conversation, and good luck in your new role. I'm guessing you're pretty busy.
Mike: Yeah, just a little bit.
Neal: Thanks for joining us, Mike, and thanks, Prem. And we'll see you all next time.
[MUSIC PLAYING]