Brief summary
Serverless has become the buzzword du jour. But what does it mean? What are the implications for your enterprise applications when you're using services where you're not responsible for the infrastructure that they run on? How do AWS Lambda, Azure Functions and GCP Cloud Functions fit in? To explore these issues, our co-hosts Mike Mason and Zhamak Dehghani are joined by Paula Paul, a Tech Principal at Thoughtworks, and Mike Roberts, an external cloud engineering consultant and former Thoughtworker.
Podcast Transcript
Mike Mason:
Hello everyone and welcome to the Thoughtworks podcast. This episode is going to be all about serverless and serverless architectures. I am Mike Mason from Thoughtworks. I'm one of the hosts, and I'm joined by several other guests. First of all, there's my co-host Zhamak Dehghani.
Zhamak Dehghani:
Hi everyone, hi Mike.
Mike Mason:
And Zhamak is one of my co-hosts. She's a tech principal here at Thoughtworks. And I'll also introduce our guest from Thoughtworks, which is Paula Paul. Paula, can you tell us a little bit about yourself?
Paula Paul:
Hello Mike, and thanks for the invitation. I am also a tech principal at Thoughtworks. For the past decade or so I've been working with clients to help them adopt cloud, whatever their needs are there. Prior to that I actually worked in data centers and had my hands on physical servers, doing a lot of server consolidation and such.
Mike Mason:
And I'm very happy to say we have an external guest with us today. His name is Mike Roberts. I've actually known Mike for a long time; we went to university together. He's a Thoughtworks alumnus. Mike, why don't you say hi and introduce yourself?
Mike Roberts:
Thanks Mike. Hello, everybody. Yeah, my name is Mike Roberts. I actually used to work at Thoughtworks, a long, long time ago, back in the early 2000s. But I've managed to retain some lovely friendly relationships with Thoughtworks folk ever since. I worked in finance for a long time, and then, just over a couple of years ago, I started my own consulting business, helping people out with cloud engineering and cloud architecture.
Mike Mason:
Okay, so the topic today is serverless and serverless architectures. And I think what might be a good starting point is to talk about what we actually mean by serverless. Because I think, as with almost anything in our industry, once something gets a little bit of buzz, it becomes a little bit murky what that thing actually is, or maybe the original intent of a name has become something else. We see that a lot with things like agile and microservices, for example. I don't know anyone who's not building a microservices-based system these days. Anyway, Mike, why don't you give us a quick rundown of your definition of serverless, and the important bits in that definition.
Mike Roberts:
Yeah, thanks, Mike. So, serverless has sort of become a number of different things to different people. To me the important thing about serverless is the idea of using services where you are not responsible for managing the infrastructure that those services run on. Now, a lot of the time, that means something like using a functions as a service platform, something like lambda. That's really why serverless has become so popular, because lambda and things like it are very different to how we've built applications in the past. But equally important, and in some ways sometimes more important in the world of serverless, are these services that we refer to as backend as a service. And these are things like software as a service, but where we're incorporating those services into applications.
Mike Roberts:
So, in sort of the newer world, that's things like using externally hosted user management services like Auth0. And also maybe something like Google's Firebase, which is a database service where you don't have to run the database yourself; Google look after that for you. And you can integrate directly with that database from a mobile application or directly from a single page web application. But there's also a whole bunch of older services here. So actually, if we start thinking about things like Amazon S3, which many, many people will have used, that itself is a serverless backend as a service. And that's when people might start saying, "Why is that? That just seems like you're diluting the term." Well, interestingly, S3 has a number of properties that it shares with lambda and these other things when it comes to resource management.
Mike Roberts:
And I think there's actually a number of clear differentiators for what a serverless service is. First of all, it doesn't require you to manage any server hosts or server processes. So when you're using S3, you're not thinking about individual storage nodes, you're just thinking about this abstract concept of a bucket where you can store your objects. There's no concept of networks or storage hosts or anything like that. And related to that, a serverless service self-provisions and auto-scales based upon load. So again, when you're using something like S3, you can be storing a kilobyte in there, or you can be storing a petabyte in there. You are not responsible for figuring out how much capacity it needs to store, nor are you responsible for figuring out how many S3 instances are needed to manage the load.
Mike Roberts:
And that idea of completely invisible and automatic auto scaling is really crucial to a serverless service. Costs also go along with that scaling in a serverless service. And another thing about it is this idea of implicit high availability. So you're not having to worry about setting up a high availability set of instances to look after your service. All of those things are criteria that we can apply to things like S3, to AWS Lambda, to Microsoft Azure Functions, Google Firebase, et cetera.
Mike Mason:
So I mean, if you think about the original promise of cloud... Well, I think there were a lot of different promises. But I remember, 10 years ago, people wanted to be able to survive the Slashdot effect, or to build things that you need to get running very, very quickly, that would scale to high load and then might just disappear again overnight. We did a project to help with a flood relief charity in Australia, and the idea was the thing would collect donations, and it was an auto-scaled kind of a thing. But you had to deal with that yourself. To me that was one of the original promises of cloud: you got this scalability thing, and you also got somebody else to manage the infrastructure. So is serverless just that plus, or is there a bigger difference there?
Mike Roberts:
I think there is a bigger difference. So first of all, yes, absolutely, you can use things like auto scaling with EC2 and various container services. But it's still normally the responsibility of someone in the organization to actually manage that scaling and infrastructure. So, thinking about what it means to scale up and scale down, especially with scaling down: what happens when you scale down your infrastructure, when you start wanting to turn things off? There has to be some thought that goes into that. The cloud will support it, but you still need teams to manage that scaling and still manage the actual individual resources. But there's another thing here as well, which is, when you're using systems like that, you're still thinking about the server hosts that you're running on.
Mike Roberts:
So you're still thinking about things like operating systems, and how systems are interacting with each other at a layer 4, IP level; you're still thinking about IP addresses and all that kind of area. The promise with serverless is that, A, you really are raising your abstraction, similar in some ways to a platform as a service, where you're not thinking about underlying server hosts or server processes anymore. And B, from a networking point of view you're thinking at a much higher level: we're typically only ever thinking about layer 7 interaction, so HTTP based interactions, unless we have to think about things like security and virtual private clouds and that kind of thing. But we're not normally having to worry about that lower level, IP level networking. So those are two places where, yes, it's the promise of the cloud, but we're now not having to worry about some of those issues that were still our concern.
Paula Paul:
Do you have any concerns over the visibility of that auto scaling, without any line of sight into it? Downstream processes might want to be informed if your serverless capability is massively auto scaling due to some unexpected load. And if that auto scaling happens on its own, does that begin to cause issues downstream?
Mike Roberts:
Yeah, I mean, there's a whole bunch of what I like to refer to as the fire swamp of serverless: things that are dangerous, that you need to be aware of. And certainly the way that, for example, AWS Lambda can horizontally auto scale, wonderfully and very, very wide, very, very quickly, can be scary in certain situations. And so there are a number of things that one can do, either to limit that scaling, or to be aware of that scaling, or to architect to allow for that scaling.
Zhamak Dehghani:
Mike, on the definition, I completely agree with Mike's kind of generalization. There's a continuum from needing to control everything, from the metal level up to the function that you're writing, through to function as a service, where you really don't care where it's run, how it's run, how it scales. And somewhere on that continuum people start to drop the serverless label: "this is serverless." And because it's a trendy phrase, you sometimes stretch where on the continuum serverless actually starts, and how much of the underlying operation you need to abstract away. And I think that's a great lens to look at it through. But in that bucket of serverless, the things that you mentioned, even though they all share the same property of removing some operational overhead and dependency from the teams that are building the functions...
Zhamak Dehghani:
They are quite different. Backend as a service, you know, as [inaudible 00:10:31] mentioned, leads to different patterns of development, a different architecture in a way; it has its own implications, like exposing your data and maybe removing some of the abstraction layers. Versus a function as a service, which is reacting to an event, executing a function and going away. Even though they have the same properties, when we choose an architecture, we might like to use function as a service because it fits an event driven architecture really well, but backend as a service, or database as a service, may not fit into that architecture, or may not be something we would want to pick. So they don't all come together if you're picking a serverless architecture.
Mike Roberts:
Yeah, I think that, yes, in some ways they are very different. One is about a custom compute platform: we want to write application code, and it's a different way of running application code. And the other, as you say, is about managed services that we haven't even written the code for. So that's one of the differences between FaaS and BaaS: with functions as a service we wrote the code, with backend as a service we didn't. But the interesting thing, I think, is that even though they seem very, very different from an architectural point of view, the reasons that we use either are very, very similar. And this is where we start getting into the benefits of serverless and why people care. To me, there are effectively two main reasons for using either of these types of service. One is purely economic.
Mike Roberts:
So one is about looking at the total cost of ownership of running an application. And that includes operations; it includes things like disaster recovery and multi region, and everything along those lines. If we use serverless services, is that going to let us operate our application more cheaply, for less cost? And that's a process that you can go through and analyze, and decide that yes or no, you're going to save money by using lambda, and/or using S3, and/or using a raft of other services. So that's one reason people are interested in serverless: the economic side. The other side, which arguably is also economic but isn't about operating cost, is about time to market. It's about how quickly we can take an idea that we can conceptualize and get that into production.
Mike Roberts:
And the idea with both functions as a service and backend as a service is that, because there is less work to do in standing up an application, and less code to write for that application, we can shorten that lead time. And this allows people to have much more rapid experimentation cycles. And this is actually why I'm most interested in serverless. I think there are cost savings in a lot of scenarios, not all but a lot. But I think that this idea of much more rapid experimentation is very aligned with what we've been talking about around agile for 15, nearly 20, years now. Agile is not about going faster; agile is about how we make more rapid iterations and how we try out more ideas. To me, serverless is like the technology equivalent of agile, ramped up to that next level.
Mike Mason:
That's interesting. And I want to dig into both of those benefits and really pick them apart, because that's where the rubber hits the road on a lot of these things. On the ability to go faster: we work with a lot of enterprise clients, and a lot of enterprises have departments full of people who are building reasonable things, internal platforms, all sorts of stuff. And I think they would say, "Well, our internal platform is intended to, or does, provide a number of those benefits. You should be able to deploy something on our internal platform; it's not using Auth0, it's using an internal..." or whatever that would be. So, is it simply that, because this is using kind of plug and play third party services that are out there in the real world, you actually get the benefit, as opposed to a lot of these internal platforms, which can be difficult to use, can be a bit clunky?
Mike Roberts:
It's the same argument, I think, as on prem versus cloud, right? You have a lot of organizations that are big enough that they say, "Hey, we don't need to use the cloud. We believe that our economy of scale is such that we can run our own internal service, our own data centers, and our own operations team at a low cost." Assuming cost is the reason they're doing it. Other organizations are on prem because of security reasons, and that's a whole other thing. But a lot of organizations have traditionally said that they are on prem for economic reasons. They believe that they can do things more cheaply than the cloud. And what we've been seeing over the last five to 10 years is more and more organizations saying, "You know what, we can't do it as cheaply as the cloud providers can."
Mike Roberts:
You know, you even look at organizations like Netflix, who have been on the cloud now for many, many years. They are operating at massive scale, so you would think they could get the kind of economy of scale where they could run their own infrastructure. And what they've said is, "No, you know what, Amazon can do it more effectively than we can ourselves. And plus, we can also get some innovation stuff, which we can talk about in a second." So I think it's the same with serverless, where people say, "Hey, we can provide our own internal platforms." Yes, you can, you absolutely can. But at some point the economics are going to get such that it's actually going to be more cost effective to let a cloud provider manage that platform for you than to try to manage it internally.
Zhamak Dehghani:
One thing related to that, that I observed on my last project, and I'd like to actually hear other people's perspective on it. Related to what Mike was saying, we and the client had already invested in running a Kubernetes cluster and making a lot of the capabilities on that platform self serve. A service mesh was giving us all the observability, the routing, and rolling out services incrementally for us. And it really... for us to put a service out as a container was a few hours, starting from an idea, "Oh, I want to write a service that does blah," to putting it out there. Not to production, I have to admit we weren't quite there, but near-production services were six, kind of seven, hours.
Zhamak Dehghani:
And then within that architecture, because it was a very event driven architecture, services were talking to each other through event streams. There were places where we kind of wanted to play with serverless, to do this tiny little adapter function: getting an event and calling a REST API on a third party application that doesn't understand events. So there were these glue functions that seemed like, yeah, let's try serverless. And the challenge we had was, there are parts of your infrastructure that you try to standardize, and there are parts of your ecosystem that you try to differentiate and be polyglot in. The part of the infrastructure that you try to standardize is the operational layer, and that was being managed by Kubernetes, the service mesh and a bunch of other things that were providing observability.
Zhamak Dehghani:
So, asking our operations team to take that on... I mean, I understand there's probably very, very little to take care of with a serverless deployment, but it's still a different deployment pattern. So adding that to an infrastructure that was already set up seemed like a bit of an overhead for them, and in the end the team wasn't successful in bringing a serverless function into that infrastructure. They reverted back, they actually reverted back to just writing it as a container. And I wonder if that makes sense: if you're already operating a well-lubricated, self serve infrastructure that is different, does it really make sense to bring in and embed serverless in that?
Paula Paul:
Yeah, I've seen this as well, a very similar situation, at a client where the Kubernetes infrastructure was built. Of course, it was very easy for the developers to include new capabilities, new services, all containerized. But of course, the infrastructure team still had to do maintenance of the images; that's a classic kind of operational duty. So at some level, it would be nice to say, "Yes, we no longer have to worry about patching images or rotating clusters in order to address security issues." But on the other hand, just jumping into serverless, there's friction there as well, because you already have established observability practices for the services in your cluster: log aggregation, dashboards. And with the serverless that we were using at the time, it was not the same; you didn't instrument it in the same way for observability.
Paula Paul:
So it's almost like hybrid data center and cloud: you now had a hybrid of containers and serverless, or serverful and serverless, whatever the opposite of serverless is, I'm not quite sure. So I think that the benefits of serverless are there, but there might be some patterns where it applies more readily than others.
Mike Mason:
I think that's interesting, because I've often heard serverless described as not an all or nothing architecture: that you could choose a piece of your ecosystem and say, "We're going to use serverless for just this one point." But it actually sounds like it's going to be a bit more complex than that in the real world, or depending on the organization.
Mike Roberts:
No, I think you're bang on, Mike, and that's one of the things I love about serverless: it's not actually an opinionated architecture, you can bring it in at a very small level. To me, the point is, what do you want your operational platform to be, right? Some people want to say, "We want to standardize on Kubernetes." And if Kubernetes as a platform satisfies everything that your developers need, then great, that's wonderful. What a lot of other people have said is, "Hey, we're going to standardize on a particular vendor platform as our platform." So, all of the primitives that come along with Amazon, like the Amazon security, and how Amazon treat networking, all that kind of stuff. You can do that using AWS Lambda, or you can do that using containers, or you can do that using EC2.
Mike Roberts:
Either of those things is about understanding what your platform is. But you can absolutely fit little bits of serverless in here and there. I gave a keynote at an O'Reilly conference a few weeks ago, and I talked about how I've seen organizations introducing serverless, and what a number of us have been seeing now that we're getting into these more mature phases. At the beginning of serverless it was all startups, and it was all or nothing, and everyone was going gung ho. But that was the early adopters. Right now we're getting into the early majority, and what we're typically seeing is that it's very typical for serverless to be used in operations, which to some people is really weird. They're like, why would operations folks want to use this completely non operations type thing?
Mike Roberts:
And actually, we've seen a lot of operations teams who got a lot of value from using various serverless tools and techniques. And then after that, it can start coming in elsewhere; it's very, very classic then to start thinking about using it for little things that you would normally use cron jobs on a cron server for. Because it's so easy to set up a scheduled lambda to do that work for you. And then you're not managing that one cron server to rule them all, which everybody hates because it has 100 different cron jobs running on it, and it's a total nightmare, right? And then you can creep into full blown applications from there.
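To make the cron replacement Mike describes concrete: a scheduled lambda is just an ordinary handler that a schedule rule (a cron-style expression on a CloudWatch Events/EventBridge rule in AWS) invokes periodically, so there is no server to keep alive between runs. A minimal sketch in Python; the event shape mirrors AWS scheduled events, and `nightly_cleanup` is a hypothetical stand-in for the old cron job's work:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def nightly_cleanup():
    # Placeholder for whatever the old cron job used to do,
    # e.g. expiring stale records or rotating temporary files.
    return 0

def handler(event, context):
    """Entry point invoked by a schedule rule instead of a cron daemon.

    For AWS scheduled events, `event` includes "source": "aws.events"
    and the triggering rule's ARN under "resources".
    """
    logger.info("Scheduled invocation: %s", json.dumps(event))
    deleted = nightly_cleanup()
    # Return a small summary so each run is visible in invocation logs.
    return {"status": "ok", "deleted": deleted}
```

The schedule itself lives outside the code, as configuration pointing at the function, which is what removes the "one cron server to rule them all."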
Mike Mason:
So, I want to ask some dumb questions for a minute, right? Because honestly, I really need to tackle a few things. I had somebody say, slightly tongue in cheek, "Serverless is the new stored procs. Right? It's pieces of code floating around, operating on some shared data store, with no control over what the hell they are, and just no visibility into it." Is that true? I mean, what do you do to manage all this stuff that is just floating around? The joke about the cloud is that there's no such thing as a cloud, it's just your stuff running on somebody else's computer. But the stuff is now different. It used to be that at least I had some sort of an app that was really there.
Mike Mason:
Now I just have single functions floating around. Please tell me that the management of all of this stuff is a little bit more coherent than that. And how do you do test cycles? All that kind of stuff.
Mike Roberts:
Yes, it is. Anyone that's letting any of their lambda services hit any database willy nilly needs their head examined, right? There's a lot of stuff that you can do here, and we can dig into this; this is one of my favorite areas. There are many ways where people go, "Building lambda applications is going to be very, very different to how we've built traditional applications. How do I arrange my source? I can't use a mono repo, and blah, blah, blah." I think a lot of that stuff absolutely doesn't change. How we think about doing continuous delivery, how we think about building source, doesn't really change with lambda. I mean, it can do, and you see a lot of tutorials where they say, "Now open up the Amazon web console, and upload your zip file."
Mike Roberts:
And it's like, "No, no, no, no, we don't do things that way. That's fine as a learning tool, but that is not what you do in production with serverless systems." You use exactly the same processes that we've been building through continuous delivery and good engineering practices for years, and it's effectively the same thing. One example that I can give here, and it doesn't use all of the aspects of serverless, is that you can build, for example, a microservice very, very happily using serverless technologies. And what I mean by that is you might have 10 different lambda functions, an API gateway sitting in front of those lambda functions, and a database that those lambda functions use. And you can set up your AWS security such that nothing else can talk to that database, and you can set up the security of those lambda functions so that they can't talk to anything apart from that database.
Mike Roberts:
Right? And it's not really that hard once you've understood some of the fundamentals of AWS security. And at that point, to the external observer, what you have is a microservice; they have no idea whether that is implemented in containers or implemented in lambda, right? The architectural patterns don't change, and how you've set up your testing and your CD and all of that stuff doesn't change. It just so happens that you've built your code into lambda functions rather than into containers. And that's all perfectly possible. There are other ways that you can build lambda applications, and 18 months ago it was still a little bit of a wild west in terms of how you build and package these things. But certainly in the Amazon world, I think we're really starting to see some standards come along now in how we think about this stuff.
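The lockdown Mike describes, where only the service's own functions can reach its database, is expressed in IAM policy rather than application code. A hedged sketch of what such a policy document might look like, built as a Python dict so it can be generated and inspected; the table ARN and the exact set of DynamoDB actions are illustrative assumptions, not taken from the conversation:

```python
import json

# Hypothetical ARN of the one DynamoDB table this microservice owns.
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/orders"

def table_only_policy(table_arn):
    """Build an IAM policy document allowing only data-plane access
    to a single table. Attached to the lambda functions' execution
    role, it approximates "they can't talk to anything apart from
    that database" for the database side of the boundary."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Allow the calls the handlers actually use...
                "Effect": "Allow",
                "Action": [
                    "dynamodb:GetItem",
                    "dynamodb:PutItem",
                    "dynamodb:Query",
                ],
                # ...but only against this service's own table.
                "Resource": table_arn,
            }
        ],
    }

policy = table_only_policy(TABLE_ARN)
print(json.dumps(policy, indent=2))
```

The other half of the boundary, stopping anything else from talking to the table, comes from not granting that table's ARN in any other role's policy.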
Paula Paul:
Yeah, I was going to ask earlier, and I think this is a related topic: is serverless really an architecture, or is it just a different deployment topology? Because, as you said, I could build my microservices in containers and orchestrate them in Kubernetes or Mesos, or I could deploy them in lambdas and configure the networking for policies about who should talk to who. So to me, it's the same architecture, it's just a different deployment topology, or maybe infrastructure architecture, if you will. So is serverless really an architecture?
Mike Roberts:
So, we're all consultants here, so I can say the key phrase: it depends. I just gave you an example where you can use serverless technologies and not change your architecture. And that's perfectly possible, and in many cases that's going to work really well. But, and it is a big but, I don't think that's always the best way to think about serverless. First of all, there are problems with that, and there are also benefits that you're not extracting. The problems come in, to one of the points I think you were making early on, Paula, with scaling. So if you have a microservice that you're implementing in lambda, and it's not connecting to its own database, it's actually connecting to a downstream service, then what happens with scaling?
Mike Roberts:
And if, for example, the downstream microservice that it's connecting to is itself not serverless, then, when you are making a hybrid architecture of serverless and non serverless, you have to be very, very aware of the scaling properties of all the components involved. And so that's where you do have to think a bit differently, because there are some different constraints and some different behaviors there. That's the negative side, where you have to think a bit differently. But on the positive side, what we're starting to see is that once teams really start getting into this stuff, they go, "Oh, wow, there are entirely different ways that we can architect things."
Mike Roberts:
There's a really wonderful talk that a guy called Gojko Adzic gave, just over a year ago, where he talked about how he went through two iterations of re-architecting to serverless. The first one was kind of just putting everything into lambda functions but not really changing anything. And it was good, it had a benefit, but it wasn't that interesting. Once he got used to it for a while, he then re-architected it again, taking a drastic change in how he architected it. Doing things like allowing the front end to communicate directly with databases, embracing some of the security patterns that you get with many of these serverless services, and embracing Amazon services that he wouldn't normally have used, in ways that, again, scaled in an interesting way for his use case.
Mike Roberts:
And the two things he got from that: one was it's much easier for him now to make changes, and the other came out when he brought up his Amazon bill for this particular application. It's a mind mapping application with, like, 200,000 active users a month or something, so not small but not trivial. He brought up his Amazon bill, and the whole thing costs less than $100 a month for his Amazon infrastructure. And only $1.50 of that was lambda; most of it was actually CloudFront and other services. So when you start really understanding what this stuff allows you to do, it can change your architecture significantly.
Mike Roberts:
Because it's really about... and this is where lambda is actually not that interesting. Lambda is just about gluing things together. What's really interesting is where you start thinking about all these other services that are available, which mean I don't have to write code in the first place, and they all work well together from a scaling point of view. A number of people have been chatting about this for a while. But I think this is the tricky part, because building applications while really relying on many, many vendors' services is where things get really valuable, but also where there are some warnings and some things that we need to think about.
Zhamak Dehghani:
On the architecture, the question Paula asked and also the question Mike asked in terms of visibility: out of the bucket of serverless goodness, if I pick out the uninteresting one, the lambdas and function as a service, the way I see that is just pure event driven architecture. When we talk about event driven architectures, there are a set of patterns you can use. You can have a mediator pattern, where you still have one service mediating the different steps of your process; however, it emits events that downstream parts of your process can respond to, so you're still controlling the workflow in one mediator, but parts of that workflow can be reactively invoked and extended. That goes all the way to the broker pattern, where everything goes on Kafka topics and everybody responds to them. And I think if we purely pick lambdas, and don't have a hybrid architecture, then it's a pure event driven architecture in terms of how the workflow is implemented, not in terms of the underlying capabilities that you're using from third parties or other SaaS capabilities that you're providing.
Zhamak Dehghani:
And in that pure event driven architecture, there are a set of problems that we need to resolve regardless of what sort of event driven architecture you're doing, but you have to think about those problems seriously: how you really implement transactions, or don't implement transactions; how you deal with errors; how you manage your workflow; the concept of sagas. I think, architecturally, from the workflow implementation point of view, there are some really interesting patterns and some challenging patterns when you go to a purely reactive and event driven architecture using lambdas.
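The broker pattern Zhamak mentions, where everything goes onto topics and independent functions react, can be sketched without any cloud machinery at all. Here is a tiny in-memory stand-in for the event bus; the topic name and the two subscriber reactions are hypothetical, purely to show the shape of the pattern:

```python
from collections import defaultdict

class Broker:
    """Tiny in-memory stand-in for a topic-based broker such as Kafka."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # In the lambda world this is the event-source mapping:
        # "invoke this function whenever the topic receives a record."
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Every subscriber reacts independently; no mediator owns the
        # workflow, which is exactly why errors, transactions and
        # saga-style compensation become design problems for you.
        for handler in self._subscribers[topic]:
            handler(event)

broker = Broker()
audit_log = []

# Two independent reactions to the same event, broker-style.
broker.subscribe("order-placed", lambda e: audit_log.append(("bill", e["id"])))
broker.subscribe("order-placed", lambda e: audit_log.append(("ship", e["id"])))

broker.publish("order-placed", {"id": 42})
```

In the mediator variant, a single coordinating service would publish the next event only after observing the outcome of the previous step, trading the broker's looseness for a visible workflow.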
Mike Roberts:
Yeah. And first of all on this, getting away from my sort of weird abstract ideas of serverless architecture, I do recommend Gojko's talk about this stuff, about event driven architecture, especially data pipelines. This is amazing for lambda. I mean, I said lambda is boring. Actually, I don't think lambda is boring. I've spent a lot of time in lambda, and I'm actually very excited about some of the changes that we've been seeing this year. But how I got into lambda was through a data processing pipeline, and where many companies are finding lambda extremely effective is in this world of data processing and asynchronous systems. There are various case studies that you can look up: companies like FINRA and Fannie Mae have been using lambda for this kind of thing.
Mike Roberts:
And it's because of the scaling benefits and the cost benefits that people really love this. But yeah, there are definitely some challenges that you're speaking to. I think many of those challenges exist whether you're using lambda or not. And sometimes it's appropriate not to use lambda. Again, to come back to this idea of a hybrid application: if you're building a data processing pipeline, that implies that you normally have several stages going on, and it might make total sense not to implement some of those stages in lambda. It might make sense to have some of those stages in something like a more traditional application hosted in a container or on EC2, or it might make sense to use some other kind of serverless system.
Mike Roberts:
So one element of Amazon that's getting increasingly popular is the service called Step Functions, which is basically state machines as a service. It's been out for a year and a half now, I think, I can't remember how long, but it's getting more robust. And so there are certain situations where it might make sense to use that kind of component within your pipeline. So trying to put the right component, the right type of technology, in the right place in the pipeline is super important. But yeah, you've still got to think about where things need to be transactional. Sometimes lambda brings its own challenges. So there's one of the areas that I've talked about, this fire swamp of lambda again, which is that lambda guarantees at-least-once execution, the key phrase there being "at least", and sometimes a particular lambda function may be invoked more than once for the same source event.
Mike Roberts:
And so you need to think about idempotence. But we've needed to think about idempotence anyway, as we've moved to more distributed development and the more distributed architectures that we build. So in the lambda case, it just kind of forces us to think about these issues rather than necessarily introducing new ones.
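Because at-least-once execution means the same source event can arrive twice, a handler has to key its side effects on something unique to the event. A minimal sketch of the idempotence Mike describes, with an in-memory set of processed IDs standing in for durable state (all names here are illustrative; a real Lambda would typically use a conditional write to DynamoDB instead):

```python
# At-least-once delivery means a retried event may reach the handler
# twice; recording the event ID makes the real work happen exactly once.
processed_ids = set()   # stands in for a DynamoDB table with a conditional put
side_effects = []       # stands in for the actual work (a charge, an email, ...)

def handler(event, context=None):
    event_id = event["id"]
    if event_id in processed_ids:
        # Duplicate delivery: acknowledge it without repeating the work.
        return {"status": "duplicate", "id": event_id}
    processed_ids.add(event_id)
    side_effects.append(event_id)
    return {"status": "processed", "id": event_id}

# Simulate a retried delivery of the same source event.
first = handler({"id": "evt-1"})
retry = handler({"id": "evt-1"})
```

An in-memory set obviously does not survive across real Lambda containers, which is why the deduplication record has to live in shared durable storage in practice.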
Mike Mason:
I mean, we've been talking about getting away from operating your own stuff and allowing a serverless platform provider to do it for you. Is serverless the end of DevOps? No, no?
Mike Roberts:
No.
Mike Mason:
You still need DevOps?
Mike Roberts:
No. And I get angry on the internet when people say things like that. Yeah, there are people that say that serverless is the end of DevOps, and I 150% disagree with that sentiment, right? Serverless is the end of a certain kind of operations for many, many organizations. You don't need to think about operating systems, you don't need to think about some of the infrastructure I talked about earlier. Paula mentioned operating system patching; there's one great example from earlier in the year, January or February, when the Spectre and Meltdown vulnerabilities came out, the ones that meant everyone needed to patch their operating systems because there was a bug in Intel chips that could be exploited. So Amazon just patched everybody's lambdas. There was no discussion to be had, there was no work for people to do, they didn't need to redeploy their software. It just happened.
Mike Roberts:
And so that kind of operational work, which we all hate doing, it's just awful, like that goes away. There's no question on that. A lot of the stuff around configuration management that we've been talking about over the last 10 years, some of that goes away. There's no real point to using Puppet or Chef or something with lambda. That kind of level of DevOps tooling goes away, that's true. But that's not DevOps, right? If we look at the DevOps Handbook, and what people like Jez Humble and Nicole Forsgren and various people have been talking about over the last few years, DevOps is about: how do we accelerate development? How do we accelerate the entire process and make it as lean as possible? That stuff still exists. We still need to do monitoring, we still need to do deployment, we still need to do testing. We still need to think about so many aspects, and that, to me, is what DevOps is really about.
Mike Roberts:
And you talk about this in the four values, I can't remember what you called it in the tech radar, that stuff is as important as ever. And in fact, I'd argue that in a lot of cases, some of these serverless platforms allow us to accelerate those ideas. So it's embracing DevOps in my mind as opposed to replacing it.
Zhamak Dehghani:
On monitoring, it's worth actually thinking about this: there might be things, in terms of resources and operating systems and infrastructure orchestrators, that you no longer monitor, but in terms of your application, it has been disintegrated into little pieces of event driven functions. That becomes a more important and probably slightly more complex problem to solve. I mean, even with microservices, monitoring has become its own science, and there are companies being built to treat monitoring data, whether it's coming from your traces or metrics, as big data and to add intelligence on top of it to make sense of when something didn't work and where exactly it happened. And I think now that your architecture has smaller pieces that need to collaborate with each other to create a meaningful overall service or workflow, the monitoring, and making sense of the signals you're getting from these tiny pieces of code, is an even bigger challenge and a bigger investment.
Paula Paul:
Yeah, exactly. I mean, you still have to worry about security, and you still have to worry about even things like correlation IDs, which goes to monitoring. So for all of these things, I typically would work with my delivery infrastructure or DevOps person and make sure that we had the right tooling in place, or that we were consuming whatever I'm producing from my services in the appropriate aggregator. So, yeah.
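The correlation IDs Paula mentions are the thread that lets an aggregator stitch one request's path back together across many small functions. A hedged sketch of the idea, with a plain list standing in for the log aggregator (the function and variable names here are illustrative, not from any particular framework):

```python
import uuid

log_lines = []  # stands in for a log aggregator such as an ELK stack

def log(correlation_id, message):
    # Every line carries the correlation ID, so one request can later be
    # traced across all the functions it touched.
    log_lines.append(f"[{correlation_id}] {message}")

def handle_request(correlation_id=None):
    # Accept an inbound ID (propagated from a caller) or mint a new one
    # at the edge of the system.
    correlation_id = correlation_id or str(uuid.uuid4())
    log(correlation_id, "request received")
    log(correlation_id, "calling downstream function")
    return correlation_id

cid = handle_request("req-123")
```

In a real serverless setup the ID would ride along in the event payload or headers from function to function, which is exactly the kind of convention a team needs to agree on up front.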
Zhamak Dehghani:
But it's just on that continuum. Like with event driven architecture, we had to deal with how to handle transactions or rollbacks or errors; again, the function as a service and lambda architecture forces us to deal with it. And with monitoring and observability, if you had a handful of services, yes, maybe you could trace, look at the logs and write a couple of queries on top of your Kibana to figure out what happened. But as that environment gets broken down into fine grained functions, then you have to invest and think about it as really a service in its own right, and about the intelligence you need to build on top to make sense of the large amount of data.
Mike Roberts:
Yeah, I was just about to say, if you look at the serverless tools space, it's all about monitoring and observability, because that's one of the key missing pieces right now.
Mike Mason:
So if we... Winding up the podcast here, if we think about what's on the horizon, or the future of serverless. I guess we should say we're recording this right at the end of 2018, and serverless is moving fairly fast. So, Mike, what do you expect to see for serverless in 2019, or what are the important future things?
Mike Roberts:
Yeah, and I will caveat this by saying I spend the vast majority of my time looking at AWS based services. That's where almost all of our clients have been over the last couple of years, it's all been AWS, so I say that with a little bit of a caveat. The serverless platform for Amazon is now fairly mature, right? Lambda has been around for four years. And the things that we're seeing in lambda, well, there were a lot of lambda announcements at re:Invent, which happened just about a month or so ago, are about tuning now. It's not about major, major changes. It's about trying to remove some of the friction that people have with some of these services, in great ways, but it's starting to become pretty stable. The caveat to that is the stuff Zhamak was just talking about: I still think there's some work to be done on the monitoring side.
Mike Roberts:
But to me, we're starting now to think about higher level things. We're talking about education; that needs to be something that we're pushing on a lot, lot more. And when I'm not working with clients, which is most of the time, I try and think about this stuff, but there are a lot of people working on the area of how you build serverless applications. The other thing that I'm starting to see a lot more momentum around, which is great and which is related, is application modeling and serverless. So, Amazon themselves now have a team, or a couple of teams in fact, dedicated to thinking about this sort of area within serverless. You have a team at Amazon working on this thing called the Serverless Application Model, which is partly a deployment tool, but it's a little bit more than that. It's about, what is the world of...
Mike Roberts:
What is the software development lifecycle when we're thinking about building a serverless application, and not just an individual lambda function? And related to that, you have another team who are working on this thing called the Serverless Application Repository, which is a way that you can import serverless applications that other people have built. But more importantly, there's a bigger picture there of what they're thinking of, which is: what happens when you try and build an application of applications? What does that mean when we're trying to build a collection of our own serverless applications? They've already done some really good work on that, and they're doing some really interesting work continuing with it into next year.
Mike Roberts:
So to me, those are the main things. I mean, we will see the platforms continue to refine. For example, Amazon announced a few weeks ago, and it actually went GA yesterday as we speak, that there is now WebSockets support for Amazon API Gateway. Most people won't care about that, but for a small minority of people, that's going to make a huge world of change. And so we see this flood of continual little refinements that, as I say, ease the friction of using some of these platforms, and we'll continue seeing that.
Zhamak Dehghani:
Yeah, I agree with Mike. And like any kind of paradigm shift that happens, we go through this curve of growth where people jump in and start using it, and then the community support kicks in and the tooling starts evolving, like the observability tooling, and the patterns evolve: oh, we really hurt ourselves doing lambdas in this area, or flooding our database with connections from our auto scaling lambdas. So those evolving patterns and anti-patterns get, I guess, more evenly distributed to the community, and people start talking about them and writing about them. One area where I hope we will see some change, though I don't see it on the horizon, and maybe Mike you're seeing something, is standardization. What Kubernetes did is create an open standard that all the providers jumped on, and it removed your dependency on AWS or GCP, to some extent, right? You have some level of abstraction. And I wonder whether that's on the horizon for serverless.
Paula Paul:
I'd like to see that as well. Because, like with Kubernetes now, if I can containerize my application, it shouldn't matter if it's orchestrated in Azure, Google or AWS, and I don't yet see that kind of flexibility with serverless. I'd like it because if there's one thing we know, it's that things will change, and we should be able to change our minds about where we're hosting our serverless capabilities without a lot of friction. So if Santa is listening, that's what I'd like for Christmas.
Zhamak Dehghani:
And maybe the need for that is less, because maybe the amount of code you're writing that is tightly coupled to the platform is less, and you can just move it across with less overhead. But I still feel like there is some room for standardization.
Mike Roberts:
Yeah, this is a slightly contentious area, depending on who you talk to, right? On one hand, yes: standards good, being able to move good. And there's a project within the CNCF, the Cloud Native Computing Foundation. Most of what they do is spend their time on Kubernetes, and they've really been helping move the Kubernetes world forward, but there is a group within that which is focusing on serverless. And one of the things that group has been looking at is a common events interface for lambda functions. So yes, great, that's going to happen. On the other hand, to your point, Zhamak, actually moving from a lambda function to an Azure function, from a coding point of view, is really, if you're doing it right, trivial.
Mike Roberts:
Because these are very, very lightweight frameworks. The only thing that you're doing when you're building a lambda function is satisfying a function signature. That's it, there are no libraries, there's nothing. And so moving from a lambda function to something else is pretty trivial. But then it comes to the real big thorny thing, and Sam Newman talks about this as well, which is that the problem with this whole area is not moving the functions around, it's what the functions are using, right? If you're building a really high level serverless service, you're going to be using 15, maybe 20 Amazon services, which are all incredibly high value. And you're using those services not because Amazon have pulled a sneaky one on you, but because they've accelerated how you can build and operate your applications, right?
Mike Roberts:
You're using Amazon's Cognito, or whatever, so that you don't have to manage your own password systems, that kind of thing. So the problem is that moving a lambda function to an Azure function isn't about modifying the code from a syntactical point of view. It's thinking about the semantics of all the services that you're interacting with, which you would have to change. And that's a far, far bigger problem than things like a common interface for serverless functions.
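Mike's "satisfying a function signature" point can be made concrete: AWS Lambda's Python contract is just a handler taking `(event, context)`, so the platform-specific surface can stay tiny if the business logic is kept separate. A hedged sketch (the `greet` function is a made-up example; only the handler shape is Lambda's actual convention):

```python
# The core logic knows nothing about any cloud platform.
def greet(name):
    return f"Hello, {name}"

# AWS Lambda's Python handler contract is just (event, context);
# this thin wrapper is the only platform-specific code.
def lambda_handler(event, context=None):
    return {"statusCode": 200, "body": greet(event.get("name", "world"))}

# Moving platforms means rewrapping greet(), not rewriting it. The hard
# part, as Mike says, is the *services* the function calls, not this code.
resp = lambda_handler({"name": "Zhamak"})
```

An Azure Functions version would wrap the same `greet` in Azure's own handler shape; the syntactic move is trivial, while swapping Cognito, DynamoDB and friends for their Azure equivalents is the real migration.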
Paula Paul:
Yes, exactly. I always say the code is the easy part. And I've been watching Open Service Broker for this sort of thing, but I think it's also a long journey for something like Open Service Broker to bring all this together.
Mike Mason:
Cool. Okay, well, we'll wrap up here. I'd like to thank my co-hosts, Zhamak and Paula Paul from Thoughtworks, and Mike. Mike Roberts, thank you so much for being on the podcast today. I wanted to give you a minute to talk about what you do and where people can find you on the internet. So, tell us all that stuff.
Mike Roberts:
Thanks Mike. And yeah, definitely thanks again for inviting me on this. So yeah, you can find me online at symphonia.io. Symphonia is the consulting business that I started with my business partner, John Chapin, a couple of years ago. We really help people navigate this whole area of cloud architecture and cloud engineering, and what it means for developers on the ground. We'll happily sit and roll up our sleeves and pair with engineers as they're working on these things and figuring out what it means, because architecture, operations and engineering are all one big fuzzy world now. So we love helping people figure out how to navigate that world. You can find us at symphonia.io, or you can personally find me on Twitter @Mikebroberts. But yeah, if you want any help, or to chat about this stuff, please drop me a line at mike@symphonia.io. I always love hearing people's questions and finding out how people are using all of this stuff.
Mike Mason:
Awesome. Thanks, Mike. And for the listeners out there, if you've enjoyed the podcast, please do give us a review or a star rating on whatever podcast platform you're using to listen to us, because it really does help people find the show and increase our reach. So, thanks very much everyone, and we'll catch you on the next one.