Thu, 12 Jul 2018
Hi, I’m Forrest Brazeal at Trek10, and this is ‘Think FaaS’, where we learn about the world of serverless computing in less time than it takes to run a Lambda function. So put five minutes on the clock - it’s time to ‘Think FaaS’.
We’re continuing today with our miniseries on serverless best practices. I want to start by sharing a question I got from one of our awesome Think FaaS listeners this week. I’m going to read it in full because I think it’s a great summary of a struggle that a lot of people are having.
Hi Forrest, I'm listening to and enjoying your podcast at Trek10. What I really miss, and hopefully you'll have a session about it, is testing in the cloud. As a developer who owns the code from product, to coding, to deployment, to monitoring, testing is a crucial part of the process. Local testing is feasible; there are tools and various mocks that enable me to test locally, but without running my app in its native cloud environment at least once, there is no way to know if I broke something, code or integration (until it reaches the CI/CD process, which for me is too late). The main problems I'm facing:
* Provisioning an additional account in AWS is expensive. I'm using managed services that cost money just for being provisioned, like RDS and Elasticsearch.
* It's quite cumbersome (as a dev manager) to manage multiple accounts for all of my developers and limit their billing.
* It's extremely frustrating to go through the develop, deploy, debug cycle in a cloud environment due to network latency; it can take up to a minute for each small code change.
So let me break down what this listener is saying. It sounds like they're already doing the most important thing, which is writing mocks and unit tests to cover their code locally as much as they can. There are plugins for the Serverless Framework like Serverless Offline that can help with this. There's an open source project called LocalStack that is actively maintained and mocks many of the most popular AWS services. AWS even offers a free local download of DynamoDB, by the way, which is one of my favorite cool AWS facts. The big best practice in all of this is to decouple your business logic from the integrations with your service providers as much as you can. That makes it way easier to mock your code for testing, or even to swap providers if you need to do that at some point. You want to be as confident as you can that your code is correct before it ever goes to the cloud.
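To make that concrete, here's a minimal sketch of what that decoupling might look like: the business logic stays free of AWS calls, and the DynamoDB integration is isolated in one thin function. The table name, event shape, and helper names are illustrative, not from the episode.

```python
# Hypothetical example: ORDERS_TABLE, the event shape, and save_order
# are illustrative assumptions, not code from the episode.
import os
import boto3


def calculate_order_total(items):
    """Pure business logic: no AWS calls, trivial to unit test locally."""
    return sum(item["price"] * item["quantity"] for item in items)


def save_order(table, order_id, total):
    """Thin integration layer: the only code that touches DynamoDB."""
    table.put_item(Item={"orderId": order_id, "total": str(total)})


def handler(event, context):
    # Wiring happens only here; tests can call calculate_order_total
    # directly or pass a fake table object into save_order.
    table = boto3.resource("dynamodb").Table(os.environ["ORDERS_TABLE"])
    total = calculate_order_total(event["items"])
    save_order(table, event["orderId"], total)
    return {"orderId": event["orderId"], "total": total}
```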
But even if our listener is doing all of that, it sounds like they are more concerned about integration tests. After all, a serverless app consists mainly of boundaries between services, and you haven’t really tested it until you’ve run it in the cloud and verified that everything is working. Now, I do want to make this point right up front. Serverless architectures are event-driven, or should be. Events come in, events go out. That is an easy pattern to mock. So when you do integration tests, what you’re really looking for are permissions issues between services, maybe problems with infrastructure configuration, and performance bottlenecks. The performance testing isn’t necessarily something you run every time you make a code change, so we’ve really limited the scope of what we need to worry about.
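Here's a hypothetical unit test for the handler sketched above, standing a fake table object in for DynamoDB so the event-in, event-out logic can be verified without touching the cloud. The module name `my_service` and the event shape are assumptions.

```python
# Hypothetical local test: my_service is an assumed module name for the
# handler sketch above.
from my_service import calculate_order_total, save_order


class FakeTable:
    """Stands in for the DynamoDB Table resource in local tests."""

    def __init__(self):
        self.items = []

    def put_item(self, Item):
        self.items.append(Item)


def test_order_total():
    items = [{"price": 2.50, "quantity": 4}, {"price": 1.00, "quantity": 1}]
    assert calculate_order_total(items) == 11.00


def test_save_order_writes_one_item():
    table = FakeTable()
    save_order(table, order_id="abc-123", total=11.00)
    assert table.items == [{"orderId": "abc-123", "total": "11.0"}]
```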
That said, I heard three main pain points in the listener’s question. Integration tests can be expensive, they can be cumbersome, and they can be slow compared to running on your local machine.
Regarding managing the expense of your test stacks, other than relying on usage-priced services as much as you can, I would suggest sharing your data and cache layers between developers and giving them their own application stacks (Lambda, API Gateway, AppSync), since those are cheap and that's where a lot of the dev work is happening. This gets trickier if you are giving each developer a completely separate AWS account, so there you have to weigh the tradeoff between cost and the advantage of sandboxing your code.
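One way to wire that up, assuming the shared data layer publishes its resource names somewhere discoverable like SSM Parameter Store, might look like this sketch (the parameter path is made up):

```python
# Minimal sketch: assumes the shared data layer stack writes its table name
# to an SSM parameter like /shared/<stage>/orders-table-name (hypothetical).
import os
import boto3


def resolve_shared_table_name(stage: str) -> str:
    """Look up the shared DynamoDB table for this stage so each developer's
    own application stack can point at the common data layer."""
    ssm = boto3.client("ssm")
    param = ssm.get_parameter(Name=f"/shared/{stage}/orders-table-name")
    return param["Parameter"]["Value"]


if __name__ == "__main__":
    print(resolve_shared_table_name(os.environ.get("STAGE", "dev")))
```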
Secondly, managing all those test stacks can be tough. It's so important that you're using something like CodePipeline to coordinate your tests. At Trek10 we've been using dynamic pipelines that spin up when a feature branch is pushed to source control and deploy the developer's code to a test stack. It's just dev, so it's not the end of the world if somebody needs to go into the console, poke around, and change some things on their test stack. That's what the console is for, and it can be quite fast. But you need a code promotion process such that only code-defined infrastructure makes it out of the dev environment and through the pipeline to the next step.
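As a rough illustration of the dynamic pipeline idea, and not Trek10's actual implementation, a small Lambda could react to a branch-created event and stand up a per-branch pipeline stack from a pre-written CloudFormation template. The template URL and event shape below are assumptions.

```python
# Hypothetical sketch: spin up a per-branch pipeline stack when a feature
# branch is created. Template URL and event fields are assumptions.
import re
import boto3

cloudformation = boto3.client("cloudformation")

# Assumed location of a pre-written pipeline template.
PIPELINE_TEMPLATE_URL = "https://s3.amazonaws.com/my-bucket/feature-pipeline.yml"


def handler(event, context):
    # e.g. a CodeCommit "Repository State Change" event for a new branch.
    branch = event["detail"]["referenceName"]
    # CloudFormation stack names only allow alphanumerics and hyphens.
    safe_branch = re.sub(r"[^a-zA-Z0-9-]", "-", branch)
    cloudformation.create_stack(
        StackName=f"pipeline-{safe_branch}",
        TemplateURL=PIPELINE_TEMPLATE_URL,
        Parameters=[{"ParameterKey": "BranchName", "ParameterValue": branch}],
        Capabilities=["CAPABILITY_IAM"],
    )
    return {"created": f"pipeline-{safe_branch}"}
```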
I also recommend automating smoke tests that take place after your deployments. Call your Lambda functions with mock data. Make GraphQL queries against AppSync. You can run these tests serverlessly in Lambda functions or in a CodeBuild environment. CodeBuild will be a bit slower but may give you more flexibility. And you want to run those tests in all your environments, not just in dev.
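A post-deploy smoke test along those lines might look something like the sketch below; the function name, AppSync endpoint, and API key are placeholders for whatever your stack actually outputs.

```python
# Hypothetical smoke tests run after a deployment. FUNCTION_NAME,
# APPSYNC_URL, and APPSYNC_API_KEY are placeholder environment variables.
import json
import os
import urllib.request
import boto3


def smoke_test_lambda():
    """Invoke a deployed function with mock data and check it responds."""
    client = boto3.client("lambda")
    response = client.invoke(
        FunctionName=os.environ["FUNCTION_NAME"],
        Payload=json.dumps({"orderId": "smoke-test", "items": []}).encode(),
    )
    assert response["StatusCode"] == 200, "Lambda invocation failed"


def smoke_test_appsync():
    """Run a trivial GraphQL query against the AppSync endpoint."""
    body = json.dumps({"query": "{ __typename }"}).encode()
    request = urllib.request.Request(
        os.environ["APPSYNC_URL"],
        data=body,
        headers={
            "x-api-key": os.environ["APPSYNC_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as resp:
        result = json.loads(resp.read())
    assert "errors" not in result, f"GraphQL errors: {result.get('errors')}"


if __name__ == "__main__":
    smoke_test_lambda()
    smoke_test_appsync()
    print("Smoke tests passed")
```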
Finally, speed. Our listener mentions frustration with network latency when writing code on a local machine, deploying it to the cloud, and running it to check for bugs. This is going to sound crazy, but one way to shorten that feedback loop might be just to write your code in the cloud to begin with. I’m talking about using Cloud9, AWS’s browser-based IDE. My co-host Jared Short actually uses Cloud9 as his primary dev environment, and he recently put out an interesting blog post on how and why he does that. Cloud9 has some decent Lambda integrations. It lets you write and debug your serverless application in your browser, and then deploy functions to a test environment with a single click. It might not be quite as fast as running a locally hosted application, but you may find that it speeds up your workflow compared to pushing code from your local machine every time you want to make a change.
Now, obviously, the workflows I’ve described aren’t perfect. We are not at feature parity in the serverless world with the local development ecosystem for, like, a React app. But as the serverless movement continues to grow, we’ve already seen the tooling improve by leaps and bounds, and I have no doubt that will continue to happen. In the meantime, you can keep up with all things serverless by following Trek10 on Twitter @Trek10inc, I’m there as well @forrestbrazeal, and we’ll see you on the next episode of Think FaaS.