by QuinnyPig on 12/28/2024, 11:23:20 PM
I don’t know that I’ve ever seen Lambda versioning used for anything in the wild. Folks just spin up different dev and staging environments for testing.
by newaccountman2 on 12/28/2024, 4:41:20 PM
> Consider this scenario: 1. You have a critical DynamoDB table with a Lambda trigger handling important business logic 2. A developer pushes changes to the Lambda's `$LATEST` version for testing 3. Surprise! Those changes are now processing your production data
...why would a DynamoDB trigger for prod data be pointing to a Lambda where people push things that are still being tested?
> The workarounds are all suboptimal...- Maintain separate tables for different environments
This is not "suboptimal" or a "workaround"; it's the proper way to do things lol
Just as one would have separate RDS instances for QA/staging/test versus production.
Test Lambda --> Test DynamoDB table
Prod Lambda --> Prod DynamoDB table
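The handler doesn't even need to know which environment it's in; read the table name from config and deploy the same code twice. Rough sketch (the env var and table names are just examples):

```python
import os
import boto3

# The deployment sets TABLE_NAME per environment (e.g. "orders-test" on the
# test stack, "orders-prod" on the prod stack), so the handler code is
# identical in both.
table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])

def lambda_handler(event, context):
    for record in event["Records"]:
        new_image = record["dynamodb"].get("NewImage", {})
        # ... run the real business logic against `table` and `new_image` ...
        print(f"{record['eventName']} with {len(new_image)} attributes")
```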
by crop_rotation on 12/28/2024, 7:58:52 AM
I mean the solution would be to have a different test table and also a test Lambda. You can deploy to the test Lambda and test it by making changes to the test table.
I recently discovered what I consider a serious design flaw in AWS DynamoDB Triggers that I believe deserves more attention from the community.
Here's the issue: DynamoDB Triggers can only point to the `$LATEST` version of a Lambda function. Yes, you read that right - there's no built-in way to target a specific version or alias through the console. This means any changes to your Lambda function's `$LATEST` version immediately affect your production triggers, whether you intended to or not.
Consider this scenario:
1. You have a critical DynamoDB table with a Lambda trigger handling important business logic
2. A developer pushes changes to the Lambda's `$LATEST` version for testing
3. Surprise! Those changes are now processing your production data
The workarounds are all suboptimal:
- Create triggers through CloudFormation/CDK (requires delete and recreate; see the sketch below)
- Maintain separate tables for different environments
- Add environment checks in your Lambda code
- Use the Lambda console to configure triggers (unintuitive and error-prone)
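For what it's worth, the CloudFormation/CDK route works because the underlying event source mapping API will accept an alias-qualified function ARN, even though the DynamoDB console won't expose one. A rough boto3 sketch (the ARNs are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholder ARNs for illustration only.
STREAM_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/orders/stream/2024-01-01T00:00:00.000"
FUNCTION_ALIAS_ARN = "arn:aws:lambda:us-east-1:123456789012:function:order-processor:prod"

# Point the trigger at the "prod" alias instead of $LATEST, so pushing new
# code doesn't change what the stream invokes until the alias is moved.
lambda_client.create_event_source_mapping(
    EventSourceArn=STREAM_ARN,
    FunctionName=FUNCTION_ALIAS_ARN,
    StartingPosition="LATEST",
    BatchSize=100,
)
```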
This design choice seems to violate several fundamental principles:
- Separation of concerns
- Safe deployment practices
- The principle of least surprise
- AWS's own best practices for production workloads
What's particularly puzzling is that other AWS services (API Gateway, EventBridge, etc.) handle versioning and aliases perfectly well. Why is DynamoDB different?
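To be concrete about what that looks like elsewhere: you publish an immutable numbered version and move a named alias between versions, so anything referencing the alias never picks up untested code on `$LATEST`. Roughly (the function and alias names are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "order-processor"  # placeholder

# Freeze the current code/config as an immutable, numbered version.
version = lambda_client.publish_version(FunctionName=FUNCTION_NAME)["Version"]

# Move the "prod" alias to that version (use create_alias the first time).
# Anything invoking the alias-qualified ARN keeps running the previous
# version right up until this call.
lambda_client.update_alias(
    FunctionName=FUNCTION_NAME,
    Name="prod",
    FunctionVersion=version,
)
```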
Some questions for the community:
1. Has anyone else encountered production issues because of this?
2. What workarounds have you found effective?
3. Is there a technical limitation I'm missing that explains this design choice?
4. Should we push AWS to change this behavior?
For now, my team has implemented a multi-layer safety net:

```python
def lambda_handler(event, context):
    if not is_production_alias():
        log_and_alert("Non-production version processing production data!")
        return
```

But this feels like we're working around a problem that shouldn't exist in the first place.
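In case anyone wants to copy the idea, a minimal version of those helpers might look like this (a sketch only; you'd pass `context` through to inspect the invoked ARN, and the alias name and alerting mechanism here are illustrative, not our exact setup):

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

PROD_ALIAS = "prod"  # illustrative alias name


def is_production_alias(context) -> bool:
    # context.invoked_function_arn only carries a trailing ":<alias>" or
    # ":<version>" qualifier when the function was invoked through a
    # qualified ARN; an unqualified invocation (i.e. $LATEST) has none.
    return context.invoked_function_arn.endswith(f":{PROD_ALIAS}")


def log_and_alert(message: str) -> None:
    # Sketch: just log; in practice this would page someone or publish to SNS.
    logger.error(message)
```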
Curious to hear others' experiences and thoughts on this. Have you encountered similar "gotchas" in AWS services that seem to go against cloud deployment best practices?