Relevant News - Global

Here are the latest news items relevant to all Regions.

AWS Security Hub launches Extended plan for pay-as-you-go partner solutions

🚀
New Service Feature Introduction
TL;DR: AWS Security Hub launches Extended plan offering unified security operations with curated partner solutions and pay-as-you-go pricing.
AWS Services: AWS Security Hub, AWS Enterprise Support

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/sec-hub-extended/

Today, we're announcing the general availability of AWS Security Hub Extended, a new plan that extends unified security operations across your enterprise through a single-vendor experience. This plan helps address the complexity of managing multiple vendor relationships and lengthy procurement cycles by bringing together the best of AWS detection services and curated partner security solutions.

The Security Hub Extended plan delivers three critical advantages. First, it helps streamline procurement by consolidating solution usage into one bill—thereby reducing procurement complexity while preserving direct access to each provider's domain expertise. AWS Enterprise Support customers also benefit from unified Level 1 support from AWS. Second, it enables you to establish more comprehensive protection by bringing together the best of AWS detection services with curated partner solutions across endpoint, identity, email, network, data, browser, cloud, AI, and security operations. Third, it helps enhance operational efficiency by streamlining security findings in a standard format, providing centralized visibility across your security environment while reducing the burden of manual integration work.

You can access and review partner solutions across security categories through the Security Hub console, selecting only the solutions you need with flexible pay-as-you-go or flat-rate pricing—no upfront investments or long-term commitments required. With AWS as the seller of record, the Extended plan may be eligible for AWS Private Pricing opportunities. This gives you the flexibility to add or remove security categories as your business needs evolve, while enabling you to streamline vendor contract negotiations and consolidate billing. For a list of AWS commercial Regions where Security Hub is available, see the AWS Region table. For more information about pricing, visit the AWS Security Hub pricing page. To get started, visit the AWS Security Hub console or product page.

Published: 2026-02-26 17:30:00+00:00

Introducing Amazon EC2 I8g.metal-48xl instances

🖥️
New Instance Type Introduction
TL;DR: AWS announces general availability of Amazon EC2 I8g.metal-48xl instances powered by Graviton4 processors with enhanced storage performance.
AWS Services: Amazon EC2, AWS Graviton4, AWS Nitro System, Amazon Elastic Block Store

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/ec2-i8g-metal-48xl-generally-available/

AWS is announcing the general availability of Amazon EC2 Storage Optimized I8g.metal-48xl instances. I8g instances are powered by AWS Graviton4 processors that deliver up to 60% better compute performance compared to previous generation I4g instances. I8g instances use the latest third-generation AWS Nitro SSDs, local NVMe storage that delivers up to 65% better real-time storage performance per TB, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads.

Amazon EC2 I8g instances are designed for I/O intensive workloads that require rapid data access and real-time latency from storage. These instances excel at handling transactional and real-time databases, including MySQL, PostgreSQL, and NoSQL solutions like ClickHouse, Apache Druid, and MongoDB. They're also optimized for real-time analytics platforms such as Apache Spark. I8g instances are available in 11 sizes, up to 48xlarge (including two metal sizes), with up to 1,536 GiB of memory and 45 TB of local instance storage. They deliver up to 100 Gbps of network bandwidth and 60 Gbps of dedicated bandwidth for Amazon Elastic Block Store (EBS).
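The per-size specs quoted above can also be read programmatically. A minimal sketch, assuming the instance type is queryable: the sample values mirror this announcement, and the field names follow the EC2 DescribeInstanceTypes response shape, but verify them against the EC2 API reference before relying on this.

```python
# Sample of the DescribeInstanceTypes fields that surface the specs above.
# Values mirror the announcement; in practice you would fetch this with
# boto3: ec2.describe_instance_types(InstanceTypes=["i8g.metal-48xl"]).
sample_instance_type = {
    "InstanceType": "i8g.metal-48xl",
    "MemoryInfo": {"SizeInMiB": 1536 * 1024},             # 1,536 GiB
    "InstanceStorageInfo": {"TotalSizeInGB": 45000},      # ~45 TB local NVMe
    "NetworkInfo": {"NetworkPerformance": "100 Gigabit"},
}

def local_storage_tb(info: dict) -> float:
    """Convert the reported local instance storage to TB."""
    return info["InstanceStorageInfo"]["TotalSizeInGB"] / 1000

print(local_storage_tb(sample_instance_type))  # 45.0
```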

To learn more, visit EC2 I8g instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page.

Published: 2026-02-26 16:00:00+00:00

Introducing OpenClaw on Amazon Lightsail to run your autonomous private AI agents Blog Post

🚀
New Service Introduction
TL;DR: AWS launches OpenClaw on Amazon Lightsail for running autonomous private AI agents with pre-configured Amazon Bedrock integration.
AWS Services: Amazon Lightsail, Amazon Bedrock

Link: https://aws.amazon.com/blogs/aws/introducing-openclaw-on-amazon-lightsail-to-run-your-autonomous-private-ai-agents/

AWS launches OpenClaw on Amazon Lightsail. You can launch an OpenClaw instance, pair your browser, enable AI capabilities, and optionally connect messaging channels. Your Lightsail OpenClaw instance is pre-configured with Amazon Bedrock so you can start using your AI assistant immediately, with no additional configuration required.

Published: 2026-03-04 20:04:16+00:00

AWS Security Hub Extended offers full-stack enterprise security with curated partner solutions Blog Post

🚀
New Service Introduction
TL;DR: AWS Security Hub Extended launches as unified full-stack enterprise security solution with curated partner integrations
AWS Services: AWS Security Hub Extended, AWS Security Hub

Link: https://aws.amazon.com/blogs/aws/aws-security-hub-extended-offers-full-stack-enterprise-security-with-curated-partner-solutions/

AWS announces the general availability of AWS Security Hub Extended, a unified, full-stack enterprise security solution. It brings together AWS detection services and curated partner solutions through a single, simplified experience.

Published: 2026-02-26 18:52:06+00:00

Transform live video for mobile audiences with AWS Elemental Inference Blog Post

🚀
New Service Introduction
TL;DR: AWS Elemental Inference launches as fully managed AI service for real-time video transformation to mobile formats
AWS Services: AWS Elemental Inference

Link: https://aws.amazon.com/blogs/aws/transform-live-video-for-mobile-audiences-with-aws-elemental-inference/

AWS Elemental Inference is a fully managed AI service that automatically transforms live and on-demand video broadcasts into vertical formats optimized for mobile and social platforms in real time, enabling broadcasters to reach audiences on TikTok, Instagram Reels, and YouTube Shorts without manual editing or AI expertise.

Published: 2026-02-24 18:55:11+00:00

AWS Weekly Roundup: Claude Sonnet 4.6 in Amazon Bedrock, Kiro in GovCloud Regions, new Agent Plugins, and more (February 23, 2026) Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS Weekly Roundup featuring Claude Sonnet 4.6 in Amazon Bedrock, Kiro in GovCloud Regions, and new Agent Plugins
AWS Services: Amazon Bedrock

Link: https://aws.amazon.com/blogs/aws/aws-weekly-roundup-claude-sonnet-4-6-in-amazon-bedrock-kiro-in-govcloud-regions-new-agent-plugins-and-more-february-23-2026/

Last week, my team met many developers at Developer Week in San Jose. My colleague Vinicius Senger delivered a great keynote about renascent software—a new way of building and evolving applications where humans and AI collaborate as co-developers using Kiro. Other colleagues, Du’An Lightfoot, Elizabeth Fuentes, Laura Salinas, and Sandhya Subramani, spoke about building and […]

Published: 2026-02-23 16:56:24+00:00

Amazon EC2 Hpc8a Instances powered by 5th Gen AMD EPYC processors are now available Blog Post

🖥️
New Instance Type Introduction
TL;DR: Amazon EC2 Hpc8a instances with 5th Gen AMD EPYC processors now available, delivering 40% higher performance and enhanced networking.
AWS Services: Amazon EC2

Link: https://aws.amazon.com/blogs/aws/amazon-ec2-hpc8a-instances-powered-by-5th-gen-amd-epyc-processors-are-now-available/

Amazon EC2 Hpc8a instances, powered by 5th Gen AMD EPYC processors, deliver up to 40% higher performance, increased memory bandwidth, and 300 Gbps Elastic Fabric Adapter networking, helping customers accelerate compute-intensive simulations, engineering workloads, and tightly coupled HPC applications.

Published: 2026-02-16 23:12:37+00:00

AWS Weekly Roundup: Amazon EC2 M8azn instances, new open weights models in Amazon Bedrock, and more (February 16, 2026) Blog Post

🖥️
New Instance Type Introduction
TL;DR: AWS introduces new Amazon EC2 M8azn instances and new open weights models in Amazon Bedrock
AWS Services: Amazon EC2, Amazon Bedrock, AWS Graviton

Link: https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ec2-m8azn-instances-new-open-weights-models-in-amazon-bedrock-and-more-february-16-2026/

I joined AWS in 2021, and since then I’ve watched the Amazon Elastic Compute Cloud (Amazon EC2) instance family grow at a pace that still surprises me. From AWS Graviton-powered instances to specialized accelerated computing options, it feels like every few months there’s a new instance type landing that pushes performance boundaries further. As of […]

Published: 2026-02-16 17:28:52+00:00

AWS Weekly Roundup: Claude Opus 4.6 in Amazon Bedrock, AWS Builder ID Sign in with Apple, and more (February 9, 2026) Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS weekly roundup featuring Claude Opus 4.6 in Bedrock, new EC2 instances, Network Firewall price cuts, and security enhancements.
AWS Services: Amazon Bedrock, Amazon EC2, AWS Network Firewall, Amazon DynamoDB, AWS Builder ID, AWS STS, Amazon CloudFront

Link: https://aws.amazon.com/blogs/aws/aws-weekly-roundup-claude-opus-4-6-in-amazon-bedrock-aws-builder-id-sign-in-with-apple-and-more-february-9-2026/

Here are the notable launches and updates from last week that can help you build, scale, and innovate on AWS. Last week’s launches Here are the launches that got my attention this week. Let’s start with news related to compute and networking infrastructure: Introducing Amazon EC2 C8id, M8id, and R8id instances: These new Amazon EC2 […]

Published: 2026-02-09 20:42:04+00:00

Amazon EC2 C8id, M8id, and R8id instances with up to 22.8 TB local NVMe storage are generally available Blog Post

🖥️
New Instance Type Introduction
TL;DR: AWS launches new EC2 C8id, M8id, and R8id instances with up to 22.8 TB local NVMe storage, offering 3x more resources.
AWS Services: Amazon EC2

Link: https://aws.amazon.com/blogs/aws/amazon-ec2-c8id-m8id-and-r8id-instances-with-up-to-22-8-tb-local-nvme-storage-are-generally-available/

AWS launches Amazon EC2 C8id, M8id, and R8id instances backed by NVMe-based SSD block-level instance storage physically connected to the host server. These instances offer 3 times more vCPUs, memory, and local storage, with up to 22.8 TB of local NVMe-backed SSD block-level storage.

Published: 2026-02-04 22:31:56+00:00

AWS IAM Identity Center now supports multi-Region replication for AWS account access and application use Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS IAM Identity Center now supports multi-Region replication for workforce identities and permission sets, improving resiliency and enabling closer application deployment.
AWS Services: AWS IAM Identity Center

Link: https://aws.amazon.com/blogs/aws/aws-iam-identity-center-now-supports-multi-region-replication-for-aws-account-access-and-application-use/

AWS IAM Identity Center now supports multi-Region replication of workforce identities and permission sets, enabling improved resiliency for AWS account access and allowing applications to be deployed closer to users while meeting data residency requirements.

Published: 2026-02-03 19:13:34+00:00

AWS Weekly Roundup: Amazon EC2 G7e instances, Amazon Corretto updates, and more (January 26, 2026) Blog Post

🖥️
New Instance Type Introduction
TL;DR: Amazon EC2 G7e instances with NVIDIA Blackwell GPUs launched for GPU-intensive workloads and AI inference applications.
AWS Services: Amazon EC2

Link: https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ec2-g7e-instances-with-nvidia-blackwell-gpus-january-26-2026/

Hey! It’s my first post for 2026, and I’m writing to you while watching our driveway getting dug out. I hope wherever you are you are safe and warm and your data is still flowing! This week brings exciting news for customers running GPU-intensive workloads, with the launch of our newest graphics and AI inference […]

Published: 2026-01-26 16:25:46+00:00

Announcing Amazon EC2 G7e instances accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs Blog Post

🖥️
New Instance Type Introduction
TL;DR: AWS introduces Amazon EC2 G7e instances with NVIDIA RTX PRO 6000 Blackwell GPUs for generative AI inference and graphics workloads.
AWS Services: Amazon EC2

Link: https://aws.amazon.com/blogs/aws/announcing-amazon-ec2-g7e-instances-accelerated-by-nvidia-rtx-pro-6000-blackwell-server-edition-gpus/

AWS introduces Amazon EC2 G7e instances accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, delivering up to 2.3 times higher inference performance. G7e instances deliver cost-effective performance for generative AI inference workloads and the highest performance for graphics workloads.

Published: 2026-01-20 21:22:56+00:00

Amazon EC2 X8i instances powered by custom Intel Xeon 6 processors are generally available for memory-intensive workloads Blog Post

🖥️
New Instance Type Introduction
TL;DR: Amazon EC2 X8i instances with custom Intel Xeon 6 processors now generally available for memory-intensive workloads
AWS Services: Amazon EC2

Link: https://aws.amazon.com/blogs/aws/amazon-ec2-x8i-instances-powered-by-custom-intel-xeon-6-processors-are-generally-available-for-memory-intensive-workloads/

AWS is announcing the general availability of Amazon EC2 X8i instances, next-generation memory optimized instances powered by custom Intel Xeon 6 processors available only on AWS. X8i instances are SAP-certified and deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud.

Published: 2026-01-15 22:52:17+00:00

Amazon SageMaker HyperPod now provides comprehensive observability for Restricted Instance Groups

🚀
New Service Feature Introduction
TL;DR: Amazon SageMaker HyperPod introduces comprehensive observability for Restricted Instance Groups with unified monitoring dashboards and automated log collection.
AWS Services: Amazon SageMaker HyperPod, Amazon Managed Grafana, Amazon Managed Service for Prometheus, FSx for Lustre

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-sagemaker-hyperpod-observability-rig/

Amazon SageMaker HyperPod now offers comprehensive observability for Restricted Instance Groups (RIG), enabling teams training foundation models with Nova Forge to gain deep visibility into their compute resources and training workloads. This new capability eliminates the manual effort of collecting and correlating metrics across the infrastructure stack, providing a unified view of GPU performance, system health, network throughput, and Kubernetes cluster state through a pre-configured Amazon Managed Grafana dashboard backed by Amazon Managed Service for Prometheus.

You can now monitor GPU utilization, NVLink bandwidth, CPU pressure, FSx for Lustre usage, and pod lifecycle from a single Grafana dashboard, with metrics collected across four exporters covering GPU performance, host-level system health, network fabric, and Kubernetes object state. In addition, curated logs are automatically made available in these dashboards, covering epoch progress, step-level training logs, pipeline errors, and Python tracebacks, so you can quickly diagnose training failures. HyperPod Observability for Restricted Instance Groups is automatically enabled when you create a new cluster using RIGs, or can be enabled for existing clusters in a few clicks in the HyperPod cluster management console.

Amazon SageMaker HyperPod RIG observability is available in all AWS Regions where SageMaker HyperPod RIG is supported. To learn more, visit the documentation.

Published: 2026-03-04 18:00:00+00:00

Amazon OpenSearch Ingestion now supports Amazon Managed Service for Prometheus as a sink

🚀
New Service Feature Introduction
TL;DR: Amazon OpenSearch Ingestion now supports Amazon Managed Service for Prometheus as a sink for fully managed metrics ingestion pipelines.
AWS Services: Amazon OpenSearch Ingestion, Amazon Managed Service for Prometheus, Amazon OpenSearch Service, Amazon Managed Grafana

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-opensearch-ingestion-supports-amazon-managed-service-prometheus-sink

Amazon OpenSearch Ingestion now supports Amazon Managed Service for Prometheus as a sink, making it possible to build fully managed, end-to-end metrics ingestion pipelines without any custom forwarding infrastructure. With this launch, customers can now manage their entire metrics ingestion workflow using the same pipeline infrastructure they already use for logs and traces.

Customers can now choose the right destination for each observability signal — sending logs and traces to Amazon OpenSearch Service for powerful full-text search, log analytics, and trace correlation, while routing metrics to Amazon Managed Service for Prometheus for time-series storage and analysis. This flexibility allows teams to build purpose-fit observability pipelines that leverage the strengths of each service without compromising on data fidelity or analytical capability. Amazon OpenSearch Ingestion's built-in data transformation and enrichment capabilities allow customers to prepare and refine metrics before they land in Amazon Managed Service for Prometheus, improving data quality and consistency. Once metrics are in Amazon Managed Service for Prometheus, customers can query them using Prometheus Query Language to analyze trends, configure alerting rules to get notified when metrics cross defined thresholds, and visualize their data using Amazon Managed Grafana for rich, customizable views of infrastructure and application health.

The feature is supported in all AWS Regions where Amazon OpenSearch Ingestion is currently available. Customers can get started by adding the new Amazon Managed Service for Prometheus sink to their pipeline configuration via the AWS Management Console or the AWS CLI, and can then start ingesting metrics into their Amazon Managed Service for Prometheus workspace.
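The getting-started step amounts to adding the new sink to a pipeline definition. A minimal sketch, modeled on existing Data Prepper-style OpenSearch Ingestion pipelines: the `prometheus` sink name, its option names, and the example workspace URL and role ARN are all assumptions for illustration, not confirmed API, so check the service documentation for the actual schema.

```python
# Hypothetical OpenSearch Ingestion pipeline body routing OTLP metrics to an
# Amazon Managed Service for Prometheus workspace. The sink name and options
# are assumptions modeled on existing Data Prepper pipelines.
PIPELINE_BODY = """\
metrics-pipeline:
  source:
    otel_metrics_source:        # existing OTLP metrics source
      path: "/v1/metrics"
  sink:
    - prometheus:               # assumed name for the new AMP sink
        endpoint: "https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write"
        aws:
          region: "us-east-1"
          sts_role_arn: "arn:aws:iam::123456789012:role/OSISPipelineRole"
"""
```

You would then deploy this body with the console or the CLI (for example, `aws osis create-pipeline` with `--pipeline-configuration-body`), substituting your real workspace endpoint and pipeline role.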

To learn more and get started, visit the Amazon OpenSearch Ingestion documentation.

Published: 2026-03-04 16:00:00+00:00

Amazon OpenSearch Ingestion now supports unified ingestion endpoint for OpenTelemetry data

🚀
New Service Feature Introduction
TL;DR: Amazon OpenSearch Ingestion now supports unified endpoint for all OpenTelemetry signals (logs, metrics, traces) in single pipeline.
AWS Services: Amazon OpenSearch Ingestion, OpenTelemetry

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-opensearch-ingestion-unified-ingestion-endpoint-opentelemetry

Amazon OpenSearch Ingestion now supports a unified ingestion endpoint that can accept all three OpenTelemetry observability signals — logs, metrics, and traces — through a single pipeline. Previously, customers who wanted to ingest all three OpenTelemetry data types had to create and manage three separate pipelines, one for each signal type. With this launch, a single pipeline can now receive any combination of OpenTelemetry signals, simplifying pipeline architecture and reducing operational overhead.

Customers can now build centralized observability pipelines that consolidate logs, metrics, and traces in one place, making it easier to correlate signals and gain a holistic view of application health. Teams operating at scale can reduce the number of pipelines they manage, lowering infrastructure costs and simplifying access control, monitoring, and lifecycle management. This also makes it easier to adopt OpenTelemetry incrementally as teams can begin with one signal type and add others over time without any pipeline reconfiguration.

The unified ingestion endpoint for OpenTelemetry data is supported in all AWS Regions where Amazon OpenSearch Ingestion is currently available. Customers can get started by using the new unified OpenTelemetry source in their pipeline configuration via the AWS Management Console or the AWS CLI, and pointing their OpenTelemetry clients to the new unified endpoint.
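One pipeline receiving every signal type implies routing each signal to a fitting destination. A hypothetical sketch only: the unified source name, the route conditions, and the sink options below are illustrative assumptions, not the documented schema.

```python
# Hypothetical single pipeline accepting all three OpenTelemetry signals on
# one endpoint and splitting them by signal type. Source name, route syntax,
# and sink names are assumptions for illustration.
UNIFIED_PIPELINE_BODY = """\
unified-otel-pipeline:
  source:
    otel:                       # assumed name of the unified OTLP source
      path: "/opentelemetry"
  route:
    - metrics: '/signalType == "metric"'
    - not-metrics: '/signalType != "metric"'
  sink:
    - opensearch:               # logs and traces: search and correlation
        routes: [not-metrics]
        hosts: ["https://search-example.us-east-1.es.amazonaws.com"]
        index: "otel-%{yyyy.MM.dd}"
    - prometheus:               # metrics: time-series storage (assumed sink name)
        routes: [metrics]
        endpoint: "https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write"
"""
```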

To learn more and get started, visit the Amazon OpenSearch Ingestion documentation.

Published: 2026-03-04 16:00:00+00:00

Amazon SageMaker Unified Studio adds metadata sync with third-party catalogs

🚀
New Service Feature Introduction
TL;DR: Amazon SageMaker Unified Studio now supports metadata synchronization with Atlan, Collibra, and Alation third-party catalogs.
AWS Services: Amazon SageMaker Unified Studio, Amazon SageMaker Catalog

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-sagemaker-unified-studio-3p-catalogs/

Amazon SageMaker Unified Studio now supports metadata and context sync across Atlan, Collibra, and Alation. These integrations synchronize catalog metadata between Amazon SageMaker Catalog and each partner platform, giving teams a consistent view of their data and AI assets regardless of which tool they use day to day. Organizations can maintain aligned glossary terms, asset descriptions, and ownership information across platforms without manual reconciliation.

All three integrations synchronize key metadata elements including projects, assets, descriptions, glossary terms, and their hierarchies. With the Collibra integration, you can synchronize metadata in both directions between SageMaker Catalog and the partner platform, so updates you make in one are reflected in the other; you can also manage SageMaker Unified Studio data access requests from Collibra. With the Atlan and Alation integrations, you can ingest metadata from SageMaker Catalog into those platforms, with additional enhancements coming soon. You set up the Atlan and Alation integrations by creating a connection to SageMaker Unified Studio from within each platform, while the Collibra integration is available as an open-source solution on GitHub.

To learn more, visit the Amazon SageMaker Unified Studio documentation. For implementation details, see the Atlan blog post, Collibra blog post, and Alation blog post.

Published: 2026-03-03 23:00:00+00:00

Amazon SageMaker Unified Studio launches support for remote connection from Kiro IDE

🚀
New Service Feature Introduction
TL;DR: Amazon SageMaker Unified Studio now supports remote connection from Kiro IDE for seamless development workflows.
AWS Services: Amazon SageMaker, Amazon SageMaker Unified Studio, Amazon EMR, AWS Glue, Amazon Athena, AWS IAM

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-sagemaker-unified-studio-kiro-ide/

Today, AWS announces the ability to remotely connect from Kiro IDE to Amazon SageMaker Unified Studio. This new capability allows data scientists, ML engineers, and developers to leverage their Kiro setup - including its spec-driven development, conversational coding, and automated feature generation capabilities - while accessing the scalable compute resources of Amazon SageMaker. By connecting Kiro to SageMaker Unified Studio using the AWS toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing agentic development workflows within a single environment for all your AWS analytics and AI/ML services.

SageMaker Unified Studio, part of the next generation of Amazon SageMaker, offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software). Starting today, you can also use your customized local Kiro setup - complete with specs, steering files, and hooks - while accessing your compute resources and data on Amazon SageMaker. Since Kiro is built on Code-OSS, authentication is secure via IAM through the AWS Toolkit extension, giving you access to all your SageMaker Unified Studio domains and projects. This integration provides a convenient path from your local AI-powered development environment to scalable infrastructure for running workloads across data processing, SQL analytics services like Amazon EMR, AWS Glue, and Amazon Athena, and ML workflows - all with enterprise-grade security including customer-managed encryption keys and AWS IAM integration.

This feature is available in all Regions where Amazon SageMaker Unified Studio is available. To learn more, refer to the SageMaker user guide.

Published: 2026-03-03 19:39:00+00:00

AWS Batch now supports configurable scale down delay

🚀
New Service Feature Introduction
TL;DR: AWS Batch introduces configurable scale down delay to reduce job processing delays for intermittent workloads.
AWS Services: AWS Batch

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/aws-batch-configurable-scale-down-delay/

AWS Batch now allows you to configure a scale down delay for managed compute environments, helping reduce job processing delays for intermittent and periodic workloads. With the new minScaleDownDelayMinutes parameter, you can specify how long AWS Batch keeps instances running after their jobs complete (from 20 minutes to 1 week), preventing unnecessary instance terminations and relaunches that can delay subsequent job processing.

You can configure the scale down delay when creating or updating a compute environment via the AWS Batch API (CreateComputeEnvironment or UpdateComputeEnvironment) or the AWS Batch Management Console. The delay is applied at the instance level, based on when each instance last completed a job.
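The configuration step above can be sketched as validating the documented range and shaping an UpdateComputeEnvironment request. The placement of `minScaleDownDelayMinutes` under `computeResources` is an assumption based on this announcement, so confirm it against the AWS Batch API reference before use.

```python
# Sketch: validate the documented range (20 minutes to 1 week) and build an
# UpdateComputeEnvironment payload. The exact placement of
# minScaleDownDelayMinutes in the request is an assumption, not confirmed API.

MIN_DELAY_MINUTES = 20            # lower bound from the announcement
MAX_DELAY_MINUTES = 7 * 24 * 60   # one week (10,080 minutes)

def scale_down_delay_request(env_name: str, delay_minutes: int) -> dict:
    """Build a request payload that sets the scale down delay."""
    if not MIN_DELAY_MINUTES <= delay_minutes <= MAX_DELAY_MINUTES:
        raise ValueError(
            f"delay must be {MIN_DELAY_MINUTES}-{MAX_DELAY_MINUTES} minutes, "
            f"got {delay_minutes}"
        )
    return {
        "computeEnvironment": env_name,
        "computeResources": {"minScaleDownDelayMinutes": delay_minutes},
    }

print(scale_down_delay_request("nightly-batch", 60))
```

With boto3 you would pass this payload to `batch.update_compute_environment(**request)`.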

Scale down delay is supported today in all AWS Regions where AWS Batch is available. For more information, see the AWS Batch API Guide.

Published: 2026-03-02 19:05:00+00:00

AWS Config now supports 30 new resource types

🎉
Service Feature Change
TL;DR: AWS Config now supports 30 additional resource types across services like Amazon Bedrock AgentCore and Amazon Cognito for enhanced monitoring.
AWS Services: AWS Config, Amazon Bedrock, Amazon Cognito, AWS AppSync, AWS Batch, AWS Deadline, AWS Detective, Amazon GameLift, AWS IoT, AWS Omics, AWS PCA Connector, AWS Resource Explorer, AWS Resource Groups, Amazon EventBridge Scheduler, Amazon Verified Permissions, AWS Glue DataBrew, Amazon Connect

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/aws-config-new-resource-types/

AWS Config now supports 30 additional AWS resource types across key services including Amazon Bedrock AgentCore and Amazon Cognito. This expansion provides greater coverage over your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources.

With this launch, if you have enabled recording for all resource types, then AWS Config will automatically track these new additions. The newly supported resource types are also available in Config rules and Config aggregators.
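The "recording for all resource types" behavior can be sketched as a check against the recorder's recordingGroup; the field names below follow the AWS Config DescribeConfigurationRecorders response shape, but treat this as an illustrative sketch rather than the service's own logic.

```python
# Sketch: whether a Config recorder picks up a newly supported resource type.
# A recorder with allSupported=true tracks new types automatically; otherwise
# the type must be listed explicitly in resourceTypes. Field names follow the
# AWS Config DescribeConfigurationRecorders response shape.

def records_resource_type(recording_group: dict, resource_type: str) -> bool:
    if recording_group.get("allSupported"):
        return True
    return resource_type in recording_group.get("resourceTypes", [])

# A record-everything recorder tracks the new types with no changes needed.
print(records_resource_type({"allSupported": True},
                            "AWS::BedrockAgentCore::Gateway"))  # True
```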

You can now use AWS Config to monitor the following newly supported resource types in all AWS Regions where the supported resources are available:

Resource Types:

AWS::AppSync::DataSource
AWS::Batch::ConsumableResource
AWS::Bedrock::DataSource
AWS::BedrockAgentCore::Gateway
AWS::BedrockAgentCore::Memory
AWS::Cognito::IdentityPoolRoleAttachment
AWS::Cognito::LogDeliveryConfiguration
AWS::Cognito::UserPoolUICustomizationAttachment
AWS::Connect::RoutingProfile
AWS::DataBrew::Dataset
AWS::DataBrew::Job
AWS::DataBrew::Project
AWS::DataBrew::Recipe
AWS::DataBrew::Ruleset
AWS::DataBrew::Schedule
AWS::Deadline::LicenseEndpoint
AWS::Deadline::QueueEnvironment
AWS::Detective::OrganizationAdmin
AWS::GameLift::ContainerFleet
AWS::GameLift::ContainerGroupDefinition
AWS::GameLift::GameServerGroup
AWS::GameLift::Location
AWS::IoT::TopicRule
AWS::Omics::ReferenceStore
AWS::PCAConnectorAD::Template
AWS::PCAConnectorSCEP::Challenge
AWS::ResourceExplorer2::View
AWS::ResourceGroups::Group
AWS::Scheduler::ScheduleGroup
AWS::VerifiedPermissions::IdentitySource

Published: 2026-03-02 16:00:00+00:00

AWS announces pricing for VPC Encryption Controls

💰
Pricing Change
TL;DR: AWS announces pricing for VPC Encryption Controls starting March 1, 2026, transitioning from free preview to paid feature.
AWS Services: VPC, Transit Gateway

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/vpc-encryption-controls-pricing/

AWS is launching pricing for VPC Encryption Controls, a security and compliance feature that enables you to audit and enforce encryption in transit for all traffic flows within and across Virtual Private Clouds (VPCs) in a Region. VPC Encryption Controls can be enabled in two modes: Monitor mode detects the presence of any unencrypted traffic within your VPCs, and Enforce mode ensures all data in transit is encrypted and prevents the launch of any resources that would allow unencrypted traffic within your VPC.

Starting March 1, 2026, VPC Encryption Controls will transition from a free preview to a paid feature. You will be charged a fixed hourly rate for every non-empty VPC (a VPC that has network interfaces in it) that has Encryption Controls enabled in either Monitor or Enforce mode. There will be no charge for empty VPCs that have Encryption Controls enabled. When you enable encryption support on a Transit Gateway, standard VPC Encryption Controls charges apply to all VPCs attached to that Transit Gateway, irrespective of their Encryption Controls mode (Monitor, Enforce, or off) and even if they are empty.
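The per-VPC hourly model above lends itself to a quick back-of-envelope estimate. The rate below is a placeholder for illustration, not a published price; only non-empty VPCs (those with network interfaces) are billed.

```python
# Back-of-envelope cost sketch for the per-VPC hourly pricing model.
# HYPOTHETICAL_RATE_PER_VPC_HOUR is illustrative only; see the VPC pricing
# page for actual regional rates.
HYPOTHETICAL_RATE_PER_VPC_HOUR = 0.05   # USD, placeholder

def monthly_cost(non_empty_vpcs: int, hours: int = 730) -> float:
    """Estimate a month's charge for VPCs with Encryption Controls enabled."""
    return non_empty_vpcs * hours * HYPOTHETICAL_RATE_PER_VPC_HOUR

print(round(monthly_cost(10), 2))  # 365.0 (10 VPCs at the placeholder rate)
```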

To learn more about VPC Encryption Controls and view detailed regional pricing, visit the VPC Encryption Controls documentation and VPC pricing page.

Published: 2026-03-01 23:41:00+00:00

AWS Elemental MediaLive Now Supports SRT Listener Mode

🚀
New Service Feature Introduction
TL;DR: AWS Elemental MediaLive now supports SRT Listener mode for inputs and outputs, simplifying network setup by eliminating firewall configurations.
AWS Services: AWS Elemental MediaLive

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-medialive-introduces-srt-listener/

AWS Elemental MediaLive now supports Secure Reliable Transport (SRT) Listener mode for both inputs and outputs. With SRT Listener mode, MediaLive waits for connections rather than initiating them. Upstream sources push live video directly to MediaLive, and downstream systems pull encoded streams on demand. This simplifies network setup by removing the need for complex firewall configurations or static, publicly accessible IP addresses on the source or destination side. SRT Listener mode complements MediaLive's existing SRT Caller mode, giving you full control over which side of the connection initiates the SRT handshake.

SRT Listener mode enables flexible contribution and distribution workflows. On the input side, you can push streams from on-premises encoders or remote production sites, including MediaLive Anywhere deployments, directly to MediaLive in the cloud without coordinating firewall changes with your network team. On the output side, downstream distribution partners can connect to MediaLive and pull encoded streams when ready, without requiring MediaLive to initiate outbound connections. Both SRT Listener inputs and outputs support configurable latency settings and mandatory AES encryption to help ensure content security.

SRT Listener mode is available in all AWS Regions where AWS Elemental MediaLive is offered. To get started, see Setting up an SRT Listener input and Creating SRT outputs in listener mode in the AWS Elemental MediaLive User Guide.

Published: 2026-02-28 00:14:00+00:00

Amazon Lightsail expands blueprint selection with a new WordPress blueprint

🚀
New Service Feature Introduction
TL;DR: Amazon Lightsail introduces new WordPress blueprint with guided setup wizard and IMDSv2 enforcement by default.
AWS Services: Amazon Lightsail

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/wordpress-blueprint-lightsail/

Amazon Lightsail now offers a new WordPress blueprint, making it easier than ever to launch and manage a WordPress website in the cloud. With just a few clicks, you can create a Lightsail virtual private server (VPS) preinstalled with WordPress, and follow a guided setup wizard to get your site fully configured and running in minutes. This new blueprint has Instance Metadata Service Version 2 (IMDSv2) enforced by default.

With Lightsail, you can easily get started on the cloud by choosing a blueprint and an instance bundle to build your web application. Lightsail instance bundles include instances preinstalled with your preferred operating system, storage, and monthly data transfer allowance, giving you everything you need to get up and running quickly. The new WordPress blueprint includes a step-by-step setup workflow that walks you through connecting a custom domain, configuring DNS, attaching a static IP address, and enabling HTTPS encryption using a free Let's Encrypt SSL/TLS certificate — all from within the Lightsail console.

This new blueprint is now available in all AWS Regions where Lightsail is available. For more information on blueprints supported on Lightsail, see Lightsail documentation. For more information on pricing, or to get started with your free trial, click here.

Published: 2026-02-27 23:28:00+00:00

EC2 Image Builder enhances lifecycle policies with wildcard support and simplified IAM

🎉
Service Feature Change
TL;DR: EC2 Image Builder adds wildcard support for lifecycle policies and simplified IAM role creation with pre-populated permissions.
AWS Services: EC2 Image Builder, Amazon Machine Images, IAM

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/ec2-image-builder-lifecycle-enhancements/

EC2 Image Builder, a service that helps you automate the creation, distribution, and management of customized Amazon Machine Images, now supports wildcard patterns in lifecycle policies and simplifies IAM role creation. You can now use wildcard patterns to manage images from multiple recipes within a single lifecycle policy, and create IAM roles with pre-populated default permissions directly from the console.

Previously, you had to create separate lifecycle policies for each new recipe or manually select individual recipes, making it difficult to scale as new recipes were added. Now with wildcard pattern support, you can specify patterns like my-recipe-1.x.x to automatically apply lifecycle policies to all matching recipes—including new recipes created in the future. Additionally, creating IAM roles for lifecycle management previously required manually configuring the required permissions. Now when creating a new role in the console, EC2 Image Builder automatically populates the required default permissions, reducing setup time and potential configuration errors. Together, these capabilities simplify onboarding and ongoing maintenance, enabling you to manage your image lifecycle at scale with less operational overhead.

Lifecycle policies are available in all commercial AWS Regions. To learn more, refer to the documentation.
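To make the wildcard semantics concrete, here is an illustrative re-implementation of how a pattern like `my-recipe-1.x.x` selects recipe versions, where each `x` stands for any version component. This is a toy sketch of the announced behavior, not EC2 Image Builder's own matching code.

```python
def matches_lifecycle_pattern(pattern: str, recipe_version: str) -> bool:
    """Illustrative matcher: 'x' in a dotted component matches any value,
    so 'my-recipe-1.x.x' covers my-recipe-1.0.0, my-recipe-1.2.5, etc."""
    p_parts = pattern.split(".")
    v_parts = recipe_version.split(".")
    if len(p_parts) != len(v_parts):
        return False
    return all(p == "x" or p == v for p, v in zip(p_parts, v_parts))
```

A single lifecycle policy using such a pattern also covers recipes created after the policy, which is what removes the per-recipe policy maintenance the announcement describes.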

Published: 2026-02-27 22:10:00+00:00

ARC Region switch adds three new capabilities: post-recovery workflows, RDS orchestration, and AWS provider support for Terraform

🎉
Service Feature Change
TL;DR: Amazon Application Recovery Controller Region switch adds post-recovery workflows, RDS orchestration blocks, and Terraform support for enhanced disaster recovery automation.
AWS Services: Amazon Application Recovery Controller, Amazon RDS, AWS Lambda

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/arc-region-switch-post-recovery-rdsblock/

Amazon Application Recovery Controller (ARC) Region switch helps customers orchestrate the failover of their multi-Region applications to achieve a bounded recovery time in the event of a Regional impairment. It automates multi-Region disaster recovery, reducing engineering effort and eliminating operational overhead when recovering applications across multiple AWS accounts and Regions. Region switch now includes three new capabilities: post-recovery workflows, native RDS execution blocks, and AWS provider for Terraform support.

Post-recovery workflows. Disaster recovery doesn't end when customers fail over to a standby Region. After orchestrating a failover or failback, customers must prepare the other Region for the next recovery event. Previously, this required manual coordination of scaling, recreating read replicas, and validating configurations. Post-recovery workflows help customers automate these preparation steps. With this launch, post-recovery workflows support the custom action Lambda execution block, Amazon RDS create read replica execution block, ARC Region switch plan execution block, and the manual approval execution block. Customers can create read replicas, run custom logic via Lambda functions, add manual approval gates, and embed child plans for complex orchestration as part of post-recovery. Post-recovery workflows are available for active/passive deployments and can be triggered manually.

RDS execution blocks. Coordinating Amazon RDS database recovery during Regional failover requires manual steps to promote read replicas and recreate replication, introducing delays and errors. Region switch now natively supports two Amazon RDS execution blocks that automate RDS recovery orchestration. The RDS promote read replica execution block orchestrates promotion of a read replica to a standalone instance during failover. The RDS create read replica execution block orchestrates replica creation as part of post-recovery workflows.

AWS provider for Terraform support. Region switch is now supported by the AWS provider for Terraform, enabling customers to manage disaster recovery plans as Infrastructure-as-Code and integrate them into CI/CD pipelines alongside application deployments.


To learn more about AWS provider support for Terraform, visit the Terraform provider documentation. To see post-recovery workflows in action, read the post-recovery workflow tutorial. To get started with Region switch, read our launch blog or documentation.

Published: 2026-02-27 22:00:00+00:00

AWS Network Firewall now supports firewall state change notifications through Amazon EventBridge

🚀
New Service Feature Introduction
TL;DR: AWS Network Firewall now integrates with Amazon EventBridge for real-time firewall state change notifications and configuration updates.
AWS Services: AWS Network Firewall, Amazon EventBridge, Amazon SNS

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/firewall-state-change-notifications/

AWS Network Firewall now integrates with Amazon EventBridge to provide real-time notifications for firewall state changes and configuration updates. This new capability enables you to monitor critical firewall operations including firewall configuration updates and endpoint status modifications across your network security infrastructure. You gain immediate visibility into changes affecting AWS Managed Rules, Partner Managed Rules, and firewall configurations.

With EventBridge integration, you gain enhanced visibility into your firewall operations in real-time. You can build automated workflows to send notifications through Amazon SNS, create tickets in your IT service management (ITSM) systems, or integrate with third-party security information and event management (SIEM) solutions. This integration helps you maintain better operational awareness of your network security infrastructure and respond quickly to configuration changes or potential issues.

AWS Network Firewall state change notifications through Amazon EventBridge are available in all AWS Regions where AWS Network Firewall and Amazon EventBridge are currently available.

To learn more about AWS Network Firewall EventBridge integration, visit the AWS Network Firewall documentation. For information about Amazon EventBridge, see the Amazon EventBridge documentation.
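As a sketch of the automated-workflow pattern described above, the snippet below builds an EventBridge rule request for firewall state-change events. The `source` and `detail-type` strings are assumptions based on the announcement; confirm the exact values in the Network Firewall documentation before deploying.

```python
import json

# Hypothetical EventBridge event pattern for AWS Network Firewall state
# changes. The source and detail-type values are assumptions, not
# confirmed strings from the service documentation.
event_pattern = {
    "source": ["aws.network-firewall"],                          # assumed
    "detail-type": ["Network Firewall Firewall State Change"],   # assumed
}

rule_request = {
    "Name": "network-firewall-state-changes",
    "EventPattern": json.dumps(event_pattern),
    "State": "ENABLED",
}

# With credentials: boto3.client("events").put_rule(**rule_request), then
# put_targets(...) to fan events out to an SNS topic, an ITSM webhook, or
# a SIEM ingestion endpoint.
```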

Published: 2026-02-27 19:00:00+00:00

Amazon Bedrock batch inference now supports the Converse API format

🎉
Service Feature Change
TL;DR: Amazon Bedrock batch inference now supports Converse API format for unified model-agnostic input across batch workloads.
AWS Services: Amazon Bedrock

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-bedrock-batch-inference-supports-converse-api-format/

Amazon Bedrock batch inference now supports the Converse API as a model invocation type, enabling you to use a consistent, model-agnostic input format for your batch workloads.

Previously, batch inference required model-specific request formats using the InvokeModel API. Now, when creating a batch inference job, you can select Converse as the model invocation type and structure your input data using the standard Converse API request format. Output for Converse batch jobs follows the Converse API response format. With this feature, you can use the same unified request format for both real-time and batch inference, simplifying prompt management and reducing the effort needed to switch between models. You can configure the Converse model invocation type through both the Amazon Bedrock console and the API.

This capability is available in all AWS Regions that support Amazon Bedrock batch inference. To get started, see Create a batch inference job and Format and upload your batch inference data in the Amazon Bedrock User Guide.
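To illustrate the new input format, here is a single batch input record structured as a Converse request. The `recordId`/`modelInput` envelope follows Bedrock's documented batch JSONL layout and the `messages` shape follows the Converse API, but treat this as a sketch and verify the exact schema against the Amazon Bedrock User Guide.

```python
import json

# One JSONL record for a Converse-format batch inference job. The prompt
# text and recordId are placeholders.
record = {
    "recordId": "CALL0000001",
    "modelInput": {
        "messages": [
            {"role": "user", "content": [{"text": "Summarize this support ticket: ..."}]}
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    },
}

jsonl_line = json.dumps(record)  # the input file holds one such record per line
```

Because the same `messages` structure works with the real-time Converse API, prompts can be reused across real-time and batch paths without per-model rewrites.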

Published: 2026-02-27 19:00:00+00:00

Amazon CloudWatch logs centralization rules now support customizable destination log group structure

🎉
Service Feature Change
TL;DR: CloudWatch logs centralization rules now support customizable destination log group structure using attributes for better organization.
AWS Services: Amazon CloudWatch, CloudWatch Logs, AWS Organizations

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/cloudwatch-centralization-custom-groups/

Amazon CloudWatch now supports customizing destination log group names when creating CloudWatch log centralization rules. Organizations managing logs across multiple accounts can now use attributes to organize centralized logs into meaningful hierarchies — by account ID, region, organizational unit, or other AWS Organizations metadata — that match how their organization operates and what their compliance requirements demand.

You can define a destination log group name structure using attributes that CloudWatch Logs automatically replaces with actual values when logs are copied. For example, using the pattern ${source.accountId}/${source.region}/${source.logGroup} creates destination log groups like 123456789012/us-east-1/cloudtrail/managementevent, making it easy to identify which account and Region logs originated from. Available attributes include source account ID, Region, log group name, organization ID, organizational unit ID, root ID, and the full organizational path.
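CloudWatch Logs performs this substitution server-side; the sketch below only reproduces the naming outcome so you can preview destination names. Since dots aren't valid in Python `string.Template` identifiers, the documented attribute names are mapped onto underscore keys first.

```python
from string import Template

def expand_destination(pattern: str, attrs: dict) -> str:
    """Illustrative local expansion of a destination log group pattern
    such as '${source.accountId}/${source.region}/${source.logGroup}'."""
    # Map 'source.accountId' -> 'source_accountId' so Template can handle it.
    safe = {k.replace(".", "_"): v for k, v in attrs.items()}
    return Template(pattern.replace("source.", "source_")).substitute(safe)

name = expand_destination(
    "${source.accountId}/${source.region}/${source.logGroup}",
    {"source.accountId": "123456789012",
     "source.region": "us-east-1",
     "source.logGroup": "cloudtrail/managementevent"},
)
# name == "123456789012/us-east-1/cloudtrail/managementevent"
```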

Customizable destination log group names are available in all Regions where centralization rules are supported.

Customers can use centralization rules to centralize one copy of logs free of ingestion charges. Additional copies are charged at $0.05/GB of logs centralized (the backup-Region feature counts as an additional copy), and storage charges apply. To learn more, visit the CloudWatch Logs Centralization documentation.

Published: 2026-02-27 18:50:00+00:00

AWS Resource Access Manager now supports maintaining shares when accounts change organizations

🎉
Service Feature Change
TL;DR: AWS RAM now supports maintaining resource shares when accounts change organizations with new RetainSharingOnAccountLeaveOrganization parameter.
AWS Services: AWS Resource Access Manager, AWS Organizations, Route53 Resolver Rules, Transit Gateways, IPAM, Service Control Policies

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-resource-access-manager/

AWS Resource Access Manager (RAM) now supports a resource share configuration that allows you to maintain resource sharing continuity when accounts move between AWS Organizations. With the new RetainSharingOnAccountLeaveOrganization parameter and corresponding ram:RetainSharingOnAccountLeaveOrganization condition key, security administrators can configure resource shares to retain access when accounts leave the organization and enforce consistent policies across their organization using Service Control Policies (SCPs).

This capability helps organizations undergoing mergers, acquisitions, or restructuring maintain access to shared resources like Route53 Resolver Rules, Transit Gateways, and IPAM pools without disruption. Security teams can use SCPs to enforce the RetainSharingOnAccountLeaveOrganization configuration organization-wide. When enabled, RAM treats organization accounts as external accounts, requiring explicit invitation acceptance and preserving resource access during account transitions between organizations.

This feature is available in all AWS commercial Regions at no additional cost. To learn more about resource share configurations, see the AWS RAM documentation or visit the AWS RAM product page.
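As a sketch of the SCP enforcement pattern described above, the policy below denies creating or updating a resource share unless retention is enabled. The `ram:RetainSharingOnAccountLeaveOrganization` condition key comes from the announcement; the action list and `Bool` matching are assumptions to validate against the AWS RAM documentation.

```python
import json

# Hypothetical SCP that blocks resource shares unless they are configured
# to retain sharing when an account leaves the organization.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ram:CreateResourceShare", "ram:UpdateResourceShare"],  # assumed scope
        "Resource": "*",
        "Condition": {
            "Bool": {"ram:RetainSharingOnAccountLeaveOrganization": "false"}
        },
    }],
}

policy_document = json.dumps(scp)  # attach via AWS Organizations
```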

Published: 2026-02-27 17:35:00+00:00

Amazon OpenSearch Service adds new insights for improved cluster stability

🚀
New Service Feature Introduction
TL;DR: Amazon OpenSearch Service adds two new Cluster Insights: Cluster Overload and Suboptimal Sharding Strategy for improved cluster monitoring.
AWS Services: Amazon OpenSearch Service

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-opensearch-service-adds-new-insights-improved-cluster-stability/

Amazon OpenSearch Service has enhanced Cluster Insights with two new insights — Cluster Overload and Suboptimal Sharding Strategy. Suboptimal Sharding Strategy provides instant visibility into shard imbalances that cause uneven workload distribution, while Cluster Overload surfaces elevated cluster resource utilization that can lead to request throttling or rejections. Both insights come with details of affected resources along with actionable mitigation recommendations.

Previously, identifying resource constraints and shard imbalances required manually correlating multiple metrics and logs, making it difficult to detect issues early. With these new insights, you can proactively monitor cluster health and take timely action.

Suboptimal Sharding Strategy detects shard imbalances caused by indices with too few shards relative to the number of data nodes, or by shards carrying disproportionately large amounts of data compared to others. It identifies the root cause of uneven workload distribution and provides recommendations to help you achieve optimal shard distribution for improved query performance and resource utilization. Similarly, Cluster Overload helps you identify elevated resource utilization, including CPU, memory, disk I/O, disk throughput, and disk utilization that can potentially lead to request throttling or rejections. It also provides scale-up recommendations so you can take timely action to protect your critical workloads.

These new insights are available at no additional cost for OpenSearch version 2.17 or later in all Regions where the OpenSearch UI is available. See the complete list of supported Regions here. To learn more, visit the Cluster Insights documentation or view the complete catalog of available insights.
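To make the "too few shards relative to data nodes" case concrete, here is a toy heuristic showing why it leaves capacity idle. This is purely illustrative and is not the detection logic the Suboptimal Sharding Strategy insight uses.

```python
def underused_nodes(primary_shards: int, data_nodes: int) -> int:
    """Toy illustration: with fewer primary shards than data nodes, some
    nodes hold no primary shard of the index (replicas ignored), so the
    index's write workload concentrates on a subset of the cluster."""
    return max(0, data_nodes - primary_shards)

# An index with 2 primary shards on a 6-node domain leaves 4 nodes without
# a primary shard of that index.
idle = underused_nodes(primary_shards=2, data_nodes=6)
```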

Published: 2026-02-27 10:49:00+00:00

Amazon Bedrock announces OpenAI-compatible Projects API

🚀
New Service Feature Introduction
TL;DR: Amazon Bedrock now supports OpenAI-compatible Projects API in Mantle inference engine for better isolation and access control.
AWS Services: Amazon Bedrock

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-bedrock-projects-api-mantle-inference-engine/

Amazon Bedrock now supports an OpenAI-compatible Projects API in its Mantle inference engine. Amazon Bedrock is a fully managed service that offers a broad selection of best-in-class foundation models from leading AI companies like Anthropic, Meta, and OpenAI, along with a broad set of specialized developer tools that make it easy to build and scale compelling generative AI applications. Mantle is Amazon Bedrock's distributed inference engine for large-scale model serving that supports OpenAI-compatible APIs.

With Projects API, customers who have more than one application, environment, or team can now create individual projects to achieve better isolation across all of them. You can assign different IAM-based access control to each project and add tags to each project for better cost visibility.

Projects are available for all customers using the OpenAI-compatible APIs (the Responses API and the Chat Completions API) through the Mantle inference engine in Amazon Bedrock. There is no additional charge for using the Projects API. You pay only for the underlying model inference you consume. To get started with the Projects API in Amazon Bedrock, visit the Amazon Bedrock documentation.
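The announcement doesn't include a request example, so here is a hedged sketch of a Chat Completions request scoped to a project. The endpoint URL, project header name, and model ID are all assumptions (the header mirrors the convention in OpenAI's own SDK); check the Amazon Bedrock documentation for the real values.

```python
import json

# All identifiers below are placeholders/assumptions, not documented values.
endpoint = "https://bedrock-example.us-east-1.amazonaws.com/v1/chat/completions"  # assumed URL
headers = {
    "Content-Type": "application/json",
    "OpenAI-Project": "proj_team_a",  # assumed header for project scoping
}
payload = {
    "model": "openai.gpt-oss-120b",   # placeholder model ID
    "messages": [{"role": "user", "content": "Hello from team A's project"}],
}

body = json.dumps(payload)  # would be POSTed to `endpoint` with SigV4 or API-key auth
```

Scoping each application or team to its own project is what enables the per-project IAM access control and tag-based cost visibility described above.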

Published: 2026-02-26 23:06:00+00:00

Amazon SageMaker HyperPod now supports API-driven Slurm configuration

🎉
Service Feature Change
TL;DR: Amazon SageMaker HyperPod now supports API-driven Slurm configuration for defining cluster topology and filesystem configurations directly through APIs or Console.
AWS Services: Amazon SageMaker HyperPod, FSx for Lustre, FSx for OpenZFS, AWS Management Console, AWS CLI, AWS CloudFormation, AWS SDKs

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-sagemaker-hyperpod-slurm/

Amazon SageMaker HyperPod now supports API-driven Slurm configuration, enabling you to define Slurm topology and shared filesystem configurations directly in the cluster create and update APIs or through the AWS Console. SageMaker HyperPod helps you provision resilient clusters for running machine learning (ML) workloads and developing state-of-the-art models such as large language models (LLMs), diffusion models, and foundation models (FMs).

With this new API-driven configuration, you can now specify Slurm node types including Controller, Login, and Compute for cluster instance groups; instance group to partition mappings; and FSx for Lustre and FSx for OpenZFS filesystem mounts per instance group directly in the cluster API definition or through the advanced configuration section in the AWS Console. When you modify partition-node mappings directly in Slurm's native configuration files to fine-tune cluster resource assignments, Slurm's partition-node configurations can drift from HyperPod's view. A new cluster-level SlurmConfigStrategy helps you manage drift with three options: Managed, Overwrite, and Merge. The Managed strategy allows you to manage instance group to partition mappings completely via the API or Console, and automatically detects drift in partition-to-node mappings during scale-up or scale-down operations. When drift is detected, cluster updates are paused until you resolve it by switching to the Overwrite strategy to force API-defined mappings, the Merge strategy to preserve manual customizations, or by directly updating Slurm configurations to align with HyperPod.

API-driven Slurm configuration is available in all AWS Regions where SageMaker HyperPod is available. To get started, you can use the AWS Management Console, AWS CLI, AWS CloudFormation, or AWS SDKs. For more information, see the Amazon SageMaker HyperPod documentation for creating clusters using the Console or the CLI, and the API reference for CreateCluster and UpdateCluster.
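The configuration described above might look roughly like the sketch below: node types per instance group, partition mappings, filesystem mounts, and the drift-handling strategy. The field names are assumptions inferred from the announcement, not the published CreateCluster schema; consult the API reference for the real shape.

```python
# Hypothetical Slurm section of a HyperPod CreateCluster request.
# Every field name here is an assumption based on the announcement text.
slurm_config = {
    "SlurmConfigStrategy": "Managed",  # or "Overwrite" / "Merge" to resolve drift
    "InstanceGroups": [
        {"Name": "controller", "SlurmNodeType": "Controller"},
        {"Name": "login", "SlurmNodeType": "Login"},
        {
            "Name": "gpu-workers",
            "SlurmNodeType": "Compute",
            "Partitions": ["train"],  # instance-group-to-partition mapping
            "FileSystemMounts": [
                {"Type": "FSxLustre", "MountPoint": "/fsx"},
            ],
        },
    ],
}
```

Under the Managed strategy, editing partition-node mappings directly in Slurm's config files would be detected as drift on the next scale operation, pausing updates until you pick Overwrite, Merge, or realign the files.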

Published: 2026-02-26 22:58:00+00:00

Amazon ECS Managed Instances now integrates with Amazon EC2 Capacity Reservations

🚀
New Service Feature Introduction
TL;DR: Amazon ECS Managed Instances now integrates with EC2 Capacity Reservations for predictable workload availability and cost efficiency.
AWS Services: Amazon ECS, Amazon Elastic Container Service, Amazon EC2, AWS Management Console, AWS CLI, AWS CloudFormation, AWS SDKs

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/ecs-mi-ec2-capacity-reservations/

Amazon Elastic Container Service (Amazon ECS) Managed Instances now integrates with Amazon EC2 Capacity Reservations, enabling you to leverage your reserved capacity for predictable workload availability, while ECS handles all infrastructure management. This integration helps you balance reliable capacity scaling with cost efficiency, helping achieve high availability for mission‑critical workloads.

Amazon ECS Managed Instances is a fully managed compute option designed to eliminate infrastructure management overhead, dynamically scale EC2 instances to match your workload requirements, and continuously optimize task placement to reduce infrastructure costs. With today’s launch, you can configure your ECS Managed Instances capacity providers to use capacity reservations by setting the capacityOptionType parameter to reserved, in addition to the existing spot and on-demand options. You can also specify reservation preferences to optimize cost and availability: use reservations-only to launch EC2 instances exclusively in reserved capacity for maximum predictability, reservations-first to prefer reservations while maintaining flexibility to fall back to on-demand capacity when needed, or reservations-excluded to prevent your capacity provider from using reservations altogether.

To get started, you can use the AWS Management Console, AWS CLI, AWS CloudFormation, or AWS SDKs to configure your ECS Managed Instances capacity provider by choosing capacityOptionType=reserved and providing a capacity reservation group and reservation strategy. This feature is now available in all AWS Regions. For more details, refer to the documentation.
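A capacity provider configuration using the parameters named above might be sketched as follows. `capacityOptionType` and the three reservation preferences come from the announcement; the surrounding structure and the reservation group ARN are assumptions, so consult the ECS API reference for the exact request shape.

```python
# Hypothetical ECS Managed Instances capacity provider configuration.
# Structure around the announced parameters is an assumption.
capacity_provider = {
    "name": "mi-reserved-cp",
    "managedInstancesConfig": {
        "capacityOptionType": "reserved",               # vs. "spot" / "on-demand"
        "reservationPreference": "reservations-first",  # fall back to on-demand if needed
        "capacityReservationGroup": "arn:aws:resource-groups:...:group/my-crg",  # placeholder
    },
}
```

`reservations-only` trades flexibility for maximum predictability, while `reservations-first` keeps availability by bursting into on-demand when the reservation group is exhausted.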

Published: 2026-02-26 22:00:00+00:00

AWS Marketplace now supports multiple purchases of SaaS & Professional Services products from the same account

🚀
New Service Feature Introduction
TL;DR: AWS Marketplace now supports Concurrent Agreements, allowing multiple purchases of same SaaS/Professional Services products within single AWS account.
AWS Services: AWS Marketplace, EventBridge

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/concurrent-agreements-february/

AWS Marketplace now supports Concurrent Agreements for SaaS and Professional Services products, enabling buyers to make multiple purchases for the same product within a single AWS account. Previously, buyers could only maintain one active agreement per product per AWS account, requiring sellers to use workarounds to support expansion deals. Concurrent Agreements removes this constraint, allowing different business units to procure independently with their own negotiated terms and pricing.

Both buyers and sellers benefit from the flexibility Concurrent Agreements provides. Buyers can accept multiple offers for the same product without disrupting existing agreements, supporting multi-team procurement within centralized AWS accounts, mid-term expansions, and repeat purchases. Sellers can close multi-business unit deals that couldn't happen before, transact expansions immediately instead of waiting for renewal cycles, and eliminate the operational overhead of managing workarounds. 

Concurrent Agreements is enabled by default for all Professional Services listings starting today, with no seller action required. For SaaS listings, sellers must update their AWS Marketplace integration to handle multiple active subscriptions, including updating subscription notifications to use EventBridge and updating entitlement and metering APIs. Starting June 1, 2026, support for Concurrent Agreements will be required for new SaaS products. Sellers who have completed the integration work can opt in to enable Concurrent Agreements for their SaaS products now. 

This capability is available in all AWS Regions where AWS Marketplace is supported. Concurrent Agreements purchasing is available on SaaS products where sellers have completed the integration, and is enabled by default for all Professional Services listings. To learn more about enabling Concurrent Agreements as a seller of SaaS products, review the Concurrent Agreements integration lab.

Published: 2026-02-26 21:00:00+00:00

Amazon CloudWatch now provides lock contention diagnostics for Amazon RDS for PostgreSQL

🚀
New Service Feature Introduction
TL;DR: CloudWatch Database Insights adds lock contention diagnostics for RDS PostgreSQL to identify blocking sessions and historical issues.
AWS Services: Amazon CloudWatch, Amazon RDS, CloudWatch Database Insights

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-cloudwatch-lock-contention-diagnostics-rds-postgresql/

Amazon CloudWatch Database Insights now provides lock contention diagnostics for Amazon RDS for PostgreSQL instances. This feature helps you identify the root cause behind both ongoing and historical lock contention issues within minutes. The lock contention diagnostics feature is available exclusively in the Advanced mode of CloudWatch Database Insights.

With this launch, you can visualize a locking condition in the Database Insights console, which shows the relationship between blocking and waiting sessions. The visualization helps you quickly identify the dominating sessions, queries, or objects causing lock contention. Additionally, this feature persists historical locking data for 15 months, allowing you to analyze and investigate historical locking conditions. You no longer need to manually run custom queries or rely on application logs to diagnose lock contention issues, streamlining the troubleshooting process.

You can get started with this feature by enabling the Advanced mode of CloudWatch Database Insights on your Amazon RDS for PostgreSQL clusters using the RDS console, AWS APIs, or the AWS SDK. CloudWatch Database Insights delivers database health monitoring aggregated at the fleet level, as well as instance-level dashboards for detailed database and SQL query analysis.
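Enabling Advanced mode from code might look like the sketch below. `DatabaseInsightsMode` is an existing ModifyDBInstance parameter (with `standard` and `advanced` values); the instance identifier is a placeholder, and prerequisites such as Performance Insights settings should be checked in the RDS documentation.

```python
# Sketch: enable Advanced mode of CloudWatch Database Insights on an
# RDS for PostgreSQL instance. The identifier is a placeholder.
modify_request = {
    "DBInstanceIdentifier": "my-postgres-instance",
    "DatabaseInsightsMode": "advanced",
    "ApplyImmediately": True,
}

# With credentials: boto3.client("rds").modify_db_instance(**modify_request)
```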

CloudWatch Database Insights is available in all public AWS Regions and offers vCPU-based pricing – see the pricing page for details. For further information, visit the Database Insights documentation.

Published: 2026-02-26 18:00:00+00:00

Amazon Cognito enhances client secret management with secret rotation and custom secrets

🎉
Service Feature Change
TL;DR: Amazon Cognito adds client secret rotation and custom client secrets for enhanced security and lifecycle management.
AWS Services: Amazon Cognito, AWS Management Console, AWS Command Line Interface, AWS Software Development Kits, AWS CloudFormation

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-cognito-client-secret-lifecycle/

Amazon Cognito enhances client secret lifecycle management for app clients of Cognito user pools by adding client secret rotation and support for custom client secrets. Cognito helps you implement secure sign-in and access control for users, AI agents, and microservices in minutes, and a Cognito app client is a configuration that interacts with one mobile or web application that authenticates with Cognito. Previously, Cognito automatically generated all app client secrets. With this launch, in addition to the automatically generated secrets, you have the option to bring your own custom client secrets for new or existing app clients. Additionally, you can now rotate client secrets on-demand and maintain up to two active client secrets per app client.

The new client secret lifecycle management capabilities address needs for organizations with periodic credential rotation requirements, companies improving security posture, and enterprises migrating from other authentication systems to Cognito. Maintaining two active secrets per app client allows gradual transition to the new secret without application downtime.

Client secret rotation and custom client secrets are available in all AWS Regions where Amazon Cognito user pools are available. To learn more, see the Amazon Cognito Developer Guide. You can get started using the new capabilities through the AWS Management Console, AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), or AWS CloudFormation.
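A zero-downtime rollover using the new capabilities might be sketched as follows. Generating a high-entropy secret is standard Python; the update payload and its field names are hypothetical, since the announcement doesn't specify the API shape for custom secrets.

```python
import secrets

# Generate a high-entropy custom client secret (standard Python).
new_secret = secrets.token_urlsafe(48)

# Hypothetical update payload -- field names are assumptions, not
# Cognito's documented API. Because up to two secrets stay active per
# app client, deployed apps keep authenticating with the old secret
# while they migrate to new_secret.
rotation_request = {
    "UserPoolId": "us-east-1_EXAMPLE",      # placeholder
    "ClientId": "example-client-id",        # placeholder
    "CustomClientSecret": new_secret,       # assumed field name
}
```

Once all callers use the new secret, the old one can be retired, completing the rotation without downtime.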

Published: 2026-02-26 17:00:00+00:00

Application Performance Monitoring Enabled by Default in CloudWatch Observability EKS Add-on

🎉
Service Feature Change
TL;DR: CloudWatch Observability EKS add-on v5.0.0 now automatically enables Application Signals APM by default for all installations.
AWS Services: Amazon CloudWatch, Amazon Elastic Kubernetes Service, EKS, CloudWatch Application Signals, Enhanced Container Insights, Container Logs

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/application-performance-monitoring-cloudwatch-eks/

Today, Amazon CloudWatch Observability EKS add-on version 5.0.0 automatically enables CloudWatch Application Signals — Amazon's application performance monitoring (APM) capability — for all new installations and upgrades, eliminating the previous manual opt-in step. Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that simplifies running containerized applications at scale. The CloudWatch Observability add-on for EKS extends native Kubernetes observability by integrating Enhanced Container Insights, Container Logs, and now Application Signals directly into your clusters. The Observability add-on automatically instruments your services to collect traces, metrics, and logs for a unified, application-centric view. For DevOps engineers, platform teams, and developers who needed application-level visibility into their EKS-hosted services — such as service latency, error rates, and request traces — this change closes that gap by making those capabilities available out of the box, so teams can focus on building and operating applications rather than configuring observability tooling.

With Application Signals now enabled by default, customers immediately benefit from automatic service instrumentation — no manual configuration or Kubernetes workload annotations required — along with pre-built dashboards that surface application performance metrics and a rich troubleshooting experience that goes beyond infrastructure-level data to help teams quickly identify and resolve issues. For example, a platform team managing a microservices application on EKS can now detect latency spikes or error rate increases at the service level without any additional setup, accelerating root cause analysis during incidents.

This feature is available in all commercial AWS Regions where Amazon CloudWatch Application Signals is available. To get started, refer to the Amazon CloudWatch Application Signals documentation and upgrade to version 5.0.0 of the add-on.

Published: 2026-02-26 16:43:00+00:00

AWS Lambda Durable Execution SDK for Java now available in Developer Preview

🚀
New Service Feature Introduction
TL;DR: AWS Lambda Durable Execution SDK for Java now available in developer preview for building resilient multi-step applications.
AWS Services: AWS Lambda

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/lambda-durable-execution-java-preview/

Today, AWS announces the developer preview of the AWS Lambda Durable Execution SDK for Java. With this SDK, developers can build resilient multi-step applications like order processing pipelines, AI-assisted workflows, and human-in-the-loop approvals using Lambda durable functions, without implementing custom progress tracking or integrating external orchestration services.

Lambda durable functions extend Lambda's event-driven programming model with operations that checkpoint progress automatically and pause execution for up to a year when waiting on external events. The new Durable Execution SDK for Java provides an idiomatic experience for building with durable functions and is compatible with Java 17+. This preview includes steps for progress tracking, waits for efficient suspension, and durable futures for callback-based workflows.

To get started, see the Lambda durable functions developer guide and the AWS Lambda Durable Execution SDK for Java on GitHub. To learn more about Lambda durable functions, visit the product page.

On-demand functions are not billed for duration while paused. For pricing details, see AWS Lambda Pricing. For information about AWS Regions where Lambda durable functions are available, see the AWS Regional Services List.

Published: 2026-02-26 07:00:00+00:00

AWS Security Agent adds support for penetration tests on shared VPCs across AWS accounts

🚀
New Service Feature Introduction
TL;DR: AWS Security Agent now supports penetration testing on shared VPCs across multiple AWS accounts within organizations.
AWS Services: AWS Security Agent, Virtual Private Cloud, AWS Resource Access Manager

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-security-agent-adds-penetration-tests-shared/

AWS Security Agent now enables customers to run penetration tests against Virtual Private Cloud (VPC) resources shared from other AWS accounts within the same organization. This new capability allows security teams to perform comprehensive security assessments across their multi-account environments using AWS Security Agent. By leveraging AWS Resource Access Manager (RAM), customers can securely share VPC resources from sub-accounts to a central AWS account where penetration testing is conducted.

This feature addresses the challenge of testing distributed architectures spanning multiple AWS accounts. Security professionals can now create an Agent Space in a central account and use RAM to access VPC resources from connected sub-accounts for testing. This streamlines security assessments for organizations with complex multi-account setups. The ability to comprehensively test shared VPC resources enhances an organization's overall security posture.

To get started, ensure your accounts are part of the same AWS Organization and configure resource sharing using RAM. Then launch AWS Security Agent in your central account to begin penetration testing across the shared VPC resources. For more information on AWS Security Agent and its penetration testing capabilities, visit the AWS Security Agent documentation.

Published: 2026-02-25 19:07:00+00:00

AWS launches a playground for interactive Aurora DSQL database exploration

🚀
New Service Feature Introduction
TL;DR: AWS launches browser-based playground for Aurora DSQL database exploration without requiring AWS account or setup
AWS Services: Amazon Aurora DSQL

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-aurora-dsql-launches-playground/

Today, AWS announces a browser-based playground that enables developers to interact with an Amazon Aurora DSQL database without requiring an AWS account. With zero setup or infrastructure configuration, developers can create schemas, load data, and execute SQL queries directly from their browser.

The playground for Aurora DSQL provides an instant, ephemeral database environment, making it easy to experiment and learn. Built-in sample datasets help developers quickly explore core Aurora DSQL capabilities and get hands-on experience in minutes.

To start exploring, visit the playground for Aurora DSQL. To get started with your production workloads and learn more visit Amazon Aurora DSQL.

Published: 2026-02-25 18:00:00+00:00

Aurora DSQL launches new support for Tortoise, Flyway, and Prisma

🚀
New Service Feature Introduction
TL;DR: Aurora DSQL adds integrations for Tortoise ORM, Flyway schema management, and Prisma CLI tools with automatic IAM authentication.
AWS Services: Aurora DSQL

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aurora-dsql-launches-tortoise-flyway-prisma/

Today we are announcing the release of Aurora DSQL integrations for popular ORM and database migration tools: an adapter for Tortoise (Python ORM), a dialect for Flyway (schema management tool), and CLI tools for Prisma (Node.js ORM). These integrations help developers use their preferred frameworks with Aurora DSQL while automatically handling IAM authentication and Aurora DSQL-specific compatibility requirements.

The Aurora DSQL Adapter for Tortoise enables Python developers to build applications using Tortoise without writing custom authentication code. The adapter supports both asyncpg and psycopg drivers, integrates with the Aurora DSQL Connector for Python for automatic IAM token generation, and includes compatibility patches for rich migrations. The Flyway dialect adapts Flyway for Aurora DSQL's distributed architecture by automatically handling Aurora DSQL-specific behaviors such as IAM-based authentication. The Prisma CLI tools help Node.js developers validate their Prisma schemas for Aurora DSQL compatibility and generate Aurora DSQL-compatible migrations, streamlining the path from development to production.

To get started, visit the GitHub repositories for Tortoise ORM, Flyway, and Prisma. Get started with Aurora DSQL for free with the AWS Free Tier. To learn more about Aurora DSQL, visit the webpage.

Published: 2026-02-25 18:00:00+00:00

Aurora DSQL launches new integrations for Visual Studio Code SQLTools and DBeaver

🚀
New Service Feature Introduction
TL;DR: Aurora DSQL launches new integrations for Visual Studio Code SQLTools and DBeaver with automatic IAM authentication
AWS Services: Aurora DSQL, AWS IAM

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aurora-dsql-visual-studio-code-sqltools-dbeaver/

Today we are announcing the release of the Aurora DSQL Driver for SQLTools and the Aurora DSQL Plugin for DBeaver Community Edition. These integrations allow customers to leverage popular database tools to run queries against Aurora DSQL clusters, explore database schemas, and manage their data. Both integrations simplify database connectivity by automatically handling IAM authentication and transparently managing access tokens, eliminating the need to write token generation code or manually supply IAM tokens.

The SQLTools driver integrates Aurora DSQL with Visual Studio Code and is also available on Open VSX Registry for use with VS Code-compatible editors such as Cursor and Kiro. The DBeaver plugin is built on top of the Aurora DSQL Connector for JDBC. Both integrations eliminate security risks associated with traditional user-generated passwords by using AWS IAM credentials for secure, password-free authentication.

To get started, visit the Aurora DSQL documentation page for VSCode and DBeaver. Get started with Aurora DSQL for free with the AWS Free Tier. To learn more about Aurora DSQL, visit the webpage.

Published: 2026-02-25 18:00:00+00:00

Amazon WorkSpaces Applications extends support for 4K resolution

🎉
Service Feature Change
TL;DR: Amazon WorkSpaces Applications now supports 4K resolution on non-accelerated instances across all connection modes at no additional cost.
AWS Services: Amazon WorkSpaces Applications

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-workspaces-applications-4K-resolution/

Amazon WorkSpaces Applications now supports up to 4K (4096 x 2160) resolution on non-accelerated instance types and across all client connection modes. Previously, higher resolution monitors were limited to graphics-accelerated instances in WorkSpaces Applications classic mode. This update allows you to choose the appropriate instance type and provide a better end-user experience that aligns with your hardware investments.

This new feature benefits customers by providing a consistent and high-quality streaming experience across instances regardless of hardware acceleration capabilities. Whether using native application mode, classic application mode, or desktop view, your end users can now enjoy up to 4K resolution if their display device supports it. This enhancement is particularly valuable for users with ultra-wide monitors (21:9 aspect ratio) at 4K resolution, ensuring applications display with optimal clarity and detail at the maximum supported resolution of 4K.

These features are available at no additional cost in all the AWS Regions where WorkSpaces Applications is available. WorkSpaces Applications offers pay-as-you-go pricing. To get started with WorkSpaces Applications, see Amazon WorkSpaces Applications: Getting started.

To enable these features for your users, you must use a WorkSpaces Applications image that uses a WorkSpaces Applications agent released on or after February 4, 2026, or an image that uses Managed WorkSpaces Applications image updates released on or after February 18, 2026.

Published: 2026-02-25 16:00:00+00:00

Amazon Bedrock now supports server-side tool execution with AgentCore Gateway

🚀
New Service Feature Introduction
TL;DR: Amazon Bedrock now supports server-side tool execution through AgentCore Gateway integration, eliminating need for client-side orchestration.
AWS Services: Amazon Bedrock, Amazon Bedrock AgentCore Gateway

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-bedrock-server-side-tool-execution-agentcore-gateway/

Amazon Bedrock now enables server-side tool execution through Amazon Bedrock AgentCore Gateway integration with the Responses API. Customers can connect their AgentCore Gateway tools to Amazon Bedrock models, enabling server-side tool execution without client-side orchestration.

With this launch, customers can specify an AgentCore Gateway ARN as a tool connector in Responses API requests. Amazon Bedrock automatically discovers available tools from the gateway, presents them to the model during inference, and executes tool calls server-side when the model selects them, all within a single API call. This eliminates the need for customers to build and maintain client-side tool orchestration loops, reducing application complexity and latency for agentic workflows. Customers retain full control over tool access through their existing AgentCore Gateway configurations and AWS IAM permissions.

Server-side tool execution with AgentCore Gateway supports all models available through the Amazon Bedrock Responses API. Customers define tools using the MCP server connector type with their gateway ARN, and Amazon Bedrock handles tool discovery, model-driven tool selection, execution, and result injection automatically. Multiple tool calls within a single conversation turn are supported, and tool results are streamed back to the client in real time.

This capability is generally available in all AWS Regions where both Amazon Bedrock's Responses API and Amazon Bedrock AgentCore Gateway are available. To get started, visit the Amazon Bedrock documentation or the Amazon Bedrock console. For more information about Amazon Bedrock AgentCore Gateway, see the AgentCore documentation.

Published: 2026-02-24 23:02:00+00:00

AWS Observability now available as a Kiro power

🚀
New Service Feature Introduction
TL;DR: AWS Observability now available as Kiro power with AI-assisted workflows for faster infrastructure troubleshooting
AWS Services: CloudWatch, Application Signals, CloudTrail

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-observability-kiro-power/

Today, AWS announces AWS Observability as a Kiro power, enabling developers and operators to investigate infrastructure and application health issues faster with AI agent-assisted workflows in Kiro. Kiro Powers is a repository of curated and pre-packaged Model Context Protocol (MCP) servers, steering files, and hooks validated by Kiro partners to accelerate specialized software development and deployment use cases.

The AWS Observability power packages four specialized MCP servers with targeted observability guidance: the CloudWatch MCP server for observability data; the Application Signals MCP server for application performance monitoring; the CloudTrail MCP server for security analysis and compliance; and the AWS Documentation MCP server for contextual reference access. This unified platform gives Kiro agents instant context for comprehensive workflows including alarm response, anomaly detection, distributed tracing, SLO compliance monitoring, and security investigation. Additionally, the power includes automated gap analysis that helps you identify and fix missing instrumentation.

With the AWS Observability power, developers can now accelerate troubleshooting their distributed applications and infrastructure in minutes, directly in their IDE. The power addresses two critical needs: reducing mean time to resolution (MTTR) for active incidents and proactively improving your observability stack. For faster incident response, when investigating an active alarm, the power dynamically loads relevant guidance and operational signals so AI agents receive only the context needed for the specific troubleshooting task at hand. For stack improvement, the automated gap analysis examines your code to identify missing instrumentation patterns—such as unlogged errors, missing correlation IDs, or absent distributed tracing—and provides actionable recommendations. The power includes eight comprehensive steering guides covering incident response, alerting, performance monitoring, security auditing, and gap analysis.

The AWS Observability power is available for one-click installation within the Kiro IDE and on the Kiro powers webpage in all AWS Regions, with each underlying MCP server functional based on regional support of the corresponding AWS service. To learn more about AWS observability MCP servers, visit our documentation.

Published: 2026-02-24 19:05:00+00:00

AWS Deadline Cloud now supports running tasks together in chunks

🚀
New Service Feature Introduction
TL;DR: AWS Deadline Cloud now supports chunking tasks together for more efficient execution of short tasks or those with long startup times.
AWS Services: AWS Deadline Cloud

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-deadline-cloud-running-tasks-together-in/

Today, AWS Deadline Cloud announces support for grouping tasks into chunks to efficiently execute multiple tasks together. AWS Deadline Cloud is a fully managed service that simplifies render management for computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design.

When your job has short tasks, or tasks that need to run in an environment with a long startup time, chunking them together for execution reduces the time and cost of completing the job. When creating a job, you can now manually specify a chunk size for the number of tasks to group together for execution, or alternatively specify a target run time for the execution of a chunk of tasks. The target run time is used to dynamically adjust the number of tasks grouped together as the job progresses, improving execution efficiency and converging on the target run time.
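The two grouping modes above reduce to simple arithmetic. The sketch below is illustrative only, assuming a known per-task runtime; the helper names are hypothetical, not Deadline Cloud API calls:

```python
def chunk_tasks(task_ids, chunk_size):
    # Fixed-size grouping: every chunk_size tasks run together on one worker.
    return [task_ids[i:i + chunk_size]
            for i in range(0, len(task_ids), chunk_size)]

def adaptive_chunk_size(target_runtime_s, observed_task_runtime_s):
    # Target-runtime grouping: pick a chunk size whose total runtime
    # approximates the target, given the observed per-task runtime.
    return max(1, round(target_runtime_s / observed_task_runtime_s))

frames = list(range(1, 101))            # e.g. a 100-frame render job
chunks = chunk_tasks(frames, 10)        # 10 chunks of 10 frames each
size = adaptive_chunk_size(300, 20)     # 300 s target, ~20 s/task -> 15 tasks
```

In the managed service the per-task runtime is observed as the job runs, which is why the chunk size can change dynamically between scheduling decisions.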

Running tasks together in chunks is now available in all AWS Regions where AWS Deadline Cloud is supported. To get started, visit the Deadline Cloud developer guide.

Published: 2026-02-24 18:13:00+00:00

AWS AppConfig integrates with New Relic for automated rollbacks

🚀
New Service Feature Introduction
TL;DR: AWS AppConfig launches New Relic integration for automated rollbacks during feature flag deployments based on application health monitoring.
AWS Services: AWS AppConfig

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-appconfig-new-relic-for-automated-rollback/

AWS AppConfig today launched a new integration that enables automated, intelligent rollbacks during feature flag and dynamic configuration deployments using New Relic Workflow Automation. Building on AWS AppConfig's third-party alert capability, this integration provides teams using New Relic with a solution to automatically detect degraded application health and trigger rollbacks in seconds, eliminating manual intervention.

When you deploy feature flags using AWS AppConfig's gradual deployment strategy, the AWS AppConfig New Relic Extension continuously monitors your application health against configured alert conditions. If issues are detected during a feature flag update and deployment, such as increased error rates or elevated latency, the New Relic Workflow automatically sends a notification to trigger an immediate rollback, reverting the feature flag to its previous state. This closed-loop automation reduces the time between detection and remediation from minutes to seconds, minimizing customer impact during failed deployments.
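The closed-loop behavior can be sketched as a small decision function. This is a conceptual illustration under assumed names and a made-up 5% error-rate threshold; the real integration is event-driven via New Relic Workflow notifications, not a polling loop:

```python
def should_roll_back(sampled_error_rates, threshold=0.05):
    # Alert condition: any health sample during the gradual deployment
    # breaches the configured threshold.
    return any(rate > threshold for rate in sampled_error_rates)

def finish_deployment(new_value, previous_value, sampled_error_rates):
    # Closed loop: revert the feature flag when health degrades
    # mid-deployment; otherwise let the new value stand.
    if should_roll_back(sampled_error_rates):
        return previous_value
    return new_value

kept = finish_deployment("checkout-v2", "checkout-v1", [0.01, 0.02])
reverted = finish_deployment("checkout-v2", "checkout-v1", [0.01, 0.12])
```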

Published: 2026-02-24 16:00:00+00:00

Amazon EKS Node Monitoring Agent is now open source

🎉
Service Feature Change
TL;DR: Amazon EKS Node Monitoring Agent is now open source on GitHub, allowing customization and community contributions.
AWS Services: Amazon EKS, Amazon Elastic Kubernetes Service

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-eks-node-monitoring-agent-open-source/

Amazon Elastic Kubernetes Service (Amazon EKS) Node Monitoring Agent is now open source. You can access the Amazon EKS Node Monitoring Agent source code and contribute to its development on GitHub.

Running workloads reliably in Kubernetes clusters can be challenging. Cluster administrators often have to resort to manual methods of monitoring and repairing degraded nodes in their clusters. The Amazon EKS Node Monitoring Agent simplifies this process by automatically monitoring and publishing node-level system, storage, networking, and accelerator issues as node conditions, which are used by Amazon EKS for automatic node repair. With the Amazon EKS Node Monitoring Agent’s source code available on GitHub, you now have visibility into the agent’s implementation, can customize it to fit your requirements, and can contribute directly to its ongoing development.

The Amazon EKS Node Monitoring Agent is included in Amazon EKS Auto Mode and is available as an Amazon EKS add-on in all AWS Regions where Amazon EKS is available.

To learn more about the Amazon EKS Node Monitoring Agent and node repair, visit the Amazon EKS documentation.

Published: 2026-02-24 15:00:00+00:00

AWS WAF announces AI activity dashboard for visibility into AI bot and agent traffic

🚀
New Service Feature Introduction
TL;DR: AWS WAF launches AI activity dashboard providing visibility into AI bot traffic, expanding Bot Control detection to 650+ unique bots and agents.
AWS Services: AWS WAF, AWS WAF Bot Control, CloudFront

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-waf-ai-activity-dashboard/

Today, AWS WAF announced a new AI activity dashboard that provides centralized visibility into AI bot and agent traffic reaching your applications. With this launch, AWS WAF Bot Control expands its detection coverage to track more than 650 unique bots and agents, offering one of the most comprehensive AI bot detection catalogs available.

AI-powered bots and autonomous agents are rapidly reshaping web traffic patterns. AI search crawlers index content, retrieval-augmented generation (RAG) systems fetch data in real time, and autonomous agents execute multi-step tasks across APIs and web applications. Without clear visibility, this traffic can increase infrastructure costs, affect application performance, and access content in ways that may not align with your organization’s security or business policies.

The AI traffic analysis dashboard provides a centralized view of AI bot and agent traffic across your protected resources. You can visualize AI traffic trends over time, identify the most active bots and frequently accessed paths, analyze request volumes by bot category and verification status, and take action directly using AWS WAF Bot Control rules, such as allowing verified AI search crawlers while rate-limiting or blocking unverified agents.

AWS WAF Bot Control's detection catalog now covers more than 650 unique bots and agents spanning categories including AI search engine crawlers, AI data collectors, AI assistants, and large language model training crawlers. The catalog is continuously updated, enabling customers to identify newly emerging AI bots as they appear.

For customers on flat-rate pricing plans, the dashboard is included with all paid plans. For WAF customers not subscribed to flat-rate plans, the AI traffic analysis dashboard is available at no additional cost. Refer to WAF pricing for details.

The new dashboard and expanded detection capabilities are available in all AWS Regions where AWS WAF is available.

To get started, visit the AWS WAF console or explore the AWS WAF Bot Control documentation.

Published: 2026-02-24 06:00:00+00:00

MediaConvert introduces new video Probe API and UI

🚀
New Service Feature Introduction
TL;DR: AWS Elemental MediaConvert introduces new Probe API for free metadata analysis of media files without processing video content.
AWS Services: AWS Elemental MediaConvert, AWS Step Functions

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-mediaconvert-introduces-video-probe/

Introducing Probe API, a powerful and free metadata analysis tool for AWS Elemental MediaConvert. Optimized for efficiency, Probe API reads header metadata to quickly return essential information about your media files, including codec specifications, pixel formats, color space details, and container information - all without waiting to process the actual video content. This analysis capability makes it an invaluable tool for content creators, developers, and media professionals who need to quickly validate files, automate workflows, or use AWS Step Functions to make encoding decisions based on source material characteristics.

For complete implementation details and usage examples, please visit the MediaConvert API Reference documentation. The Probe API can be utilized in any region where AWS Elemental MediaConvert is available, making it a versatile tool for streamlining your media workflow analysis.

To get started with Probe API and explore its capabilities, visit the AWS Elemental MediaConvert product page or consult the User Guide for comprehensive documentation.

Published: 2026-02-24 00:01:00+00:00

AWS Trusted Advisor now delivers more accurate unused NAT Gateway checks powered by AWS Compute Optimizer

🎉
Service Feature Change
TL;DR: AWS Trusted Advisor enhances unused NAT Gateway detection using Compute Optimizer for more accurate cost optimization recommendations.
AWS Services: AWS Trusted Advisor, AWS Compute Optimizer, NAT Gateway, CloudWatch, Cost Optimization Hub

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/trusted-advisor-unused-nat-gateway-check/

AWS Trusted Advisor has enhanced its unused NAT Gateway check, now powered by AWS Compute Optimizer detection capabilities. The enhanced detection analyzes additional CloudWatch metrics over a 32-day lookback period and verifies whether NAT Gateways are associated with route tables, reducing false positives by avoiding flagging critical backup resources. This helps cost optimization teams and DevOps engineers confidently identify and remove unused NAT Gateways that incur unnecessary charges.
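The shape of the enhanced heuristic can be sketched as follows. This is a simplified illustration of the two conditions described above (zero traffic over the lookback window, plus no route-table association), not the actual Compute Optimizer algorithm:

```python
LOOKBACK_DAYS = 32

def is_unused_nat_gateway(daily_bytes_out, in_route_table):
    # Flag only when the gateway carried no traffic over the 32-day
    # lookback window AND no route table references it. A routed but
    # idle gateway may be a standby/backup path, so it is not flagged.
    recent = daily_bytes_out[-LOOKBACK_DAYS:]
    return sum(recent) == 0 and not in_route_table

idle = is_unused_nat_gateway([0] * 60, in_route_table=False)       # flagged
standby = is_unused_nat_gateway([0] * 60, in_route_table=True)     # kept
active = is_unused_nat_gateway([0] * 31 + [4096], in_route_table=False)
```

The route-table check is what distinguishes this version from a traffic-only heuristic: it is how idle backup gateways avoid being flagged for deletion.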

Each recommendation includes estimated monthly cost savings, enabling you to prioritize cleanup based on monetary impact. With these recommendations, you can run regular cost audits to catch idle NAT Gateways before charges accumulate. This simplifies cleaning up resources left behind after workload migrations or decommissions. You can view and act on these recommendations in the Trusted Advisor console alongside your other cost optimization checks, or through Trusted Advisor APIs.

This feature is available in all AWS Regions where AWS Trusted Advisor is supported. Organizations must be opted-in to Cost Optimization Hub and Compute Optimizer to access these enhanced recommendations. To learn more, visit the AWS Trusted Advisor documentation.

Published: 2026-02-23 23:00:00+00:00

Amazon announces generative AI-based artifacts in Amazon Q Developer for visualizing resource and cost data

🚀
New Service Feature Introduction
TL;DR: Amazon Q Developer artifacts now generally available for visualizing AWS resource and cost data with generative AI in Management Console.
AWS Services: Amazon Q Developer, AWS Management Console, Amazon S3, Amazon RDS

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/generative-ai-based-Amazon-Q-artifacts/

Today, AWS announces the general availability of Amazon Q Developer artifacts in the AWS Management Console. Amazon Q artifacts is a generative AI-based user experience that enables customers to visualize resource data in tables and cost data in charts. The launch also moves the Q icon to the navigation bar and the chat panel to the left, making Amazon Q easier to access from anywhere in the AWS Management Console.

Customers can access Amazon Q artifacts by selecting the Amazon Q icon and asking questions about their AWS resources to understand the state of their resources and costs. For example, when asked “List S3 buckets with tag value production", Amazon Q displays the S3 buckets that have a tag value of production in a tabular format. Customers can then select the hyperlinks on the bucket name to view the bucket details in the S3 console. Customers can also visualize cost and billing information with charts. For example, when you enter "Show me RDS costs by instance type over the last 6 months", Q will render the response in a Q artifact using a chart (e.g., bar graph, line chart, pie chart, or area chart). Customers can also use sample prompts in the Prompt Library in the Amazon Q chat panel to get started quickly. The artifacts are displayed in an artifact panel to the right of the Amazon Q chat panel. Users can expand Amazon Q to full-screen for a dedicated focus mode experience.

The Amazon Q Developer artifacts are available in all AWS Regions where Amazon Q Developer is available. To get started visit Amazon Q Developer documentation.

Published: 2026-02-23 20:05:00+00:00

Amazon Redshift Serverless introduces 3-year Serverless Reservations

﹩
Pricing Change
TL;DR: Amazon Redshift Serverless introduces 3-year Serverless Reservations offering up to 45% savings with no upfront payment.
AWS Services: Amazon Redshift, Amazon Redshift Serverless

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-redshift-serverless-three-year-reservations/

Amazon Redshift now offers 3-year Serverless Reservations for Amazon Redshift Serverless, a new discounted pricing option that provides up to 45% savings and improved cost predictability for your analytics workloads. With Serverless Reservations, you commit to a specific number of Redshift Processing Units (RPUs) for a 3-year term with a no-upfront payment option.

Amazon Redshift Serverless allows you to run and scale analytics without having to provision and manage clusters with a pay-as-you-go pricing model. Serverless Reservations help you further optimize compute costs and improve cost predictability of existing and new workloads on Amazon Redshift Serverless. Managed at the AWS payer account level, Serverless Reservations can be shared between multiple AWS accounts, reducing your compute costs by up to 45% on all Amazon Redshift Serverless workloads in your AWS account. Serverless Reservations are billed hourly and metered per second, offering a consistent billing model (24 hours a day, seven days a week) while maintaining the flexibility offered by Amazon Redshift Serverless. Any usage exceeding the specified RPU level is charged at standard on-demand rates. You can purchase Serverless Reservations via the Amazon Redshift console or by invoking the Serverless Reservations API “create-reservation”.
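The billing model described above reduces to simple arithmetic per hour. The sketch below is illustrative only: the $0.36/RPU-hour rate is a made-up placeholder, not a published price, and the flat 45% discount is the stated maximum, which will vary by reservation:

```python
def hourly_compute_cost(used_rpus, reserved_rpus,
                        on_demand_rate_per_rpu_hour, discount=0.45):
    # Reserved RPUs bill every hour at the discounted rate (24/7,
    # metered per second in practice); any usage above the reserved
    # level bills at the standard on-demand rate.
    reserved = reserved_rpus * on_demand_rate_per_rpu_hour * (1 - discount)
    overage = max(0, used_rpus - reserved_rpus) * on_demand_rate_per_rpu_hour
    return reserved + overage

RATE = 0.36  # illustrative on-demand $/RPU-hour, not a published price
peak = hourly_compute_cost(100, 80, RATE)   # 80 discounted + 20 on-demand
quiet = hourly_compute_cost(50, 80, RATE)   # the reservation still bills 24/7
```

Note the second case: because the reservation is billed around the clock, a commitment pays off only when steady-state usage sits near or above the reserved RPU level.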

Serverless Reservations are available in all AWS Regions where Amazon Redshift Serverless is currently available. To learn more about Amazon Redshift Serverless pricing options, see the Redshift Serverless feature page, Redshift Pricing Page, or the Amazon Redshift Management Guide.

Published: 2026-02-23 17:00:00+00:00

Aurora DSQL launches new Go, Python, and Node.js connectors that simplify IAM authentication

🚀
New Service Feature Introduction
TL;DR: Aurora DSQL launches new Go, Python, and Node.js connectors that simplify IAM authentication for PostgreSQL connections.
AWS Services: Aurora DSQL, IAM, AWS Free Tier

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aurora-dsql-launches-go-python-nodejs-connectors

Today we are announcing the release of Aurora DSQL Connectors for Go (pgx), Python (asyncpg), and Node.js (WebSocket for Postgres.js) that simplify IAM authentication for customers using standard PostgreSQL drivers to connect to Aurora DSQL clusters. These connectors act as transparent authentication layers that automatically handle IAM token generation, eliminating the need to write token generation code or manually supply IAM tokens. Tokens are automatically generated for each connection, ensuring valid tokens are always used while maintaining full compatibility with existing PostgreSQL driver features. The Postgres.js connector additionally supports WebSocket protocol, enabling customers to connect to DSQL clusters in environments where TCP connections are not available.

These connectors streamline authentication and eliminate security risks associated with traditional user-generated passwords. All three connectors support custom IAM credential providers, giving customers flexibility in how they manage their AWS credentials.

To get started, visit the Connectors for Aurora DSQL documentation page. For code examples, visit our GitHub page for pgx for Go, asyncpg for Python, and WebSocket for Postgres.js. Get started with Aurora DSQL for free with the AWS Free Tier. To learn more about Aurora DSQL, visit the webpage.

Published: 2026-02-19 18:00:00+00:00

AWS Certificate Manager updates default certificate validity to comply with new guidelines

🎉
Service Feature Change
TL;DR: AWS Certificate Manager reduces public certificate validity from 395 to 198 days, with corresponding price reductions for exportable certificates.
AWS Services: AWS Certificate Manager, ACM

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-certificate-manager-updates-default/

Starting today, public certificates issued from AWS Certificate Manager (ACM) have a maximum validity period of 198 days, compared to the previous validity period of 395 days. With this change, ACM-issued public certificates comply with the new Certification Authority/Browser (CA/Browser) Forum mandate that certificates be valid for no longer than 200 days starting March 15, 2026.

No action is required from customers to receive this change. All new and renewed public certificates will by default have a validity of 198 days. Existing certificates with 395-day validity remain valid and can be used until they renew or expire. All other certificate functionality remains in place, and ACM continues to auto-renew certificates before expiry. 198-day certificates are now renewed 45 days before expiry; existing 395-day certificates will renew 60 days before expiry, and will renew with a 198-day validity period.
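The renewal timing for new certificates is plain date arithmetic, sketched below for an assumed issuance date (the example date is hypothetical; the 198-day validity and 45-day renewal lead come from the announcement):

```python
from datetime import date, timedelta

VALIDITY_DAYS = 198   # new maximum validity for ACM public certificates
RENEWAL_LEAD = 45     # 198-day certificates renew 45 days before expiry

def renewal_schedule(issued, validity_days=VALIDITY_DAYS,
                     lead_days=RENEWAL_LEAD):
    # Compute when a certificate expires and when managed renewal starts.
    expires = issued + timedelta(days=validity_days)
    renews = expires - timedelta(days=lead_days)
    return expires, renews

# A certificate issued March 15, 2026 expires September 29, 2026,
# and ACM begins auto-renewal on August 15, 2026.
expires, renews = renewal_schedule(date(2026, 3, 15))
```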

We have reduced the pricing for ACM’s exportable public certificates in line with the shorter validity period. A 198-day exportable public certificate now costs $7 per fully qualified domain name (down from $15) and $79 per wildcard name (down from $149). Please refer to ACM’s pricing page for more details. For more information about ACM, visit the ACM documentation.

Published: 2026-02-18 23:46:00+00:00

Amazon Aurora DSQL now integrates with Kiro powers and AI agent skills

🚀
New Service Feature Introduction
TL;DR: Amazon Aurora DSQL now integrates with Kiro powers and AI agent skills for AI-assisted development and database operations.
AWS Services: Amazon Aurora DSQL

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-aurora-dsql-integrates-with-kiro-powers-and-agent-skills

Today, AWS announces Amazon Aurora DSQL integration with Kiro powers and AI agent skills, enabling developers to build Aurora DSQL-backed applications faster with AI agent-assisted development. These integrations bundle the Aurora DSQL Model Context Protocol (MCP) server with development best practices, so AI agents can help you with Aurora DSQL schema design, performance optimization, and database operations out of the box.

Kiro powers is a registry of curated and pre-packaged MCP servers, steering files, and agent hooks to accelerate specialized software development and deployment use cases. With the Kiro power for Aurora DSQL, agents have instant access to specialized knowledge, so developers can work confidently without any prior context, reducing trial-and-error development cycles. The power is available within the Kiro IDE for one-click installation.

The Aurora DSQL skill extends the same capabilities to additional AI coding agents through the Skills CLI. Developers can install the skill with a single command and select their preferred agents including Kiro CLI, Claude Code, Gemini, Codex, Cursor, Copilot, Cline, Windsurf, Roo, OpenCode, and more. When developers work on database tasks, the agent dynamically loads relevant skill guidance, including Aurora DSQL Postgres-compatible SQL patterns, distributed database design, and IAM authentication, eliminating the need to repeatedly provide the same context across conversations. As Aurora DSQL adds new features, future skill releases will include updated patterns and guidance, ensuring that agents always have current best practices.

For more information on the Aurora DSQL Kiro power and agent skills, visit the Aurora DSQL steering documentation and GitHub page. Get started with Aurora DSQL for free with the AWS Free Tier.

Published: 2026-02-18 18:00:00+00:00

Amazon Managed Grafana now supports AWS KMS customer managed keys

🚀
New Service Feature Introduction
TL;DR: Amazon Managed Grafana now supports AWS KMS customer managed keys for encryption at rest in workspaces.
AWS Services: Amazon Managed Grafana, AWS Key Management Service, AWS KMS

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-managed-grafana-customer-managed-keys

Amazon Managed Grafana now supports customer-managed keys (CMK) through AWS Key Management Service (KMS), enabling you to encrypt data stored in your Amazon Managed Grafana workspaces with your own encryption keys. Amazon Managed Grafana is a fully managed service based on open-source Grafana that makes it easier for you to visualize and analyze your operational data at scale.

Amazon Managed Grafana provides encryption at rest using AWS owned keys by default. With this launch, you now have an option to use a customer-managed key when creating an Amazon Managed Grafana workspace. This allows you to add a self-managed security layer, helping you meet your organization’s compliance and regulatory requirements.

This feature is now available in all AWS Regions where Amazon Managed Grafana is generally available, except the AWS GovCloud (US) Regions. To get started with Amazon Managed Grafana, refer to the Amazon Managed Grafana user guide. To learn more about Amazon Managed Grafana, visit the product page and pricing page.

Published: 2026-02-18 15:00:00+00:00

AWS Clean Rooms announces support for remote Apache Iceberg REST catalogs

🚀
New Service Feature Introduction
TL;DR: AWS Clean Rooms now supports catalog federation for remote Apache Iceberg REST catalogs, simplifying clean room setup.
AWS Services: AWS Clean Rooms, Amazon S3, AWS Glue

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-clean-rooms-remote-iceberg-catalogs

AWS Clean Rooms now supports catalog federation for remote Iceberg catalogs. This capability simplifies clean room setup by providing direct, secure access to Iceberg tables stored in Amazon S3 and cataloged in remote catalogs—without requiring table metadata replication. Organizations can now use AWS Glue catalog federation to provide direct access to their existing Iceberg REST catalog in a Clean Rooms collaboration. For example, a media publisher with data cataloged in the AWS Glue Data Catalog and an advertiser with data cataloged in a remote Iceberg catalog can analyze their collective datasets to evaluate advertising spend—without having to build ETL data pipelines or share underlying data with one another.

AWS Clean Rooms helps companies and their partners easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.

Published: 2026-02-18 12:00:00+00:00

Amazon Bedrock reinforcement fine-tuning adds support for open-weight models with OpenAI-compatible APIs

🚀
New Service Feature Introduction
TL;DR: Amazon Bedrock adds reinforcement fine-tuning support for open-weight models with OpenAI-compatible APIs, enabling easier model customization.
AWS Services: Amazon Bedrock, AWS Lambda

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-bedrock-reinforcement-fine-tuning-openai

Amazon Bedrock now extends reinforcement fine-tuning (RFT) support to popular open-weight models, including OpenAI GPT-OSS and Qwen models, and introduces OpenAI-compatible fine-tuning APIs. These capabilities make it easier for developers to improve open-weight model accuracy without requiring deep machine learning expertise or large volumes of labeled data. Reinforcement fine-tuning in Amazon Bedrock automates the end-to-end customization workflow, allowing models to learn from feedback on multiple possible responses using a small set of prompts, rather than traditional large training datasets. Reinforcement fine-tuning enables customers to use smaller, faster, and more cost-effective model variants while maintaining high quality.

Organizations often struggle to adapt foundation models to their unique business requirements, forcing tradeoffs between generic models with limited performance and complex, expensive customization pipelines that require specialized infrastructure and expertise. Amazon Bedrock removes this complexity by providing a fully managed, secure reinforcement fine-tuning experience. Customers define reward functions using verifiable rule-based graders or AI-based judges, including built-in templates for both objective tasks such as code generation and math reasoning, and subjective tasks such as instruction following or conversational quality. During training, customers can use AWS Lambda functions for custom grading logic, and access intermediate model checkpoints to evaluate, debug, and select the best-performing model, improving iteration speed and training efficiency. All proprietary data remains within AWS’s secure, governed environment throughout the customization process.

Models supported at launch are qwen.qwen3-32b and openai.gpt-oss-20b. After fine-tuning completes, customers can immediately use the resulting fine-tuned model for on-demand inference through Amazon Bedrock’s OpenAI-compatible APIs (the Responses API and Chat Completions API), without any additional deployment steps. To learn more, see the Amazon Bedrock documentation.

Published: 2026-02-17 21:17:00+00:00

Amazon Aurora MySQL 3.12 (compatible with MySQL 8.0.44) is now generally available

🚀
New Service Feature Introduction
TL;DR: Amazon Aurora MySQL 3.12 with MySQL 8.0.44 compatibility is now generally available with security enhancements and availability improvements.
AWS Services: Amazon Aurora, Amazon RDS

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-aurora-mysql-312-available/

Starting today, Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility) supports MySQL 8.0.44 through Aurora MySQL v3.12.

In addition to many security enhancements and bug fixes, Aurora MySQL v3.12 contains several availability improvements. For more details, refer to the Aurora MySQL 3.12 and MySQL 8.0.44 release notes. To upgrade to Aurora MySQL 3.12, you can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option when creating or modifying a DB cluster. This release is available in all AWS Regions where Aurora MySQL is available.

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other Amazon Web Services services. To get started with Amazon Aurora, take a look at our getting started page.

Published: 2026-02-17 20:00:00+00:00

Amazon MSK now supports dual-stack (IPv4 and IPv6) connectivity for existing clusters

🎉
Service Feature Change
TL;DR: Amazon MSK now supports dual-stack IPv4 and IPv6 connectivity for existing clusters across all regions.
AWS Services: Amazon MSK, Amazon Managed Streaming for Apache Kafka, AWS CLI, CloudFormation

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-msk-dual-stack-ipv4-and-ipv6

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports dual-stack connectivity (IPv4 and IPv6) for existing MSK Provisioned and MSK Serverless clusters. This capability enables customers to connect to Amazon MSK using both IPv4 and IPv6 protocols, in addition to the existing IPv4-only option. It helps customers modernize applications for IPv6 environments while maintaining IPv4 compatibility, making it easier to meet compliance requirements and prepare for future network architectures.

Amazon MSK is a fully managed service for Apache Kafka that makes it easier for customers to build and run applications that use Apache Kafka as a data store. Previously, MSK Provisioned and Serverless clusters used only IPv4 addressing for all connectivity options. With this new capability, customers can enable dual-stack connectivity (IPv4 and IPv6) on existing MSK clusters using the Amazon MSK console, AWS CLI, SDK, or CloudFormation by modifying the Network Type parameter for a cluster from IPv4 to dual-stack. Upon successful update, MSK provisions IPv6-enabled network interfaces while maintaining existing IPv4 connectivity, ensuring uninterrupted service. To retrieve new IPv6 bootstrap broker strings for MSK clusters, customers can use the GetBootstrapBrokers API. All MSK Provisioned and Serverless clusters will retain IPv4-only connectivity unless explicitly updated.

Dual-stack connectivity for existing MSK Provisioned and Serverless clusters is now available in all AWS Regions where Amazon MSK is available, at no additional cost. To learn more about Amazon MSK dual-stack support, refer to the Amazon MSK developer guide.
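Clients consuming a bootstrap broker string returned by GetBootstrapBrokers typically need to split it into host/port pairs before configuring a Kafka client; with dual-stack endpoints in play, code should also tolerate bracketed IPv6 literals. A minimal sketch — the endpoint values below are hypothetical, not real MSK hostnames:

```python
def parse_bootstrap(brokers: str) -> list[tuple[str, int]]:
    """Split a comma-separated bootstrap string into (host, port) pairs.
    Handles DNS names, IPv4 literals, and bracketed IPv6 literals."""
    pairs = []
    for endpoint in brokers.split(","):
        # rpartition keeps any colons inside an IPv6 literal with the host.
        host, _, port = endpoint.strip().rpartition(":")
        pairs.append((host.strip("[]"), int(port)))
    return pairs

# Hypothetical endpoints, for illustration only.
sample = "b-1.example.kafka.us-east-1.amazonaws.com:9098,[2600:1f18::a]:9098"
print(parse_bootstrap(sample))
```
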

Published: 2026-02-17 16:00:00+00:00

Claude Sonnet 4.6 now available in Amazon Bedrock

🚀
New Service Feature Introduction
TL;DR: Claude Sonnet 4.6 now available in Amazon Bedrock with frontier performance for coding, agents, and professional work at scale.
AWS Services: Amazon Bedrock

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/claude-sonnet-4.6-available-in-amazon-bedrock/

Starting today, Amazon Bedrock supports Claude Sonnet 4.6, which offers frontier performance across coding, agents, and professional work at scale. According to Anthropic, Claude Sonnet 4.6 is their best computer use model yet, allowing organizations to deploy browser-based automation across business tools with near-human reliability. Claude Sonnet 4.6 approaches Opus 4.6 intelligence at a lower cost. It enables faster, high-quality task completion, making it ideal for high-volume coding and knowledge work use cases. 

Claude Sonnet 4.6 serves as a direct upgrade to Sonnet 4.5 across use cases that require consistent conversational quality and efficient multi-step orchestration. For search and chat applications, it delivers reliable performance across single and multi-turn exchanges at a price point that makes high-volume deployment practical, maintaining quality standards while optimizing for scale. Developers can leverage Claude Sonnet 4.6 for agentic workflows, seamlessly filling both lead agent and subagent roles in multi-model pipelines with precise workflow management and context compaction capabilities. Enterprise teams can use Claude Sonnet 4.6 to power domain-specific applications with professional precision, including spreadsheet and financial model creation that accelerates analysis workflows, compliance review processes that require meticulous attention to detail, and data summarization tasks where iteration speed and accuracy are paramount. Claude Sonnet 4.6 requires only minor prompting adjustments from Sonnet 4.5, ensuring smooth migration for existing implementations.

Claude Sonnet 4.6 is now available in Amazon Bedrock. For the full list of available regions, refer to the documentation. To learn more and get started with Claude Sonnet 4.6 in Amazon Bedrock, read the About Amazon blog and visit the Amazon Bedrock console.

Published: 2026-02-17 15:43:00+00:00

AWS Weekly Roundup: OpenAI partnership, AWS Elemental Inference, Strands Labs, and more (March 2, 2026) Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS Weekly Roundup covering OpenAI partnership, AWS Elemental Inference, Strands Labs, and AI-DLC workshops for business transformation.
AWS Services: AWS Elemental

Link: https://aws.amazon.com/blogs/aws/aws-weekly-roundup-openai-partnership-aws-elemental-inference-strands-labs-and-more-march-2-2026/

This past week, I’ve been deep in the trenches helping customers transform their businesses through AI-DLC (AI-Driven Lifecycle) workshops. Throughout 2026, I’ve had the privilege of facilitating these sessions for numerous customers, guiding them through a structured framework that helps organizations identify, prioritize, and implement AI use cases that deliver measurable business value. AI-DLC is […]

Published: 2026-03-02 19:05:12+00:00

Announcing Amazon SageMaker Inference for custom Amazon Nova models Blog Post

🚀
New Service Feature Introduction
TL;DR: Amazon SageMaker AI Inference now supports custom Amazon Nova models with configurable instance types and auto-scaling.
AWS Services: Amazon SageMaker, Amazon Nova

Link: https://aws.amazon.com/blogs/aws/announcing-amazon-sagemaker-inference-for-custom-amazon-nova-models/

AWS launches Amazon SageMaker Inference for custom Amazon Nova models. You can now configure the instance types, auto-scaling policies, and concurrency settings for custom Nova model deployments to best meet your needs.

Published: 2026-02-16 21:25:23+00:00

AWS Weekly Roundup: Amazon Bedrock agent workflows, Amazon SageMaker private connectivity, and more (February 2, 2026) Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS Weekly Roundup covering Amazon Bedrock agent workflows, Amazon SageMaker private connectivity, and other recent updates.
AWS Services: Amazon Bedrock, Amazon SageMaker

Link: https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-bedrock-agent-workflows-amazon-sagemaker-private-connectivity-and-more-february-2-2026/

Over the past week, we passed Laba festival, a traditional marker in the Chinese calendar that signals the final stretch leading up to the Lunar New Year. For many in China, it’s a moment associated with reflection and preparation, wrapping up what the year has carried, and turning attention toward what lies ahead. Looking forward, […]

Published: 2026-02-02 17:19:48+00:00

AWS Weekly Roundup: Kiro CLI latest features, AWS European Sovereign Cloud, EC2 X8i instances, and more (January 19, 2026) Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS Weekly Roundup covering Kiro CLI features, European Sovereign Cloud, EC2 X8i instances and other updates for January 2026
AWS Services: EC2

Link: https://aws.amazon.com/blogs/aws/aws-weekly-roundup-kiro-cli-latest-features-aws-european-sovereign-cloud-ec2-x8i-instances-and-more-january-19-2026/

At the end of 2025 I was happy to take a long break to enjoy the incredible summers that the southern hemisphere provides. I’m back and writing my first post in 2026 which also happens to be my last post for the AWS News Blog (more on this later). The AWS community is starting the […]

Published: 2026-01-20 00:24:38+00:00

AWS Weekly Roundup: AWS Lambda for .NET 10, AWS Client VPN quickstart, Best of AWS re:Invent, and more (January 12, 2026) Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS Weekly Roundup covering Lambda .NET 10 support, Client VPN quickstart, re:Invent highlights, and Free Tier credits promotion.
AWS Services: AWS Lambda, AWS Client VPN, AWS Free Tier

Link: https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-lambda-for-net-10-aws-client-vpn-quickstart-best-of-aws-reinvent-and-more-january-12-2026/

At the beginning of January, I tend to set my top resolutions for the year, a way to focus on what I want to achieve. If AI and cloud computing are on your resolution list, consider creating an AWS Free Tier account to receive up to $200 in credits and have 6 months of risk-free […]

Published: 2026-01-12 17:39:47+00:00

Amazon RDS for PostgreSQL supports minor versions 18.3, 17.9, 16.13, 15.17, and 14.22

🎉
Service Feature Change
TL;DR: Amazon RDS for PostgreSQL now supports latest minor versions 18.3, 17.9, 16.13, 15.17, and 14.22 with security fixes.
AWS Services: Amazon RDS, Amazon RDS for PostgreSQL, AWS Organizations, Amazon RDS Blue/Green deployments, AWS Command Line Interface

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/rds-minor-version-18-3-17-9-16-13-15-17-14-22/

Amazon Relational Database Service (RDS) for PostgreSQL now supports the latest minor versions 18.3, 17.9, 16.13, 15.17, and 14.22. These versions address the regression from the February 12, 2026 PostgreSQL community release. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes added by the PostgreSQL community.

You can upgrade your databases during scheduled maintenance windows using automatic minor version upgrades. To simplify operations at scale, enable automatic minor version upgrades and use the AWS Organizations Upgrade Rollout Policy to orchestrate thousands of upgrades in phases, first to development environments before upgrading production systems. You can also use Amazon RDS Blue/Green deployments with physical replication to minimize downtime for minor version upgrades.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console or by using the AWS Command Line Interface (CLI).

Published: 2026-02-27 08:00:00+00:00

AWS Compute Optimizer now applies AWS-generated tags to EBS snapshots created during automation

🎉
Service Feature Change
TL;DR: AWS Compute Optimizer now automatically tags EBS snapshots with automation event IDs for better tracking and governance.
AWS Services: AWS Compute Optimizer, Amazon Elastic Block Store, EBS

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-compute-optimizer-applies-tags-ebs-snapshots/

AWS Compute Optimizer makes it easier to identify snapshots that are created when snapshotting and deleting unattached Amazon Elastic Block Store (EBS) volumes by automatically applying an AWS-generated tag during creation. This enhancement improves visibility and tracking of EBS snapshots created through Compute Optimizer Automation.

When Compute Optimizer creates a snapshot before deleting an unattached EBS volume—whether initiated through manual actions or automation rules—the snapshot now receives the tag aws:compute-optimizer:automation-event-id with a tag value that links the snapshot to the unique identifier of the automation event that created it. This allows you to easily identify, track, and manage snapshots created through the automated optimization process, helping you maintain better governance over your backup resources and understand the source of snapshots in your environment.
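Since every automation-created snapshot carries the same tag key, grouping snapshots by its value recovers which automation event produced them. A sketch under stated assumptions: the snapshot records below are hypothetical, shaped like the output of the EC2 DescribeSnapshots API.

```python
TAG_KEY = "aws:compute-optimizer:automation-event-id"

def by_automation_event(snapshots):
    """Group snapshot IDs by the Compute Optimizer automation event
    that created them; snapshots without the tag are skipped."""
    groups = {}
    for snap in snapshots:
        tags = {t["Key"]: t["Value"] for t in snap.get("Tags", [])}
        event_id = tags.get(TAG_KEY)
        if event_id:
            groups.setdefault(event_id, []).append(snap["SnapshotId"])
    return groups

# Hypothetical records for illustration only.
sample = [
    {"SnapshotId": "snap-111", "Tags": [{"Key": TAG_KEY, "Value": "evt-abc"}]},
    {"SnapshotId": "snap-222", "Tags": []},  # not created by automation
]
print(by_automation_event(sample))  # {'evt-abc': ['snap-111']}
```
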

This is available in all AWS Regions where AWS Compute Optimizer Automation is available. To get started with automated optimization, go to the AWS Compute Optimizer console or visit the user guide documentation.

Published: 2026-02-24 19:58:00+00:00

Amazon RDS Custom now supports the latest GDR updates for Microsoft SQL Server

🎉
Service Feature Change
TL;DR: Amazon RDS Custom for SQL Server now supports latest GDR updates including SQL Server 2022 Cumulative Update addressing security vulnerabilities.
AWS Services: Amazon RDS Custom, Amazon Relational Database Service

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-rds-custom-supports-latest-gdr-updates-for-microsoft-sql-server/

Amazon Relational Database Service (Amazon RDS) Custom for SQL Server now supports the latest General Distribution Release (GDR) updates for Microsoft SQL Server. This release includes support for SQL Server 2022 Cumulative Update and KB5072936 (16.00.4230.2.v1).

The GDR updates address vulnerabilities described in CVE-2026-20803. For additional information on the improvements and fixes included in these updates, see Microsoft documentation for KB5072936. You can upgrade your Amazon RDS Custom for SQL Server instances to apply these recommended updates using Amazon RDS Management Console, or by using the AWS SDK or CLI. To learn more about upgrading your database instances, see Amazon RDS Custom User Guide.

Published: 2026-02-24 08:52:00+00:00

Amazon S3 now provides AWS source region information in server access logs

🚀
New Service Feature Introduction
TL;DR: Amazon S3 server access logs now include AWS source region information to help optimize cross-region request costs and performance.
AWS Services: Amazon S3

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-s3-source-region-information/

Amazon S3 server access logs now include source region information, specifying which AWS Region requests to your data originate from. This helps you identify applications making cross-Region requests so you can optimize cost and performance.

Source region information automatically appears at the end of each server access log entry with no additional configuration. For example, if an application in us-west-2 requests data from your bucket in us-east-1, the log entry shows "us-west-2" as the source region.
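Because new S3 server access log fields are appended at the end of each entry, existing log processors can pick up the source region by reading the trailing field. A minimal sketch — the log entry below is abbreviated and hypothetical, and a production parser should handle the full quoted/bracketed log format rather than a naive whitespace split:

```python
def source_region(log_entry: str) -> str:
    """Return the trailing source-region field of an S3 server access
    log entry. Fields with no value appear as '-' in the log format."""
    return log_entry.rstrip().rsplit(None, 1)[-1]

# Abbreviated, hypothetical log entry for illustration only.
entry = 'bucket [18/Feb/2026:23:14:00 +0000] "GET /example.txt HTTP/1.1" 200 us-west-2'
print(source_region(entry))  # us-west-2
```
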

This feature will be available in all AWS Regions in the coming weeks at no additional cost. To learn more about S3 server access log format and best practices, visit the S3 User Guide.

Published: 2026-02-23 23:14:00+00:00

AWS IAM Policy Autopilot is now available as a Kiro Power

🚀
New Service Feature Introduction
TL;DR: AWS IAM Policy Autopilot open source tool now available as Kiro power for AI-assisted development environments.
AWS Services: AWS IAM

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-iam-policy-autopilot-kiro-power/

AWS IAM Policy Autopilot, the open source static code analysis tool launched at re:Invent 2025, is now available as a Kiro power to bring policy expertise to agentic AI development. This tool helps developers quickly create baseline AWS IAM policies that can be refined as applications evolve, eliminating the need for manual IAM policy creation.

The Kiro power delivers significant benefits through one-click installation directly from the Kiro IDE and web interface, removing the need for manual MCP server configuration. This streamlined workflow enables faster policy creation and integrates seamlessly into AI-assisted development environments. Key use cases include rapid prototyping of AWS applications requiring IAM policies, baseline policy creation for new AWS projects, and enhanced productivity within IDE environments where developers can generate policies without leaving their coding workflow.

To learn more about AWS IAM Policy Autopilot and access the integration, visit the AWS IAM Policy Autopilot GitHub repository. To learn more about Kiro powers, visit the Kiro powers page.

Published: 2026-02-23 18:49:00+00:00

Amazon RDS for Oracle now supports January 2026 Release Update and Spatial Patch Bundle

🎉
Service Feature Change
TL;DR: Amazon RDS for Oracle now supports January 2026 Release Update and Spatial Patch Bundle for enhanced security and performance.
AWS Services: Amazon RDS, Amazon Relational Database Service, AWS Organizations, AWS SDK, AWS CLI

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-rd-for-oracle-jan-release-update-spatial-patch-bundle/

Amazon Relational Database Service (Amazon RDS) for Oracle now supports the Oracle January 2026 Release Update (RU) for Oracle Database versions 19c and 21c, and the corresponding Spatial Patch Bundle for Oracle Database version 19c. We recommend upgrading to the January 2026 RU as it includes security updates for Oracle database products. The Spatial Patch Bundle update delivers important fixes for Oracle Spatial and Graph functionality to provide reliable and optimal performance for spatial operations.

You can apply the January 2026 RU from the Amazon RDS Management Console, or by using the AWS SDK or CLI. To automatically apply updates to your database instance during your maintenance window, enable Automatic Minor Version Upgrade. You can apply the Spatial Patch Bundle update for new database instances, or upgrade existing instances to engine version '19.0.0.0.ru-2026-01.spb-1.r1' by selecting the "Spatial Patch Bundle Engine Versions" checkbox in the AWS Console.

You can use AWS Organizations upgrade rollout policy to stagger automatic minor version upgrades for your Amazon RDS database instances such that automatic minor version upgrades are first applied to non-production environments, allowing you time to validate before the upgrades are applied to production environments. For additional details, refer to Amazon RDS for Oracle documentation on using AWS Organizations upgrade rollout policy for automatic minor version upgrades.

Published: 2026-02-20 08:38:00+00:00

Amazon MQ now supports ActiveMQ minor version 5.19

🚀
New Service Feature Introduction
TL;DR: Amazon MQ now supports ActiveMQ minor version 5.19 with improvements and fixes across all AWS Regions.
AWS Services: Amazon MQ

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-mq-activemq-5-19/

Amazon MQ now supports ActiveMQ minor version 5.19, which introduces several improvements and fixes compared to the previous version of ActiveMQ supported by Amazon MQ. Amazon MQ manages the patch version upgrades for your brokers. All brokers on ActiveMQ version 5.19 will be automatically upgraded to the next compatible and secure patch version in your scheduled maintenance window.

If you are using a prior version of ActiveMQ, such as 5.18, we strongly recommend that you upgrade to ActiveMQ 5.19. You can easily perform this upgrade with just a few clicks in the AWS Management Console. To learn more about upgrading, consult the ActiveMQ Version Management section in the Amazon MQ Developer Guide. To learn more about the changes in ActiveMQ 5.19, see the Amazon MQ release notes. This version is available across all AWS Regions where Amazon MQ is available.

Published: 2026-02-19 17:00:00+00:00

Amazon Connect now includes agent time-off requests in draft schedules

🎉
Service Feature Change
TL;DR: Amazon Connect now shows agent time-off requests in draft schedules to help schedulers identify coverage gaps before publishing.
AWS Services: Amazon Connect

Link: https://aws.amazon.com/about-aws/whats-new/2025/02/amazon-connect-time-off-draft-schedules

Amazon Connect now includes agent time-off requests in draft schedules, making it easier for you to see why an agent was not scheduled on a particular day or part of a day. For example, when generating schedules for next month, you can see that an agent who typically works Monday to Friday wasn't scheduled for the first week because they're on leave, without needing to check published schedules or troubleshoot configuration to determine why the agent wasn't scheduled. This launch helps schedulers quickly identify coverage gaps and adjust schedules before publishing them to agents.

This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.

Published: 2026-02-17 18:50:00+00:00

Amazon EventBridge Scheduler adds resource count metrics for quota monitoring

🚀
New Service Feature Introduction
TL;DR: Amazon EventBridge Scheduler now emits resource count metrics to CloudWatch for quota monitoring and capacity planning.
AWS Services: Amazon EventBridge Scheduler, Amazon CloudWatch

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-eventbridge-scheduler-resource-metrics/

Amazon EventBridge Scheduler now emits resource count metrics to Amazon CloudWatch, enabling you to monitor the approximate number of schedules and schedule groups in your account. These new metrics help you identify when you're approaching your service quota limits so you can request increases before running out of capacity. For instance, you can increase the schedules quota from the default of 10 million to billions.

With Amazon EventBridge Scheduler, you can create billions of scheduled events and tasks that run across more than 270 AWS services, without provisioning or managing infrastructure. You can set up one-time or recurring schedules using cron expressions, rate expressions, or specific times, with support for time zones and daylight saving time. Today's addition of resource count metrics enhances your ability to manage capacity planning and scale your scheduled workloads with confidence.

These metrics are available at no additional cost in all AWS Regions, including the AWS GovCloud (US) Regions.

To learn more, see the Monitoring EventBridge Scheduler documentation or view the metrics in the CloudWatch console.

Published: 2026-02-17 17:29:00+00:00

Happy New Year! AWS Weekly Roundup: 10,000 AIdeas Competition, Amazon EC2, Amazon ECS Managed Instances and more (January 5, 2026) Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS Weekly Roundup covering 10,000 AIdeas Competition, Amazon EC2, Amazon ECS Managed Instances and other updates for January 5, 2026
AWS Services: Amazon EC2, Amazon ECS

Link: https://aws.amazon.com/blogs/aws/happy-new-year-aws-weekly-roundup-10000-aideas-competition-amazon-ec2-amazon-ecs-managed-instances-and-more-january-5-2026/

Happy New Year! I hope the holidays gave you time to recharge and spend time with your loved ones. Like every year, I took a few weeks off after AWS re:Invent to rest and plan ahead. I used some of that downtime to plan the next cohort for Become a Solutions Architect (BeSA). BeSA is […]

Published: 2026-01-05 17:10:37+00:00

AWS Weekly Roundup: Amazon ECS, Amazon CloudWatch, Amazon Cognito and more (December 15, 2025) Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS Weekly Roundup summarizing 2025 highlights including re:Invent, AWS Summits, and technology advancements across various AWS services.
AWS Services: Amazon ECS, Amazon CloudWatch, Amazon Cognito

Link: https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ecs-amazon-cloudwatch-amazon-cognito-and-more-december-15-2025/

Can you believe it? We’re nearly at the end of 2025. And what a year it’s been! From re:Invent recap events, to AWS Summits, AWS Innovate, AWS re:Inforce, Community Days, and DevDays and, recently, adding that cherry on the cake, re:Invent 2025, we have lived through a year filled with exciting moments and technology advancements […]

Published: 2025-12-15 16:42:05+00:00