Relevant News for Frankfurt

Here are the latest news items for Frankfurt.

Policy in Amazon Bedrock AgentCore is now generally available

🚀
New Service Feature Introduction
TL;DR: Policy in Amazon Bedrock AgentCore is now generally available, providing centralized controls for agent-tool interactions across thirteen AWS regions.
AWS Services: Amazon Bedrock, AgentCore

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/policy-amazon-bedrock-agentcore-generally-available/

Policy in Amazon Bedrock AgentCore is now generally available, providing organizations with centralized, fine-grained controls for agent-tool interactions. Policy operates outside your agent code, enabling security, compliance, and operations teams to define tool access and input validation rules without modifying the agent itself. Teams can author policies in natural language that is automatically converted to Cedar, the AWS open-source policy language. Policies are stored in a policy engine and attached to an AgentCore Gateway, which intercepts agent-tool traffic and evaluates each request against the policies before allowing or denying tool access. Policy helps ensure agents operate within defined parameters while maintaining organizational visibility and governance.
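The gateway evaluation flow described above can be sketched as a toy allow/deny check. The policy shape, tool names, and validation rule below are invented for illustration and are not the real Cedar or AgentCore data model; each hypothetical policy permits a tool for certain agent roles and may validate inputs before the call is forwarded.

```python
# Toy sketch of gateway-style policy evaluation: intercept a tool request,
# evaluate it against attached policies, and allow or deny the call.
def evaluate(policies, request):
    """Return 'ALLOW' if any policy permits the request, else 'DENY'."""
    for p in policies:
        if p["tool"] == request["tool"] and request["role"] in p["roles"]:
            check = p.get("validate")
            if check is None or check(request["input"]):
                return "ALLOW"
    return "DENY"

# Hypothetical policies: restrict email sending to an allowed domain,
# and let two roles look up orders with no input validation.
policies = [
    {"tool": "send_email", "roles": {"support-agent"},
     "validate": lambda inp: inp.get("to", "").endswith("@example.com")},
    {"tool": "lookup_order", "roles": {"support-agent", "billing-agent"}},
]

print(evaluate(policies, {"tool": "lookup_order", "role": "billing-agent", "input": {}}))  # ALLOW
print(evaluate(policies, {"tool": "send_email", "role": "support-agent",
                          "input": {"to": "user@other.org"}}))                             # DENY
```

The point of the design is that these rules live outside the agent: tightening the email-domain check changes no agent code, only the attached policy.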

Policy in AgentCore is available in thirteen AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm).

Learn more about Policy in AgentCore through the documentation, and get started with the AgentCore Starter Toolkit.

Published: 2026-03-03 18:00:00+00:00

Amazon Lightsail now offers OpenClaw, a private self-hosted AI assistant

🚀
New Service Feature Introduction
TL;DR: Amazon Lightsail now offers OpenClaw, a private self-hosted AI assistant with built-in security and Amazon Bedrock integration.
AWS Services: Amazon Lightsail, Amazon Bedrock

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-lightsail-openclaw/

Amazon Lightsail now lets you deploy OpenClaw, a private self-hosted AI assistant, on your own cloud infrastructure in a simple and secure manner.

Every Lightsail OpenClaw instance ships with built-in security controls, pre-configured and ready to use. Sandboxing isolates each agent session for improved security posture. One-click HTTPS access puts the OpenClaw dashboard in your browser securely, without requiring manual TLS configuration. Device pairing authentication ensures only your authorized devices can connect to your assistant. Automatic snapshots back up your configuration continuously, so you never lose your setup. Amazon Bedrock serves as the default model provider for Lightsail OpenClaw, and you can swap models or connect to Slack, Telegram, WhatsApp, and Discord as per your requirements.

Amazon Lightsail is available in 15 AWS Regions including US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), Asia Pacific (Tokyo), and Asia Pacific (Jakarta). To get started, visit the Lightsail console. For pricing and other details, visit the Amazon Lightsail pricing and quick start documentation pages.

Published: 2026-03-04 17:11:00+00:00

Amazon GameLift Servers launches DDoS Protection

🚀
New Service Feature Introduction
TL;DR: Amazon GameLift Servers launches DDoS Protection feature to defend multiplayer games against denial-of-service attacks at no additional cost.
AWS Services: Amazon GameLift Servers

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-gamelift-servers-ddos-protection/

We're excited to announce Amazon GameLift Servers DDoS Protection, a new feature that helps game developers protect session-based multiplayer games running on Amazon GameLift Servers and improve overall game session resiliency. DDoS Protection is designed to defend against denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks, providing proactive, User Datagram Protocol (UDP)-based traffic protection, without the need for manual byte matching and with negligible added latency.

Amazon GameLift Servers DDoS Protection co-locates a relay network directly alongside your game servers. The relay authenticates client traffic using access tokens so that only authorized traffic reaches the server. The feature also enforces per-player traffic limits to help prevent disruptions, even from seemingly legitimate sources. Game developers can use DDoS Protection to protect against targeted disruptions to specific players or entire game sessions. Check out the Amazon GameLift Servers release notes to get started through the console or API, with sample code provided for popular game engines including Unreal Engine and native C++.
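The relay behavior described above combines two checks: token authentication and a per-player traffic limit. The sketch below is a toy illustration under invented assumptions (token format, one-second windows, a two-packet limit); the real relay is managed by AWS and its internals are not published.

```python
import time

# Toy relay: drop packets with bad tokens, then rate-limit per player.
class Relay:
    def __init__(self, valid_tokens, max_packets_per_sec):
        self.valid_tokens = set(valid_tokens)
        self.limit = max_packets_per_sec
        self.counts = {}  # player_id -> (window_start_sec, packets_in_window)

    def accept(self, player_id, token, now=None):
        if token not in self.valid_tokens:
            return False  # unauthorized traffic never reaches the game server
        now = time.monotonic() if now is None else now
        start, n = self.counts.get(player_id, (now, 0))
        if now - start >= 1.0:
            start, n = now, 0  # roll over to a new one-second window
        if n >= self.limit:
            return False  # per-player limit exceeded, even with a valid token
        self.counts[player_id] = (start, n + 1)
        return True

relay = Relay(valid_tokens={"tok-abc"}, max_packets_per_sec=2)
print(relay.accept("p1", "bad-token", now=0.0))  # False
print(relay.accept("p1", "tok-abc", now=0.0))    # True
print(relay.accept("p1", "tok-abc", now=0.1))    # True
print(relay.accept("p1", "tok-abc", now=0.2))    # False (limit of 2 reached)
```

Note how the limit applies per player even for valid tokens, which mirrors the announcement's point about disruptions "even from seemingly legitimate sources".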

Amazon GameLift Servers DDoS Protection is available at no additional cost to Amazon GameLift Servers customers and is initially available in the following regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Seoul).

Published: 2026-03-04 13:00:00+00:00

Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for data processing jobs

🎉
Service Feature Availability Change
TL;DR: Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for data processing jobs with Apache Spark 3.5.6 and updated libraries.
AWS Services: Amazon SageMaker Unified Studio, AWS Glue

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-sagemaker-unified-studio-aws-glue-5-1/

Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for Visual ETL, notebook, and code-based data processing jobs. With AWS Glue 5.1 in Amazon SageMaker Unified Studio, data engineers and data scientists can run jobs on Apache Spark 3.5.6 with Python 3.11 and Scala 2.12.18, and use updated open table format libraries including Apache Iceberg 1.10.0, Apache Hudi 1.0.2, and Delta Lake 3.3.2.

You can use AWS Glue 5.1 in Amazon SageMaker Unified Studio when creating data processing jobs by selecting Glue 5.1 from the version dropdown in job settings. This applies to Visual ETL jobs, notebook jobs, and code-based jobs, so you can take advantage of the latest Spark runtime and open table format libraries across all your data processing workflows.

AWS Glue 5.1 in Amazon SageMaker Unified Studio is available in all the regions where Amazon SageMaker Unified Studio is available. To learn more, visit the Amazon SageMaker Unified Studio documentation. For details on what's included in AWS Glue 5.1, including updated open table format support and access control capabilities, see the AWS Glue documentation.

Published: 2026-03-03 23:00:00+00:00

Amazon Connect now supports dynamic dialing mode switching for outbound campaigns

🚀
New Service Feature Introduction
TL;DR: Amazon Connect Outbound Campaigns now supports dynamic dialing mode switching, allowing real-time changes between preview and non-preview modes during active campaigns.
AWS Services: Amazon Connect, Amazon Connect Outbound Campaigns

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/connect-dynamic-dialing-modes/

Today, AWS announces the general availability of dynamic dialing mode switching for Amazon Connect Outbound Campaigns, which allows contact center administrators to change between preview and non-preview dialing modes during active campaign execution. Previously, campaigns were locked into their initial dialing mode once started, requiring administrators to stop and restart campaigns to adjust strategies. This launch solves the problem of inflexible dialing strategies that couldn't adapt to real-time business needs and agent availability changes.

Dynamic dialing mode switching enables contact centers to optimize agent productivity and campaign efficiency in real-time without campaign interruptions. For example, you can automatically switch from progressive dialing to preview mode when handling high-priority contacts that require additional context, then revert back when traffic returns to normal patterns. This flexibility is particularly valuable for campaigns with varying contact priorities or fluctuating agent availability throughout the day.
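The example scenario above amounts to a small mode-switching rule. The sketch below is a toy illustration: the mode names follow the announcement, but the trigger (a flag for queued high-priority contacts) and the switching logic are invented, not the Amazon Connect API.

```python
# Toy dialing-mode switcher: move an active campaign to preview mode when
# high-priority contacts are queued, and revert when traffic normalizes.
def choose_mode(current_mode, high_priority_queued):
    if high_priority_queued and current_mode == "progressive":
        return "preview"      # give agents context before each call
    if not high_priority_queued and current_mode == "preview":
        return "progressive"  # revert once traffic returns to normal
    return current_mode

mode = "progressive"
mode = choose_mode(mode, high_priority_queued=True)
print(mode)  # preview
mode = choose_mode(mode, high_priority_queued=False)
print(mode)  # progressive
```

The key capability in the launch is that this kind of switch no longer requires stopping and restarting the campaign.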

Dynamic dialing mode switching is available at no additional cost in all AWS Regions where Amazon Connect Outbound Campaigns is supported: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town).

To learn more, see the Amazon Connect Administrator Guide or visit the Amazon Connect website.

Published: 2026-02-26 19:32:00+00:00

Amazon Location Service introduces LLM Context as a Kiro power and Claude Code plugin to improve AI performance

🚀
New Service Feature Introduction
TL;DR: Amazon Location Service introduces LLM Context as AI development tools to improve code accuracy and accelerate location-based feature implementation.
AWS Services: Amazon Location Service

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-location-service-introduces-kiro-power-claude-skill-llm-context/

Today, Amazon Location launched curated AI Agent context as a Kiro power, Claude Code plugin, and agent skill in the open Agent Skills format, usable by any compatible agent. Developers can use this context with generative AI tools such as Kiro, Claude Code, and Cursor to improve code accuracy, accelerate feature implementation, and reduce iteration time when adding Amazon Location-enabled capabilities to their applications. Amazon Location Service is a mapping service that offers geospatial data and location functionality such as maps, places search and geocoding, route planning, device tracking, and geofencing.

Once loaded by AI development tools, the curated Amazon Location context accelerates development of common location-based solutions such as address entry forms for delivery applications, map display, nearest-store lookup, and route visualization. The context includes pre-validated implementation patterns and step-by-step instructions for these use cases, allowing developers to focus on application-specific logic rather than API integration details.

Amazon Location Service is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Malaysia), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Spain), Europe (Stockholm), South America (São Paulo), and AWS GovCloud (US-West).

To get started, download and install the context to your agent of choice from the amazon-location-agent-context repository on GitHub, or learn more about using AI and LLMs to accelerate development with Amazon Location Service.

Published: 2026-02-25 16:26:00+00:00

Amazon EC2 M8a instances now available in AWS Europe (Frankfurt) region

🖥️
Instance Type Availability Change
TL;DR: Amazon EC2 M8a instances now available in AWS Europe (Frankfurt) region, offering 30% higher performance than M7a instances.
AWS Services: Amazon EC2, AWS Nitro Cards, AWS Management Console, Savings Plans, On-Demand instances, Spot instances

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ec2-m8a-instances-europe-frankfurt/

Starting today, the general-purpose Amazon EC2 M8a instances are available in the AWS Europe (Frankfurt) Region. M8a instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to M7a instances.

M8a instances deliver 45% more memory bandwidth compared to M7a instances, making them ideal even for latency-sensitive workloads, and they deliver still higher gains for specific workloads: up to 60% faster on the GroovyJVM benchmark and up to 39% faster on the Cassandra benchmark compared to Amazon EC2 M7a instances. M8a instances are SAP-certified and offer 12 sizes, including 2 bare metal sizes, so customers can precisely match their workload requirements.

M8a instances are built using the latest sixth-generation AWS Nitro Cards and are ideal for applications that benefit from high performance and high throughput, such as financial applications, gaming, rendering, application servers, simulation modeling, mid-size data stores, application development environments, and caching fleets.

To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information visit the Amazon EC2 M8a instance page.

Published: 2026-02-24 16:00:00+00:00

Automated Reasoning policies now include references to the source document

🎉
Service Feature Availability Change
TL;DR: Automated Reasoning policies now include source document references, available in 6 regions through Amazon Bedrock console and SDK.
AWS Services: Amazon Bedrock

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/automated-reasoning-policies-include-references/

AWS announces the launch of source document references for Automated Reasoning policies, simplifying the task of reviewing and refining an Automated Reasoning policy. Automated Reasoning checks use formal verification techniques to validate that content generated by foundation models complies with an Automated Reasoning policy. Automated Reasoning checks deliver up to 99% accuracy at detecting correct responses from LLMs, giving you provable assurance in detecting AI hallucinations while also assisting with ambiguity detection in model responses.

To create Automated Reasoning policies, users upload documents that describe the rules in a knowledge domain, such as HR policies or financial transaction approval guidelines. These documents are translated into a collection of formal logic rules and variables called an Automated Reasoning policy. With source document references, users can now review the generated policy rules and variables against references to content they are familiar with from the original document.
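The review workflow above can be pictured as each generated rule carrying a pointer back to the source passage it was derived from. The sketch below is a toy model with hypothetical field names and an invented example rule; it is not the actual Automated Reasoning data model.

```python
from dataclasses import dataclass

# Toy model: a formal rule paired with a reference to its source passage,
# so a reviewer can check the logic against text they already know.
@dataclass
class PolicyRule:
    rule_id: str
    logic: str            # formal-logic rendering of the rule
    source_doc: str       # document the rule was extracted from
    source_excerpt: str   # passage the reviewer compares against

rules = [
    PolicyRule(
        rule_id="R1",
        logic="employee.tenure_years >= 1 -> employee.pto_days >= 15",
        source_doc="hr-policy.pdf",
        source_excerpt="Employees with at least one year of tenure receive 15 or more PTO days.",
    ),
]

for r in rules:
    print(f"{r.rule_id}: {r.logic}  [from {r.source_doc}]")
```

The value of the reference is in review: a domain expert who cannot read formal logic can still validate the rule by reading the excerpt beside it.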

Source document references for Automated Reasoning policies are now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), and Europe (Paris) Regions. Customers can access the feature through the Amazon Bedrock console, as well as the Amazon Bedrock Python SDK.

To learn more about Automated Reasoning checks and how you can integrate it into your generative AI workflows, please read the Amazon Bedrock documentation, review the tutorials on the AWS AI blog, and visit the Bedrock Guardrails webpage.

Published: 2026-02-23 09:35:00+00:00

Amazon EC2 M8i-flex instances are now available in additional AWS regions

🖥️
Instance Type Availability Change
TL;DR: Amazon EC2 M8i-flex instances now available in 6 additional regions with Intel Xeon 6 processors
AWS Services: Amazon EC2

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ec2-m8i-flex-instances-FRA-ICN-KUL-NRT-SIN-YUL-region/

Starting today, Amazon EC2 M8i-flex instances are available in the Asia Pacific (Malaysia, Seoul, Singapore, Tokyo), Europe (Frankfurt), and Canada (Central) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% better performance than M7i-flex instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to M7i-flex instances.

M8i-flex instances are the easiest way to get price performance benefits for a majority of general-purpose workloads like web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources.

To get started, sign in to the AWS Management Console. For more information about the M8i-flex instances visit the AWS News blog.

Published: 2026-02-19 02:00:00+00:00

Amazon Connect Cases now supports AWS Service Quotas

🚀
New Service Feature Introduction
TL;DR: Amazon Connect Cases now supports AWS Service Quotas for centralized limit management and automatic quota increase approvals.
AWS Services: Amazon Connect Cases, AWS Service Quotas

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-connect-cases-aws-service-quotas

Amazon Connect Cases now supports AWS Service Quotas, giving administrators a centralized way to view applied limits, monitor utilization, and scale case workloads without hitting unexpected service constraints. You can request quota increases directly from the Service Quotas console, and eligible requests are automatically approved without manual intervention.

Amazon Connect Cases is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town). To learn more and get started, visit the Amazon Connect Cases webpage and documentation.

Published: 2026-02-18 17:00:00+00:00

Amazon OpenSearch Service now supports storage optimized i7i instances

🖥️
New Instance Type Introduction
TL;DR: Amazon OpenSearch Service now supports i7i storage optimized instances with better performance and lower latency across multiple regions.
AWS Services: Amazon OpenSearch Service, AWS Nitro System

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-opensearch-service-supports-i7i-instances

Amazon OpenSearch Service now supports the latest-generation, x86-based, high-performance storage optimized I7i instances. Powered by 5th-generation Intel Xeon Scalable processors, I7i instances deliver up to 23% better compute performance and more than 10% better price performance over previous-generation I4i instances.

I7i instances have 3rd-generation AWS Nitro SSDs with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances. Built on the AWS Nitro System, these instances offload CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads.

Amazon OpenSearch Service supports I7i instances in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), Canada West (Calgary), Europe (Frankfurt, Ireland, London, Milan, Spain, Stockholm, Zurich), Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Middle East (UAE), South America (São Paulo), and AWS GovCloud (US-West).

For Region-specific availability and pricing, visit our pricing page. To learn more about Amazon OpenSearch Service and its capabilities, visit our product page.

Published: 2026-02-18 04:30:00+00:00

Amazon EC2 C8a instances now available in the Europe (Frankfurt) and Europe (Ireland) regions

🖥️
Instance Type Availability Change
TL;DR: Amazon EC2 C8a instances now available in Europe (Frankfurt) and Europe (Ireland) regions with improved performance.
AWS Services: Amazon EC2, AWS Nitro System, Savings Plans, On-Demand instances, Spot instances

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ec2-c8a-instances-europe-frankfurt-europe-ireland-regions

Starting today, the compute-optimized Amazon EC2 C8a instances are available in the Europe (Frankfurt) and Europe (Ireland) regions. C8a instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to C7a instances.

C8a instances deliver 33% more memory bandwidth compared to C7a instances, making these instances ideal for latency-sensitive workloads. Compared to Amazon EC2 C7a instances, they are up to 57% faster on the GroovyJVM benchmark, allowing better response times for Java-based applications. C8a instances offer 12 sizes, including 2 bare metal sizes, so customers can precisely match their workload requirements.

C8a instances are built on the AWS Nitro System and are ideal for high-performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information visit the Amazon EC2 C8a instance page.

Published: 2026-02-17 17:00:00+00:00

Amazon Connect now supports multi-line text fields on case templates

🎉
Service Feature Change
TL;DR: Amazon Connect Cases now supports multi-line text fields on case templates for detailed documentation and structured data capture.
AWS Services: Amazon Connect, Amazon Connect Cases

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-connect-cases-multiline-text-fields/

Amazon Connect now supports larger, multi-line text fields on case templates allowing agents to capture detailed free-form notes and structured data directly within cases. These fields expand vertically to accommodate multiple paragraphs, making it easier to document root cause analysis, transaction details, investigation findings, or customer-facing updates.

Amazon Connect Cases is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town). To learn more and get started, visit the Amazon Connect Cases webpage and documentation.

Published: 2026-02-17 17:00:00+00:00

Global News

Introducing OpenClaw on Amazon Lightsail to run your autonomous private AI agents Blog Post

🚀
New Service Introduction
TL;DR: AWS launches OpenClaw on Amazon Lightsail for running autonomous private AI agents with pre-configured Amazon Bedrock integration.
AWS Services: Amazon Lightsail, Amazon Bedrock

Link: https://aws.amazon.com/blogs/aws/introducing-openclaw-on-amazon-lightsail-to-run-your-autonomous-private-ai-agents/

With OpenClaw on Amazon Lightsail, you can launch an OpenClaw instance, pair your browser, enable AI capabilities, and optionally connect messaging channels. Your Lightsail OpenClaw instance is pre-configured with Amazon Bedrock, so you can start using your AI assistant immediately, with no additional configuration required.

Published: 2026-03-04 20:04:16+00:00

AWS Weekly Roundup: Claude Sonnet 4.6 in Amazon Bedrock, Kiro in GovCloud Regions, new Agent Plugins, and more (February 23, 2026) Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS Weekly Roundup featuring Claude Sonnet 4.6 in Amazon Bedrock, Kiro in GovCloud Regions, and new Agent Plugins
AWS Services: Amazon Bedrock

Link: https://aws.amazon.com/blogs/aws/aws-weekly-roundup-claude-sonnet-4-6-in-amazon-bedrock-kiro-in-govcloud-regions-new-agent-plugins-and-more-february-23-2026/

Last week, my team met many developers at Developer Week in San Jose. My colleague Vinicius Senger delivered a great keynote about renascent software, a new way of building and evolving applications where humans and AI collaborate as co-developers using Kiro. Other colleagues, Du'An Lightfoot, Elizabeth Fuentes, Laura Salinas, and Sandhya Subramani, spoke about building and […]

Published: 2026-02-23 16:56:24+00:00

AWS IAM Identity Center now supports multi-Region replication for AWS account access and application use Blog Post

🚀
New Service Feature Introduction
TL;DR: AWS IAM Identity Center now supports multi-Region replication for workforce identities and permission sets, improving resiliency and enabling closer application deployment.
AWS Services: AWS IAM Identity Center

Link: https://aws.amazon.com/blogs/aws/aws-iam-identity-center-now-supports-multi-region-replication-for-aws-account-access-and-application-use/

AWS IAM Identity Center now supports multi-Region replication of workforce identities and permission sets, enabling improved resiliency for AWS account access and allowing applications to be deployed closer to users while meeting data residency requirements.

Published: 2026-02-03 19:13:34+00:00

AWS Batch now supports configurable scale down delay

🚀
New Service Feature Introduction
TL;DR: AWS Batch introduces configurable scale down delay to reduce job processing delays for intermittent workloads.
AWS Services: AWS Batch

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/aws-batch-configurable-scale-down-delay/

AWS Batch now allows you to configure a scale down delay for managed compute environments, helping reduce job processing delays for intermittent and periodic workloads. With the new minScaleDownDelayMinutes parameter, you can specify how long AWS Batch keeps instances running after their jobs complete (from 20 minutes to 1 week), preventing unnecessary instance terminations and relaunches that can delay subsequent job processing.

You can configure the scale down delay when creating or updating a compute environment via the AWS Batch API (CreateComputeEnvironment or UpdateComputeEnvironment) or the AWS Batch Management Console. The delay is applied at the instance level, based on when each instance last completed a job.
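The instance-level behavior described above reduces to a simple timing check. The sketch below is an illustration: the parameter name minScaleDownDelayMinutes comes from the announcement, but the decision function itself is invented, not the AWS Batch implementation.

```python
# Toy sketch: an instance is eligible for scale-down only once the
# configured delay has elapsed since it last completed a job.
def eligible_for_scale_down(now_min, last_job_completed_min, min_scale_down_delay_minutes):
    idle_minutes = now_min - last_job_completed_min
    return idle_minutes >= min_scale_down_delay_minutes

# Instance finished its last job at t=100 min; delay configured to 30 minutes.
print(eligible_for_scale_down(now_min=120, last_job_completed_min=100,
                              min_scale_down_delay_minutes=30))  # False (only 20 min idle)
print(eligible_for_scale_down(now_min=135, last_job_completed_min=100,
                              min_scale_down_delay_minutes=30))  # True
```

For intermittent workloads, keeping the instance warm through short idle gaps avoids the terminate-and-relaunch cycle that delays the next job.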

Scale down delay is supported today in all AWS Regions where AWS Batch is available. For more information, see the AWS Batch API Guide.

Published: 2026-03-02 19:05:00+00:00

AWS Elemental MediaLive Now Supports SRT Listener Mode

🚀
New Service Feature Introduction
TL;DR: AWS Elemental MediaLive now supports SRT Listener mode for inputs and outputs, simplifying network setup by eliminating firewall configurations.
AWS Services: AWS Elemental MediaLive

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-medialive-introduces-srt-listener/

AWS Elemental MediaLive now supports Secure Reliable Transport (SRT) Listener mode for both inputs and outputs. With SRT Listener mode, MediaLive waits for connections rather than initiating them. Upstream sources push live video directly to MediaLive, and downstream systems pull encoded streams on demand. This simplifies network setup by removing the need for complex firewall configurations or static, publicly accessible IP addresses on the source or destination side. SRT Listener mode complements MediaLive's existing SRT Caller mode, giving you full control over which side of the connection initiates the SRT handshake.

SRT Listener mode enables flexible contribution and distribution workflows. On the input side, you can push streams from on-premises encoders or remote production sites, including MediaLive Anywhere deployments, directly to MediaLive in the cloud without coordinating firewall changes with your network team. On the output side, downstream distribution partners can connect to MediaLive and pull encoded streams when ready, without requiring MediaLive to initiate outbound connections. Both SRT Listener inputs and outputs support configurable latency settings and mandatory AES encryption to help ensure content security.

SRT Listener mode is available in all AWS Regions where AWS Elemental MediaLive is offered. To get started, see Setting up an SRT Listener input and Creating SRT outputs in listener mode in the AWS Elemental MediaLive User Guide.

Published: 2026-02-28 00:14:00+00:00

Amazon Lightsail expands blueprint selection with a new WordPress blueprint

🚀
New Service Feature Introduction
TL;DR: Amazon Lightsail introduces new WordPress blueprint with guided setup wizard and IMDSv2 enforcement by default.
AWS Services: Amazon Lightsail

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/wordpress-blueprint-lightsail/

Amazon Lightsail now offers a new WordPress blueprint, making it easier than ever to launch and manage a WordPress website on the cloud. With just a few clicks, you can create a Lightsail virtual private server (VPS) preinstalled with WordPress, and follow a guided setup wizard to get your site fully configured and running in minutes. This new blueprint has Instance Metadata Service Version 2 (IMDSv2) enforced by default.

With Lightsail, you can easily get started on the cloud by choosing a blueprint and an instance bundle to build your web application. Lightsail instance bundles include instances preinstalled with your preferred operating system, storage, and monthly data transfer allowance, giving you everything you need to get up and running quickly. The new WordPress blueprint includes a step-by-step setup workflow that walks you through connecting a custom domain, configuring DNS, attaching a static IP address, and enabling HTTPS encryption using a free Let's Encrypt SSL/TLS certificate โ€” all from within the Lightsail console.

This new blueprint is now available in all AWS Regions where Lightsail is available. For more information on blueprints supported on Lightsail, see Lightsail documentation. For more information on pricing, or to get started with your free trial, click here.

Published: 2026-02-27 23:28:00+00:00

Amazon Bedrock batch inference now supports the Converse API format

🎉
Service Feature Change
TL;DR: Amazon Bedrock batch inference now supports Converse API format for unified model-agnostic input across batch workloads.
AWS Services: Amazon Bedrock

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-bedrock-batch-inference-supports-converse-api-format/

Amazon Bedrock batch inference now supports the Converse API as a model invocation type, enabling you to use a consistent, model-agnostic input format for your batch workloads.

Previously, batch inference required model-specific request formats using the InvokeModel API. Now, when creating a batch inference job, you can select Converse as the model invocation type and structure your input data using the standard Converse API request format. Output for Converse batch jobs follows the Converse API response format. With this feature, you can use the same unified request format for both real-time and batch inference, simplifying prompt management and reducing the effort needed to switch between models. You can configure the Converse model invocation type through both the Amazon Bedrock console and the API.
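As a sketch of what a Converse-format batch input could look like, the snippet below builds JSONL records using the Converse message shape (role plus content blocks). The recordId/modelInput wrapper is an assumption modeled on the existing InvokeModel-based batch format; confirm the exact Converse batch record layout against the Amazon Bedrock User Guide before relying on it.

```python
import json

# Hypothetical prompts to run as one batch job.
prompts = ["Summarize our Q3 results.", "Draft a welcome email."]

# Each record wraps a Converse-style request body: a list of messages,
# where each message has a role and a list of content blocks.
records = [
    {
        "recordId": f"REC{i:04d}",  # assumed wrapper field, as in InvokeModel batch jobs
        "modelInput": {
            "messages": [
                {"role": "user", "content": [{"text": prompt}]}
            ]
        },
    }
    for i, prompt in enumerate(prompts)
]

# Batch input is newline-delimited JSON, one record per line.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl.splitlines()[0])
```

Because the same message shape works for real-time Converse calls, the prompt-construction code can be shared between interactive and batch paths, which is the simplification the launch highlights.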

This capability is available in all AWS Regions that support Amazon Bedrock batch inference. To get started, see Create a batch inference job and Format and upload your batch inference data in the Amazon Bedrock User Guide.

Published: 2026-02-27 19:00:00+00:00

Amazon OpenSearch Service adds new insights for improved cluster stability

🚀
New Service Feature Introduction
TL;DR: Amazon OpenSearch Service adds two new Cluster Insights: Cluster Overload and Suboptimal Sharding Strategy for improved cluster monitoring.
AWS Services: Amazon OpenSearch Service

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-opensearch-service-adds-new-insights-improved-cluster-stability/

Amazon OpenSearch Service has enhanced Cluster Insights with two new insights โ€” Cluster Overload and Suboptimal Sharding Strategy. Suboptimal Sharding Strategy provides instant visibility into shard imbalances that cause uneven workload distribution, while Cluster Overload surfaces elevated cluster resource utilization that can lead to request throttling or rejections. Both insights come with details of affected resources along with actionable mitigation recommendations.

Previously, identifying resource constraints and shard imbalances required manually correlating multiple metrics and logs, making it difficult to detect issues early. With these new insights, you can proactively monitor cluster health and take timely action.

Suboptimal Sharding Strategy detects shard imbalances caused by indices with too few shards relative to the number of data nodes, or by shards carrying disproportionately large amounts of data compared to others. It identifies the root cause of uneven workload distribution and provides recommendations to help you achieve optimal shard distribution for improved query performance and resource utilization. Similarly, Cluster Overload helps you identify elevated resource utilization, including CPU, memory, disk I/O, disk throughput, and disk utilization that can potentially lead to request throttling or rejections. It also provides scale-up recommendations so you can take timely action to protect your critical workloads.
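The two conditions the Suboptimal Sharding Strategy insight looks for can be sketched as a simple heuristic. This is illustrative only (not the service's actual detection algorithm, and the thresholds are arbitrary assumptions):

```python
def flag_suboptimal_sharding(shard_sizes_gb, data_node_count,
                             min_shards_per_node=1.0, skew_ratio=3.0):
    """Illustrative heuristic for the two conditions described above:
    too few shards relative to data nodes, and shards carrying
    disproportionately large amounts of data compared to their peers.
    Thresholds are assumed, not the service's real values."""
    findings = []
    # Condition 1: fewer shards than can spread across the data nodes.
    if len(shard_sizes_gb) / data_node_count < min_shards_per_node:
        findings.append("too_few_shards")
    # Condition 2: one shard far larger than the average shard.
    avg = sum(shard_sizes_gb) / len(shard_sizes_gb)
    if max(shard_sizes_gb) > skew_ratio * avg:
        findings.append("skewed_shard_size")
    return findings
```

For example, an index with one 50 GB shard alongside three 1 GB shards would be flagged for size skew, while a 2-shard index on an 8-node domain would be flagged for having too few shards.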

These new insights are available at no additional cost for OpenSearch version 2.17 or later in all Regions where the OpenSearch UI is available. See the complete list of supported Regions here. To learn more, visit the Cluster Insights documentation or view the complete catalog of available insights.

Published: 2026-02-27 10:49:00+00:00

Amazon Bedrock announces OpenAI-compatible Projects API

🚀
New Service Feature Introduction
TL;DR: Amazon Bedrock now supports OpenAI-compatible Projects API in Mantle inference engine for better isolation and access control.
AWS Services: Amazon Bedrock

Link: https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-bedrock-projects-api-mantle-inference-engine/

Amazon Bedrock now supports an OpenAI-compatible Projects API in the Mantle inference engine. Amazon Bedrock is a fully managed service that offers a broad selection of best-in-class foundation models from leading AI companies like Anthropic, Meta, and OpenAI, along with specialized developer tools that make it easy to build and scale compelling generative AI applications. Mantle is Amazon Bedrock's distributed inference engine for large-scale model serving and supports OpenAI-compatible APIs.

With Projects API, customers who have more than one application, environment, or team can now create individual projects to achieve better isolation across all of them. You can assign different IAM-based access control to each project and add tags to each project for better cost visibility.

Projects are available to all customers using the OpenAI-compatible APIs (the Responses API and Chat Completions API) through the Mantle inference engine in Amazon Bedrock. There is no additional charge for using the Projects API; you pay only for the underlying model inference you consume. To get started with the Projects API in Amazon Bedrock, visit the Amazon Bedrock documentation.
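As a rough sketch of what project-scoped requests might look like, the snippet below assembles an OpenAI-style Chat Completions request. The endpoint path, the `OpenAI-Project` header name, and the model ID are all assumptions made for illustration; check the Amazon Bedrock documentation for the real endpoint and project-scoping mechanism.

```python
import json

# Hypothetical endpoint; confirm the Region and path in the Bedrock docs.
BASE_URL = "https://bedrock-runtime.us-east-1.amazonaws.com/openai/v1"

def build_chat_request(project_id, model_id, prompt):
    """Assemble an OpenAI-compatible Chat Completions request scoped to
    a project. The project header name is an assumption borrowed from
    the OpenAI convention, not a documented Bedrock contract."""
    headers = {
        "Content-Type": "application/json",
        "OpenAI-Project": project_id,  # assumed header name
    }
    body = {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }
    return BASE_URL + "/chat/completions", headers, json.dumps(body)

url, headers, body = build_chat_request(
    "proj-analytics", "openai.gpt-oss-20b", "Hello"
)
```

Per-project IAM policies and cost-allocation tags would then apply to whatever project the request resolves to.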

Published: 2026-02-26 23:06:00+00:00

AWS Lambda Durable Execution SDK for Java now available in Developer Preview

🚀
New Service Feature Introduction
TL;DR: AWS Lambda Durable Execution SDK for Java now available in developer preview for building resilient multi-step applications.
AWS Services: AWS Lambda

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/lambda-durable-execution-java-preview/

Today, AWS announces the developer preview of the AWS Lambda Durable Execution SDK for Java. With this SDK, developers can build resilient multi-step applications like order processing pipelines, AI-assisted workflows, and human-in-the-loop approvals using Lambda durable functions, without implementing custom progress tracking or integrating external orchestration services.

Lambda durable functions extend Lambda's event-driven programming model with operations that checkpoint progress automatically and pause execution for up to a year when waiting on external events. The new Durable Execution SDK for Java provides an idiomatic experience for building with durable functions and is compatible with Java 17+. This preview includes steps for progress tracking, waits for efficient suspension, and durable futures for callback-based workflows.
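The checkpoint-and-replay idea behind durable functions can be sketched with a toy in-memory example. This is not the AWS SDK (which is Java, and persists checkpoints durably across invocations); it only illustrates why completed steps are not re-executed when a paused function resumes:

```python
# Toy sketch of durable steps: each named step runs once and its result
# is checkpointed; on replay the checkpoint is returned instead of
# re-running the side effect. The real SDK persists checkpoints outside
# the process and can suspend for up to a year.
checkpoints = {}
calls = []

def step(name, fn):
    """Run fn once; on replay, return the checkpointed result."""
    if name in checkpoints:
        return checkpoints[name]
    result = fn()
    checkpoints[name] = result
    return result

def handler():
    order = step("reserve-inventory",
                 lambda: calls.append("reserve") or "reserved")
    payment = step("charge-card",
                   lambda: calls.append("charge") or "charged")
    return order, payment

first = handler()
replay = handler()  # replays from checkpoints; side effects run once
```

In the real SDK, the replay happens when Lambda re-invokes the function after a wait or a callback, so the same handler code safely resumes mid-workflow.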

To get started, see the Lambda durable functions developer guide and the AWS Lambda Durable Execution SDK for Java on GitHub. To learn more about Lambda durable functions, visit the product page.

On-demand functions are not billed for duration while paused. For pricing details, see AWS Lambda Pricing. For information about AWS Regions where Lambda durable functions are available, see the AWS Regional Services List.

Published: 2026-02-26 07:00:00+00:00

AWS launches a playground for interactive Aurora DSQL database exploration

🚀
New Service Feature Introduction
TL;DR: AWS launches browser-based playground for Aurora DSQL database exploration without requiring AWS account or setup
AWS Services: Amazon Aurora DSQL

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-aurora-dsql-launches-playground/

Today, AWS announces a browser-based playground that enables developers to interact with an Amazon Aurora DSQL database without requiring an AWS account. With zero setup or infrastructure configuration, developers can create schemas, load data, and execute SQL queries directly from their browser.

The playground for Aurora DSQL provides an instant, ephemeral database environment, making it easy to experiment and learn. Built-in sample datasets help developers quickly explore core Aurora DSQL capabilities and get hands-on experience in minutes.

To start exploring, visit the playground for Aurora DSQL. To get started with production workloads and learn more, visit Amazon Aurora DSQL.

Published: 2026-02-25 18:00:00+00:00

Amazon WorkSpaces Applications extends support for 4K resolution

🎉
Service Feature Change
TL;DR: Amazon WorkSpaces Applications now supports 4K resolution on non-accelerated instances across all connection modes at no additional cost.
AWS Services: Amazon WorkSpaces Applications

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-workspaces-applications-4K-resolution/

Amazon WorkSpaces Applications now supports up to 4K (4096 x 2160) resolution on non-accelerated instance types and across all client connection modes. Previously, higher resolution monitors were limited to graphics-accelerated instances in WorkSpaces Applications classic mode. This update allows you to choose the appropriate instance type and provide a better end-user experience that aligns with your hardware investments.

This new feature benefits customers by providing a consistent and high-quality streaming experience across instances regardless of hardware acceleration capabilities. Whether using native application mode, classic application mode, or desktop view, your end users can now enjoy up to 4K resolution if their display device supports it. This enhancement is particularly valuable for users with ultra-wide monitors (21:9 aspect ratio) at 4K resolution, ensuring applications display with optimal clarity and detail at the maximum supported resolution of 4K.

These features are available at no additional cost in all the AWS Regions where WorkSpaces Applications is available. WorkSpaces Applications offers pay-as-you-go pricing. To get started with WorkSpaces Applications, see Amazon WorkSpaces Applications: Getting started.

To enable these features for your users, you must use a WorkSpaces Applications image that uses a WorkSpaces Applications agent released on or after February 4, 2026, or an image that uses Managed WorkSpaces Applications image updates released on or after February 18, 2026.

Published: 2026-02-25 16:00:00+00:00

AWS Deadline Cloud now supports running tasks together in chunks

🚀
New Service Feature Introduction
TL;DR: AWS Deadline Cloud now supports chunking tasks together for more efficient execution of short tasks or those with long startup times.
AWS Services: AWS Deadline Cloud

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-deadline-cloud-running-tasks-together-in/

Today, AWS Deadline Cloud announces support for grouping tasks into chunks to efficiently execute multiple tasks together. AWS Deadline Cloud is a fully managed service that simplifies render management for computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design.

When your job has short tasks, or tasks that need to run in an environment with a long startup time, chunking them together for execution reduces the time and cost of completing the job. When creating a job, you can now manually specify a chunk size for the number of tasks to group together for execution, or alternatively specify a target run time for the execution of a chunk of tasks. The target run time is used to dynamically adjust the number of tasks grouped together as the job runs, improving execution efficiency and converging on the target run time.
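The target-run-time mode amounts to picking a chunk size from observed task durations. A minimal sketch of that arithmetic (illustrative only; Deadline Cloud's actual adaptation logic is not documented here):

```python
def chunk_size_for_target(avg_task_seconds, target_runtime_seconds,
                          total_tasks):
    """Pick how many tasks to group per chunk so each chunk runs for
    roughly the target time. Illustrative math only, not the service's
    real algorithm."""
    size = max(1, round(target_runtime_seconds / avg_task_seconds))
    return min(size, total_tasks)
```

For example, with 30-second tasks and a 10-minute target, each chunk would bundle about 20 tasks, amortizing a long environment startup over all of them; tasks already longer than the target run one per chunk.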

Running tasks together in chunks is now available in all AWS Regions where AWS Deadline Cloud is supported. To get started, visit the Deadline Cloud developer guide.

Published: 2026-02-24 18:13:00+00:00

AWS AppConfig integrates with New Relic for automated rollbacks

🚀
New Service Feature Introduction
TL;DR: AWS AppConfig launches New Relic integration for automated rollbacks during feature flag deployments based on application health monitoring.
AWS Services: AWS AppConfig

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-appconfig-new-relic-for-automated-rollback/

AWS AppConfig today launched a new integration that enables automated, intelligent rollbacks during feature flag and dynamic configuration deployments using New Relic Workflow Automation. Building on AWS AppConfig's third-party alert capability, this integration provides teams using New Relic with a solution to automatically detect degraded application health and trigger rollbacks in seconds, eliminating manual intervention.

When you deploy feature flags using AWS AppConfig's gradual deployment strategy, the AWS AppConfig New Relic Extension continuously monitors your application health against configured alert conditions. If issues are detected during a feature flag update and deployment, such as increased error rates or elevated latency, the New Relic Workflow automatically sends a notification to trigger an immediate rollback, reverting the feature flag to its previous state. This closed-loop automation reduces the time between detection and remediation from minutes to seconds, minimizing customer impact during failed deployments.
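The health-gate decision at the heart of this loop can be sketched as a small predicate. This is a toy stand-in for the New Relic alert conditions described above (the thresholds are assumptions); in practice the New Relic Workflow evaluates the alerts and the notification triggers AppConfig to stop and revert the in-flight deployment:

```python
def should_roll_back(error_rate, p99_latency_ms,
                     max_error_rate=0.01, max_p99_ms=500):
    """Toy health check mirroring the alert conditions described above:
    roll back if error rate or tail latency breaches its threshold.
    Threshold values are illustrative assumptions."""
    return error_rate > max_error_rate or p99_latency_ms > max_p99_ms
```

When this returns True mid-deployment, the automation reverts the feature flag to its previous state rather than letting the gradual rollout continue.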

Published: 2026-02-24 16:00:00+00:00

MediaConvert introduces new video probe API and UI

🚀
New Service Feature Introduction
TL;DR: AWS Elemental MediaConvert introduces new Probe API for free metadata analysis of media files without processing video content.
AWS Services: AWS Elemental MediaConvert, AWS Step Functions

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-mediaconvert-introduces-video-probe/

Introducing Probe API, a powerful and free metadata analysis tool for AWS Elemental MediaConvert. Optimized for efficiency, Probe API reads header metadata to quickly return essential information about your media files, including codec specifications, pixel formats, color space details, and container information, all without processing the actual video content. This capability makes Probe API an invaluable tool for content creators, developers, and media professionals who need to quickly validate files, automate workflows, or use AWS Step Functions to make encoding decisions based on source material characteristics.
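A common use of probed metadata is validating a source file before submitting an encode job. The sketch below uses a hypothetical metadata shape (the field names here are illustrative, not the actual Probe API response; see the MediaConvert API Reference for the real schema):

```python
# Hypothetical probe-style metadata; field names are illustrative only.
sample = {
    "container": "MP4",
    "video": {"codec": "H_264", "width": 1920, "height": 1080},
}

SUPPORTED_VIDEO_CODECS = {"H_264", "H_265", "PRORES"}

def validate_source(metadata):
    """Gate an encoding workflow on probed metadata before submitting
    a job: reject sources whose video codec isn't in the allow-list."""
    codec = metadata["video"]["codec"]
    if codec not in SUPPORTED_VIDEO_CODECS:
        return False, f"unsupported codec: {codec}"
    return True, "ok"
```

A Step Functions state machine could branch on this kind of check to route unsupported sources to a remediation path instead of a failed encode.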

For complete implementation details and usage examples, see the MediaConvert API Reference documentation. The Probe API can be used in any AWS Region where AWS Elemental MediaConvert is available, making it a versatile tool for streamlining your media workflow analysis.

To get started with Probe API and explore its capabilities, visit the AWS Elemental MediaConvert product page or consult the User Guide for comprehensive documentation.

Published: 2026-02-24 00:01:00+00:00

Amazon Aurora DSQL now integrates with Kiro powers and AI agent skills

🚀
New Service Feature Introduction
TL;DR: Amazon Aurora DSQL now integrates with Kiro powers and AI agent skills for AI-assisted development and database operations.
AWS Services: Amazon Aurora DSQL

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-aurora-dsql-integrates-with-kiro-powers-and-agent-skills

Today, AWS announces Amazon Aurora DSQL integration with Kiro powers and AI agent skills, enabling developers to build Aurora DSQL-backed applications faster with AI agent-assisted development. These integrations bundle the Aurora DSQL Model Context Protocol (MCP) server with development best practices, so AI agents can help you with Aurora DSQL schema design, performance optimization, and database operations out of the box.

Kiro powers is a registry of curated and pre-packaged MCP servers, steering files, and agent hooks to accelerate specialized software development and deployment use cases. With the Kiro power for Aurora DSQL, agents have instant access to specialized knowledge, so developers can work confidently without any prior context, reducing trial-and-error development cycles. The power is available within the Kiro IDE for one-click installation.

The Aurora DSQL skill extends the same capabilities to additional AI coding agents through the Skills CLI. Developers can install the skill with a single command and select their preferred agents including Kiro CLI, Claude Code, Gemini, Codex, Cursor, Copilot, Cline, Windsurf, Roo, OpenCode, and more. When developers work on database tasks, the agent dynamically loads relevant skill guidance, including Aurora DSQL Postgres-compatible SQL patterns, distributed database design, and IAM authentication, eliminating the need to repeatedly provide the same context across conversations. As Aurora DSQL adds new features, future skill releases will include updated patterns and guidance, ensuring that agents always have current best practices.

For more information on the Aurora DSQL Kiro power and agent skills, visit the Aurora DSQL steering documentation and GitHub page. Get started with Aurora DSQL for free with the AWS Free Tier.

Published: 2026-02-18 18:00:00+00:00

Amazon Bedrock reinforcement fine-tuning adds support for open-weight models with OpenAI-compatible APIs

🚀
New Service Feature Introduction
TL;DR: Amazon Bedrock adds reinforcement fine-tuning support for open-weight models with OpenAI-compatible APIs, enabling easier model customization.
AWS Services: Amazon Bedrock, AWS Lambda

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-bedrock-reinforcement-fine-tuning-openai

Amazon Bedrock now extends reinforcement fine-tuning (RFT) support to popular open-weight models, including OpenAI GPT-OSS and Qwen models, and introduces OpenAI-compatible fine-tuning APIs. These capabilities make it easier for developers to improve open-weight model accuracy without requiring deep machine learning expertise or large volumes of labeled data. Reinforcement fine-tuning in Amazon Bedrock automates the end-to-end customization workflow, allowing models to learn from feedback on multiple possible responses using a small set of prompts, rather than traditional large training datasets. Reinforcement fine-tuning enables customers to use smaller, faster, and more cost-effective model variants while maintaining high quality.

Organizations often struggle to adapt foundation models to their unique business requirements, forcing tradeoffs between generic models with limited performance and complex, expensive customization pipelines that require specialized infrastructure and expertise. Amazon Bedrock removes this complexity by providing a fully managed, secure reinforcement fine-tuning experience. Customers define reward functions using verifiable rule-based graders or AI-based judges, including built-in templates for both objective tasks such as code generation and math reasoning, and subjective tasks such as instruction following or conversational quality. During training, customers can use AWS Lambda functions for custom grading logic, and access intermediate model checkpoints to evaluate, debug, and select the best-performing model, improving iteration speed and training efficiency. All proprietary data remains within AWS's secure, governed environment throughout the customization process.
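A rule-based grader of the kind you might deploy as a Lambda function for custom grading logic can be as simple as the sketch below. The function signature and scoring scheme here are assumptions for illustration; the actual event contract for Bedrock RFT graders is defined in the Bedrock documentation:

```python
import re

def grade(prompt, model_response, expected_answer):
    """Return a reward in [0, 1] for a verifiable task: full credit for
    an exact match on the expected answer, partial credit if the answer
    appears anywhere in the response. Contract and scoring are assumed
    for illustration, not the documented RFT grader interface."""
    if model_response.strip() == expected_answer:
        return 1.0
    if re.search(re.escape(expected_answer), model_response):
        return 0.5
    return 0.0
```

During RFT, a grader like this scores multiple sampled responses per prompt, and the reward signal steers the model toward higher-scoring behavior without needing a large labeled dataset.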

Models supported at this launch are qwen.qwen3-32b and openai.gpt-oss-20b. After fine-tuning completes, customers can immediately use the resulting fine-tuned model for on-demand inference through Amazon Bedrock's OpenAI-compatible APIs (the Responses API and Chat Completions API) without any additional deployment steps. To learn more, see the Amazon Bedrock documentation.

Published: 2026-02-17 21:17:00+00:00

Claude Sonnet 4.6 now available in Amazon Bedrock

🚀
New Service Feature Introduction
TL;DR: Claude Sonnet 4.6 now available in Amazon Bedrock with frontier performance for coding, agents, and professional work at scale.
AWS Services: Amazon Bedrock

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/claude-sonnet-4.6-available-in-amazon-bedrock/

Starting today, Amazon Bedrock supports Claude Sonnet 4.6, which offers frontier performance across coding, agents, and professional work at scale. According to Anthropic, Claude Sonnet 4.6 is their best computer use model yet, allowing organizations to deploy browser-based automation across business tools with near-human reliability. Claude Sonnet 4.6 approaches Opus 4.6 intelligence at a lower cost. It enables faster, high-quality task completion, making it ideal for high-volume coding and knowledge work use cases.

Claude Sonnet 4.6 serves as a direct upgrade to Sonnet 4.5 across use cases that require consistent conversational quality and efficient multi-step orchestration. For search and chat applications, it delivers reliable performance across single and multi-turn exchanges at a price point that makes high-volume deployment practical, maintaining quality standards while optimizing for scale. Developers can leverage Claude Sonnet 4.6 for agentic workflows, seamlessly filling both lead agent and subagent roles in multi-model pipelines with precise workflow management and context compaction capabilities. Enterprise teams can use Claude Sonnet 4.6 to power domain-specific applications with professional precision, including spreadsheet and financial model creation that accelerates analysis workflows, compliance review processes that require meticulous attention to detail, and data summarization tasks where iteration speed and accuracy are paramount. Claude Sonnet 4.6 requires only minor prompting adjustments from Sonnet 4.5, ensuring a smooth migration for existing implementations.

Claude Sonnet 4.6 is now available in Amazon Bedrock. For the full list of available Regions, refer to the documentation. To learn more and get started with Claude Sonnet 4.6 in Amazon Bedrock, read the About Amazon blog and visit the Amazon Bedrock console.
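Invoking the model typically goes through the Bedrock Converse API. The sketch below only builds the request arguments; the model ID string is a placeholder, so look up the exact Claude Sonnet 4.6 model ID or inference profile ARN for your Region in the Bedrock documentation before using it:

```python
# Placeholder model ID; confirm the real ID for your Region.
MODEL_ID = "anthropic.claude-sonnet-4-6"

def converse_request(user_text, max_tokens=1024):
    """Build keyword arguments in the shape the Bedrock Converse API
    expects: a modelId plus messages and inferenceConfig."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {"role": "user", "content": [{"text": user_text}]}
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

req = converse_request("Draft a one-line release note.")
# With credentials configured, you would pass these to boto3:
# boto3.client("bedrock-runtime").converse(**req)
```

Because Converse is model-agnostic, switching from Sonnet 4.5 to 4.6 is largely a matter of changing `MODEL_ID`.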

Published: 2026-02-17 15:43:00+00:00

Amazon MQ now supports ActiveMQ minor version 5.19

🚀
New Service Feature Introduction
TL;DR: Amazon MQ now supports ActiveMQ minor version 5.19 with improvements and fixes across all AWS Regions.
AWS Services: Amazon MQ

Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-mq-activemq-5-19/

Amazon MQ now supports ActiveMQ minor version 5.19, which introduces several improvements and fixes compared to the previous version of ActiveMQ supported by Amazon MQ. Amazon MQ manages the patch version upgrades for your brokers. All brokers on ActiveMQ version 5.19 will be automatically upgraded to the next compatible and secure patch version in your scheduled maintenance window.

If you are using a prior version of ActiveMQ, such as 5.18, we strongly recommend upgrading to ActiveMQ 5.19. You can perform this upgrade with just a few clicks in the AWS Management Console. To learn more about upgrading, consult the ActiveMQ Version Management section in the Amazon MQ Developer Guide. To learn more about the changes in ActiveMQ 5.19, see the Amazon MQ release notes. This version is available in all AWS Regions where Amazon MQ is available.
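Besides the console, the upgrade can be driven through the API. The sketch below only assembles the arguments for the boto3 `mq` client's `update_broker` call; the broker ID and patch version string are placeholders, so use your own broker's ID and a valid 5.19 patch release:

```python
def upgrade_params(broker_id, target_version="5.19.0"):
    """Assemble update_broker arguments for an engine version upgrade.
    The version string is a placeholder patch release; the change takes
    effect during the broker's maintenance window unless applied
    immediately."""
    return {"BrokerId": broker_id, "EngineVersion": target_version}

params = upgrade_params("b-1234abcd-56ef-78gh-90ij-klmnopqrstuv")
# With credentials configured:
# boto3.client("mq").update_broker(**params)
```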

Published: 2026-02-19 17:00:00+00:00

Amazon Connect now includes agent time-off requests in draft schedules

🎉
Service Feature Change
TL;DR: Amazon Connect now shows agent time-off requests in draft schedules to help schedulers identify coverage gaps before publishing.
AWS Services: Amazon Connect

Link: https://aws.amazon.com/about-aws/whats-new/2025/02/amazon-connect-time-off-draft-schedules

Amazon Connect now includes agent time-off requests in draft schedules, making it easier for you to see why an agent was not scheduled on a particular day or part of a day. For example, when generating schedules for next month, you can see that an agent who typically works Monday to Friday wasn't scheduled for the first week because they're on leave, without needing to check the published schedules or troubleshoot the configuration to determine why the agent was not scheduled. This launch helps schedulers quickly identify coverage gaps and adjust schedules before publishing them to agents.

This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.

Published: 2026-02-17 18:50:00+00:00