AWS Lambda Durable Functions expand serverless capabilities while raising lock‑in questions
Recommended for you

Snowflake sharpens AI and migration playbook while a major outage raises resilience questions
Snowflake has accelerated feature launches, acquisitions and commercial model partnerships to broaden its AI, observability and migration capabilities, while a late‑2025 platform failure that lasted many hours across multiple regions exposed operational fragility. Recent moves include a multi‑year, roughly $200 million commercial agreement to surface OpenAI models inside the Data Cloud and the rollout of Cortex Code, a data‑aware coding assistant, but integration, governance and reliability will determine whether these advances become durable customer advantages.

Databricks launches Lakebase — a serverless OLTP platform that rethinks transactional databases
Databricks unveiled Lakebase, a serverless operational database that runs PostgreSQL-compatible compute over lakehouse storage to make transactional data immediately queryable by analytics engines. Early customers report dramatic cuts in application delivery time, while the architecture reframes database management as a telemetry and analytics problem suited to programmatic provisioning and AI-driven agents.
AWS Accelerates Internal AI Agents After Engineering Cuts
Following engineering reductions, AWS has reallocated senior talent and engineering capacity to accelerate internal agent development and embed those capabilities into core cloud workflows. That shift pairs with tightened internal governance after AI‑assisted incidents and a hardware-first push (Trainium), creating both a strategic moat for AWS and short-term execution and supply‑chain risks for customers and third‑party vendors.

Amazon Sees AWS Scaling Toward $600B as AI Drives Cloud Demand
Amazon projects AWS could reach $600B by 2036 driven by enterprise AI workloads; the company is pursuing a hardware‑first strategy — including its Trainium accelerators — and plans sustained, large‑scale infrastructure spending while supplementing with third‑party GPUs amid foundry and packaging bottlenecks.

Enterprises Confront LLM-Driven Code Debt and Surging Cloud Costs
Enterprises that rushed to replace engineers with LLMs now face brittle systems, runaway cloud spend, and opaque technical debt. Rapid code generation without platform discipline has driven up operational risk and forced costly remediation.

Amazon leans on in‑house Trainium chips to cut AI costs and jump‑start AWS growth
Amazon is accelerating deployment of its custom Trainium AI accelerators to lower customer compute costs and shore up AWS revenue momentum. The move sits inside a broader industry shift toward bespoke silicon — amid supply‑chain constraints and competing hyperscaler designs — so investors will treat upcoming AWS results as a test of whether these chips can produce sustained growth and margin gains.
OpenAI Secures $110B, Deploys Stateful Runtime on AWS Bedrock
OpenAI reported roughly $110B in strategic commitments from Amazon, Nvidia, and SoftBank and announced a new stateful runtime planned to run on Amazon Bedrock. The reported figures and distribution terms are large, but some industry accounts describe them as staged or illustrative memoranda rather than uniformly closed commitments, leaving timing and enforceability open while still signaling a material shift toward runtime-anchored vendor leverage.

OpenAI pushes agents from ephemeral assistants to persistent workers with memory, shells, and Skills
OpenAI’s Responses API now adds server-side state compaction, hosted shell containers, and a Skills packaging standard to support long-running, reproducible agent workflows. Early partner reports and ecosystem moves (including large-context advances from rivals) suggest the feature set accelerates production adoption while concentrating responsibility for governance, secrets, and runtime controls with the platform provider.
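As a rough illustration of the server-side state pattern described above, a follow-up Responses API request can reference a prior response's ID instead of replaying the full transcript, letting the server persist and compact conversation state. This is a minimal sketch using plain request-payload dicts; the model name and response ID below are placeholders, not values from the article.

```python
# Sketch of stateful chaining with the Responses API. The payloads are
# plain dicts for illustration; in practice they would be passed to an
# OpenAI client's responses.create call with a valid API key.

# First turn: store=True asks the server to retain state for later turns.
first_turn = {
    "model": "gpt-4.1",  # placeholder model name
    "input": "Summarize the open TODOs in this repo.",
    "store": True,
}


def follow_up(previous_response_id: str) -> dict:
    """Build a follow-up request that chains off a stored response.

    Referencing previous_response_id means the client does not resend the
    whole conversation; the server resolves (and may compact) the history.
    """
    return {
        "model": "gpt-4.1",
        "input": "Now draft patches for the top two items.",
        "previous_response_id": previous_response_id,
        "store": True,
    }


payload = follow_up("resp_123")  # "resp_123" is a placeholder ID
```

The design point is that state lives server-side: each turn carries only the new input plus a pointer to the prior response, which is what makes long-running agent sessions cheap to resume but also concentrates governance and retention decisions with the runtime provider.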