
Amazon S3 Turns 20: The Service That Became the Foundation of Modern Cloud Storage

April 20, 2026

On March 14, 2026, Amazon Simple Storage Service (Amazon S3) marked its 20th anniversary. Over two decades, the service has evolved from a simple object store into a global infrastructure layer that underpins thousands of digital services, analytics platforms, and artificial intelligence systems.

How It All Started

Amazon S3 was launched on March 14, 2006, with a brief announcement on the AWS What's New page. At the time, it was a simple service for storing and retrieving data through a web services interface.

The core idea behind the service was to provide simple infrastructure components that handle the complex tasks of scalability, reliability, and availability. This let developers focus on building products rather than managing infrastructure.

From the very beginning, S3 was built on five key principles:

  • Security: data protection
  • Durability: eleven nines (99.999999999%) of data durability
  • Availability: high service availability
  • Performance: consistent performance at any scale
  • Elasticity: automatic scaling without manual intervention
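The eleven-nines durability target is easier to grasp as an expected loss rate. A quick sketch of the arithmetic (the 1e-11 annual loss probability per object reflects the published durability figure; the fleet size is an illustrative assumption, not an AWS statistic):

```python
# Interpreting eleven nines of durability as expected annual object loss.
# The object count below is an arbitrary example for illustration.
annual_loss_probability = 1 - 0.99999999999   # eleven nines -> 1e-11

objects_stored = 10_000_000
expected_losses_per_year = objects_stored * annual_loss_probability
print(f"{expected_losses_per_year:.4f}")      # ~0.0001 objects per year

# Equivalently: on average, one object lost every 1/expected years.
years_per_loss = 1 / expected_losses_per_year
print(f"{years_per_loss:,.0f}")               # ~10,000 years
```

In other words, at this durability level, a customer storing ten million objects would on average expect to lose a single object once every ten thousand years.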

S3 at Scale Today

Over 20 years, the service has grown to a scale that is hard to imagine.

At launch, S3 had:

  • approximately 1 petabyte of total capacity
  • 400 storage nodes
  • 3 datacenters
  • a maximum object size of 5 GB
  • pricing of $0.15 per GB

Today, Amazon S3:

  • stores over 500 trillion objects
  • processes more than 200 million requests per second
  • operates across 123 Availability Zones
  • is available in 39 AWS Regions
  • supports objects up to 50 TB

At the same time, the list price of storage has fallen from $0.15 to about $0.02 per GB, a decrease of roughly 87%.
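As a rough worked example using the list prices above (list-price arithmetic only, ignoring storage classes, tiered pricing, and request or transfer costs):

```python
# Monthly cost of storing 1 TB (1,024 GB) at S3's launch price versus
# today's approximate standard price. Real bills also include requests,
# data transfer, and class-based discounts, which are ignored here.
GB_PER_TB = 1024
price_2006 = 0.15   # $/GB-month at launch
price_now  = 0.02   # approximate $/GB-month today

cost_2006 = GB_PER_TB * price_2006   # $153.60 per month
cost_now  = GB_PER_TB * price_now    # $20.48 per month
reduction = 1 - price_now / price_2006

print(f"2006: ${cost_2006:.2f}  now: ${cost_now:.2f}  reduction: {reduction:.0%}")
```

With these rounded per-GB figures, the reduction works out to about 87%.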

An Industry Standard

Over the past two decades, the S3 API has become the de facto standard for object storage. Many vendors now offer S3-compatible storage systems, enabling the use of the same tools and skills across different environments.
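In practice, compatibility means the same client tooling can be pointed at a different backend simply by changing the endpoint. A hedged sketch with the AWS CLI, whose `--endpoint-url` option overrides the default S3 endpoint (the URL and bucket name here are illustrative placeholders, not real services):

```shell
# Same tool, same commands -- only the endpoint changes.
# The bucket name and endpoint URL below are placeholders.

# Against Amazon S3 itself:
aws s3 cp report.csv s3://example-bucket/report.csv

# Against an S3-compatible store (e.g. a self-hosted object server):
aws s3 cp report.csv s3://example-bucket/report.csv \
    --endpoint-url https://objects.example.internal
```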

Engineering at Scale

Today, S3 operates thanks to complex engineering systems:

  • microservices continuously verify every byte of data
  • recovery mechanisms are triggered automatically in case of degradation
  • critical components are being rewritten in Rust to improve security and performance
  • formal methods are used to verify the correctness of critical systems

One of the key achievements remains full backward API compatibility: code written for S3 in 2006 still works today.

The Future of S3

AWS views S3 not just as storage, but as a universal platform for working with data and AI. Among the new capabilities of the service:

  • Amazon S3 Tables: managed Apache Iceberg tables for analytics
  • Amazon S3 Vectors: vector storage for AI search and RAG
  • Amazon S3 Metadata: a centralized metadata catalog for data lakes

Summary

Over 20 years, Amazon S3 has transformed:

  • from 1 petabyte to hundreds of exabytes of data
  • from simple object storage to a foundation for analytics, data, and AI

And throughout this evolution, its core principles have remained unchanged: security, durability, availability, performance, and elasticity.
