What is the term for storing data that is no longer needed on a daily basis so that it can be accessed later?

The Amazon S3 Glacier storage classes are purpose-built for data archiving, and are designed to provide you with the highest performance, the most retrieval flexibility, and the lowest cost archive storage in the cloud. You can choose from three archive storage classes optimized for different access patterns and storage duration. For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class, an archive storage class that delivers the lowest cost storage with milliseconds retrieval. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval (formerly S3 Glacier), with retrieval in minutes or free bulk retrievals in 5-12 hours. To save even more on long-lived archive storage such as compliance archives and digital media preservation, choose S3 Glacier Deep Archive, the lowest cost storage in the cloud with data retrieval from 12-48 hours.

Amazon S3 Glacier Instant Retrieval

Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. With S3 Glacier Instant Retrieval, you can save up to 68% on storage costs compared to using the S3 Standard-Infrequent Access (S3 Standard-IA) storage class, when your data is accessed once per quarter. S3 Glacier Instant Retrieval delivers the fastest access to archive storage, with the same throughput and milliseconds access as the S3 Standard and S3 Standard-IA storage classes. S3 Glacier Instant Retrieval is ideal for archive data that needs immediate access, such as medical images, news media assets, or user-generated content archives. You can upload objects directly to S3 Glacier Instant Retrieval, or use S3 Lifecycle policies to transfer data from other S3 storage classes. For more information, visit the Amazon S3 Glacier Instant Retrieval page »

Key Features:

  • Data retrieval in milliseconds with the same performance as S3 Standard
  • Designed for durability of 99.999999999% of objects across multiple Availability Zones
  • Data is resilient in the event of the destruction of one entire Availability Zone
  • Designed for 99.9% data availability in a given year
  • 128 KB minimum object size
  • Backed with the Amazon S3 Service Level Agreement for availability
  • S3 PUT API for direct uploads to S3 Glacier Instant Retrieval, and S3 Lifecycle management for automatic migration of objects
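
To make the direct-upload path concrete, here is a minimal sketch using the AWS SDK for Python (boto3); the bucket name, key, and payload are placeholders, and the bucket must already exist in your account.

  import boto3

  s3 = boto3.client("s3")

  # Write an object directly into the S3 Glacier Instant Retrieval storage class.
  s3.put_object(
      Bucket="example-archive-bucket",      # placeholder bucket name
      Key="scans/patient-123.dcm",          # placeholder key
      Body=b"example archive payload",
      StorageClass="GLACIER_IR",
  )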

Amazon S3 Glacier Flexible Retrieval (Formerly S3 Glacier)

S3 Glacier Flexible Retrieval delivers low-cost storage, up to 10% lower cost than S3 Glacier Instant Retrieval, for archive data that is accessed 1-2 times per year and is retrieved asynchronously. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, S3 Glacier Flexible Retrieval (formerly S3 Glacier) is the ideal storage class. S3 Glacier Flexible Retrieval delivers the most flexible retrieval options, balancing cost with access times ranging from minutes to hours, and with free bulk retrievals. It is an ideal solution for backup, disaster recovery, and offsite data storage needs, and for when some data occasionally needs to be retrieved in minutes without concern for cost. S3 Glacier Flexible Retrieval is designed for 99.999999999% (11 9s) of data durability and 99.99% availability in a given year by redundantly storing data across multiple physically separated AWS Availability Zones. For more information, visit the Amazon S3 Glacier storage classes page »

Key Features:

  • Designed for durability of 99.999999999% of objects across multiple Availability Zones
  • Data is resilient in the event of one entire Availability Zone destruction
  • Supports SSL for data in transit and encryption of data at rest
  • Ideal for backup and disaster recovery use cases when large sets of data occasionally need to be retrieved in minutes, without concern for costs
  • Configurable retrieval times, from minutes to hours, with free bulk retrievals
  • S3 PUT API for direct uploads to S3 Glacier Flexible Retrieval, and S3 Lifecycle management for automatic migration of objects
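
As an illustration of the asynchronous retrieval flow, the following boto3 sketch requests a free bulk restore of an archived object and then checks its restore status; the bucket and key are placeholders.

  import boto3

  s3 = boto3.client("s3")

  # Request a temporary, restored copy of an archived object. "Bulk" is the free
  # tier (typically 5-12 hours); "Standard" and "Expedited" are faster but cost more.
  s3.restore_object(
      Bucket="example-backup-bucket",       # placeholder bucket name
      Key="backups/2023-01-01.tar.gz",      # placeholder key
      RestoreRequest={
          "Days": 7,                        # keep the restored copy available for 7 days
          "GlacierJobParameters": {"Tier": "Bulk"},
      },
  )

  # The restore runs asynchronously; the object's Restore header reports progress.
  status = s3.head_object(Bucket="example-backup-bucket", Key="backups/2023-01-01.tar.gz")
  print(status.get("Restore"))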

Amazon S3 Glacier Deep Archive

S3 Glacier Deep Archive is Amazon S3's lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice in a year. It is designed for customers, particularly those in highly regulated industries such as financial services, healthcare, and the public sector, that retain data sets for 7-10 years or longer to meet regulatory compliance requirements. S3 Glacier Deep Archive can also be used for backup and disaster recovery use cases, and is a cost-effective and easy-to-manage alternative to magnetic tape systems, whether they are on-premises libraries or off-premises services. S3 Glacier Deep Archive complements S3 Glacier Flexible Retrieval, which is ideal for archives where data is regularly retrieved and some of the data may be needed in minutes. All objects stored in S3 Glacier Deep Archive are replicated and stored across at least three geographically dispersed Availability Zones, are designed for 99.999999999% durability, and can be restored within 12 hours. For more information, visit the Amazon S3 Glacier storage classes page »

Key Features:

  • Designed for durability of 99.999999999% of objects across multiple Availability Zones
  • Lowest cost storage class designed for long-term retention of data that will be retained for 7-10 years
  • Ideal alternative to magnetic tape libraries
  • Retrieval time within 12 hours
  • S3 PUT API for direct uploads to S3 Glacier Deep Archive, and S3 Lifecycle management for automatic migration of objects
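
For the S3 Lifecycle migration path noted in the last bullet, a minimal boto3 sketch follows; the bucket name, prefix, and transition timing are placeholders chosen for illustration.

  import boto3

  s3 = boto3.client("s3")

  # Automatically move objects under the "compliance/" prefix to S3 Glacier
  # Deep Archive 90 days after they are created.
  s3.put_bucket_lifecycle_configuration(
      Bucket="example-records-bucket",      # placeholder bucket name
      LifecycleConfiguration={
          "Rules": [
              {
                  "ID": "archive-compliance-records",
                  "Filter": {"Prefix": "compliance/"},
                  "Status": "Enabled",
                  "Transitions": [{"Days": 90, "StorageClass": "DEEP_ARCHIVE"}],
              }
          ]
      },
  )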

Amazon S3 was launched 15 years ago on Pi Day, March 14, 2006, and became the first generally available AWS service. Over that time, data storage and usage have exploded, and the world has never been the same. Amazon S3 has virtually unlimited scalability and unmatched availability, durability, security, and performance.

Watch the videos below to hear from AWS leaders and experts as they take you back in time, reviewing the history of AWS and the key decisions involved in building and evolving Amazon S3.

15 years of Amazon S3 - Foundations of Cloud Infrastructure

15 years of Amazon S3 - Security is Job Zero

15 years of Amazon S3 - Building an Evolvable System

Watch the videos below to hear how Amazon S3 was architected for availability and durability inside AWS Regions and Availability Zones. Understand how to get started with Amazon S3, best practices, and how to optimize for costs, data protection, and more.

Beyond eleven nines - How Amazon S3 is built for durability

Architecting for high availability on Amazon S3

Back to the basics: Getting started with Amazon S3

Amazon S3 foundations: Best practices for Amazon S3

Amazon S3 storage classes primer

Optimize and manage data on Amazon S3

Amazon S3 Replication: For data protection & application acceleration

Backup to Amazon S3 and Amazon S3 Glacier

Modernizing your data archive with Amazon S3 Glacier

Using AWS, you gain the control and confidence you need to securely run your business with the most flexible and secure cloud computing environment available today. With AWS, you can improve your ability to meet core security and compliance requirements, such as data locality, protection, and confidentiality with our comprehensive services and features. 

Watch the security videos to learn about the foundations of data security featuring Amazon S3.

Foundations of data security

Securing Amazon S3 with guardrails and fine-grained access controls

15 years of Amazon S3 - Security is Job Zero

Managing access to your Amazon S3 buckets and objects

Demo - Amazon S3 security posture management and threat detection

Protecting your data in Amazon S3

Advanced networking with Amazon S3 and AWS PrivateLink

Proactively identifying threats & protecting sensitive data in S3

S3 Block Public Access overview and demo

Demo - Monitor your Amazon S3 inventory & identify sensitive data

Demo - Amazon S3 and VPC Endpoints

Serverless is a way to describe the services, practices, and strategies that enable you to build more agile applications so you can innovate and respond to change faster. Watch the serverless and S3 videos to see how AWS Lambda can be the best compute option when working with data in Amazon S3.

Serverless on Amazon S3 - Introducing S3 Object Lambda

S3 Object Lambda - add code to Amazon S3 GET requests to process data

Building serverless applications with Amazon S3

S3 & Lambda - flexible pattern at the core of serverless applications

The best compute for your storage - Amazon S3 & AWS Lambda

Live coding - Uploading media to Amazon S3 from web & mobile apps

Amazon S3 is the largest and most performant object storage service for structured and unstructured data—and the storage service of choice to build a data lake. Watch these videos to learn how to build a data lake on Amazon S3, and how you can use native AWS services or leverage AWS partners to run data analytics, artificial intelligence (AI), machine learning (ML), high-performance computing (HPC), and media data processing applications to gain insights from your unstructured datasets.

Building modern data lakes on Amazon S3

Harness the power of your data with the AWS Lake House Architecture

Amazon S3 Strong Consistency

Build a data lake on Amazon S3

AWS offers a wide variety of services and partner tools to help you migrate your data to Amazon S3. Learn how AWS Storage Gateway and AWS DataSync can take the friction out of the data migration process as you dive into the solutions and architectural considerations for accelerating data migration to the cloud from on-premises systems.

Accelerating your migration to S3

Managed file transfers to S3 over SFTP, FTPS, and FTP

Optimize and manage data on Amazon S3

Amazon S3 has various features you can use to organize and manage your data in ways that support specific use cases, enable cost efficiencies, enforce security, and meet compliance requirements. Data is stored as objects within resources called “buckets”, and a single object can be up to 5 terabytes in size. S3 features include capabilities to append metadata tags to objects, move and store data across the S3 Storage Classes, configure and enforce data access controls, secure data against unauthorized users, run big data analytics, monitor data at the object and bucket levels, and view storage usage and activity trends across your organization. Objects can be accessed through S3 Access Points or directly through the bucket hostname.

Amazon S3’s flat, non-hierarchical structure and various management features are helping customers of all sizes and industries organize their data in ways that are valuable to their businesses and teams. All objects are stored in S3 buckets and can be organized with shared names called prefixes. You can also append up to 10 key-value pairs called S3 object tags to each object, which can be created, updated, and deleted throughout an object’s lifecycle. To keep track of objects and their respective tags, buckets, and prefixes, you can use an S3 Inventory report that lists your stored objects within an S3 bucket or with a specific prefix, and their respective metadata and encryption status. S3 Inventory can be configured to generate reports on a daily or a weekly basis.
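
As a small example of object tagging with boto3 (bucket, key, and tag values are placeholders), the following call replaces the tag set on an existing object:

  import boto3

  s3 = boto3.client("s3")

  # Attach (or replace) up to 10 key-value tags on an existing object.
  s3.put_object_tagging(
      Bucket="example-data-bucket",         # placeholder bucket name
      Key="projects/alpha/report.csv",      # placeholder key
      Tagging={
          "TagSet": [
              {"Key": "project", "Value": "alpha"},
              {"Key": "classification", "Value": "internal"},
          ]
      },
  )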

With S3 bucket names, prefixes, object tags, and S3 Inventory, you have a range of ways to categorize and report on your data, and subsequently can configure other S3 features to take action. Whether you store thousands of objects or a billion, S3 Batch Operations makes it simple to manage your data in Amazon S3 at any scale. With S3 Batch Operations, you can copy objects between buckets, replace object tag sets, modify access controls, and restore archived objects from S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes, with a single S3 API request or a few steps in the S3 console. You can also use S3 Batch Operations to run AWS Lambda functions across your objects to run custom business logic, such as processing data or transcoding image files. To get started, specify a list of target objects by using an S3 Inventory report or by providing a custom list, and then select the desired operation from a pre-populated menu. When an S3 Batch Operation request is done, you'll receive a notification and a completion report of all changes made. Learn more about S3 Batch Operations by watching the video tutorials. 
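
For illustration, here is a hedged boto3 sketch of creating a Batch Operations job that tags every object listed in an S3 Inventory manifest; the account ID, IAM role, bucket ARNs, and manifest ETag are placeholders you would replace with your own values.

  import boto3

  s3control = boto3.client("s3control")

  # Create a Batch Operations job that adds a tag to every object in the manifest.
  s3control.create_job(
      AccountId="111122223333",                                   # placeholder account ID
      ConfirmationRequired=True,                                  # job waits until confirmed
      Priority=10,
      RoleArn="arn:aws:iam::111122223333:role/batch-operations-role",
      Operation={
          "S3PutObjectTagging": {
              "TagSet": [{"Key": "archived", "Value": "true"}]
          }
      },
      Manifest={
          "Spec": {"Format": "S3InventoryReport_CSV_20161130"},
          "Location": {
              "ObjectArn": "arn:aws:s3:::example-inventory-bucket/manifest.json",
              "ETag": "example-etag",                             # placeholder ETag
          },
      },
      Report={
          "Bucket": "arn:aws:s3:::example-report-bucket",
          "Format": "Report_CSV_20180820",
          "Enabled": True,
          "ReportScope": "AllTasks",
      },
  )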

Amazon S3 also supports features that help maintain data version control, prevent accidental deletions, and replicate data to the same or different AWS Region. With S3 Versioning, you can preserve, retrieve, and restore every version of an object stored in Amazon S3, which allows you to recover from unintended user actions and application failures. To prevent accidental deletions, enable Multi-Factor Authentication (MFA) Delete on an S3 bucket. If you try to delete an object stored in an MFA Delete-enabled bucket, it will require two forms of authentication: your AWS account credentials and the concatenation of a valid serial number, a space, and the six-digit code displayed on an approved authentication device, like a hardware key fob or a Universal 2nd Factor (U2F) security key.
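
A minimal boto3 sketch of both settings follows; the bucket name and MFA device serial and code are placeholders, and MFA Delete can only be enabled with the bucket owner's (root) credentials.

  import boto3

  s3 = boto3.client("s3")

  # Turn on versioning so every overwrite and delete keeps a recoverable version.
  s3.put_bucket_versioning(
      Bucket="example-data-bucket",
      VersioningConfiguration={"Status": "Enabled"},
  )

  # Enabling MFA Delete additionally requires an MFA token passed as
  # "<device-serial-or-ARN> <six-digit-code>" (placeholders below).
  s3.put_bucket_versioning(
      Bucket="example-data-bucket",
      MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
      VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
  )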

With S3 Replication, you can replicate objects (and their respective metadata and object tags) to one or more destination buckets in the same or different AWS Regions for reduced latency, compliance, security, disaster recovery, and other use cases. You can configure S3 Cross-Region Replication (CRR) to replicate objects from a source S3 bucket to one or more destination buckets in different AWS Regions. S3 Same-Region Replication (SRR) replicates objects between buckets in the same AWS Region. While live replication like CRR and SRR automatically replicates newly uploaded objects as they are written to your bucket, S3 Batch Replication allows you to replicate existing objects. You can use S3 Batch Replication to backfill a newly created bucket with existing objects, retry objects that were previously unable to replicate, migrate data across accounts, or add new buckets to your data lake. Amazon S3 Replication Time Control (S3 RTC) helps you meet compliance requirements for data replication by providing an SLA and visibility into replication times.
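
As a sketch of how a replication rule can be configured with boto3 (bucket names and the IAM role ARN are placeholders, and versioning must already be enabled on both buckets):

  import boto3

  s3 = boto3.client("s3")

  # Replicate new objects from the source bucket to a destination bucket,
  # which may be in another Region (CRR) or the same Region (SRR).
  s3.put_bucket_replication(
      Bucket="example-source-bucket",
      ReplicationConfiguration={
          "Role": "arn:aws:iam::111122223333:role/replication-role",
          "Rules": [
              {
                  "ID": "replicate-whole-bucket",
                  "Priority": 1,
                  "Filter": {"Prefix": ""},      # empty prefix = replicate every object
                  "Status": "Enabled",
                  "DeleteMarkerReplication": {"Status": "Disabled"},
                  "Destination": {
                      "Bucket": "arn:aws:s3:::example-destination-bucket"
                  },
              }
          ],
      },
  )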

To access replicated data sets in S3 buckets across AWS Regions, use Amazon S3 Multi-Region Access Points to create a single global endpoint for your applications and clients to use regardless of their location. This global endpoint allows you to build multi-Region applications with the same simple architecture you would use in a single Region, and then run those applications anywhere in the world. Amazon S3 Multi-Region Access Points can accelerate performance by up to 60% when accessing data sets that are replicated across multiple AWS Regions. Based on AWS Global Accelerator, S3 Multi-Region Access Points consider factors like network congestion and the location of the requesting application to dynamically route your requests over the AWS network to the lowest latency copy of your data. Using S3 Multi-Region Access Points failover controls, you can fail over between your replicated data sets across AWS Regions, allowing you to shift your S3 data request traffic to an alternate AWS Region within minutes.

You can also enforce write-once-read-many (WORM) policies with S3 Object Lock. This S3 management feature blocks object version deletion during a customer-defined retention period so that you can enforce retention policies as an added layer of data protection or to meet compliance obligations. You can migrate workloads from existing WORM systems into Amazon S3, and configure S3 Object Lock at the object and bucket levels to prevent object version deletions prior to a pre-defined Retain Until Date or Legal Hold Date. Objects with S3 Object Lock retain WORM protection even if they are moved to different storage classes with an S3 Lifecycle policy. To track which objects have S3 Object Lock applied, you can refer to an S3 Inventory report that includes the WORM status of objects. S3 Object Lock can be configured in one of two modes. When deployed in Governance mode, AWS accounts with specific IAM permissions are able to remove S3 Object Lock from objects. If you require stronger immutability in order to comply with regulations, you can use Compliance mode. In Compliance mode, the protection cannot be removed by any user, including the root account.
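
A brief boto3 sketch of the Object Lock calls follows; bucket names, keys, and retention periods are placeholders, and Object Lock itself can only be enabled when a bucket is created.

  import boto3
  from datetime import datetime, timezone

  s3 = boto3.client("s3")

  # Assumes "example-worm-bucket" was created with Object Lock enabled
  # (ObjectLockEnabledForBucket=True at creation time).

  # Apply a default Compliance-mode retention period of 7 years to new objects.
  s3.put_object_lock_configuration(
      Bucket="example-worm-bucket",
      ObjectLockConfiguration={
          "ObjectLockEnabled": "Enabled",
          "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
      },
  )

  # Or lock a single object version in Governance mode until a specific date.
  s3.put_object_retention(
      Bucket="example-worm-bucket",
      Key="records/filing-2023.pdf",
      Retention={
          "Mode": "GOVERNANCE",
          "RetainUntilDate": datetime(2033, 1, 1, tzinfo=timezone.utc),
      },
  )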

In addition to these management capabilities, you can use Amazon S3 features and other AWS services to monitor and control your S3 resources. Apply tags to S3 buckets to allocate costs across multiple business dimensions (such as cost centers, application names, or owners), then use AWS Cost Allocation Reports to view the usage and costs aggregated by the bucket tags. You can also use Amazon CloudWatch to track the operational health of your AWS resources and configure billing alerts for estimated charges that reach a user-defined threshold. Use AWS CloudTrail to track and report on bucket- and object-level activities, and configure S3 Event Notifications to trigger workflows and alerts or invoke AWS Lambda when a specific change is made to your S3 resources. S3 Event Notifications can be used to automatically transcode media files as they are uploaded to S3, process data files as they become available, and synchronize objects with other data stores. Additionally, you can verify the integrity of data transferred to and from Amazon S3, and can access the checksum information at any time using the GetObjectAttributes S3 API or an S3 Inventory report. You can choose from four supported checksum algorithms (SHA-1, SHA-256, CRC32, or CRC32C) for data integrity checking on your upload and download requests, depending on your application needs.
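
To illustrate the checksum workflow with boto3 (bucket, key, and payload are placeholders):

  import boto3

  s3 = boto3.client("s3")

  # Upload with an additional SHA-256 checksum; S3 verifies it on receipt.
  s3.put_object(
      Bucket="example-data-bucket",
      Key="exports/2023-10.parquet",
      Body=b"example payload",
      ChecksumAlgorithm="SHA256",
  )

  # Retrieve the stored checksum later without downloading the object.
  attrs = s3.get_object_attributes(
      Bucket="example-data-bucket",
      Key="exports/2023-10.parquet",
      ObjectAttributes=["Checksum", "ObjectSize"],
  )
  print(attrs["Checksum"]["ChecksumSHA256"])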

Data redundancy – If you need to maintain multiple copies of your data in the same or different AWS Regions, with different encryption types, or across different accounts, S3 Replication powers your global content distribution needs, compliant storage needs, and data sharing across accounts.

Replicate objects while retaining metadata — If you need to ensure your replica copies are identical to the source data, you can use S3 Replication to make copies of your objects that retain all metadata, such as the original object creation time, object access control lists (ACLs), and version IDs.

Replicate objects to more cost-effective storage classes — You can use S3 Replication to put objects into S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, or another storage class in the destination buckets. You can also replicate your data to the same storage class and then use S3 Lifecycle policies to move your objects to a more cost-effective storage class.

Maintain object copies under a different account — Regardless of who owns the source object, you can tell Amazon S3 to change replica ownership to the AWS account that owns the destination bucket to restrict access to object replicas.

Replicate your objects within 15 minutes — You can use Amazon S3 Replication Time Control (S3 RTC) to replicate your data in a predictable time frame. S3 RTC replicates 99.99 percent of new objects stored in Amazon S3 within 15 minutes of upload and is backed by a Service Level Agreement (SLA).
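
Tying these use cases together, here is a hedged boto3 sketch of a replication rule that writes replicas to an archival storage class and enables S3 Replication Time Control; all names and ARNs are placeholders.

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_replication(
      Bucket="example-source-bucket",
      ReplicationConfiguration={
          "Role": "arn:aws:iam::111122223333:role/replication-role",
          "Rules": [
              {
                  "ID": "rtc-to-archive",
                  "Priority": 1,
                  "Filter": {"Prefix": "logs/"},   # placeholder prefix
                  "Status": "Enabled",
                  "DeleteMarkerReplication": {"Status": "Disabled"},
                  "Destination": {
                      "Bucket": "arn:aws:s3:::example-destination-bucket",
                      "StorageClass": "GLACIER",   # replicas land in a cheaper class
                      # S3 RTC: replicate most new objects within 15 minutes, with metrics.
                      "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                      "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
                  },
              }
          ],
      },
  )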

S3 Batch Operations is an Amazon S3 data management feature that lets you manage billions of objects at scale with just a few clicks in the Amazon S3 Management Console or a single API request. With this feature, you can make changes to object metadata and properties, or perform other storage management tasks, such as copying or replicating objects between buckets, replacing object tag sets, modifying access controls, and restoring archived objects from S3 Glacier — instead of taking months to develop custom applications to perform these tasks.

Introduction to Amazon S3 Batch Operations (2:03)

S3 Batch Operations is a managed solution for performing storage actions like copying and tagging objects at scale, whether for one-time tasks or for recurring, batch workloads. S3 Batch Operations can perform actions across billions of objects and petabytes of data with a single request. To perform work in S3 Batch Operations, you create a job. The job consists of the list of objects, the action to perform, and the set of parameters you specify for that type of operation. You can create and run multiple jobs at a time in S3 Batch Operations, or use job priorities as needed to define the precedence of each job and ensure that the most critical work happens first. S3 Batch Operations also manages retries, tracks progress, sends completion notifications, generates reports, and delivers events to AWS CloudTrail for all changes made and tasks executed.

S3 Batch Operations complements any event-driven architecture you may be operating today. For new objects, using S3 events and Lambda functions is great for converting file types, creating thumbnails, performing data scans, and carrying out other operations. For example, customers use S3 events and Lambda functions to create smaller sized, low resolution versions of raw photographs when images are first uploaded to S3. S3 Batch Operations complements these existing event-driven workflows by providing a simple mechanism for performing the same actions across your existing objects as well.
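
As a sketch of the event-driven half of that pattern (the bucket name and Lambda function ARN are placeholders, and the function must already allow invocation by s3.amazonaws.com):

  import boto3

  s3 = boto3.client("s3")

  # Invoke a Lambda function whenever a new object lands under "uploads/".
  s3.put_bucket_notification_configuration(
      Bucket="example-media-bucket",
      NotificationConfiguration={
          "LambdaFunctionConfigurations": [
              {
                  "Id": "thumbnail-on-upload",
                  "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:make-thumbnail",
                  "Events": ["s3:ObjectCreated:*"],
                  "Filter": {
                      "Key": {"FilterRules": [{"Name": "prefix", "Value": "uploads/"}]}
                  },
              }
          ]
      },
  )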

Create a Batch Operations job

Manage and track a Batch Operations job

Teespring was founded in 2011 and enables users to create and sell custom on-demand products online. Because every piece of custom merchandise requires multiple assets, Teespring stores petabytes of data in Amazon S3.

"Amazon S3 Batch Operations helped us optimize our storage by utilizing Amazon S3’s Glacier storage class. We used our own storage metadata to create batches of objects that we could move to Amazon S3 Glacier. With Amazon S3 Glacier we saved more than 80% of our storage costs. We are always looking for opportunities to automate storage management, and with S3 Batch Operations, we can manage millions of objects in minutes.”

James Brady, VP of Engineering - Teespring

Capital One is a bank founded at the intersection of finance and technology and one of America’s most recognized brands.

Capital One used Amazon S3 Batch Operations to copy data between two AWS Regions to increase their data's redundancy and to standardize their data footprint between those two locations.

"With Amazon S3 Batch Operations we created a job to copy millions of objects in hours, work that had traditionally taken months to complete. We used Amazon S3’s inventory report, which gave a list of objects in our bucket, as the input to our Amazon S3 Batch Operations job. Amazon S3 was instrumental in copying the data, providing progress updates, and delivering an audit report when the job was complete. Having this feature saved our team weeks of manual effort and turned this large-scale data transfer into something routine.”

Franz Zemen, Vice President, Software Engineering - Capital One

ePlus, an AWS Advanced Consulting Partner, works with customers to optimize their IT environments and uses solutions like S3 Batch Operations to save clients time and money.

"S3 Batch Operations is simply amazing. Not only did it help one of our clients reduce the time, complexity, and painstaking chore of having to pull together a wide selection of S3 operations, scheduling jobs, then rendering the information in a simple to use dashboard, it also helped tackle some daunting use cases I don't think we would have been able to tackle in the fraction of the time it took S3 Batch Operations to do.
 
For example, S3 Batch Operations made quick work of copying over 2 million objects across regions within the same account while keeping the metadata intact. The solution worked seamlessly performing similar tasks across accounts, and most notably, generated a completion report that automatically sifted and separated successful versus failed operations amongst 400 million objects allowing for simpler handling of the failed operations in a single file.”

David Lin, Senior Solutions Architect & AWS Certified Professional - ePlus 

Amazon S3 Batch Operations can be used to easily process hundreds, millions, or billions of S3 objects in a simple and straightforward fashion. You can copy objects to another bucket, set tags or access control lists (ACLs), initiate a restore from S3 Glacier, or invoke an AWS Lambda function on each one.

Read the blog »

This post demonstrates how to create a list of objects, filter it to include only unencrypted objects, set up permissions, and perform an S3 Batch Operations job to encrypt your objects. Encrypting existing objects is one of the many ways that you can use S3 Batch Operations to manage your Amazon S3 objects.

Read the blog »

This post reviews how to use S3 Batch Operations to trigger a video transcoding job using AWS Lambda, either from video stored in S3 or video requiring a restore from Amazon S3 Glacier.

Read the blog »

Watch the S3 Batch Operations Tech Talk

Learn how to get started and best practices.
