Zoom Recordings to S3: A Cost-Saving Archive Migration Pipeline

April 13, 2026

If your organization uses Zoom meetings at any scale, you'll fairly quickly exceed the storage included in the cloud recording plan and put your account into overage pricing. Manually uploading recordings to third-party cloud storage is a pain; what if you had an automated pipeline that moved your old recordings somewhere less expensive?

Using 1 TB of Cloud Storage on Zoom costs $100 USD a month. Storing the same 1 TB on Amazon Web Services S3 (Simple Storage Service), even in the Standard storage class, runs about $26, and a Glacier class would be a fraction of that. An important caveat: if you plan to retrieve a large number of these videos every month, the data transfer fees will negate any savings from the cheaper storage. At roughly $0.09 per GB of data transfer out, watching 1 TB of video from S3 in a month would cost about $92, so in that case you're better off staying on Zoom. A second, minor downside is that you'll only have the raw video files, so no nice Zoom multi-camera player for these meetings. If you only need to keep the recordings for compliance and occasional reference, read on.

The first step is to retrieve the meetings from your database (I'm assuming you're storing the Zoom meeting IDs/URLs in some kind of entity; the examples here use Sequelize).

const { Op } = require("sequelize");

// ZoomMeeting is your Sequelize model; cutoff is the oldest startTime you still
// want to keep on Zoom. I used 9 months.
const cutoff = new Date();
cutoff.setMonth(cutoff.getMonth() - 9);

const meetingsToTransfer = await ZoomMeeting.findAll({
    where: {
        startTime: {
            [Op.lt]: cutoff, // Only transfer recordings older than the cutoff
        },
        recordingStatus: "completed", // Only get meetings where the recording exists
        transferredToS3: false,
        untransferable: false, // More on this later
    },
    limit: 50, // Limit batch processing to 50 at a time; this is important if you're starting with a large number of meetings
    order: [["startTime", "ASC"]],
});

(This time-based query is the key to keeping the migration robust and efficient without too many moving parts: each run picks up the oldest meetings before the cutoff that haven't been transferred yet, while skipping anything that can't be transferred, so you can run it on a schedule until the backlog is drained.)
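
To make that concrete, here's a minimal sketch of an outer runner you might call from a scheduled job. findMeetingsToTransfer() wraps the query above, and transferMeeting() is a hypothetical helper that performs the per-meeting steps described below.

// Hypothetical outer loop: process one batch per run, re-run until nothing is left.
async function runMigrationBatch() {
    const meetingsToTransfer = await findMeetingsToTransfer(); // the query above
    for (const meeting of meetingsToTransfer) {
        try {
            await transferMeeting(meeting); // fetch, download, upload, flag, delete
        } catch (err) {
            // One bad meeting shouldn't stop the whole batch; log it and move on
            console.error(`Transfer failed for meeting ${meeting.id}:`, err);
        }
    }
    return meetingsToTransfer.length; // anything > 0 means there's more to do next run
}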

For each meeting, get the recording data from Zoom. It's important to note that each recording may have multiple videos in recording.recording_files -- if the recording was stopped and then restarted, or if there are multiple camera angles. You'll want to keep the recordings for a single meeting together: either store them all in a folder named after the meeting's Zoom ID, or namespace the files with a combination of the meeting ID and the video ID or an index. If no recording exists for the meeting (generally because it was scheduled but never actually happened), mark that meeting as "untransferable" and skip it.
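
Here's a rough sketch of that lookup, assuming a Zoom server-to-server OAuth setup; getZoomAccessToken() is a hypothetical credentials helper, and zoomMeetingId stands in for whatever column holds the meeting's Zoom ID.

const axios = require("axios");

// Fetch recording metadata for one meeting; returns only the MP4 video files.
async function fetchRecordingFiles(meeting) {
    try {
        const token = await getZoomAccessToken(); // hypothetical OAuth helper
        const { data } = await axios.get(
            `https://api.zoom.us/v2/meetings/${meeting.zoomMeetingId}/recordings`,
            { headers: { Authorization: `Bearer ${token}` } }
        );
        // One meeting can have several files (stop/restart, multiple views)
        return data.recording_files.filter((file) => file.file_type === "MP4");
    } catch (err) {
        if (err.response && err.response.status === 404) {
            // No recording exists (e.g. scheduled but never held): skip it from now on
            await meeting.update({ untransferable: true });
            return [];
        }
        throw err;
    }
}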

Next, for each video, download the raw file to a temp directory using Axios or similar. When the download finishes, upload it to your S3 bucket, making sure to set the ContentType parameter to video/mp4. Add the returned S3 URL to the s3ArchiveUrls (or similar) field on the meeting record, and once every recording for the meeting has been added to that field, set transferredToS3 to true and save the meeting.

Finally, once (and only if) every other operation has completed successfully, delete the recordings via the Zoom API. In the unlikely event that the delete call fails, you can send an error notification and roll back the transferredToS3 flag for that meeting. Don't delete the temp file right away -- instead, set up a daily cron job that deletes any files older than a week from the temp folder, as an extra safeguard against data loss.
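
Below is a sketch of that per-file transfer using the AWS SDK v3. The bucket name, region, key scheme, and temp directory are placeholders, it assumes the download URL accepts the same OAuth bearer token as the rest of the Zoom API, and the final Zoom delete call and flag updates are left to the surrounding code.

const fs = require("fs");
const path = require("path");
const { pipeline } = require("stream/promises");
const axios = require("axios");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region
const BUCKET = "my-zoom-archive";                 // placeholder bucket name
const TEMP_DIR = "/tmp/zoom-archive";             // placeholder temp directory
fs.mkdirSync(TEMP_DIR, { recursive: true });

// Download one Zoom recording file to a temp path, upload it to S3,
// and return the S3 URL to append to the meeting's s3ArchiveUrls field.
async function transferFile(meeting, file, zoomToken) {
    const tempPath = path.join(TEMP_DIR, `${meeting.zoomMeetingId}-${file.id}.mp4`);

    // Stream the raw MP4 to disk; download_url takes the same bearer token
    const response = await axios.get(file.download_url, {
        responseType: "stream",
        headers: { Authorization: `Bearer ${zoomToken}` },
    });
    await pipeline(response.data, fs.createWriteStream(tempPath));

    // Namespace the key by meeting ID so a meeting's files stay together
    const key = `zoom-recordings/${meeting.zoomMeetingId}/${file.id}.mp4`;
    await s3.send(new PutObjectCommand({
        Bucket: BUCKET,
        Key: key,
        Body: fs.createReadStream(tempPath),
        ContentLength: fs.statSync(tempPath).size,
        ContentType: "video/mp4",
    }));

    return `https://${BUCKET}.s3.us-east-1.amazonaws.com/${key}`;
}

Downloading to disk first rather than piping straight into S3 keeps memory usage flat for multi-gigabyte files, and it leaves the local copy around until the cleanup cron removes it.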

Before we moved to an Enterprise-level Zoom plan, this pipeline moved 15 TB of video for us, saving thousands of dollars in storage fees -- easily paying for the development cost many times over.
