Paco's Blog

Migrate from AWS S3 to OVH Object Storage, a Rails application use case

Updated: 2025-10-20

The reason

There are many reasons, but for now let's just say that it makes more sense to use a provider that respects EU law in a clearer way. I have nothing against AWS; they provide good tools with a rich ecosystem. But we are also living in an uncertain world, and digital sovereignty is something that will count more and more, at every level. That is a topic for another article. As for OVH Object Storage, we chose it because OVH was already our main provider for other services, but the same logic would have applied to other S3-compatible solutions such as Clever Cloud Cellar, Scaleway Object Storage or Hetzner Object Storage.

The OpenFoodNetwork and Coopcircuits projects

Here is also the moment for me to introduce the "we" :) I want to talk to you about a project that has been close to my heart for many years, OpenFoodNetwork. The project originally comes from Australia, it is non-profit, and it helps with "creating the tools and resources needed to build food systems that are fair, local, and transparent". One of its main tools shares the same name and is written in Ruby. I discovered the awesome people working on Coopcircuits, the French instance of this project, in a coworking space close to Buttes Chaumont in Paris almost 10 years ago. I started contributing on a voluntary basis, and as I restarted my freelance activity in September, it became easier to find time for the project. I then decided to fix a few bugs and to work on the S3-compatible provider issue.

App update

On the Rails side, extending the storage solution was actually really simple, as the Ruby library used, aws-sdk-s3, already supports S3-compatible providers: you just need to pass it the new endpoint you want to use.

You can summarise it as adding a new configuration for Active Storage and referencing the corresponding service name where needed in the codebase.

# config/storage.yml
...
s3_compatible_storage:
  service: S3
  endpoint: https://s3.gra.io.cloud.ovh.net # your provider's S3 endpoint (OVH GRA as an example)
  access_key_id: xxxx
  secret_access_key: xxxx
  bucket: my-new-bucket
  region: gra # the region matching the endpoint above
...
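In the codebase, pointing an attachment at the new service can then look like this (a minimal sketch; the model and attachment names below are hypothetical, only the `service:` option matters):

```ruby
class Product < ApplicationRecord
  # Store this attachment on the new S3-compatible service;
  # the symbol must match the key declared in config/storage.yml.
  has_one_attached :image, service: :s3_compatible_storage
end
```

Alternatively, if the whole application should move at once, `config.active_storage.service` can be switched globally in the environment configuration.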

Note that you may need a specific configuration for files requiring public access. You will also need to tune your Content Security Policy to accept files served from the new provider, by editing, for example, your config/initializers/content_security_policy.rb file (and check the CORS configuration on the bucket side if files are fetched directly from the browser).
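For the Content Security Policy part, a minimal sketch of such an initializer (the hostname below is an example; use your bucket's actual public URL):

```ruby
# config/initializers/content_security_policy.rb
Rails.application.config.content_security_policy do |policy|
  # Allow images served from the new object storage provider.
  # The host is an example value, not the project's real bucket.
  policy.img_src :self, :data, "https://my-new-bucket.s3.gra.io.cloud.ovh.net"
end
```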

FYI, the specific logic is described in this Pull Request.

ActiveStorage consideration

ActiveStorage stores file metadata in an object called Blob. Each blob records the name of the service configuration used for the upload, which makes it possible to track multiple storage services in a single database table.

Since we want to migrate the files permanently, we then need to update the existing records in the database so they point to the new storage configuration.

You can write a script or a task with this basic logic:

# Inspect how blobs are currently distributed across services
ActiveStorage::Blob.group(:service_name).count

# Point every blob of the old service at the new one
# (the service names must match the keys in config/storage.yml)
ActiveStorage::Blob.where(service_name: 'former_aws_s3_configuration').update_all(service_name: 's3_compatible_storage')

# Verify the result
ActiveStorage::Blob.group(:service_name).count

This needs to be done once the new configuration is set up and the files have been synchronised.

Buckets synchronisation

The main part of the task is synchronising the data between the old bucket and the new one. Again, S3 compatibility makes it quite easy; you just need some disk space and some time.

Also, in our case, we could briefly take the services down at night to carry out the configuration switch without risk of data loss. The bucket was about 55 GB.

The plan was:

1. Warm up: synchronise the buckets ahead of time, while the application is still running.
2. Maintenance window: stop the services at night.
3. Final sync: copy only the files changed since the warm-up.
4. Switch: update the blobs in the database and deploy the new configuration.
5. Run smoke tests, then reopen the services.

Using the AWS CLI, you have these useful commands:

# Create local folder
mkdir tmp-local-bucket

# Check configuration and ACLs on former bucket
aws s3api get-bucket-acl --bucket old-bucket --profile aws-profile

# Check configuration and ACLs on new bucket
aws s3api get-bucket-acl --bucket new-bucket --profile ovh-profile

# Sync files between former and new buckets, via a local copy
aws s3 sync --profile aws-profile s3://old-bucket ./tmp-local-bucket
aws s3 sync --profile ovh-profile --acl public-read ./tmp-local-bucket s3://new-bucket
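To gain confidence before switching, you can compare the local copy with the new bucket's listing. Here is a minimal sketch using only the Ruby standard library (the directory name is an example): it builds a manifest of relative path, size and MD5 that can be checked against the output of `aws s3api list-objects-v2`, since for single-part uploads the S3 ETag is the MD5 of the object body.

```ruby
require "digest"
require "pathname"

# Build a manifest of every file under `root`: relative key, size, MD5.
# Each entry can be matched against the Key/Size/ETag fields returned
# by the new bucket's object listing.
def local_manifest(root)
  base = Pathname.new(root)
  Dir.glob(File.join(root, "**", "*"))
     .select { |path| File.file?(path) }
     .sort
     .map do |path|
    {
      key: Pathname.new(path).relative_path_from(base).to_s,
      size: File.size(path),
      md5: Digest::MD5.file(path).hexdigest
    }
  end
end
```

Note that objects uploaded in multiple parts get an ETag that is not a plain MD5, so comparing sizes and counts is the safer universal check.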

Important note: do not forget to synchronise your files with the proper ACL directly on the new bucket (here, public-read), as aws s3 sync does not carry ACLs over for you.

If you have warmed up the bucket synchronisation beforehand, the maintenance phase should last five minutes at most.

Conclusion

This kind of migration is not really complex; you just need to prepare it properly and run some smoke tests to validate it. It is not something that is widely considered, but data portability may become increasingly important from a strategic standpoint in the next few years. If data is the new gold, it is better to know clearly where it is and how your provider takes care of it.

