Backwards Compatible Feature Development with Zero Downtime Deployment

5 Nov 2025 · 7 min read · Ruby

Introduction

I've been working with Rails for over a decade, and one of the most challenging aspects of building production applications is evolving features without breaking existing functionality. In this post, I’ll do a deep dive into how we introduced global and multi-targeted backup policies into our Katapult platform. You’ll learn how we launched the feature while maintaining backwards compatibility and achieving zero-downtime deployment.

We'll cover how to maintain backwards compatibility when:

  • Adapting an existing belongs_to relationship into a has_many
  • Using feature flags for safe rollouts
  • Writing database migrations for zero downtime

These techniques will help you build robust, maintainable systems that can evolve without breaking existing functionality, especially in production systems where downtime isn't an option. Working this way also forces you to think about how data is structured and managed in your application, which is always a good thing.

Adapting an existing belongs_to polymorphic relationship into a has_many

The existing backup system was straightforward: each backup policy was tied to a single target (disk or virtual machine) via a polymorphic belongs_to :target relationship.

Single Target Disk Backup

This meant that if you wanted to create a new backup policy for another target, you had to manually duplicate the policy.
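
While the post doesn't show the legacy model, the description above suggests it looked roughly like this (a minimal sketch, with the association name taken from the post):

class DiskBackupPolicy < ApplicationRecord
  # Legacy shape: one policy, one polymorphic target (a Disk or a VirtualMachine).
  belongs_to :target, polymorphic: true
end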

Our goal was to support global policies that could apply to multiple targets and multiple types of targets. One backup policy could target specific groups of resources through tags, virtual machine groups, or individual selections.

Designing for Backwards Compatibility

The challenge was maintaining backwards compatibility while extending the system's capabilities. Rather than replacing the existing single-target relationship, we introduced a new polymorphic association through a join table. This approach preserved all existing functionality while adding the flexibility needed for multi-target policies.

Multi-target Disk Backup

We created a MultiDiskBackupTarget model that acts as a bridge between backup policies and their various target types. This model uses the same polymorphic pattern as the original target field but enables multiple targets per policy through a has_many relationship:

class MultiDiskBackupTarget < ApplicationRecord
  belongs_to :disk_backup_policy
  belongs_to :resource, polymorphic: true

  # Constrain which resource types a policy may target.
  ALLOWED_RESOURCE_TYPES = %w[Disk Tag VirtualMachine VirtualMachineGroup].freeze
  validates :resource_type, inclusion: { in: ALLOWED_RESOURCE_TYPES }
end

This design maintains the existing single-target behavior while allowing references to multiple resources. The polymorphic nature of both relationships means we can easily add new target types in the future without schema changes.
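
On the policy side, this presumably pairs with something like the following (a sketch; optional: true on the legacy target is an assumption, consistent with the relaxed target_type validation described later):

class DiskBackupPolicy < ApplicationRecord
  belongs_to :target, polymorphic: true, optional: true # legacy single target
  has_many :multi_disk_backup_targets, dependent: :destroy
end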

The implementation encapsulates the complexity of resolving different resource types to their associated disks within the MultiDiskBackupTarget model, providing a clean interface for the rest of the application.

Implementing Polymorphic Associations

We started by designing a polymorphic relationship structure that allows a single backup policy to work with different target types.

MultiDiskBackupTarget extends the familiar polymorphic pattern into a collection. Each row binds a policy to a resource that can be a Disk, VirtualMachine, VirtualMachineGroup, or Tag.

From there, the policy gathers the resources you choose and we resolve them to the actual disks behind the scenes. If we want to support a new kind of target later, we add it in one place and tell it how to map to disks – no heavy migrations or large refactors required. The rest of the app stays the same, keeping adoption smooth and future changes easy.
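
As an illustration, that single mapping point might live on MultiDiskBackupTarget as one method like this (a hedged sketch; how each resource type exposes its disks is an assumption):

class MultiDiskBackupTarget < ApplicationRecord
  # Resolve whichever resource this row references down to concrete disks.
  def disks
    case resource
    when Disk                then [resource]
    when VirtualMachine      then resource.disks
    when VirtualMachineGroup then resource.virtual_machines.flat_map(&:disks)
    when Tag                 then resource.disks # hypothetical: however tagged resources expose their disks
    else []
    end
  end
end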

Introducing a single interface to handle multiple implementations

We needed to support both old and new implementations. Using a single interface to wrap both allowed us to isolate the complex policy resolution in one place. The key insight was to design the DiskBackupPolicy model with a flexible disks_to_backup method that handles multiple implementation strategies:

def disks_to_backup
  # Global policies back up every eligible disk in the organization.
  return organization_disks if global_policy?
  # Multi-target policies resolve their disks through the join table.
  return backup_target_disks if backup_target_disks.any?

  # Legacy single-target behavior, preserved unchanged.
  case target
  when Disk
    target.can_take_backup? ? [target] : []
  when VirtualMachine
    target.disks.system.select(&:can_take_backup?)
  else
    []
  end
end
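
The organization_disks and backup_target_disks helpers aren't shown in the post; plausible sketches, assuming the associations above and an organization association on the policy, might be:

def organization_disks
  # Hypothetical: a global policy covers every eligible disk in the organization.
  organization.disks.select(&:can_take_backup?)
end

def backup_target_disks
  # Resolve each multi-disk backup target to its disks, dropping duplicates.
  multi_disk_backup_targets.flat_map(&:disks).uniq.select(&:can_take_backup?)
end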

This approach provides several benefits:

  1. Unified Interface: Whether a policy targets a single disk, a virtual machine, or multiple resources through tags, the same disks_to_backup method retrieves the relevant disks.

  2. Extensibility: Adding new target types requires minimal changes to the core logic.

  3. Consistent Behavior: All backup policies, regardless of their target type, follow the same execution path, keeping the system predictable and maintainable.

The polymorphic MultiDiskBackupTarget model further enhances this flexibility by allowing backup policies to reference any resource type through a single interface. This design pattern is particularly valuable when building features that need to work across different domain objects while maintaining consistency.

Feature Flag Strategy for Safe Rollouts

Implementing a feature like multi-target backup policies requires careful rollout planning. We used a feature flag strategy to gradually introduce the new functionality while maintaining system stability.

The feature flag implementation operated at multiple levels:

  1. UI Level: The new form fields for global policies and multi-disk targets were conditionally rendered based on feature flags, so users only saw them when enabled.

  2. Controller Level: Logic handled new backup target attributes with fallbacks for when the feature was disabled.

  3. Model Level: The DiskBackupPolicy model included feature flag checks, ensuring that the new functionality didn't interfere with existing backup policies (sketched below).
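
As a rough illustration of the model-level check, assuming a Flipper-style flag API (the post doesn't name the flag system or the flag itself):

class DiskBackupPolicy < ApplicationRecord
  def global_policy?
    # Only honour the new column when the flag is enabled for this organization.
    multi_target_backups_enabled? && self[:global_policy]
  end

  private

  def multi_target_backups_enabled?
    Flipper.enabled?(:multi_target_backup_policies, organization)
  end
end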

This multi-layered approach provided several advantages:

  • Gradual Rollout: We could enable the feature for specific organizations or user groups first, allowing us to monitor performance and gather feedback.

  • Quick Rollback: If issues were discovered, we could disable the feature immediately without requiring a code deployment.

The key to success was choosing the right granularity for flags: broad enough to be useful, but not so fine-grained that the codebase became complex.

Zero-Downtime Database Migrations

Like many development teams, we want to roll out changes without service interruptions, so that we don’t need to put Katapult into any kind of maintenance mode unless absolutely necessary.

This adds complexity to database migrations, as they must remain compatible with both old and new code. The strong_migrations gem is great for guiding safe schema changes.

Adding support for global backup policies and multi-disk targets required significant database schema changes. We needed to add new tables, columns, and relationships while keeping existing policies functional.

The migration strategy followed a careful sequence:

  1. Add New Columns with Safe Defaults: The global_policy column was added with a default value of false, ensuring that existing policies remained unchanged.

  2. Create New Tables: The multi_disk_backup_targets table was created with proper indexes and foreign key constraints, but without affecting existing data (see the migration sketch after this list).

  3. Update Constraints Gradually: The target_type validation was updated to allow blank values, enabling the new global policy functionality while maintaining backward compatibility.
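
As a sketch of the first two steps (the column and table names come from the post; the rest, including the Rails version, is illustrative):

class AddGlobalPolicyToDiskBackupPolicies < ActiveRecord::Migration[7.1]
  def change
    # Constant default, so safe to add without rewriting the table on modern Postgres.
    add_column :disk_backup_policies, :global_policy, :boolean, default: false, null: false
  end
end

class CreateMultiDiskBackupTargets < ActiveRecord::Migration[7.1]
  def change
    create_table :multi_disk_backup_targets do |t|
      t.references :disk_backup_policy, null: false, foreign_key: true
      t.references :resource, polymorphic: true, null: false
      t.timestamps
    end
  end
end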

The critical aspect of these migrations was ensuring that they could be applied to a production database without causing downtime:

  • Non-blocking Operations: All schema changes were designed to be non-blocking, avoiding table locks that could impact user experience.

  • Backward Compatibility: Existing backup policies continued to work exactly as before, with the new functionality being additive rather than replacing existing behavior.

  • Data Integrity: Foreign key constraints and validations were added in a way that didn't break existing data relationships.

All migrations were tested in staging with production-like data to ensure performance and reliability.

This approach to database migrations is essential for any team that needs to deploy features without service interruption. The key is to think of migrations as a series of small, safe steps rather than a single large change, allowing for rollback at any point if issues are discovered.

Lessons Learned and Best Practices

This project provided valuable insights into building backwards-compatible features in production Rails applications. Here are the key takeaways that you can apply to your own work:

When to Use Polymorphic Associations

Polymorphic associations can be a powerful tool, but they are not always the right choice. They shine when modelling flexible relationships across different domain objects, but can create complexity around queries and performance.

Structuring Migrations for Zero-Downtime Deployments

Safe database migrations are the foundation of any zero-downtime deployment strategy. The goal is to evolve your schema incrementally, in a way that new code can run against both the old and new versions until the rollout is complete.

Testing Strategy for Backwards Compatibility

Even the most carefully planned rollouts can fail if you don't validate both old and new code paths. Testing backwards compatibility means making sure that legacy functionality continues to behave as expected while new behavior is layered on.
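
A minimal RSpec-style sketch of that idea (assuming FactoryBot factories and that the created disks are eligible for backup):

RSpec.describe DiskBackupPolicy do
  it "still resolves a legacy single-target policy" do
    disk = create(:disk)
    policy = create(:disk_backup_policy, target: disk)
    expect(policy.disks_to_backup).to eq([disk])
  end

  it "resolves disks through multi-disk backup targets" do
    disk = create(:disk)
    policy = create(:disk_backup_policy)
    policy.multi_disk_backup_targets.create!(resource: disk)
    expect(policy.disks_to_backup).to eq([disk])
  end
end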

Conclusion

The disk backup policy feature shows how thoughtful design and careful implementation can enable significant functionality enhancements without disrupting existing systems. The key to success was treating backwards compatibility as a core design principle – not an afterthought.

Reducing risk

While this approach demands more upfront planning, it reduces risk, ensures smoother deployment, and builds long-term reliability.

By extending the existing systems rather than replacing them, we maintained stability while gaining new capabilities. The polymorphic relationship structure provided the flexibility needed for multi-target policies while preserving the simplicity of single-target policies.

This same pattern can apply to many areas of software development – new APIs, UI changes, model extensions, and integrations – all built in ways that protect existing functionality while delivering new value.

Ultimately, the best features are the ones that work seamlessly for all users, old and new alike. Backwards compatibility isn’t a limitation, it’s the foundation of building robust, user-focused software.

Applying This Approach to Other Features

This pattern of backwards-compatible feature development can be applied to many scenarios in Rails applications:

  • Add new API endpoints while maintaining existing ones
  • Introduce new UI components while keeping legacy interfaces functional
  • Extend existing models with new relationships and behaviors
  • Add new integrations while maintaining existing ones

The key is to think of new features as extensions rather than replacements, design for gradual rollout, and always have a rollback plan. By following these principles, you can deliver new functionality with confidence, knowing that your existing users and systems remain protected.

About the author

Richard P

Senior Software Engineer at Krystal. If I'm not in front of my computer I am probably bobbing around the Solent on my sailboat.
