Launch: Solving the Hidden Complexity of Authorization Migrations with Oso Migrate

In 2012, GitHub was ready to expand upmarket, but their legacy authorization system was holding them back.

“The core problems they were having growing the platform… were based around how they had internally modeled permissioning, access rights and access control.”

That’s Rick Bradley, an engineering veteran who was approached by GitHub to lead the authorization refactor. As with any massive migration, the stakes were high.

“At GitHub, we were seeing permissioning operations on the order of millions an hour…if we make any mistakes with the publicity of people's code or access to their code, we have a problem…But what we realized was that if we tried to design a system, write tests for it, deploy it into production, we would find that we couldn't match the expectations of the behavior with what was actually happening in the system.”

The core challenge wasn’t building the new authorization system. It was understanding the behavior of the existing one. To ensure parity, the old and new logic needed to be run side-by-side with production data. But production data changes constantly, making it difficult to establish reliable baselines for comparison.

“We had to be able to reason about it. And one of the things that obstructs reasoning is if there are changes happening to the data sets that you're trying to reason about.”

To solve this problem, he and his team developed Scientist, a Ruby library for safely refactoring critical code paths through controlled experimentation. Scientist provides an interface for comparing the behavior of old and new code in production environments without risking system failures. The library allows developers to run both the legacy and refactored code simultaneously, collecting detailed metrics and performance data while serving users with the trusted original implementation. This approach enabled GitHub's engineering team to confidently migrate complex, mission-critical systems by validating that their refactored code produced identical results to the original implementation across thousands of real-world scenarios.

Rick's experience at GitHub mirrors our own experience with authorization system migrations. But most of us don’t have the luxury of bringing in a dedicated team to solve the problem. Sure, there’s a lot more information about authorization systems today than there was in 2012, but much of that material is focused on higher-order operational concerns like scalability and performance. For the teams we talk to, the industry has a huge blind spot: what it takes to migrate from a legacy system to a new authorization system.

Today, we are launching Oso Migrate to fix this. Oso Migrate makes authorization migrations go 3x faster with utilities and APIs that support developers through each stage of migration, like an API for measuring parity between systems and a policy debugger for diagnosing inconsistencies.

Oso Migrate is the result of our experience working with engineering teams all over the world to migrate safely from a mosaic of homegrown systems to Oso Cloud. So before describing Oso Migrate, I’ll share some of the most common challenges we observe in a migration like this, and how to think about solving them (with Oso Cloud or otherwise). 

Because the promise of Authorization as a Service doesn’t mean anything until you’re actually using it.

Why Authorization Migrations are Uniquely Challenging

Everybody assumes they know how their current authorization works, but in our experience…almost nobody does. It may come as a surprise, but the hardest part of an authorization migration is usually understanding the legacy authorization implementation.

Legacy authorization implementations are often messy. Authorization is a global concern, so it ends up entangling itself throughout your application. This gets even more complicated in microservice architectures. You can decompose your business logic, but you can’t decompose authorization, so now that logic is scattered across all your services. Here’s how that makes it hard to understand your existing authorization behavior.

Authorization logic and enforcement live in multiple layers

Authorization logic rarely lives in a single, well-defined location. Instead, it's scattered across multiple layers of your application stack, creating a complex web of interdependent checks that can be difficult to trace and understand. Sometimes authorization is cleanly contained within a single layer like the controller, but more often it's spread across middleware, controllers, service layers, and even SQL queries.

Consider a document management API endpoint that appears straightforward but actually implements authorization across three different layers:
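Here's a minimal sketch of what those three layers might look like (framework-agnostic Python; every name here is illustrative, not taken from a real codebase):

```python
# Hypothetical sketch: three layers each enforce part of the decision.

class AuthorizationError(Exception):
    pass

# Layer 1: middleware validates the JWT and attaches the user to the request.
def auth_middleware(request):
    token = request.get("jwt")
    if token is None or token.get("expired"):
        raise AuthorizationError("invalid or expired token")
    request["user"] = token["user"]

# Layer 2: controller checks a coarse role before delegating.
def get_document_controller(request, doc_id, documents):
    if request["user"]["role"] not in ("member", "admin"):
        raise AuthorizationError("role not permitted")
    return DocumentService(documents).get(request["user"], doc_id)

# Layer 3: service layer verifies ownership.
class DocumentService:
    def __init__(self, documents):
        self.documents = documents

    def get(self, user, doc_id):
        doc = self.documents[doc_id]
        if doc["owner_id"] != user["id"] and user["role"] != "admin":
            raise AuthorizationError("not the document owner")
        return doc
```

No single layer holds the whole rule: remove any one check and the behavior silently changes.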

In this example, a simple document retrieval involves JWT validation in middleware, role checking in the controller, and ownership verification in the service layer. Each layer assumes the others are working correctly, but changing any one piece could break the entire authorization chain.

Different services enforce authorization in different layers

The complexity compounds when different teams and microservices handle authorization at different layers. One service might enforce everything at the API gateway level, another might rely on row-level security in the database, and a third might scatter checks throughout its business logic. The freedom for different teams to set their own standards is a huge benefit of microservices, but it complicates shared concerns, like authorization, that benefit from standardization.

Consider two services in the same application that handle user data differently:
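A condensed sketch of the pattern (hypothetical Python; service and field names are invented for illustration):

```python
# Hypothetical: two services in the same app enforcing authorization in different layers.

class Forbidden(Exception):
    pass

# User Service: the check lives in the controller.
def get_user_controller(current_user, target_user_id):
    if current_user["id"] != target_user_id and current_user["role"] != "admin":
        raise Forbidden("cannot view another user's profile")
    return {"id": target_user_id}  # the actual fetch would happen here

# Order Service: the controller is permission-free...
def get_order_controller(current_user, order, order_service):
    return order_service.fetch(current_user, order)

# ...because the check is buried in the service layer.
class OrderService:
    def fetch(self, current_user, order):
        if order["customer_id"] != current_user["id"] and not current_user.get("is_support"):
            raise Forbidden("cannot view another customer's order")
        return order
```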

When you need to change authorization behavior, you have to remember that the User Service checks permissions in the controller while the Order Service has complex logic buried in the service layer. Missing either location during a migration could leave security gaps. Even with a few dozen endpoints, it becomes impossible to predict how an authorization change in one layer will ripple through the system (let alone in more mature systems, which commonly have hundreds or thousands of endpoints).

Authorization logic gets tangled with application logic

Authorization logic inevitably gets entangled with core business logic. This happens gradually as developers add "quick fixes" or handle edge cases, eventually making it impossible to isolate the application logic from authorization changes. Each affects the other due to tightly coupled code and inconsistent abstractions.

Here's an example of how authorization logic can become deeply embedded in application logic:

In this endpoint, what is authorization logic and what is business logic? The authorization decisions depend on project status, user roles, budget constraints, and existing task counts. During a migration, you need to determine which of these checks are truly authorization concerns versus business rules that happen to throw authorization errors.

Services require data located outside their local data store to make decisions

Microservices architectures often require data from multiple services to resolve authorization decisions. This creates complex dependencies where a service needs to reach across service boundaries to gather the information necessary for authorization decisions. The challenge is compounded by the fact that identifying what constitutes "authorization data" isn't always straightforward.

Consider an expense reporting system where the Expense Service needs to authorize expense approvals:
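A sketch of what that fan-out can look like (hypothetical Python; the client interfaces are stand-ins for real cross-service calls):

```python
# Hypothetical: the Expense Service must gather facts from three other services
# (HR, Organization, Finance) before it can decide.

class Forbidden(Exception):
    pass

def authorize_expense_approval(approver_id, expense, hr_client, org_client, finance_client):
    # HR Service: is the approver the submitter's manager?
    if hr_client.get_manager(expense["submitter_id"]) != approver_id:
        raise Forbidden("only the submitter's manager may approve")
    # Organization Service: are they in the same cost center?
    if org_client.get_cost_center(approver_id) != org_client.get_cost_center(expense["submitter_id"]):
        raise Forbidden("approver outside the submitter's cost center")
    # Finance Service: is the amount within the approver's limit?
    if expense["amount"] > finance_client.get_approval_limit(approver_id):
        raise Forbidden("amount exceeds approval limit")
    return True
```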

The Expense Service has to call three other services – HR, Organization, and Finance – just to get all the data it needs to make an authorization decision. This isn’t even necessarily bad design. It’s just inevitable with shared concerns like authorization in microservices environments (or really any system with multiple databases). 

Taken together, these patterns make it difficult to reason about what will happen when you make a change to your current system, let alone migrate to an entirely new one. What we've learned: understanding your system takes more time than you think. What you need is a way to make that easier, and a migration strategy that minimizes the risk.

How we approach migrations to Oso Cloud

We advocate for an endpoint-by-endpoint, test-driven implementation strategy. Above all:

  1. Don't change your existing logic. Make the new logic match it.
  2. Run both implementations side by side. Switch from old to new with a feature flag.
  3. Migrate incrementally. Don’t try to “boil the ocean.”

This is the high-level playbook we run with our customers:

Choose an Endpoint to Migrate

Each endpoint will have a set of authorization data and logic required to make authorization decisions. The key is starting with an endpoint that balances complexity with business impact. We recommend beginning with endpoints that have well-defined authorization patterns and moderate complexity rather than the most critical or most complex endpoints. Some good questions to consider are:

  • Does the endpoint depend on data from other microservices?
  • How complex is the authorization logic?
  • How well do you understand the current authorization logic?

For example, a document viewing endpoint that has clear ownership concepts and role-based permissions would be ideal. A support endpoint that grants access to other users’ accounts based on ticket ownership and on-call schedules can wait until you've built confidence with simpler cases.

The goal is to establish a proven migration pattern that can be replicated across increasingly complex endpoints as your team gains experience with both the migration process and Oso Cloud's capabilities.

Understand how the endpoint’s authorization behaves

We recommend building a variety of scenarios, which include data inputs, actors, resources, and the expected authorization decision. The goal is to get as comprehensive an understanding of the current logic as possible so you can reproduce that logic accurately in your new system. This phase is often more complex than teams anticipate because it frequently uncovers edge cases and implicit assumptions that may not be documented anywhere.

For example, you might discover that your document endpoint has an undocumented rule where archived documents are only accessible by users with admin roles OR by the original document creator within 30 days of archival. These complex, conditional rules often emerge only through systematic scenario exploration and production traffic analysis.
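One way to capture such scenarios is a small, executable catalog of expected decisions (a hypothetical Python sketch; `legacy_can_read` encodes the 30-day rule from the example above as reverse-engineered from production behavior):

```python
from datetime import date, timedelta

# Hypothetical scenario catalog: each entry records the inputs and the decision
# the legacy system is expected to produce.
scenarios = [
    {"actor": {"id": 1, "role": "admin"},  "doc": {"status": "archived", "creator_id": 2, "archived_on": date(2024, 1, 1)}, "as_of": date(2024, 3, 1),  "expected": True},
    {"actor": {"id": 2, "role": "member"}, "doc": {"status": "archived", "creator_id": 2, "archived_on": date(2024, 1, 1)}, "as_of": date(2024, 1, 15), "expected": True},   # creator, within 30 days
    {"actor": {"id": 2, "role": "member"}, "doc": {"status": "archived", "creator_id": 2, "archived_on": date(2024, 1, 1)}, "as_of": date(2024, 3, 1),  "expected": False},  # creator, too late
]

def legacy_can_read(actor, doc, as_of):
    # The undocumented rule, written down as code so it can be checked.
    if doc["status"] != "archived":
        return True
    if actor["role"] == "admin":
        return True
    return actor["id"] == doc["creator_id"] and (as_of - doc["archived_on"]) <= timedelta(days=30)

for s in scenarios:
    assert legacy_can_read(s["actor"], s["doc"], s["as_of"]) == s["expected"]
```

A catalog like this becomes the acceptance suite for the new implementation: every scenario the legacy system passes, the new system must pass too.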

Translate the authorization logic to Polar

This requires mapping entities like users and resources into our declarative policy-as-code language, Polar. The translation process involves both resource modeling and logic expression, each with their own challenges and iterative refinement needs.

Modeling your resources means identifying the key entities, attributes, and relationships that drive authorization decisions. This process often reveals gaps in your understanding. For instance, you might discover that user permissions aren't just based on static roles, but also on dynamic attributes like project status, or even calculated properties like current site activity.

Keeping a tight feedback loop during this phase is critical. It might require multiple iterations to solidify your entity definitions, refine relationship mappings, or add required attributes that weren't obvious in the original system.
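As a rough illustration, a first pass at the document model from the earlier examples might look like this in Polar (a sketch only; the resource, role, and permission names are illustrative and would be refined over those iterations):

```polar
actor User { }

resource Document {
  permissions = ["read", "edit"];
  roles = ["viewer", "owner"];

  # role implications
  "viewer" if "owner";

  # permission grants
  "read" if "viewer";
  "edit" if "owner";
}
```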

Wire Up Oso Cloud

Next, add the Oso Cloud client to run alongside the endpoint's existing authorization enforcement. This parallel execution pattern is crucial for mitigating risk – your existing authorization continues to control access while Oso Cloud operates in shadow mode.

The integration typically involves adding the Oso Cloud client initialization to your service startup and creating a wrapper function that can call both your legacy authorization and Oso Cloud simultaneously:
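A sketch of such a wrapper (hypothetical Python; `oso_check` stands in for your actual Oso Cloud client call, whose API may differ):

```python
import logging

logger = logging.getLogger("authz.shadow")

def authorize(user, action, resource, legacy_check, oso_check, use_oso=False):
    """Run both authorization paths; serve the legacy decision unless the
    feature flag flips traffic to Oso Cloud.

    legacy_check / oso_check are callables returning bool; oso_check is a
    stand-in for the Oso Cloud client call.
    """
    legacy_decision = legacy_check(user, action, resource)
    try:
        oso_decision = oso_check(user, action, resource)
    except Exception:
        # Shadow-mode failures must never affect production decisions.
        logger.exception("oso shadow check failed")
        oso_decision = None

    if oso_decision is not None and oso_decision != legacy_decision:
        logger.warning("authz mismatch: legacy=%s oso=%s user=%s action=%s resource=%s",
                       legacy_decision, oso_decision, user, action, resource)

    return oso_decision if (use_oso and oso_decision is not None) else legacy_decision
```

The feature flag (`use_oso`) lets you cut over endpoint by endpoint, and fall back instantly if something looks wrong.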

This dual-execution approach ensures that any issues with Oso Cloud integration don't impact your production authorization decisions while allowing you to validate the migration.

Start feeding live data into Oso Cloud 

Once the policy is in place, you’ll load production data into Oso Cloud. This is the last thing you need before your application can start making authorization calls with Oso Cloud in shadow mode. 

In some cases, you’ll need to centralize authorization data in Oso Cloud; our customers typically sync this data to Oso Cloud on an ongoing basis. This phase introduces the complexity of data synchronization and the challenge of maintaining consistency between your application’s data model and Oso Cloud’s authorization context. If you see an inconsistent result, you need to know whether it happened because the logic was wrong or because the data was out of sync. The more quickly you can determine that, the more quickly you can fix the issue.

Compare the two decisions

At this step, you compare the results of the old solution to Oso Cloud's output. This comparison involves three key challenges:

  • Legacy systems don't just return simple allow/deny decisions—they might return partial permissions ("read but not edit"), conditional access ("allow if user completes MFA"), or contextual information ("allowed but rate limited").
  • The same authorization check might return different results based on execution timing. Time-based permissions, rate limits, or quota checks can vary between systems due to slight timing differences.
  • Lastly, one system might be unavailable or return errors while the other succeeds. Your comparison logic must distinguish between legitimate authorization differences and operational discrepancies.

For any result, you need to be sure that you’re doing an apples-to-apples comparison and that any discrepancies are due to differences in the logic. Otherwise you’ll waste time chasing false negatives.
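One hedged sketch of how that normalization might work (hypothetical Python; the response shapes are invented to mirror the three challenges above):

```python
# Hypothetical normalization before comparing: both systems' responses are
# reduced to a comparable shape so only genuine logic differences surface.

def normalize(raw):
    """Map a system-specific response to (decision, granted_permissions), or None on error."""
    if raw is None or raw.get("error"):
        return None  # operational failure, not an authorization difference
    if "allowed" in raw:                 # new system: simple boolean decision
        return ("allow" if raw["allowed"] else "deny", frozenset())
    if "permissions" in raw:             # legacy: partial permission list
        granted = frozenset(raw["permissions"])
        return ("allow" if granted else "deny", granted)
    return None

def compare(legacy_raw, oso_raw, permission):
    left, right = normalize(legacy_raw), normalize(oso_raw)
    if left is None or right is None:
        return "inconclusive"            # one side errored; don't count it as a mismatch
    legacy_allows = left[0] == "allow" and (not left[1] or permission in left[1])
    oso_allows = right[0] == "allow"
    return "match" if legacy_allows == oso_allows else "mismatch"
```

Separating "inconclusive" from "mismatch" is what keeps operational noise out of your parity numbers.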

Debug the policy and data model to address inconsistencies

Once you identify discrepancies, refactor until you achieve parity between the two systems. This debugging phase often reveals the most complex aspects of your legacy authorization system and requires systematic investigation to resolve.

This typically involves tracing back through both systems to understand why each one is making the decisions it makes, e.g., 

  • Data investigation: Checking if the authorization data in both systems is identical
  • Logic analysis: Comparing the decision logic step-by-step
  • Context validation: Ensuring both systems have the same environmental context
  • Timing analysis: Investigating whether the decisions are timing-dependent

The biggest challenge throughout this process is explainability. If you don’t understand why both systems make specific decisions, then you won’t know what to refactor. You need detailed traces that show not just the final decision, but the intermediate steps and system state that led to that outcome.

Throughout the migration, you need tight feedback loops with deep visibility into the system behavior and rich context about authorization decisions. That’s why we built Oso Migrate. 

Introducing Oso Migrate

Oso Migrate is a local development TUI that streamlines authorization system migration by providing shorter, richer feedback loops when iterating on authorization logic.

Debugging complex authorization decisions is a slow, mentally taxing process. To lighten the load, we built a policy debugger that provides detailed information about why and how a decision was made. You can see the exact rule evaluation path, including which facts were consulted, which rules were triggered, and where the decision logic branched.

Instead of poring over application logs to trace how a result happened or inferring the logic through trial and error, you can now grasp authorization decisions at a glance. This lets developers iterate rapidly and safely, reducing the time to parity between your old and new systems.

The Parity API provides a centralized and simplified view of the decisions made by your legacy system and your new Oso Cloud implementation. Rather than requiring you to build custom comparison logic and manage decision logging across multiple systems, the Parity API standardizes this process and provides rich analytical capabilities.

The API is a decision comparison engine that automatically captures, normalizes, and analyzes authorization decisions from both systems. It handles the complexity of comparing decisions that may have different response formats, timing variations, or contextual information. We also provide historical trend analysis to track how decision parity improves over time as you refine your Oso Cloud policies and data model. You’ll (finally) have an answer to the inevitable question, “how’s the migration going?”

Authorization systems are hard to debug because the underlying data is constantly changing. User roles change, resources are modified, relationships are updated. By the time you investigate a discrepancy, the data may not be in the state that caused the issue. For example, a user might be denied access to a document at 2 PM, but when you debug at 3 PM, they now have the required permissions because their team membership changed in between. That was the impetus for building data snapshotting, which captures the point-in-time values of all data used in every evaluation decision, allowing you to replay prior decisions that didn't match up. This helps you debug time-sensitive authorization discrepancies and understand how data changes affect authorization behavior over time.

When combined, these features make it easy to understand the behavior of your new Oso Cloud implementation. Early feedback from customers has been overwhelmingly positive. Justin Helmer, Senior Staff Engineer at Webflow, says, “This is glorious. This is very valuable. We need this right now.”

Conclusion

Authorization migrations are one of the most complex technical challenges engineering teams face. At Oso, we've helped dozens of customers meet that challenge and we've used our experience to build a migration process that works. At the heart of that process is Oso Migrate: a suite of tools that takes the mystery out of the technical complexity. With the right guide, even the most confounding riddle can become a journey of understanding. And now, with Oso Migrate, you have your very own Scientist to guide you through your authorization migration.

To learn more about Oso Migrate, check out our documentation. If you're ready to move beyond the limitations of your current authorization system, book a 1:1 with one of our engineers.

About the author

Graham Neray

Cofounder and CEO
