Architecture of Oso Cloud

Oso Cloud is a managed authorization service, meaning that you can use it without much regard to how it works internally. That said, it can be useful to understand how Oso Cloud works under the hood — perhaps you want to gain confidence about Oso Cloud's reliability guarantees, or perhaps you're just curious.

We built Oso Cloud to run on the critical path for any application. Its architecture is designed to provide:

  • High availability: the uptime of your app depends on the uptime of its authorization system, so it must never go down.
  • Low latency: every request through your app performs at least one authorization check, and often several (see the sketch after this list). Those checks need to be fast.
  • High throughput: an authorization service needs to scale with the number of actions that your users perform in your app.
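
For instance, a request handler might gate each action on a single check against Oso Cloud. Here is a minimal sketch, assuming the oso-cloud Python SDK's Oso client and its authorize call; the User and Repository types, handler, and credentials are hypothetical, so check the SDK reference for exact signatures:

```python
from oso_cloud import Oso

# Placeholder URL and API key; in production, read these from your environment.
oso = Oso(url="https://cloud.osohq.com", api_key="YOUR_API_KEY")

def get_repo(user_id: str, repo_id: str):
    actor = {"type": "User", "id": user_id}
    resource = {"type": "Repository", "id": repo_id}
    # The one authorization check on this request's critical path.
    if not oso.authorize(actor, "read", resource):
        raise PermissionError("not authorized")
    # ...fetch and return the repository...
```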

We provide SLAs on availability. Meet with an engineer to learn more about our SLAs.

How It Works

[Figure: Architecture of Oso Cloud — events stream to Oso Cloud Edge Nodes, and your apps send redundant requests to Edge Nodes in multiple Availability Zones.]

Oso Cloud deploys nodes at the edge, across dozens of regions and availability zones all over the world. It uses event sourcing for replication: a primary message broker streams updates to Edge Nodes, keeping them in sync with your fact and policy data.
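
The replication model is easiest to see in miniature. The toy below is our illustration, not Oso's implementation: it replays an ordered log of fact changes, and any node that applies the same log in the same order arrives at the same state.

```python
# Toy event-sourced replication: a primary log of fact changes, and edge
# nodes that rebuild identical state by applying the log in order.
log = [
    ("insert", ("has_role", "alice", "admin", "acme")),
    ("insert", ("has_role", "bob", "member", "acme")),
    ("delete", ("has_role", "bob", "member", "acme")),
]

def apply_log(events):
    facts = set()
    for op, fact in events:
        if op == "insert":
            facts.add(fact)
        else:
            facts.discard(fact)
    return facts

# Every node replaying the same log converges on the same fact set.
print(apply_log(log))  # {('has_role', 'alice', 'admin', 'acme')}
```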

High Availability

Oso Cloud has no single point of failure. We spread Edge Nodes across regions and availability zones to protect against network failures in any particular data center and to insulate your app from any one node's downtime, providing high availability. Oso Cloud has already withstood multiple region-wide outages without any customer-visible impact.

You can optionally deploy Oso Fallback Nodes colocated with your application inside your VPC. These run a recent version of the Oso Cloud API and hold a copy of your policy and fact data, providing an extra layer of insurance that you can always perform authorization checks, even if Oso Cloud is unreachable. Fallback Nodes may lag Edge Nodes by up to 30 minutes; they are for disaster scenarios only and are not intended as primary servers for authorization requests.
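
In client terms, this typically means configuring the SDK with both the cloud endpoint and your Fallback Node's address. A sketch, assuming the Python SDK accepts a fallback_url option (the parameter name and internal hostname here are assumptions; check your SDK version):

```python
from oso_cloud import Oso

# fallback_url (an assumed parameter name) points at a Fallback Node in your
# VPC; the client retries against it if Oso Cloud is unreachable.
oso = Oso(
    url="https://cloud.osohq.com",
    api_key="YOUR_API_KEY",
    fallback_url="http://oso-fallback.internal:8080",
)
```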

Performance

Oso Cloud does two things to keep latency low (<10 ms end-to-end). First, we deploy Edge Nodes in your regions, which keeps network latency low (2-5 ms). Second, each Edge Node precomputes indexes and caches queries, keeping in-process response times under 2 ms.
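
Query caching of this kind can be pictured as memoizing check results and invalidating them when replicated data changes. A toy illustration (ours, not Oso's actual caching logic):

```python
# Toy query cache: memoize authorization results and invalidate the cache
# whenever a replication event changes the underlying facts.
cache: dict = {}
facts = {("has_role", "alice", "admin", "acme")}

def authorize(actor: str, action: str, resource: str) -> bool:
    key = (actor, action, resource)
    if key not in cache:
        # Stand-in for real policy evaluation over precomputed indexes.
        cache[key] = ("has_role", actor, "admin", resource) in facts
    return cache[key]

def apply_event(op: str, fact: tuple) -> None:
    (facts.add if op == "insert" else facts.discard)(fact)
    cache.clear()  # any data change invalidates cached query results

print(authorize("alice", "read", "acme"))   # True: computed, then cached
apply_event("delete", ("has_role", "alice", "admin", "acme"))
print(authorize("alice", "read", "acme"))   # False: recomputed after invalidation
```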

On average, each Edge Node can serve ~10k requests per second, and we deploy additional nodes as needed to match your desired throughput. Read throughput therefore scales horizontally with the number of nodes, with no practical upper bound.

Oso Cloud Edge Nodes run in AWS, inside Oso's own VPCs. While we're already in most regions, we can launch nodes in a new region in under 5 minutes.

We offer dedicated Edge Nodes deployed in your region. Meet with an engineer to talk more about custom deployments.

Consistency Model

Oso is eventually consistent by default, but in practice you can typically read your writes immediately. For cases where you need to guarantee strong consistency, Oso provides tunable Consistency Tokens. To learn more, read the page on Oso's Consistency Model.
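
The token pattern is easy to sketch: a write returns a token marking its position in the event stream, and a read that carries the token refuses to run against a replica that hasn't caught up. A toy model (ours, not Oso's API):

```python
from dataclasses import dataclass, field

# Toy model of consistency tokens (illustrative; Oso Cloud's real API differs).
# A token records the log position of a write; a read carrying the token only
# succeeds on replicas that have replicated at least that far.

@dataclass
class Primary:
    log: list = field(default_factory=list)

    def write(self, fact) -> int:
        self.log.append(fact)
        return len(self.log)  # the token: this write's position in the stream

@dataclass
class EdgeNode:
    facts: list = field(default_factory=list)

    def replicate_from(self, primary: Primary) -> None:
        self.facts = list(primary.log)  # catch up with the event stream

    def read(self, token: int) -> list:
        if len(self.facts) < token:
            raise RuntimeError("replica behind token: retry or wait")
        return self.facts

primary, edge = Primary(), EdgeNode()
token = primary.write(("has_role", "alice", "admin", "acme"))
edge.replicate_from(primary)
print(edge.read(token))  # guaranteed to reflect alice's new role
```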

Next Steps

Talk to an Oso Engineer

If you want to learn more about how to model your authorization with Oso Cloud or have any questions, connect with us on Slack. We're happy to help.

Get started with Oso Cloud →