Oso Cloud is authorization as a service: it centralizes your authorization logic and exposes APIs for answering authorization questions.

How It Works

  1. Define authorization policies in Polar, our DSL for expressing permissions logic
  2. Store policies and authorization data, called facts, in Oso Cloud
  3. Make API calls from your app with our client SDKs when you need to answer authorization questions, such as "can user X view resource Z?" or "which resources can user X edit?"
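Steps 2 and 3 can be illustrated with a toy, in-memory sketch. A real deployment stores facts in Oso Cloud and calls the API through a client SDK; the fact shapes and rule here are invented for illustration only:

```python
# Facts are authorization-relevant data, modeled here as tuples:
# (predicate, *args). A tiny in-memory set stands in for Oso Cloud.
facts = {
    ("has_role", "alice", "editor", "repo:acme"),
    ("has_role", "bob", "viewer", "repo:acme"),
}

# Illustrative rule: editors can view and edit; viewers can only view.
ROLE_PERMISSIONS = {"editor": {"view", "edit"}, "viewer": {"view"}}

def authorize(user: str, action: str, resource: str) -> bool:
    """Answer 'can user X do action A on resource Z?'"""
    return any(
        action in ROLE_PERMISSIONS[role]
        for (pred, u, role, r) in facts
        if pred == "has_role" and u == user and r == resource
    )

def list_resources(user: str, action: str) -> set[str]:
    """Answer 'which resources can user X do action A on?'"""
    return {
        r
        for (pred, u, role, r) in facts
        if pred == "has_role" and u == user and action in ROLE_PERMISSIONS[role]
    }

print(authorize("alice", "edit", "repo:acme"))  # True
print(authorize("bob", "edit", "repo:acme"))    # False
print(list_resources("bob", "view"))            # {'repo:acme'}
```

In a real application, the rule logic lives in your Polar policy and the evaluation happens inside Oso Cloud; your app only asks the questions.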

Writing Policies in Polar

Oso Cloud’s policy language, Polar, is designed for expressing arbitrarily complex and granular application authorization logic. Polar can express any model, including RBAC, ReBAC, ABAC, fine-grained authorization, organizational hierarchies, and more.
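For example, a minimal role-based policy in Polar might look like this (the resource, role, and permission names are illustrative):

```polar
actor User { }

resource Organization {
  roles = ["member", "admin"];
  permissions = ["read", "write"];

  # Members can read; admins can write and inherit everything members can do.
  "read" if "member";
  "write" if "admin";
  "member" if "admin";
}
```

The same resource-block structure extends naturally to relationship-based rules (ReBAC) and attribute conditions (ABAC).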

Data Management

Facts are a prescriptive data model for authorization-relevant data. There are multiple ways to provide facts to Oso Cloud. Which one you choose depends on your use case and application architecture.
Data that is used exclusively or extensively for authorization, like users, roles, and permissions, is best stored in Oso Cloud as facts. Oso Sync provides a robust mechanism for synchronizing data stores and detecting drift.
Oso Cloud supports centralizing your data, but doesn’t require it. Local authorization is our unique approach to minimizing data synchronization and data transfer. When you make a request to the Oso Cloud API, you receive database logic that can be executed against your local data store to finish the authorization evaluation.
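Conceptually, local authorization turns a check into a filter your database can evaluate. A hedged sketch of the idea, where the WHERE-clause fragment, table, and IDs are invented for illustration (in reality the client SDK fetches the filter from the Oso Cloud API):

```python
import sqlite3

# Pretend Oso Cloud answered "which repos can this user read?" with this
# database logic; your app never computes it itself.
authorized_filter = "id IN ('acme', 'anvil')"

# A stand-in for your local data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repos (id TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO repos VALUES (?, ?)",
    [("acme", "Acme"), ("anvil", "Anvil"), ("secret", "Secret")],
)

# Finish the authorization evaluation locally: only authorized rows return.
rows = conn.execute(f"SELECT name FROM repos WHERE {authorized_filter}").fetchall()
print(sorted(name for (name,) in rows))  # ['Acme', 'Anvil']
```

Because the filter runs in your own database, no application data has to be copied into Oso Cloud for this query to work.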
Oso Cloud also supports providing data at request time, called context facts. This is useful when you have data that is not used regularly for authorization, but is needed to evaluate the request.
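A sketch of how context facts behave: request-time facts are merged with stored facts for the duration of a single check, then discarded. The fact shapes here are illustrative, not the SDK's wire format:

```python
# Stored facts live in Oso Cloud; context facts arrive with the request.
stored_facts = {("has_role", "alice", "member", "org:acme")}

def authorize_with_context(user, action, resource, context_facts=()):
    # Merge request-time facts with stored ones for this check only.
    all_facts = stored_facts | set(context_facts)
    # Illustrative rule: members may read, but only if the org is active.
    is_member = ("has_role", user, "member", resource) in all_facts
    is_active = ("is_active", resource) in all_facts
    return action == "read" and is_member and is_active

# Without the context fact the check fails; supplying it at request
# time makes the same check pass, with nothing persisted.
print(authorize_with_context("alice", "read", "org:acme"))  # False
print(authorize_with_context(
    "alice", "read", "org:acme",
    [("is_active", "org:acme")],
))  # True
```

This keeps rarely-used data out of Oso Cloud while still letting the policy reason over it.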

Evaluating Authorization Requests

Oso Cloud can return booleans, a list of resources, a list of permissions, or even database logic that can be used to complete an authorization decision. It all depends on your application requirements.

Architecture of Oso Cloud

Oso Cloud is a managed authorization service built to operate safely on the critical path of your application. Its architecture is designed to guarantee:
  1. High availability: Authorization cannot go down when your app is up. Oso Cloud is built for continuous uptime and fault tolerance.
  2. Low latency: Every request may involve one or more authorization checks. Each must complete in single-digit milliseconds.
  3. High throughput: Authorization must scale with user activity, handling thousands of checks per second across services.
While you don’t need to manage its internals, understanding how Oso Cloud is built helps explain its reliability, consistency, and performance guarantees.

[Diagram: events stream to Oso Cloud Edge Nodes, and your apps send redundant requests to Edge Nodes in multiple Availability Zones.]

Oso Cloud deploys nodes at the edge, across dozens of regions and availability zones all over the world. It uses event sourcing for replication: a primary message broker streams updates to Edge Nodes, keeping them in sync with your policy and facts.

High Availability

Oso Cloud has no single point of failure. We spread Edge Nodes across regions and availability zones to protect against network failures in any particular data center and to insulate your app from any one node’s downtime. Oso Cloud has already withstood multiple region-wide outages without any customer-visible impact, and we provide SLAs on availability. The Datadog chart below shows our measured production availability over the calendar year from Jan 30th, 2024 to Jan 29th, 2025: 99.99% and up across many regions.

[Datadog chart: one-year availability across many regions, 99.99% and up.]
The availability test runs in each of 12 locations, every minute, with three assertions per run.

[Screenshot: availability test configuration across 12 locations.]
You can optionally deploy Oso Fallback Nodes colocated with your application inside your VPC. These run a recent version of the Oso Cloud API and contain a copy of your rules and data, providing an extra layer of insurance: you can always make authorization checks, even if Oso Cloud is unreachable. Fallback Nodes may lag Edge Nodes by up to 30 minutes. They are for disaster scenarios only and are not intended to be a primary server for authorization requests.

Performance

Oso Cloud optimizes for low end-to-end latency (< 10 ms) in two key ways. First, we deploy Edge Nodes in your regions, which keeps network latency low (2–5 ms). Second, Edge Nodes precompute indexes and cache queries, keeping in-process response times under 2 ms for most requests. Edge Nodes also scale dynamically to handle high request volumes: our system auto-scales to match your throughput needs, ensuring consistent performance. Oso Cloud Edge Nodes run in AWS within Oso’s own VPCs. While we operate in most AWS regions, we can deploy nodes in a new region within minutes.