
Rate limiting

AuthRateLimitConfig.from_shared_backend() is the canonical public entrypoint for the common shared-backend recipe. It materializes endpoint-specific EndpointRateLimit values from the private auth slot catalog while keeping manual AuthRateLimitConfig(..., EndpointRateLimit(...)) assembly available as the advanced escape hatch.

Import the builder aliases from litestar_auth.ratelimit when app code annotates shared-backend inputs:

from litestar_auth.ratelimit import AuthRateLimitEndpointGroup, AuthRateLimitEndpointSlot
  • AuthRateLimitEndpointSlot names the per-endpoint keys accepted by enabled, disabled, scope_overrides, namespace_overrides, and endpoint_overrides.
  • AuthRateLimitEndpointGroup names the shared-backend keys accepted by group_backends.

Those aliases are the stable builder identifiers:

| AuthRateLimitEndpointSlot value | AuthRateLimitEndpointGroup value | Default scope | Default namespace token |
| --- | --- | --- | --- |
| login | login | ip_email | login |
| refresh | refresh | ip | refresh |
| register | register | ip | register |
| forgot_password | password_reset | ip_email | forgot-password |
| reset_password | password_reset | ip | reset-password |
| totp_enable | totp | ip | totp-enable |
| totp_confirm_enable | totp | ip | totp-confirm-enable |
| totp_verify | totp | ip | totp-verify |
| totp_disable | totp | ip | totp-disable |
| verify_token | verification | ip | verify-token |
| request_verify_token | verification | ip_email | request-verify-token |

There is no extra preset or namespace-family mode behind those aliases. Use group_backends, scope_overrides, namespace_overrides, disabled, and endpoint_overrides directly when migrating existing key shapes.

Override precedence is:

  1. endpoint_overrides wins per slot and can replace the limiter or set it to None.
  2. Otherwise, only slots enabled by enabled (defaults to all supported slots) and not listed in disabled are generated.
  3. Generated limiters start from backend, then group_backends can swap the backend for the slot's group.
  4. scope_overrides and namespace_overrides adjust the generated limiter for that slot.
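
The precedence above can be sketched with simplified stand-in types. The `Recipe` and `Limiter` classes and the two-entry `RECIPES` catalog below are illustrative placeholders, not the package's private objects:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Recipe:
    """Stand-in for one private catalog entry."""
    slot: str
    group: str
    default_scope: str
    default_namespace: str


@dataclass(frozen=True)
class Limiter:
    """Stand-in for a generated EndpointRateLimit."""
    backend: str
    scope: str
    namespace: str


# Two representative catalog rows; the real catalog covers all eleven slots.
RECIPES = [
    Recipe("login", "login", "ip_email", "login"),
    Recipe("totp_verify", "totp", "ip", "totp-verify"),
]


def resolve(backend, *, enabled=None, disabled=(), group_backends=None,
            scope_overrides=None, namespace_overrides=None, endpoint_overrides=None):
    """Apply the documented precedence order to the stand-in catalog."""
    group_backends = group_backends or {}
    scope_overrides = scope_overrides or {}
    namespace_overrides = namespace_overrides or {}
    endpoint_overrides = endpoint_overrides or {}
    enabled_set = {r.slot for r in RECIPES} if enabled is None else set(enabled)
    out = {}
    for r in RECIPES:
        if r.slot in endpoint_overrides:        # 1. full per-slot replacement wins
            out[r.slot] = endpoint_overrides[r.slot]
            continue
        if r.slot not in enabled_set or r.slot in disabled:  # 2. enable/disable gate
            continue
        out[r.slot] = Limiter(                   # 3. group backend can swap the backend
            backend=group_backends.get(r.group, backend),
            scope=scope_overrides.get(r.slot, r.default_scope),       # 4. per-slot tweaks
            namespace=namespace_overrides.get(r.slot, r.default_namespace),
        )
    return out


limiters = resolve("shared", group_backends={"totp": "redis"},
                   namespace_overrides={"login": "legacy-login"})
```

With these inputs, `totp_verify` picks up the `totp` group backend while `login` keeps the shared backend and only its namespace token changes.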

Those identifiers are the public builder contract. The private recipe objects that store them remain internal implementation details.

litestar_auth.ratelimit

Rate-limiting helpers for authentication endpoints.

Use AuthRateLimitConfig.from_shared_backend() for the common case where a single InMemoryRateLimiter or RedisRateLimiter should back the standard auth endpoint set. Keep manual AuthRateLimitConfig(...) plus EndpointRateLimit(...) assembly for advanced cases that need fully custom per-endpoint wiring.

Examples:

Build the canonical shared-backend recipe:

from litestar_auth.ratelimit import AuthRateLimitConfig, RedisRateLimiter

rate_limit_config = AuthRateLimitConfig.from_shared_backend(
    RedisRateLimiter(redis=redis_client, max_attempts=5, window_seconds=60),
)

AuthRateLimitEndpointGroup = Literal['login', 'password_reset', 'refresh', 'register', 'totp', 'verification']

AuthRateLimitEndpointSlot = Literal['login', 'refresh', 'register', 'forgot_password', 'reset_password', 'totp_enable', 'totp_confirm_enable', 'totp_verify', 'totp_disable', 'verify_token', 'request_verify_token']

RateLimitScope = Literal['ip', 'ip_email']

TotpSensitiveEndpoint = Literal['enable', 'confirm_enable', 'verify', 'disable']

AuthRateLimitConfig(login=None, refresh=None, register=None, forgot_password=None, reset_password=None, totp_enable=None, totp_confirm_enable=None, totp_verify=None, totp_disable=None, verify_token=None, request_verify_token=None) dataclass

Optional rate-limit rules for auth-related endpoints.

from_shared_backend(backend, *, enabled=None, disabled=(), group_backends=None, scope_overrides=None, namespace_overrides=None, endpoint_overrides=None, trusted_proxy=False, identity_fields=_DEFAULT_IDENTITY_FIELDS, trusted_headers=_DEFAULT_TRUSTED_HEADERS) classmethod

Build endpoint-specific limiters from the package-owned shared-backend recipe.

The builder uses the private endpoint catalog for default scopes and namespace tokens, then applies override precedence in this order:

  1. backend for every enabled slot
  2. group_backends for slot groups such as totp or verification
  3. scope_overrides / namespace_overrides for slot-specific tweaks
  4. endpoint_overrides for full slot replacement or explicit None disablement

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| backend | RateLimiterBackend | Default limiter backend for enabled auth slots. | required |
| enabled | Iterable[AuthRateLimitEndpointSlot] \| None | Optional auth slot names to build. Defaults to all supported slots. | None |
| disabled | Iterable[AuthRateLimitEndpointSlot] | Auth slot names to leave unset, even when they would otherwise be enabled. | () |
| group_backends | Mapping[AuthRateLimitEndpointGroup, RateLimiterBackend] \| None | Optional backend overrides keyed by auth slot group: login, refresh, register, password_reset, totp, or verification. | None |
| scope_overrides | Mapping[AuthRateLimitEndpointSlot, RateLimitScope] \| None | Optional per-slot scope overrides to preserve existing key behavior. | None |
| namespace_overrides | Mapping[AuthRateLimitEndpointSlot, str] \| None | Optional per-slot namespace tokens to preserve existing key names. | None |
| endpoint_overrides | Mapping[AuthRateLimitEndpointSlot, EndpointRateLimit \| None] \| None | Optional full per-slot replacements. None disables a slot. | None |
| trusted_proxy | bool | Shared trusted-proxy setting applied to generated limiters. | False |
| identity_fields | tuple[str, ...] | Shared request body identity fields applied to generated limiters. | _DEFAULT_IDENTITY_FIELDS |
| trusted_headers | tuple[str, ...] | Shared trusted proxy header names applied to generated limiters. | _DEFAULT_TRUSTED_HEADERS |

Returns:

| Type | Description |
| --- | --- |
| Self | New config populated from the shared-backend builder inputs. |

Source code in litestar_auth/ratelimit/_config.py
@classmethod
def from_shared_backend(  # noqa: PLR0913
    cls,
    backend: RateLimiterBackend,
    *,
    enabled: Iterable[AuthRateLimitEndpointSlot] | None = None,
    disabled: Iterable[AuthRateLimitEndpointSlot] = (),
    group_backends: Mapping[AuthRateLimitEndpointGroup, RateLimiterBackend] | None = None,
    scope_overrides: Mapping[AuthRateLimitEndpointSlot, RateLimitScope] | None = None,
    namespace_overrides: Mapping[AuthRateLimitEndpointSlot, str] | None = None,
    endpoint_overrides: Mapping[AuthRateLimitEndpointSlot, EndpointRateLimit | None] | None = None,
    trusted_proxy: bool = False,
    identity_fields: tuple[str, ...] = _DEFAULT_IDENTITY_FIELDS,
    trusted_headers: tuple[str, ...] = _DEFAULT_TRUSTED_HEADERS,
) -> Self:
    """Build endpoint-specific limiters from the package-owned shared-backend recipe.

    The builder uses the private endpoint catalog for default scopes and namespace
    tokens, then applies override precedence in this order:

    1. ``backend`` for every enabled slot
    2. ``group_backends`` for slot groups such as ``totp`` or ``verification``
    3. ``scope_overrides`` / ``namespace_overrides`` for slot-specific tweaks
    4. ``endpoint_overrides`` for full slot replacement or explicit ``None`` disablement

    Args:
        backend: Default limiter backend for enabled auth slots.
        enabled: Optional auth slot names to build. Defaults to all supported slots.
        disabled: Auth slot names to leave unset, even when they would otherwise be enabled.
        group_backends: Optional backend overrides keyed by auth slot group:
            ``login``, ``refresh``, ``register``, ``password_reset``, ``totp``, or
            ``verification``.
        scope_overrides: Optional per-slot scope overrides to preserve existing key behavior.
        namespace_overrides: Optional per-slot namespace tokens to preserve existing key names.
        endpoint_overrides: Optional full per-slot replacements. ``None`` disables a slot.
        trusted_proxy: Shared trusted-proxy setting applied to generated limiters.
        identity_fields: Shared request body identity fields applied to generated limiters.
        trusted_headers: Shared trusted proxy header names applied to generated limiters.

    Returns:
        New config populated from the shared-backend builder inputs.
    """
    group_backend_map: dict[AuthRateLimitEndpointGroup, RateLimiterBackend] = dict(group_backends or {})
    scope_override_map: dict[AuthRateLimitEndpointSlot, RateLimitScope] = dict(scope_overrides or {})
    namespace_override_map: dict[AuthRateLimitEndpointSlot, str] = dict(namespace_overrides or {})
    endpoint_override_map: dict[AuthRateLimitEndpointSlot, EndpointRateLimit | None] = dict(
        endpoint_overrides or {},
    )
    enabled_slots: frozenset[AuthRateLimitEndpointSlot] = (
        _AUTH_RATE_LIMIT_ENDPOINT_SLOT_SET if enabled is None else frozenset(enabled)
    )
    disabled_slots: frozenset[AuthRateLimitEndpointSlot] = frozenset(disabled)

    _validate_builder_names(
        enabled_slots,
        allowed=_AUTH_RATE_LIMIT_ENDPOINT_SLOT_SET,
        parameter_name="enabled",
        item_name="auth rate-limit slots",
    )
    _validate_builder_names(
        disabled_slots,
        allowed=_AUTH_RATE_LIMIT_ENDPOINT_SLOT_SET,
        parameter_name="disabled",
        item_name="auth rate-limit slots",
    )
    _validate_builder_names(
        group_backend_map,
        allowed=_AUTH_RATE_LIMIT_ENDPOINT_GROUPS,
        parameter_name="group_backends",
        item_name="auth rate-limit groups",
    )
    _validate_builder_names(
        scope_override_map,
        allowed=_AUTH_RATE_LIMIT_ENDPOINT_SLOT_SET,
        parameter_name="scope_overrides",
        item_name="auth rate-limit slots",
    )
    _validate_builder_names(
        namespace_override_map,
        allowed=_AUTH_RATE_LIMIT_ENDPOINT_SLOT_SET,
        parameter_name="namespace_overrides",
        item_name="auth rate-limit slots",
    )
    _validate_builder_names(
        endpoint_override_map,
        allowed=_AUTH_RATE_LIMIT_ENDPOINT_SLOT_SET,
        parameter_name="endpoint_overrides",
        item_name="auth rate-limit slots",
    )

    config_kwargs: dict[AuthRateLimitEndpointSlot, EndpointRateLimit | None] = {}

    for recipe in _AUTH_RATE_LIMIT_ENDPOINT_RECIPES:
        slot_override = endpoint_override_map.get(recipe.slot, _MISSING_OVERRIDE)
        if slot_override is not _MISSING_OVERRIDE:
            config_kwargs[recipe.slot] = cast("EndpointRateLimit | None", slot_override)
            continue

        if recipe.slot not in enabled_slots or recipe.slot in disabled_slots:
            continue

        config_kwargs[recipe.slot] = EndpointRateLimit(
            backend=group_backend_map.get(recipe.group, backend),
            scope=scope_override_map.get(recipe.slot, recipe.default_scope),
            namespace=namespace_override_map.get(recipe.slot, recipe.default_namespace),
            trusted_proxy=trusted_proxy,
            identity_fields=identity_fields,
            trusted_headers=trusted_headers,
        )

    return cls(**cast("Any", config_kwargs))

EndpointRateLimit(backend, scope, namespace, trusted_proxy=False, identity_fields=_DEFAULT_IDENTITY_FIELDS, trusted_headers=_DEFAULT_TRUSTED_HEADERS) dataclass

Per-endpoint rate-limit settings and request hook.

before_request(request) async

Reject the request with 429 when its key is over the configured limit.

Security

Only set trusted_proxy=True when this service is behind a trusted proxy or load balancer that overwrites client IP headers. Otherwise, attackers can spoof headers like X-Forwarded-For and evade or poison rate-limiting keys.
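
The risk can be made concrete with a small sketch. The `client_host` helper below is hypothetical (the package's own extraction lives in a private `_client_host` function), but it mirrors the documented behavior: forwarded headers are only consulted when the deployment declares a trusted proxy.

```python
def client_host(socket_ip: str, headers: dict[str, str], *,
                trusted_proxy: bool,
                trusted_headers: tuple[str, ...] = ("x-forwarded-for",)) -> str:
    """Return the client address used in rate-limit keys.

    Only consult forwarded headers when a trusted proxy is known to
    overwrite them; otherwise the socket peer address is authoritative.
    """
    if trusted_proxy:
        for name in trusted_headers:
            value = headers.get(name)
            if value:
                # Leftmost entry is the original client as reported by the proxy chain.
                return value.split(",")[0].strip()
    return socket_ip


# With trusted_proxy=False, a spoofed header cannot move the rate-limit key:
spoofed = client_host("10.0.0.9", {"x-forwarded-for": "1.2.3.4"}, trusted_proxy=False)
# Behind a trusted proxy, the forwarded client address is used instead:
proxied = client_host("10.0.0.9", {"x-forwarded-for": "1.2.3.4, 10.0.0.9"}, trusted_proxy=True)
```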

Raises:

| Type | Description |
| --- | --- |
| TooManyRequestsException | If the request exceeded the configured limit. |

Source code in litestar_auth/ratelimit/_config.py
async def before_request(self, request: Request[Any, Any, Any]) -> None:
    """Reject the request with 429 when its key is over the configured limit.

    Security:
        Only set ``trusted_proxy=True`` when this service is behind a trusted
        proxy or load balancer that overwrites client IP headers. Otherwise,
        attackers can spoof headers like ``X-Forwarded-For`` and evade or
        poison rate-limiting keys.

    Raises:
        TooManyRequestsException: If the request exceeded the configured limit.
    """
    key = await self.build_key(request)
    if await self.backend.check(key):
        return

    retry_after = await self.backend.retry_after(key)
    logger.warning(
        "Rate limit exceeded",
        extra={
            "event": "rate_limit_triggered",
            "namespace": self.namespace,
            "scope": self.scope,
            "trusted_proxy": self.trusted_proxy,
        },
    )
    msg = "Too many requests."
    raise TooManyRequestsException(
        detail=msg,
        headers={"Retry-After": str(max(retry_after, 1))},
    )

build_key(request) async

Build the backend key for the given request.

Returns:

| Type | Description |
| --- | --- |
| str | Namespaced rate-limit key for the request. |

Source code in litestar_auth/ratelimit/_config.py
async def build_key(self, request: Request[Any, Any, Any]) -> str:
    """Build the backend key for the given request.

    Returns:
        Namespaced rate-limit key for the request.
    """
    host = _client_host(request, trusted_proxy=self.trusted_proxy, trusted_headers=self.trusted_headers)
    parts = [self.namespace, _safe_key_part(host)]
    if self.scope == "ip_email":
        email = await _extract_email(request, identity_fields=self.identity_fields)
        if email:
            parts.append(_safe_key_part(email.strip().casefold()))

    return ":".join(parts)

increment(request) async

Record a failed or rate-limited attempt for the current request.

Source code in litestar_auth/ratelimit/_config.py
async def increment(self, request: Request[Any, Any, Any]) -> None:
    """Record a failed or rate-limited attempt for the current request."""
    await self.backend.increment(await self.build_key(request))

reset(request) async

Clear stored attempts for the current request key.

Source code in litestar_auth/ratelimit/_config.py
async def reset(self, request: Request[Any, Any, Any]) -> None:
    """Clear stored attempts for the current request key."""
    await self.backend.reset(await self.build_key(request))

InMemoryRateLimiter(*, max_attempts, window_seconds, max_keys=100000, sweep_interval=1000, clock=time.monotonic)

Async-safe in-memory sliding-window rate limiter.

Not safe for multi-process or multi-host deployments; use RedisRateLimiter for shared storage (e.g. multi-worker or multi-pod).
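
The sliding-window mechanics can be illustrated with a stripped-down, single-process sketch that omits the real class's locking, key eviction, and periodic sweeping:

```python
from collections import deque


class MiniWindow:
    """Minimal sliding window: keep per-key timestamps inside the window,
    allow a new attempt while fewer than max_attempts remain."""

    def __init__(self, max_attempts: int, window_seconds: float) -> None:
        self.max_attempts = max_attempts
        self.window_seconds = window_seconds
        self.hits: dict[str, deque] = {}

    def _prune(self, key: str, now: float):
        """Drop timestamps that have left the window; return the deque or None."""
        q = self.hits.get(key)
        if q is None:
            return None
        cutoff = now - self.window_seconds
        while q and q[0] <= cutoff:
            q.popleft()
        return q

    def check(self, key: str, now: float) -> bool:
        q = self._prune(key, now)
        return q is None or len(q) < self.max_attempts

    def increment(self, key: str, now: float) -> None:
        self.hits.setdefault(key, deque()).append(now)


w = MiniWindow(max_attempts=2, window_seconds=60)
w.increment("k", now=0.0)
w.increment("k", now=1.0)
allowed_at_30 = w.check("k", now=30.0)  # both hits still inside the window
allowed_at_70 = w.check("k", now=70.0)  # hits at t=0 and t=1 have expired
```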

Store the limiter configuration and request counters.

Raises:

| Type | Description |
| --- | --- |
| ValueError | If any limiter or storage configuration is invalid. |

Source code in litestar_auth/ratelimit/_memory.py
def __init__(
    self,
    *,
    max_attempts: int,
    window_seconds: float,
    max_keys: int = 100_000,
    sweep_interval: int = 1_000,
    clock: Callable[[], float] = time.monotonic,
) -> None:
    """Store the limiter configuration and request counters.

    Raises:
        ValueError: If any limiter or storage configuration is invalid.
    """
    _validate_configuration(max_attempts=max_attempts, window_seconds=window_seconds)
    if max_keys < 1:
        msg = "max_keys must be at least 1"
        raise ValueError(msg)
    if sweep_interval < 1:
        msg = "sweep_interval must be at least 1"
        raise ValueError(msg)

    self.max_attempts = max_attempts
    self.window_seconds = window_seconds
    self.max_keys = max_keys
    self.sweep_interval = sweep_interval
    self._clock = clock
    self._lock = asyncio.Lock()
    self._windows: dict[str, SlidingWindow] = {}
    self._operation_count = 0

is_shared_across_workers property

In-memory counters are process-local and not shared across workers.

check(key) async

Return whether key can perform another attempt.

Source code in litestar_auth/ratelimit/_memory.py
async def check(self, key: str) -> bool:
    """Return whether ``key`` can perform another attempt."""
    async with self._lock:
        now = self._clock()
        self._maybe_sweep(now)
        timestamps = self._prune(key, now)
        if timestamps is None:
            return True

        return len(timestamps) < self.max_attempts

increment(key) async

Record a new attempt for key in the current window.

Source code in litestar_auth/ratelimit/_memory.py
async def increment(self, key: str) -> None:
    """Record a new attempt for ``key`` in the current window."""
    async with self._lock:
        now = self._clock()
        self._maybe_sweep(now)
        timestamps = self._prune(key, now)
        if timestamps is None:
            self._evict_oldest_keys()
            timestamps = deque()
            self._windows[key] = timestamps

        timestamps.append(now)

reset(key) async

Clear the in-memory counter for key.

Source code in litestar_auth/ratelimit/_memory.py
async def reset(self, key: str) -> None:
    """Clear the in-memory counter for ``key``."""
    async with self._lock:
        self._windows.pop(key, None)

retry_after(key) async

Return the remaining block duration for key in whole seconds.

Source code in litestar_auth/ratelimit/_memory.py
async def retry_after(self, key: str) -> int:
    """Return the remaining block duration for ``key`` in whole seconds."""
    async with self._lock:
        now = self._clock()
        timestamps = self._prune(key, now)
        if timestamps is None or len(timestamps) < self.max_attempts:
            return 0

        oldest_timestamp = timestamps[0]
        remaining = self.window_seconds - (now - oldest_timestamp)
        return max(math.ceil(remaining), 1)
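
The arithmetic in the source above works out as follows for a concrete case:

```python
import math

# With max_attempts=3 and window_seconds=60, suppose attempts landed at
# t=0, 5, and 10, so the key is full. At now=30 the oldest attempt (t=0)
# leaves the window at t=60, so the caller must wait 30 more seconds.
window_seconds = 60.0
now = 30.0
oldest_timestamp = 0.0
retry_after = max(math.ceil(window_seconds - (now - oldest_timestamp)), 1)
```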

RedisRateLimiter(*, redis, max_attempts, window_seconds, key_prefix=DEFAULT_KEY_PREFIX, clock=time.time)

Redis-backed sliding-window rate limiter implemented with a sorted set.
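
The sorted-set approach can be emulated in plain Python to show conceptually what the Lua scripts do; this is an illustration of the prune-then-count pattern, not the package's actual scripts:

```python
def zset_check(zset: dict[str, float], now: float, window: float,
               max_attempts: int) -> bool:
    """Emulate ZREMRANGEBYSCORE key -inf (now - window), then ZCARD key.

    zset maps member -> score, where the score is the attempt timestamp.
    """
    cutoff = now - window
    for member in [m for m, score in zset.items() if score <= cutoff]:
        del zset[member]
    return len(zset) < max_attempts


def zset_increment(zset: dict[str, float], now: float) -> None:
    """Emulate ZADD key now <unique-member>.

    A unique member keeps concurrent attempts at the same timestamp distinct.
    """
    zset[f"{now:.9f}:{len(zset)}"] = now


attempts: dict[str, float] = {}
for t in (0.0, 1.0, 2.0):
    zset_increment(attempts, t)

allowed_at_30 = zset_check(attempts, now=30.0, window=60.0, max_attempts=3)  # window full
allowed_at_70 = zset_check(attempts, now=70.0, window=60.0, max_attempts=3)  # stale entries pruned
```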

Store the Redis client and shared rate-limiter configuration.

Source code in litestar_auth/ratelimit/_redis.py
def __init__(
    self,
    *,
    redis: RedisClientProtocol,
    max_attempts: int,
    window_seconds: float,
    key_prefix: str = DEFAULT_KEY_PREFIX,
    clock: Callable[[], float] = time.time,
) -> None:
    """Store the Redis client and shared rate-limiter configuration."""
    _load_package_redis_asyncio()
    _validate_configuration(max_attempts=max_attempts, window_seconds=window_seconds)

    self.redis = redis
    self.max_attempts = max_attempts
    self.window_seconds = window_seconds
    self.key_prefix = key_prefix
    self._clock = clock

is_shared_across_workers property

Redis-backed counters are shared across workers using the same Redis.

check(key) async

Return whether key can perform another attempt.

Source code in litestar_auth/ratelimit/_redis.py
async def check(self, key: str) -> bool:
    """Return whether ``key`` can perform another attempt."""
    count = self._decode_integer(
        await self._eval(
            self._CHECK_SCRIPT,
            key,
            self._clock(),
            self.window_seconds,
            self.max_attempts,
        ),
    )
    return count < self.max_attempts

increment(key) async

Record a new attempt for key atomically in Redis.

Source code in litestar_auth/ratelimit/_redis.py
async def increment(self, key: str) -> None:
    """Record a new attempt for ``key`` atomically in Redis."""
    now = self._clock()
    await self._eval(
        self._INCREMENT_SCRIPT,
        key,
        now,
        self.window_seconds,
        f"{now:.9f}:{uuid4().hex}",
        self._ttl_seconds,
    )

reset(key) async

Delete the Redis sorted set for key.

Source code in litestar_auth/ratelimit/_redis.py
async def reset(self, key: str) -> None:
    """Delete the Redis sorted set for ``key``."""
    await self.redis.delete(self._key(key))

retry_after(key) async

Return the remaining block duration for key in whole seconds.

Source code in litestar_auth/ratelimit/_redis.py
async def retry_after(self, key: str) -> int:
    """Return the remaining block duration for ``key`` in whole seconds."""
    return max(
        self._decode_integer(
            await self._eval(
                self._RETRY_AFTER_SCRIPT,
                key,
                self._clock(),
                self.window_seconds,
                self.max_attempts,
            ),
        ),
        0,
    )

RateLimiterBackend

Bases: Protocol

Protocol shared by rate-limiter backends.

is_shared_across_workers property

Return whether backend state is shared across worker processes.

check(key) async

Return whether another attempt is allowed for key.

Source code in litestar_auth/ratelimit/_protocol.py
async def check(self, key: str) -> bool:
    """Return whether another attempt is allowed for ``key``."""

increment(key) async

Record an attempt for key.

Source code in litestar_auth/ratelimit/_protocol.py
async def increment(self, key: str) -> None:
    """Record an attempt for ``key``."""

reset(key) async

Clear tracked attempts for key.

Source code in litestar_auth/ratelimit/_protocol.py
async def reset(self, key: str) -> None:
    """Clear tracked attempts for ``key``."""

retry_after(key) async

Return the number of seconds until key can try again.

Source code in litestar_auth/ratelimit/_protocol.py
async def retry_after(self, key: str) -> int:
    """Return the number of seconds until ``key`` can try again."""

TotpRateLimitOrchestrator(enable=None, confirm_enable=None, verify=None, disable=None, _ACCOUNT_STATE_RESET_ENDPOINTS=frozenset({'verify'})) dataclass

Orchestrate TOTP endpoint rate-limit behavior with explicit semantics.

External behavior stays unchanged:

  • verify uses before-request checks, increments on invalid attempts, and resets on success/account-state failures.
  • enable and disable do not consume verify counters.

Endpoints that should reset on account-state failures are listed in _ACCOUNT_STATE_RESET_ENDPOINTS (currently only verify).
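
The verify event sequence can be sketched end to end with a stub limiter. The names here (`StubLimiter`, `verify_flow`) are hypothetical and only illustrate the documented semantics: check before the request, increment on an invalid code, reset on success.

```python
import asyncio


class StubLimiter:
    """Records the orchestrator events instead of touching a backend."""

    def __init__(self) -> None:
        self.events: list[str] = []

    async def before_request(self, request) -> None:
        self.events.append("check")

    async def increment(self, request) -> None:
        self.events.append("increment")

    async def reset(self, request) -> None:
        self.events.append("reset")


async def verify_flow(limiter: StubLimiter, code_is_valid: bool, request=None) -> None:
    """Mirror the documented verify semantics for one request."""
    await limiter.before_request(request)
    if code_is_valid:
        await limiter.reset(request)       # success clears the counter
    else:
        await limiter.increment(request)   # invalid attempt is recorded


limiter = StubLimiter()
asyncio.run(verify_flow(limiter, code_is_valid=False))
asyncio.run(verify_flow(limiter, code_is_valid=True))
```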

before_request(endpoint, request) async

Run endpoint-specific before-request checks.

Source code in litestar_auth/ratelimit/_orchestrator.py
async def before_request(self, endpoint: TotpSensitiveEndpoint, request: Request[Any, Any, Any]) -> None:
    """Run endpoint-specific before-request checks."""
    if limiter := self._limiters.get(endpoint):
        await limiter.before_request(request)

on_account_state_failure(endpoint, request) async

Apply endpoint-specific account-state failure behavior.

Source code in litestar_auth/ratelimit/_orchestrator.py
async def on_account_state_failure(self, endpoint: TotpSensitiveEndpoint, request: Request[Any, Any, Any]) -> None:
    """Apply endpoint-specific account-state failure behavior."""
    if endpoint in self._ACCOUNT_STATE_RESET_ENDPOINTS and (limiter := self._limiters.get(endpoint)):
        await limiter.reset(request)

on_invalid_attempt(endpoint, request) async

Record endpoint-specific invalid attempt failures.

Source code in litestar_auth/ratelimit/_orchestrator.py
async def on_invalid_attempt(self, endpoint: TotpSensitiveEndpoint, request: Request[Any, Any, Any]) -> None:
    """Record endpoint-specific invalid attempt failures."""
    if limiter := self._limiters.get(endpoint):
        await limiter.increment(request)

on_success(endpoint, request) async

Apply endpoint-specific success behavior.

Source code in litestar_auth/ratelimit/_orchestrator.py
async def on_success(self, endpoint: TotpSensitiveEndpoint, request: Request[Any, Any, Any]) -> None:
    """Apply endpoint-specific success behavior."""
    if limiter := self._limiters.get(endpoint):
        await limiter.reset(request)