Demonstrating Proof of Possession (DPoP) in OAuth
Unlike traditional bearer tokens, which can be used by anyone who obtains them, DPoP binds tokens to a specific client key, ensuring that only the legitimate holder can use them.
This writeup explains how DPoP works, its advantages over existing mechanisms like mutual TLS, and why its adoption is still emerging despite clear security benefits. It also outlines practical considerations for implementation and highlights scenarios where DPoP is particularly valuable, such as single page applications, mobile clients, and high-assurance APIs.
While DPoP is not yet universal, growing vendor support and evolving standards suggest it will become increasingly important for organisations seeking robust token security.
The problem space
Bearer tokens and replay
Classic OAuth 2.0 bearer tokens have three uncomfortable properties:
They are transferable
If an attacker exfiltrates a bearer token from a browser store, mobile device, proxy log, or misconfigured backend, that token can usually be replayed from any location until it expires.
They are often long lived in some ecosystems
Even if access tokens are short lived, refresh tokens or personal access tokens can live much longer, widening the replay window.
They are easy to integrate and therefore everywhere
Bearer tokens power a large amount of modern web and mobile identity, so any systematic weakness scales with the ecosystem.
Sender constrained tokens try to reduce these risks by ensuring that only the original legitimate client can successfully use the token. Mutual TLS (mTLS) does this at the transport layer by binding the token to a client certificate.
DPoP instead operates at the application layer.
What DPoP does
At a high level, DPoP allows a client to prove that it holds the private key associated with a public key that was used when obtaining a token. The resource server can then check that the same key is being used when the token is presented.
There are three key moving parts.
1. The client key pair
The client generates a public and private key pair, typically using Web Crypto in a browser, a secure enclave on a mobile device, or an operating system key store in a backend. The public key is expressed as a JSON Web Key (JWK).
2. The DPoP proof header
For each HTTP request to the authorisation server or resource server, the client sends a DPoP HTTP header. The value of this header is a JSON Web Token signed with the client private key. It contains a number of claims defined by RFC 9449:
htm – the HTTP method of the request
htu – the HTTP target URI
iat – the time at which the proof was issued
jti – a unique identifier for the proof
ath – a hash of the access token, included when the proof accompanies an access token
The client public key itself is not a claim: it travels in the JOSE header of the proof as a jwk parameter, which is how the server verifies the signature and identifies the key.
Because the proof is signed with the private key and includes the method and URL, an attacker cannot simply reuse the same proof for another endpoint or method, and the server can detect replay via jti and iat.
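The shape of a proof can be sketched in a few lines. This is a minimal illustration, not a production client: a real implementation would sign the JOSE input with the ES256 private key via a JOSE library, whereas here the signature is an explicit placeholder.

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JOSE requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def build_dpop_proof(method: str, url: str, public_jwk: dict) -> str:
    """Assemble a DPoP proof JWT (RFC 9449 shape, signature stubbed).

    The public key travels in the JOSE header as a jwk parameter;
    the claims bind the proof to one HTTP method and target URI.
    """
    header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": public_jwk}
    claims = {
        "htm": method,             # HTTP method of the request
        "htu": url,                # target URI, without query or fragment
        "iat": int(time.time()),   # issued-at, checked for freshness
        "jti": str(uuid.uuid4()),  # unique id, enables replay detection
    }
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(claims).encode()))
    # A real client signs `signing_input` with the ES256 private key;
    # this placeholder stands in for the signature.
    signature = b64url(b"signature-placeholder")
    return signing_input + "." + signature

# Example: a proof for a token request (key coordinates are placeholders).
proof = build_dpop_proof(
    "POST", "https://as.example.com/token",
    {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."},
)
```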
3. Token binding
When the client obtains an access token from the authorisation server, it includes a DPoP proof on the token request. The authorisation server validates the proof and, if successful, binds the issued token to the key described in the proof. The binding is typically represented in a cnf (confirmation) claim within the token, carrying the jkt thumbprint of the client public key.
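A common way to represent the bound key is the RFC 7638 JWK thumbprint: a SHA-256 hash over the key's required members in lexicographic order. A minimal sketch for an EC key:

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638 thumbprint for an EC JWK: hash the required members
    (crv, kty, x, y) serialised in lexicographic order, no whitespace."""
    required = {k: jwk[k] for k in ("crv", "kty", "x", "y")}
    canonical = json.dumps(required, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# The authorisation server can then embed the value in the access token,
# e.g. {"cnf": {"jkt": jwk_thumbprint(client_jwk)}}.
```

The thumbprint is deterministic, so the resource server can recompute it from the key in a later proof and compare it with the token's cnf value.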
Later, when the client calls an API, it presents:
The access token in the Authorization header, using the DPoP authentication scheme rather than Bearer
A fresh DPoP header for that request
The resource server validates both the token and the DPoP proof and ensures that the key bound into the token matches the key used to sign the current proof. If an attacker has only the token but not the private key, the call fails.
DPoP therefore moves tokens away from a bearer model and towards a proof of possession model without requiring full mutual TLS infrastructure.
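The resource server checks described above can be sketched as follows. This assumes the proof signature has already been verified with the key from its JOSE header and the claims decoded; the in-memory replay cache is an illustration only, since production deployments need expiry and, usually, shared state across instances.

```python
import time

seen_jtis = set()  # replay cache; illustrative only, never expires here

def check_dpop_proof(claims: dict, proof_jkt: str, token_cnf_jkt: str,
                     method: str, url: str,
                     max_age: int = 60, skew: int = 5) -> bool:
    """Validate decoded DPoP proof claims against the current request
    and the access token's cnf.jkt key binding."""
    now = time.time()
    if claims["htm"] != method or claims["htu"] != url:
        return False  # proof was bound to another method or endpoint
    if not (now - max_age <= claims["iat"] <= now + skew):
        return False  # proof too old, or issued in the future
    if claims["jti"] in seen_jtis:
        return False  # replayed proof
    if proof_jkt != token_cnf_jkt:
        return False  # token is bound to a different key
    seen_jtis.add(claims["jti"])
    return True
```

Note that the method, URL, and freshness checks run before the jti is recorded, so a rejected proof never pollutes the replay cache.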
Why DPoP is relevant
Stronger security for modern client types
DPoP is particularly attractive in scenarios where mutual TLS is difficult or impractical:
Single page applications in the browser
Browser based applications are high risk from a token theft perspective. Traditional guidance already pushes them towards Proof Key for Code Exchange (PKCE), shorter lived tokens and refresh token rotation. DPoP adds another line of defence by making stolen tokens significantly harder to replay from another context.
Native mobile applications
Managing client certificates at scale on mobile platforms is operationally challenging. DPoP leverages application-level keys and existing crypto APIs, which can be easier to manage while still raising the bar for token misuse.
Device flows and constrained clients
Work is underway to describe how DPoP can secure the OAuth 2.0 device authorisation grant, further constraining tokens issued to devices with limited input capability.
High assurance profiles such as FAPI 2
The OpenID Financial grade API 2.0 security profile references DPoP as one of the mechanisms to achieve sender constrained tokens for financial grade APIs. It treats DPoP and mTLS as complementary options depending on the ecosystem.
Risk reduction and defence in depth
DPoP does not magically protect against all token theft scenarios but it does improve the situation in several ways:
A token leaked from a log or backend cannot easily be reused from a different host because the attacker does not possess the key.
A token intercepted by a malicious reverse proxy is harder to exploit unless the proxy can also generate valid DPoP proofs with the correct key and claims.
Replay of a captured request is constrained by short proof lifetimes, method and URL binding, and nonce or jti replay detection where implemented.
For sectors that handle sensitive data or payments, these properties are increasingly attractive, which is why DPoP is gaining visibility in open banking and high security API discussions.
So, why is DPoP not everywhere yet?
Given the security benefits, it is reasonable to ask why DPoP is not yet ubiquitous. There are several practical reasons.
1. It is still relatively new at standards level
The core specification only became an RFC in September 2023. OAuth and OpenID Connect ecosystems tend to move at enterprise pace. It takes time for:
Authorisation servers to implement and ship DPoP support
SDKs for browsers, mobile and backend stacks to stabilise
API gateways and service meshes to provide built in DPoP validation
We are now seeing major vendors such as Okta, Auth0, Curity and Keycloak deliver DPoP features, but many of these are still tagged as advanced, early access or optional capabilities rather than defaults.
2. Client and key management complexity
DPoP requires clients to handle asymmetric keys correctly:
Generate keys with appropriate algorithms and sizes
Store private keys securely and avoid leakage
Rotate keys and cope with token binding when keys change
Implement proof generation for every protected request, including correct method and URL claims and dealing with clock skew
For browser-based applications, this means relying on Web Crypto APIs and the constraints of the browser security model. For mobile clients, it means using platform specific secure storage. Organisations that have only just adopted PKCE and modern OAuth best practices may see this as additional engineering overhead for incremental benefit.
3. Competing and complementary mechanisms
DPoP is one member of a family of sender constraint mechanisms:
Mutual TLS for client authentication and certificate bound tokens, as defined in RFC 8705
HTTP message signing approaches
Private key JWT based client authentication and token binding patterns
In regulated ecosystems such as open banking, mTLS is already entrenched for organisation to organisation connectivity. In that context, the marginal value of adding DPoP over existing certificate-based binding is sometimes perceived as limited.
For many public clients, PKCE plus short-lived tokens and refresh token rotation already addresses a significant part of the threat model, without needing per request proofs. Some practitioners therefore prioritise PKCE implementation and push DPoP further down the roadmap.
4. Ecosystem and interoperability considerations
To get full value from DPoP, the entire chain must participate correctly:
The authorisation server must accept and validate proofs and embed the correct confirmation data into tokens.
The resource servers and API gateways must validate both the token and the DPoP proof on every call.
Error handling, retry logic, and nonce challenges must be implemented consistently across clients and servers.
Legacy services that sit behind an API gateway may not be aware of DPoP at all, so introducing it requires upgrades or additional components such as sidecars or gateway plugins. There is also a testing and observability cost: engineers need to debug failures where token validation passes but DPoP verification fails.
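The nonce challenge mentioned above follows a simple pattern in RFC 9449: the server rejects the request and supplies a DPoP-Nonce header, and the client retries with a fresh proof that echoes that value in a nonce claim. A client-side sketch, where http_post and make_proof are hypothetical stand-ins for a real HTTP client and proof builder:

```python
def call_with_dpop(http_post, url: str, token: str, make_proof):
    """Call a DPoP-protected endpoint, retrying once if the server
    issues a DPoP-Nonce challenge (RFC 9449)."""
    resp = http_post(url, headers={
        "Authorization": f"DPoP {token}",
        "DPoP": make_proof("POST", url, nonce=None),
    })
    nonce = resp.headers.get("DPoP-Nonce")
    if resp.status_code == 401 and nonce:
        # Retry with a fresh proof that carries the server's nonce claim.
        resp = http_post(url, headers={
            "Authorization": f"DPoP {token}",
            "DPoP": make_proof("POST", url, nonce=nonce),
        })
    return resp
```

Building this retry path into shared client libraries, rather than individual applications, keeps the behaviour consistent across clients, which is exactly the interoperability concern raised above.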
5. Perception of complexity versus benefit
DPoP primarily reduces the replay value of stolen tokens. It does not eliminate other classes of risk:
If an attacker compromises the device or browser where the client key lives, they may gain both the token and the private key.
If the API itself is vulnerable, DPoP does not prevent exploitation of application-level logic flaws.
Because of this, some teams see DPoP as a defence in depth enhancement rather than a foundational requirement. They prioritise investment in areas with a clearer risk reduction story such as modern OAuth profiles, hardened authorisation servers, and robust token management, and plan to revisit DPoP later.
Where DPoP is likely to gain momentum next
Although adoption is not yet universal, several signals suggest that DPoP usage will increase:
FAPI 2 and related high security profiles explicitly reference DPoP as an accepted sender constraint mechanism. As regulators and industry schemes adopt these profiles, DPoP becomes a more natural choice for mobile and browser scenarios where mutual TLS is difficult.
Vendor support is maturing across identity providers, API gateways and SDKs, reducing the barrier to entry. Okta, Auth0, Curity, Keycloak, Kong and others now provide guides, tooling and examples for DPoP.
Security teams are pushing for stronger guarantees in environments where token theft is a realistic threat, such as highly distributed mobile clients and untrusted browser devices.
In practice, you can expect DPoP to appear first in new greenfield APIs and identity projects that already follow OAuth best current practices, particularly where you control both the authorisation server and the protected APIs.
Practical guidance
If you are evaluating DPoP for your own environment, a pragmatic approach is:
Clarify the threat model
Identify where token exfiltration is most plausible, for example browser-based applications that call back end APIs directly, or mobile clients operating in hostile networks.
Assess current controls
Confirm whether you have already implemented PKCE, OAuth 2.1 aligned flows, short lived access tokens, refresh token rotation and strong client authentication where applicable.
Start with one well scoped client and API
Use platform vendor support where available. For example, Okta, Auth0, Curity or Keycloak provide step by step guides to enabling DPoP for specific clients and resource servers.
Invest in observability
Ensure that you log DPoP validation outcomes, reasons for failure, and the relationship between tokens and keys, so that operations teams can troubleshoot effectively.
Plan for gradual rollout
Treat DPoP as an enhancement that you progressively roll out to high risk clients and APIs, rather than an all or nothing change. This reduces integration risk and allows feedback from early adopters.
Conclusion
DPoP is a well-designed answer to a long-standing weakness in OAuth. It provides a standard way to move from pure bearer tokens towards proof of possession tokens without the operational burden of mutual TLS everywhere.
Its relevance is clear in environments where token theft and replay are realistic threats, particularly for single page applications, native mobile clients and open banking style APIs. The reason you are not seeing DPoP everywhere yet is less about the quality of the idea and more about the realities of ecosystem maturity, implementation complexity and competing priorities.
As standards such as FAPI 2 and OAuth security best current practice continue to evolve and as more vendors ship stable DPoP support, the trade-off will shift. For now, forward leaning organisations are already piloting DPoP where it meaningfully strengthens their token security story, while the rest of the industry gradually catches up.