
> In Keycloak nothing made sense to me until I got myself familiar with OAuth 2.0 and OpenID Connect.

Hot take: OAuth2 is a really shitty protocol. It is one of those technologies that gets a lot of good press because it lets you do things in a standardized manner that you otherwise couldn't without resorting to abysmal alternatives (SAML in this case). And because of that it shines in comparison. But looked at from a secure-protocol-design perspective, it is riddled with accidental complexity that produces unnecessary footguns.

The main culprit is the decision to transfer security-critical data over URLs. IIUC this was done to reduce state on the involved servers, but that advantage has completely vanished if you follow today's best practices of using the PKCE, state, and nonce parameters (together with the authorization code flow). And more than half of the attacks you need to prevent or mitigate with the modern extensions to the original OAuth concepts are only possible because grabbing data from URLs is so easy:

- An attacker can trick you into using a malicious redirect URL? Lock down the possible redirects with an explicitly managed URL allow-list.

- URLs can be cached and later accessed by malicious parties? Don't transmit the main secret (the bearer token) via URL parameters; instead transmit an authorization code which can be exchanged exactly once for the real bearer token.

- A malicious app can register your URL scheme with your smartphone OS? Add PKCE, backed by server-side state, to prove that the second request really comes from the same party as the first.
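To make the PKCE part concrete, here is a minimal sketch in Go of how a client derives the values per RFC 7636's S256 method (the function names are mine, not from any library):

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// newVerifier returns a random, URL-safe PKCE code_verifier
// the client keeps to itself until the token exchange.
func newVerifier() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

// challengeS256 derives the code_challenge sent in the authorization
// request: BASE64URL(SHA256(code_verifier)), per RFC 7636.
func challengeS256(verifier string) string {
	sum := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}

func main() {
	v, _ := newVerifier()
	fmt.Println("code_verifier: ", v)
	fmt.Println("code_challenge:", challengeS256(v))
}
```

The point is that only the one-way-hashed challenge ever travels through the URL; a malicious app that intercepts the redirect cannot produce the verifier at the token endpoint.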

It could have been so simple (see [1] for the OAuth2 roles):

1. The client (third-party application) opens a session at the authorization server, detailing the requested rights and scopes.

2. The authorization server returns two random IDs (a public session identifier, and a secret session identifier for the client) and stores everything in its database.

3. The client directs the user (resource owner) to the authorization server with the public session identifier, so the user and any possible attacker only ever get to see the public identifier.

4. The authorization server uses the public session identifier to look up all the details of the session (requested rights and scopes, and who wants access) and presents them to the user (resource owner) for approval.

5. Once approval is given, the user is directed back to the client carrying only the public session identifier (potentially not even that is necessary, if the user can be identified via cookies), and the client fetches the bearer token from the authorization server using the secret session identifier.

That would be so much easier...
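The proposed flow above can be sketched as an in-memory authorization server in Go. This is only an illustration of the commenter's hypothetical design, not any real OAuth implementation; all type and method names are invented:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"errors"
	"fmt"
)

// session is everything the authorization server stores server-side;
// the URL only ever carries the public identifier.
type session struct {
	clientID string
	scopes   []string
	approved bool
	token    string
	used     bool
}

type authServer struct {
	byPublic map[string]*session // looked up when the user arrives
	bySecret map[string]*session // looked up by the client, never put in a URL
}

func randomID() string {
	b := make([]byte, 16)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// openSession: step 1/2, the client registers the requested scopes and
// receives the public and secret session identifiers.
func (s *authServer) openSession(clientID string, scopes []string) (public, secret string) {
	sess := &session{clientID: clientID, scopes: scopes}
	public, secret = randomID(), randomID()
	s.byPublic[public] = sess
	s.bySecret[secret] = sess
	return
}

// approve: step 3/4, the resource owner, identified only by the public
// ID, reviews the stored scopes and grants access.
func (s *authServer) approve(public string) error {
	sess, ok := s.byPublic[public]
	if !ok {
		return errors.New("unknown session")
	}
	sess.approved = true
	sess.token = randomID()
	return nil
}

// fetchToken: step 5, the client redeems its secret ID exactly once.
func (s *authServer) fetchToken(secret string) (string, error) {
	sess, ok := s.bySecret[secret]
	if !ok || !sess.approved || sess.used {
		return "", errors.New("not authorized")
	}
	sess.used = true
	return sess.token, nil
}

func main() {
	srv := &authServer{byPublic: map[string]*session{}, bySecret: map[string]*session{}}
	public, secret := srv.openSession("third-party-app", []string{"read:photos"})
	srv.approve(public) // the user consents at the authorization server
	tok, err := srv.fetchToken(secret)
	fmt.Println(tok != "", err)
}
```

Note how the bearer token never touches a URL: the public ID is useless for redeeming the token, and the secret ID is only ever used in a direct client-to-server call.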

Alas, we are stuck with OAuth2 for historic reasons.

[1] https://aaronparecki.com/oauth-2-simplified/#roles


There are advantages to embedding: you can retain the host language's type system and object model. If you have a great query language and model but have to write a ton of code to marshal data back and forth, it might not be adding that much value (the classic impedance mismatch).

While Go's compile-time metaprogramming is virtually non-existent, its runtime metaprogramming with reflection is more or less complete. There's a runtime cost to using it, but that can be mitigated.
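One common way to mitigate that cost, sketched here (this is my own illustration, not code from the project linked below), is to cache reflection lookups per type so the expensive `reflect` calls happen once:

```go
package main

import (
	"fmt"
	"reflect"
	"sync"
)

// fieldIndex caches field-name -> field-index maps per struct type,
// amortizing the cost of repeated reflect lookups.
var fieldIndex sync.Map // reflect.Type -> map[string][]int

func indexFor(t reflect.Type) map[string][]int {
	if m, ok := fieldIndex.Load(t); ok {
		return m.(map[string][]int)
	}
	m := make(map[string][]int, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		m[f.Name] = f.Index
	}
	fieldIndex.Store(t, m)
	return m
}

// get returns the named exported field of any struct value,
// or nil if no such field exists.
func get(v interface{}, name string) interface{} {
	rv := reflect.ValueOf(v)
	idx, ok := indexFor(rv.Type())[name]
	if !ok {
		return nil
	}
	return rv.FieldByIndex(idx).Interface()
}

type user struct {
	Name string
	Age  int
}

func main() {
	u := user{Name: "ada", Age: 36}
	fmt.Println(get(u, "Name"), get(u, "Age"))
}
```

After the first call per type, every lookup is a plain map access plus `FieldByIndex`, which is much cheaper than re-walking the type with `FieldByName` on each query.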

See https://github.com/cockroachdb/cockroach/tree/master/pkg/sql... for a reflection-driven, embedded logic query language in Go that achieves the pragmatic goal of writing logic queries over data-structure graphs at reasonable performance and with pretty good expressiveness.

