I've gotten used to relying on an external service for AuthN, but I'm still on the fence about whether I want or need the complexity of external AuthZ. (Though my architecture still revolves around a single core monolith, despite having a dozen satellite micro and not-so-micro services.)
My main gripe with external AuthZ solutions is that they are engineered to be overly generic and unconcerned with space and resource consumption.
Have you considered one of the open source solutions 'inspired by Google's Zanzibar paper' as an extension to your Option 3?
The real question is what "complexity" actually means, because it's mostly in the eyes of the developers.
For microservice lovers, two hash maps maintained inside the component that needs to evaluate AuthZ decisions are "complexity", because "there are well-known, battle-tested solutions, and scientific papers about them, so why reinvent the wheel?"
And for those of us who are allergic to 10x..100x waste in CPU/RAM/throughput/latency, using a "standard" service is what adds complexity, because other parts of the system now have to be designed around those extra milliseconds of latency that shouldn't be there in the first place.
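To make the "two hash maps" concrete, here is a minimal in-process sketch in Go; the role/permission model and all names are invented for illustration, not our actual schema:

```go
package authz

// Checker answers AuthZ questions from two in-memory hash maps: one mapping a
// subject to its roles, the other mapping a (role, resource) pair to the set
// of allowed actions. Keeping both maps up to date is a separate concern
// (whatever sync mechanism the service already has).
type Checker struct {
	userRoles map[string][]string        // user ID -> roles
	rolePerms map[string]map[string]bool // role + "\x00" + resource -> allowed actions
}

// Allowed answers an AuthZ question with a couple of map lookups and no
// network calls, so the whole check stays in the sub-microsecond range.
func (c *Checker) Allowed(user, resource, action string) bool {
	for _, role := range c.userRoles[user] {
		if c.rolePerms[role+"\x00"+resource][action] {
			return true
		}
	}
	return false
}
```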
We looked at Zanzibar, yes, and its latency is too high for our needs. OPA's approach of compiling policy rules into native code for fast evaluation is very attractive, though, at least the policy-evaluation part of OPA taken in isolation.
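For reference, that "policy evaluation in isolation" can be embedded directly via OPA's Go SDK: compile the policy once at startup and evaluate it in-process. The policy and input below are made up for illustration, and the exact import path / API details may vary across OPA versions:

```go
package main

import (
	"context"
	"fmt"

	"github.com/open-policy-agent/opa/rego"
)

const policy = `
package authz

import rego.v1

default allow := false

allow if {
	input.role == "editor"
	input.action == "read"
}
`

func main() {
	ctx := context.Background()

	// Compile the policy once; PrepareForEval does the expensive work up front.
	query, err := rego.New(
		rego.Query("data.authz.allow"),
		rego.Module("authz.rego", policy),
	).PrepareForEval(ctx)
	if err != nil {
		panic(err)
	}

	// Each check is then an in-process evaluation, no network hop involved.
	input := map[string]any{"role": "editor", "action": "read"}
	rs, err := query.Eval(ctx, rego.EvalInput(input))
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", rs.Allowed())
}
```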
Looks like in our case we'd need to:
* either cache the policy evaluation results, not the inputs, and have a separate (centralized) component [re]evaluate those policies and push the evaluated results to the clients (in which case both OPA and Zanzibar would work),
* or build an in-house client-side layer that keeps those hash maps up to date and evaluates AuthZ decisions within tens or hundreds of microseconds (this could also be a sidecar on localhost, reached by a sub-millisecond [gRPC] call over an already established [TCP] connection); see the sketch after this list.
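For the sidecar variant, the client side could look roughly like the following; `authzpb` stands for a hypothetical generated gRPC stub (the .proto is not shown), and the address, fields, and numbers are purely illustrative:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	authzpb "example.com/internal/authz/pb" // hypothetical generated stubs
)

func main() {
	// Create the client once; the underlying TCP/HTTP2 connection to the
	// localhost sidecar is established lazily and then reused for every check,
	// so there is no per-call handshake cost.
	conn, err := grpc.NewClient("localhost:9091",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := authzpb.NewAuthzClient(conn)

	// A tight deadline keeps the worst case bounded; on the happy path the
	// localhost round trip should stay well under a millisecond.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Millisecond)
	defer cancel()

	resp, err := client.Check(ctx, &authzpb.CheckRequest{
		Subject:  "user:42",
		Resource: "doc:readme",
		Action:   "read",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("allowed:", resp.GetAllowed())
}
```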
And, as you correctly point out, the local service / sidecar solution would run on the same machine as the application code, which means its RAM usage needs to be frugal, not unconstrained, which again raises the question of whether a generic external solution is feasible at all.