OAuth didn't make a lot of sense to me until I learned about RFC 7517. JSON Web Keys let participants effectively say "all keys at this URL are valid; check here if unsure". The biggest advantage is that we can now rotate certificates without notifying or relying on other parties. We can also onboard new trusted parties by simply giving them a URL. If this is done all the way through, there is no manual certificate exchange at all.
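The consumer side of this is basically a key lookup by `kid`: read the key id from the token header, then find the matching entry in the provider's published JWKS. A minimal sketch of that lookup, with hypothetical key ids and the RSA key material truncated:

```python
import json

# A sample JWKS document, as a provider would serve it from something
# like /.well-known/jwks.json (hypothetical kids, truncated key material).
JWKS = json.loads("""
{
  "keys": [
    {"kty": "RSA", "kid": "2024-01", "use": "sig", "n": "...", "e": "AQAB"},
    {"kty": "RSA", "kid": "2024-07", "use": "sig", "n": "...", "e": "AQAB"}
  ]
}
""")

def find_signing_key(jwks, kid):
    """Select the key whose 'kid' matches the one named in a JWT header."""
    for key in jwks["keys"]:
        if key.get("kid") == kid and key.get("use") == "sig":
            return key
    raise KeyError(f"no signing key with kid={kid!r}; refetch the JWKS")

# A token signed with a rotated-in key still verifies, because the
# consumer looks the key up by kid instead of pinning one certificate.
key = find_signing_key(JWKS, "2024-07")
print(key["kid"])  # → 2024-07
```

This is why rotation needs no coordination: the provider adds the new key to the document, signs with it, and eventually drops the old one; consumers who cache the JWKS just refetch on an unknown `kid`.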
I am seeing many fintech vendors move in this direction. Their mutual clients want more granular control over access, and resource tokens are only valid for a few minutes in these newer schemes. In most cases we're coming from a world where the same username and password was used to access things like bank cores for over a decade.
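The "valid for a few minutes" part is just an expiry claim checked against the clock. A toy sketch of the idea (hypothetical claim names and a five-minute TTL, not any particular vendor's scheme):

```python
TOKEN_TTL = 300  # five minutes, typical of these short-lived schemes

def mint(now):
    # Hypothetical resource token: issued-at and expiry as unix timestamps.
    return {"sub": "acct-123", "iat": now, "exp": now + TOKEN_TTL}

def still_valid(token, now):
    # A stolen token is only useful until exp; contrast with a
    # decade-old shared password that never expires.
    return now < token["exp"]

t = mint(now=1_000_000)
print(still_valid(t, 1_000_000 + 60), still_valid(t, 1_000_000 + 600))  # → True False
```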
If you dig one step beyond Hetzner you start to see that the whole thing is unavoidably global. There is no truly dominant monopoly holder anywhere. Who makes the photolithography machines? What about those weird Japanese companies that make chemicals and substrates no one else can?
FaaS is almost certainly a mistake. I get the appeal from an accountant's perspective, but from a debugging and development perspective it's really fucking awful compared to a traditional VM. Getting at logs in something like Azure Functions is a great example of this.
I pushed really hard for FaaS until I had to support it. It's the worst kind of trap. I still get sweaty thinking about some of the issues we had with it.
What's the issue with logging? I would have expected stdout/stderr to be automatically forwarded to the provider's managed logging solution (e.g. CloudWatch).
Though I never really understood the appeal of FaaS over something like Google Cloud Run.
As a developer who spent a couple of months developing a microservice using AWS Lambda functions:
it SUCKS. There's no interactive debugging. Deploy for a minute or five depending on the changes, then trigger the Lambda, wait another five minutes for all the logs to show up. Then proceed with printf/stack-trace debugging.
For reasons I've forgotten, running the Lambda code locally on my dev box was not an option, and neither was deploying the cloud environment locally.
I wasn't around for the era but I imagine it's like working on an ancient mainframe with long compile times and a very slow printer.
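For what it's worth, when local execution is an option, a Python Lambda handler is just a plain function of `(event, context)`, so you can drive it directly instead of round-tripping through a deploy. A minimal sketch with a hypothetical handler and event shape:

```python
def handler(event, context):
    # Hypothetical handler: echoes a name from the triggering event.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}

if __name__ == "__main__":
    # context is unused here, so None stands in for the runtime's object;
    # this turns the 10-minute deploy/trigger/wait loop into a local run.
    resp = handler({"name": "dev"}, None)
    print(resp["body"])  # → hello dev
```

This obviously stops working once the function's behavior depends on the surrounding cloud environment, which sounds like the situation above.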
Cloud Run is one service where GCP really shines. It is very flexible: it handles services and long-running background jobs, with fewer of the annoying runtime limitations you find in Lambda.
> Getting at logs in something like azure functions is a great example of this.
This is the least of the problems I've experienced with Azure Functions. You'd have to try very hard to NOT end up with useful logs in Application Insights if you use any of the standard Functions project templates. I'm wondering how this went wrong for you?
Diffusion models need to infer the causality of language from within a symmetric architecture (information can flow forward or backward). AR forces information to flow in a single direction and is substantially easier to control as a result. The 2nd sentence in a paragraph of English text often cannot come before the first or the statement wouldn't make sense. Sometimes this is not an issue (and I think these are cases where parallel generation makes sense), but the edge cases are where all the money lives.
But I do wonder if diffusion models will find a place in more complex software architectures for their long-term coherence and lack of exposure bias; their symmetric architecture could also work well with interaction nets.
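The one-direction-vs-symmetric distinction above comes down to the attention mask: AR uses a causal (lower-triangular) mask, while a diffusion-style denoiser sees all positions at once. A toy illustration of the two masks (plain boolean lists, nothing model-specific):

```python
n = 4  # token positions in a toy sequence

# AR / causal mask: position i may only attend to positions <= i,
# so information is forced to flow in one direction.
causal = [[j <= i for j in range(n)] for i in range(n)]

# Diffusion-style symmetric mask: every position sees every position,
# so ordering has to be inferred rather than imposed.
symmetric = [[True] * n for _ in range(n)]

# The first token sees only itself under the causal mask,
# but the whole sequence under the symmetric one.
print(sum(causal[0]), sum(symmetric[0]))  # → 1 4
```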
https://en.wikipedia.org/wiki/Cummins_B_Series_engine