Hacker News

JSON is good for dumps of your data, for humans reading it, and for interoperability without a schema. On your backends, you should always serialize to proto because it's far more efficient.
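To put rough numbers on the efficiency claim, here's a simplified stand-in: the same record as JSON versus a fixed binary layout. (Real protobuf uses tagged varints rather than a fixed struct, so the actual gap depends on the payload, but the direction is the same.)

```python
import json
import struct

# The same record serialized two ways. The binary layout below is a
# hypothetical stand-in for proto's wire format, not the real encoding.
record = {"id": 12345, "score": 3.14, "active": True}

json_bytes = json.dumps(record).encode("utf-8")
# u64 id, f64 score, bool active -> 8 + 8 + 1 = 17 bytes
binary = struct.pack("<Qd?", record["id"], record["score"], record["active"])

print(len(json_bytes), len(binary))
```

The binary form also skips the field names entirely, which is where much of the saving comes from; proto recovers them from the schema instead.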


Always is such a strong word, always ;) Human readability is very important to me. If performance isn't an issue, I'd rather grep through it.


If it's a startup or a personal project with no special performance requirements, the chance of me using protobufs for anything is very slim. Human readability aside, just bothering to set up the serdes and other boilerplate is too low on my list of priorities. It makes more sense at medium scale.


Slapping it all into gRPC is less boilerplate than whatever else you'd be doing (unless you're doing GraphQL or something). I'd always default to doing that (or thrift, honestly, but same difference) unless there were a particular reason to use something like a manual API.
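For reference, "slapping it all into gRPC" is roughly one IDL file; protoc plus the gRPC plugin generate the serdes and client/server stubs from it, so there's little hand-written boilerplate. (All names below are made up.)

```proto
syntax = "proto3";

// Hypothetical service definition. protoc generates the client and
// server code, so you only write the business logic by hand.
service Search {
  rpc Query (QueryRequest) returns (QueryReply);
}

message QueryRequest {
  string text = 1;
}

message QueryReply {
  repeated string results = 1;
}
```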


gRPC requires HTTP/2, which is a nontrivial added requirement. Prevents it from running on Heroku and, ironically, Google App Engine, and wherever it does work, load balancers will probably need special setup for it, even on Kubernetes. You can translate it to/from JSON, but that defeats the purpose.
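Concretely, the "special setup" mostly means keeping HTTP/2 intact end to end. A minimal nginx sketch (assuming nginx 1.13.10+ with the grpc module; names and paths are placeholders):

```nginx
# Sketch: proxying gRPC through nginx. The listener must speak HTTP/2,
# and grpc_pass (rather than proxy_pass) preserves the gRPC framing.
server {
    listen 443 ssl http2;
    server_name grpc.example.com;            # placeholder

    ssl_certificate     /etc/ssl/cert.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/key.pem;

    location / {
        grpc_pass grpc://127.0.0.1:50051;    # assumed backend address
    }
}
```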


> gRPC requires HTTP/2, which is a nontrivial added requirement. Prevents it from running on Heroku and ironically Google AppEngine

Then use thrift, which speaks plain HTTP. But most of the design decisions in the article are the same there as well. Turns out there's really only one way to do this.


Yep, it's all basically the same thing, with small differences in performance and practicality.


Oh, gRPC over JSON still gives you the advantage of a well-defined API. But I use OpenAPI (aka Swagger) for that because it's a much thinner layer, especially if, for whatever reason, I need to handle HTTP GET (callbacks/webhooks) in addition to my typical HTTP POST with JSON.
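As an illustration of how thin that layer is, the whole contract is one spec document. A minimal OpenAPI 3 sketch covering both the webhook GET and the usual JSON POST (all paths and schemas here are made up):

```yaml
openapi: 3.0.3
info:
  title: Example API          # illustrative only
  version: "1.0"
paths:
  /webhook:
    get:                      # the GET callback/webhook case
      parameters:
        - name: token
          in: query
          schema: { type: string }
      responses:
        "200": { description: OK }
  /orders:
    post:                     # the typical JSON POST case
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                item: { type: string }
      responses:
        "201": { description: Created }
```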



