
Other than some quite significant performance gains, I'd say the main upside for the case of a backend API would be channels. Concurrency is dead simple in Go - and if you want to do 5 different requests to ElasticSearch in parallel and merge the results when all of them are finished (like we do for Universal Search), that's just a few lines of very readable Go. Try that with a GIL, sure, multithreaded Python and Ruby is possible, but it's not for the faint hearted and not as easy to read.
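For readers who haven't seen it, the shape being described is a minimal sketch like the following, where a placeholder `query` function stands in for the real ElasticSearch calls (the function names here are illustrative, not from any real codebase):

```go
package main

import (
	"fmt"
	"sync"
)

// query stands in for a real ElasticSearch call; here it just
// fabricates a result string for the given request index.
func query(i int) string {
	return fmt.Sprintf("result-%d", i)
}

// fanOut runs n queries concurrently and collects all results
// once every goroutine has finished.
func fanOut(n int) []string {
	results := make(chan string, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results <- query(i)
		}(i)
	}
	wg.Wait()
	close(results)

	var merged []string
	for r := range results {
		merged = append(merged, r)
	}
	return merged
}

func main() {
	fmt.Println(len(fanOut(5))) // 5
}
```

That really is just a few readable lines: start the goroutines, wait on a WaitGroup, drain the channel.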

Moreover, I love static deploys and cross compilation with Go. Compile your app (even to Windows!), copy it to a server, simply run it as a single binary. No dependency management, no apt-get & easy_install & pip, it just runs.



While Go does provide channels, I'd argue that they are not dead simple. I'm not saying this to bash Go, and I have willingly used it to solve problems. But I think it needs to be made more clear that this often-praised aspect of Go may disappoint those who are familiar with alternative techniques available in mainstream (read: not Haskell) languages.

For instance, look at the "Go Concurrency Patterns: Pipelines and cancellation" article: https://blog.golang.org/pipelines You'll notice the line "We introduce a new function, merge, to fan in the results", after which you will see how you have to write merge() yourself for every different data type that you use. Yes, you will have to repeat this same kind of code, over and over, any time you want to pipeline, fan out, or merge a new data type (unless you resort to interface{}). Furthermore, you will have to use the Go race detector to make sure you didn't actually mess something up.
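For concreteness, this is the shape of that merge function, following the pipelines article, written for int channels; notice the signature is welded to one element type, so a chan string or chan Result version means duplicating the whole function:

```go
package main

import (
	"fmt"
	"sync"
)

// merge fans in several int channels into one output channel.
// This exact code must be rewritten for every element type.
func merge(cs ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	wg.Add(len(cs))
	for _, c := range cs {
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				out <- v
			}
		}(c)
	}
	// Close out once all input channels are drained.
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

// gen turns a list of ints into a channel, for demonstration.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

func main() {
	sum := 0
	for v := range merge(gen(1, 2), gen(3, 4)) {
		sum += v
	}
	fmt.Println(sum) // 10
}
```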

I can't speak for Python or Ruby, but if you are using Node.js you can use a library like Bluebird, which provides promise combinators. Then it's very easy to perform 5 requests and to handle errors and cancellation on one or more of them. You can do this and more on any arbitrary data type, without writing merge() or repeatedly nesting coroutine-returning functions.

So for handling async operations like dealing with APIs, I personally prefer tools like promise combinators or reactive programming (see Reactive Extensions for JavaScript, also available in many other languages, or supplies in Perl 6 if you're crazy like me) over the significantly more manual approach of using typed channels in Go. I'm sure there are tasks where tight manual control of channels is important, but for the type of work I've been doing, Go is simply too low level.

This article goes into more detail on the weaknesses of pipelining in Go: https://gist.github.com/kachayev/21e7fe149bc5ae0bd878


>This article goes into more detail on the weaknesses of pipelining in Go: https://gist.github.com/kachayev/21e7fe149bc5ae0bd878

This write-up provides some great arguments for why generics could be very useful for abstracting away some of the details of concurrency management.

I find it odd how so many Go developers insist generics are totally unnecessary.


Generics can be nice, but they're clearly not "necessary" to any of the languages that don't support them. Besides, the cost that generics would exact in terms of syntax complexity, compile time, and startup time is something that many Go-detractors dismiss as irrelevant.

Myself, I think enforced tab-based formatting is utterly insane, but ultimately it doesn't matter. Part of the philosophy behind Go is to keep certain things very simple, and leaving generics out is a big part of that. I don't think that they're going to change their mind because non-Go programmers advocate for them. If that makes it the wrong language for your project, then there are so many others to consider.


But Go already has generics: channels, maps, make(), len(), etc. work on generic types. You couldn't have typed channels without generics!

Go just doesn't have any syntactical way of declaring generic types. But given the above, nobody can argue that generics aren't extremely useful, even in the context of Go. "Oh, but the official library is special because it's part of the language", someone might argue. But one would have to be fairly damn obtuse to claim that Go would be better without the built-in generics, or that for some reason those benefits wouldn't extend to the language as a whole, if implemented.
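The asymmetry is easy to demonstrate in a few lines: the built-ins work over any type, while user code that wants the same flexibility has to fall back on interface{} and runtime type assertions (this is a toy sketch, not anyone's real code):

```go
package main

import "fmt"

func main() {
	// The built-ins are generic: make, len, and channels work
	// over any element type without ceremony.
	ints := make(chan int, 1)
	strs := make(chan string, 1)
	ints <- 42
	strs <- "hello"
	fmt.Println(len(ints), len(strs)) // 1 1

	// User code gets no such mechanism: a "generic" value must
	// pass through interface{} and a runtime type assertion.
	var box interface{} = <-ints
	n, ok := box.(int) // checked at runtime, not compile time
	fmt.Println(n, ok) // 42 true
}
```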

I'm really surprised the authors of Go decided to allow generics only for the official library, because they've had to jump through some serious hoops to avoid it. If you look at "reflect", "builtin" and "sort", to pick a few, those packages are a graveyard of typing awkwardness. Look at the sorry state of the sort package, which even has a special function to sort strings. It goes on and on; every time I work with Go code, I end up implementing functions like min() and max() and cmp(). Why is "range" special? Why isn't there a way to generate iterators for any value? Etc.

Go is "simple", sure, but ends up being rather complicated as a result, with tons of the same code having to be written over and over for different types, and tons of typecasting between interface{} and real types, and so on. Nobody (as far as I can see) is asking for Haskell-style typeclasses or operator overloading or type traits or higher-kinded types or any of that. Several up-and-coming languages (such as Nim) implement generics without going overboard with complexity; quite the opposite, generics make those languages simpler.


>Go is "simple", sure, but ends up being rather complicated as a result, with tons of the same code having to be written over and over for different types, and tons of typecasting between interface{} and real types, and so on. Nobody (as far as I can see) is asking for Haskell-style typeclasses or operator overloading or type traits or higher-kinded types or any of that.

Exactly. We're not asking for much here, just the ability to do the most basic kind of type parameterization.

The only benefit afforded by omitting generics is simplicity in the compiler. It does nothing, or worse than nothing, for simplicity in actual Go code.


Even if generics never come to Go, my hope is that at least common pipelining tasks like merge() will become part of the stdlib and have magical generic support... but I haven't heard anything to indicate that such a thing will happen. :(


Python has good coroutine support. In Python 3.5 we have async/await and the concurrent.futures module. For I/O-bound tasks like running REST APIs or MapReduce jobs, I don't see the GIL being much of a problem; you spend eons waiting to do a few milliseconds of work, then wait some more.

> While Go does provide channels, I'd argue that they are not dead simple.

I'd agree.

In my experience it's easier to explain a solution to a fundamental problem than to an abstract one. Channels, futures, promises... all very abstract concepts; useful to the cognoscenti but none are satisfactory at solving the fundamental problem of parallel execution. Hence everyone in their camps about which is right for which tasks.

So even with channels parallel programming is still difficult. You just have the added burden of a different, unique abstraction. Every language ecosystem either has their own community-adopted one or a plethora of them.

I think I'll reserve, "dead simple," for when we have a universal language of parallel execution. Until then... we don't know how to compute!


> Concurrency is dead simple in Go - and if you want to do 5 different requests to ElasticSearch in parallel and merge the results when all of them are finished (like we do for Universal Search), that's just a few lines of very readable Go. Try that with a GIL, sure, multithreaded Python and Ruby is possible, but it's not for the faint hearted and not as easy to read.

This example is terrible. Waiting for DB responses, ElasticSearch queries, and long-running I/O is one place where Python and Ruby multi-threading with a GIL works great. A GIL means multiple threads can't execute Python code at the same time. All of those tasks are by definition NOT running Python code; they're sitting around blocked, waiting for responses.

A better example would be something like image processing, where an image is loaded into memory, broken into multiple independent chunks, and each chunk is processed at the same time in multiple threads. In Go that should work just fine, but in Python and Ruby each thread will spend a lot of time waiting for the GIL.
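A sketch of that better example in Go, with a toy slice of ints standing in for image data and a placeholder brighten function standing in for real per-chunk processing:

```go
package main

import (
	"fmt"
	"sync"
)

// brighten bumps every "pixel" in one chunk; a stand-in for
// real image processing over an independent slice of the data.
func brighten(chunk []int) {
	for i := range chunk {
		chunk[i] += 10
	}
}

func main() {
	pixels := make([]int, 8) // toy "image"
	const workers = 4
	chunkSize := len(pixels) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		// Each goroutine owns a disjoint chunk, so no locking
		// is needed, and the work is genuinely CPU-parallel.
		go func(chunk []int) {
			defer wg.Done()
			brighten(chunk)
		}(pixels[w*chunkSize : (w+1)*chunkSize])
	}
	wg.Wait()
	fmt.Println(pixels) // [10 10 10 10 10 10 10 10]
}
```

This is exactly the CPU-bound shape where a GIL would serialize the threads, while Go's runtime can spread the goroutines across cores.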


>Other than some quite significant performance gains

I am aware that other languages perform better in benchmarks than, say, Python does. But in my experience, I've not ever found the speed of the language to be a bottleneck when I'm benchmarking and optimizing for scalability in a web app.

It's always something else. The database interface, the network, a crappy web framework, whatever. It always seems to be something other than the fundamental language that bogs things down.

I'll openly admit I might be missing something or that perhaps I haven't tried to scale high enough. I just don't get how it's relevant that x language is y times faster than Python when Python hasn't ever been the problem. There's always just so much more low-hanging fruit than the language.


> Try that with a GIL, sure, multithreaded Python and Ruby is possible, but it's not for the faint hearted and not as easy to read

Actually that example does tend to be simple and readable in Python and Ruby. I've used a pattern similar to this to parallelize calls to a few back-end services from a Ruby app and it worked out great. The code I used looked something like this: https://gist.github.com/kevinmcconnell/8365521, which I think is quite readable.

I've definitely found Go's approach to concurrency to be very helpful in other situations; I just think that example could be a bit misleading.


> Try that with a GIL, sure, multithreaded Python and Ruby is possible, but it's not for the faint hearted and not as easy to read.

As others have noted, the example use case is one where multithreading with a GIL/GVL isn't particularly problematic. Moreover, both Python and Ruby have GIL/GVL-free implementations (in Python's case, limited to Python 2 via Jython/IronPython; in Ruby's case, much more current in the language level supported, since current JRuby is Ruby 2.2-compatible).


> Concurrency is dead simple in Go

In my experience, concurrency is not dead simple in Go, not even close.


Concurrency is never dead simple.


> Try that with a GIL, sure, multithreaded Python and Ruby is possible, but it's not for the faint hearted and not as easy to read.

FWIW, you can use a ruby gem like typhoeus specifically for this:

    hydra = Typhoeus::Hydra.new
    requests = (0..9).map{ Typhoeus::Request.new("www.example.com") }
    requests.each{ |request| hydra.queue(request) }
    # blocks until all requests are complete
    hydra.run



