
From the book:

"But it has comparatively few features and is unlikely to add more. For instance, it has no implicit numeric conversions, no constructors or destructors, no operator overloading, no default parameter values, no inheritance, no generics, no exceptions, no macros, no function annotations, and no thread-local storage."



> no default parameter values

Hearing this leaves a bad taste in my mouth, because one of the (mis)features I've found in Go is its implicit default value in struct initialization. In Go, you don't get a compile error when you miss some fields while initializing a struct. e.g.

  type T struct {
    a int
    b string
    c float64
  }
  
  t := T{c: 1.5}
will happily set `t.a` to 0 and `t.b` to an empty string. While this is useful for maintaining backward compatibility (adding more fields to a struct doesn't break upstream code), it also hinders discovering genuine mistakes at compile time. So typical Go programs end up using 0 or an empty string as a sentinel value. This is probably what made me feel Go's dynamic nature the most. It really is pretty close to a dynamic language.


I should explicitly state I seem to be in the minority in the Go community here, but: You don't get a compiler error if you initialize by name. You do get a compiler error when you initialize by position. In my minority opinion, you can and should use that to your advantage whenever possible. Some structs are clearly "configuration-like", for instance, and you don't want an error if a new option shows up, which will probably default to whatever you had before anyhow. Some structs are clearly data structures, and you'd really like to know if your two-dimensional point suddenly grew a third parameter. Of course it's not a bright shining line, but it's often pretty easy to tell which you have, or which thing you want, and use the correct initialization.

In this case, if you used:

    T{4, "hello", 3.5}
most future type changes to the T struct will become compiler errors. (Not all of them will: if the types are compatible, for instance changing the first field to a float, the literal still compiles. If you have richer types in play, that is less of an issue.)

go vet will then complain at you, but you can pass a command-line switch to turn that check off.

(This, amusingly, puts me in the rare position of siding against the Go community, on the side of the language designers. Bet you may not have known there is such a position to take. :) )


One non-obvious downside is that the Go 1 compatibility guarantee doesn't apply to struct literals that don't use field names. (I suspect you're aware of this, but other readers might not be.)

So it's possible that a future version of Go could add a field to some struct you're using and your code will stop compiling when you upgrade. It's an easy fix, of course, so it's not that big of a deal, but it's worth realizing.


The point is that if I'm using stuct literals, I want the compiler to stop me for those structs.

I'm explicitly rejecting the idea that all struct changes should be possible without producing compiler errors. A compiler error when the guarantees your code is based on change is a feature, not a bug.


That's a pretty risky thing to do.


I just don't get this attitude. I'm asking for the compiler to break my code if something I depended on changes. The alternative is the risky one! This is the safe alternative.

Compiler errors aren't evil. They're a tool. They work best when there is a one-to-one correspondence between problems and errors. That's not possible in the general case, but the closer we get, the better. And the worst case is not when I get a spurious error. That's easy to deal with. The worst case is when I don't get an error I should have. If you're going to worry about "riskiness", that's the risk that should keep you up at night. Not compiler errors for things that turn out to be no big deal, and can quite likely be fixed with one quick gofmt -s.


In all other cases I'd want the compiler to break my code. The problem here is that this technique is very fallible: the chance of false negatives, undetected errors, is high. It's risky because there are a ton of cases where you won't get a compiler error. It unduly makes you feel safe, which is not a good thing in my opinion. That is mostly why I think it's risky: you feel safe when you shouldn't.

With keyed fields, the worst case is that you have uninitialized fields, which typically doesn't cause many problems and gets caught quickly where it matters. With unkeyed fields, you might have code that compiles but sets unexpected fields. Things that would otherwise panic now just keep working without you noticing, until strange things happen and you have to review all initializations and remember the struct layout every time you see the struct being created.

Personally, I don't like either technique anyway; both are too error-prone. I prefer writing a small constructor where I handle initialization deliberately. It's not super Go-ish, but at least I centralize all the issues surrounding struct initialization in one place: the constructor. Then when I change what fields go in the struct, I change the function signature, the compiler breaks, and nothing falls through silently.


I agree. But the industry is hurtling down a tunnel of weak typing and runtime checking. So compiler features are diminishing in relevance at a geometric rate.


I see the exact opposite trend happening. Weak typing is plateauing. It's the last moment of apparent strength before long, slow, but inevitable collapse. Most interesting work is being done on the static side right now, partially because there's no more work to be done on the dynamic side. (A great deal of being dynamic is precisely throwing away all the structure you might build further features on.)

You can also see this in how all the dynamic languages are working on adding "optional" or "incremental" dynamic typing. Static languages, by contrast, generally create one dynamic type, stick it in a library somewhere, and let the small handful of people who really need it use it. Few, if any, of them are adding any dynamic features. The motion trends are clear.


So I get to go back and look at the struct to see the order every time I initialize an instance? Or watch everything break when the noob on the team alphabetizes the struct fields? Yeah, that's a great solution.


If you're doing this, it is, by definition, on structs you choose to do it on. If you lack the judgment ability to decide when you want that, fine, never do it.

And the noob that is so noobish that they change code and don't even compile it to check to see whether it works is a menace well beyond this issue. That's an overpowerful argument; the real problem is the noob that isn't even running the compiler. The noob doesn't "break struct initializations" specially, they break everything.


> If you lack the judgment ability to decide when you want that, fine, never do it.

The choice in question is whether I want code that breaks silently when I add a field to a struct (named fields in initializers), or code that breaks silently when I swap fields of the same type in a struct (positional fields in initializers). Please tell me more about how "judgment ability" makes this anything other than a choice between brittle code and brittle code.

> And the noob that is so noobish that they change code and don't even compile it to check to see whether it works is a menace well beyond this issue. That's an overpowerful argument; the real problem is the noob that isn't even running the compiler. The noob doesn't "break struct initializations" specially, they break everything.

Compilation will not catch all situations where struct fields are reordered. Consider the rather common case where two fields on a struct are of the same type. If a noob swaps the order of these fields, it will compile just fine using your method of struct initialization. It's even quite possible that if unit tests initialize the structs in the same way, this could get past unit tests as well.
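A minimal Go sketch of that failure mode, using a hypothetical Rect struct: after the two int fields are swapped, every positional literal still compiles, with the values silently transposed.

```go
package main

import "fmt"

// Suppose someone reorders these two same-typed fields:
type Rect struct {
	Height int // was: Width
	Width  int // was: Height
}

func main() {
	// The author meant width=10, height=20, but a positional literal
	// fills fields in declaration order, so the values are swapped
	// with no compiler error.
	r := Rect{10, 20}
	fmt.Println(r.Width, r.Height) // 20 10
}
```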

This is a pretty obvious case, and the fact that I have to explain it to you is yet another example of having to dumb things down for Go users who don't know the first thing about programming language design.


"Consider the rather common case where two fields on a struct are of the same type."

Or perhaps the even more complicated case I already mentioned upthread: that an int will still happily initialize a float?

"Consider the rather common case where two fields on a struct are of the same type. If a noob swaps the order of these fields, it will compile just fine using your method of struct initialization. It's even quite possible that if unit tests initialize the structs in the same way, this could get past unit tests as well.... dumb things down for Go users"

What does any of this have to do with Go? All languages with structs have these "problems"! Even Haskell will have the exact same problems (even before you turn on OverloadedStrings). You're reaching so hard to be dismissive of some sort of stereotypical programmer that only exists in your head that you've completely surrendered reason. You should reconsider whether that's really who you want to be.


> What does any of this have to do with Go?

I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question. We're discussing ways of initializing Go structs.

> All languages with structs have these "problems"!

This is completely false, and exactly why I'm dismissing you: we wouldn't be having this conversation if you knew anything about other languages. There are plenty of languages that will warn you when you fail to initialize a struct field, even when initializing fields by name.


That looks a lot like C to me, which I wouldn't call a dynamic language.

  typedef struct Foo {
    int a;
    int b;
  } Foo;

  Foo f = (Foo) { .a = 1 }; // This will initialize b to zero.


Isn't it funny how C has so many ways to accomplish the same thing. Why did you use a typedef with a tag?

  typedef struct {
    int a;
    int b;
  } Foo;

  Foo f = { .a = 1 };


Tagless structs can't be forward-declared.

(Obviously, this looks like a local struct, so there's possibly no need for forward declaration. But you might have a snippet to generate this sort of thing for you. Or maybe it's just force of habit. And so on.)


> Or maybe it's just force of habit.

Or cargo-culting.

edit: wow somebody felt threatened. I have no shame stating that I cargo-culted exactly that for a while before I actually wondered what I was doing.


I'm guilty of this when writing C. I really have no desire to learn the language properly (it's hard enough to fit C++ in my brain), so I'll just follow the patterns others have set.

Microsoft does stuff like:

    typedef struct _FOO {
        ...
    } FOO, *PFOO;
Yeah ok fine, I'll do that.


When doing this in your own libraries, be sure to document how to generate the struct tag name from the typedef name. (MS don't do this - but they're not consistent about it anyway.) Then when people see a typedef'd struct used somewhere in a header, they'll know how to forward declare it in their own headers.


What's forward declaring mean?



It's one of those features that seems mad until and unless one runs into the situation that justifies it.

Go provides default values to avoid the C error-factory of random undefined behavior resulting from re-use of whatever is in a memory address; that much is clear. But the reason Go lets you partially instantiate an object (and separates out construction from state) is to make it easier to write unit tests, where the common case is that you want to circumvent the "main line" object construction pathways.


I have long felt that floats should default to NaN, so that any attempt to perform operations with them before they're initialized results in an error.


From the preface: "achieving maximum effect with minimum means."

Sort of the anti-Perl? I say this as someone who likes both Perl and Go.

Go is very contrarian, and I applaud this.


> From the preface: "achieving maximum effect with minimum means."

Wouldn't be out of place at a marketing agency, with about the same level of truth too.

> Sort of the anti-Perl?

In what sense? The one thing you can say about Perl is that it's a huge language, so the anti-Perl would be a very small language. Go isn't a very small language (like Forth); it isn't even a small one (like Smalltalk or Self); it's about the same size/complexity as an early Java. Somewhat bigger in some ways (more magical builtins and constructs), somewhat smaller in others (simpler visibility rules, no synchronized methods/blocks), but in the best case it's a wash.

> Go is very contrarian, and I applaud this.

Perl is also very contrarian.


I think it's fair to call Go an anti-Perl.

Perl is very liberal. There's always more than one way to do things.

Go is very conservative, in comparison.

I'd agree that both are contrarian, but for very different reasons.


> I think it's fair to call Go an anti-Perl.

Then again, pretty much anything can fairly be called an anti-Perl, possibly even Perl.


Go is very contrarian, and I applaud this.

It takes more than reversing the order of parameters and using known-braindead ideas like codified tabs-are-good syntax to make contrarian ideas valuable.

Just because you change green lights to mean stop and red lights to mean continue doesn't make contrarian suddenly better than the way things were.


While I think your comment is unconstructive, I do have to say that I don't quite understand why Go decided to force hard tabs.

Even more confusingly to me, I really don't understand why they seem to standardize on tabs expanding to 8 spaces rather than 4.

The 2 spaces (of soft or hard tabs) favored by some Ruby and CoffeeScript programmers is too little, but 8 spaces is way too much.


> While I think your comment is unconstructive, I do have to say that I don't quite understand why Go decided to force hard tabs.

Since you're using gofmt, which imposes a strict discipline, tab-indent plus space-align lets you configure the tab width however you want locally without imposing it on other collaborators. The usual issue with the idea is doing it consistently and getting people to configure their editors properly (if the editor allows tab-indent plus space-align at all); when the code is being hard-reformatted, that's not a problem.

> Even more confusingly to me, I really don't understand why they seem to standardize on tabs expanding to 8 spaces rather than 4.

8 is the historical/default tabwidth on Unices (unconfigurable environments generally have a tabwidth of 8), using hard tabs but defaulting to anything else would be odd. And since it's tabs, you can configure your environment to whichever tabwidth you prefer (like 3 or 6, I've not seen editors with support for tabwidths in half-spaces or pixels but in theory that's also an option) (well technically the CSS tab-size property supports arbitrary <length> tabsizes but only Chrome >= 42 supports that, the rest only supports <integer> spaces, except for IE which has no support whatsoever).

A claimed benefit of an 8-wide tab is also that rightward drift becomes a problem extremely early; the tabwidth thus acts as a check against over-nesting. Now that's inconvenient in languages with significant "natural drift" like C# (where your code lives in a method in a class in a namespace, so you're already 3 indents deep before you've written anything; class-in-file languages tend to have a tabwidth of 4 or even 2, probably for that reason), but IIRC Go only has a single "natural indent", the rest is all yours, so a tabwidth of 8 serves as a check against nesting code too much.


people properly configuring their editor

A lot of coding is reading examples online these days. Trying to read Go code on GitHub is awful since three forced tab indents feels like you're 50% across the screen already (and forget trying to read it on mobile).

Browsers don't really have a "set tab width" option that I've found (and forget trying to set user options on mobile browsers).

a check against nesting code too much.

For expert programmers coding for long-term correctness, then yes. But beginners and lean "we just gotta ship this shit" startups will just create 9 levels of unreadable cruft.


GitHub allows you to set the tab size to <n> when viewing code by adding "?ts=<n>" to the end of the URL. I don't know if there is a way to set it for an account.


> Browsers don't really have a "set tab width" option that I've found.

The `tab-size` CSS property is supported by all browsers except MSIE, though only for integer numbers of spaces (aside from Chrome 42, which supports arbitrary widths). In most desktop browsers you can set up a "user CSS" to set it.

> For expert programmers coding for long-term correctness, then yes. But beginners and lean "we just gotta ship this shit" startups will just create 9 levels of unreadable cruft.

Would their unreadable cruft be any more readable with a tabwidth of 4 or (god forbid) 2?


I am vastly in favor of hard tabs, as it doesn't enforce tab size. Question: why do you say that they standardize on tabs as 8 spaces? I've done all my Go programming with a tab width of 4.


Hard tabs make "pretty/readable indent" formatting difficult too.

If you want to line up certain arguments across lines, you just can't because you're forced to an unknown width of alignment chosen by the reader. So, all your code will just be indents that ignore the specific visual alignment intentions of the author, and that reduces readability and understandability in multi-person teams (and programming is a team sport, not a one-person-does-it-all game).


> Hard tabs make "pretty/readable indent" formatting difficult too.

That's alignment, not indentation. AFAIK gofmt uses spaces for vertical alignment.


But, that's arguing two points, right? That's like saying ASCII has 8 built in non-visual field separators, so people should use those instead of CSV/TSV for text tables.

Sure, it's technically the right distinction, but it's not practical in any reality in which we live.

Trying to say "alignment" is distinct from "indent" and that tabs and spaces can be mixed depending on your intention is just crazy talk.

The only place tabs should be used is in Makefiles, and Makefiles should be autogenerated by CMake these days, never written by hand.


> But, that's arguing two points, right?

No?

> Sure, it's technically the right distinction, but it's not practical in any reality in which we live.

It's not practical to do by hand (because most people can't be arsed to configure their editor to do it, or their editor is incapable of it in the first place), why would it not be practical when a tool takes care of it for you and everybody uses that tool?

> Trying to say "alignment" is distinct from "indent" and that tabs and spaces can be mixed depending on your intention is just crazy talk.

And yet gofmt seems to work.

> The only place tabs should be used is in Makefiles, and Makefiles should be autogenerated by CMake these days, never written by hand.

Why? If the distinction between indentation and alignment can be made and can be made correctly, it means anyone can pick the tabwidth they prefer and things will just look right for everybody, that's strictly superior to either tabs or spaces. That's been advocated for decades, it just doesn't work when you leave it to people, which gofmt doesn't.

I'm quite far from a go fan, but achieving the ideal of "tabs for indentation, spaces for alignment" is definitely praiseworthy, whatever you think of other formatting rules.


I've worked on a (C++) codebase which mixes tabs and spaces for this reason. It works okay; in particular, since my editor is configured for 4-space tabs while patches are reviewed on GitHub with 8-space tabs, any mistakes are likely to be caught in review. But it's not that hard to avoid making them in the first place if you remember to use the space bar to line things up.


gofmt works freaking beautifully for this though. Sure, if I tried to do it myself, I'd screw it up. That's why I don't do things that the computer can do better, easier, and faster than I can. I just hit save, and my editor routes it through gofmt and refreshes the file. It's to the point where I'll hit ctrl-s after moving a brace just to have gofmt reformat, CAUSE IT DOES IT FASTER THAN I COULD.


A significant fraction of the Go core engineers use proportional fonts when programming. On their screens, hard tabs are the only tabs that work. Spaces on proportional fonts are too tiny to be useful for moving code around.


What? Really? Who in the world would do that?


Users of Acme, written by Rob Pike. See screenshots on this page and note that the font used is not monospaced. http://acme.cat-v.org/


Is there a reason why Acme doesn't use a monospace font? I couldn't find any justification on that site.


In this message I'm trying to make the argument I think they would make (I personally use monospaced fonts).

It's not that Acme can't use monospaced fonts, it's that Rob/Russ/others don't want to use them. Proportional fonts are better fonts, so why not use them instead? One possible reason is that existing code formatting conventions assume that text is lined up in columns, but we have a tab key that magically lines things up: it's the whole job of the tab key. So, why not forget about space-based alignment, use the tab key for the job it was built to do, and get the advantage of using pretty fonts?


The use of proportional fonts is the least weird thing about Acme :) I'm really intrigued by that editor. Some day I'll give it an honest try.


I used Emacs from 1993 to 2004, and have used Acme for the 11 years since. I don't miss trying to memorize all the key combinations from Emacs. I like that Acme presents a clean, simple, and direct Unicode interface to what I work with: mostly editing shell scripts and running shell commands, as a build engineer. It takes a while to get used to mouse-button chording, but I don't even think about it now.

I constantly use guide files, in many directories, to store and modify commonly used commands to highlight and run, so I make many fewer typos now, and don't forget which commands to run or how I run them. I can also switch contexts a lot faster, both because commands are laid out in the directories where I use them, and because the Dump and Load commands store and retrieve sets of files in the tiled editor subwindows.

When I had to work on Windows I enjoyed having a pared-down unixy userland that I could write scripts in, to use also in my Linux Inferno instance (mostly communicated from one instance to the other through a GitHub repo for backup and version control). The biggest drawback to me with Inferno is that so few other people run it that I have to compile it myself on any new platform I run it on (there are not really rpms/debs/etc. available to just install). But your experience with Plan 9 Acme might be better; I just prefer also working with the Inferno OS improvements, such as bind, /env, sh, etc.


I love two spaces. I used to use four but switched a few years back and now anything more than two looks strange to me. It's just a preference, but it does keep your line length shorter, which is nice if you like to adhere to a maximum line length throughout your code.


> no default parameter values

This would seriously bum me out, as I find the easiest way to extend the functionality of an existing Python function is to add a new parameter with a default value. That way, regardless of whether the existing code base calls the new or old version of the function, it performs the same way it always has.


Since Go is statically-typed and compiled, it's much easier to refactor a function compared to Python. Change it and fix everywhere the compiler complains.


That's fine for internal functions, but a big problem when publishing any kind of interface. It's really nice to be able to extend an interface without breaking stuff or adding cruft.


True, I forgot to think about that. But it's still potentially a lot of changes to fix, even if you are explicitly told what needs adjusting.


And this is where Go's tooling shines: https://golang.org/cmd/gofmt/

(Refactor automatically, not by hand.)


You can use variadic arguments with an interface type or a single struct parameter to get similar behavior.
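One common sketch of the single-struct-parameter idiom mentioned here (all names are made up for illustration): the zero value of each field stands in for "use the default", so new options can be added to the struct without touching existing call sites.

```go
package main

import "fmt"

// Options bundles optional parameters; the zero value of each field
// means "use the default". Adding a field later is backward-compatible
// for keyed callers.
type Options struct {
	Timeout int    // 0 means "use the default of 30"
	Name    string // "" means "use the default name"
}

func Connect(opts Options) string {
	if opts.Timeout == 0 {
		opts.Timeout = 30
	}
	if opts.Name == "" {
		opts.Name = "default"
	}
	return fmt.Sprintf("%s/%d", opts.Name, opts.Timeout)
}

func main() {
	fmt.Println(Connect(Options{}))           // default/30
	fmt.Println(Connect(Options{Timeout: 5})) // default/5
}
```

The trade-off, as noted upthread, is that a legitimate zero can't be distinguished from "unset" without extra machinery (pointers or a set flag).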


Are they actually proud of having no generics? :o


No, the authors specifically said that they would like to have generic features but couldn't figure out a way to implement it without unacceptable performance problems.

I'm pretty sure the feature will show up in the next few minor version increments.


> I'm pretty sure the feature will show up in the next few minor version increments.

No it won't. People need to stop expecting generics, because it will never happen. Not that I agree with this, but it was made pretty clear on the go-nuts mailing list that the Go team wants to keep Go's type system "simple".


Maybe in Go 2? ;)


Never gonna happen, Go 2 is considered harmful.


This is one of the things I like about Go: it's "done." In exchange for passing on extensions that might make certain use cases easier, we'll avoid the bloat and have decades of backward compatibility.

We just came out of a decade of nifty language mania. What I learned is that languages are boring but problems are interesting. Algorithms and solutions are interesting. A great solution to a challenging problem is really interesting even if it's in the most boring language ever.

I have code on my machine written in C in the 70s because C is largely "done." People today continue to write interesting stuff in C. Neuromancer was written in the same language as Lord of the Rings and Moby Dick, too.


C is clearly not done, actually. Both C99 and C11 added, deprecated, and even removed features. You can run C from the 70s not because the language didn't change, but because it retained backward compatibility. It would be pretty surprising if Go manages to avoid even the level of change C has undergone.


What C compiler do you use to compile that 70s code? I find modern compilers usually choke on the "done" C code from then.


You have to feed it a switch or two, but gcc will do it.

Of course, we're talking about pre-ANSI C where any function prototype without a void argument implicitly accepts any and all arguments.


ANSI was 9 years after the 70s. I'm curious about the 70s C code that was "done". My point is that Go may still be in the 70s.


You missed the joke re: Go 2.

I don't think Go is done. The active development speaks otherwise.


I think everyone missed it. Glad you, at least, didn't :-)


Basically they dismissed every implemented approach, despite generics obviously working in other systems. (And it's hardly a "new" feature unless we're counting in multiples of decades.)


> Basically they dismissed every implemented approach, despite generics obviously working in other systems.

But working at a price. And Go isn't willing to pay the price (especially in terms of compile time). If Go ever adds generics, it will be with a new approach that doesn't blow up compile times.


C# compiles and JITs incredibly fast and has generics. F# has the same generics and compiles far slower. The reason isn't compile times. (Especially since one impl of generics is just generating code, which is cheap.)

IIRC the reasons on the list were the standard tradeoffs of memory space and so on. Do they emit specialized versions for each function, or not, and so on. Again, stuff that's working fine in other platforms.


A few years ago, Andrei Alexandrescu showed that the dmd compiler was actually faster than the gc compiler, and D supports templates. Recently, with 1.5, Go has shown that it is ready to take a hit on compile-time speeds. I don't think the argument of compilation speed holds for generics. To me it sounds much more like a culture thing: generics, for better or for worse, add an extra thing to think about, and I think that the Go authors and many Go developers are just not interested.


I wonder how much developer time is spent due to a lack of generics.

Developer time costs more than compiler time.


> Developer time costs more than compiler time.

I find that to be a very odd statement. Usually, the developer waits for the compiler in order to find out if the code compiles and executes properly. That is, every minute of compiler time costs a minute of developer time.

Worse, the developer time you spend due to lack of a feature, you spend while writing some code that would benefit from the feature. The compiler time you pay every time you compile - year after year, for some projects.


> I find that to be a very odd statement. Usually, the developer waits for the compiler in order to find out if the code compiles and executes properly. That is, every minute of compiler time costs a minute of developer time.

If your business starts to hit a wall on compile times you can buy a computer that can compile twice as fast. It's much harder to buy a developer who can think twice as fast. And every year the computers get faster and the developers stay the same.

> Worse, the developer time you spend due to lack of a feature, you spend while writing some code that would benefit from the feature. The compiler time you pay every time you compile - year after year, for some projects.

No, the cost of being unable to abstract increases exponentially as your system grows. If using language X lets you cut 500 lines from a 1000-line Go project, then when you have a 2000-line Go project, in language X you'd be able to cut 500 lines from each half of it considering each half in isolation - and then you'd be able to cut some more because of things that were common between the two halves - so you'd end up with just 750 lines of language X. And you pay the cost of extra lines every time you read or debug, year after year.


> If your business starts to hit a wall on compile times you can buy a computer that can compile twice as fast.

Buying a fast machine only gets you so far. Large C++ projects take minutes to compile even on the fastest machines available. Plus, you'd need to buy one for every developer.

> when you have a 2000-line Go project, in language X you'd be able to cut 500 lines from each half of it considering each half in isolation - and then you'd be able to cut some more because of things that were common between the two halves - so you'd end up with just 750 lines of language X.

While this is true in theory, in practice I think the effect is not quite as large. As the project grows, developers take ownership of certain parts of the code and become ignorant of other parts. This is the whole point of abstraction. Under these conditions it will take a heavy investment of time and effort to find and replace the things in common between the two halves. So you might cut 250 lines in common between two 1000-line halves, but you're not going to cut 25,000 lines in common between two 100,000-line halves without a serious amount of work.

I think Go's design shows awareness of this effect. The Go literature does not preach the battle against code duplication as strongly as, say, Java. The goal is to make it easy to understand the other team's 100,000 lines, even if that comes at the expense of some code duplication.

Note: I am not a Go programmer, but I do think that optimizing for "code entropy" (lack of duplicated code) over all else is a mistake.


> Under these conditions it will take a heavy investment of time and effort to find and replace the things in common between the two halves.

Maybe. I find the same patterns tend to show up in a lot of code, so very high-level libraries like scalaz or recursion-schemes (which you can't even express without a powerful type system) turn out to save code virtually everywhere.

> Note: I am not a Go programmer, but I do think that optimizing for "code entropy" (lack of duplicated code) over all else is a mistake.

Intuitively it does seem like other things should be more important, but I've become more attached to that measure through experience. Even seemingly innocuous duplication tends to go wrong over time.


It depends on the use case... C# is a wonderful language, since the addition of generics and lambdas, it's downright beautiful to work with. But this does come with a cost... Even a simple hello world console app has a pretty significant spin up time compared to go, or anything that is truly compiled.

In some cases, if you have long-lived services, then Java and .Net make sense... You can get farther with the code in place. If you are running one-off executable handlers, that need to start and finish quickly, then you probably would favor go.

It's entirely possible for different options to be part of a larger solution.. and while I agree, the lack of generics is truly painful... I remember C# before, and feel that Java's generics are a horrible implementation... I'd rather wait for a nice implementation just the same.


I cannot find any reason to believe that lambdas have anything, at all, to do with spin-up time. A hello world console app wouldn't even be using them much (closures are just objects so...).

And I doubt generics make a significant difference in runtime but I don't have a CLR v1.1 around to test it out. For comparison, a C# hello world takes about 10ms longer than a C one (both compiled with optimizations; .NET 4.6 C# 6 / MSVC 19) on my i5 Broadwell laptop. Timing as measured by "time" in bash (~25ms vs ~35ms).

I'm guessing you're talking about JIT in general and of course have a point there. I doubt it's significant for any significant values of significant.


I wasn't saying lambdas are a reason that it's slower.. only that it was a feature that made it really nice to work with.

I remember the difference being a bit more than that, on the order of half a second.. but that was around the .NET 1.0 timeframe.. I still used it for a lot of things because it didn't matter to me.. but for a couple of things I wanted to use it for at the time, there was too much lag starting an EXE and getting output from the command prompt.. running as a service was a different story.

A 1.2Ghz early Athlon was a lot slower than what we have today as well... even so, depending on what you need, even 10ms can make a difference.


I don't think it has to be a tradeoff though. Look at OCaml or possibly D - decent type system, fast compilation and fast runtime performance. And I'd expect Rust to do even better.


> Worse, the developer time you spend due to lack of a feature, you spend while writing some code that would benefit from the feature. The compiler time you pay every time you compile - year after year, for some projects.

Except that this is a false dichotomy: you don't have to have a lack of features to get a fast compile. Incremental compiles have existed for a very long time in various language ecosystems and will achieve very acceptable results.

In addition, SSDs and multicore CPUs can be leveraged to decrease compile times, and these things are only getting better.


I imagine not that much in practice. It is not like someone is going to manually write out identical functions twenty times for each type they want to support. That's precisely what computers are good at doing and there are countless tools to do it painlessly.

The bigger problem is that Go doesn't have type inheritance or similar. Meaning, there's no great way to say that this generic function will only work with number types, for example. You leave the burden on the programmer to ensure their generalized function is applied only to types which it is intended to be used with.

While that is less than ideal, I cannot see that increasing developer time by a significant margin.


> It is not like someone is going to manually write out identical functions twenty times for each type they want to support.

I'll stop you right there. Have you seen a modern Go codebase?

They most certainly duplicate the simplest of functions, resort to `go generate`, or use reflection.


I am afraid I am not entirely sure of what you are trying to get across here. Your mere mention of go generate indicates to me that you do understand my point about computers being able to free the programmer from doing the drudgery of implementing the same generalized function twenty times. And since you are familiar with go generate, I expect you also realize there are seemingly endless tools that exist to solve this specific problem.

I _think_ what you are trying to say is that templates in Go are less convenient than in some other languages. That is a completely fair assertion. But the idea of having to type `go generate` occasionally adding significant man-hours to a project seems a little far-fetched. You could even:

  alias go='go generate && go'
I completely understand the appeal of templates/generics being a first-class language feature. Not even the Go authors themselves discount their usefulness. I just don't understand why the lack of them adds so many man-hours to your projects. The overhead of working around the lack of them should not be that significant, even if less pleasant.


Which is why Go is advocating using "go generate" to generate code. See projects such as https://clipperhouse.github.io/gen/


A fantastic idea, even slower to compile (since you need to parse extra source files), more complex workflow, and extra files which developers have to remember to ignore.


This is exactly the case where the problems of a language are solved by tooling. This attitude can be easily found in the Java ecosystem, where the answer to every question is "use the IDE". Interestingly, Java recently abandoned the idea of feature stagnation and started improving the language. I'm curious when this will happen to Go.


Slower to compile, but starting from a much faster starting point.


I'm not even aware of go compiling most of the time (it's typically under 1 second). go's build system handles these sort of things without a more complex workflow.


> I'm not even aware of go compiling most of the time (it's typically under 1 second).

That's beside the point. Codegen from extra on-disk files can only be slower than codegen without them, so "generics are slow to compile" and "Go has go generate" don't make sense together, yet they are used together to assert that generics are bad and that Go has a replacement anyway.

> go's build system handles these sort of things without a more complex workflow.

No, it doesn't. If you're using go generate you have to run go generate. That's a strictly more complex workflow than not having to run it.


I didn't say generics were bad. I said compilation is so fast that I didn't notice it.

While I agree that your assessment is strictly correct (it is more complex), I think you're being a bit literal; it's just another command. Most of my builds have hundreds of commands.


Slow compilers waste developer time.


Incremental compiles have existed for quite some time.

Go is only unique in that it can do a complete recompile in very attractive times. The only hitch, of course, is that you sacrifice features known and loved in other languages for over a decade.


I sometimes feel that HN needs some reply bots.

For instance, when generics are mentioned in relation to Go, then it should auto-reply with this link: https://news.ycombinator.com/item?id=9622417

It would save so much time.


Compare this with Python, which has carefully evolved into a better language.


More like into two languages that will keep on existing for the next decade.



