Go 1.10 Beta 1 is released (groups.google.com)
105 points by shabbyrobe on Dec 8, 2017 | hide | past | favorite | 101 comments


From the release notes, arguably the best feature: "There are no substantive changes to the language."

Seriously, Go is not a perfect language, but I have come to really appreciate the stance that the Go team have taken on stability. Looking forward to Go 2, but very happy to keep on with Go 1.


My biggest fear is that Go 2 may be a major break, like Symfony 2 and Angular 2 were (as examples).

Running old code and having it "just work" is something I really love about Go. I hope Go 2's new features won't come at the cost of breaking old code.


Are we too early for "Go-2 considered harmful" jokes?


There is always the possibility that a go fix tool makes a transition to version 2 super simple.


The maintainers have already publicly stated that this will be the case. They used python 2 to 3 as an example of what they're NOT going to do.


Awesome. Does anybody here know where this discussion happened, for further reference?


I believe the "official" start of the discussion was on the Go blog [1].

[1]: https://blog.golang.org/toward-go2


Indeed, thanks.

Here is the relevant part from it:

> Go 2 must also bring along all the existing Go 1 source code. We must not split the Go ecosystem. Mixed programs, in which packages written in Go 2 import packages written in Go 1 and vice versa, must work effortlessly during a transition period of multiple years. We'll have to figure out exactly how to do that; automated tooling like go fix will certainly play a part.


Indeed. I didn't think about that, but given how gophers are into code generation, this could happen.


Yes, contrast this with Swift which has become unwieldy in my opinion.


I enjoy Swift a lot. I haven't really run into anything that made me feel this way, except for the compile times, which are fucking terrible.


Yeah, good luck getting generics.


Adding generics is being considered and might be happening. In the meantime you could write an Experience report[0]?

[0]https://github.com/golang/go/wiki/ExperienceReports


It's interesting that generics are so important now but they were almost dropped from .NET schedule as being "academic only": https://blogs.msdn.microsoft.com/dsyme/2011/03/15/netc-gener...


> ...they were almost dropped from .NET schedule as being "academic only"

For all the good that has come out of .NET, and the nice platform it is today, there are a lot of high level design decisions where the team decidedly landed on 'the wrong answer'. To their credit they've been moving towards 'the right answer' for a while now.

Auto-wrapped properties instead of exposed value fields. Smart initialization. Safe default values. Generics. Anonymous functions. Nulls. Higher order functions.

Still under way: pattern matching, DSL support, option type, type aliasing, etc.

Related, but I also find it highly fascinating on those topics how much of Visual Basic's design they ignored, derided, and have then had to re-implement after-the-fact. A lot of babies got thrown out with the COM+/VB6/MFC bathwater.


> A lot of babies got thrown out with the COM+/VB6/MFC bathwater.

Not sure what you mean here, COM has become the main way of doing Windows APIs since Vista, going full circle to the original design of COM Runtime with WinRT (now UWP).


I was referring to the APIs, components, and language experience that got recycled incompletely as they were ported into .NET and "abstracted away", more so than how components are marshaled :)

COM's resilience in the face of competing models, as you've pointed out, and the fractured GUI landscape of the Windows client platform are symptoms, IMO, of ceding a pretty mature platform for something almost, but not quite, as capable. It has taken .NET over a decade to relearn lessons won painfully by the VB5 and VB6 teams, and I believe they lost a lot of larger systems because of WinForms restrictions in the 2005-2009 window.


Curious, what in your opinion is 'the right answer' in regards to parametric polymorphism?


I think the ideal is something close to ML-style polymorphism a la OCaml/Haskell, or F# on .Net.

The generics in C#/Vb.Net are pretty nice too, but their absence in the language during the design of the original standard libraries is still felt (typically along IDictionary and ICollection interfaces).

It seems obvious in retrospect how useful they are, but I do remember lots of contemporaneous "debate" about how academic and complicating they would be, which turned out to be wrong. My personal opinion is that these debates were relics of the C++ contingent of the ecosystem, and less applicable to languages running on top of virtual machines.


C# was originally trying to copy Java, a language that took influence from C (weak types retrofitted) and Smalltalk (no static types at all). Industry has since learned a lot about the tremendous value that types bring. Other forms of static analysis are now gaining popularity too.


Let's not forget that was in 1999, 10 years before Go came into existence.


For the record, I'm not trying to argue that generics are not important; on the contrary, having worked with them for years, I find Go too limiting in this matter for my practical use.

But I agree they can be tricky to get right (compare e.g. declaration-site vs use-site variance [0], or TypeScript's "accidentally Turing-complete" type system [1]).

[0]: https://schneide.wordpress.com/2015/05/11/declaration-site-a...

[1]: https://gist.github.com/hediet/63f4844acf5ac330804801084f87a...


hell no, if generics get into go I'll have to find another favorite language.


> hell no, if generics get into go I'll have to find another favorite language.

Well, I hope Go gets generics fast then. Seriously, what is this community that rejects any possible enhancement to the language? Even C gets new features, albeit slowly. A language that doesn't evolve is a dead one.

There is nothing complicated about generics; they are just incomplete types, that's all. Generics ≠ C++ templates.

Ada has a great implementation of generic programming, which forces the developer to complete generic types before using them. In fact, Ada got right a lot of things that Go got wrong, despite being way older, especially when it comes to concurrency and types.

Package-based generics make them completely compile-time and runtime safe, with no type erasure. They'd be the equivalent of reflect.MakeFunc or reflect.StructOf at compile time, so without any performance penalty or the ugly reflection Go has right now.


I fail to see the point of this demand that Go has to have generics when there are some excellent languages with generics available.


I'm on the fence about generics... but to answer your question generally: it's not all 'enhancements'; some of it is making a language less readable. Sure, I can choose never to use it. But now I have (more) trouble reading others' code. Go's strength, to me, is readability.


People programming in Go today already have workarounds to provide the functionality of generics, but at the cost of type safety and speed.

It is very common (especially in libraries) to pass `interface{}` around and use reflection to do manual type checking and type casting.

With generics the code that uses `interface{}` today would actually be much more readable, because the intention of the author would be clear.


Go has generics, but only for the blessed few built-in functions and types the Go team decided we couldn't live without.

We just want that special casing gone so we can write, say, a generic set that works just as well as map does.


Why? Like, I might have weird taste, because I really liked generics and even Java-style checked exceptions :-) And to me, generics always sounded like the most practical way to do safe code-reuse across various types.


They add complexity which makes understanding and auditing code much harder. Look at OpenSSL and their overuse of macros, or heavily-templated C++ code.


No one is asking for C/C++ style macros, or even C++ templates. Go generics would be closer to Java or C# generics, which increase code clarity, reduce boilerplate and copy/paste code, and communicate intent.


I see. If it can be implemented as something that makes code easier to read and not worse, why not? But I'd need to see a clear example of that.


Fair. In my opinion generics are a good alternative to macros or templates, because they are much more constrained.


Yeah, I can see how having features added to your favorite language while remaining fully compatible with the code you write today would warrant such a reaction. I mean, just having them in your language, even if you don’t have to use them is just so horrible.


I think it would be naive to believe that adding generics to Go would have no effect on the stdlib, third party packages, and ultimately how you write Go code.


I don't have a dog in this hunt, but if a feature is in a language, someone will use it which will mean eventually everyone has to.


Well the same is true for multiple languages which will mean not everyone has to use Go. Especially when there are so many interesting languages to choose from.


What are you saying, you never read other people's code? That's actually my job.


In his defense, just because you don't write generics yourself doesn't mean you don't encounter them.


Good luck, even C11 has light generics nowadays.


light generics? C++ has very powerful generics and has had them for a long time, going under the name 'template metaprogramming'.


He said C11, not C++(11).


I am speaking about C, not C++.


Golang arrays and maps are generic ...


Yes, but I do not have to read their implementation fortunately, and not so many things in Go use generics if I do want to read their implementation.


This post basically proves that Google hires idiots and even created a language for idiots. :))


You'll love Algol 68.


[flagged]


I don't have the power to downvote but this comment has no place here. Please consider editing it.


[flagged]


Some people’s attitude about this would make you think they actually believe this


Some people's attitude about this would make you think they are actually suffering from this.


I don't know why you're downvoted. The lack of generics is holding back a lot of developers dealing with complicated business logic (and a need for abstraction) from moving to Go. And at the same time, including generics in the language means that all libraries will add generic interfaces, and most probably stop supporting the old, non-generic interfaces at some point, breaking backwards compatibility.


I am one who downvoted that statement, because I considered it a snippy remark without any content. Especially, as the question of generics in Go has been mentioned and discussed often enough. And if one feels the need to restart that discussion with each Go release, one should do so with an insightful comment.

I do not agree with your comment. I don't see how generics are needed to deal with "complex business logic". Yes, ultimately I would like to see generics added to the Go language. As does the Go development team. But, as they have clearly laid out, this is not a trivial undertaking. There hasn't yet been an implementation concept presented that fits into the Go framework with its design goals.

And until then, I am quite happy that they didn't implement some half-finished concept.


> I don't see how generics are needed to deal with "complex business logic".

Complex business logic really has no place in Go. I would advise people to use more expressive languages (ideally with a modern type system). Go was designed for low-level network systems programming and is not well suited to more high-level problems, despite what the hype train might claim. I see the future as a polyglot one. Folks should use the most appropriate language for whatever domain they are currently working in and not let their careers be defined by any one language or technology.


And Java was designed for toasters. But enough people worked on it to make it one of the most popular general-purpose languages of all time. A lot of folks do not have time to evaluate the most appropriate language for a given task; they would rather write code for their needs in a general-purpose language and be done with it.


Sure they tried to make it work. But it did create a static typing and OOP backlash. IMHO, too many of us succumbed to the marketing. Many would have been better off using OCaml or Python, both of which were released about the same time (and OCaml has always shipped with "generics"!).


Can you elaborate on what do you mean by "complex business logic"? Any examples?


I didn't bring the term up, but I personally understand it to mean complex domain-specific code. Some examples from my own workplace: pricing/risk of derivatives transactions, financial contract definition, portfolio optimisation, interpreters/compilers for various complex data transformations.


Don't disagree but what often goes unnoticed in this discussion is that there are generics designs out there: https://github.com/golang/proposal/blob/master/design/15292-....

It's not so much "we should add generics but no one is really working on it" it's more "we should add generics and we've tried a dozen designs that haven't really panned out and we're still working on it".

That's what makes me think they will end up happening.


Adding generics to Java and C several years after release shows how well the "wait and see" approach actually works, versus languages that have supported them from the get-go.

CLU was designed in 1975 and ML in 1973, several languages with support for some kind of generics have been born and died since then.

So in 42 years, there wasn't a single generic implementation that could fit Go's design goals, other than not having generics at all?!


But Go has not existed for 42 years. Fitting any of those models on Go is exactly what should be avoided unless one of those models magically was a perfect fit for Go which came out just a few years ago.


That is just hand-waving; it is more than clear that //go:generate, '90s style, is the only way accepted by the Go design team.

There is nothing special in Go's type system that hasn't been tried out in 42 years of generics CS research.


Magically? It would take magic for one of these models to NOT be a perfect fit for Go.

The fact that Go "has not existed for 42 years" is irrelevant -- almost all of its characteristics have been present for 3 to 5 decades, even altogether in the same language(s).

E.g. http://cowlark.com/2009-11-15-go/


Seems the author added an update after six years:

"..Updated 2015-09-25: So, about six years later, people are still reading this. I'm not sure how to feel about this; Looking at it now, it's incoherent, badly argued, and a lot of the details are simply wrong. But 1000 hits a month indicate that people still get value out of it."

But this article link is still quite handy for random proof of Go badness.


Whatever the author added, the original points still stand.

He might have changed his mind, but the facts are more stubborn.


Especially since he adds "Don't get me wrong; I still think that Go is a terrible language. I now work for a company that does a lot of it, and every time I touch it it depresses me a little more".


The "wait and see" approach as you call it, means that "Generics" (a.k.a. parametric polymorphism) have had to be retro-fitted to languages like Java. This means they are more complex than they need to be and have some ugly edge cases.


This. We still can't overload foo(List<Integer> x) and foo(List<String> x) because vintage 2004 JVMs wouldn't be able to distinguish them. Type erasure was a horrible compromise from which Java will probably never completely recover.


In C11 they are even uglier with the _Generic selector, which understandably was made to work together with how generics were done via pre-processor macros.


These two statements are arguments for different discussions:

> I don't see how generics are needed to deal with "complex business logic".

Here, we're talking about what the user (a developer who chooses between Go and Java) needs or wants. The user doesn't care about the costs behind the product.

However, instead of proving this statement, you jump to a different perspective altogether:

> But, as they have clearly laid out, this is not a trivial undertaking. There hasn't yet been an implementation concept presented, which fits into the Go framework with its design goals.

And here you're estimating things from the cost perspective; and it quite logically follows that it's not rational for the Go developers to dive into generics at the moment. Just keep in mind that this decision means Go stays unappealing to users like me.


No, I am not talking about costs at all. First of all, I don't see how complex business logic requires generics. I have written complex business applications and not required them. I really have no idea why you would "need" them.

And also for the Go developers, it is not about "cost". They have no idea how they could implement generics without fundamentally changing the Go language into something different from what it was before. They are working on concepts, but nothing has resulted that would be a candidate for implementation.


This boils down to how much one values static types and code reuse. If you do not have Generics (a.k.a. parametric polymorphism) then you are forced to make a choice between type safety and code reuse. You cannot have them both.


I imagine the downvotes are because it doesn't really add anything insightful. It just looks like whining.

Personally, I don't get the fuss. If you want generics, go use Java/Kotlin/something else. I don't see anyone complaining about the lack of generics in C or brainfuck, what makes golang special?


Because C actually has better support than Go for generics.

They could be faked with macros since the early days, which was what Borland's BIDS framework in Borland C++ 2.0 for MS-DOS made use of, dropped when version 3.0 with initial template support was released (around 1992).

Additionally, C now has basic language support for generics in C11 with _Generic.


Except adding tools for doing some kind of macros is much simpler in Go than C because the language is easier to parse. What stops you from using a preprocessor in Go? It is not really part of the compiler in C either.


Sure it is, ANSI C11 (ISO/IEC 9899:2011) chapter 6.1 and section 6.5.1.1.


Fair enough, my C knowledge is a little behind the curve.


This isn't a hard concept to understand.

People like 90% of Go e.g. simplicity, build process, speed and feel that if they added features such as generics, decent error handling e.g. Option or Exceptions then it might go to 95%.

Everyone is looking for that perfect development platform.


> If you want generics, go use Java/Kotlin/something else. I don't see anyone complaining about the lack of generics in C or brainfuck, what makes golang special?

Because it's painful to see a language that would be just perfect for a LOT of things with just ONE extra feature; without that one required feature, we have to choose alternatives that have irritating downsides.


People have been using Basic and similar languages for ages to successfully implement business logic without generics. What is necessary for business applications is good integration with databases and reading/writing various data formats.

That typically depends on the reflection facilities provided by the language, not generics. In fact generics, unless done with extreme care, inevitably make the reflection API more complex. In turn that makes it harder to write readers/writers for typical business formats, harming the case for business applications.


Basic got generics a long time ago, in Visual Basic .NET.

I have been coding since early 80's, naturally I have delivered lots of production code without generics.

Oberon, one of my favorite language families and an influence on Go, which I used for a while, also did not have generics.

But that was in 1996, when generic programming was WIP in ANSI C++, ML compilers were starting to be adopted, Ada was too expensive, Java and .NET were yet to come.

In 2017 I only use static typed languages without support for genericity when forced to do so.


Isn't there some fact about how COBOL (which I'm reasonably sure doesn't have generics) still handles more transactions and/or more $$ per year than anything else?


Use another language then. What is the point of different languages if they are supposed to have the same feature set? If you want native code type safety and generics use Rust instead. If that is too complicated use a more friendly type safety focused language like Swift.

Really you are spoiled for choice and there is no need to insist every language should follow your particular language philosophy.


This release finally brings a way to raise an error when an unknown key is encountered in a JSON object: https://go-review.googlesource.com/c/go/+/74830

The error's text is close to useless for even a moderately complex JSON object, but at least it's possible to catch this condition now.


I wouldn’t recommend that against external APIs in general, though there are use cases. Adding keys is usually the best way to make backwards compatible changes in an API, but you’d make those changes backwards incompatible in your own infra.


My experience is that strictness is preferable here; silently accepting bad keys in a JSON object will lead the caller to think it's working, when it's not.

If you then break the API's backwards compatibility by, say, removing a field, then your API will visibly break, which is correct, something you'd otherwise not detect.

The solution to maintaining backwards compatibility is a combination of explicit versioning, erroring like this, good documentation, and trying to support old features for as long as possible.

Definitely never liked Go's silent ignoring of JSON keys. The workarounds (e.g. overriding the unmarshal code to apply a check) have always been very invasive.

Edit: Maybe you meant in the client, since you wrote "against". If so, I would still argue that strictness is good, as you do want your client to break (or at least warn) if something changes.


> If you then break the API's backwards compatibility by, say, removing a field, then your API will visibly break

That's a different issue. Fields going missing is definitely something I want to cause big problems.

I do mean consumers (e.g. unmarshalling JSON responses from APIs), as the typical mechanism to support backwards compatibility is adding keys. Otherwise you can't ever upgrade without entirely new API versions, and you force lockstep deployments. Needing new data is a common thing: maybe you have a user endpoint that's not returning <some_flag> that you added and now want in some new service. Bumping APIs for every added field would get unwieldy fast, and you'll have services scattered among many API 'versions'.

It's just like in protobuf. In protobuf (or most other service-to-service serialization libs), adding a new field is not backwards incompatible.


That's a good point. It's more useful for config files than APIs I reckon. That's why I've been hanging for it.


This is the improvement that I'm most excited about for this release. I waded through some code recently that implemented its own validation in a hacky way by text-digging the re-serialization of structs... We were unable to put default (zero) values in our config files...


> We were unable to put default (zero) values in our config files...

Huh? I don't know the details of what you're (de)serializing, but the standard strategy for default values that are not zero values is to have unpack into a pointer. For example, if you have a message like

  { "foo": 23, "bar": 42 }
and the "bar" field has a default value of 5, you unpack like:

  var msg struct {
    Foo int
    Bar *int
  }
  err := json.Unmarshal(data, &msg)
  //handle error
  if msg.Bar == nil {
    defaultValue := 5
    msg.Bar = &defaultValue
  }
If you cannot change the type of the message to have a pointer field, you can implement json.Unmarshaler for the receiving type such that UnmarshalJSON deserializes into a temporary type, then copies the values into the recipient:

  type Message struct {
    Foo int
    Bar int
  }

  func (m *Message) UnmarshalJSON(in []byte) error {
    var msg struct {
      Foo int
      Bar *int
    }
    err := json.Unmarshal(in, &msg)
    if err != nil {
      return err
    }
    m.Foo = msg.Foo
    m.Bar = 5
    if msg.Bar != nil {
      m.Bar = *msg.Bar
    }
    return nil
  }
Now you can deserialize into Message (or into a struct containing a Message or *Message or []Message or whatever) and it will fill in the correct default value.


You can have whatever default value you like by initialising the struct before you pass it to json.Unmarshal. If the key isn't present, it won't be set:

    type Test struct {
        Foo string
        Bar string
    }

    func NewTest() *Test {
        return &Test{Foo: "a"}
    }

    func main() {
        t := NewTest()
        if err := json.Unmarshal([]byte(`{"Bar": "b"}`), t); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%#v\n", t)
    }
Output:

    &main.Test{Foo:"a", Bar:"b"}


One thing you may want to consider in the future (or whoever wrote that code may) is that it's not that difficult to fork a standard library package and make it do something different in your program.

I forked encoding/xml once so I could add tag stack depth limits and size limits for tags and text nodes, in order to write an XML validator (where you must be careful that the user cannot simply blow out your RAM by sending too much text). And I forked encoding/json so that you can declare a map[string]interface{} field on a struct you are decoding into that "catches" any unmatched keys in the JSON object, treating them the same as if you had passed the map[string]interface{} in as the thing to decode into, instead of discarding them. (Encoding works too.)

The first was on the order of two days and the second on the order of four hours (untested in production, but the unit tests are fairly easy and comprehensive), though in both cases, if you've not fiddled with the reflect package enough to understand it, you'll also need to pay that price. Other packages may require other knowledge, of course. But the time required is such that it may be less than the time spent on dirty, half-working hacks, which tend to grow from the "just five minutes" of the initial estimate to "five days, and we're still paying for the hack five years later...".



I don’t see anything regarding vendoring. What’s the current official tool for managing dependencies ?


There's no official tool (yet). The community seems to push dep as a future official tool though: https://github.com/golang/dep


Dep is very nice. We've been using Glide for more than a year, but it's incredibly buggy/flaky and has several ugly misbehaviours. Dep isn't finished, but it's already better than Glide, and we're transitioning over to it.


> For the X86 64-bit port, the assembler now supports 359 new instructions, including the full AVX, AVX2, BMI, BMI2, F16C, FMA3, SSE2, SSE3, SSSE3, SSE4.1, and SSE4.2 extension sets. The assembler also no longer implements MOVL $0, AX as an XORL instruction, to avoid clearing the condition flags unexpectedly.

Is x86 vector support only being added now?

https://www.youtube.com/watch?v=gso3g_ofjlw


The Go assembly language is a bit idiosyncratic and didn't include all instructions, which led to https://golang.org/doc/asm#unsupported_opcodes .

The standard library is actually chock full of specialized assembly implementations of various algorithms. Well worth a look even if you don't use Go. Here's an example of aes encryption which is a relatively recent instruction: https://golang.org/src/crypto/aes/asm_amd64.s


Tried my consensus library in Go, and it is now 5% faster AGAIN. I like such magic. ;)

What I am really looking forward to is NUMA awareness in the runtime. Most servers are now NUMA machines; in fact, AMD Threadripper is NUMA and costs only a few hundred dollars.


> There is no longer a limit on the GOMAXPROCS setting. (In Go 1.9 the limit was 1024.)

Why was there ever a limit? Is anyone in the know on this?



> The go get command now supports Fossil source code repositories.

Neat! I've been waiting for this.



