I'd have to guess it would be conflict resolution. Imagine two separately imported packages, neither of which you wrote, that each add a method with the same name and signature to a type from a third package you also didn't write. How does the compiler know which of the implementations to use?
The example, while possible, looks quite improbable.
If it is only a matter of conflicting names, qualified names should be able to resolve the ambiguity; here I suppose you imagine two packages implementing the same interface for the same type, differently.
That a package implements an interface method for a third-party type is already dubious. Now you have two such packages and you want to import them both. That's quite a corner case.
Practically, the last package imported could "win" (so you can decide which one you want). The compiler has to register all the possible implementations anyway, so emitting a warning on redefinitions would be trivial.
----
I'd guess the restriction is to allow compilation of dynamic dispatch on a package basis (think separate compilation) instead of waiting till all possible files have been processed.
Yeah, good point. Though it would be handy if you could at least add methods to external types from within your own package. But then I guess those methods would be uppercase but not exported, which would be weird.
You can certainly imagine schemes like specifying in the import which types to extend, but that seems complex enough that I'm not surprised a language as radically devoted to simplicity as Go left it out.
> A method defined in a new package might break a type switch (or some reflection-based code) in another package.
In that case the code would break because it assumes too much. If the language was designed differently, people would code differently too. This is like having a Java abstract class which "knows" all the possible subclasses in advance: if it breaks, it is the responsibility of that abstract class for having too much coupling.
> Plus it's unclear what should happen with interfaces when multiple packages define methods with the same name.
I am not sure I understand: if methods are defined in different packages, they have different qualified names, don't they? Is there any ambiguity here?
> In that case the code would break because it assumes too much. If the language was designed differently, people would code differently too.
Sure, but we want interfaces to work the way they work now, because it's useful and reduces coupling.
> This is like having a Java abstract class which "knows" all the possible subclasses in advance: if it breaks, it is the responsibility of that abstract class for having too much coupling.
The analogy is not very apt, because in Go code the consumer is the one which is broken, not the producer. In the current design the consumer can make static assumptions about code, if those assumptions are helpful. This works because the assumptions don't change when new packages are imported. In other words, they work because the coupling is known and stable.
You know exactly what a type is because its definition is only in a single package, the one you import. If you could define methods anywhere, you introduce hidden dependencies and more coupling. A new package might change the nature of the type. This is not acceptable.
Of course people would write code differently if this weren't the case, but then the language would be less useful.
> I am not sure I understand: if methods are defined in different packages, they have different qualified names, don't they? is there any ambiguity here?
No, they don't have different qualified names.
package foo

import (
	"fmt"
	"numbers" // type T int is defined here
)

type Hexer interface {
	Hex() string
}

func PrintHex(v interface{}) string {
	switch vv := v.(type) {
	case Hexer:
		return vv.Hex()
	case numbers.T:
		return fmt.Sprintf("%x", vv)
	default:
		return "unknown"
	}
}
> If you could define methods anywhere, you introduce hidden dependencies and more coupling. A new package might change the nature of the type. This is not acceptable.
This is the Expression Problem: adding new methods to existing types and/or implementing existing methods on new types. The intent is on the contrary to decouple things. You can use any feature to sabotage code.
> What will PrintHex(T(42)) return? "2a", "0x2a", or "0x2A"?
The compiler would warn you and take the latest definition into account, which is the one from "baz". That would be a pragmatic approach.
I notice that neither "bar" nor "baz" import "foo".
So in your (hypothetical) example, Hex belongs to the global namespace, not "foo". I guess this is the case in non-hypothetical Go too? I didn't realize this, thanks.
> The compiler would warn you and take the latest definition into account, which is the one from "baz".
As a general rule, the Go compiler doesn't issue warnings.
There is no latest definition. The order of imports doesn't matter, and bar and baz could be imported by different packages. I imported both in foo here since we only have three packages (four, if you count numbers), but bar and baz could be imported (separately) by qux and quux, and both of them could be imported by package waldo.
> I notice that neither "bar" nor "baz" import "foo".
Correct, they don't need to, but they couldn't anyway, as import cycles are not allowed. Package dependencies need to form an acyclic graph.
> Hex belongs to the global namespace, not "foo". I guess this is the case in non-hypothetical Go too?
No, Hex belongs to foo. If Hex were to be used by package fred, fred would have to import foo, then use foo.Hex.
In fact, in this case, I should have made hex an unexported interface (lower case). This is a very common idiom in Go: users of the language define internal interfaces to group other, third-party types by behavior. They can then execute this behavior in a type-safe manner while keeping all the third-party types decoupled from each other and from your own package.
The fact that you don't have to declare interfaces before defining the types is the number one reason Go is a statically-typed language that feels like a dynamically-typed language.
You mentioned the Expression Problem. Haskell solves this with type classes. Go interfaces are dual to type classes. There is no need to add methods to existing types, as the consumer can define new types that embed the old type and its methods. And if some other code uses interfaces, it will then be able to use both the old type and your new type.
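A hedged sketch of that embedding approach (Meters stands in for a type from a package you cannot modify; all names are invented for the example):

```go
package main

import "fmt"

// Meters stands in for a type defined in some other package that we
// cannot modify.
type Meters float64

func (m Meters) String() string { return fmt.Sprintf("%gm", float64(m)) }

// Distance embeds Meters: the String method is promoted to Distance,
// and Distance can add methods of its own, all without touching
// Meters' package.
type Distance struct{ Meters }

func (d Distance) Feet() float64 { return float64(d.Meters) * 3.28084 }

func main() {
	d := Distance{Meters: 2}
	fmt.Println(d.String()) // 2m (promoted from Meters)
	fmt.Println(d.Feet())
}
```

Because String is promoted, Distance also satisfies any interface that Meters satisfied through that method.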
package europe

import "fmt"

type Celsius float64

func (c Celsius) Kelvin() float64 { return float64(c) + 273.15 }
func (c Celsius) String() string  { return fmt.Sprintf("%g°C", float64(c)) }
You can print Celsius(42) and you'll get 42°C, and you can pass the type to some function consuming a Kelvin interface:
package physics

type Temperature interface {
	Kelvin() float64
}

func ThermalExpansion(m Material, t Temperature) float64 {
	... // some code
	x := t.Kelvin() * m.ThermalCoefficient()
	... // some more code
	return x
}
But now I understand that you declare Hex in both bar and baz independently of foo.
The reason I wrote about a global namespace is because bar.Hex and baz.Hex are implicitly implementing foo.Hex thanks to their names (and signature), whereas in other languages foo.Hex, bar.Hex and baz.Hex would be considered as distinct methods. But the confusion is cleared now, thanks.
The example with temperatures is interesting because defining conversion methods is a recurrent problem and you missed one case that is useful too.
The European author of the Celsius package doesn't know about Fahrenheit, or simply doesn't care about it.
However, the author of the america package knows about Celsius and needs to convert between Celsius and Fahrenheit.
He could define a ToF method which converts from °C to °F:
func (c Celsius) ToF() Fahrenheit { ... }
The same could be done with other units: Rankine, Delisle, Newton, ... (thanks Wikipedia)
Now in Go, you don't declare methods like this.
You have most likely:
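The comment breaks off here, but the usual Go idiom it seems to be pointing at is a plain conversion function in the consuming package rather than a method on the foreign type. A sketch, with all names hypothetical:

```go
package main

import "fmt"

// Hypothetical stand-ins for europe.Celsius and america.Fahrenheit.
type Celsius float64
type Fahrenheit float64

// Instead of attaching a ToF method to the foreign Celsius type, the
// consuming package defines an ordinary conversion function.
func CToF(c Celsius) Fahrenheit {
	return Fahrenheit(float64(c)*9/5 + 32)
}

func main() {
	fmt.Println(CToF(100)) // 212
	fmt.Println(CToF(0))   // 32
}
```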
Maybe because structs have some fields private and some public. Adding a method from outside the package would make those private fields visible to code outside the package, which would then allow such methods to manipulate the struct and break the abstraction it provides.
> ... would be to make those private fields visible in the method outside the package.
Since you write the implementation of the method inside your struct's package, you have the right to manipulate private data there as you would do with any function. I don't understand the problem.
I think the problem they wanted to avoid is incompatibility (Go cares much about compatibility; see Go 1's Promise of Compatibility[1]). Private fields tell external packages: "This is none of your business. The field can change or be removed at any time without notice."
If external packages could add methods to a struct, thus turning private fields into effectively-public fields, the whole struct (all its fields) potentially becomes public. How do you maintain a package if your defined API (public fields/methods) is ignored and instead everything is made available to external packages? To keep compatibility, you could never refactor the package, instead you have to keep all private fields to not break external packages that use them.
You could just let the external methods access only public members. That's exactly what Java lets you do when you extend another class. I agree that private members should stay private.
I wish I could add methods from outside of my package. I currently run into cyclical dependency issues when I try and define something like...
func (g Game) GetPlayers() []Player { ... }
func (p Player) GetGame() Game { ... }
While having Game and Player in separate packages. It's not the end of the world having them in the same package, but the amount of methods does pile up...
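One common workaround for that cycle is to let the player side depend on a small interface it defines itself, so only the game side needs a real import. A sketch with hypothetical names, with both "packages" collapsed into one file so it runs:

```go
package main

import "fmt"

// --- would live in package player ---

// Game is player's own, minimal view of a game; package player no
// longer needs to import package game.
type Game interface {
	Name() string
}

type Player struct {
	Nick string
	G    Game
}

func (p Player) GetGame() Game { return p.G }

// --- would live in package game (which imports player) ---

type Chess struct{ Players []Player }

func (c Chess) Name() string         { return "chess" }
func (c Chess) GetPlayers() []Player { return c.Players }

func main() {
	g := Chess{}
	p := Player{Nick: "ana", G: g}
	g.Players = append(g.Players, p)
	fmt.Println(p.GetGame().Name())  // chess
	fmt.Println(len(g.GetPlayers())) // 1
}
```

Only package game imports package player, so the dependency graph stays acyclic.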
There is no need to bring in private fields at all. Just keep the same visibility rules: the new method would be written inside a user package, which would have access only to the public members of the type being dispatched on. That means you could only define methods on external structs through their public API.
I don't get it. According to the github history there was a hello world commit in 1972, conversion to C in 1974, conversion to ansi-c in 1988, then the next commit was the first go specification in 2008.
Git was written in 2005.
What did these hello world commits have to do with go? Why are they in the git history?
It's a joke basically saying "First, there was B, and it looked like this. Then, there was C, and it looked like this. Finally, there was Go, a new evolution of our work".
The git dates were clearly faked and created entirely for the purpose of this joke.
> According to the github history there was a hello world commit in 1972
> Git was written in 2005.
Not sure about this specific case, but it isn't uncommon to migrate from another version control system (e.g. Subversion) to Git while maintaining the past history.
Subversion was (to my surprise) only released in 2000, so you'd have to migrate from something even older -- perhaps CVS (released in 1990)? RCS was released in 1982, so you might have migrated from that. SCCS was released in 1972, so they MIGHT have used it, and kept those migrations going. But my guess is they fudged the git history as some weird type of historical documentation of where these languages came from?
We had a look at first commits from various projects a while ago [1] - I like how the first commits get cleaner the newer the projects are, presumably because version control became more ubiquitous.
How is this so old? Was Go something that was brewing for a long time prior to its development and debut at Google? Also, Kernighan? Maybe I live under a rock, but I wasn't aware of his involvement.
These commits are obviously jokes, probably by Ken Thompson or Rob Pike.
> Was Go something that was brewing for a long time prior to its development and debut at Google?
It's an evolutionary descendant of Bell Labs languages: B -> NewB -> C -> Newsqueak -> Alef -> Limbo -> Go. (The channel-based concurrency model was introduced in this line with Newsqueak.)
> Also, Kernighan? Maybe I live under a rock, but I wasn't aware of his involvement.
He is peripherally involved as the co-author of 'The Go Programming Language' book: http://www.gopl.io
Go goes back even further than that. They started developing it on punched cards. Then the 1960s happened and... well you had to be there. Long story short, the project never got completed until 2012.
What I find interesting is that there is absolutely no mention of the word 'test' in the spec. With unit testing and TDD/BDD so ubiquitous now, perhaps new languages should be created with testing support baked in instead of it being added as an afterthought.