Why are we degoogling, for what purpose? I couldn't care less about giving them what likely amounts to ~10€ of margin per year on the hardware sale. What I care about is not giving them data which is worth a lot more than that, and to take back control over my device.
When you go with an alternative you lose the superior privacy and security offered by GrapheneOS, and you just end up leaking more data back to Google and other ad-tech companies than you would otherwise, negating any benefits several times over.
I think it’s very valid. I want to be hardware-independent, not only OS-independent. I need Graphene to work on a Fairphone, Jolla phone, or whatever other alternatives there are. /e/OS can do that (to an extent); Graphene can’t, probably for very good reason, but still: it’s not an alternative then.
"Google built Android to be impossible to maintain without them."
Could be a very genuine answer to that question. Do you really need all of Android? What if you could build a very similar thing at a fraction of the size?
It's not really that interesting. For instance, we've seemingly decided that various blue collar workers are incapable of not falling to their deaths and so have come up with OSHA and various other national equivalents. Drivers are incapable of not crashing and so we started including air bags. Woodworkers seemingly can't stop cutting their fingers off using a table saw and so we came up with SawStop.
Yes there is. RAII is not a full replacement for GC and you will shoot yourself in the foot if you treat it as such. The design of C++ also includes many unpatchable holes in the standard library which WILL cause errors and UB.
You take a reference to a vector element, which you later accidentally invalidate by pushing to the same vector.
You move out of a unique_ptr, but then you accidentally use it again.
You create a cycle with shared_ptr causing a memory leak.
If you std::sort a list of floats containing NaNs (among other things) you can stomp over memory. The sort function requires a strict weak ordering; otherwise you get UB, and NaN comparisons break that ordering.
AI proponents have been very vocal about AI safety being meaningless. But nobody could have expected that the end of the world would have come because Trump puts Grok in charge of the US nuclear arsenal. We truly live in the dumbest timeline.
Ah, I see. Do I understand correctly that this means that for a given instance of polymorphic object I can switch between static polymorphism and dynamic dispatch, and use them both simultaneously? How is this useful in practical terms, like why would I want to do it?
Sort of. Given an instance (can even be a primitive) you can obtain a dyn reference to a trait it implements simply by casting it.
let a: i32 = 12;
let b = &a as &dyn std::string::ToString; // i32 implements the ToString trait
let c = a.to_string(); // Static dispatch
let d = b.to_string(); // Dynamic dispatch through dyn reference
Note that there aren't really any polymorphic objects in Rust. All polymorphism in this case goes through the dyn reference, which contains a pointer to a vtable for a specific trait.
Additionally, going from a dyn reference to a type-specific reference is not easy. Also, certain methods and traits are not dyn-compatible, mostly due to generic parameters.
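To make the "not easy" part concrete: recovering the concrete type behind a dyn reference requires going through std::any::Any and an explicit, checked downcast. A minimal sketch:

```rust
use std::any::Any;
use std::fmt::Display;

fn main() {
    let x: i32 = 12;
    let dyn_ref: &dyn Any = &x;

    // Going back to the concrete type is an explicit, runtime-checked cast:
    assert_eq!(dyn_ref.downcast_ref::<i32>(), Some(&12));
    assert_eq!(dyn_ref.downcast_ref::<String>(), None); // wrong type -> None

    // And it only works through `Any`. A plain `&dyn Display` carries no
    // downcast_ref, so the concrete type is unrecoverable from it alone.
    let _d: &dyn Display = &x;
}
```

There is no implicit downcasting anywhere; you always have to name the type you expect and handle the `None` case.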
The main use comes in with various libraries. Doing dynamic dispatch on a specific type is not very useful, but your library might expose a trait which you then call some methods on. If you accept a generic parameter (e.g. impl Trait), each such invocation will cause monomorphization (the function body is compiled separately for each combination of generic types). This can obviously bloat compile times.
Using a dyn reference in your API will result in only a single version being compiled. The downside is the inability to inline or optimize based on the type.
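A minimal sketch of that tradeoff (the function names here are made up for illustration):

```rust
use std::fmt::Display;

// Monomorphized: a separate copy of this body is compiled for every
// concrete type the function is called with.
fn describe_generic(value: impl Display) -> String {
    format!("value = {value}")
}

// Dynamic dispatch: exactly one copy is compiled; every call goes
// through the vtable behind the &dyn reference, at the cost of inlining.
fn describe_dyn(value: &dyn Display) -> String {
    format!("value = {value}")
}

fn main() {
    // Two instantiations of describe_generic (for i32 and &str)...
    assert_eq!(describe_generic(12), "value = 12");
    assert_eq!(describe_generic("hi"), "value = hi");
    // ...but a single describe_dyn serves both types.
    assert_eq!(describe_dyn(&12), "value = 12");
    assert_eq!(describe_dyn(&"hi"), "value = hi");
}
```

The two functions behave identically here; the difference only shows up in compile time, binary size, and the optimizer's ability to specialize.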
One additional use I found is that you can sometimes get around the divergent expression type in match expressions. Say you need to print out some values of different types:
let value: &dyn Display = match &foo {
    A(numeric_id) => numeric_id,
    B(string_name) => string_name,
    C => &"static str",
};
This would not work without dyn as each value has a different type.
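A self-contained version of that snippet, with a hypothetical Foo enum standing in for foo (the enum and its variants are assumptions for illustration; matching on a reference keeps the borrows alive long enough):

```rust
use std::fmt::Display;

// Hypothetical enum standing in for `foo` in the snippet above.
enum Foo {
    A(u32),
    B(String),
    C,
}

fn label(foo: &Foo) -> String {
    // Each arm yields a reference to a different concrete type; only the
    // coercion to `&dyn Display` gives the arms a common type.
    let value: &dyn Display = match foo {
        Foo::A(numeric_id) => numeric_id,
        Foo::B(string_name) => string_name,
        Foo::C => &"static str",
    };
    value.to_string()
}

fn main() {
    assert_eq!(label(&Foo::A(7)), "7");
    assert_eq!(label(&Foo::B("abc".into())), "abc");
    assert_eq!(label(&Foo::C), "static str");
}
```

Without the `&dyn Display` annotation the match would fail to type-check, since u32, String, and &str share no common concrete type.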
Ah, I see. Thanks for the example. I think I understand now. In C++, the problem of monomorphization, or potential bloat due to excessive template instantiations, is normally handled at the linker level, but it can also be controlled at the code level, either by rewriting the code with some other type-erasure technique or simply by extracting bits of not-quite-generic code into smaller non-generic entities (usually a non-templated base class).
Does this mean that the Rust frontend emits intermediate representation in a way that doesn't allow de-duplication at the linking phase? I see that Rust now has its own linker as of Sep '25, but it still works with the normal linkers used in C and C++ too: GNU ld, lld, mold, ...
There's no custom Rust linker just yet. The change in September was to switch from GNU ld to lld for performance on Linux. There are some Rust linker projects (like wild), but these tend to be aimed at speed (and/or incremental linking) rather than executable size.
I'm not sure how useful deduplication at the linker level is in practice, though I don't think Rust does anything differently here than C++ does. The main issue, I imagine, is that the types used in generic code have different sizes and layouts, which seems like it would prevent deduplication for most functions.
I think the question is, do you know at compile time what the concrete type is? In situations where you do, use static. (I'm not sure I'd call that "polymorphism". If you know the static type it's just a function on a type, and who cares that other types have functions with the same name?) But if you don't know the concrete type at compile time, then you must use dynamic dispatch.
And you can use each approach with the same type at different points in the code - even for the same function. It just depends on your local knowledge of the concrete type.
That's polymorphism 101, and not quite what I was asking. From my understanding, what Rust has is something different from what C++ offers. In C++ you opt in to either static or dynamic dispatch. In Rust it seems you can mix both for the same object and convert between the two at runtime. That appears to be true according to dminik's example in the comment above, but the actual purpose is still not quite evident to me. It seems to address the problem of excessive template instantiations, called monomorphization in Rust. In C++ this is normally and mostly handled through linker optimizations, which may suggest that Rust doesn't have those implemented yet, or that there are more useful cases for it.
> It seems that it tries to solve the problem of excessive template instantiations
No, I don't think the way Rust implements dynamic dispatch has much, if anything, to do with trying to avoid code bloat. It's just a different way to implement dynamic dispatch with its own set of tradeoffs.
> I know this is a foreign concept to some, but you can have a backbone.
Challenge it in court. Move the company to a different jurisdiction. Burn everything down and refuse to comply.
Challenge in court is fine, even healthy.
Threatening to burn everything down and refusing to comply might well work. It amounts to daring Trump to a game of Russian roulette over popping the bubble that's only just keeping the US economy out of recession. On the basis that he TACOs a lot, I can see it working in a way it wouldn't if a sane leader were making the same actual demands for sane reasons.
Move the company to a different jurisdiction? That would have worked if AI were a few hundred people and a handful of servers, as per the classic example:

> At the height of its power, Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography has become Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only 13 people. Where did all those jobs disappear? And what happened to the wealth that all those middle class jobs created?
But (I think) now that AI needs new data centres so fast and on such a scale that they're being held back by grid connection and similar planning permission limits, this isn't a viable response.
They can be burned down, but I think they can't realistically be moved at this point. That said, I guess it depends on how much Anthropic relies on their own data centres vs. using 3rd parties, given Amazon's announced AWS sovereign cloud in Europe?
I think you have a misunderstanding of the term alignment. Really, you could replace "aligned" with "working" and "misaligned" with "broken".
A washing machine has one goal, to wash your clothes. A washing machine that does not wash your clothes is broken.
An AI system has some goal. A target acquisition AI system might be tasked with picking out enemies and friendlies from a camera feed. A system that does so reliably is working (aligned); a system that doesn't is broken (misaligned). There's no moral or philosophical angle necessary if your goal doesn't already include one. Aligned doesn't mean good and misaligned doesn't mean evil.
The problem comes when your goal includes moral, ethical and philosophical judgements.
Given the prompt and a random string generator, would the LLM still produce a game? Presumably yes. In that case I'm not quite sure that the dog here has any real involvement. It could be replaced with the yes command.
If I have to give Google a lot of money every 4-6 years to remain "de-googled" then I never was.