To ensure a seamless experience for both Android and iOS users, Quick Share currently works with AirDrop's "Everyone for 10 minutes" mode. This feature does not use a workaround; the connection is direct and peer-to-peer, meaning your data is never routed through a server, shared content is never logged, and no extra data is shared. As with "Everyone for 10 minutes" mode on any device when you’re sharing between non-contacts, you can ensure you're sharing with the right person by confirming their device name on your screen with them in person.
This implementation using "Everyone for 10 minutes” mode is just the first step in seamless cross-platform sharing, and we welcome the opportunity to work with Apple to enable “Contacts Only” mode in the future.
Also `we welcome the opportunity to work with Apple to enable “Contacts Only” mode in the future` doesn't make it sound like Apple actually helped implement this
Compared to Signal, where does Element stand today in terms of privacy and encryption? Due to its decentralized nature, it wasn't able to offer the same guarantees, from what I remember.
Matrix allows for unencrypted messages so it's inherently less encrypted than Signal. The federation capability also means messages leak metadata. Furthermore, encrypted messages also contain some metadata in the unencrypted envelope. Some protocol features (emoji reactions) also ended up outside of the encrypted envelope because of that. It's a risk with any protocol that has encryption bolted on and optional.
On the other hand, you can host your own Matrix server and still participate in the network, whereas Signal will have you convince your friends and family to install a custom Signal client if you want to run your own Signal server, for instance because you don't want to rely on Amazon's servers (Signal was down when Amazon went down this morning).
Signal sacrifices network openness for encryption capabilities.
There's also the MLS/MIMI side of things, but AFAIK that work hasn't been completed yet (MIMI isn't even a full RFC yet).
Element/Matrix, with some modifications, has been chosen as the messenger of choice by the French government (Tchap) as well as the German military (BwMessenger, BundesMessenger) and healthcare (TI-Messenger).
> Matrix allows for unencrypted messages so it's inherently less encrypted than Signal.
By that logic, Matrix is less encrypted than WhatsApp, too, which is a crazy thing to say.
> The federation capability also means messages leak metadata.
It's the opposite: The centralized architecture means that there is a single target server to attack for the metadata. With decentralization, you can't easily scale up your attack to all users.
> By that logic, Matrix is less encrypted than WhatsApp, too, which is a crazy thing to say.
From a protocol perspective, it is. Without an open-source WhatsApp client and independent protocol security analysis, it's hard to judge the effectiveness of the encryption, of course.
> means that there is a single target server to attack for the metadata
Signal does not collect or provide much metadata. It has IP:port mappings, for sure, and keeps track of when a user last checked in, but the protocol itself is extremely well-suited to resist analysis.
A lot of information Matrix provides you for "free" once you break the HTTPS tunnel needs advanced analysis to get it out of Signal. Signal's protocol security is really impressive, I don't think there's anything comparable out there.
Somewhat related: can someone explain this to me? France and Germany want to lessen dependence on American organizations, so they choose Matrix, also an American organization.
Matrix, the organisation, takes care of the open source side of things.
BwMessenger is a partnership with "ELEMENT SOFTWARE SARL" (according to https://messenger.bwi.de/datenschutz), the French entity of the commercial side of the people originally behind the open Matrix ecosystem (https://element.io/legal/company-information). I'm not sure why the French entity is doing business with the Germans as Element also has a German entity, but either way the American side is not the one doing the work.
For the American entity, a lot (most?) of the work that's not from unrelated open source contributors seems to be coming in from either EU countries or the UK.
Element is also UK headquartered, albeit with French/German/US subsidiaries when selling to those respective governments. BWI buy via France because when we started working with them we didn’t have a German legal entity yet.
Signal and any kind of Slack SaaS: US infrastructure, US law around data governance. Matrix (and Zulip, for that matter, and mattermost too) encourage self-hosting on your own infrastructure, or at least in-country, even if the upstream security patches are coming from US developers.
If it's open source (and libre software) then it's not as important where the main development offices are (or where the company is incorporated). You still have control.
Thank you, and I see it's registered in the UK.
I think it started in the US? Well, not like it's relevant anymore.
And can you answer this question:
If everyone has secure chat, then won't that benefit criminal organizations?
I struggle to understand the love for private communication when it seems like that would benefit, for example, religious sects and sex abuse rings.
NOT that I like Zuckerborg keeping all my messages.
> If everyone has secure chat, then won't that benefit criminal organizations? I struggle to understand the love for private communication when it seems like that would benefit, for example, religious sects and sex abuse rings. NOT that I like Zuckerborg keeping all my messages.
Yes, sort of.
The thing is, the government is already not permitted to wiretap people, at least without reasonable suspicion.
Wiretaps themselves are not admissible in court, and can only be offered as a mechanism to correlate behaviour anyway. At least in the UK. (Which is ironic when you consider what's going on there with online speech, but I digress.)
Factually speaking, to commit a crime you have to physically commit a crime, and the police don't need access to your communications to figure out when and where. They will sting people, get people to turn on other people, or simply catch them red-handed through ordinary police work.
If we legitimately believe what the governments of the world are saying, that we need to embolden the police, then funding them properly is the right start; yet nobody seems to be doing that. The EU has been making cross-border communication easier though, which is in line with emboldening the police, so I'll give them that.
Having more information will do very little to help, for the same reason that phone taps aren't given out freely (and never have been) - because even if you have the data, you have to choose how to act on it, and you'd need the resources to investigate and follow-through.
There is a distinct irony that unencrypted SMS is more secure than online messengers, because there are legal protections.
Are you European? I don't understand that use of "hinder". Do you mean prevent from using? Then no, I don't think preventing normal people from using encryption will prevent criminals from using encryption, and I didn't mean to imply that.
> If everyone has secure chat, then won't that benefit criminal organizations?
Probably. But criminal organizations also benefit from having electricity, or cars, or a million other things that we all would be much worse off if we didn't have them. Just because something benefits criminal organizations as a side effect is not really a reason to not do it for the benefit of ordinary citizens.
My point wasn't that we should or shouldn't have it. I just get the impression that the same people calling for privacy will be highly outraged the next time, for example, an Austin Wolf (gay porn 'star' who used Telegram to share thousands of files showing abuse of children) situation arises, or it's inevitably revealed that religious sect xyz coordinated over it. Europeans trash talk Telegram (and that is fine), but somehow Matrix is different? How?
Oh I don't think it's different at all in that respect. I think that many people are very ignorant about the inherent double-edged sword that is freedom, and think that it's possible to deny it to only bad people. On top of that, many people don't particularly value private communications, considering it to be a theoretical issue that doesn't affect them. So yeah there will certainly be outrage in cases like you mentioned.
I think these two topics need to be looked at a bit separately, similar to for example WhatsApp, where you have e2ee but there are still lots of privacy risks.
In the Matrix ecosystem, as far as I understand, having only one user from the matrix.org homeserver in your room already undermines metadata privacy to some degree. Also, there are still issues with decrypting messages from time to time with certain combinations of clients, rooms and homeservers, which effectively means that the "failsafe" option for getting messages across the network is using unencrypted rooms.
Having free, secure, federated, usable instant messaging is still not solved imho, and I think it's not easy to solve. So far Matrix is the best attempt in my book, but it's also not there (yet?).
> So far matrix is the best attempt in my book, but it's also not there (yet?).
IMO XMPP is the best attempt so far, but it's completely outdated by today's standards. Matrix is a modern attempt, but it's just bad. I doubt that Matrix will actually get anywhere usable in the future.
It's absolutely possible to build such a protocol with high performance, seamless UX, Signal's level of privacy and security, and Discord's level of features. It's just a lot of work to actually build the specifications and flagship implementations, compared to just building a good centralized option.
> Matrix is a modern attempt, but it's just bad. I doubt that Matrix will actually get anywhere usable in the future.
Obviously I’m biased, but I seriously suggest looking at the various vids from the Conference. Matrix has definitely had some ups and downs in the past, but right now it is in a good place.
> I think requirements also changed a lot over the years with smartphones and mobile internet access everywhere.
I recently started using an XMPP client on a smartphone (Cheogram, a fork of Conversations). It handles that stuff remarkably well. Switching between, say, mobile data and WiFi takes seconds. It seems to have some way of noticing the loss of connection and immediately fires up a new TCP connection on the new medium.
I don't think this is a super useful comparison, because the two services have wildly different threat models. I think of Matrix as a secure replacement for Discord. Signal is about small group messaging. It's literally a replacement for the built-in texting app on your phone, and that's its intended userbase. Signal is what you use when you need to know, to the limit of best practices available to ordinary users, that your messages will be as private as they can be made to be. That's a goal that isn't compatible with many of the affordances people want for project discussion platforms and things like that.
If you pit Signal against Matrix and make the competition purely about security, Signal will win for the foreseeable future. But I think it makes much more sense to think about different sets of tradeoffs being more appropriate for different kinds of problems.
Signal is centralized, so it becomes a huge target of all kinds of hackers and three-letter agencies. This alone is sufficient for me to never touch it. And then, there is this:
The vast majority of people using "end to end encrypted" messaging systems fail to verify the identity of their contacts. So those running the servers can fairly trivially MITM the messages. So in practice it does matter who controls the servers.
The good thing is that verifying the other contact is invisible to the server in Signal. This means that it's stochastically sufficient that a few people do check their contacts in order to see whether there is any widespread MITMing going on.
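The "stochastically sufficient" point can be made concrete with a bit of arithmetic. As a sketch (the model here is a simplifying assumption, not a formal analysis): if a fraction p of conversation pairs verify safety numbers, then a server that MITMs k randomly chosen conversations goes entirely undetected with probability (1 - p)^k, which collapses fast as k grows.

```python
def undetected_probability(p: float, k: int) -> float:
    """Chance that none of k MITMed conversations is a pair that
    verifies safety numbers, assuming verification rate p and
    independently chosen targets."""
    return (1.0 - p) ** k

# Even if only 1% of pairs verify, MITMing 1000 conversations is
# detected with overwhelming probability.
print(undetected_probability(0.01, 1000))
```

This is why widespread MITM by a server is a high-risk strategy even when most users never check a single safety number.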
It's less encrypted. E.g. you'd think that emoji reactions are end-to-end-encrypted (as they are in Signal). But they aren't[1]. I expect similar implementation issues wrt. the encryption in Matrix.
Signal uses a whole suite of modern cryptography, including post-quantum ratchets for key agreement and zero-knowledge proofs for group membership.
Meanwhile, Matrix has a plaintext mode and knowingly shipped libraries with side-channels for years, by their own admission (and left many clients in the ecosystem depending on the vulnerable C implementation when they rewrote their cryptography protocol in Rust).
Even today, they are not the same protocol. Olm/Megolm is distinct from Signal in a lot of ways that I've outlined in my previous blog posts.
I don't particularly care if people like Matrix, but please don't spread falsehoods about the cryptography being used.
The fundamental difference boils down to trust, and it isn't primarily in the cryptography; it's entirely down to the infrastructure and the root of control.
Signal is widely regarded as the gold standard for centralised E2EE, but its architecture forces you into two massive, non-negotiable trust compromises:
1) You must trust the Signal corporation with all your metadata. Every routing and handshake detail passes through one single choke point that they control. That is an unacceptable risk for security-minded users.
2) You rely completely on Signal to truthfully publish a pre-compiled binary that actually reflects the open-source code. For the vast majority, this is unverifiable in practice. It's a critical client-side act of faith.
Matrix’s design fundamentally eliminates these single points of failure, shifting the root of trust squarely to the user (or a group you trust):
1) Self-hosting: This is the game-changing feature. Host your own Synapse/Dendrite instance. Your metadata never leaves your control. You move the trust boundary from a corporation to yourself. You genuinely achieve "no communication outside your control."
2) Matrix uses an open specification. You can use FluffyChat, Nheko, or Element. This breaks the coupling between the server and the client. Even if you rely on a third-party server, you can use a client built by a completely different team, making the client-side code independently auditable and verifiable across projects. This is the ultimate defence against subtle backdoors in a single vendor's binary.
TL;DR: Signal offers "trusted third-party" crypto running on a single, unauditable binary. Matrix is decentralised, verifiable zero-trust communication. The comparison isn't about the strength of the AES key or which data it has been applied to; it's about the architectural freedom to not have to trust another entity with either your data or your code. That freedom represents an essential leap in trustworthiness.
Super nice summary. Makes me want to use Matrix again, but the clients have all been very poor in my experience. Element on desktop was okay and I used it for work without issue, but it's not nearly as slick as "scan this QR code and import your contacts". (Oh, that's another difference: your ability to use the network is governed by Signal allowing you to register an account, typically requiring a phone number for bot prevention, which seems like an extreme step for an app that aims to keep you anonymous.)
You might be making good points, I'm not familiar enough with the context to tell, but whining about downvotes is in bad taste, so a large part of your downvotes probably come from there, mine included.
Apologies, it's frustrating watching my comment go from +5 to -2 in a handful of seconds.
Not that I'm into karma farming (or that it even means anything), but it irritates me to think that people are gaming the discourse here.
There's an implicit groupthink when it comes to seeing greyed out comments; to the point that people may (and do) think that the comment is non-factual or at the very least unpopular. This is especially true in subjects that are critical of Signal.
Quoting the guidelines [0], if you think that's really what's happening, you can try reaching out to the mods.
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
With Signal, you can't really validate the code running on the client. Signal insists on distributing only via Google Play Store or Apple App Store, so usually updates are automatic and uncontrolled by you. And Signal has a history of not releasing timely updates of their client code, so even if you would do your own builds or compare their released code to their public updates, you would have at least a few weeks latency. And I doubt anyone would notice, since the Signal people tried hard to piss off everyone who did reproducible builds of their code.
In theory you can do the same with Signal, as they source dump their server code every now and then.
If you reject that on the basis of "we can't know if it's what they're running" or "it's a partial dump", then I don't see how Matrix is any different. Not only we can't know if Matrix servers have modified software, but we also have to trust/verify several servers instead of a single one.
I'm feeling pretty dumb even after reading the TL;DR. Can anyone who is well versed in this explain how this is better or safer? I read about the timing; will it now be slower to send messages?
In the standard practical analysis of quantum threats to cryptography, your adversary is "harvesting and then decrypting". Everybody agrees that no adversary can perform quantum cryptanalysis today, but we agree (to agree) that they'll plausibly be able to at some point in the future. If you assume Signal is carrying messages that have to be kept secret many years into the future, you have to assume your adversary is just stockpiling Signal ciphertexts in a warehouse somewhere so that 15 or 20 years from now they can decrypt them.
That's why you want PQ key agreement today: to protect against a future capability targeting a record of the past. (It's also why you don't care as much about PQ signatures, because we agree no adversary can time travel back and MITM, say, a TLS signature verification).
To understand the importance of a PQ ratchet, add one more capability to the adversary. In addition to holding on to ciphertexts for 15-20 years, assume they will eventually compromise a device, or find an implementation-specific flaw in cryptography code that they can exploit to extract key material. This is a very realistic threat model; in fact, it's of much more practical importance than the collapse of an entire cryptographic primitive.
You defend against that threat model with "forward secrecy" and "post-compromise security". You continually update your key, so the compromise of any one key doesn't allow an attacker to retrospectively decrypt, or to encrypt future messages.
For those defenses to hold against a "harvest and decrypt" attacker, the "ratchet" mechanism you use to keep re-keying your session also needs to be PQ secure. If it isn't, attackers will target the ratchet instead of the messages, and your system will lose its forward and post-compromise secrecy.
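The ratchet idea can be sketched in a few lines. This is an illustration of the basic symmetric (hash) chain, not Signal's actual Double Ratchet or its new post-quantum construction: each step derives a message key and advances the chain key through a one-way function, so compromising the current chain key doesn't let an attacker recover earlier message keys.

```python
import hashlib

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """One symmetric ratchet step: derive a message key, then advance
    the chain key via a one-way function. Recovering an old message
    key from a later chain key would require inverting SHA-256."""
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain_key = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain_key

ck = hashlib.sha256(b"initial shared secret").digest()
keys = []
for _ in range(3):
    mk, ck = ratchet_step(ck)
    keys.append(mk)

# Each message used a distinct key; the inputs that produced
# keys[0..2] were discarded, which is what forward secrecy means here.
assert len(set(keys)) == 3
```

The real protocols also mix fresh (EC)DH or KEM output into the chain, which is what provides post-compromise security on top of forward secrecy.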
I think this used to be true. Now one problem is that a Signal message goes through this whole forward secrecy protocol, but the receiving device has some probability of uploading it to the cloud with a static key that never changes.
You don't have to enable the Signal backups feature, but you have no way of knowing whether the recipient of your messages has. One person in a group chat with that enabled will undo all of the forward secrecy you're describing.
The static symmetric key is fundamentally different from an ephemeral asymmetric key. We've no indication that symmetric encryption is vulnerable to "store now, decrypt later" attacks when used with a sufficiently long key, which Signal has. Non-post-quantum asymmetric cryptography is vulnerable to "store now, decrypt later" attacks, which is why forward secrecy is needed.
The backups feature doesn't open up any new vulnerability that didn't inherently exist in sending messages to someone else you might not fully trust. One person in a group chat can also take pictures of their phone's screen & upload your messages to the public.
Images can be modified; won't these messages essentially be signed as verifiably coming from the sender? Or is cryptographic proof of that thrown away in what they store?
This is no different than if recipient of the secure message shares the message in plaintext. The problem is a discipline problem not a technology one, and the solution is the same in both cases.
There's a difference between what Signal does in the app and a manual action a user performs outside of the app. It is not realistic to expect that people will see a feature Signal has built for them in the app and understand the underlying implications to "post compromise security" and "forward secrecy" that it may have.
The expectation is that what happens inside Signal is secure, and the features Signal provides are secure. If the idea is that nobody is going to enable this feature, then why build it? If the idea is that many people are going to enable this feature, then this entire cryptographic protocol is meaningless.
These things can still be used as evidence. The process used by the police of a rogue country (or any other adversary) isn't a cryptographer's highly technical wet dream or nightmare. They simply look at the screen of your phone saying you sent or received a message, and as far as the adversary is concerned, that proves you sent or received it. Even if you didn't. (Actually, they use Cellebrite and just trust whatever the Cellebrite analyzer outputs, which is basically what your screen would have said)
I've yet to see a protocol that lets you convincingly insert fake messages into both sides of your own chat history, especially in a way that isn't detectable by say, sqlite rowid order, but that would be an interesting idea for where to take this sort of thing.
Those are the breaks though when catering to a large audience with wildly differing threat models. Do you throw away users that are looking for a vague sense of security so they run off somewhere else less secure because you lack some feature?
If you are just looking for "secure(TM)[X]", you are making a mistake somewhere anyway.
If your life or livelihood depends on it, you learn what the impact of every choice is and you painstakingly keep to your opsec.
Somewhere between the two user action becomes a necessity. You need to judge where that point is for you and take responsibility for it because nobody else can guarantee it.
At the very least they should have excluded any chats with disappearing messages enabled from being included in backups.
With disappearing messages off it was already reasonable to assume that a compromise of a counterparty's phone would result in exposure of all previous messages, so enabling backups wouldn't expose you to new risk.
That would cater to those who want to keep their chat history forever without exposing those with disappearing messages enabled to new risk.
The history of Signal has been to provide the security properties we're talking about without users having to think about it or understand. To suddenly remove forward secrecy is a very big change, and it isn't one that they seem to have acknowledged or documented. Like this blog post: they are making an announcement that they have a "post-quantum ratchet," when they have effectively removed the ratchet. It's theater.
I think you missed the point entirely. You can't have security without thinking about it. You can have vague sense of security, which is the theater you are talking about.
Show me a company anywhere that can provide security without user thought and deliberate action. It's a fantasy to believe anything you don't have to think about isn't theater. Hell, if you aren't thinking about it, you're one of the actors in that theater.
But practically, it probably has more risk as people bypassing employer or legal controls think it’s “secure”. So they have conversations that they wouldn’t have.
The security problem that a messaging app like Signal solves is NOT: Allow a person to secure their communication against eavesdroppers.
It solves the problem: How can a group of people (two or more people) securely communicate with each other.
The group has to mutually decide their risk profile, and then decide which features of the application to use. And each person in the group has to decide whether they can trust others in the group to follow the agreed upon opsec. Signal cannot solve these social problems.
Historically as long as everything remained "in the app," it was secure. It's an easy assumption to make and communicate to others. Now it's more complicated: there are things that people can unwittingly do "in the app" that make it less secure.
AFAIK, it has the same security as before. Perfect forward secrecy means that if someone starts recording encrypted messages in transit and two years later obtains an encryption key, they cannot use that key to decrypt the messages they recorded earlier (because of re-keying).
On the other hand, if an adversary captures one of the group participants' phone and breaks device security, and the chat was recorded on that device, then they can access all recorded chats. By the same token, no cryptography can protect against a malicious group participant who records messages.
In the same scenario, cloud backups seem to merely imply that the same adversary can obtain the cloud backup key and therefore decipher the cloud backups if they get their hands on it. They won't need that, however, since the group chat history is already stored on the device. If no chats were recorded on the device at all the situation would be different.
You also have no way of knowing when someone you're chatting with screenshots your messages and uploads them to Imgur.
I jest, and Signal's support for backups does really increase exposure to this risk, but I'm just trying to say it's a matter of degree, not a fundamentally new change. People that have been using sigtop[0] to back up their Signal messages to plaintext also create the same exposure risk.
I don't think that's quite right. PQ attacks focus on the "trapdoor" functions in asymmetric cryptography, _not_ the symmetric encryption that happens after key negotiation. The current concern is that a future attacker could unwrap the symmetric key, not directly attack the symmetric encryption that is used for something like backups.
(Note: I didn't actually dig into the backup implementation, but my guess is that it's more of a KDF -> symmetric design, rather than the sorts of asymmetric negotiation you'd find in multi-party messaging.)
If the app takes your disappearing message, encrypts it with a static key that never changes and is never deleted, and uploads it to the cloud, then the message is never truly "disappearing." A "post compromise" event will allow the attacker to decrypt that ciphertext at any point in the future. All of this ratcheting is undone by backups.
Disappearing messages were never a real thing in the first place. You can have a gentleman's agreement that the person you send your message to will delete it after reading it, there's no way to guarantee anything beyond that.
(Fair point though that probably "disappearing" messages shouldn't be included in backups since that obviously prevents them from being deleted. Idk if Signal implements that or not.)
What type of static key? If it's just a big symmetric key that isn't derived from an asymmetric handshake of some type then no, that's not our current understanding of the PQ threat model.
Part of the premise of FS/PCS is that "shit happens" to compromise keys even if the underlying cryptography is strong, so if you want a coherent end-to-end FS/PCS story, the claim would be that you need to be ratcheting everywhere.
Definitely, but when we're running around sprinkling PQ algorithms all over the place, it's on top of the asymmetric bits, not replacing the "boring" stuff like your symmetrically encrypted backups. Shit certainly does happen, especially where key management is involved, but I'm not sure I agree that offering an encrypted backup feature is necessarily undoing the FS/PCS story.
edit: Well, let me argue with myself for a moment. I don't think offering an encrypted backup feature undoes the PQ story. But FS/PCS is weakened, sure, since we're talking about all types of shit happening, not just currently known (or strongly theorized) attacks.
Yes, if Signal has effectively removed ratcheting and forward secrecy from the logical "encryption protocol" by encrypting all messages (even disappearing messages) with a single static key that never changes for your lifetime and sending them to the cloud, then all this talk about "post-quantum ratchets" is theater. There are no ratchets.
I'm slightly confused about the PCS part. If I've understood correctly the new key is derived from the old key + some kind or message header. If the attacker has access to a key and messages encrypted with it, can't they read the shared secret used for key exchange and use their existing key to generate the new one? Or is this only possible with ECDH and not KEM?
The new one is randomly chosen (with the randomness coming from both parties, and then combined using ECDH and/or KEM). So you cannot predict it from previous key material, pretty much by definition.
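A hedged sketch of that mixing step (the function name and HMAC-based construction here are illustrative assumptions; the real hybrid key derivation in Signal's protocol differs in detail): fresh ECDH and KEM outputs are combined with the previous root key through a one-way keyed function, so the new root is unpredictable from old key material alone.

```python
import hashlib
import hmac
import secrets

def combine_secrets(ecdh_secret: bytes, kem_secret: bytes,
                    root_key: bytes) -> bytes:
    """Illustrative HKDF-extract-style mix: new root depends on the
    old root AND fresh randomness contributed by both parties."""
    return hmac.new(root_key, ecdh_secret + kem_secret,
                    hashlib.sha256).digest()

root = hashlib.sha256(b"previous root key").digest()
# secrets.token_bytes stands in for fresh (EC)DH / KEM shared secrets.
new_root = combine_secrets(secrets.token_bytes(32),
                           secrets.token_bytes(32), root)
assert new_root != root  # knowing the old root alone is not enough
```

An attacker who captured the old root key still can't derive the new one without also knowing the fresh shared secrets, which is what gives post-compromise security.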
What is the state of PQ symmetric crypto? My layman's understanding is that 128-bit AES is known to be broken by a quantum computer and that 256-bit AES may be OK, but that isn't certain? Is this an additional vector for the "harvest and wait" strategy in the future?
> My layman's understanding is that 128 bit AES is known to be broken by a quantum computer
Weakened, not broken. Quantum computers turn 128-bit AES into a 64-bit equivalent. Which will still be extremely difficult for quantum computers due to the large computer size/number of steps required.
And it's 64-bit equivalent in a way that's inherently impossible to parallelize, so 2^64 sequential quantum operations. Those operations are much, much slower than classical ones.
Well, you get sqrt(n / N) as a result. It works like any other parallel computation.
E.g. if you have 256 quantum computers, then each one of them only needs to search a 120-bit share of the key space (2^120 keys), which takes about 2^60 Grover iterations to crack a 128-bit key.
It's not really going to make much difference with near-future quantum computers. Especially since Grover's algorithm _has_ to complete all the 2^60 steps to produce a reliable result, you can't just run a quantum computer for a while, stop it, and then restart it.
They still take the same amount of time to run it. You don't get a result by running for 1/n of the steps, so adding more QCs lets you attack more keys at once but doesn't let you attack any given key any faster.
>> Forward secrecy is somewhat overrated in end to end encrypted messaging. Most people do not want a truly off the record experience but instead keep their old messages around indefinitely. As long as those old messages exist and are accessible to the user they will be just as accessible to any attacker that gets access to the secret key material.
On a more serious note, if a quantum computer can break a key, a task requiring exponential complexity with key length on a classical computer, then breaking N keys is only a negligible additional cost in comparison.
So it kind of feels like it’s overrated in this case to be honest :)
Their existing post quantum encryption didn't do post compromise security (PCS) against quantum attackers. This new one does.
I am excited to finally know what they mean by PCS after reading this article. It means that the session keys from their key agreement scheme (the ratchet) are generated anew, so an attacker doesn't get them again after a fairly specific sort of compromise. So from that I get that the off-the-record (OTR) protocol also has PCS. Which is a bit disappointing; I thought that they had come up with some new concept.
This key agreement doesn't happen that often. So a user isn't going to notice any slowness even if it was significantly slower.
> "What does this mean for you as a Signal user? First, when it comes to your experience using the app, nothing changes. Second, because of how we’re rolling this out and mixing it in with our existing encryption, eventually all of your conversations will move to this new protocol without you needing to take any action. Third, and most importantly, this protects your communications both now and in the event that cryptographically relevant quantum computers eventually become a reality, and it allows us to maintain our existing security guarantees of forward secrecy and post-compromise security as we proactively prepare for that new world."
Yes, they basically got the iOS-native version to also run on Macs [0]. The perks of Apple silicon, I guess. It is a bit ironic how moving to the ARM architecture was initially thought of as a burden for developers having to maintain both x86 and ARM versions, while as it turns out, for those who target both iOS and macOS it makes things easier.
I am not sure if the old Electron-based WhatsApp is still available; maybe the one from the website, vs the one from the App Store, is still Electron?
At the core of secure backups is a 64-character recovery key that is generated on your device. This key is yours and yours alone; it is never shared with Signal’s servers. Your recovery key is the only way to “unlock” your backup when you need to restore access to your messages. Losing it means losing access to your backup permanently, and Signal cannot help you recover it. You can generate a new key if you choose. We recommend storing this key securely (writing it down in a notebook or a secure password manager, for example).
I missed that paragraph, thanks for pointing it out. I wonder what algorithm they're using here, and if we could use third-party tooling to decrypt these messages on a local computer? It might be a pathway to some cool experimental third-party Signal apps.
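The post doesn't specify the backup construction beyond the 64-character recovery key, so any third-party tooling would have to reverse it from the client code. As a purely hypothetical sketch (the alphabet, KDF choice, salt, and iteration count below are all assumptions, not Signal's documented design), such a scheme typically generates the key on-device and stretches it into an encryption key with a KDF:

```python
import hashlib
import secrets
import string

# Hypothetical: a lowercase-alphanumeric alphabet; Signal's actual
# recovery-key encoding is not specified in this post.
ALPHABET = string.ascii_lowercase + string.digits

def generate_recovery_key(length: int = 64) -> str:
    """Generate a human-transcribable recovery key on-device."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def derive_backup_key(recovery_key: str, salt: bytes) -> bytes:
    """Stretch the recovery key into a 32-byte encryption key.
    PBKDF2 and the iteration count are stand-ins for whatever KDF
    Signal actually uses."""
    return hashlib.pbkdf2_hmac("sha256", recovery_key.encode(),
                               salt, 600_000)

rk = generate_recovery_key()
key = derive_backup_key(rk, salt=b"backup-sketch-salt")
assert len(rk) == 64 and len(key) == 32
```

The derived key would then feed an AEAD cipher over the backup blob; since the recovery key never leaves the device, losing it really does make the backup unrecoverable.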
https://security.googleblog.com/2025/11/android-quick-share-...