A colleague of mine was tech lead at a large online bank. For the mobile app, the first and foremost threat that security auditors would find was "The app runs on a rooted phone!!!". Security theater at its finest; checkboxes gotta be checked. The irony is that the devs were using rooted phones for QA and debugging.
Meanwhile, it's probably A-OK for the app to run on a phone that hasn't received security updates for 5 years.
I don't get it. If they're worried about liability, why not check the security patch level and refuse to run on phones that aren't up to date?
I'm guessing it's because there are a lot of phones floating around that aren't updated (probably far more than are rooted), and they're willing to pretend to be secure when it impacts a small number of users but not willing to pretend to be secure when it impacts many users.
Because a phone running an unknown OS is significantly more dangerous than a phone that hasn't received security updates for years. For example, a malicious OS maker could add their own certificate to the root store, essentially allowing them to MitM all the traffic you send to the bank.
Liability works on the principle that "if it's good enough for Google, it's good enough for me." A bank cannot realistically vet every vendor, so they rely on the OS maker to do the heavy lifting.
Even if they wanted to trust a third-party OS, they would need to review them on a case-by-case basis. A hobbyist OS compiled by a random volunteer would almost certainly be rejected.
I can add certificates on my unrooted Android. That's how HTTP Toolkit [0] works: it only requires adb, which (thankfully) doesn't trip banking apps. Banking apps can (and do, IIRC) pin certificates, so a rooted phone adds no risk whatsoever.
Also, in my experience a rooted phone is by far more secure than OEM Android builds. Security is supposed to assess risk objectively, yet "running on a Xiaomi phone with 3rd-party apps that cannot be uninstalled and have system access" is somehow more secure than "running on a signed LineageOS where the user can edit the hosts file".
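For context, certificate pinning amounts to shipping the expected certificate (or public-key) hash inside the app and rejecting any TLS chain that doesn't match, which is why a rogue root CA in the OS trust store doesn't help an attacker. A minimal sketch, with a made-up fingerprint and fake certificate bytes standing in for real DER data:

```python
import hashlib

# Hypothetical pinned SHA-256 fingerprint of the bank's certificate.
# A real app would hard-code the hex digest of the actual cert's DER bytes.
PINNED_FINGERPRINTS = {
    hashlib.sha256(b"fake-bank-cert-der").hexdigest(),
}

def pin_matches(cert_der: bytes) -> bool:
    # Hash the certificate the server presented and check it against the pins.
    return hashlib.sha256(cert_der).hexdigest() in PINNED_FINGERPRINTS

print(pin_matches(b"fake-bank-cert-der"))   # → True: the expected cert
print(pin_matches(b"mitm-proxy-cert-der"))  # → False: any substituted cert fails
```

Even a certificate signed by a CA the OS trusts fails this check unless its hash is in the pinned set.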
>Because a phone running an unknown OS is significantly more dangerous than a phone that hasn't received security updates for years.
That's just straight-up false; the phone without security updates has known exploits its user knows nothing about (and certainly not how to avoid them). The phone with an unknown OS has a user capable of installing said OS, at the very least.
> Because a phone running an unknown OS is significantly more dangerous than a phone that hasn't received security updates for years.
I'm not convinced this is generally true, at least as can be detected by an app. Back when I had my phone rooted, it was configured so that it would pass all the Google checks and look like the stock OS. That configuration was probably dangerous, but apps were happy with it. Now that I run an OS that doesn't lie about what it is, I'm flagged as untrustworthy. What's the point in being honest?
Overall, I don't think they really have any idea what's a threat based on the checks they're doing, so I don't think they can say at all what's more or less trustworthy. But I think that a phone that reports being years out of date should reasonably not be expected to be secure, yet they mark it as secure anyway. Many of those devices can be rooted in a way that can still pass their checks. I would think, if nothing else, that would be reason to block them, since they're interested in blocking rooted devices.
> If they're worried about liability, why not check the security patch level and refuse to run on phones that aren't up to date?
Google doesn't provide an API or data set to figure out what the current security patch level is for any particular device. Officially, OEMs can now be 4 months out-of-date, and user updates lag behind that.
Your guess is good, but misses the point. Banks are worried about a couple things with mobile clients: credential stealing and application spoofing. As a consequence, the banks want to ensure that the thing connecting to their client API is an unmodified first-party application. The only way to accomplish this with any sort of confidence is to use hardware attestation, which requires a secure chain-of-trust from the hardware TEE/TPM, to the bootloader, to the system OS, and finally to your application.
So you need a way for security people working for banks to feel confident that it's the bank's code which is operating on the user's behalf to do things like transfer money. They care less about exploits for unsupported devices, and it's inconvenient to users if they can't make payments from their five-year-old device.
And this is why Web Environment Integrity and friends should never be allowed to exist, because Android is the perfect cautionary tale of what banks will do with trusted-computing features: which is, the laziest possible thing that technically works, and keeps their support phone lines open.
I'm not an Android developer, but I was thinking they could use something like the android.os.Build.VERSION.SECURITY_PATCH call to get the security patch level. Maybe that's not sufficient for that purpose, though.
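The staleness check itself would be trivial. This sketch takes as input the YYYY-MM-DD string that Android's Build.VERSION.SECURITY_PATCH exposes, with an invented 120-day cutoff (written in Python for brevity rather than Kotlin/Java; the cutoff and function name are illustrative, not any real bank's policy):

```python
from datetime import date, timedelta

def patch_is_stale(security_patch: str, max_age_days: int = 120,
                   today: date = date(2024, 1, 1)) -> bool:
    """Return True if the reported patch level is older than max_age_days.

    `security_patch` mimics the YYYY-MM-DD string an Android app can read
    from Build.VERSION.SECURITY_PATCH. `today` is fixed here only so the
    example is reproducible.
    """
    return (today - date.fromisoformat(security_patch)) > timedelta(days=max_age_days)

print(patch_is_stale("2019-02-05"))  # → True: years out of date
print(patch_is_stale("2023-12-05"))  # → False: within the window
```

As the surrounding discussion notes, the hard part isn't this comparison but trusting that the value reported to the bank's API is honest.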
Sure, there is enough information available to the app to determine what OS version and patch level it is running under. The issue is, the app would need to communicate this to the bank via an API, and the bank wants to trust the app in the first place in order to rely on this information.
Even then, two things turn out to be true:
- Banks don't actually want to put in the effort and deal with angry customers with slightly-out-of-date devices.
- All the credential-stealing malware on Android works perfectly fine on stock, unmodified, non-rooted OS images anyway. They just need to socially-engineer the user to grant accessibility permissions to the malicious app.
It's more frustrating because my partner's Pixel 4a cannot use Google Pay or the bank apps because it is an "invalid OS" - I am guessing due to lack of updates? So, perfectly fine hardware, but crippled in functionality due to the lack of software updates.
I've seen:
- "But iOS can be jailbroken and it doesn't have an AV!" - while the MDM does not allow jailbroken devices, and they also allowed sudo on Linux.
Auditors are clueless parasites as far as I'm concerned. The whole thing is always a charade where the compliance team, who barely know any better, try to lie to the auditor, and the auditor picks random items they don't understand anyway. A waste of time, money, and humans.
Yep, some stakeholder wants a pen-test or an audit so you do it and address the findings to keep them happy. Going through it now at work - bunch of silly findings because the pen testers know they don't get paid to send back an empty report and tell you everything is fine.
As long as copying some numbers, printed on a piece of plastic, into an online order form is all the authentication that is needed for a transaction, anything more than that is inherently security theater.
That’s why for most transactions I do with a credit card in my country, you need an extra validation with the mobile app. It is mostly American websites that do not enable this functionality.
Yes, because we don't want these stupid locked down apps. Credit cards give buyers many protections, it's very easy to dispute an illegitimate transaction.
The consumer does not typically pay this directly. It may be passed onto the consumer indirectly through higher prices, but those apply to anyone regardless of payment method. On the contrary, I get cash back on purchases and other rewards.
Because we have anti-fraud consumer protection rules and CCs operate on a make-money-first basis. The debit networks, on the other hand, are a different story.
Yeah, that's the first thing a pentest will complain about; I had the same problem too. I pushed back enough that it's trivial to bypass, but the bank and pentesters also agreed with me that it's security theater - or else I would never have had the chance.
I always ask them if they have root/admin on their computer. Then follow up playing dumb with "shouldn't we lock out PCs too?". Watching them stammer is worth the 30 second aside.
Who do we lobby to get this removed from the auditors checklists? This is a solvable problem but it’s political. And if we don’t solve it personal computing is at risk.
Start by calling (or visiting the area office of) your senator and congressman. If you are reasonably articulate, they engage and listen. Doesn't matter if the listener is not a techie; they will ask questions around policy and why it affects constituents.
This is 1000x more useful than online petitions or other passive stuff. Politicians know that one person having taken the effort to do this means 1000 others are feeling the same thing but are quiet.
From my experience with the fed level senator.. they're already lobbied to shit. For example, explaining to Duckworth that fed level id tying to your internet travel and encryption backdoors aren't safe.. they'll send you copy that she really wants you to know she's thinking about the children while rolling around in her wheelchair.
A lot of that is security theater at its best. However, given the forced attack surface, I would imagine there is a hard push from authoritarians and the finance world to make a "secure chain" from service to screen.
My guess: they're afraid that scammers are going to mirror the screen and remotely control the app. (More orgs are moving to app/phone-based assumptions because it saves the org money and pushes cost onto the consumer.) Instead of providing protections from account takeover, we're going to get devices we don't own that we have to pay for, maintain, and pay for services on, just to get a terminal to our own bank account. Additionally, there are many dictatorships, like the UK, North Korea, etc., that are very adamant that you don't look at things without their permission. So they're trying to close the gap of avoiding age-verification bypasses with VPNs.
What's more, the project advises against rooting your phone and tells you that if you install GrapheneOS and root it, you aren't running GrapheneOS anymore.
Yep, that's my experience as well: if you don't get the Play Protect™ absolution, your device is seen as rooted. The latest app to display this BS behavior was PagerDuty - I guess they have to protect their secret sauce of calling an API and showing notifications.
I know some people have issues with Duo; I don't. With PagerDuty I _just_ installed the latest version from the Play Store, logged in with SSO to my org, and I'm in - I can do or see everything.
Maybe it's play services in your case, not play integrity? I'm on the last release from the stable channel.
My favorite is when it must have punctuation, but certain punctuation is silently banned, so I have to keep refreshing my password generator until it gives me an acceptable combination.
I came across a "special character" requirement while creating an account. The client validation was not the same as the server validation: the client proceeded as if my account was created, but it never was, and it functioned without an account until it was closed. I asked the creator what their app's problem was - why did I need to keep resetting my password, only to be told that I don't have an account and have to create it anew?
They would not believe I was creating an account and using the device, because their own logging was so terrible.
I had to send them a screen recording from me using this abomination, and only then was I told "you're using the wrong special characters". They helpfully gave me some examples of allowed special characters, which then would pass the server validation.
I wish they would have gotten rid of the account requirement, as the device and client software seemed to work fine without them.
Sometimes when that happens, and any of `:({ |&;` are on the no-no list, I try bypassing the client validations and setting my password to a shell fork bomb. So far as I'm aware it hasn't broken anything yet, but I'm determined to keep trying.
Somewhat unrelated: is there any technical reason certain punctuation might be banned? I can understand maybe not allowing letters with diacritics or other non-ASCII chars, but why would a system reject an @ sign or a bracket (>), for example?
Depending on the protocol, they can be URL-encoded or even helpfully HTML-encoded, and the same password can be used over different protocols. It's best to not use punctuation by default (length supplies more entropy than charset); I add -0 at the end to make dumb password policies happy.
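The encoding hazard is easy to see: the same punctuation-heavy password comes out different depending on which encoding layer touches it. A quick sketch with a made-up password:

```python
from urllib.parse import quote
from html import escape

# Made-up password using commonly-banned punctuation.
password = "p@ss&w;rd"

# If one system stores the raw string and another stores an encoded form,
# the "same" password no longer matches.
print(quote(password, safe=""))  # → p%40ss%26w%3Brd  (URL-encoded)
print(escape(password))          # → p@ss&amp;w;rd    (HTML-encoded)
```

A correctly built system hashes the raw bytes and never round-trips the password through such encodings, but bans on punctuation are often a defensive crutch for systems that might.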
Sorry I'm a bit lost here. Are you saying requiring a special character and a number are dumb password policies? Wouldn't charset AND length make for exponentially higher entropy? 52 (or 62 for digits) to the length power vs (62+20 special chars) to the length power? Or am I missing something?
I guess what they're saying is that, for example, a password of length 12 has about 71 bits of entropy if using an alphabet of 62 characters, and 76 bits with an alphabet of 82 characters. But if you only increase the length by 1 you already get 77 bits with 62 characters only. So length beats adding special chars in that sense.
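The arithmetic above can be checked in a couple of lines, since a uniformly random password of length n over an alphabet of size k has n·log2(k) bits of entropy:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Entropy of a uniformly random password: length * log2(alphabet size).
    return length * math.log2(alphabet_size)

print(round(entropy_bits(62, 12)))  # → 71  (12 chars, letters + digits)
print(round(entropy_bits(82, 12)))  # → 76  (12 chars, plus 20 specials)
print(round(entropy_bits(62, 13)))  # → 77  (one extra char beats the bigger alphabet)
```

Length grows entropy linearly per character at a higher rate than widening the charset does, which is why a length-only policy can match or beat a special-character requirement.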
Gotcha. I guess my question is, why not both? Is it the requirement of special chars on top of a min-length password that is in question here? Like, the system says "minimum 8-char password but also three special chars, ancient hieroglyphs, and the blood of your firstborn child" when you could omit the special chars and just have a min 16-char password for the same security benefit?
This is true, but I think the argument is that for maintainers of the system, it's more work to allow more char options when it (should be) more trivial to change MAX_PASS_LENGTH from 12 to 32. Like, if you're gonna add more restrictions, make it the ones that encourage, not block, more secure passwords.
A lot of the restricted stuff is cargo-cult fear of symbols that could be used in SQL-injection or XSS attacks.
A properly-coded system wouldn't care, but the people who write the rules have read old OWASP documents and in there they saw these symbols were somehow involved in big scary hacks that they didn't understand. So it's easier to ban them.
Having more than just alphanumeric characters widens the domain of the password hash function, and this directly increases the difficulty of brute-force cracking. But having such a small maximum password length is... puzzling, to say the least. I would accept passwords of up to 1 KiB in length.
With rainbow tables, even 11-character simple passwords like 'password123' can be trivially cracked, and as the number of password leaks shows, not everyone is great at managing secrets and credentials.
It's easier for me to remember really long passphrases than even short alphanumeric strings - small maximum password lengths set my teeth on edge. The passwords should be getting hashed anyway right?
The problem is that you never really know what a website operator does with your credentials. Ideally, you have both a unique email and a unique password for each site, because sadly credential stuffing [1] is a thing.
I recommend that all my friends and family use a password manager like Bitwarden, and if they can't do that for some reason, at least use a 3-word passphrase separated by hyphens.
The number of times people have complained to me that this doesn't work because of low max-chars on passwords is insane.
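For what it's worth, a hyphen-separated passphrase generator is only a few lines. This sketch uses a tiny made-up wordlist where a real generator would draw from something large like the EFF diceware list (~7,776 words):

```python
import secrets

# Tiny illustrative wordlist; entropy depends on list size, so a real
# generator needs thousands of words, not eight.
WORDS = ["correct", "horse", "battery", "staple",
         "orange", "planet", "river", "candle"]

def passphrase(n_words: int = 3, sep: str = "-") -> str:
    # secrets.choice uses a CSPRNG, unlike random.choice.
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "river-staple-candle"
```

With the EFF list, each word contributes about 12.9 bits, so three words give roughly 39 bits and four give about 52, while staying far easier to type than gibberish.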
I regularly conduct transactions at the branch of my local bank wherein they ask me for no credentials whatsoever. I also once forgot to bring my account number with me and the teller said "no worries, I'll look it up for you." Kind of horrifying.
That's scary. I wonder if incompetence like that could lead to a lawsuit in the case of a breach.
At this point I wouldn't be surprised if there exists a system that just asks for username with a checkbox "check here if you are the owner of this account"
Until the late 2010s, the AD account password at my financial institution employer was capped at 12 characters because, for a subset of workers, AD creds were sync'ed to a mainframe application that could only support that many characters.
Sounds about right. One of Australia's big four banks had the online banking password requirement of exactly six characters for a long time - for similar reasons I assume.
I think we (whoever "we" is) should start normalizing the concept of passphrases; sign-up screens should show the benefits of a passphrase. I'm surprised that Google's password generator does not use passphrases, and I don't know about iOS because I haven't tried theirs yet.
When I'm trying to log into something on a device that has a terrible keyboard, like a TV or giant touchscreen, it's a lot easier to type words I know than gibberish.
Haha, having such a low max-char range just makes it that much easier to brute force, doesn't it?
On password length, I once had an account on Aetna that let me put whatever I want for my password, so I used a three-word passphrase that bitwarden generated for me. It ended up being like 20 chars.
Then I tried to log in with that password. Whooosies, the password input only allowed max 16 chars!
Ended up using a much less secure password because of this.
Maximum lengths like this are like a big neon sign that says:
"Hey idiot, I'm storing your password in plaintext, don't know anything about password security, and I'm also going to make you pick something you can't remember for 'security'."
Gotta admit, this triggered me. I don’t think those are the same thing. If no one had a good password we wouldn’t affect each other negatively. If no one picked up trash, we would.
I'm pretty sure it's referencing Half-Life 2, where an agent of an oppressive regime tells you to pick up a can that they just dropped on the floor as a sadistic display of authority (and to provide world-building and teach the grab mechanics to the player).
The GP is equating policies for strong passwords that aren't trivially cracked with authoritarianism.
If no one had a good password, we actually would affect each other negatively. If your personal banker can be easily compromised, that means that you could be easily parted with your money.
> The GP is equating policies for strong passwords that aren't trivially cracked with authoritarianism.
Incorrect - the requirements I mentioned make passwords less memorable and less secure (maximum length 12???). Obviously that's not as bad as authoritarianism, but I was trying to capture the arbitrary act being forced on us for no real justifiable reason.
Are you saying that corporations respect the letter of the law when it comes to privacy? They don't, they can just drop some lunch money when caught red-handed [0]
Even when they write in their privacy policy that they collect private data and sell it to third parties, unlawfully, that does not make it any better. Cambridge Analytica was operating within Facebook's policies. Would you say that the people who took an IQ test and were manipulated into voting pro-Brexit were well aware of what was being done with their data?
Discord is unfortunately no different, they're profit-driven and likely to sell user data already or in the future, because it's incredibly easy and profitable to do so. Why would a chat app try and predict its users' gender? [1]
Would you say that slaughtering Native Americans and enslaving Africans "worked pretty well" for them, or do you only speak from the White adventurer perspective?
Sure, I don't claim all of them go well. Do you want to run a hypothetical exercise on how many they get right vs wrong? And based on that we can see if this is a "fantasy" or not?
No, but you're claiming "if they are all investing X amount, then these bets obviously must pan out". If you follow that rationale, it means that all the bets these companies make in the same space must pan out. So if they don't all pan out, then the fact that they're all making bets isn't a sound rationale for it being true.
As others have pointed out, investors notoriously have FOMO, so rational actors (CEOs of big tech) are naturally incentivized to make bets, and to claim they are betting on things the market believes to be true regardless of whether they are, so as to appease shareholders.
> simply that most bets are made intelligently with serious intent.
That is NOT what you said. You said this:
> Why would they all put money into this if it is so obvious to all of you that it is not going to work?
In other words: "if these companies are putting all of this money to work then it's obvious it will work"
So, no you didn't simply say "their bets are made intelligently with serious intent". No one is saying these companies aren't serious about it, they are saying there are legitimate physics limitations involved here that are either being ignored or the companies are betting on a novel scientific breakthrough.
> your take on investors is naive and largely incorrect - its the musical chairs theory of markets.
Then you clearly have never worked with investors before.
Here’s our point of disagreement: I think smart people have made a bet on this with serious intent, after considering all the pitfalls.
You think they are either deliberately ignoring it (so ridiculous) or they are betting on a scientific breakthrough.
It’s too comical to even address the “they are ignoring it part” so I’ll ignore it.
I would agree with you that part of their bet might involve hoping for breakthroughs and the investment analysis probably factored it in.
Lots of earlier investments have banked on breakthroughs like this.
Also your opinion on how markets work is naive and unscientific.
Agreed, and it's nice and easy for anyone already using `.env` files, although the private key used to decrypt the dotenvx key-values is itself a secret.
Yeah, I don't understand this. You still need to secure your .env.keys file the same as you would a standard .env. Is the benefit just that you can track it with git?
Standard .env is unencrypted, while a dotenvx .env file has plaintext keys and encrypted values. Anyone with access to the repo would also need the DOTENVX_PRIVATE_KEY variable to decrypt the env file.
One key deployed to your hosts means adding new secrets doesn't take operations effort. Also, the process uses a public/private key pair, so adding a new variable doesn't expose existing variables.
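For reference, a dotenvx-style pair of files looks roughly like this (key names follow dotenvx's convention as I understand it; all values are made up):

```
# .env — safe to commit: variable names are readable, values are encrypted
DOTENV_PUBLIC_KEY="03a5...beef"
DATABASE_URL="encrypted:BDqDBibm4wsYqMpCjTQ..."

# .env.keys — must NOT be committed: holds the decryption key
DOTENV_PRIVATE_KEY="2c7b...c0de"
```

Because values are encrypted to the public key, anyone can add a new secret without being able to read the existing ones; only hosts holding the private key can decrypt.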
If they're not doing evil work, why all the secrecy? It's not like they're going bankrupt either since, like you mentioned, the demand is not going away
Because people are hypocrites - our stated goals (clean environment, fair business) are different from the actual ones (get a lot of stuff and energy cheaply).
But these shouldn't be in contradiction. Oil and gas will end when they become unprofitable, priced out by much cheaper renewables. Of course this will result in more and cheaper stuff and energy, boosting economic growth rates, not suppressing them.
Well, regardless of what government does, renewables will eventually price out oil and gas. And the government and the megacorps will be on their side because that way they will be making more money. Not before.
No one is trying to limit renewables just for the sake of it. They are trying to do so because, so far, renewables don't let them make much money while oil and gas do. There won't be any reason for the powers that be to resist them once this situation reverses.
I believe news sites let crawlers access the full articles for a short period of time, so that they appear in search results. Archive.is crawls during that short window.
> The Incus project was created by Aleksa Sarai as a community driven alternative to Canonical's LXD.
Today, it's led and maintained by many of the same people that once created LXD.
Communication requires accurate timing. Time dilation occurs between Earth and satellites, a phenomenon that isn't part of Newtonian mechanics, so relativity is indeed involved.
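The standard back-of-the-envelope numbers for GPS can be reproduced from first-year formulas; the constants here are rounded textbook values:

```python
GM = 3.986e14       # Earth's gravitational parameter, m^3/s^2
c = 2.998e8         # speed of light, m/s
R_earth = 6.371e6   # mean Earth radius, m
r_gps = 2.656e7     # GPS orbital radius, m
day = 86400         # seconds per day

v = (GM / r_gps) ** 0.5                        # orbital speed, ~3.9 km/s
sr = -(v ** 2) / (2 * c ** 2) * day            # special relativity: satellite clock runs slow
gr = GM / c ** 2 * (1 / R_earth - 1 / r_gps) * day  # general relativity: runs fast (weaker gravity)

print(f"SR: {sr*1e6:+.1f} us/day, GR: {gr*1e6:+.1f} us/day, net: {(sr+gr)*1e6:+.1f} us/day")
# prints roughly: SR: -7.2 us/day, GR: +45.7 us/day, net: +38.5 us/day
```

A net drift of ~38 microseconds per day translates to kilometers of ranging error if uncorrected, which is why GPS satellite clocks are deliberately offset before launch.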
Source? Google is literally an online ad monopoly, and is being sued for it. They did track and continue to track users, and they sell data through the SSPs, DSPs, ad networks, and ad exchanges they own.
Find the webpage where you can buy Google's user data. Not where you can buy ad slots, but where you can buy the raw tracking data like data brokers sell.
Still, it is _personal_ data collected and sold by Google, which was the point raised by gp comment. As for it being personally identifying, the aggregation/pseudonymization/anonymization process doesn't even prevent precise identification [0]. I'd say it's pretty close.