
It's not bad work but it looks like The Register has hyped it much too far. Breakdown:

* OSX (but not iOS) apps can delete (but not read) arbitrary Keychain entries and create new ones for arbitrary applications. The creator controls the ACL. A malicious app could delete another app's Keychain entry, recreate it with itself added to the ACL, and wait for the victim app to repopulate it (rough sketch after this list).

* A malicious OSX (but not iOS) application can contain helpers registered to the bundle IDs of other applications. The app installer will add those helpers to the ACLs of those other applications (but not to the ACLs of any Apple application).

* A malicious OSX (but not iOS) application can subvert Safari extensions by installing itself and camping out on a Websockets port relied on by the extension.

* A malicious iOS application can register itself as the URL handler for a URL scheme used by another application and intercept its messages.
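
A rough sketch of that first (Keychain) bullet on OS X, using only the standard Keychain item calls; the service/account names are hypothetical, and the step where the recreated item's ACL is extended to include the attacker is elided:

    import Foundation
    import Security

    // Hypothetical identifiers for the victim app's generic-password item.
    let victimItem: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.victimapp",
        kSecAttrAccount as String: "user@example.com",
    ]

    // 1. Any app may delete the item, even though it cannot read it.
    _ = SecItemDelete(victimItem as CFDictionary)

    // 2. Recreate it (empty). The creating app now controls the item's ACL,
    //    so it can read whatever the victim app later writes back into it.
    var planted = victimItem
    planted[kSecValueData as String] = Data()
    _ = SecItemAdd(planted as CFDictionary, nil)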

The headline news would have to be about iOS, because even though OSX does have a sandbox now, it's still not the expectation of anyone serious about security that the platform is airtight against malware. Compared to other things malware can likely do on OSX, these seem pretty benign. The Keychain and bundle-ID things are certainly bugs, but I can see why they aren't hair-on-fire priorities.

Unfortunately, the iOS URL thing is, I think, extraordinarily well-known, because for many years URL schemes were practically the only interesting thing security consultants could assess about iOS apps, so limited were the IPC capabilities on the platform. There are surely plenty of apps that use URLs insecurely in the manner described by this paper, but it's a little unfair to suggest that this is a new platform weakness.



Thank you for posting this. This is dramatically different from the way The Register was hyping it. It's serious, of course, but the iOS vulnerability is pretty minimal[1], yet The Register made it sound like the Keychain was exploited on iOS, and it seems that's not the case at all.

[1] How often is that even going to be exploitable? Generally cross-app communication like that is to request info from the other app, not to send sensitive info to that app.


No, iOS applications definitely (ab)use URL schemes to send sensitive information or trigger sensitive actions. The problem isn't that they're wrong about that; it's that it's not a new concern.


The only example that really comes to mind where actual secret information is sent over a URL like that is things like Dropbox OAuth tokens, which require the requesting app to have a URL scheme db-<app_key> that the Dropbox app uses to send the token back. But besides the fact that this isn't a new issue, it's hard to imagine this actually being a serious problem, because it's impossible for the malware app to hide the fact that it just intercepted the URL request. If I'm in some app and request access to Dropbox, it switches to the Dropbox app and asks for permission, and then switches to some other app, it's pretty obvious that the other app is behaving badly. Especially since there's no way for that other app to then hand the token back to the original app, so you can't even man-in-the-middle and hope the user isn't paying attention.


It's less common with the major well-known applications, in part because almost all of those get some kind of security assessment done, and, like I said, this was for a long time the #1 action item on any mobile app assessment.

What you have to keep in mind is that for every major app you've heard of, there are 2000+ that you've never heard of but that are important to some niche of users.


Sure, I get that. I'm still just having a hard time imagining trying to exploit this, because it's impossible to hide from the user that you did it, and it completely breaks the app you're trying to take advantage of (since you took over its URL handler, it can never receive the expected information, so you can't even try to immediately pass the data to the real app and hope the user doesn't notice).

Assuming the model where you send a request to another app, which then sends the secret data (such as an OAuth token) back to you, it also seems rather trivial to defeat (if you're the app with the secret data). Just require that the requesting app synthesize a random binary string and send it to you with the request, then use that as a one-time pad for the data. You know your URL handler is secure (because otherwise it can't have been invoked), and this way you know that even if some other app intercepts the reply, they can't understand it. Granted, this doesn't work for the model where you send secret data in your initial request to the other application, but I can't even think of any examples of apps that do that.
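
Something along these lines (a minimal sketch of that exchange; the scheme name, the 32-byte pad length, and the helper functions are all made up for illustration, and a real protocol would also want replay protection):

    import Foundation

    // Requester side: generate a random pad and include it (hex-encoded) in
    // the request URL. "providerapp" is a hypothetical scheme for the app
    // that holds the secret.
    func makeTokenRequestURL() -> (url: URL, pad: Data) {
        let pad = Data((0..<32).map { _ in UInt8.random(in: .min ... .max) })
        let hex = pad.map { String(format: "%02x", $0) }.joined()
        return (URL(string: "providerapp://request-token?pad=\(hex)")!, pad)
    }

    // Provider side: XOR the secret (e.g. an OAuth token) with the pad before
    // sending it back, so an app that hijacked the requester's reply scheme
    // sees only noise.
    func mask(_ secret: Data, with pad: Data) -> Data {
        precondition(pad.count >= secret.count)
        return Data(zip(secret, pad).map { $0 ^ $1 })
    }

    // Requester side: the same XOR recovers the secret.
    func unmask(_ masked: Data, with pad: Data) -> Data {
        return Data(zip(masked, pad).map { $0 ^ $1 })
    }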


Why can't the "other app" just fake a Dropbox-looking display that says "Sorry, service unavailable. Click here to try again." while it does malicious stuff in the background? And then pass to the real Dropbox once it's finished being malicious?


Several reasons:

1. You can't intercept the request to Dropbox itself, because that doesn't contain any secret data. You'd need to intercept the response, and you can't fake the UI for that app because it would be immediately apparent to even the most cursory inspection that your app is not in fact the app that made the request (even if you perfectly mirrored their UI, you wouldn't have any of their data so you couldn't replicate what their app is actually showing). And anyone who looks at the app switcher would see your app there so you can't possibly hide the fact that you launched at that time.

2. Even if you could be 100% convincing, you can't actually pass the data to the real app when you're done recording it because, by virtue of overriding their URL handler, you've made it impossible to invoke the real app's URL handler. There's no way on iOS to specify which app you're trying to open a URL in. All you can do is pass the URL to the system and it will open the app it thinks is correct. Since you overrode their URL handler, if you try and call it, you'll just be calling yourself again. And since you've now made their URL handler inaccessible, you've cut off the only possible way to pass that data to the real app (even if it has other URL handlers, they won't accept the same data).
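
To make point 2 concrete: all an app can do is hand the URL to the system; there is no parameter naming a target app. A rough sketch, with a hypothetical db-<app_key>-style callback and the current UIApplication API:

    import UIKit

    // There is no way to say "open this URL in Dropbox's companion app
    // specifically"; whichever app currently owns the scheme receives it.
    func deliverCallback() {
        // Hypothetical callback URL for illustration only.
        let callback = URL(string: "db-somekey://1/connect?oauth_token=abc123")!
        UIApplication.shared.open(callback) { opened in
            // You only learn whether *some* app handled it, never which one.
            print("handed off to the system: \(opened)")
        }
    }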

So the end result is that if you do try and take over someone else's URL handler, it'll be blindingly obvious the moment you actually intercept a request.

The only approach that even seems semi-plausible would be attempting to phish the user by presenting a login UI as if you were Dropbox and hoping they enter their username/password, but the problem with that is the entire point of calling out to a separate app is that you're already logged-in to that app, so if the user is presented with a login form at all, they should instantly be suspicious. And of course as already mentioned you can't hide the fact that you intercepted the request, so you'll be caught the first time you ever do this.

On a related note, even if you can make a perfectly convincing UI, your launch image will still give you away as being the wrong app (since the user will see your launch image as the app is launched). Unless you make your launch image look like the app you're trying to pretend to be, but then you can't possibly pretend to be a legitimate app because the user has to actually install your app to begin with, which means they'll be looking at it. If they install some random app from the app store and it has a launch image that looks like, say, Dropbox, that's a dead giveaway that it's shady. There's not really any way to disguise an app like that.


In iOS 9, apps can now register for arbitrary http URLs, but that requires the app to be correctly associated with the domain, which in turn requires a lengthy process (the domain must expose, over HTTPS, a JSON file naming the bundle ID, signed with the TLS private key).

So I think they got it right for generic URLs, while custom URL schemes have been a little unfortunate from day one; but it's hardly something new.
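
For completeness, once that association is in place the app side is just the standard user-activity callback; a rough sketch using the current delegate signature (the iOS 9-era one differed slightly) and a hypothetical route:

    import UIKit

    class AppDelegate: UIResponder, UIApplicationDelegate {

        // Universal (https) links arrive as a web-browsing NSUserActivity
        // carrying the original URL; they are only delivered if the domain's
        // association file named this app, per the process described above.
        func application(_ application: UIApplication,
                         continue userActivity: NSUserActivity,
                         restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
            guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
                  let url = userActivity.webpageURL else {
                return false
            }
            // Route https://example.com/... inside the app (app-specific).
            print("Opened via universal link: \(url)")
            return true
        }
    }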

Btw can anybody explain how association to arbitrary http URLs works in Android? Is there a similar validation, or can any app intercept any URL if it so wishes?


In Android it's all been rolled into the Intent/IPC system since day 1. Apps are composed of Activities, and Activities can define Intent filters. Intent filters describe what the Activity can handle, including but not limited to URLs.

Through this system, any app can register for any url (IIRC you can filter by scheme, host, and/or path). When a url is invoked, the system asks the user which app should handle it if there are several that can. You can also set a default app for the given url, etc - the whole system, though very flexible, has been widely criticized as having mediocre UX (though IMO it mostly works just fine).

In Android M (unreleased), they've added a feature similar to the one in iOS 9 whereby you can ensure that URLs you define and own are always handled by your app. Essentially you host a JSON file at your domain, served over HTTPS, that specifies the SHA-256 fingerprint of your app's signing cert. Your app defines the URL filter much as before, and the system makes sure that you match the fingerprint.

Android being Android, you can still tweak the default handling of intents even if apps do this, but it's pretty hidden.


Not sure the above is entirely complete (though perhaps accurate), at least given what they are claiming. They claim in the introduction that the WebSocket attack can work on Windows and iOS, but they don't seem to explain how in section 3.3. My guess is they're saying that an app can create a background server on an iOS device and you can't control which apps connect to it. Not particularly insightful.

I'm skeptical as to their chops given the general disorganization of their paper and their overhyping of the scheme issue, which is pretty basic / well known. And, in fact, it's not that hard to authenticate an incoming scheme on iOS via the app prefix; you just have to dig into the docs a bit.
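
For what it's worth, the check looks roughly like this (a sketch; the bundle-ID prefix is a made-up example, and older code would do the same thing in the openURL:sourceApplication: delegate method):

    import UIKit

    class AppDelegate: UIResponder, UIApplicationDelegate {

        func application(_ app: UIApplication,
                         open url: URL,
                         options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
            // The system reports the bundle ID of the app that opened the URL,
            // so you can refuse requests that don't come from apps you expect.
            guard let source = options[.sourceApplication] as? String,
                  source.hasPrefix("com.example.trustedvendor.") else {
                return false
            }
            handleIncomingURL(url)
            return true
        }

        private func handleIncomingURL(_ url: URL) {
            // App-specific handling of the validated URL goes here.
        }
    }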


Their Websockets attack appears to be premised on a Safari extension that assumes it can trust a Websockets endpoint bound to localhost.


https://blog.agilebits.com/2015/06/17/1password-inter-proces...

This particular attack is worrisome because it doesn’t require “admin” or “root” access, unlike other attacks that depend on the presence of malicious software on the system.

It's a weakness.


It clearly is a weakness on OSX (apparently much less so on iOS). The issue is, it's not a new weakness.



