
Marketing campaigns often convince people to think things, however subconsciously, e.g. that Lincoln is associated with luxury or that X cereal is a wholesome breakfast.

What I hadn't seen until this was a marketing campaign that actually proselytized so many into proclaiming the message for free. On HackerNews, Reddit and in public places you're likely to find someone vouching for Apple's privacy practices, sometimes with the same verbiage that's on the billboards! Maybe that's a testament to Steve Jobs's lasting marketing impact.

I personally see the incentive structure which makes Apple more privacy-friendly than say Google. But I'm deeply suspicious of such a convenient message that the largest corporation in the world puts its resources behind. Also, being more privacy-friendly than Google and being privacy friendly are two different things.



I remember Tim Cook said they are not in the Ads Business. Unfortunately I am too busy; either Apple's PR has worked hard to delete those tracks of what Tim Cook said, or Google's search engine is no longer showing what I am looking for.

And I remember that "Apple is not in the Ad Business" before 2018 was the most cited defence and reason on HN. That was before the war on tracking, the submarine articles on ads, and the attack on Facebook.

Because you know what? Privacy is a Fundamental Human Right. And because iPhone is the only smartphone that values your privacy, banning iPhone sales in your country is also against Human Rights.

>But I'm deeply suspicious of such a convenient message.

I was probably one of the few who was deeply, deeply suspicious of the "Don't be Evil" Google in the early 00s, in an era of "Don't be Evil", when everyone in Tech thought they were saints. The self righteousness of Google, I thought nothing could be worse than Google's hypocrisy. I mean, how can something be worse than "Don't be Evil"?

Well, here we are. The era of "Fundamental Human Right". A company that worked with the CCP and invested $275B to build and improve the whole CE supply chain in China, helping those companies compete in areas where the CCP has a strategic interest, and continuing to invest and help those companies set up operations in India or Vietnam under the PR headline of "diversification from China".


There is a speech that was given at EPIC (Electronic Privacy Information Center) in 2015 that is likely to have the quote you are looking for.[0][1]

There is a short clip of his speech at the event which was reported by NBC News, in which he states the boundaries of their advertisement program at the time.[2]

EDIT: There is also this story from the Verge in 2014 which includes a lengthy quote about the advertisement program.[3]

[0] https://archive.epic.org/2015/06/tim-cook-backs-privacy-cryp...

[1] https://archive.epic.org/june1/

[2] https://www.nbcnews.com/tech/apple/its-wrong-apple-ceo-tim-c...

[3] https://www.theverge.com/2014/9/17/6368669/tim-cook-talks-up...


They're not in the search business because the potential is much more limited compared with selling hardware and services. People use YouTube and Facebook for many hours a day, and the ad revenue there is orders of magnitude higher than what Apple can get from ads in the App Store, News, perhaps iPhone search, and other Apple apps (hundreds of billions of dollars compared with single-digit billions for Apple).

Privacy is also very different for most of the ads Apple is currently showing. Search ads work without the need for tracking: when people are searching for something, you show them ads for what they are searching for right now, not for things from some model of their interests created from their tracked internet history.

This is why Apple can have an ad business and also destroy Facebook's (and others') ad businesses based on tracking: they don't need to squeeze every dollar from targeted ads.


> I remember Tim Cook said they are not in the Ads Business.

Yes, he said that after Apple failed in the ads business, just as it was focusing on leveraging its platform control against the firms that had beaten it, to reshape the field for the next try.


> I was probably one of the few who was deeply, deeply suspicious of the "Don't be Evil" Google in the early 00s, in an era of "Don't be Evil", when everyone in Tech thought they were saints. The self righteousness of Google, I thought nothing could be worse than Google's hypocrisy. I mean, how can something be worse than "Don't be Evil"?

I don't think that's accurate or fair, and I think (as I often see) it misses a lot of context around where "Don't Be Evil" came from, and what it really meant.

"Don't Be Evil" was basically there to highlight and contrast Google's desired culture with Microsoft's at the time of the late 90s/early 00s. That is, at the time, and especially early in Microsoft's existence, MS was pretty famous for "dirty tricks". E.g. look at the early history/origins of DOS, the anti-competitive tactics WRT DR-DOS [1], how they fought the browser wars, the full history outlined in United States v. Microsoft, etc. The icon for MS on Slashdot at the time was famously the Gates "borg" icon, and that is how a lot of people viewed MS.

When it comes to Google, I think the whole idea behind "Don't Be Evil" is that they believed that you could make money withOUT dirty tricks, and up until 2010-2012 or so I think this was largely true. People flocked to Google and their products not because they were forced to, but because the products were genuinely much better than the competition at the time. Search, GMail, Maps, StreetView, Chrome - when all of these came out I remember thinking "holy shit this is amazing".

The problem, though, is that at some point all very successful companies reach a size where I believe it's only possible to respond to your economic incentives, which are to grow at any cost. I mentioned 2010-2012 (maybe a little later, 2012-2014) because that's when I feel like I really saw Google's approach change to really squeeze the pennies from their existing business, e.g. when they made ads more and more indistinguishable from organic results, or when they made it so that any remotely commercial search term has an ENTIRE page of ads above the fold. Paying "the Google tax" became a real thing, e.g. you'd have to pay Google for an ad JUST on your own domain name because competitors might bid higher.

Thus, if anything, I give Google props for "holding out" a good ~12-15 years before their growth and economic incentives made it "must increase revenue at all costs" and the beancounters took over.

1. https://en.wikipedia.org/wiki/AARD_code


>"dirty tricks"

You mean like colluding with other big companies to suppress wages? Your timeline doesn't check out, since there's evidence of this going back to 2007.


A common Microsoft criticism is EEE. Learned about that from Sun.

Embrace. Extend. Extinguish.

I forgot about the Borg icon. Writing "M$" for MS in the 90s was what the cool kids did.


I would argue Google turned evil quickly, pretty much the moment they realized the surplus behavioural data they could glean off their users when they used their services could be extracted for profit, which began even before Gmail, allowing them to “personalize” your services. And by personalize, I mean Google built a profile on you to better learn how they could poke and prod at your behaviour in order to manipulate you into doing things for their customers, the advertisers.

I think the DoubleClick acquisition kicked this into overdrive and also allowed Google to start screwing both sides by extracting larger and larger rents from advertisers with its near-monopoly power and ad cartel with Facebook. We saw the Google tracking cookie spread like a plague, shaking down users for data even if they did not use a Google service, as sites effectively needed to install Google's tracking cookie on their customers' computers to take full advantage of the Google ad monopoly, obliterating the pretence of consent you had with a service like Gmail, where you were at least consciously agreeing to data scraping.

I honestly think it took them almost no time at all to go from PageRank and spiders to building their panopticon and the modern surveillance economy. I think the reason people didn’t consider them evil is that what they were doing was so innovative and groundbreaking that people didn’t fully understand the implications. The way their business operates shouldn’t even be legal, with them playing both sides of the ad market and their relentless spying being opt-out at best, and only if you have sufficient technical knowledge. The spying of Google and companies like it is undermining government privacy protections at this point, as the government can acquire spy data from private companies that it couldn’t legally gather itself (for good reason).


Wow … it’s insane how “Apple is not in the ad business” has been scrubbed from the internet.

I’ve seen this happen before. But not with a big corporation.


The modern internet gives me a strange sense of amnesia. I swear everything is heavily censored and redacted now, but how can I prove it when my primary view into this world is the search engines themselves? It feels a bit like the simulation hypothesis.


Which is why I increasingly want a browser that records everything I read, so I can always go back to it, instead of relying on a search engine, where I could have quoted the "exact" phrase and still not gotten any decent results.


> The self righteousness of Google, I thought nothing could be worse than Google's hypocrisy.

I wasn't very suspicious at the time, but learned to be. My take on this is different, though. I don't think it was a case of self-righteousness as much as the extreme naivete of some postdocs who were just entering the business world: naivete in thinking a statement such as that couldn't be twisted to the point it meant less than it already did ("evil" is not well defined), and naivete in thinking they wouldn't be the ones twisting it, whether on purpose or subconsciously, as business needs slowly changed and they had to justify keeping their business afloat and profitable, people in jobs, and shareholders happy.

You either set up your business such that it's incentivized to align with your morals, or your business (the market) will incentivize you to change your morals to align with it.

If nothing else, we've learned that practices that seem mostly benign one decade at low scale can become very troubling the next decade when done at a much larger scale and/or when additional consequences of the practice become known. Choices made that align with your morals at one time may have consequences that mean you were wrong, even if you couldn't really have known it, but now your business relies on this prior decision.

Running a business is hard, making overreaching statements you can't live up to later is easy.


I degoogled long before it became popular to do so. And I was on gmail early enough to have needed an invite.

But the writing was on the wall for anyone who knew what to look for. And it's the same with the likes of PayPal. I refuse to use these services because they're not banks and are not beholden to the same rules (they're getting more regulated, and may find themselves treated as bank equivalents eventually).


> I remember Tim Cook said they are not in the Ads Business. Unfortunately I am too busy; either Apple's PR has worked hard to delete those tracks of what Tim Cook said, or Google's search engine is no longer showing what I am looking for.

The first sentence of the actual article links to when Tim Cook said that.


A huge part of Apple’s positive incentive structure was the fact that they didn’t do ads.

But now that they are getting into the ad space that incentive structure benefit pretty much crumbles away.


You might be surprised to know that Apple has been doing ads for over a decade. [0] They‘ve had ads for a long time, and still do - App Store search ads[1] and Apple News ads, to my knowledge.

[0]: https://en.m.wikipedia.org/wiki/IAd

[1]: https://searchads.apple.com/


Let's not forget that App Store search ads replaced iAds, which was an in-app ad network. I assume that it was shut down because of too much competition in the space, but perhaps we'll see it come full circle as Apple diminishes the "effectiveness" of third-parties.


I don't get where people get that 'Apple didn't do ads'. I'm not even an iOS user and I know that much. Must be drinking unreal levels of kool-aid.


Apple was and is so bad at advertising that people think they didn't do it until recently.

Part of it just comes with the territory. Apple is a computer company that prides itself on not only having a sense of taste, but being able to impose that sense of taste on its business partners. This is contrary to the goals of advertisers - tastelessness kind of comes with the territory and advertising inherently messes with the user experience.


And there was the Steve Jobs era ad network that had big, interactive flash ads that Apple had to approve as being "good". I don't remember much except working on a video component of a Geico ad that was mobile-only in ~2009.


Also: both the App Store and Apple News cannot be uninstalled on macOS.


I have huge concerns about this, I think it's really hard for any publicly traded company who gets into ad-tech not to end up making some questionable choices. The incentive structure in the ad business is such that no matter how strong a core org you think you have, it likely corrupts over time. While Apple's track record on privacy is commendable, Tim loves (and needs to keep shareholders happy) a good Services growth story.

It's also clear at this point, the most profitable ads in the industry are the ones that most take advantage of personal data - this isn't a secret. The difference in profitability can be stark too, which is why I worry so much about the incentive structure.


They were always a marketing company though. See this more as the prodigal son returning to his roots.


How is it a marketing company?


> On HackerNews, Reddit and in public places you're loath to find someone vouching for Apple's privacy practices

FYI loath means "unwilling", so it doesn't make sense here. "Loath to X" is like "I would loathe doing X". The word you're looking for is "likely".


I see you were right, but I think both choices (loath and likely) are defensible. I originally thought jack was saying "it's unfortunate to find" or "it's disgusting to find".


Not op, but I wondered if this was a Britishism. My partner recently introduced me to "lousy with" meaning "abundant"...


"Lousy" also has a definition of being infested with lice (the singular form of which is louse); according to the Etymology Dictionary this may be the original meaning (https://www.etymonline.com/word/lousy)

From there, it seems to have developed as an American slang to describe "infestations" of other kinds.


My mom, who is not British and has spent almost no time outside of the Great Plains, uses this term. I think the "lous" comes from louse, the singular form of lice. A great abundance of not a great thing.


TIL! But my partner has heard it used in the UK like "I'm lousy with options" as in, I'm overflowing with choice. That was a fun convo as I was like ... What??


Had my negatives flipped, thanks for pointing that out!


It was quite a thing to see Apple's huge "What Happens on iPhone, Stays on iPhone" billboards up around town, at the exact same time they were announcing the rollout of a new content-side illegal-material scan-and-notify system.

(I think they've since delayed the rollout of that system indefinitely, after public outcry.)


To be fair - wasn't that scanning on-device, and only uploading metadata on things that you yourself were already uploading to their cloud?


It was restricted to that for the time being yes. But still a big step in the wrong direction. I don't want my own phone spying on me. It's a bridge too far. Scanning on a cloud service is a very big difference.

Does it matter in practice? No. But it makes me feel very different about it. That's important too.


Same way I feel about it. When the data is on your servers, I fully expect it to be analyzed and checked to ensure it's compliant with the host's standards. That's part of our agreement, as customer and service. When you move that software onto the device I use, now I have to be conscious of everything I interact with. It's a horrible sinking feeling that isn't easily mitigated by platitudes like "we promise not to abuse it!"


The Apple philosophy though is to think of the phone as an appliance, not a general purpose computer. The model of "ownership" is also a lot more gray with Apple devices. The idea that Apple has more control over the device than you do is the accepted norm. Given that, I think they could easily argue that your data is on their device, so analyzing/checking is expected. They've been slowly iterating more and more to this model for years, likely because I think a lot of people will not go along with it unless the heat turns up slowly.


That’s exactly what they proposed to do.


I guess we have different definitions of "Apple's servers" then.


When you sync or upload a folder to iCloud, iCloud is Apple’s property. They were scanning content before it landed on Apple’s property to enable E2E crypto.

Sounds like you’re taking exception with the explicit parental notification, where a parent with a child enrolled in their iCloud “family” can request to be alerted when their minor child takes an action on the phone owned by the parent.

The EFF wrote an awful blog that deliberately confused the already confusing release from Apple. Your privacy is almost certainly weaker as a result, as various entities can use a subpoena or warrant to access your files.


The problem is that the data wasn't on their servers, it was just flagged to go to Apple's servers.

And once you're scanning files with one flag set, nothing technologically prevents the scanning of files without that flag being set. And to quote myself from the Google Stadia brouhaha - "companies lie in PR statements" - so I have no reason to trust Apple's statement that they would never scan other files.


You’re getting upset because they chose to tell you that this is happening.

The material that they scan for is illegal to possess. You should assume that anyone that accepts cleartext uploads of binary data is scanning for that and other material.

At the end of the day, you don’t have the juice to negotiate contractual protections, so corporate self-interest is really your only protection.


> The material that they scan for is illegal to possess.

Do you know what else is illegal to possess? Proof of abortions in some states. Proof of being gay in many countries. Winnie the Pooh in at least one country.

> You should assume that anyone that accepts cleartext uploads of binary data is scanning for that and other material.

Again, since this seems to be lost every time, this isn't being checked post upload. That matters.


> And once you're scanning files with one flag set, nothing technologically prevents the scanning of files without that flag being set.

You need to read up on how the system worked, because they picked a design that made absolutely no sense if they wanted to do that. They’d have to redesign it to work in a different way if they wanted to do that.


You can read up on it, if you want. I'm not going to use iCloud if it scans my data before it hits Apple's servers.


If you don't trust them to do what they are saying, when they are saying it happens - then why on earth do you trust the device to not do it just by avoiding iCloud?


The phone was analyzing what you would be sending to the cloud.


True. What stops it from analyzing other files on your phone? A policy block. It's simple to change policies (or be forced to change policies).


What’s stopping them from doing the same with your unencrypted photos in iCloud?

From a technical standpoint, at least you are protected from future policy changes if your files in the cloud are encrypted.

Understand, I am playing the devil's advocate role more than anything else.


Nothing. And since it's uploaded, I've agreed that it's OK for them to scan and report on them.

The key point I'm trying to make is that where the data is located matters for whether Apple (or Microsoft or Redhat or whichever company) has the ability or right to read and report on that data.

> at least you are protected from future policy changes if your files in the cloud are encrypted.

If, and only if, that data is never synced back to your phone (which Apple does currently).


> What stops it from analyzing other files on your phone? A policy block.

The phone would be literally incapable of determining if there were any matches. Please read up on how it was designed to work.


Yes, that's fair and true -- they promised the scanning would self-limit to content being synched with iCloud.

But I think that's pretty thin gruel, since now you're just a feature-flag (or even a bug) away from all content being scanned. More broadly, the entire endeavor is very much at odds w/ the sentiment expressed in their public advertising.


> now you're just a feature-flag (or even a bug) away from all content being scanned.

That’s not true. The system didn’t work the way you are assuming. The device had no knowledge of any matches. The “scanning” was a coöperation of client-side and server-side code, each with an extremely limited knowledge of the data involved.


I don't think you're correct -- the phone uploads security vouchers from photos, any photos, to Apple's servers. If enough of the vouchers match the image database, Apple can decrypt the vouchers into low-res versions of the photos.

Apple says this would only be used on iCloud synced photos, but I don't see any technical reason the process could not be performed for any photo.

Am I mistaken?

> The device had no knowledge of any matches.

True, but how is it relevant?
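The "decryptable only after enough matches" property being debated here is typically built from threshold secret sharing. A toy Shamir sketch (illustrative only; Apple's actual construction combined threshold sharing with private set intersection and is considerably more involved):

```python
import random

P = 2**61 - 1  # prime modulus for a toy finite field

def make_shares(secret: int, threshold: int, n: int):
    # Random polynomial of degree threshold-1 with the secret as the
    # constant term: any `threshold` shares reconstruct it exactly;
    # fewer shares reveal nothing about the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# One share per matching voucher: the server can only reconstruct the
# decryption key once the match threshold is reached.
key = 123456789
shares = make_shares(key, threshold=3, n=10)
assert recover(shares[:3]) == key
```

This illustrates why, below the threshold, the server holds shares that are statistically independent of the key, rather than relying on a mere policy promise not to decrypt.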


The point of the feature is lost here.

They want to scan on device so they do not have to scan it in the cloud.

Because people are in full FUD mode on this, we're stuck storing photos without full end-to-end encryption, because Apple has to do the scanning on their side.


Well, there is another option: Apple could actually respect your privacy, support end-to-end encryption, and not scan your content at all.


They do, if you choose not to push your content into their servers.

If you don't want them to check for unwanted content, don't put your content on their machines.


How does this work with regard to a company’s obligation (if there is one) to scan for illegal illicit material (I don’t feel like typing out the term that we all know)?


Sure, if the law requires it, then the company must do it. At that point, you are living under a rather intrusive government!


I don't think there are many governments in the western world that wouldn't get to that point. The entire point of client-side scanning was to head off the possibility of a law saying they have to scan everything server-side for the exact same reasons.

> Well, there is another option: Apple could actually respect your privacy, support end-to-end encryption, and not scan your content at all.

This is not an option that you, or Apple, actually have available to choose.

People's FUD and misrepresentation of what was happening, and your snark about "yes, but their advertising campaign says things stay on the phone" (which is literally true), are IMO likely actually accelerating the privacy degradation. When they are compelled by law to scan all of this centrally in one place in the cloud, I guess you will be happier that it's more invasive but just happening "elsewhere"?


> To be fair - wasn't that scanning on-device, and only uploading metadata on things that you yourself were already uploading to their cloud?

On-device scanning like that would be pointless, though. IIRC, stuff uploaded to their cloud is already accessible to Apple for server-side scanning. The controversial thing was the on-device scans would trigger some kind of upload of un-uploaded stuff to Apple for further investigation.


> IIRC, stuff uploaded to their cloud is already accessible to Apple for server-side scanning. The controversial thing was the on-device scans would trigger some kind of upload of un-uploaded stuff to Apple for further investigation.

No, the stated behaviour was that it would be uploaded to the cloud at the same time as the encrypted image. Uploaded data was still "encrypted" "at rest" (but possible for them to decrypt, just not done routinely). The whole point of the client-side scanning was that the scan was done locally on the phone and uploaded alongside the image, as an explicit alternative to them regularly decrypting and scanning all user data on their servers. The stated operation was: if the phone's fingerprint matched a public CSAM fingerprint list, the image would be automatically decrypted and scanned with a second, private fingerprinting algorithm on the server; only if this second, private fingerprint also matched would the data be flagged for potential violation (and inspected).

The alternative to this was them just regularly scanning all data in their cloud anyway. Apparently people are happier with that option.
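As a toy model of the two-stage flow described above (the fingerprint functions here are hypothetical stand-ins; the real proposal used perceptual hashes and a blinded database so the device could not tell whether a voucher matched, which this sketch omits):

```python
import hashlib

# Toy stand-ins for the two fingerprinting algorithms.
def public_fingerprint(data: bytes) -> str:
    # Matched on-device against the (blinded, in the real system) list.
    return hashlib.sha256(b"public:" + data).hexdigest()

def private_fingerprint(data: bytes) -> str:
    # Second, server-only algorithm, never shipped to the device.
    return hashlib.sha256(b"private:" + data).hexdigest()

KNOWN_BAD = {b"known-bad-image"}  # placeholder corpus
PUBLIC_DB = {public_fingerprint(d) for d in KNOWN_BAD}
PRIVATE_DB = {private_fingerprint(d) for d in KNOWN_BAD}

def client_upload(image: bytes) -> dict:
    # The device attaches a voucher derived from its fingerprint;
    # in the real scheme blinding kept the match result hidden from it.
    return {"image": image, "voucher": public_fingerprint(image)}

def server_check(upload: dict) -> bool:
    # Stage 1: voucher must match the public fingerprint list.
    if upload["voucher"] not in PUBLIC_DB:
        return False
    # Stage 2: the independent private fingerprint must also match
    # before anything is flagged for human review.
    return private_fingerprint(upload["image"]) in PRIVATE_DB

assert server_check(client_upload(b"known-bad-image"))
assert not server_check(client_upload(b"holiday-photo"))
```

The design point is that a false positive in the public list alone is not enough to flag an image; the server-side private fingerprint has to agree as well.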


> The controversial thing was the on-device scans would trigger some kind of upload of un-uploaded stuff to Apple for further investigation.

No, a load of people assumed it would do that, but it’s not possible with the proposed scheme because the device had no knowledge of any matches.


Please provide a source for that claim.


I wouldn't trust a corporation of this size to make sure there aren't recurring bugs which cause scanning of things I have on my device, but do not upload to their servers. If the code for scanning, flagging me and reporting to authorities is on the device, then I expect bugs (intentional or not) which will trigger it, even though I don't use iCloud. Put the scanning in the cloud, then I'll be fine with it - I don't use the cloud.

Also the scanning was calculating hashes based on content, not just metadata.


> If the code for scanning, flagging me and reporting to authorities is on the device

It’s not. The device has no idea if there are any matches. Everybody is assuming how it works without actually reading how it works.


Devil's advocate: "We kill people based on metadata." -Snowden


That statement was made by General Michael Hayden[0], former National Security Agency director, former Central Intelligence Agency director[1].

[0] https://youtu.be/kV2HDM86XgI?t=1072

[1] https://en.wikipedia.org/wiki/Michael_Hayden_(general)


Thanks for pointing that out! I'd heard it from Snowden, but didn't realize he wasn't the original source: https://twitter.com/theyesmen/status/652963715168534528


I don't think it's possible to exist as a public-facing corporate entity without some kind of content scanning mechanism. If that were possible, then every child abuser would simply move their data onto those platforms and they'd become untouchable.

Even MEGA, which signals the virtues of privacy/security through the prominence of decryption keys on its UI flows, will still report illegal content to authorities and display a message saying so if content was removed for that reason.

Any publicly traded company that touts perfect privacy cannot deliver what they are claiming, or they'd become the service of choice for every type of disenfranchised person - including child abusers.

Apple has received much more flak than the average corporation over this issue because this fundamental impossibility of perfect privacy clashed with its own privacy signaling in a loud way, and the flurry of debate over the technical merits of the novel, widely shared on-device scanning solution caused much more scrutiny than the boring server-side scanning that has been ubiquitous for decades.

But the fact is that no matter how Apple tries to approach the CSAM problem, it will ultimately have to weed out child abusers from its servers or be publicly and legally lambasted. That is what society has decided is best for the welfare of children, and as a result we will have to live with an imperfect level of privacy as provided by such entities.


Yup, and now your materials are in clear text on their service getting scanned routinely I’m sure.

That particular issue was the privacy advocacy people run amok.


Actually, doing the content scanning on your phone is perfectly in line with things “staying on your iPhone”, as they were proposing. Not that I agree with it. But it is consistent.


But results about matches don't stay on the phone, which I think is clearly a violation of the statement (unless you are interpreting it in an extremely literal way).


It only gets sent to Apple if you turned on iCloud photo syncing to send the photos to Apple.

That means the alternative would be to send the photos to Apple and have Apple scan them. Either way you send the photo to Apple and metadata gets generated about CSAM. It’s just a matter of where the data gets generated.

I’m also uneasy about it happening on the phone. But honestly, by it being processed on the phone, that means it can be encrypted before it gets to Apple’s servers.

I’m basically working under the assumption that scanning for CSAM is legally required.


> I’m basically working under the assumption that scanning for CSAM is legally required.

It is explicitly not legally required in the US [1]. Providers are required to report "apparent CSAM" that they find on their own, but they are not compelled to search their servers or private devices for its presence.

And this is the case for a very good reason: if it was mandated by US law, then prosecutions would be subject to much stronger 4th amendment review under the "state action doctrine" (i.e., the companies are searching your files without probable cause as compelled representatives of the government.) The current arrangement evades this review under the very thin fig-leaf that US providers are doing the searching on their own.

[1] https://crsreports.congress.gov/product/pdf/LSB/LSB10713


FOSTA/SESTA and other laws push back on that, wherein a neutral host (website, hotel) can be held responsible for crimes committed on their property if the government decides they are generally aware. Apple doesn't want to be an accessory. So even if they can't be required to scan, they can be punished for not scanning if something illegal turns up.


IANAL and certainly don't want to defend those laws, but I believe FOSTA/SESTA ban providers from operating services with the intent to promote or facilitate various crimes. In other words, the provider has to knowingly distribute the material. I'm pretty sure that Apple encrypting its photo backup service would not satisfy these criteria, but if it did and the only way to comply with those laws was enforced CSAM scanning, then many CSAM prosecutions based on it would probably be tossed out.


As far as I know, it's not legally required, at least in the US, though I wouldn't be surprised if suggestions from gov behind the scenes were the inspiration for this. I guess the EU is in the process of trying to mandate something like this.

Which would be unfortunate. At that point, you won't be able to maintain digital privacy from the govt w/o de facto becoming a criminal.

CSAM is, I think, simply the initial justification for these systems, since it's widely reviled. But the system itself is not CSAM-specific, and the temptation to expand its scope will likely be irresistible.

If your goal was to become an authoritarian tyrant, you would be very happy to have this in place. :-)


> But results about matches don't stay on the phone

Results about matches aren’t ever on the phone in the first place. Only the server can determine if there are any matches.


A few months ago, when launching apps on macOS became sluggish because their telemetry service had high latency -- that was the moment I lost faith in all of Apple's privacy claims.

PS: Still use a MBP, iPhone and an Apple Watch. :(


If you just put macOS on a proxy you can see that it basically never stops phoning home. Privacy my ass.


On a laptop not on WiFi what happens?


That telemetry could easily -- more easily in fact -- have been done in a privacy protecting way: have your machine ship with a signature database and then have it randomly and frequently download deltas. Then check the signature database locally. It would be faster too. Especially on the mac we're not talking about an enormous database.

Rather disappointed that Apple didn't take this route. They do do something similar with their virus database (XProtect).
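A minimal sketch of the local-check design described above, assuming signatures are shipped as a flat set of SHA-256 digests and updated via downloaded deltas (hypothetical helper names, not Apple's XProtect code):

```python
# Sketch of a privacy-preserving local check: the signature database
# lives on the machine, deltas are fetched periodically, and the lookup
# itself happens locally -- nothing about the launched app leaves the device.
import hashlib


def load_signatures(path):
    """Load known-bad SHA-256 digests, one hex digest per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}


def apply_delta(signatures, added, removed):
    """Merge a downloaded delta (digests to add/drop) into the local set."""
    return (signatures | set(added)) - set(removed)


def is_known_bad(signatures, binary_bytes):
    """Hash the binary and check it against the local database."""
    digest = hashlib.sha256(binary_bytes).hexdigest()
    return digest in signatures
```

The key property is that the network only ever sees generic delta downloads on a fixed schedule, never a per-launch request that can be correlated with which app was opened.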


They didn’t say what happens on macOS stays on macOS.

That type of capability is core to most general-purpose OSes today. Any significant company is running EDR and similar tooling that’s even more intrusive


That "telemetry" (which is misleading in the current context) was about checking for malware. I'm talking the specific case of launching apps on macOS.


Part of the issue IIRC is that application names were exposed in the request, not encrypted in any way. So there are legitimate privacy/security concerns in publicly announcing every application that you open on your computer.


that is still very much the same thing


Not only that, in my experience, the support is fanatical in nature. I often get downvoted and flagged for daring to state the obvious, which is that Apple is positioning itself to become one of the largest companies in the Ad space. It's probably more than just great marketing / PR (at least - of the usual kind).


> "Also, being more privacy-friendly than Google and being privacy friendly are two different things."

in theory, that's ok, if we have healthy, functioning markets that are free from undue influence of any individual participant. the "invisible hand" of the market would drive it iteratively toward more privacy (assuming this is valued by more than minor segment, greater than ~15% of the market). the market would (and should) be an ongoing conversation between suppliers and consumers to reach all the profitable corners of supply and demand, rather than a couple behemoths with megaphones telling us how great they are, rather than showing it through their products and practices.

p.s. - has anyone else noticed adguard doing port-scans on your gateway from their dns service IPs? i haven't dug into it yet, so i don't know whether it's spoofed or whatnot.


> the market would (and should) be an ongoing conversation between suppliers and consumers to reach all the profitable corners of supply and demand, rather than a couple behemoths with megaphones telling us how great they are

I agree but where do we see this? Everywhere I look it’s mega corps. Food, fuel, power, electronics, clothes. I can’t think of a good example of the ideal relationship.


mostly in commodities markets (almost by definition, ha). if we had an anti-trust division with any teeth, we'd have many more markets like this, as that's the whole point of anti-trust enforcement--to un-distort markets to drive greater efficiencies and maximize value across the economy (not just in large corps and solely for the already wealthy).


Monopolies are optimally efficient across the economy, as long as the monopolist doesn't get too greedy. Competition is wasteful -- competition is why the deadweight-loss ad industry exists!


monopolies seeming to be efficient like that is only true in a limited static analysis. in a dynamic and complex economy, there's great value in the flexibility, ingenuity, resilience, price discovery, and creative restructuring provided by multiple competitors in a given market.


> functioning markets that are free from undue influence of any individual participant. the "invisible hand" of the market would drive it iteratively toward more privacy (assuming this is valued by more than minor segment, greater than ~15% of the market)

The market is for advertisers, and advertisers - whether they are small or large businesses - value tracking and measurability of their advertising investments.


advertisers value a way of determining ROI, which doesn't necessarily require pervasive tracking (see: nielsen ratings of yore).

in any case, my point was about the consumer electronics market, which is apple's core industry, and which, in a healthy and well-functioning market, also has a key stake in this conversation (driving it toward non-distorted, optimally efficient outcomes).


> doesn't necessarily require pervasive tracking

I disagree. Incrementality studies (which measure ROI) as advertisers want them are basically impossible with ATT. You need to be able to pass an ID between apps.


yes, but you haven't shown that that translates into more precise and accurate ROI. marketers and advertisers believe it should, but there's no solid proof. that's because markets (like any human endeavor) are complicated beyond our ability to model (and solve) them deterministically. attribution models (such as incrementality studies) can sometimes give you clues, but can't really tell you why any given person bought something with any certainty. it's the old adage: half of advertising dollars are wasted, but you don't know which half.


> there's no solid proof

I think modern incrementality studies give solid proof of the value of an ad on the basis of a good model of how the world works. Of course, models can be wrong - it could turn out that solipsism is true, physics is false, and the world outside of your own mind is a figment of your imagination!

That the world is complex and models are inherently wrong does not mean that nothing of value to businesses was lost with ATT.


no, there's a belief of value (and fomo), but not concrete proof. there's little correlation between the price of an ad and the value of attribution beyond the value of the ad itself.


Incrementality studies provide concrete proof of how your ad impacts behavior. How you value that behavior change is up to the business and reflected in the price they are willing to pay.


Before ATT happened, FB had a tool that would run experiments to assess the incrementality of your advertising. It's possible, but there are a bunch of privacy trade-offs.


right, but again, those are very likely probabilistic, population-level models that make assumptions about how to attribute credit--does it all go to the first view/click? how likely is the first view/click really the first view/click? do you instead apportion credit across clicks/views? how? it's somewhat useful at a population level, but not at all at an individual level, especially not for the tradeoff in privacy, anonymity, and autonomy.

but the kicker is, is it better than just doing studies without the more invasive attribution data, especially in relation to the higher price and market consolidation? very unlikely. ad monopolization means more of the value in the value chain goes to the monopolist regardless of the proportion of value they provide in the chain.


> is it better than just doing studies without the more invasive attribution data,

Absolutely, as the controlled incrementality study is impossible without either attribution or some group-based approximation of attribution (i.e. federated cohorts, etc.)


> right, but again, those are very likely probabalistic, population-level models that make assumptions about how to attribute credit--does it all go to the first view/click? how likely is the first view/click really the first view/click? do you instead apportion credit across clicks/views? how? it's somewhat useful at a population level, but not at all at an individual level, especially not for the tradeoff in privacy, anonymity, and autonomy.

So the idea is that you run the experiment, and this can then help you understand where you should attribute value, as you know that the only difference between the two groups was the exposure to FB ads.

Now, to be fair, unless FB is most of your spend, you still have a bunch of problems, but they'll wash out equally across conditions (theoretically, at least) so the estimate should be good.
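A toy sketch of the holdout design described above: users are randomly split into an exposed group (sees the ads) and a holdout group (sees none), and incremental lift is the difference in conversion rates. The numbers here are illustrative, not real campaign data:

```python
# Holdout-based incrementality estimate: because assignment to groups
# is random, any difference in conversion rate beyond noise can be
# attributed to ad exposure, without per-user cross-app tracking.
def incremental_lift(exposed_conversions, exposed_size,
                     holdout_conversions, holdout_size):
    exposed_rate = exposed_conversions / exposed_size
    holdout_rate = holdout_conversions / holdout_size
    return exposed_rate - holdout_rate


# Hypothetical campaign: 5.2% conversion with ads vs 4.5% without.
lift = incremental_lift(520, 10_000, 450, 10_000)  # ~0.007, i.e. ~0.7 points
```

Note this only needs group-level conversion counts, which is why the parent says the other confounders "wash out" across conditions in expectation.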


I like to say that Apple is a marketing company that makes decent tech products. They absolutely played this market like a fiddle. They sat back and watched Google and FB absorb tons of bad press and I'm sure they were feeding it behind the scenes. It always felt to me like a ploy. As much as ads and the SEO game had their problems, they were there to support the open web. Apple kept tapping the brakes on improving their web browsers and driving users to apps because that was their walled garden. Owning an iPhone now is essentially a status symbol; owning an Android means you're poor. The privacy rules designed to kill ads were only ever designed to hurt Google, not to protect users.

As a counterexample of marketing gone wrong, Amazon has very steadfastly never sold their customer data to anyone. No one ever thought for a second that they did this because they were interested in privacy.


Google also has never sold customer data. Amazon has been in legal trouble several times for stealing customer (product seller) data.


“Selling data” means nothing. Amazon trades in data services all of the time. Facebook is serving up retargeted Amazon ads in near real time.

Apple usually designs their experiences around things they control end to end.


'Deeply suspicious' is an understatement. These corporations don't have your back, and if something benefits the user, it's only as a side effect. Sadly, many still buy into the messaging, which must be hard to avoid when Apple's marketing has always been about making you feel like the cool kid on the block. Anybody who has ever believed Apple's pro-privacy scam is living in a fairy tale.


No one has anyone's back, except if your mother loves you, so what's the point of that complaint?

Alliances of mutual benefit are still good.


Except that this isn't an alliance, and the point of the complaint is that they market themselves as pro-privacy when they are anything but.


apple uploads and modifies all photos you take on your device with the explicit stated intent of referring you for prosecution. That doesn't sound very private to me.


It's called doing well by doing good, and earning a good reputation.

If you do good, people will talk about it, and you get to talk about it.



