Isn't this overblown? They're (reasonably, in my opinion) looking for telemetry in order to improve the program. It's very hard to design in a vacuum, and some sort of (respectful) telemetry would be fine with me.
We can't both want OSS to improve and resist giving feedback. I think the possible UX improvements are worth giving them some data, as long as the scope remains limited (which it should, since it's OSS and we can see what gets sent).
I thought it was more about the fact that it's opt-out (if I understand it correctly!).
Companies have existed for centuries without needing constant telemetry... we have many methods that do not require telemetry: surveys, emails, forums etc.
In fact, there is a subreddit here [0] where the devs/owners could easily ask questions.
Edit: Also, I think people are totally jaded with telemetry. There are some companies that take data whether you want them to or not, so people are naturally wary... what's to stop them pushing out an update that grabs a massive pile of sensitive data and then, when the backlash starts, throwing their hands up and saying "sorry"... much like Google has done.
Really? That's a weak argument if ever I heard one.
Perhaps it's time to take a stand and say "no more!"
What's to stop them adding something else later that "accidentally" grabs IP addresses and sends them "accidentally" to their third-party analytics company that doesn't give a shit about your privacy? That company then takes their other data, joins it all together, and adds to the picture of you that you never asked them to create!
When you open this door, there is no closing it.
The argument comes down to this (for me, at least): You got the software to this point without telemetry. Why do you need it now? Is the application all of a sudden unusable?
Edit: I forgot! They already take your IP address but my point still stands!
I disable telemetry whenever I can. But the major vendors already use it everywhere (Firefox, VS Code, the Rust installer, to name just a few) - I've disabled it in all of those, but most people don't, and we don't always pay attention to apps that add this as a "feature".
Correction: rustup (the Rust installer) did collect telemetry, but it seems to have been removed now: https://github.com/rust-lang/rustup/issues/341 The reputation may stick, as we know :) No worries now.
I mentioned this in the last thread here, but due to COPPA, generally it is illegal for businesses operating in the US to collect data on children under 13, even if you ask (there are exceptions but usually you need to prove that you have the parents' permission). They are doing what is required by the law, and in fact it was probably already illegal for children under 13 to use the product in ways that could generate personal data. So nothing has really changed, it seems they are only explaining what the situation already was.
It's still irrelevant if you use it offline or opt out of the analytics. Yes, an adult will probably have to set it up that way for them, but this has always been the case with any internet-enabled computer: it falls on the parents (or the school, childcare service, etc.) to set up parental controls and oversee the child's online activity.
I think their approach is pretty sane - I don't know if it started out that way, but 'opt in only' seems like the default we want. But like parental responsibility, that's aside from the main point.
The fact that they are collecting this data now subjects them to COPPA requirements, thus the restriction to people over 13. If they hadn't started collecting data, they would not have had to comply with that restriction.
That doesn't really seem relevant: if you want a data-driven development process, then you have to, you know, collect data. If you said they should just use email or GitHub issues or something, and have people submit their feedback manually, those have the same deal, and they would still have to comply with COPPA regarding that data.
Yes, that is true - they would have to comply regarding the data collection. However, they would not need to impose limitations on the usage of the application itself, only on the tools/platform used to submit the data.
> Companies have existed for centuries without needing constant telemetry
And if you're fine with software from the 1700s, you're welcome to use that, but if you want UX improvements to Audacity, there's only so much that can be done without telemetry.
Quantitative (i.e. telemetry) data gives us the what, not the why.
Qualitative research (usability testing and other observation methods, up to and including ethnography) can give us the why (e.g. why users do things we didn't expect).
Modern UX design can make use of both modes of user research such that they support each other, i.e. discover the what via quant and understand it better via qual, to land at a solution.
Or discover both why and what via qual, and then determine severity (how many users get stuck in a given way) via quant.
But of course, there needs to be explicit consent such that trust is not breached, no matter how inconsequential the data collected is considered to be. The decision belongs to users.
Apparently there wasn't consent.
Edit: Ideally, should anonymised usage data in OSS projects be public? Such that we can see the data and design decisions made based on it? Perhaps this would generate more of the trust that was desired here.
Companies like Apple and Digital Research were producing great UI/UX in the 80s, well before widespread internet usage made telemetry a viable option for companies.
(I mean the 1980s, just to stave off any attempts to send me back in time to observe computing in the 18th century).
Apple's OS pre-OS X, and Windows 3.1/95, used to crash all the time. A big part of the improvement in crashes (from talking to people at those companies) is automatic submission of error reports.
Also, Apple was selling Apple IIs for the modern equivalent of $5,000. I'm sure Audacity could achieve some great in-person UI studies if people were willing to pay that kind of money.
I don't understand this argument. "Apple did good UX without telemetry in the 80s, so you should be able to do good UX without telemetry now, Muse".
Like, what do people think they want the telemetry for? Are they planning to sell the treasure trove that is "how many people clicked on the 'select audio sink' button" to the highest bidder?
You can tell them "do it without telemetry" all you want, but in the end it's just not going to be as good as if they had feedback.
I'm pretty certain Ardour and Reaper don't have any telemetry, and they have UX a million times better. Audacity can be muuuuch better without having to stoop to telemetry.
> They're (reasonably, in my opinion) looking for telemetry in order to improve the program.
The original Reddit post (Discussion - https://news.ycombinator.com/item?id=27727150) stated Audacity would collect data necessary for "law enforcement, litigation". That doesn't sound like telemetry for improving the app.
Even if it's only added to cover their asses or for legal purposes, with no intention to exercise that provision, it's already there, which means that changing their mind in the future and making use of the provision costs them nothing, and they won't have to tell anyone about it.
There are only a handful (<1%) of people who care about having Sentry (crash report collection) or GA (click or view analytics without PII) in an app. But those people are very vocal about it. So it looks like a lot of people care about it.
We are not even talking about tracking for ads or for surveillance. It is only about logs for crash analysis and UX improvement.
No. And we've done well without telemetry for decades. And don't be daft, multimedia was one of the biggest drivers of technology over the last 5 decades. Telemetry has only been practically possible in the last decade or so.
Is it really so hard to just ask people? No need to spy on them. Also I don't think spying on them will improve the product. Telemetry has limited value in the first place.
Having worked on a variety of user-facing products for several years, I can tell you that it is hard. It depends on your audience but typically many users don’t report problems. If they do, they don’t give you enough details to diagnose the issue. And why should they? They’re not knowledgeable in how your system works. So you have to reach out to them and ask for more information. Maybe they need to reproduce the problem to give it to you. If you’re unlucky, they won’t be able to reproduce it. Even if they were able to, you have now taken more of their time.
The ask could be as simple as "hey, this went wrong, but you can click here to send us the error info and someone will get back to you" and many users will STILL not share the information. You might think that they must not care that much, but oftentimes you'd be wrong. Users can be surprisingly irrational at times.
In a lot of the software I've written for businesses, telemetry was critical to getting quick answers to things like "Is anyone using this specific portion of the app?" or "Are the new users using the application yet?"
If you're expecting an answer different than "yes" to either of these questions, it means you've made a very expensive mistake much earlier in the process.
The right time to ask if anyone will be using a specific portion of the app is when the feature is being proposed, not after it's been implemented and released. Replacing discovery up front with post-release telemetry is a sure way to lose money.
Conversely, if you're not making that mistake, then you don't really need telemetry to tell you what you already know.
I think there's plenty of data for that, given that it's how UX designers generally do things nowadays. What I'd like more to see is a telemetry library or service that respects your data, so we can say "oh, they're using OpenTelemetry, so that's fine", where OpenTelemetry guarantees that it will respect user data.
That sort of thing can't really exist. Consider this scenario:
A developer of an email client wishes to use telemetry to figure out which IMAP features are most worth supporting. They could, say, log every IMAP server you connect to. Or instead run an IMAP ID command and log the response. Or they could ship the CAPABILITIES response verbatim. Or maybe they could carefully parse the CAPABILITIES response and only report the existence of specific tokens (which may include capabilities not yet supported by the client) back as telemetry.
There's a gradient of scumminess going on there, all to (purportedly) track the same information. What matters for privacy concerns is the choice of data being tracked; how it's actually collected and transmitted is comparatively unimportant. So what you really want is someone to approve the choice of what to track, which isn't what a third-party library or service is likely to give you. And you don't necessarily need to route it through a third party; if we're talking about open-source software, the data that's being tracked is public knowledge, so an organization that reviews telemetry and gives a stamp of approval would be sufficient to achieve the same ends.
But even then, I suspect many people are going to have different ideas as to what data is safe to track. Even the list I gave for tracking IMAP, I fully expect that there exists sharp disagreement about whether some of those entries would be okay for telemetry purposes.
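To make the most conservative option on that list concrete, here's a minimal sketch in Python using the standard library's imaplib (the whitelist and the function name are purely illustrative, not any real client's code): connect, read the advertised capabilities via the documented capability() call, and report only the presence or absence of a fixed set of tokens.

    import imaplib

    # Illustrative whitelist: only these capability tokens would ever leave the client.
    INTERESTING = {"IDLE", "CONDSTORE", "QRESYNC", "MOVE", "UIDPLUS"}

    def capability_telemetry(host: str) -> dict:
        """Report only whether each whitelisted token is advertised by the
        server: never the raw response, the hostname, or any login detail."""
        conn = imaplib.IMAP4_SSL(host)
        try:
            # capability() returns e.g. ('OK', [b'IMAP4REV1 IDLE MOVE UIDPLUS ...'])
            _typ, data = conn.capability()
            advertised = {tok.upper() for tok in data[0].decode().split()}
        finally:
            conn.logout()
        return {token: token in advertised for token in sorted(INTERESTING)}

Even that most conservative variant still tells the developer which server features a given user's provider offers, which is exactly the kind of thing people will disagree about.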
Having the choice of whether to send, and the transparency to see what is sent, would be great.
Sometimes projects take their cue from enterprise.
Hypothetically, if Microsoft can’t count how many SharePoint installations they have active and on what version, telemetry could help them plan a roadmap. Only that inch is often taken and stretched into a mile.
What's wrong with the feedback they get through traditional channels, though? There's already a GitHub issues page and the forums. There's no need to exfiltrate user data to figure out something they're already telling you.
They're looking for things like "how much usage does X thing get?", and this is hard to get from users for various reasons. You need lots of fine-grained feedback, like menu clicks and things like that; you can't learn whether a button is easily discoverable from GitHub issues.
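As a rough illustration of what that kind of data is (the class and event names below are hypothetical, not anything Audacity actually ships), it's really just aggregate event counts, gated on the user having opted in:

    from collections import Counter
    from typing import Optional

    class OptInUsageCounter:
        """Counts UI events locally; nothing is exported unless the user has
        explicitly enabled sharing. All names here are illustrative."""

        def __init__(self, sharing_enabled: bool = False):
            self.sharing_enabled = sharing_enabled
            self.counts: Counter = Counter()

        def record(self, event: str) -> None:
            # e.g. record("menu.effect.noise_reduction")
            self.counts[event] += 1

        def export(self) -> Optional[dict]:
            # Aggregate counts only: no timestamps, no identifiers, no content.
            return dict(self.counts) if self.sharing_enabled else None

Aggregates like that answer "does anyone actually click this?" in a way a GitHub issue rarely will.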
One way to do this is to make it a part of a beta program rather than baked into the main app. That makes it slightly more manageable for the developers too.
That's true, but I don't know how many would bother installing the beta, especially given that most distributions use stable. This would be the same as opt-in telemetry.
Windows Insider already had over 10 million users 4 years ago[0]. I'd say that approach works well enough. I myself used it on Windows Phone 8, because some upcoming updates came with cool features and being in the Insider program gave me earlier access.
I also used Firefox Nightly for a while because of the massive performance improvements at that time. As Audacity claims they're doing it for UX improvements, it would be easy to get frequent Audacity users interested in a beta build featuring tracking but also all the improvements being tested.
GitHub is in no way something that can be (or rather: is) used by anyone outside the tech bubble. Audacity, however, very much is. It is used by a lot of people who will never write a line of code and don't care (and shouldn't have to) what GitHub even is.
As someone who has done a little support work: The problem is that those channels require users to describe the problem to you. Many users are surprisingly bad at doing this.
It is possible to create opt-in, on-demand reporting tools.
Someone I know well created one such tool, based on a common reporting template of a major Free Software project, after realising that the template itself was based on information readily obtained from the system. Fleshing that out a bit resulted in an automatic system self-documentation tool. They'd introduced it to the support group at a former company, and informal feedback several years later was that this was the principal diagnostic utility for the team. There was no ongoing telemetry, merely a "here's the script, run it and send us the output". (If necessary, the output could have been automatically emailed or transmitted by other means.)
There are now several of these that I'm aware of: the "About This Mac" utility on OS X, as well as several for Linux.
No, they're not app-specific diagnostics, but precisely the same principles apply.
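A minimal sketch of that kind of opt-in, run-on-demand report, in Python (the fields are illustrative; the point is that the user runs it, sees exactly what it gathered, and decides whether to send it):

    import json
    import platform
    import sys

    def system_report() -> dict:
        """Gather only the basics a support template usually asks for."""
        return {
            "os": platform.platform(),
            "machine": platform.machine(),
            "python": sys.version.split()[0],
        }

    if __name__ == "__main__":
        # Printed to stdout so the user can inspect it before pasting it into
        # an email or a forum post; nothing is transmitted automatically.
        print(json.dumps(system_report(), indent=2))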
I would argue that in OSS it doesn't work well. Even with volunteers going through and triaging out most of the worthless reports (if you're lucky!), it's still boiling the ocean; it's hard to put "this sucks and fuck you" on a backlog.
You can't really compare unstructured feedback in a forum / issue tracker to getting detailed Sentry data tagged with a version number and other metadata. Seeing quick spikes of issues of a new version at a glance is valuable data to have.
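For what it's worth, the version tagging is a one-time setup step. A hedged sketch using Sentry's Python SDK (the sentry-sdk package; the DSN and release string are placeholders, and this is an illustration of the idea, not Audacity's actual setup):

    import sys
    import sentry_sdk

    # Placeholder DSN and release string; illustration only.
    sentry_sdk.init(
        dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
        release="myapp@3.0.2",     # lets the dashboard group crashes by version
        environment="production",
    )
    sentry_sdk.set_tag("os", sys.platform)  # extra metadata for spotting platform-specific spikes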
You're conflating bug reports and error reporting with telemetry and network access. You can't send telemetry via email or issue trackers; those are for bug reports.
> For example, how is it possible that Audacity even exists, got stable, and used enough to be bought, without telemetry, if it is so vital??
The obvious answer to this is "nobody said Audacity isn't good, this is about getting better".