I believe you misunderstood. The issue is not whether software can be ported to a touch platform. It is that a keyboard/mouse-oriented UI and a touch-oriented UI, each with its own ergonomics, cannot successfully co-exist. It’s not a programming challenge.
(Of course, the user experience will always be somewhat different on a small phone screen compared to a big Mac screen. But that doesn't mean you can't build an app that works well on both from one code base. And certainly an iPad app is not all that different from a Mac app.)
Again, you are thinking of porting a touch app to a non-touch device when the issue is that good touch and non-touch UI and ergonomics cannot co-exist on the same device.
And I am familiar with Catalyst. I think Catalyst is a good example of how software suffers in a K/M-oriented environment when it comes from a touch environment without many modifications. Even the Apple-developed apps have too many controls and views designed for touch screens. The Catalyst team is introducing UI elements that make more sense on macOS, like a compact calendar picker to replace the touch-style picker wheel, but it will take a long time before Catalyst lets a developer quickly make a macOS version that feels designed for macOS. And again, that native macOS feel is needed, because even high-quality touch interfaces are slow and difficult to use on a laptop or desktop.
Voice certainly seems like a viable solution to some of that. In some ways it’s imminent, if the accessibility improvements being pushed to iOS, iPadOS and macOS are advanced just a bit further.
However, we should also expect PC use to be pushed into more advanced and specialized territory on our most powerful and versatile devices as phones and tablets take over more traditional PC work. That specialized use will advance as fast as the interface best attuned to the platform (keyboard+mouse) allows, and the others will necessarily lag behind.
All of this won't solve the mobile/desktop dichotomy.
I rename things dozens of times a day. Saying "rename function A to B" dozens of times a day is unviable on the desktop and nearly unusable on the phone. And voice is a fundamentally different UI anyway.
>Again, you are thinking of porting a touch app to a non-touch device when the issue is that good touch and non-touch UI and ergonomics cannot co-exist on the same device.
Sure they can, just not on the same screen sizes/input methods.
You could have Excel that looks like iOS Excel when opened in the iPhone that automatically turns into Excel that looks like macOS Excel when the iPhone is connected to a larger screen with a mouse and everything.
Since you have both of those apps already, you can trivially combine them.
In the process you'll also get to reuse large parts of both UI code, and almost all of the non-UI code.
And if you were starting from scratch, it would be even easier to find ways to reuse more UI code -- e.g. components could come in "auto-dual" versions that adapt.
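To sketch the kind of switch I mean (the two root controllers are hypothetical stand-ins for the existing mobile and desktop UIs; the screen notifications are real UIKit API):

    import UIKit

    // Minimal sketch: swap the root UI when an external display
    // connects or disconnects. DesktopRootViewController and
    // MobileRootViewController are hypothetical stand-ins for the
    // two UI layers, both rendering the same shared document model.
    final class RootSwitcher {
        private let window: UIWindow

        init(window: UIWindow) {
            self.window = window
            NotificationCenter.default.addObserver(
                self, selector: #selector(screensChanged),
                name: UIScreen.didConnectNotification, object: nil)
            NotificationCenter.default.addObserver(
                self, selector: #selector(screensChanged),
                name: UIScreen.didDisconnectNotification, object: nil)
        }

        @objc private func screensChanged() {
            let hasExternalDisplay = UIScreen.screens.count > 1
            window.rootViewController = hasExternalDisplay
                ? DesktopRootViewController()  // keyboard/mouse-oriented layout
                : MobileRootViewController()   // touch-oriented layout
        }
    }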
>Could you explain to me how can you trivially combine two apps?
You already have the UI code for mobile and desktop. All you need to do is switch to one or the other when the user connects/disconnects an external monitor.
At the most basic, you could just save the spreadsheet state (staying with Excel as the example), load it in the background, and switch to the desktop version of the app with it pre-loaded. The same as if the user manually saved their spreadsheet, closed the mobile version of the app, and opened the same spreadsheet with the desktop version -- just more transparently.
Between this and "sharing UI" there is a big spectrum. If you already have the mobile and desktop versions, and the backend is more or less the same (as can be the case with apps like Excel for macOS and iOS), then compared to the work you've already done it's trivial to add an intelligent way to switch from one UI to the other while keeping all the other state (working spreadsheet, clipboard, currently executing action, etc).
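Roughly, the handoff reduces to a save/restore round-trip (all types here are hypothetical):

    import Foundation

    // Sketch of the handoff described above; every type is hypothetical.
    // Both UIs read and write the same serialized state, so "switching"
    // is just save -> swap -> restore, done without user involvement.
    struct WorkspaceState: Codable {
        var documentURL: URL
        var selection: String     // e.g. "B2:C7"
        var clipboard: Data?
    }

    protocol SpreadsheetUI: AnyObject {
        var state: WorkspaceState { get set }
    }

    func handOff(from old: SpreadsheetUI, to new: SpreadsheetUI) throws {
        // Round-trip through Codable so the same snapshot could equally
        // be written to disk -- a manual save/reopen, just transparent.
        let snapshot = try JSONEncoder().encode(old.state)
        new.state = try JSONDecoder().decode(WorkspaceState.self, from: snapshot)
    }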
>You will need to design a completely different set of interactions, components, layouts etc. for the mobile version compared to the desktop version.
Not necessarily. A spreadsheet cell is a spreadsheet cell. Whether you click on it with touch or the mouse pointer doesn't matter. You could easily share the same underlying widget (and e.g. just show more of them). The formula editor that appears can similarly be shared. Other forms might need some extra padding, or some widgets made larger or smaller, etc.
We already have apps that run the same on iOS and macOS, through Apple's translation layer plus layout constraints and switches on widgets. The "Voice Memos" app is basically the exact same thing between iOS and Mac.
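The shared-widget idea in SwiftUI terms, roughly (the sizes are illustrative):

    import SwiftUI

    // One cell view shared by both platforms; only the sizing is
    // switched per platform, not the widget itself.
    struct SpreadsheetCell: View {
        let text: String
        let isSelected: Bool

        private var minHeight: CGFloat {
            #if os(macOS)
            return 22    // dense, pointer-sized rows
            #else
            return 44    // roughly Apple's recommended touch target
            #endif
        }

        var body: some View {
            Text(text)
                .frame(minHeight: minHeight)
                .border(isSelected ? Color.accentColor : Color.gray.opacity(0.3))
        }
    }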
> You already have the UI code for mobile and desktop. All you need to do is switch to one or the other
There's no "just switch". I wish people would stop hand-waving at complex technical problems with "just"s and "all you need"s.
What you're saying is: "you have two completely different UIs with completely different modes of interactions, completely different layouts, affordances, a myriad other things. 'All you have to do' is ship them together and switch them on the fly".
> then compared to the work you've already done its trivial to add an intelligent way to switch from one UI to the other
It is not "trivial".
> A sphreadsheet cell is a sphreadsheet cell. Whether you click on it with touch or the mouse pointer doesn't matter.
It does matter, because the interactions are completely different. Take the most trivial example: once you've selected a cell on a desktop, you can immediately start typing. On a mobile device you have to do an additional tap (a double tap in Excel) or tap a different area (the entry box in Google Sheets) to start typing. And that's just one interaction. There are hundreds of others that will be different.
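To make that concrete, here's roughly what the two unrelated entry points into editing look like in UIKit terms (the cell view and beginEditing() are hypothetical stand-ins):

    import UIKit

    // The same "selected cell" needs two different paths into editing.
    final class CellView: UIView {
        var beginEditing: () -> Void = {}

        override var canBecomeFirstResponder: Bool { true }

        // Desktop / hardware keyboard: a key press on the selected
        // cell starts editing immediately.
        override func pressesBegan(_ presses: Set<UIPress>, with event: UIPressesEvent?) {
            super.pressesBegan(presses, with: event)
            beginEditing()
        }

        // Touch: nothing happens until an explicit double tap.
        func installTouchGesture() {
            let tap = UITapGestureRecognizer(target: self, action: #selector(doubleTapped))
            tap.numberOfTapsRequired = 2
            addGestureRecognizer(tap)
        }

        @objc private func doubleTapped() { beginEditing() }
    }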
> We already have apps that run the same in iOS and macOS, through Apple's translation layer + layout constraints and switches on widgets.
Yes, and almost all of them fail in the most basic ways on the desktop: they provide incorrect widgets (date pickers, for example), they break user input, they handle focus incorrectly, they don't have keyboard shortcuts, they use interaction patterns that are alien to the desktop, and so on and so forth.
Let's take a look at "voice memos":
- No shortcut to delete a Voice Memo, but a slide-to-reveal Delete button. Alien to desktop
- Esc doesn't work to exit editing screen or recording screens
- Cmd+W quits the app which is against the HIG
- Once search input has focus, you can't Tab out of it (but you can Shift-Tab)
- In the editing screen the Crop button is inside the window chrome, which is against the HIG if I'm not mistaken.
Yes, this app runs "the same on iOS and macOS", and that's precisely the problem: it shouldn't run "the same". It must be different, because the desktop is different.
And note: this is a first-party app with next to zero functionality: a few buttons, a screen that shows one thing at a time. That's it. And it's already filled with inconsistencies and bad behaviour on the desktop. It will only be much, much worse for any app with more complex functionality (unless developers take conscious and specific steps to address this).
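To be concrete about what "conscious and specific steps" means, here's a rough SwiftUI sketch (the two actions are hypothetical) of the shortcut and Esc handling a desktop app needs and a touch UI never does:

    import SwiftUI

    // Desktop-specific work the list above implies: explicit menu
    // commands and keyboard shortcuts instead of touch gestures.
    struct MemoCommands: Commands {
        let deleteSelectedMemo: () -> Void
        let closeEditor: () -> Void

        var body: some Commands {
            CommandGroup(after: .pasteboard) {
                Button("Delete Memo", action: deleteSelectedMemo)
                    .keyboardShortcut(.delete, modifiers: .command) // instead of slide-to-reveal
                Button("Close Editor", action: closeEditor)
                    .keyboardShortcut(.escape, modifiers: [])       // Esc leaves the editing screen
            }
        }
    }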
>There's no "just switch". I wish people stopped hand-waving at complex technical problems with "just"s and "all you need"s.
Well, I wish you'd read my whole comment before the BS about hand-waving. I explicitly describe what I mean.
>What you're saying is: "you have two completely different UIs with completely different modes of interactions, completely different layouts, affordances, a myriad other things. 'All you have to do' is ship them together and switch them on the fly".
Yes. Nothing particularly special about it. You could do just that: it's technically feasible (trivial even), and it would still be an adequate experience.
>It is not "trivial"
Well, agree to disagree. I've done it for apps and it's nothing much. What would be trivial for you, just flipping a compiler flag or changing 10 lines of code? Well, you ain't gonna get that.
>It does matter, because the interactions are completely different. Take the most trivial example: once you've selected a cell on a desktop, you can immediately start typing. On a mobile device you have to do an additional tap (a double tap in Excel) or tap a different area (the entry box in Google Sheets) to start typing.
That's a bogus difference. If the mobile device is connected to an external BT keyboard, you can already "just start typing".
Even if that weren't the case, 99% of the widget is the same. The fact that on mobile, cell focus isn't enough to bring up the virtual keyboard is a negligible difference (not to mention it will probably not even touch the cell widget code, but live in another part of the UI dispatch, or even be handled directly by the framework).
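For what it's worth, hardware-keyboard detection on iOS (14+) lives entirely outside any widget code -- a sketch via the GameController framework (the handler body is a hypothetical hook):

    import GameController

    // Is a hardware keyboard currently attached?
    func hasHardwareKeyboard() -> Bool {
        GCKeyboard.coalesced != nil
    }

    private var keyboardObserver: NSObjectProtocol?

    // Observe attach events to toggle the behavior at runtime.
    func observeKeyboardAttachment() {
        keyboardObserver = NotificationCenter.default.addObserver(
            forName: .GCKeyboardDidConnect, object: nil, queue: .main
        ) { _ in
            // Enable the "just start typing" path for the selected cell
            // here (hypothetical hook; the cell widget itself is untouched).
        }
    }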
>Yes, and almost all of them fail in the most basic ways on the desktop
And they are still perfectly operable, and people (including me) use them every day. So there's that.
Basically, it’s really hard to develop totally separate UIs in the same app. Another user brought up Catalyst, for example, which in many cases brings poor-fit touch paradigms into macOS despite the developers’ intentions. One paradigm will be dominant. Care must also be taken not to load too many unused resources.
Interactive workflows also differ between UIs.
Since so much of an OS is the native UI and the first-party applications, even if the mobile, tablet and PC versions of an OS share some libraries, they can’t really share enough to meaningfully call them one OS without compromising the experience on all three.
So while there may be three-pane email on the iPad as well as the Mac, and email on iOS, they don’t really share enough to be called the same app, and if they did, at least one of them would suffer. And some of the interactions in the macOS version effectively can’t be brought over.