Hacker News | linguae's comments

Two years ago I decided to give up my career as an industry researcher to pursue a tenure-track professor position at a community college. One of the reasons I changed careers is because I felt frustrated with how research at my company changed from being more self-directed and driven by longer-term goals to being directed by upper management with demands for more immediate productization.

I feel generative AI is being imposed onto society. While it is a time-saving tool for many applications, I also think there are many domains where generative AI needs to be evaluated much more cautiously. However, there seems to be relentless pressure to “move fast and break things,” to adopt technology due to its initial labor-saving benefits without fully evaluating its drawbacks. That’s why I feel generative AI is an imposition.

I also resent the power and control that Big Tech has over society and politics, especially in America where I live. I remember when Google was about indexing the Web, and I first used Facebook when it was a social networking site for college students. These companies became successful because they provided useful services to people. Unfortunately, once these companies gained our trust and became immensely wealthy, they started exploiting their wealth and power. I will never forget how so many Big Tech leaders sat at Trump’s second inauguration, some of whom got better seats than Trump’s own wife and children. I highly resent OpenAI’s cornering of the raw wafer market and the subsequent exorbitant hikes in RAM and SSD prices.

Honestly, I have less of an issue with large language models themselves and more of an issue with how a tiny handful of powerful people get to dictate the terms and conditions of computing for society. I’m a kid who grew up during the personal computing revolution, when computation became available to the general public. I fell for the “computers for the rest of us,” “information at your fingertips” lines. I wanted to make a difference in the world through computing, which is why I pursued a research career and why I teach computer science.

I’ve also sat and watched research industry-wide becoming increasingly driven by short-term business goals rather than by long-term visions driven by the researchers themselves. I’ve seen how “publish-and-perish” became the norm in academia, and I also saw DOGE’s ruthless cuts in research funding. I’ve seen how Big Tech won the hearts and minds of people, only for it to leverage its newfound power and wealth to exploit the very people who made Big Tech powerful and wealthy.

The tech industry has changed, and not for the better. This is what I mourn.


It's not just tech; it's everything. This is an existential crisis because we have rolled back almost two centuries. We are handing the keys to the kingdom to these sociopaths, and we are thanking them for it. They don't even have the decency to admit that they just want to use us as numbers, which has been the case since the Industrial Revolution. Dozens of generations worldwide have toiled and suffered collectively to create life-changing technology, and these bloodsucking vampires who can't quench their thirst live in their own reality, one that doesn't include the rest of us. It has been the same problem for ages, but now they really seem to have won for good.

I’m not the OP, but my answer is that there’s a big difference between building products and building businesses.

I’ve been programming since 1998, when I was in elementary school. I have the technical skills to write almost anything I want, from productivity applications to operating systems and compilers. The vast availability of free, open source software tools helps a lot, and despite this year’s RAM and SSD prices, hardware is far more capable today at comparatively lower prices than a decade ago, let alone when I started programming. My desktop computer is more capable than Google’s original cluster from 1998.

However, building businesses that can compete against Big Tech is an entirely different matter. Competing against Big Tech means fighting moats, network effects, and intellectual property laws. I can build an awesome mobile app, but when it’s time to distribute it, I have to deal with app stores unless I build for a niche platform.

Yes, I agree that it’s never been easier to build competing products due to the tools we have today. However, Big Tech is even bigger today than it was in the past.


Yes. I have seen the better product lose out to network effects far too many times to believe that a real mass market competitor can happen nowadays.

Look at how even the Posix ecosystem - once a vibrant cluster of a dozen different commercial and open source operating systems built around a shared open standard - has more or less collapsed into an ironclad monopoly because LXC became a killer app in every sense of the term. It’s even starting to encroach on the last standing non-POSIX operating system, Windows, which now needs the ability to run Linux in a tightly integrated virtual machine to be viable for many commercial uses.


Oracle Solaris and IBM AIX are still going. Outside of enterprises that are die hard Sun/Oracle or IBM shops, I haven't seen a job requiring either in decades. I used to work with both and don't miss them in the least.

I wonder if there are still undergraduate CS programs that use a functional programming language in the intro course? MIT switched away from SICP (Scheme) to Python back in the late 2000s, and Berkeley switched away from Scheme sometime afterwards. Shriram Krishnamurthi has heavily promoted Racket in education and continues to do so, but for introductory CS he has moved on to Pyret, which has Algol-like syntax as opposed to S-expressions and doesn’t require programmers to think in terms of functional programming; for example, Pyret has loops and mutable variables.

At my institution (Ohlone College in Fremont, California), we teach the intro courses in C++. However, for discrete math, a sophomore-level course, each instructor is allowed to choose the language. I chose Haskell the last time I taught the class, and I plan to use Haskell again, since I believe it’s a great vehicle for coding discrete math, and I also think it’s valuable to expose undergraduates to functional programming early on.
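To give a flavor of what I mean (a rough Haskell sketch, not my actual assignments), the definitions line up almost one-to-one with the math notation:

    -- Subset relation: A ⊆ B iff every element of A is in B.
    subset :: Eq a => [a] -> [a] -> Bool
    subset as bs = all (`elem` bs) as

    -- Power set: P({}) = {{}}; P({x} ∪ S) = { T ∪ {x} | T ∈ P(S) } ∪ P(S)
    powerSet :: [a] -> [[a]]
    powerSet []     = [[]]
    powerSet (x:xs) = map (x:) rest ++ rest
      where rest = powerSet xs

    -- Binomial coefficient via Pascal's rule: C(n,k) = C(n-1,k-1) + C(n-1,k)
    choose :: Integer -> Integer -> Integer
    choose _ 0 = 1
    choose n k
      | k > n     = 0
      | otherwise = choose (n-1) (k-1) + choose (n-1) k

Students can then check identities like length (powerSet [1..5]) == 2^5 or choose n k == choose n (n-k) directly in GHCi, which makes the theorems feel concrete.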


There's an interesting historical angle here: Church's lambda calculus actually predates Turing machines by a few months (both 1936), and they're provably equivalent in computational power. Church even proved undecidability of the Entscheidungsproblem first, using lambda calculus.

Yet despite this head start, the Turing machine formalism became the dominant framework for CS theory—complexity classes, computability, formal verification. Whether that's path dependence, historical accident, or something deeper about how humans reason about computation, I'm not sure.

But it does make me wonder: if the imperative, state-based model proved more tractable even for theorists, maybe FP's learning curve isn't purely about unfamiliarity. There might be something genuinely harder about reasoning in terms of pure functions and recursion vs. "do X, then Y, update Z."
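To make the contrast concrete (a toy sketch, nothing rigorous): summing a list imperatively means mutating a running total, while the functional version threads that total through the recursion explicitly:

    -- Imperative flavor, as pseudocode:
    --   total = 0
    --   for x in xs: total = total + x
    --   return total

    -- Functional flavor in Haskell: the running total is an explicit argument.
    sumAcc :: Num a => [a] -> a
    sumAcc = go 0
      where
        go acc []     = acc
        go acc (x:xs) = go (acc + x) xs

Same computation, but in the second version the "state" only exists as an argument being passed along, and that shift seems to be where a lot of learners struggle.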

Fully acknowledge this is handwavy—curious if others have thoughts.


I don’t necessarily feel the need to ban loops and variables, but I do kind of wish schools didn’t start with object-oriented languages. My undergraduate program started with Scheme but I actually think a proper procedural language would be a fine choice too.

(Or even - and I know this is a spicy take - assembly language. An intro-level course that takes students through games like TIS-100 and Human Resource Machine might have a lot of pedagogical value.)

What I’ve observed is that people who have had to spend some time working in a language that maintains a clean separation between functions and data tend to be better at domain modeling, which ultimately enables them to produce designs that are simpler and easier to maintain. OOP can be a powerful mechanism, but it seems like, perhaps ironically, people who only know OOP tend to have a harder time reasoning about information flow, control flow, and state. Perhaps that’s because object-oriented language features are mostly meant as a way to abstract over those concerns, which has some value but may also discourage learners from thinking about them too deeply.


Or Shenzhen I/O - I tried to get my son into it for just this reason.

The first intro course in Grinnell College's CS program (my alma mater) is taught entirely in Racket. The program basically has three sequential intro courses: functional programming with Racket, imperative programming with C, and OOP with Java.

I wonder whether it's easier at small liberal arts colleges – my alma mater has a second-semester course teaching SML, Go, and PDP-11 assembler.

The issue is that "CS 110" gets taken by a lot of STEM students from other departments whose programs require computational methods and statistical tools, so those students end up in a CS course taught by actual computer scientists. I don't necessarily think it's a bad thing that you need CS 101 before a higher-level biology elective that uses R!


Grinnell '17 here. At first I didn't enjoy the intro course with Racket, but by the end I had changed my mind, and it ended up being one of my favorite courses. In retrospect, that class easily had the most impact on me during my CS major... I still have quite a few functional programming habits. These days I like to code in Elixir for side projects, which has quite a few similarities.

Anyways, cool to see another alum here! I hope the department keeps it around for a while longer :)


UBC’s CPSC 110 uses Racket. It was built around How to Design Programs when I took it years ago, and at a glance I think it still is.

I know I am biased both by my education and my chosen path. My first computer class was Pascal-based, and my second course was Intel assembly. These were both at a community college. When I started my degree in computer engineering, the microprocessor course was essentially an assembly course using the Motorola 6800 chip. I was lucky that the main CS courses were taught in C; coursework was compiled on VAX machines running some Unix variant. I used Borland C/C++ 3.0 to do my homework, then had to figure out what changed when I used the VAX compiler to turn in my work. I really think it is worthwhile to get an understanding of the low-level workings of computing. Every now and then I see a nice article comparing access times for disk, ROM, DRAM, and cache, which is also important to know for critical code in embedded work for telecom, medical devices, etc.

My school switched to C++ for intro stuff right before I enrolled, and egad, I still think it's a horrid choice. It required learning, or at least memorizing, a significant amount of boilerplate before a freshman could even start to consider writing their logic. I'd already written an awful lot of programs before I got there so it wasn't too awful for me to make the leap. I spent a lot of time helping my fellow students get over the massive speedbumps they faced so they could start learning.

C++ is not my favorite language, and if I had full control, I’d prefer a more beginner-friendly language. However, most of our students intend to transfer to a university, and we have articulation agreements with universities defining what is transferable. Most undergraduate CS programs in California teach their intro courses in Python, Java, or C++; there are even rare intro courses taught in C. Some universities don’t care which language a community college’s intro courses use as long as those classes teach the same programming concepts, but other universities insist on particular languages. What’s nice about C++ is that it satisfies most universities, and if a student can grasp C++, then that student can easily learn Python or Java.

I have a feeling we’ll have to revisit this topic in the next five years as college and university CS departments grapple with the implications of generative AI coding tools such as Claude Code, but that’s another story…


Your discrete mathematics class has programming and is taught by the CS department? I can't say I have seen that. My discrete mathematics class was taught by the math department with no programming, and we had a few CS theory courses in the CS department that were basically Discrete++, but they were all sans programming.

We teach discrete structures as mostly a math class with problem sets, but we have some programming exercises to help “concretize” some of the more abstract portions of the class and to also provide motivation to students who want to see practical applications of discrete math.

At Brown University there are three intro courses you can choose from. Two use functional programming languages.

My high school still uses Simply Scheme and SICP for the fall semesters of its CS sequence, I believe, but granted, it's in an affluent city in the Bay Area, and its recently retired teacher came from UCB.

Northeastern held out for a long time, only switching away from Racket in the last year or two, despite the protest of students and professors alike.

>...despite the protest of students...

I work closely with Northeastern CS students (via co-op program) and haven't heard anything but negative opinions about Racket.


I understand your sentiment. Unfortunately, the history of America's legal system isn't simple. There are people in prison who never actually committed a crime, but who were convicted because they couldn't afford good legal representation during their trial. This disproportionately affects the poor, and there are correlations between poverty and minority status in America. Some people have been able to get their convictions overturned, but this typically requires very sympathetic people advocating for them.

There's also a very long history in America of laws and law enforcement being targeted against poor people and minorities. Vagrancy laws (https://en.wikipedia.org/wiki/Vagrancy#Post-Civil_War) and modern anti-homeless laws effectively criminalize homelessness, and the War on Drugs has had a major negative impact on poor people and minorities. Yes, in this situation those who have been imprisoned due to such laws did violate the law, but such laws, in my opinion, serve the function of kicking people while they are down rather than addressing the root causes of their poverty.

There's a good argument that having a system of convict labor creates a perverse incentive to fill that labor pipeline, similar to how well-meaning traffic laws (such as speed limits) can be abused (for example, "speed traps").


This reminds me of something I thought about last night while I was preparing lecture slides for an introductory object-oriented programming course at a community college. I was rewatching Steve Jobs’ January 1997 Macworld speech, where he presented OpenStep to the audience (this was just a few weeks after the announcement of the merger of Apple and NeXT). OpenStep is the original name of the Cocoa Objective-C API in macOS. When Steve Jobs talked about the benefits of OpenStep compared to the Macintosh Toolbox and Win32, he said it was much easier to develop apps using OpenStep. He went on to talk about how many small software companies with small teams would be born from leveraging the OpenStep API and development tools.

While the Mac ecosystem did get the Omni Group and some other companies who leveraged OpenStep/Cocoa to build great software, the Mac ecosystem was (and still is) dominated by major vendors (e.g., Microsoft and Adobe) shipping very large apps (e.g., Office and the Creative Suite) developed by very large teams.

I believe ease of development, whether through new APIs or through generative AI, isn’t enough. Competitors to entrenched software companies need to deal with network effects, proprietary file formats and protocols, and the “long tail” of features in large software packages that deter users from switching to smaller, less feature-rich applications.


I remember those days. Thankfully Windows NT 4.0 and Windows 2000 had a Windows 95/98-style desktop but used the rock-solid NT kernel. Unfortunately they were not marketed to home users.

I feel similarly about the classic Mac OS: excellent interface and UI guidelines hampered by its cooperative multitasking and its lack of protected memory.

Windows XP and Mac OS X were major blessings, bringing the NT kernel and Mach/BSD underpinnings, respectively, to home computing users.


When the Republican Party has been largely purged of opposition to Trump (except for senators Rand Paul, Susan Collins, and Lisa Murkowski and a tiny handful of people in the House), and when six out of nine Supreme Court justices are generally loyal to the GOP, then there are effectively no checks and balances.

Trump learned during his first term that he can bypass checks and balances by making sure the GOP is thoroughly MAGA. People who stood up to Trump have been sidelined, such as Justin Amash, Mitt Romney, and, most famously, Mike Pence, who defied Trump on January 6 and paid a heavy political price for it. That’s why Vance, not Pence, is the current VP.


Short of enough Republicans finally declaring enough is enough and deciding on impeachment or the fourth clause of the 25th Amendment, the only other option is for pro-impeachment Senate candidates to run as Republicans in the primaries, which begin as early as March. Theoretically, if enough Republican senators up for reelection get primaried over their refusal to rein in Trump, this may put pressure on the rest of the GOP’s senators to remove Trump this year, and it may also encourage the House (which only requires a majority to impeach).

Of course, the challenge is convincing the electorate in red states that Trump’s antics regarding Greenland are catastrophic enough to warrant his removal, given the stranglehold MAGA has on the Republican electorate.


You're completely dismissing all extraparliamentary means of opposition.

Protests. Riots. Strikes.

Y'know, the sort of thing that toppled Yanukovych in Ukraine, lotsa Middle Eastern dictators during the Arab Spring, British rule in India, Soviet control over the Baltics, etc etc etc.

Your politicians are use- and spineless. It's time for your people to step up.


The sad reality is Americans really don't care about foreign policy -- the only thing that could actually lead to major strikes or protests large enough to move the needle would be if large numbers of American soldiers were dying (e.g., Vietnam).

Plus, I just saw a little segment on Fox News where it was portraying this whole Greenland deal as a way to "help Greenlanders boost their economy, each person will get X cash, blah blah". So anyone only watching Fox is probably convinced we're doing Greenland a favor, liberating them from Danish oppression, just like we recently liberated Venezuela from oppression (in case you didn't know!).


Last time I checked, only 56% of Americans disapproved of Trump's actions. Not enough to trigger big riots...

https://www.economist.com/interactive/trump-approval-tracker


Only 4% or so approve of going to war to conquer Greenland, so if it gets that bad you might expect sentiment to keep turning. But his approval floor has been pretty steady at no lower than ~30 percent through every controversy so far.


I hope I'm wrong, but I don't see American citizens rioting over international affairs, unfortunately. Hopefully he'll be unpopular enough to lose the Senate, and his successor won't be as insane. That would be the best outcome.


Americans have a history of rioting over economic and social conditions, however. An attack on Greenland may open a Pandora’s box of consequences, devastating America by turning us into a pariah state, which would lead to economic pain.

For the sake of the country, I hope that this is finally the red line that will give enough Republican representatives the courage to rein in Trump, at least on this issue.


As soon as they get their new marching orders from the Fox & Friends mothership, it's going to be 44%.


May I introduce you to the 3.5% rule? https://en.wikipedia.org/wiki/3.5%25_rule


To add, this 56% is not evenly distributed politically. Protests in California, Minnesota, and New York (all blue states) are not likely to get red state representatives and senators to threaten Trump with removal. Blue state congresspeople are already on board with removing Trump, but removal can’t happen without 2/3rds of the Senate getting on board, which means this can’t happen today without some Republican support.

I’m a Californian. It’s one thing for me to write Alex Padilla or Adam Schiff; they’d vote to convict if they got the chance. But they won’t get a chance unless people like Ted Cruz and Lindsey Graham say “enough is enough,” and I don’t live in their states.


[flagged]


> It is a virtue of Americans that they are unemotional and resolve disputes at the ballot box. [...] Nothing is so important that it can't wait until the next election.

MAGA does not fit that bill. January 6 was a direct attempt to overthrow an election outcome and by extension the government. The current executive is anything but emotionally well-regulated.


[flagged]


I'm curious what you think the AOC/Mamdani left is even like. MAGA is the culmination of decades of escalating extremism. I was in OKC when the right-wing terrorist killed so many innocent people, and that was what, 30 years ago? Meanwhile, AOC/Mamdani are lunatics who want ... better healthcare? Less inequality? What is so objectionable about their ideology that it justifies the absolute craziness that has consumed the right wing?


> Meanwhile, AOC/Mamdani are lunatics who want ... better healthcare? Less inequality?

It's not about their goals, it's about first world versus third world approaches to achieving those goals. The first-world approach is about shared sacrifice and building systems with the correct tradeoffs, incentives, etc. That’s how you end up with a system like Sweden that has high middle class taxes to support a robust welfare state along with extremely competitive corporate taxes. It also fosters efficiency because middle class people have a lot of skin in the game.

The third-world approach instead is tribalistic. The bad tribe, rich people, have the money, and the job of government is to expropriate that money and give it to the good tribe. In that kind of politics, you see a strong emphasis on identity and class warfare, and very little talk about tradeoffs, system, and sacrifice. It’s a type of politics that works equally well in Bangladesh, where the population is barely literate, as it does in Queens. (Of course, MAGA is like that too. Trump is the third world version of Reagan or Romney. It’s not a coincidence that no Republican in history has done better in Queens’s “Little Bangladesh” than Trump in 2024.)

The devastation from AOC/Mamdani politics is far worse than right-wing terrorism. In 1960, South Korea had a GDP per capita around $150, while Bangladesh was at $100. But my parents’ generation of Indians and Bangladeshis were AOC/Mamdani socialists. As a result, Bangladesh’s GDP per capita grew to just $260 by 1989, when we left. By then, Korea was at $6,000. And of course, today Korea is a first world country while Bangladesh is still a third world country.

Around 2010, Bangladesh adopted neoliberalism and tripled its GDP per capita in just 14 years. So there was nothing about Bangladesh structurally that prevented the same kind of growth you saw in South Korea. It was all cultural and political. If my parents’ generation hadn’t been AOC/Mamdani socialists, I’d probably still live in my homeland. More importantly, millions of children would be alive today who instead died in poverty because of delayed economic growth.

This is not a problem specific to very poor countries. Latin America is largely a lower middle income continent with slower growth than developed economies. From 1960 to 2018, Latin American GDP per capita grew just 1.8% annually: https://www.gisreportsonline.com/r/latin-america-economic-gr.... Latin America actually fell further behind the U.S. since 1960.


You do know that Sweden has a system much like what AOC/Mamdani advocate for?


Your previous statement was flagged so I couldn't reply, but I wanted to pick up on this one thing you said:

> MAGA is a necessary response to the AOC/Mamdani left

But MAGA officially began in 2015, when Trump announced the launch of his Presidential campaign with a speech including a string of vicious remarks against immigrants. AOC was elected in 2019 and Mamdani (to the NY state assembly) in 2021. MAGA has been a populist movement from the outset, seemingly motivated by conservative dislike of the Obama administration (anything but populist) and the prospect of a Hillary Clinton administration (likewise), as well as an atavistic dislike of immigrants.

> The devastation from AOC/Mamdani politics is far worse than right-wing terrorism.

The populist politics of AOC and Mamdani seem motivated by exasperation with the prejudice, corruption, and general lawlessness of the Trump administration and the larger MAGA movement, and I doubt they would have enjoyed their electoral success if the Republican party had selected a staid institutionalist over Trump in 2016.

It's interesting to hear context about the economic history of Bangladesh, but I don't think comparing the grinding poverty of cold-war era Bangladesh with the economic and strategic hegemony of early 21st century USA is even slightly illuminating or useful.


We can just look at the current situation. AOC/Mamdani policies have been the norm in the US since ... oh wait, never. We are run by the billionaires, not by the socialists.

Maybe your argument is that our GDP is doing great? Except the entire point of MAGA is that a whole class of people feel like GDP is not describing their own situation accurately. It's almost like all it really describes is how successful the billionaires are. The US lags behind a bunch of western nations in important metrics, and we are decidedly to the right of them and have been for a very long time. Trying to lay blame for this on the paltry excuse for 'the left' that we have in the US is pretty lame.


"But Mooooooom, he started it!"

Rush Limbaugh was preaching the death to America hate that would become Trumpism before Mamdani or AOC were even born.

Perhaps try taking some responsibility.


Protests and strikes are constitutional rights, and they are elements of healthy and functioning democracies.

https://www.aclu.org/know-your-rights/protesters-rights


I agree that there’s always been toxicity on the Internet, but I also feel it’s harder to avoid toxicity today since the cost of giving up algorithmic social media is greater than the cost of giving up Usenet, chat rooms, and forums.

In particular, I feel it’s much harder to disengage from Facebook than from other forms of social media. Most of my friends and acquaintances are on Facebook. I have thought about leaving Facebook due to the toxic recommendations from its feed, but it would be much harder for me to keep up with life events from my friends and acquaintances, and it would also be harder for me to share my own life events.

With that said, the degradation of Facebook’s feed has encouraged me to think of a long-term solution: replacing Facebook with newsletters sent occasionally with life updates. I could use Flickr for sharing photos. If my friends like my newsletters, I could try to convince them to set up similar newsletters, especially if I made software that made setting up such newsletters easy.

No ads, no algorithmic feeds, just HTML-based email.


I agree. Six years ago during COVID I wrote a document describing my idea of a dream personal computing environment, where all functionality is accessible using an API, enabling scripting and customizable UIs. UIs are simply shells covering functionality provided by various objects.

Unfortunately I haven't had the time to implement this vision, but Smalltalk environments such as Squeak and Pharo appear to be great environments to play around with such ideas, since everything is a live object.


A lot of Linux programs are command line only, with multiple GUIs available to use them. Sounds similar to what you're describing.


It's not a novel idea: I've also invented that, as have most people I know who've thought about this problem. (This is a good thing: it means it'll be fairly easy to bootstrap a collaborative project.) I never got as far as writing up a full document, though: only scattered notes for my own use. Would you mind sharing yours?


Sure: this is the document that I wrote about building a component-based desktop:

https://mmcthrow-musings.blogspot.com/2020/04/a-proposal-for...


This is not the most developed form of this idea that I've seen, but it does contain a good justification section. I don't think I've ever seen someone trying to justify this before. (I'd challenge the idea that touchscreen users liked Windows 8: afaik, things were most confusing for them, unless they were already used to the much-more-sensible Windows Phone interface. Lots of important stuff in Windows 8 was hidden behind right-click, and Metro did not make it obvious that press-and-hold was doing anything until the context menu popped up. Don't get me started on Charms: https://devblogs.microsoft.com/oldnewthing/20180828-00/?p=99... tells you all you need to know about the extent to which they were actually grounded in UI research.)

For a system like this, you can't just have objects: you need some kind of interface abstraction. One example from current OSs is the webview: you want to be able to choose which of LibreWolf, Vivaldi and Servo provides the webview component. But you also don't want to be tied to one interface design (e.g. this is what is meant by "rich text", now and forevermore), since that constrains the art of the possible. If you want to preserve backwards-compatibility, this means you need to allow interface transformers / adapters provided externally ("third-party") to the components they allow to communicate.

Treating applications as monoliths isn't ideal, either: most applications are actually toolsuites. A word processor has multiple operations which can be performed on a document: some of these are tightly-linked to the document representation (e.g. formatting), but others are loosely-coupled (e.g. spellcheck). We can break these operations out as separate objects by constructing an interface for the document representation they expect: this would provide a kind of mutable view (called a "lens", in academic literature; known as "getters and setters" to most programmers), allowing GIMP plug-ins to see a GIMPDrawable while exposing a Krita Document to a Krita plug-in. (Or ideally something more specific than "Krita Document", but Krita's documentation is awful.) (These would, of course, be very complicated translation layers to write, so it might make more sense to do things the other way around to begin with: produce a simpler interface, and expose the resulting tools in both Krita and GIMP.)
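Roughly what I mean by a lens, as a hedged Haskell sketch (the names here are placeholders, not any real plug-in API): a get/set pair that lets a tool written against one representation operate on a document stored in another, and that composes so translation layers can be stacked:

    -- A minimal lens: a view of type 'view' inside a document of type 'doc'.
    data Lens doc view = Lens
      { getView :: doc -> view
      , setView :: doc -> view -> doc
      }

    -- A tool written against the view can be run against the whole document.
    runTool :: Lens doc view -> (view -> view) -> doc -> doc
    runTool l tool d = setView l d (tool (getView l d))

    -- Lenses compose, which is what makes chains of compatibility layers possible.
    composeLens :: Lens a b -> Lens b c -> Lens a c
    composeLens outer inner = Lens
      { getView = getView inner . getView outer
      , setView = \a c -> setView outer a (setView inner (getView outer a) c)
      }

The hard part isn't this plumbing, of course; it's writing the actual get/set pairs between real document models without losing information.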

In principle, documents can get arbitrarily complex. Microsoft's OLE architecture was a good first start, but it was still "composition of monoliths". You couldn't run spell-check on an OLE document and all its child documents. Perhaps a solution for this lies in ontology logs, though for pragmatic reasons you'd want a way to select the best translation from a given set of almost-commuting paths. (The current-day analogue for this would be the Paste Special interface: I'm sure everyone has a story about all of the options being lossy in different ways, and having to manually combine them to get the result you want. This is an inevitable failure mode of this kind of ad-hoc interoperability, and one we'd need to plan around.)

For describing interfaces, we want to further decouple what it is from what it looks like. If I update Dillo, I want all right-click context menu entries from the new version to appear, but I still want the overall style to remain the same. There are multiple approaches, including CSS and monkeypatching (and I've written about others: https://news.ycombinator.com/item?id=28172874), but I think we at least need a declarative interface language / software interface renderer distinction. Our interface language should describe the semantics of the interface, mapping to simple calls into the (stateful) object providing our user interface (sitting on top of the underlying API, to provide the necessary decoupling between the conceptual API, and the UI-specific implementation details). The semantics should at least support a mapping from WAI-ARIA, but ideally should support all the common UI paradigms in some way – obviously, in such a fashion that it is not too hard to convert a tabbed pane into a single region with section headings (by slapping another translation layer on top, or otherwise).
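As a sketch of what such a declarative description might look like (a made-up toy, not a proposal for the actual language), the tabbed-pane-to-sections conversion becomes just a function over the description:

    -- A toy semantic UI description; the constructors are illustrative only.
    data UI action
      = Label  String
      | Button String action            -- label plus the action it fires
      | Group  String [UI action]       -- a titled region
      | Tabs   [(String, UI action)]    -- named panes

    -- One possible translation layer: flatten tabs into titled sections.
    flattenTabs :: UI action -> UI action
    flattenTabs (Tabs panes)   = Group "" [ Group name [flattenTabs body] | (name, body) <- panes ]
    flattenTabs (Group t kids) = Group t (map flattenTabs kids)
    flattenTabs other          = other

Because the description only says what the interface means, the renderer (or a transformer like this one) stays free to decide how it looks.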

Then, there should be interface-editing interfaces, which will be relatively simple to produce once all the underlying work has been done. The interface-editing interface will, naturally, let you draw on backgrounds, spell-check your labels, change fonts… using the same tools and toolbars as you use in any other program – or a toolbar you've cobbled together yourself, by grabbing bits from existing applications.

---

Since translation can get quite involved in this scheme (e.g. if you're trying to use an Image Editor v1 pencil on an Image Editor v43 canvas, there might be 18 different changes to pixel buffer representation in the pile of compatibility layers), this system would benefit from being able to recompile components as-needed, to keep the system fast. We'd want a compiler with excellent support for the as-if rule, and languages high-level enough to make that easy. We'd also want to make sensible decisions about what to compile: it might make sense to specialise the Image Editor v1 pencil to use the Image Editor v43 interface, or it might make sense to compile the Image Editor v1 – Image Editor v43 compatibility chain into a single translation layer, or it might make sense to use a more generic Raster Canvas interface instead. This decision-making could take into account how the software is actually used, or we could make it the responsibility of distro maintainers – or even both, akin to Debian's popularity contest.

Recompilation tasks should be off-loaded to a queue, to give the user as much control as they want (e.g. they might not want to run a 30-minute max-out-the-processor compilation job while on battery, or an organisation might want to handle it centrally on their build servers). Since modular systems with sensible interfaces tend to be more secure (there are fewer places for vulnerabilities to hide, since modules are only as tightly-integrated as their interfaces support), we wouldn't expect to need as many (or as large) security updates, but the principles are similar.

This would only become a problem after a few years, though, so the MVP need not include any recompilation functionality: naïvely chaining interfaces is Good Enough™.


That's not really that much different from what already exists in the FOSS ecosystem and UNIX.

