I'm curious just how far down the rabbit hole you can take this philosophy.

Just how many APIs, programming interfaces, and other interaction points of a computer can be eliminated, automated away, or even made to adapt to user preferences?

What would this mean for the divide between the Operating System and its Applications?



One example of an interface you could possibly get rid of is the password interface.

Identify people by their voice, the way they type or walk, their body shape, their fingerprint (already used), whatever you can think of. Especially when combined, these signals could identify someone pretty accurately.

Of course, using these features is only the first step. If it's still an explicit interface, a "say your password and look into the camera" screen, that's no real progress (apart from getting rid of passwords). But using them while you interact, to authorize the interaction itself, would have a great effect: checking fingerprints on the keyboard or mouse, or even better, an automatic iris scan, or anything along those lines.

(I tried to build the first step of a prototype for this in my CS bachelor's thesis.)
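
A rough sketch of how the "combine whatever signals are available" idea could look, in Python. The signal names, weights, and threshold are made-up placeholders, not from any real system:

    # Hypothetical sketch: fuse per-signal confidence scores into one
    # authorization decision. Signal names, weights, and the threshold
    # are illustrative assumptions, not a real API.

    SIGNAL_WEIGHTS = {
        "typing_rhythm": 0.25,
        "voice": 0.30,
        "gait": 0.15,
        "fingerprint_on_mouse": 0.30,
    }

    AUTH_THRESHOLD = 0.8  # accept only when the weighted score is high enough


    def fused_confidence(scores: dict[str, float]) -> float:
        """Weighted average over whichever signals are currently available."""
        available = {k: v for k, v in scores.items() if k in SIGNAL_WEIGHTS}
        if not available:
            return 0.0
        total_weight = sum(SIGNAL_WEIGHTS[k] for k in available)
        return sum(SIGNAL_WEIGHTS[k] * v for k, v in available.items()) / total_weight


    def is_authorized(scores: dict[str, float]) -> bool:
        return fused_confidence(scores) >= AUTH_THRESHOLD


    # Example: the user is typing and the mouse reads a fingerprint,
    # but no voice or gait sample is available right now.
    print(is_authorized({"typing_rhythm": 0.9, "fingerprint_on_mouse": 0.95}))  # True
    print(is_authorized({"typing_rhythm": 0.4}))                                # False

The point is that no single signal has to be perfect; authorization happens continuously from whatever the machine can observe during normal use.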


Slippery Slope arguments are logically fallacious. It's about knowing the product's true purpose and using good judgement. It's about following good design principles, such as Dieter Rams's ten principles for good design: https://www.vitsoe.com/gb/about/good-design.


Pretty far. Imagine one day not having a remote for your TV at all, and just looking at a certain spot (the camera) on the TV and saying, "TV, turn on", "next channel, next channel", "louder".

That would completely eliminate the need for a UI or menu or remote.
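
A minimal sketch of what the command layer could look like, assuming the speech recognition is handled elsewhere; the phrases and the tiny TV model are made up for illustration:

    # Hypothetical sketch: dispatch recognized utterances to TV actions.
    # The speech recognizer is stubbed out; command phrases are made up.

    class TV:
        def __init__(self):
            self.on = False
            self.channel = 1
            self.volume = 10

        def power(self, state: bool) -> None:
            self.on = state

        def change_channel(self, delta: int) -> None:
            self.channel += delta

        def change_volume(self, delta: int) -> None:
            self.volume += delta


    COMMANDS = {
        "tv, turn on": lambda tv: tv.power(True),
        "tv, turn off": lambda tv: tv.power(False),
        "next channel": lambda tv: tv.change_channel(+1),
        "louder": lambda tv: tv.change_volume(+1),
        "quieter": lambda tv: tv.change_volume(-1),
    }


    def handle_utterance(tv: TV, text: str) -> None:
        """Run the action for a recognized phrase; ignore anything unknown."""
        action = COMMANDS.get(text.strip().lower())
        if action:
            action(tv)


    tv = TV()
    for phrase in ["TV, turn on", "next channel", "next channel", "louder"]:
        handle_utterance(tv, phrase)
    print(tv.on, tv.channel, tv.volume)  # True 3 11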


If I sit down on the couch, shift my focus to the TV, and my brain pattern implies an expectation of watching, the TV should turn itself on.

If I sit down in the morning, it should have learnt from all the previous mornings when I chose the morning news that I want to watch the news.

If I sit down with a bowl of popcorn, it should open the movie selection and let me take over with an interface from there.

If I sit down with my girlfriend, it should filter the movie selection based on our previously recorded viewing history, so we get the best movies for us both; if it's just me, it can hide all those romance chick flicks.

If I sit down wearing my Liverpool jersey and Liverpool is in fact playing a game, or is mentioned in any program description, it should default to showing me that.

There's a whole lot that can be read from the device's environment. Quite a bit is technologically feasible today (identifying a person by means of a camera and keeping track of viewing habits), and some of it is slightly further out (reading your mental activity to determine that looking at the TV meant you intended to watch it).

A big thing here for me is automated personalization; I think it's a very viable next step.
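
To make the idea concrete, here is a rough sketch of turning observed context into a default action. The field names and rules just mirror the examples above; a real system would obviously need actual person recognition and program metadata behind them:

    # Illustrative sketch: map observed context (who is on the couch, what
    # they're holding or wearing, time of day) to a default TV action.
    # All fields and rules are assumptions taken from the examples above.

    from dataclasses import dataclass, field

    @dataclass
    class Context:
        viewers: list[str]                                 # people recognized by the camera
        time_of_day: str                                   # e.g. "morning", "evening"
        holding: set[str] = field(default_factory=set)     # e.g. {"popcorn"}
        wearing: set[str] = field(default_factory=set)     # e.g. {"liverpool_jersey"}
        live_matches: set[str] = field(default_factory=set)  # teams currently playing

    def default_action(ctx: Context) -> str:
        # Rules are checked from most to least specific.
        if "liverpool_jersey" in ctx.wearing and "Liverpool" in ctx.live_matches:
            return "show Liverpool match"
        if "popcorn" in ctx.holding:
            return "open movie selection filtered to all current viewers"
        if len(ctx.viewers) > 1:
            return "filter recommendations to the viewers' shared history"
        if ctx.time_of_day == "morning":
            return "tune to morning news"
        return "show personalized home screen"

    # Example: alone in the morning -> news by default.
    print(default_action(Context(viewers=["me"], time_of_day="morning")))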


It would get rid of the remote, but it wouldn't get rid of the interface. And who wants to talk over a movie they're watching just to change the volume?


In that case, just move your mouth, and the camera should use facial and mouth-movement recognition to know what you want.



