Hacker News | aleksituk's comments

I like HUDs as an analogy for other forms of content/experience enrichment too, e.g. what a HUD looks like for interactions beyond just looking at screens (audio, or multimodal).

Can we embed useful data behind the content we produce/consume or derive personalised versions for the user?

-> Text/audio/video tailored for me and my interests? (i.e. not just content recommended for me, but content whose insights and practical applications are themselves tailored to me)
-> Podcasts I can interact with?
-> Audiobooks I can ask questions about?

Richer interaction that allows for more back-and-forth and surfaces the key insights for the user and use case.


We're working on brief.audio, taking the concept of personal podcasts (more or less introduced by NotebookLM) to its natural conclusion: thinking harder about what content should be included and how to tailor the output into something that is genuinely nice to listen to, wrapped in a truly slick mobile experience.

https://www.brief.audio -> Have a go. We're primarily testing internally atm and will look to make it more available and a bit more polished next week.

Primary focus this week is getting the content right, with the audio script and hosts being user- and content-dependent. :)

Especially interested in hearing what content to pipe in (we're looking at putting in our hackernews.coffee as a feed, for example, but also other key news sources).


See my separate comment for more on hackernews.coffee in particular; we (same team, different experiment) are thinking a lot about personal content and how to give users maximum visibility and control.

Keeping these projects separate lets us test ideas that orbit around a theme (not 100% sure what the theme is yet, but it features personal, anti-slop content, while still using LLMs).


HN is just a reflection of the community using it. There's always some area that's hot and trending; that's a common challenge on any platform with popularity-based curation.

-> But still better than a highly-personalised algo that you don't get to control?


Not a browser but a reskin website: https://news.ycombinator.com/item?id=44454305


I mean, there are plenty of people using AI/LLMs to help with the actual problems, but roughly in the same proportion as people in general working on those problems (versus working against them, or mostly just being indifferent). So most AI/LLM use ends up outside those areas, sadly.


I think it's a bifurcation between 0-1 prompts (self-driven) and 1,000 prompts :)


Interesting idea; we could consider that as an alternative implementation of https://www.hackernews.coffee/. While we are planning to make it open source, a userscript would be an even more robust solution, although it would need a personal API key for one of the services.


Lol, yup. See azath92's comment: https://www.hackernews.coffee/


Yarp! And "poisoning" can happen through "off-topic" questions and answers as well as through plain "dilution". I've noticed this when doing content generation repeatedly: tight instructions get diluted over time.
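A minimal sketch of the "dilution" effect described above (illustrative word counts standing in for tokens, no real API involved): as generated turns pile up in the context, the original tight instruction becomes an ever-smaller fraction of what the model actually sees.

```python
# Sketch: why repeated generation "dilutes" a tight instruction.
# Word counts are a stand-in for tokens; the shrinking ratio is the point.

def instruction_share(instruction: str, history: list[str]) -> float:
    """Fraction of the context occupied by the original instruction."""
    inst_tokens = len(instruction.split())
    hist_tokens = sum(len(turn.split()) for turn in history)
    return inst_tokens / (inst_tokens + hist_tokens)

instruction = "Write a 200-word blog post in a dry, technical style."
history: list[str] = []

shares = []
for _ in range(3):
    # Each generated post adds ~200 words of output to the context.
    history.append("generated post " + "word " * 200)
    shares.append(instruction_share(instruction, history))

# The instruction's share of the context shrinks with every turn,
# so later generations weigh it less; starting a fresh chat (or
# re-sending only the instruction) restores it.
assert shares[0] > shares[1] > shares[2]
```

This is also why re-sending the instruction fresh with each request (API-style) resists dilution better than one long accumulating chat.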


This is very interesting, and I like that the conversation is not only about the technology itself but also about the interface as a user experience and where/how it fits the paradigm.

We've been working on a lot of data processing and generation tasks, primarily through an API. Sometimes, though, I end up creating data in a chat window: I first chat through the requirements for the data analysis/processing, and once I'm done I'd like the whole conversation summarised into essentially a one-prompt process so I can re-use it (because I can't really process new inputs via the chat).

Even when you do manage to get it down to a single prompt, you can use it in a chat and keep asking for new data (imagine producing 20 blog posts in a certain style, with the base content given as input). Producing these in the chat has a notable benefit: if something is wrong with a suggested post, you can edit it immediately. The trouble is that the context window becomes so big that the chat starts to forget the original instruction, and eventually you just have to start a new chat.

One way to address this would be a chat with selective memory: keep the task in memory, but have the chat forget (not include) all the generated data so the context stays clean, bringing an output back into the context only when the user refers to it.
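The selective-memory idea above can be sketched roughly like this (hypothetical class and method names, not a real library; the "refers to it" check is deliberately naive): the task prompt stays pinned, bulk generated outputs are archived out of the live context, and an output is pulled back in only when the user mentions its id.

```python
# Sketch of a "selective memory" chat: the task prompt is always sent,
# generated artifacts are kept out of the context unless referenced.

class SelectiveChat:
    def __init__(self, task_prompt: str):
        self.task_prompt = task_prompt      # pinned, so it never gets diluted
        self.archive: dict[str, str] = {}   # generated data kept out of context

    def record_output(self, output_id: str, text: str) -> None:
        """Store a generated artifact without adding it to the context."""
        self.archive[output_id] = text

    def build_context(self, user_message: str) -> list[dict]:
        """Messages to send: pinned task + user turn, plus any archived
        output the user explicitly refers to by id (naive substring check)."""
        messages = [{"role": "system", "content": self.task_prompt}]
        for output_id, text in self.archive.items():
            if output_id in user_message:
                messages.append({"role": "assistant", "content": text})
        messages.append({"role": "user", "content": user_message})
        return messages

chat = SelectiveChat("Turn the given notes into a blog post in style X.")
chat.record_output("post-1", "First generated blog post...")
chat.record_output("post-2", "Second generated blog post...")

# A fresh request sees only the pinned task, not all the old posts:
ctx = chat.build_context("Here are new notes: ...")
# An edit request pulls just the referenced post back in:
edit_ctx = chat.build_context("Tighten the intro of post-2.")
```

The context for a new request stays at two messages no matter how many posts have been generated, while an edit request re-includes only the one post being discussed.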

Has anyone else done data processing types of tasks in chats and had issues like this? Are there some other tools to use or tricks to do in chats?

