dennemark's comments

Buildplace (by Form Follows You GmbH) | https://company.buildplace.io | Berlin

With Buildplace, you can simulate your planning projects in a digital twin in no time. The integrated inventory mapping captures the complexity of your real-estate portfolio and transforms it into an interactive map that you can easily enrich with additional geodata. Our holistic consulting approach also supports you with individual project planning, customized enhancements, and the targeted application of Buildplace to your specific use case.

We are a team of ~10 people, half of them engineers. This might sound small, but we are bootstrapped and try to keep our software stack observable. We program mainly in TypeScript, with Fastify, BullMQ, and GraphQL (Pothos) on the backend, Postgres and Redis as databases, and React, MapLibre, and Babylon.js on the frontend. We see demand for more expertise in container orchestration, load balancing, reverse proxy configuration, VPS administration, and monitoring.
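To give a rough feel for the stack, here is a minimal sketch of a Pothos schema served through Fastify (using mercurius as the GraphQL adapter is just for illustration, not necessarily how we wire it up):

    import Fastify from "fastify";
    import mercurius from "mercurius";
    import SchemaBuilder from "@pothos/core";

    // Code-first GraphQL: define a tiny schema with Pothos.
    const builder = new SchemaBuilder({});
    builder.queryType({
      fields: (t) => ({
        hello: t.string({ resolve: () => "world" }),
      }),
    });

    const app = Fastify();
    // Serve the Pothos schema over Fastify at /graphql.
    app.register(mercurius, { schema: builder.toSchema(), graphiql: true });

    await app.listen({ port: 3000 });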

So if you are curious about our field of work and have experience in system administration and/or backend engineering with the technologies above, feel free to reach out to us!

Contact: mail [at] formfollowsyou.com


I have an AMD Strix Halo (395) in my work laptop (HP Ultrabook G1A) as well as at home in a Framework Desktop.

On both I have set up lemonade-server to run on system start. At work I use Qwen3 Coder 30B-A3B with continue.dev. It serves me well in 90% of cases.
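If you want to sanity-check the endpoint continue.dev talks to, a quick script like this works (port, base path, and model id are from my setup and may differ on yours):

    // Quick check that the local lemonade-server answers like an
    // OpenAI-compatible endpoint (port/path assumed from my install).
    const res = await fetch("http://localhost:8000/api/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "Qwen3-Coder-30B-A3B-Instruct-GGUF", // placeholder; use the id your server lists
        messages: [{ role: "user", content: "Say hi in one word." }],
      }),
    });
    const data = await res.json();
    console.log(data.choices[0].message.content);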

At home I have 128 GB RAM and am trying out GPT-OSS 120B a bit. I host Open WebUI on it and connect via HTTPS and WireGuard, so I can use it as a PWA on my phone. I love not needing to think about where my data goes. But I would like to allow parallel requests, so I need to tinker a bit more. Maybe llama-swap is enough.
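A quick way to see whether the server actually handles requests in parallel rather than queuing them (same endpoint assumptions as above): fire two at once and compare wall time.

    // If the server serializes requests, total time is roughly the sum
    // of both answers; if it runs them in parallel, roughly the max.
    const ask = async (prompt: string) => {
      const res = await fetch("http://localhost:8000/api/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "Qwen3-Coder-30B-A3B-Instruct-GGUF", // placeholder model id
          messages: [{ role: "user", content: prompt }],
        }),
      });
      return (await res.json()).choices[0].message.content;
    };

    const t0 = Date.now();
    await Promise.all([ask("Count to 5."), ask("Name 3 colors.")]);
    console.log(`both done in ${Date.now() - t0} ms`);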

I just need to figure out how to deal with context length. My models stop or go into an infinite loop after some messages, at which point I often just start a new chat.
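One crude workaround, instead of starting a new chat, would be to trim the oldest turns before they blow the context window. A sketch (the ~4 characters per token estimate is only a rough heuristic):

    type Msg = { role: "system" | "user" | "assistant"; content: string };

    // Very rough token estimate: ~4 characters per token for English text.
    const estimateTokens = (text: string) => Math.ceil(text.length / 4);

    // Drop the oldest non-system messages until the history fits the budget.
    function trimHistory(messages: Msg[], maxTokens: number): Msg[] {
      const system = messages.filter((m) => m.role === "system");
      const rest = messages.filter((m) => m.role !== "system");
      const total = () =>
        [...system, ...rest].reduce((n, m) => n + estimateTokens(m.content), 0);
      while (rest.length > 1 && total() > maxTokens) {
        rest.shift(); // remove the oldest turn first
      }
      return [...system, ...rest];
    }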

lemonade-server runs on llama.cpp. vLLM seems to scale better, but it is not as easy to set up.

Unsloth GGUFs are a great resource for models.

Also, for Strix Halo, check out kyuz0's repositories! They cover image generation too. I haven't tried those yet, but the benchmarks are awesome, and there is lots to learn from them. The Framework forum can be useful, too.

https://github.com/kyuz0/amd-strix-halo-toolboxes

Also nice: https://llm-tracker.info/ (it links to a benchmark site that sorts models by size). I prefer such resources, since they make it quite easy to see which ones fit in my RAM (even though I have this silly rule of thumb: a billion parameters ≈ a GB of RAM).
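That rule of thumb in code form: weights take roughly parameters × bytes per weight, so at Q8 (one byte per weight) a billion parameters is about a GB. The 20% overhead factor for KV cache and runtime below is my own rough guess:

    // Approximate RAM needed for a model at a given quantization.
    // bitsPerWeight: ~16 for fp16, ~8 for Q8, ~4.5 for Q4_K_M GGUFs.
    function approxModelGB(paramsBillion: number, bitsPerWeight: number): number {
      const weightBytes = paramsBillion * 1e9 * (bitsPerWeight / 8);
      return (weightBytes / 1e9) * 1.2; // +20% for KV cache etc. (rough guess)
    }

    console.log(approxModelGB(30, 4.5)); // Qwen3 Coder 30B @ Q4 -> ~20 GB
    console.log(approxModelGB(120, 4.5)); // a 120B model @ Q4 -> ~81 GB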

Btw, even an AMD HX 370 with non-soldered RAM can get some nice t/s on smaller models. That can be helpful enough when you are disconnected from the internet and don't know how to style an SVG :)

Thanks for opening up this topic! Lots of food for thought :)


Does Qwen3 Coder do a good job invoking its tools as appropriate for you? Under continue.dev at least, I've found I need to remind it constantly.


The makers of Panda CSS are also the creators of Chakra UI. Chakra UI is like Tailwind + UI components for React. They are currently transferring their knowledge into separate packages to allow adoption by other frameworks like Vue etc.

Panda seems to work a bit like Chakra. I really love the convenience components/functions like Stack. It will be combined with Ark and Zag: Zag is state management for components, while Ark provides headless components for Vue, React, etc. Combined with Panda, we come full circle for v3 of Chakra. I am really looking forward to this, but I also hope that Chakra stays true to its use of basic HTML elements and accessibility. The team knows what it is doing!
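For a taste, here is a minimal Panda example using the generated styled-system (this assumes you ran panda codegen with jsxFramework set to "react"; adjust the import paths to your output directory):

    import { css } from "../styled-system/css";
    import { Stack } from "../styled-system/jsx";

    // Stack is one of Panda's generated layout patterns, much like Chakra's.
    export function Card() {
      return (
        <Stack gap="4" p="6" bg="white" borderRadius="lg" boxShadow="md">
          <h2 className={css({ fontSize: "xl", fontWeight: "bold" })}>Hello</h2>
          <p className={css({ color: "gray.600" })}>Styled with Panda CSS.</p>
        </Stack>
      );
    }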

https://www.adebayosegun.com/blog/the-future-of-chakra-ui

