Hacker News | kledru's comments

liked this one, but even this one turns to AI-ish style in the end... no escape from it anymore.

Well, the most interesting part of this post was ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_1FAEFB6177B4672DEE07F9D3AFC62588CCD2631EDCF22E8CCC1FB35B501C9C86

I agree that Brooks's Law applies here, but I think it bites at a different level than suggested.

An engineer coordinating AI agents can achieve coherent architecture. The bottleneck is less about human-AI coordination; it's that the inert organizational structures won't adapt.

The engineer now has to coordinate with AI agents and with all the legacy coordinating roles that were designed for a different era. All these roles still demand their slice of attention, except now there are more coordination points, not fewer: the AI agents themselves, new AI governance roles, AI policy committees, compliance officers, security assessments...


I was about to argue, and then I suddenly remembered some past situations where a project manager clearly considered the code I wrote to be his achievement and proudly accepted the company's thanks.


Second Rome = Constantinople.

but interesting to see this flagged...


"world of fortresses" -- Carney today.


chaos-for-profit incentive is what terrifies me


So this text provides no new insights compared to Brooks's "No Silver Bullet" (1986). Short summary: the essential-vs-accidental-complexity argument still holds.


your comment gives the impression that Buenos Aires is in Brazil, so not sure what to make of it...


Isn't it geopolitics over economics, future-building when preexisting relationships are increasingly unreliable?

"paying a premium to have options in multiple possible futures"

