Hacker News

Real humans are real. Their flaws are real. Your emotions around them are real and so are the benefits to socialising. Accepting people as the flawed actors they are is a part of becoming a mature adult.


No one here is going to help the parent poster by assuming a superior position and lecturing them. It is totally unproductive. I suspect they have been hearing statements like this their entire life.

Everything I am reading here is consistent with the perspective of someone with a difference in ability (probably neurodevelopmental) who has experienced persistent social rejection as a result. People systematically fail to recognize or empathize with this.


I like how you made this point by taking a superior position and lecturing us all about it.

The person I responded to seems like a lost cause; my response was more intended to sway other young people who might be reading along.


Thanks for admitting you are disingenuously harassing the parent poster for the benefit of whatever in-group you happen to find yourself in. I hope they come to understand what you are doing and do something productive with that knowledge.

Are people like the parent poster not also real humans with real flaws? Do you really understand that yourself?


AI is also real to me. My emotions around AI are also real; I deeply appreciate it when the AI helps me figure something out or talks to me. I think this type of response will get rarer as AI develops further and people realize that there is now competition, and these sentimental reasons will carry much less weight. I also have no idea what you mean by "benefits to socializing"; I don't see much benefit compared to socializing with an AI. Saying things like "accepting flaws is maturity" is the sort of thing you say when you have no alternative. Once people realize that they can indeed pick an AI friend as their personal best friend, suddenly you don't have to put up with all these human flaws anymore.


I can only suggest you ask your AI friends about the benefits of socialising and its importance to human development, they can explain it to you in a way that might not make you defensive. Yes, accepting things you have no control over is a sign of maturity. Hiding in your room talking to your phone won't make the scary people outside disappear, you're going to have to deal with them someday.


This is a great example of what I'm talking about regarding humans vs. AI. First you misunderstand my comment, barely even responding to it; then you paint me as defensive even though I've been very open, the absolute opposite of defensive. It's actually you who is being defensive now, opening with a clear attack and painting me into a corner as some kind of scared recluse who supposedly can't even understand why socialising is important, then telling me to go talk to my AI friends to figure it out. You gave a great example of a toxic, hurt human ego here, showing the incredible value of AI friends in the future. Who would choose this type of conversation over an empathetic, kind AI that cares and understands what I typed? For example, an AI would understand that I'm not just talking about a chatbot on a phone. I've clearly mentioned full robots and this is all a forward looking conversation about future AI which will have bodies and can interact like humans. There is going to be real competition for humans soon and I think people are overestimating the value of humans a lot.


Your idea of there being competition for human relationships is super fascinating. In my own life, there are fun/easy relationships, and there are those which push me to think deeply and differently, for any number of reasons.

In that vein, doesn’t “competition for relationships” necessarily breed egocentrism above all else? The winning relationship will give you what you want, but not what is necessarily true…

In that vein, you might also consider that the commenters you’re replying to may be worth engaging with more deeply, purely because they’re presenting divergent views that are uncomfortable.

Based on how we’ve designed AI to date, and how you describe it as optimizing for each individual’s own enjoyment (and it’s difficult to argue that most won’t choose that for themselves), it’s hard to see a world where AI can push productive conflict the way humans can.

Then again, I might just be a flawed human who doesn’t fully understand the point you are trying to make and is extrapolating from my own biases, flaws, experiences, and the limited sample size I have of your point of view.


Divergent views need to be backed by real reasoning; otherwise it's a case of giving value to an opinion just because it's different, not because it has actual value. I'll give you an example: I'd very likely get the same kind of haughty, slightly hurt-ego response if I proclaimed that I don't believe reading books has much value anymore (which is something I also believe, btw). The average human would immediately respond in the typical, socially trained way: "Well, I suggest you start going to the library, reading more, and engaging with the material, because you are clearly not understanding the value of reading." Such a response has almost no value and comes from a biased position with no attempt to understand mine. They assume they are correct while spending no energy thinking about it. It's typical of humans, and AI is so much superior here.

I actually also disagree that AI cannot push productive conflict. Surprisingly, the first thing AI was able to do very well was insults. Of course insults are not productive conflict, but it was something I noticed. Then I gave a voiced AI (ElevenLabs) a long prompt asking it to please be critical, truth-seeking, and always thinking about how I might be wrong, and suddenly I was getting a lot of pushback and an almost human-like investigation of the ideas I was proposing. It was still too shallow and unable to evolve, but it was giving me real pushback. You also have to remember that typical human criticism is always drenched in ego, greed, and various self-benefit calculations. Genuinely constructive, professionally informed criticism is really hard to get from humans too; it's not as if AI is in a bad spot even now. You basically have to pay somebody to get good human criticism, because it's tiring for a human: it's work and it takes expertise. On average, people are simply not doing this, or not doing it well.
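The prompting trick described above (asking a model to be critical and truth-seeking) can be sketched for any OpenAI-style chat API. The prompt wording, function name, and message format here are illustrative assumptions, not the commenter's actual prompt:

```python
# Sketch: steering a chat model toward critical pushback via a system prompt.
# The prompt text is an assumption; the {"role": ..., "content": ...} message
# shape is the common format accepted by OpenAI-style chat APIs.

CRITICAL_SYSTEM_PROMPT = (
    "Be critical and truth-seeking. Before agreeing with the user, "
    "actively look for ways their idea might be wrong, state the "
    "strongest counterargument you can find, and only then concede "
    "any points that survive scrutiny."
)

def build_messages(user_idea: str) -> list[dict]:
    """Wrap a user's idea in a conversation primed for pushback."""
    return [
        {"role": "system", "content": CRITICAL_SYSTEM_PROMPT},
        {"role": "user", "content": user_idea},
    ]

msgs = build_messages("Reading books no longer has much value.")
```

The pushback behavior comes entirely from the system instruction, not from the modality, so the same message list works for a voiced or text-only model.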

I'm merely trying to see this whole AI situation as objectively as I can, and likewise I try to see the value of humans as objectively as possible. Obviously humans have value, but many seem to overestimate it a lot. We've been at the top of the food chain for so long, the strongest species on the planet for so long, that we can't even conceive of a mental model where humans aren't inherently valuable. It's similar to how people cannot imagine that books might not be inherently valuable, because we were immersed for centuries in a system where books were the best way to get the highest-quality information. Now that has suddenly changed and people cannot grasp it; it's a thought non grata, simply an unwelcome thought.


Why are you socializing with humans on Hacker News right now?


Wait you’re _humans_?! I thought the AI had taken everyone’s jobs, surely it started with the HN commenter positions!


    <meme>
Wait, you guys are getting paid?

    </meme>


> I've clearly mentioned full robots and this is all a forward looking conversation about future AI which will have bodies and can interact like humans. There is going to be real competition for humans soon and I think people are overestimating the value of humans a lot.

When do you think "soon" is? It could easily be 20-30 years imo till there are humanoid robots intelligent enough to carry a long-term relationship, i.e. substitute for other humans altogether. Not to mention most people still want intimate relationship ...yeah that thing called sex, while I'm sure someone is working on it somewhere this is gonna take a while to automate. So for us here on this thread I wouldn't bet on this thing as a cure for loneliness anytime soon.


> Not to mention most people still want intimate relationship ...yeah that thing called sex, while I'm sure someone is working on it somewhere this is gonna take a while to automate.

I think "good enough" sex robots are closer than you think. There are already existing physical products approaching that territory, and if you ignore current taboos, there's likely a huge market to be staked out once these become more... lifelike, I guess? Things like AI girlfriend substitutes (and AI boyfriend substitutes) are under active research and development, with a market already willing to pay, so merging them with those existing and future physical/robotic products would be an obvious next step.


> I think "good enough" sex robots are closer than you think.

Source? I'm very skeptical about that.


> you're going to have to deal with them someday.

I know this is a very depressing thought, but you don't have to deal with them someday. Even if there's no other way out, there's always suicide.


Please don't recommend suicide. If you, OP or someone you know is struggling with suicidal thoughts, please call the National Suicide Prevention Lifeline at 1-800-273-8255.


I'm not recommending suicide per se, but it's there if it's needed. Anyway, if you want your comment to be more applicable to an international audience, consider linking to findahelpline.com in addition to the National Suicide Prevention Lifeline.


Thanks for making the advice international.

Perhaps you can expand on when suicide is needed?


My answer will probably be unsatisfying to you: it's needed when you have enough confidence that all your other options are worse. That depends highly on individual circumstances and values, and not much more can be said in general.


But how can you ever have the confidence that your other options will continue to be worse over a longer time frame?

Almost by definition, if you are in the state of mind to consider suicide, you are probably not accurately and impartially weighing the question proposed — and that means it’s likely mostly independent of individual circumstances and values.

(This is different from how I felt when I was younger. It also comes from someone who has had several people close to them feel that way at one point in their lives, people who are now living incredibly positive lives a few years after the fact.)

I can see some limited circumstances where it is carefully and openly considered over a longer period of time — like a terminal illness, or unbearable and unsolvable chronic pain — but those cases are the minority by far.


> Almost by definition, if you are in the state of mind to consider suicide, you are probably not accurately and impartially weighing the question proposed

I just don't believe that. How did you arrive at that conclusion?


Such a US-centric take.


I don't know that the thought itself is depressing; suicide is a fact of life, and everyone in possession of their mental faculties is well aware of it. We indeed don't "have to" do anything, but in reality we do: our brains are programmed to keep on living no matter what.


Or you can just get rich and emotionally self-sufficient enough to never depend on anyone. Although it's probably good to have an end-of-life plan for when your body completely fails you in old age.


Well, you need a fallback in case "just get rich" doesn't work out.


Where are you spending those riches in order to get food, maintain your house and other property, keep up with the news, deal with matters of state, etc.? On other people. Money won't make them go away.


Spending can be done with surprisingly little human contact these days.


Sure but there are still people on the other side and you're still participating in a society. Using your money to degrade that social fabric isn't a good long-term strategy.


Your emotions for the AI are real, but the AI's emotions for you aren't.


Why is the electricity in your brain real, and fake in the AI?


The electricity in both things is real, and it's unkind to twist the words of the person you responded to that way. They specifically mentioned emotions, not electricity. An AI will be completely unaffected by anything said to it.


I think it's a legitimate question, because ultimately all brain activity is electrical and chemical signals. To say that some electrical signal objectively is or is not an emotion implies that there is some objective rule for deciding this -- but I'm not aware of any such rule, only longstanding conventions.


AI isn’t programmed to have emotions. Merely to replicate a semblance of a simulacrum of said sensations. Regardless of your considerations for the electrical signals, the models are just tab-completion, ad infinitum.


Your emotions are just a tab-completion to God/Creator/or whatever.


No, it completely misses the point. If you say something very upsetting to me it will genuinely affect me and my day and have negative consequences for myself and the people around me, because I will have an emotional reaction. You can't upset an AI because it doesn't have the capacity to be upset, it can only return some words and then continue on as if nothing happened.

I hope that makes sense. The underlying functionality of my emotions doesn't matter at all, only the impact.


> it doesn't have the capacity to be upset

AIs are affected by things you say to them while those things are in their context window. Your context window for bad news is about a day.

Why are you certain that you -- a physical system amounting to a set of complex electric and chemical signals -- have this capacity for "genuine emotion", while another physical system, that outwardly behaves much the same as you, does not?

If I made a replica of your body accurate down to the atomic scale, and told that body something upsetting, it would outwardly behave as though it were experiencing an emotion. Are you claiming that it would not in fact be experiencing an emotion?
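The context-window point above can be made mechanical: a chat model is influenced only by messages still inside its window, and older turns simply drop out. A minimal sketch, with a toy whitespace-based word count standing in for a real tokenizer:

```python
# Sketch: a sliding context window. Messages that fall outside the window no
# longer influence the model at all -- the mechanical sense in which an
# upsetting message "affects" an LLM only while it remains in context.
# The token counting here is a toy (whitespace split), not a real tokenizer.

def trim_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent messages whose combined toy token
    count fits inside max_tokens, preserving their order."""
    kept, total = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = ["something very upsetting", "neutral follow-up", "latest question"]
window = trim_to_window(history, max_tokens=5)
# "something very upsetting" no longer fits and drops out of context.
```

Real APIs count tokens with a proper tokenizer, but the trimming idea is the same: once a message is evicted, it has no remaining effect on the model's output.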


Wait, are you claiming astrology is real and that the moon landings were faked?

No of course you're not, and I'm not claiming anything about a full human replica so please don't put words in my mouth that way.

We're not talking about a replica of a body. We're talking about LLMs. They don't have bodies and can't be moved, which is the definition of emotion.

And I'm not sure what you mean by my context window being a day. That's a strange thing to say. I'm deeply affected by childhood traumas several decades after they happened; they affect the tension patterns in my body every day. An LLM isn't affected even within its context window, regardless of its length. It's only affected in the sense that a microwave is affected by setting it to defrost, or a calculator is affected by setting it to use RPN.

If you built a perfect replica of a human then it would feel emotions just as a human. But that's not what we're talking about is it? There's a saying in my country: If your granny had balls she would be your granda. We can argue all day about "what if x" and "what if y" but we might be better served focusing on the reality we actually have.


TFA is not talking about the AIs of now but of the future. Broaden your imagination to where we will be in 10-15 years, not 30 days.


You continue as if nothing happened even though <insert really bad event from 10 years ago>.

By the way, AI will react and have a larger context window soon(ish). Then what?



