Human intelligence is not general intelligence. If it were, you'd be able to use your conscious thoughts to fight off diseases and you wouldn't need your immune system, for example.
The problem I see isn't that AGI isn't possible, that's not even surprising. The problem is the term "AGI" caught on when people really meant "AHI" or artificial _human_ intelligence, which is fundamentally distinct from AGI.
AGI is difficult to define and quite likely impossible to implement. AHI is obviously implementable, but I'm unaware of any serious public research that has made significant progress towards this goal. LLMs, SSMs, or any other trainable artificial systems are not oriented towards AHI and, in my opinion, are highly unlikely to achieve this goal.
> Human intelligence is not general intelligence. If it were, you'd be able to use your conscious thoughts to fight off diseases and you wouldn't need your immune system, for example.
The above is nonsense. Mostly because we actually do use our conscious thoughts to fight disease when our immune system can't. That's what medical science is for. We used our intelligence to figure out antibiotics, for example.
The broadly accepted meaning of AGI is human intelligence on a machine. Redefining it to mean something else does nothing useful.
> Mostly because we actually do use our conscious thoughts to fight disease when our immune system can't.
I think in some sense you're right. There's a higher level way to address disease that humans have made progress on.
But in the critical sense more related to the point I was making, I completely disagree. The sense I'm speaking of is that we do not (maybe one day we will) directly affect disease states with our brains the way our immune system does. It's a very complicated process that we know works, but we absolutely do not understand all the mechanisms involved the way we do with, say, solving a calculus equation.
My point is, if we could do this our central nervous system would also be the immune system. But it is not, because it operates in an entirely different cognitive space than our conscious brain. There are many examples of this, like regulating your body's blood sugar. We know the endocrine system is doing this, but we are not actively involved in the way we are when, say, speaking to one another. The examples are actually countless and go far beyond what the human cognitive system is currently capable of. AGI, by definition, would have to not only encompass intelligence in all these different cognitive spaces but also encompass intelligence in any arbitrary future space.
> The broadly accepted meaning of AGI is human intelligence on a machine.
Then the name is inaccurate to the point of deception. You've just described artificial human intelligence, not artificial general intelligence.
Just because we don't have direct conscious control over our white blood cells or pancreas does not mean we don't have general intelligence. We may not control them, but we have the ability to figure out how they work. Our intelligence is general in the sense that we can understand body functions, or invent calculus, or develop relativity, or any unlistable number of other things. If a machine can do that - understand arbitrary concepts, even if it's at or below human abilities - we'd have AGI. Instead, with LLMs we have a tool that can mimic understanding, but when you go a bit deeper it's clear that understanding is not present. Not to say LLMs don't have uses, they obviously do, but they are not intelligent.
But in this world, in this universe, there are lots of problems to solve. Humans understand these problems more than any other organism or machine we know of, but we are not general. The most we can say is that we are the most general. There are far too many problem domains beyond the capability of humans to solve to call human intelligence general intelligence. The pancreas _does_ have direct control over its problem domain. That makes it a specific form of cognition. Maybe one day we will have that too. So does that mean that when that day comes we will be more general? I believe it does.
I like Francois Chollet's definition of intelligence: efficiency of skill acquisition, not demonstration of skill. I don't know if I should attribute that to him, but for me he is the first person I heard put it so succinctly. Using that definition, there is currently no known learning architecture that can acquire _any_ arbitrary skill efficiently. You and I do not currently have the ability to acquire the skill of consciously regulating bodily functions. It is a form of cognition that doesn't map to anything we're aware of. Understanding how it works in principle doesn't mean you have the skill to perform it.
> There are far too many problem domains that are beyond the capability of humans to solve to call human intelligence general intelligence.
I'm curious about this. Is that not simply a limitation of our current knowledge base? That is, as we figure out more about reality, we will eventually conquer those domains as well.
Or do you mean there are domains that are provably beyond the structural capability of our brains? For instance, abstract things like higher-dimensional geometry or number theory which is hard for people to visualize "natively" in their brains. Yet people regularly solve problems in those types of fields. Sure, we rely on tools like computers or pen and paper, but we do solve those problems.
Similarly, take your point about pancreas: sure, our brains cannot do the things it does, that is simply due to lack of the requisite "actuators" connecting our brains to the organ, an artifact of our evolution. But we do understand a lot of the biological mechanisms involved in its operation, enough to treat related problems, again through "tools" like medication and surgery. As we learn more about how they work, that increasingly becomes a problem domain our brains are "capable of solving."
As such, I don't see how these examples show that the brain is not "generally intelligent", unless you exclude tool use, which to me seems like incorrectly conflating cognition and action.
I'm leaning "no" and "yes, for what we know so far" to your questions.
I think tools are fine depending on how you define a tool and its relationship to human intelligence. If we build an artificial pancreas that learns about the body it's sitting in and is able to function just as well as a natural pancreas, would we say humans solved this problem? In one sense yes, because humans built the machine. But not in another sense, because humans are not the machine. Just like we say "AlphaZero beat the world's best human chess player". We don't usually say "the humans who designed AlphaZero beat the world's best chess player". Did the humans who designed AlphaZero master chess? Not necessarily.
My argument is that there are examples of cognition that we know the human brain currently doesn't operate in. Are these learnable by the human brain? We don't know yet so we can't say the human brain is completely general.
As I type this and as you read this, our bodies are constantly rebuilding themselves. Maybe if you connect the appropriate actuators we could do the same thing with our conscious minds? I highly doubt it, due to the nature of the problem and how inefficient the brain would be at solving this sort of problem. It likely wouldn't work, but it's too far for me to say "provably beyond the structural capability of our brains". I'll just say "I'm very pleased I don't have to do that right now". Developing a living organism, either starting from a single cell or from a fully grown adult, in real time in the real world is a very difficult problem to solve, and the human brain would be terribly inefficient at it.
Control does not equal understanding. The pancreas may control its problem domain, but that's not cognition any more than a circuit breaker thinks about cutting off power when the current gets too high, or a motion sensor thinks about opening a door when a person comes close. These are examples of control, not intelligence. We have no direct control over what happens inside the sun, but that doesn't stop us from developing an understanding of how those fusion processes work.