I have two major contentions. First, I'm a neuroscientist by training, so hearing anything related to "upload my brain" just makes me weep. The only realistic way to "upload your brain" is to pump your body full of fixatives (paraformaldehyde), section it on a cryostat, and then perform electron microscopy across every single slice. Then put it all back together (fixing skewing artifacts, etc.) on a computer.
THEN make a simulation out of it, where every atom of your body is replicated with pristine accuracy. Otherwise you are not yourself anymore.
I'm not saying it's impossible. I'm saying it's so far from likely that being near-immortal due to bio-enhancements is almost certainly more practical. Or you can develop some "scanning ray gun" which does the same thing without killing you first, I suppose.
Second, the whole notion of "exponentially increasing intelligence" I find silly. Are we more intelligent than Homo sapiens 3000 years ago? Perhaps more scholarly, more literate, more capable of engineering complex systems. But our inherent, base intelligence is not any different. Just applied to different problems and starting at different bases of knowledge.
Edit: Musing on this, I find a "Matrix"-style hostile takeover by highly intelligent AI much more likely than humans "transcending" into some super-computing cloud of blissful hyperintelligence, and certainly more likely than humans ever fully shucking our meat-bag bodies.
1) I'm surprised that you as a neuroscientist make a definitive claim that we need to simulate every atom of our body to ensure that "you" is still "you". Clearly, cutting my nails destroys my persona ;) (Less flippantly: it's kind of hard to say which parts of us make us ourselves. I certainly don't see much of a consensus on that.)
2) What about the old idea of replacing one neuron at a time with a simulation? (The "Moravec Upload") Caveat: Provided our identity is just defined by our neurons, which is certainly more than up for debate.
3) I'm not aware that there's a claim of exponentially increasing human intelligence for the singularity. The idea is for technology to become self-improving. We'll see about that, but it's certainly a different claim than what you're refuting.
Preface: Obviously I'm not making any 100% claims. Nothing is impossible, someday, etc etc
1) Ok, I was generalizing for the sake of brevity. Agree that there isn't a consensus, but it's pretty safe to say that it is somewhere between "your brain" and "everything".
2) To do this, you would also have to effectively freeze all the other neurons in your brain. Neurons don't operate in isolation. If you remove one from the network, the rest start (ever so slightly) re-adjusting weights, synapses, dendritic arbors, etc. So I suppose if you could put someone into a hibernation where their neurons were effectively frozen, this may be possible (there's a toy sketch of the drift problem below). Ignoring the invasiveness aspect too.
3) I suppose this boils down to a big argument about what intelligence actually is, and I don't want to start an internet argument about that. Suffice to say, no one really has any idea how to define or quantify intelligence. How do you improve intelligence? It certainly isn't the addition of more knowledge (although that helps you become effectively more intelligent, since you can stand on the shoulders of those before you). If you take my brain and speed it up so I can do the same thing but faster, am I more intelligent? I don't have a good answer to that.
Perhaps hyperintelligence is nothing more than being able to think intelligently really, really fast.
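Re: point 2, here's a toy sketch of that drift problem, assuming a cartoon recurrent network with Hebbian-style plasticity (pure caricature, nothing biological about the numbers):

```python
# Toy recurrent net with a crude Hebbian update. Silence one unit and let the
# rest keep running: every remaining weight drifts away from where it started.
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = rng.normal(0, 0.1, (n, n))   # "synaptic" weights (made-up units)
x = rng.normal(0, 1.0, n)        # activity state

def step(W, x, lr=0.01):
    x = np.tanh(W @ x)           # propagate activity
    W = W + lr * np.outer(x, x)  # crude stand-in for plasticity
    return W, x

W_before = W.copy()
for _ in range(100):             # the network keeps re-adjusting...
    W, x = step(W, x)
    x[0] = 0.0                   # ...while neuron 0 is "removed"

drift = np.abs(W - W_before)[1:, 1:].mean()
print(f"mean weight drift among the remaining neurons: {drift:.4f}")
```

The point being: the replacement neuron wouldn't be dropped back into the same network it was scanned from.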
> "If you remove one from the network, the rest start (ever so slightly) re-adjusting weights, synapses, dendritic arbors, etc."
I am not terribly familiar with neuroscience, so you will have to forgive me for asking, but why would this be particularly troubling? Is the concern that surrounding neurons would neglect to "hook up" with the synthetic one, once they got used to nothing being there?
Yep, pretty much. And even if they do reconnect, it is highly likely that their relationship will be different. Changes in the number of synapses, their strength, even their physical locations make a difference. Synapses at the end of a dendritic arbor affect the neuron differently than those close to the root. Even inter-synapse spacing is important, as adjacent synapses have modulatory effects on the reactivity of their neighbors.
None of this is overly important per se. Your brain doesn't have a specific pattern of arrangements that must be just so to make a human. But as you start changing these little details, the changes will eventually bubble up to create a different "you".
How different is hard to say; maybe not even enough to be noticeable. This is firmly in "theoretical" territory. But considering how drastically people can change from relatively small strokes, I would hazard that there isn't a huge amount of deviation allowed.
Perhaps it's possible if you can do it finely enough that only single neurons are changing, allowing time to resynapse (or artificially recreating those connections?). Not sure.
Trouble getting real neurons and synthetic neurons to interact would certainly throw a wrench in everything, but I am not so sure that concerns about changes to the individual are much of a show stopper. If you are digitizing a "person", they are going to be dramatically altered one way or the other.
Either way, though, you are going to create an intelligence that will likely be unable to relate to biological people, and biological people will probably have just as much difficulty relating to it. I wouldn't be surprised if an uploaded person was viewed as some sort of psychopath after the procedure, no matter how perfect the procedure was. (And considering how biological people will probably treat it, it would probably be very justified in thinking the same of them.)
As long as the individual is happy with the results, no matter how changed they become, that is probably the best we could hope for.
You seem very confused about what "Singularity" denotes.
You also seem to be confusing "the only way with current technology" with "the only way, ever".
You also seem to be claiming that highly intelligent AI couldn't solve the upload problem. And that it would necessarily be hostile - this is likely if we were to sample randomly from mind-space, but hopefully we will be careful about which highly intelligent AIs we create...
> I'm not saying it's impossible. I'm saying it's so far from likely that being near-immortal due to bio-enhancements is almost certainly more practical. Or you can develop some "scanning ray gun" which does the same thing without killing you first, I suppose.
Seriously, I see near-immortality due to bio-engineering as way more likely in the remote future (next few hundred years) than uploading your brain. Similarly, I see strong, general AI as way more likely.
I'll even say that I see full human-brain simulations as way more likely than the ability to upload your own brain into the net.
I'm all for being hopeful and optimistic, but sometimes physics is just not on your side.
If you described the way modern internet works to an engineer 100 years ago, they'd pull their hair out too trying to understand such a system using the technology of their day.
For brain uploading, my money is on pure algorithms. Throw a lifetime of data on top of base human emulation software, possibly influenced by known genetic data, and then iterate through an insane number of possible brains using an insane quantity of CPU until you get a virtual brain that most closely matches up with the provided data. It'll be at best a crude approximation of the original human, but just like with MP3s, nobody will really care as long as their departed loved one feels like the same person.
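For fun, a skeleton of that search, with the obvious caveat that simulate_brain here is a stand-in for the actual hard part (the base human emulation), and the hill-climbing is the crudest possible version of "iterate through possible brains":

```python
import random

def simulate_brain(params, stimulus):
    # Stand-in for the base human emulation software -- the actual hard part.
    return sum(p * s for p, s in zip(params, stimulus))

def fitness(params, life_log):
    # How closely do simulated responses match the recorded lifetime of data?
    return -sum((simulate_brain(params, stim) - resp) ** 2
                for stim, resp in life_log)

def fit_brain(life_log, dims, iters=20_000):
    best = [random.gauss(0, 1) for _ in range(dims)]
    best_score = fitness(best, life_log)
    for _ in range(iters):                                # insane CPU goes here
        cand = [p + random.gauss(0, 0.05) for p in best]  # mutate slightly
        score = fitness(cand, life_log)
        if score > best_score:                            # keep improvements
            best, best_score = cand, score
    return best  # at best a crude, MP3-grade approximation

random.seed(1)
target = [0.3, -1.2, 0.8]  # the "person", which we never observe directly
life_log = [(s, simulate_brain(target, s))
            for s in ([random.gauss(0, 1) for _ in range(3)] for _ in range(200))]
print(fit_brain(life_log, dims=3))
```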
The "singularity" is just a mathematical abstraction, as with dark energy or string theory. The math is highly likely to describe a real phenomenon, but anyone extrapolating further detail based on that math is basically just guessing.
Similarly, if you were to ask an engineer 100 years ago to predict the changes in technology that would shape the future, he would be completely off track as well. This is another reason Singularity predictions seem so far off the mark.
Uploading the brain is not the only thing implied by the Singularity, according to Kurzweil. Near-immortality and general strong AI are also in line with it. The one main point is that once AI / technology gets good enough, it takes off and leaves normal brain intelligence behind. Everything else is speculation.
I'm just hopping in to recommend "Saturn's Children" by Stross, which this comment has reminded me of.
AIs are created by building immortal human brain simulations, raising them, and then cloning the results. Humans are never able to upload and eventually go extinct.
> claiming that highly intelligent AI couldn't solve the upload problem.
Solving the upload problem possibly requires imaging / scanning at a level of detail that might be outside what is possible within the constraints of physics. If it turns out we need atomic level information for the whole body, the project might be hosed from the beginning, something that better ideas or more thinking can't overcome.
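Some napkin math on what "atomic level information for the whole body" would mean, assuming the commonly cited figure of roughly 7e27 atoms in an adult human and an absurdly charitable one byte of state per atom:

```python
ATOMS_IN_BODY = 7e27   # rough, commonly cited estimate for an adult human
BYTES_PER_ATOM = 1     # wildly optimistic: element + position + bonds in 1 byte

total_bytes = ATOMS_IN_BODY * BYTES_PER_ATOM
print(f"~{total_bytes:.1e} bytes = ~{total_bytes / 1e21:.1e} zettabytes")
```

That's millions of zettabytes before you even talk about reading the data out, which is the kind of wall that more cleverness doesn't obviously climb.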
So because solving a problem might require something that might be outside the constraints of physics, I'm implying that solving that problem requires violating physics? That doesn't make sense.
Also, correct me if I'm wrong, but are you implying that a neuron/synapse-level scan might not be sufficient?
Positing "highly intelligent AI" as a solution to brain uploading basically says "you know that implausible bit of science fiction, let's solve it with another implausible bit of science fiction". Until you can demonstrate the plausibility of super-AI, you can't propose it as a solution to a similarly implausible problem.
"Implausible" might be the wrong word--my point is, you can't say some fantastic future technology we have no idea how to build is possible by saying it'll be enabled by another fantastic future technology we have no idea how to build, because that doesn't really get you any closer to the solution.
"Are we more intelligent than homo sapiens 3000 years ago?"
We might be, unless the Flynn effect[0] is just there to make us feel good about ourselves. We only have data going back to the 1930s, but the trend could have been around for a while.
>where every atom of your body is replicated with pristine accuracy. Otherwise you are not yourself anymore.
That's a very strong claim which you provide no basis for, in that comment or in your follow-ups.
Maybe replication at the level of neurons, as opposed to atoms, would work? Or maybe at an even higher level? We don't know yet.
>Are we more intelligent than Homo sapiens 3000 years ago? Perhaps more scholarly, more literate, more capable of engineering complex systems. But our inherent, base intelligence is not any different.
We don't have the ability to self-modify, or to scale our intelligence horizontally (e.g. by copying it to more hardware), which an entity executing on a computer system could have.
That's one of the main points of the argument for singularity.
Your post is extremely dismissive of the singularity arguments, but you don't seem very familiar with them (e.g. the singularity argument doesn't apply just to human uploading; it also covers artificial intelligence -- read the Wikipedia article).
I'd personally agree that people who say a singularity will happen at some specific date soon are over-speculating too.
"We don't have the ability to [...] scale our intelligence horizontally (e.g. by copying it to more hardware), which an entity executing on a computer system could have. That's one of the main points of the argument for singularity."
Count me as skeptical about that. Scaling horizontally means making it a communications problem, which the last 50 years have hinted is a bigger deal than you might think.
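A toy Amdahl-style model of that communications problem (the parameters are invented; the shape of the curve is the point):

```python
# Speedup from n workers given a serial fraction s plus a per-worker
# coordination cost c: 1 / (s + (1 - s)/n + c*n). Adding workers helps for a
# while, then the coordination term dominates and it actually gets worse.
def speedup(n, s=0.05, c=0.001):
    return 1 / (s + (1 - s) / n + c * n)

for n in (1, 10, 100, 1000):
    print(f"{n:>5} workers -> {speedup(n):5.1f}x")
```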
> That's a very strong claim which you provide no basis for, in that comment or in your follow-ups. Maybe replication at the level of neurons, as opposed to atoms, would work? Or maybe at an even higher level? We don't know yet.
Actually, worded correctly, it's a reasonable claim (assuming pure materialism). If there is nothing but matter and energy and their interaction, then a perfect simulation of all of your cells would undeniably be a perfect simulation of you.
The strong and unsubstantiated claim for which no basis is provided is that simply replacing the neurons in your brain would be "enough." There are good reasons to believe that the neurons in your skull are not sufficient (are you still you absent sensory input?) and apart from a lot of philosophical navel-gazing there's no proof forthcoming. Even pure materialism is an assumption--a good and reasonable one when it comes to doing science, but not something I'm willing to gamble my consciousness on.
Of course we can scale our intelligence horizontally. We have almost seven billion of us walking around. It turns out scaling horizontally gets you a lot, too. You might be able to scale horizontally more efficiently if you had a bunch of brains-in-vats that are telepathically linked into some sort of hive mind, but probably only by constant factors.
We can also self-modify within certain limited parameters (learning, drugs, etc.), but that's the most any intelligence can do. Whether we can widen those parameters by more than a constant factor is an open question.
For the computer scientists: the brain is far less like a CPU with an instruction memory neatly defining behavior, and far more like an FPGA, where the configuration of countless lookup tables defines the behavior of the design.
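A toy illustration of the distinction, with XOR standing in for "behavior" (same function, two framings of where the program lives):

```python
# CPU style: the behavior lives in a serial instruction stream.
def cpu_xor(a, b):
    t1 = a and not b
    t2 = b and not a
    return t1 or t2

# FPGA/LUT style: the behavior IS the configuration of a lookup table.
XOR_LUT = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
def lut_xor(a, b):
    return XOR_LUT[(a, b)]  # rewire the table and the "hardware" changes

assert all(cpu_xor(a, b) == lut_xor(a, b) for a in (0, 1) for b in (0, 1))
```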
> "exponentially increasing intelligence" I find silly
Not really that difficult to imagine, at least. Imagine being able to do all-nighters all the time. Imagine being able to focus entirely on a problem instead of being distracted by reddit. Then imagine being able to duplicate yourself without limit to collaborate on problems. Imagine a million copies of yourself working 24 hours a day, without distraction, on how to solve the resource problems needed to create a billion copies of yourself doing the same thing. Then a trillion. Then imagine these trillion brains creating simulators that run a trillion times faster, simulating DNA to do in seconds what biolabs need a year to do. Once intelligence is free from the biological substrate, it's hard to see any limit.
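The napkin math behind that chain is just unconstrained doubling; the doubling time itself is a pure assumption, but nothing biological stands in the way of it:

```python
# If a population of uploaded minds can copy itself once per "tick",
# growth is 2**t: no gestation, no schooling, no sleep.
def copies(doublings, seed=1):
    return seed * 2 ** doublings

for d in (20, 30, 40):
    print(f"{d} doublings -> {copies(d):.1e} copies")  # ~1e6, ~1e9, ~1e12
```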
I think we can say that brains/neurons have the lowest communication overhead of any computational approach known. Common computer architectures that move the contents of memory to computational hardware through a limited number of ports are incredibly inefficient by comparison. There is massive room for improvement. We're still only using 2 dimensions in silicon, basically.
It sounds like you know what they mean by "intelligence," but you're instead choosing to use your preferred definition and call them wrong based on semantics. Being more scholarly, more literate and capable of solving much bigger problems sounds like something that could reasonably be summarized as "more intelligent."
Fair enough. I mused on this in one of the responses up above. I still don't think that makes a person more innately intelligent. More capable, perhaps, but not more intelligent.
But, humans are pretty fond of defending our position in the cosmos and our idea of "human intelligence", so maybe this is just speciesism at work =)
What about a Neuromancer-style brain upload, where a computer takes a series of very long, incredibly detailed interviews plus brain analysis and constructs a simulation of that person? Sure, it might not be exact, but as you said, it really can't be you unless it's the whole thing anyway. At least this could create a computer that would react like a specific human most of the time, and I don't think that something of that level of sophistication is really that far away (as in <50 years at current progress).
"Or you can develop some 'scanning ray gun' which does the same thing without killing you first, I suppose."
Ok, who made the joke about that, something along the lines of, "Then what? Your uploaded self sends you text messages like, 'Just got out of the hot tub with Carmen Electra; wish you could be here'?"
However, why on earth would brain uploading need to be done all at once? Given the right nanotech, you could begin at one part of the brain - slowly replacing biological neurons with artificial ones.
Once all your neurons are artificial - just download their state. This is pure speculation and way out in the future - like 100 years - but it could be fun :-)
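In that speculative spirit, the procedure might look something like this sketch, where the toy network and the "settling" step hand-wave the re-adjustment worry from upthread (every interface here is hypothetical):

```python
import random

class ToyBrain:
    """A stand-in network; nothing biological, just the shape of the procedure."""
    def __init__(self, n=20):
        self.state = [random.random() for _ in range(n)]
        self.synthetic = [False] * n   # which neurons have been replaced

    def settle(self):
        # Crude "re-stabilization": each unit drifts toward its neighbor.
        shifted = self.state[1:] + self.state[:1]
        self.state = [(a + b) / 2 for a, b in zip(self.state, shifted)]

def gradual_upload(brain, settle_steps=10):
    downloaded = []
    for i in range(len(brain.state)):
        brain.synthetic[i] = True          # swap neuron i for an artificial one
        for _ in range(settle_steps):      # wait for the network to re-settle
            brain.settle()
        downloaded.append(brain.state[i])  # synthetic neurons expose their state
    return downloaded                      # "just download their state"

random.seed(0)
print(gradual_upload(ToyBrain())[:5])
```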
Nanotech is another catch-all term that makes me unhappy.
You know what is a really awesome piece of nanotechnology? All the proteins and enzymes that are making your neurons function, right this very moment. How do you download their state? You'd have to capture their conformation and who they're interacting with.
Nanotech isn't just making a computer smaller. Much of your body already operates on the "nano" level, and does it pretty efficiently.
I'll admit nanotech really isn't my field, but I understand enough of it to know that it isn't a panacea. If a materials science person wants to jump in on this I'd be appreciative (to either confirm/deny what I said)
This is somewhat like how the 'Ndoli Device' works in some of the short stories of Greg Bear. (I'd especially recommend the story 'Learning To Be Me'.)
Ah, yes, of course... 'Learning To Be Me' is Egan's story, and the 'Ndoli Device' or 'Ndoli Jewel' appears in some of Egan's other stories as well. (I vaguely recall seeing the same term used elsewhere, maybe a Bear book, and thought it might be a specific allusion back to either Egan or some other common precursor.)
Here's a criticism of the Singularity I've seen a couple of times that originates from the webcomic Pictures for Sad Children.
"There's a special kind of nerd though who thinks computers will overtake mankind in thirty years, changing humanity in ways incomprehensible to us now. Ignoring the third of the world without electricity. So the singularity is the nerd way of saying, "in the future, being rich and white will be even more awesome". It's flying car bullshit: surely the world will conform to our speculative fiction. Surely we're the ones who will get to live in the future. It gives spiritual significance to technology developed primarily for entertainment or warfare, and gives nerds something to obsess over that isn't the crushing vacuousness of their lives."
Although in my mind that's more of a criticism of the adulation of the Singularity by its adherents than of the actual Singularity itself.
> Ignoring the third of the world without electricity ... "in the future, being rich and white will be even more awesome"
There is a portion of the "Singularitarians" who push for the Singularity precisely because such a large segment of the world is so disadvantaged; they want humanity as a whole to move forward. Their line is more "in the future, being human will be even more awesome".
(For the most part, this group has a fairly low opinion of Kurzweil and his accelerating curves or whatever he calls them.)
The nerd adulation comes from the idea that understanding computers will be more awesome, not being rich and white. The fact that they make that substitution shows a certain mired pathos.
EDIT:
FWIW, I think the Kurzweil view is ridiculously optimistic.
It has always struck me as odd that people dismiss the singularity as a nerd fantasy. They latch on to one view of it and make up their minds about the idea itself.
The people I know that take it seriously are much more afraid of it than anything.
They think that it will be an extinction event rather than the nerd rapture.
This is exactly the mindset among evangelical Christians who believe the rapture is right around the corner - and every generation of evangelicals back to the 1800s has believed that the rapture is imminent and, of course, that they as individuals will be saved.
It reifies the teleology and savior complex that make monotheistic religions troublesome. It ignores morality and aesthetics in favor of an unknowable, uncontrollable machine logic.
It kinda has a similar problem to the Hegelian dialectic that Marx built his stuff on. The Singularity argument observes some trends and presumes they will continue along the same profile, carrying our society towards a specific phase-change that we've never seen before. Extrapolating trends is one thing; arguing that some novel configuration is inevitable is another.
That's interesting, can you elaborate? The dialectic is troublesome, especially from the perspective of Deleuze's multitude, but I don't immediately see how it is tied to a claim of novel configuration.
In my experience with Marx, the dialectic mostly let Marx approach Smith and Ricardo's works in a way that allowed for the inclusion of a wider conception of the conditions and consequences of capital. Whereas a WSJ oped might say "capital has benefits! the downsides are simply worth it", Marx takes a benefit and a downside, considers them 'an internal contradiction of capital', then uses a dialectical mode to reveal some hidden 3rd variable.
Are you referencing the tendency of the rate of profit to fall from Das Kapital? Or something in one of the more explicitly political texts about the inevitability of communism? I can only really discuss Das Kapital, which I feel is more of a historical reflection on political economy than the explicit party politics you might find in the Manifesto.