> Johnny Depp may seem like a strange casting choice, but given his penchant for bizarre roles, we imagine he’ll play a mad scientist quite convincingly.
Actually it seems like a really predictable casting choice. When will people get tired of seeing Depp playing a weirdo? I don't even have to guess that Helena Bonham Carter is going to be bug-eyed with big crazy hair in this one too...
There are some celebrity actors who have stopped playing new roles and instead turned the roles they take on into studies of their public personas rather than explorations of a character.
If he gets along with Nolan and gets pushed in the right ways he might not phone it in, but all of his recent work points to him being emptily encouraged to do the same thing repeatedly.
Yup. I really like his more "normal" roles-- like his lucid moments (which were most of the movie) in "Secret Window" http://www.imdb.com/title/tt0363988/
Hopefully it will make an entertaining movie, but man, I really hate all things associated with "The Singularity". Nothing fills me with more irrational (impotent) rage than people babbling on about the Singularity.
I totally agree. Just hearing the name Kurzweil enrages me these days.
But! I think the singularity deserves to have a presence in fiction, even if it can't have one in reality. Some of the best sci-fi is built on whimsical and unrealistic technologies.
It doesn't have a presence in reality, it apparently won't have a presence in reality any time soon, but saying it can't happen is about as egotistical as we could be. Our generation has not yet discovered a means to synthesize this type of intelligence, but there is still plenty of time left to figure it out. We just don't know yet if it can happen or not.
That's perfectly fine, but Kurzweil is putting some rather precise dates on these things happening and how they're going to happen. I actually think strong AI is probably inevitable, but I have no reason to think it's going to take 30 years instead of 3,000.
The credibility of the claims is somewhat diminished by the convenience. His argument is literally "I am going to be able to live just long enough to live forever and resurrect my dead father."
And assuming we did invent strong AI, so what? It doesn't necessarily or immediately lead to a singularity. We already have several billion human brains and have had them for quite some time.
I cannot rewrite my neurons at will or make a copy of myself to run in parallel. Does an AI having those two traits lead to the singularity? I don't know; it's too vaguely defined in this conversation. Will it be bigger than the industrial revolution? I would bet yes.
You can rewrite your neurons too. It's called learning.
Here's a thought experiment though, more in the direction you meant. Imagine you had a tool that let you rewire your own neurons however you wanted. Again, so what? There are something like 100 trillion synapses in an adult brain. What are the chances that you'll use this tool to actually improve your own intelligence (when evolution has been playing that game for millions of years) vs just giving yourself a lobotomy?
Now, how is it any different in the hands of an artificial intelligence that is no more intelligent than us?
All of this futurism just seems like bad assumptions stacked on top of more bad assumptions. I still think Pinker said it best:
"There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems."
"There is not the slightest reason to believe in a coming singularity ... Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles"
That seems like a pretty terrible analogy. All of those examples are possible today, but their cost doesn't justify their utility, and nobody pursues them. On the other hand, it's fairly easy to think of practical uses for artificial intelligence, and research in this area is actively pursued by researchers around the world.
Like I said, it would take someone incredibly egotistical (apparently such as a Harvard psychology professor) to suggest artificial intelligence isn't possible based on where we are right now, where we were 30 years ago, or where it appears we're headed over the next 30 years. Based on this quote, Pinker's conception of the world doesn't appear to extend beyond the span of his own lifetime.
"The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible."
Nor is the fact that some of his childhood fantasies didn't come to fruition evidence that it's impossible or even unlikely. What's his point? One bad argument doesn't right another.
"Like I said, it would take someone incredibly egotistical [...] to suggest artificial intelligence isn't possible..."
There is a difference between a strong AI position and the statement that such an intelligence would be qualitatively different from currently operating, natural intelligence, and a further difference between that and the statement that such an intelligence would be quantitatively better (whatever that means), which is the "singularity" as far as I can tell.
That is a tautology (because the inverse is a paradox: If you prove something has travelled faster than light, that would invalidate the current definition of "faster.")
The singularity was around before Kurzweil. Long before. He's a charlatan and has sullied the whole notion of the singularity. It bums me out every time I hear his name attached to the term.
Hopefully you're only expressing disdain for people who treat it as a scientific inevitability, rather than a science fiction trope. As a sci-fi trope, it's pretty darn cool.
I have two major contentions. First, I'm a neuroscientist by training, so hearing anything related to "upload my brain" just makes me weep. The only realistic way to "upload your brain" is to pump your body full of fixatives (paraformaldehyde), section it on a cryostat and then perform electron microscopy across every single slice. Then put it all back together (fixing skewing artifacts, etc) on a computer.
THEN make a simulation out of it, where every atom of your body is replicated with pristine accuracy. Otherwise you are not yourself anymore.
I'm not saying it's impossible. I'm saying it's so far from likely that being near-immortal due to bio-enhancements is almost certainly more practical. Or you can develop some "scanning ray gun" which does the same thing without killing you first, I suppose.
Second, the whole notion of "exponentially increasing intelligence" I find silly. Are we more intelligent than homo sapiens 3000 years ago? Perhaps more scholarly, more literate, more capable of engineering complex systems. But our inherent, base intelligence is not any different. Just applied to different problems and starting at different bases of knowledge.
Edit: Musing on this, I find a "Matrix"-style hostile takeover by highly intelligent AI much more likely than humans "transcending" into some super-computing cloud of blissful hyperintelligence. That seems much more likely than humans ever fully shucking our meat-bag bodies.
1) I'm surprised that you as a neuroscientist make a definitive claim that we need to simulate every atom of our body to ensure that "you" is still "you". Clearly, cutting my nails destructs my persona ;) (Less flippant, it's kind of hard to say which parts of us make us ourselves. I certainly don't see much of a consensus on that)
2) What about the old idea of replacing one neuron at a time with a simulation? (The "Moravec Upload") Caveat: Provided our identity is just defined by our neurons, which is certainly more than up for debate.
3) I'm not aware that there's a claim of exponentially increasing human intelligence for the singularity. The idea is for technology to become self-improving. We'll see about that, but it's certainly a different claim than what you're refuting.
Preface: Obviously I'm not making any 100% claims. Nothing is impossible, someday, etc etc
1) Ok, I was generalizing for the sake of brevity. Agree that there isn't a consensus, but it's pretty safe to say that it is somewhere between "your brain" and "everything".
2) To do this, you would also have to effectively freeze all the other neurons in your brain. Neurons don't operate in isolation. If you remove one from the network, the rest start (ever so slightly) re-adjusting weights, synapses, dendritic arbors, etc. So I suppose if you could put someone into a hibernation where their neurons were effectively frozen, this may be possible. Ignoring the invasiveness aspect too.
3) I suppose this boils down to a big argument about what intelligence actually is, and I don't want to start an internet argument about that. Suffice to say, no one really has any idea how to define or quantify intelligence. How do you improve intelligence? It certainly isn't the addition of more knowledge (although that helps you become effectively more intelligent, since you can stand on the shoulders of those before you). If you take my brain and speed it up so I can do the same thing but faster, am I more intelligent? I don't have a good answer to that.
Perhaps hyper intelligence is nothing more than being able to think intelligently really really fast.
> "If you remove one from the network, the rest start (ever so slightly) re-adjusting weights, synapses, dendritic arbors, etc."
I am not terribly familiar with neuroscience, so you will have to forgive me for asking, but why would this be particularly troubling? Is the concern that surrounding neurons would neglect to "hook up" with the synthetic one, once they got used to nothing being there?
Yep, pretty much. And even if they do reconnect, it is highly likely that their relationship will be different. Changes in the number of synapses, their strength, even their physical locations make a difference. Synapses at the end of a dendritic arbor affect the neuron differently than those close to the root. Even inter-synapse spacing is important, as adjacent synapses have modulatory effects on the reactivity of their neighbors.
None of this is overly important per se. Your brain doesn't have a specific pattern of arrangements that must be just so to make a human. But as you start changing these little details, the changes will eventually bubble up to create a different "you".
How different is hard to say; maybe not even enough to be noticeable. This is firmly in "theoretical" territory. But considering how drastically people can change from relatively small strokes, I would hazard that there isn't a huge amount of deviation allowed.
Perhaps it's possible if you can do it finely enough that only single neurons are changing, allowing time to resynapse (or artificially recreating those connections?). Not sure.
Trouble getting real neurons and synthetic neurons to interact would certainly throw a wrench in everything, but I am not so sure that concerns about changes to the individual are much of a show stopper. If you are digitizing a "person", they are going to be dramatically altered one way or the other.
Either way though you are going to create an intelligence that will likely be unable to relate to biological people, and biological people will probably have just as much difficulty relating to it. I wouldn't be surprised if an uploaded person was viewed as some sort of psychopath after the procedure, no matter how perfect the procedure was. (And considering how biological people will probably treat it, it would probably be very justified in thinking the same of them)
As long as the individual is happy with the results, no matter how changed they become, that is probably the best we could hope for.
You seem very confused about what "Singularity" denotes.
You also seem to be confusing "the only way with current technology" with "the only way, ever".
You also seem to be claiming that highly intelligent AI couldn't solve the upload problem. And that it would necessarily be hostile - this is likely if we were to sample randomly from mind-space, but hopefully we will be careful about which highly intelligent AIs we create...
> I'm not saying it's impossible. I'm saying it's so far from likely that being near-immortal due to bio-enhancements is almost certainly more practical. Or you can develop some "scanning ray gun" which does the same thing without killing you first, I suppose.
Seriously, I see near-immortality due to bio-engineering to be way more likely in the remote future (next few hundred years) than uploading your brain. Similarly, I see strong, general AI to be way more likely.
I'll even say that I see full, human brain simulations to be way more likely than the ability to upload your own into the net.
I'm all for being hopeful and optimistic, but sometimes physics is just not on your side.
If you described the way modern internet works to an engineer 100 years ago, they'd pull their hair out too trying to understand such a system using the technology of their day.
For brain uploading, my money is on pure algorithms. Throw a lifetime of data on top of base human emulation software, possibly influenced by known genetic data, and then iterate through an insane number of possible brains using an insane quantity of CPU until you get a virtual brain that most closely matches up with the provided data. It'll be at best a crude approximation of the original human, but just like with MP3s, nobody will really care as long as their departed loved one feels like the same person.
The "singularity" is just a mathematical abstraction, as with dark energy or string theory. The math is highly likely to describe a real phenomenon, but anyone extrapolating further detail based on that math is basically just guessing.
Similarly, if you were to ask an engineer 100 years ago to predict the changes in technology that would shape the future, he would be completely off track as well. This is another reason Singularity predictions seem so far off the mark.
Uploading the brain is not the one and only thing implied by the singularity, according to Kurzweil. Near-immortality and general strong AI are also in line with it. The main point is that when AI / technology gets good enough, it takes off and leaves normal brain intelligence behind. Everything else is speculation.
I'm just hopping in to recommend "Saturn's Children" by Stross, which this comment has reminded me of.
AI are created by building immortal human brain simulations, raising them, then cloning the results. Humans are not able to upload and eventually go extinct.
> claiming that highly intelligent AI couldn't solve the upload problem.
Solving the upload problem possibly requires imaging / scanning at a level of detail that might be outside what is possible within the constraints of physics. If it turns out we need atomic level information for the whole body, the project might be hosed from the beginning, something that better ideas or more thinking can't overcome.
So because solving a problem might require something that might be outside the constraints of physics, I'm implying that solving that problem requires violating physics? That doesn't make sense.
Also, correct me if I'm wrong, but are you implying that a neuron/synapse-level scan might not be sufficient?
Positing "highly intelligent AI" as a solution to brain uploading basically says "you know that implausible bit of science fiction, let's solve it with another implausible bit of science fiction". Until you can demonstrate the plausibility of super-AI, you can't propose it as a solution to a similarly implausible problem.
"Implausible" might be the wrong word--my point is, you can't say some fantastic future technology we have no idea how to build is possible by saying it'll be enabled by another fantastic future technology we have no idea how to build, because that doesn't really get you any closer to the solution.
"Are we more intelligent than homo sapiens 3000 years ago?"
We might be. Unless the Flynn effect[0] is around just to make us feel good about ourselves. We only have data going back to the 1930s, but this trend could have been around for a while.
>where every atom of your body is replicated with pristine accuracy. Otherwise you are not yourself anymore.
That's a very strong claim which you provide no basis for, in that comment or in your follow-ups.
Maybe replication at the level of neurons, as opposed to atoms, would work? Or maybe at an even higher level? We don't know yet.
>Are we more intelligent than homo sapiens 3000 years ago? Perhaps more scholarly, more literate, more capable of engineering complex systems. But our inherent, base intelligence is not any different.
We don't have the ability to self-modify, or to scale our intelligence horizontally (e.g. by copying it to more hardware), which an entity executing on a computer system could have.
That's one of the main points of the argument for singularity.
Your post is extremely dismissive of the singularity arguments, but you don't seem very familiar with them (e.g. the singularity argument doesn't apply just to human uploading; it also covers artificial intelligence (read the Wikipedia article)).
I'd personally agree that people who say a singularity will happen at some specific date soon are over-speculating too.
"We don't have the ability to [...] scale our intelligence horizontally (e.g. by copying it to more hardware), which an entity executing on a computer system could have. That's one of the main points of the argument for singularity."
Count me as skeptical about that. Scaling horizontally means making it a communications problem, which the last 50 years have hinted is a bigger deal than you might think.
> Thats a very strong claim which you provide no basis for, in that comment or in your follow-ups. Maybe replication at the level of neurons, as opposed to atoms, would work? Or maybe at an even higher level? We don't know yet.
Actually, worded correctly, it's a reasonable claim (assuming pure materialism). If there is nothing but matter and energy and their interaction, then a perfect simulation of all of your cells would undeniably be a perfect simulation of you.
The strong and unsubstantiated claim for which no basis is provided is that simply replacing the neurons in your brain would be "enough." There are good reasons to believe that the neurons in your skull are not sufficient (are you still you absent sensory input?) and apart from a lot of philosophical navel-gazing there's no proof forthcoming. Even pure materialism is an assumption--a good and reasonable one when it comes to doing science, but not something I'm willing to gamble my consciousness on.
Of course we can scale our intelligence horizontally. We have almost seven billion of us walking around. It turns out scaling horizontally gets you a lot, too. You might be able to scale horizontally more efficiently if you had a bunch of brains-in-vats that are telepathically linked into some sort of hive mind, but probably only by constant factors.
We can also self-modify within certain limited parameters (learning, drugs, etc.), but that's the most any intelligence can do. Whether we can widen those parameters by more than a constant factor is an open question.
For the Computer Scientists, the brain is far less like a CPU with an instruction memory neatly defining behavior, and far more like an FPGA where the configuration of countless lookup-tables defines the behavior of the design.
> "exponentially increasing intelligence" I find silly
Not really that difficult to imagine, at least. Imagine being able to pull all-nighters all the time. Imagine being able to focus entirely on a problem instead of being distracted by reddit. Then imagine being able to duplicate yourself without limit to collaborate on problems. Imagine a million copies of yourself working 24 hours a day without distraction on how to solve the resource problems needed to create a billion copies of yourself doing the same thing. Then a trillion. Then imagine these trillion brains creating simulators that run a trillion times faster, simulating DNA that can do in seconds what biolabs need a year to do. Once intelligence is free from the biological substrate, it's hard to see any limit.
I think we can say that brains/neurons have the lowest communication overhead of any computational approach known. Common computer architectures that move the contents of memory to computational hardware through a limited number of ports are incredibly inefficient by comparison. There is massive room for improvement. We're still only using 2 dimensions in silicon, basically.
It sounds like you know what they mean by "intelligence," but you're instead choosing to use your preferred definition and call them wrong based on semantics. Being more scholarly, more literate and capable of solving much bigger problems sounds like something that could reasonably be summarized as "more intelligent."
Fair enough. I mused on this in one of the responses up above. I still don't think that makes a person more innately intelligent. More capable, perhaps, but not more intelligent.
But, humans are pretty fond of defending our position in the cosmos and our idea of "human intelligence", so maybe this is just speciesism at work =)
What about a Neuromancer-style brain upload, where a computer takes a series of very long, incredibly detailed interviews plus brain analysis and constructs a simulation of that person? Sure, it might not be exact, but as you said, it really can't be you unless it's the whole thing anyway. At least this could create a computer that would react like a specific human most of the time, and I don't think that something of that level of sophistication is really that far away (as in <50 years at current progress).
"Or you can develop some 'scanning ray gun' which does the same thing without killing you first, I suppose."
Ok, who made the joke about that, something along the lines of, "Then what? Your uploaded self sends you text messages like, 'Just got out of the hot tub with Carmen Electra; wish you could be here'?"
However, why on earth would brain uploading be done all at once? Given the right nanotech you could begin at one part of the brain - slowly replacing biological neurons with artificial ones.
Once all your neurons are artificial - just download their state. This is pure speculation and way out in the future - like 100 years - but it could be fun :-)
Nanotech is another catch-all term that makes me unhappy.
You know what is a really awesome piece of nanotechnology? All the proteins and enzymes that are making your neurons function, right this very moment. How do you download their state? You'd have to capture their conformation and everything they are interacting with.
Nanotech isn't just making a computer smaller. Much of your body already operates on the "nano" level, and does it pretty efficiently.
I'll admit nanotech really isn't my field, but I understand enough of it to know that it isn't a panacea. If a materials science person wants to jump in on this I'd be appreciative (to either confirm/deny what I said)
This is somewhat like how the 'Ndoli Device' works in some of the short stories of Greg Bear. (I'd especially recommend the story 'Learning To Be Me'.)
Ah, yes, of course... 'Learning To Be Me' is Egan's story, and the 'Ndoli Device' or 'Ndoli Jewel' appears in some of Egan's other stories as well. (I vaguely recall seeing the same term used elsewhere, maybe a Bear book, and thought it might be a specific allusion back to either Egan or some other common precursor.)
Here's a criticism of the Singularity I've seen a couple times that originates from the web comic, Pictures For Sad Children.
"There's a special kind of nerd though who thinks computers will overtake mankind in thirty years, changing humanity in ways incomprehensible to us now. Ignoring the third of the world without electricity. So the singularity is the nerd way of saying, "in the future, being rich and white will be even more awesome". It's flying car bullshit: surely the world will conform to our speculative fiction. Surely we're the ones who will get to live in the future. It gives spiritual significance to technology developed primarily for entertainment or warfare, and gives nerds something to obsess over that isn't the crushing vacuousness of their lives."
Although in my mind that's more of a criticism of the adulation of the Singularity by its adherents than of the actual Singularity itself.
> Ignoring the third of the world without electricity ... "in the future, being rich and white will be even more awesome"
There is a portion of the "Singularitarians" pushing for the Singularity precisely because such a large segment of the world is so disadvantaged; they want humanity to move forward. Their line is more "in the future, being human will be even more awesome".
(For the most part, this group has a fairly low opinion of Kurzweil and his accelerating curves or whatever he calls them.)
The nerd adulation comes from the idea that having understanding of computers will be more awesome, not being rich and white. The fact that they make that substitution shows a certain mired pathos.
EDIT:
FWIW, I think the Kurzweil view is ridiculously optimistic.
It has always struck me as odd that people dismiss the singularity as a nerd fantasy. They latch on to one view of it and make up their minds about the idea itself.
The people I know that take it seriously are much more afraid of it than anything.
They think that it will be an extinction event rather than the nerd rapture.
This is exactly the mindset among evangelical Christians who believe the rapture is right around the corner - and every generation of evangelicals back to the 1800s has believed that the rapture is imminent and, of course, that they as individuals will be saved.
It reifies the teleology and savior complex that make monotheistic religions troublesome. It ignores morality and aesthetics in favor of an unknowable, uncontrollable machine logic.
It kinda has a similar problem to the Hegelian dialectic that Marx built his stuff on. The Singularity argument observes some trends and presumes they will continue along the same profile, carrying our society towards a specific phase-change that we've never seen before. Extrapolating trends is one thing; arguing that some novel configuration is inevitable is another.
That's interesting, can you elaborate? The dialectic is troublesome, especially from the perspective of Deleuze's multitude, but I don't immediately see how it is tied to a claim of novel configuration.
In my experience with Marx, the dialectic mostly let Marx approach Smith and Ricardo's works in a way that allowed for the inclusion of a wider conception of the conditions and consequences of capital. Whereas a WSJ oped might say "capital has benefits! the downsides are simply worth it", Marx takes a benefit and a downside, considers them 'an internal contradiction of capital', then uses a dialectical mode to reveal some hidden 3rd variable.
Are you referencing Das Kapital's claim that the rate of profit inevitably declines? Or something in one of the more explicitly political texts about the inevitability of communism? I can only really discuss Das Kapital, which I feel is more of a historical reflection on political economy than the explicit party politics you might find in the Manifesto.
Agreed. The problem is that mainstream SF movies have not nearly caught up with written SF. I mean, The Matrix was "mind-blowing" in its day, even when the idea of simulated universes was old hat in written SF.
Banks has basically crammed, well, everything into the Culture books. When just the idea of uploading fazes people, you'll have a hard time explaining why a sentient Ship would cocoon a human body in another sentient gel-field suit in order to rescue the image of an old scientist that is currently experiencing the slow time of a truly ancient race of creatures that count time in galactic rotations. Etc.
I mean, we're talking about a universe where self-replicating nano-bots are considered a joke (or at least a pleasurable pastime for warships for target practice). Meanwhile grey goo scenarios haven't even entered the zeitgeist.
Agreed, Gladiator is my favorite soundtrack of all time, and I've never heard a soundtrack of Zimmer's that I didn't like. I'd put him right up there with John Williams even though he's only received a fraction of the awards.
I actually like Hans Zimmer a lot more than John Williams - the latter has a very distinctive style that can seem very in-your-face, more like an "Oh hey, this was done by John Williams" than something that best fits the film. Harry Potter, Star Wars, and Indiana Jones are all recognizable as Williams soundtracks, even though they're very different films.
Zimmer seems to vary his music a lot more to fit the needs of the film - it's just universally good, it's not universally Zimmer. The Lion King is very different from Crimson Tide, which is very different from Gladiator, which is very different from The Dark Knight and Inception.
BTW, I think Basil Poledouris (The Hunt for Red October, Wind) was another underrated film composer whose music is much better than his reputation would suggest.
> more like an "Oh hey, this was done by John Williams" than something that best fits the film
Really? I would be tempted to argue the opposite. The theme for Jaws, for example, is instantly recognizable and well known precisely because it fit the movie so well. Star Wars is near-impossible to separate from its music - from the main theme to the Imperial March - and his use of choral music in Phantom Menace had a big impact in the years that followed, much the way the "Inception sound" has been showing up all over the place in the last few years.
And citing the bigger action-adventure movies doesn't really make a good point. Williams has done a wide variety of music for movies, all of which is very good (Hook, Schindler's List, ET, etc.)
I do think music has changed over time in how it's used in film, and that Zimmer has a more "modern" style, where the soundtrack ties more closely to the emotional ups and downs than Williams does. (Nothing wrong with that, I like them both.)
It says that he goes on to continue his research once he is uploaded. So perhaps him being uploaded to the machine is distinct from him trying to create the first sentient computer.
He is trying to create a sentient machine before he dies and his uploaded self is continuing his research along those same lines.
Given what's happened with the last two Batman films, I think Christopher Nolan deserves the benefit of the doubt for all of his casting choices.
When Anne Hathaway was chosen to play Selina Kyle, everyone thought she would flop. After the movie, many were saying that she had the best performance.
I don't even need to drive home how incredibly, colossally wrong people were about Heath Ledger being cast as the Joker.
I don't respect him solely for choosing challenging projects. I respect his ability to direct really smart films, which are self-consistent. I usually don't care much about contradictions in films, as I think "It's just a film." But with Nolan's films, if there are contradictions, I don't find them. It's as if he has a very good set of rules for a film and sticks to the rules meticulously. And I enjoy that a lot, which I guess makes me a Nolan fan.
Well, if by "the rules" you mean "having plot holes you could sink the Titanic in"[1] and "the cheesiest lines ever uttered"[2] then yeah, you have a point...
[1] - WW1 showed what happens when a bunch of people with pistols charge at a bunch of people with machine guns.
[2] - " So, you came back to die with your city." "No. I came back to stop you."
Ugh, No. 2 literally made me facepalm. Despite that, I still think that's a problem with the writing, not the directing, though he likely could have changed that line if he felt the need.
In regards to No. 1, they were already pretty close when they started the 'charge', and the machine gunners were not in trenches. Although far too few people seemed to be getting hit with gunfire during that charge scene.
Those are fair points. I'm thinking those examples are from the Batman films, which I like in a way, but which are my least favourite Nolan films, although I think they are the most well known. I was more referring to films such as Memento and Inception. But then again, I still might have missed something or other.
Well, it's relative. Within the context of superhero films, I think it is a masterpiece if for no other reason than the bar already being set so low. Outside of that, it's a really great action movie.
I think his films are methodical almost to a fault. They feel like puzzles to me. I could definitely understand if someone didn't find much value in that.
Nolan seems to bring a complement of actors with him between projects (e.g. Inception vs. the Batman trilogy). It will be interesting to see if he continues this trend.