Russell91's comments | Hacker News

Four of the five biggest tech companies were founded by people in their early 20s. So the biggest ideas clearly skew towards young visionary founders, but the statistics do skew back towards older founders for all but the largest companies.


One also expects the most variance at the tail-end of the success curve, because the sample size is so small. Give it a few centuries and we'll have some data about that part of the spectrum of success stories.


This seems to redefine the question away from "what does a successful entrepreneur look like" in an attempt to maintain the narrative that "young visionaries" are the most successful.


That holds only if you think 'tech' companies are the only big ones, that they represent the biggest ideas, and that they make up the majority of the Fortune 2000 - but they don't. There are a lot of companies with much smarter ideas that are clearly big enough to be considered 'successful', and they mostly skew towards older founders.


Shocking I know, but there are many industries besides tech.


Apple's true success came with the second Steve, i.e. the mature Steve, not the first. Amazon's, Oracle's, and Intel's founders weren't in their 20s.

Microsoft, Google, and Facebook indeed were started by young founders. Twenty years later Google might be the only meaningful one standing. Time will tell.


> Twenty years later Google might be the only meaningful one standing. Time will tell.

It's been more than 20 years and Microsoft is still right there.


As I mentioned elsewhere, Bezos was 29 when he quit his full time job and 30 when he wrote up the business plan for Amazon. I think that's close enough for this argument.


I think this is very important: if you weight the data by something like market cap, it would probably skew very young. E.g. AirBnB and Dropbox are (IIRC, may have changed by now) the majority of total valuation for YC, and those companies were both started by early 20-somethings.


YC itself skews young, at least in part I'd say because the sort of help it provides is something that is more needed the younger you are. It's vanishingly unlikely that I will need YC's money, connections, or imprimatur for a startup at this stage in my life. When I was 24 starting a company, it would have been hugely helpful.


Which 4/5? Zuck yes, Bezos no, Page/Brin: doubtful they were early 20s. Maybe 20s, but not early 20s.

EDIT: Page was 25 and Brin was 24: https://en.wikipedia.org/wiki/Larry_Page


> When code is written only once and most of the cost comes afterward, that seems like an impossible choice to defend.

> I can't imagine how anyone working on large code bases with other people would want to do this. Yes, implicitness is more fun and beautiful at the beginning, but it becomes a nightmare after a short time for anyone other than the original coder.

Good points, it seems like the arguments for implicitness may have been stronger in the past, when programming languages were less developed. Think of RollerCoaster Tycoon being written almost entirely in assembly in '99 by a single programmer. You'd have plenty of incentive for implicit standards. When you have modern languages with well optimized abstractions, all that implicitness ends up losing out. But if you compare the total amount of work that went into, say, Rust, with the cost of a single dude just building an awesome game, you see that explicitness only wins when it gets to cheat and use way more resources. So yes, explicitness is always better in the limit, but when resources are more constrained, implicitness is so nimble it will just crush the competition.


Excuse me, I'm a bit confused by your answer.

Are you saying that Assembly is more implicit than Rust or C++?

What do you use to measure "implicitness"?


I'd interpret it to mean that borrow checking, for example, can be explicitly demanded in Rust code, whereas in assembly a lot of the safety is manually assured and often enough only implicit in the code.

> > Think of RollerCoaster Tycoon being written almost entirely in assembly in '99 by a single programmer. You'd have plenty of incentive for implicit standards

On the other hand, you could say that it's the Rust compiler source that is rather explicit about the mechanics, and the game code is explicit only by extension.


> Yes, I totally agree. Yann LeCun, Geoff Hinton, Jurgen Schmidhuber and others did unpopular work for a long time.

...

> Until then, I'll be ... rolling my eyes at brain analogies.

Maybe you don't realize this, but these guys made more brain analogies than you can count over the same period to which you attribute their greatness. Meanwhile, they were attacked year after year by state-of-the-art land grabbers saying the same things you just did.

> isn't being presented as basic research on a risky hypothesis.

It is basic research, but it's not a risky hypothesis. Existing neuromorphic computers achieve 10^14 ops/s at 20 W. That's 5 Tops/Watt. The best GPUs currently achieve less than 200 Gops/Watt. Where is the risk in saying that a man-made neuromorphic chip can achieve more per dollar than a GPU? There is no risk, and suggesting that this field somehow has too much risk for advances to be celebrated is absolutely crazy.
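
(A quick back-of-envelope check of that arithmetic, sketched in Python; the inputs are just the figures quoted in the comment, not independently verified:)

    # Figures as quoted above
    neuromorphic_ops_per_s = 1e14   # claimed ops/s
    neuromorphic_power_w   = 20     # claimed power draw, watts
    gpu_ops_per_joule      = 200e9  # "less than 200 Gops/Watt" for the best GPUs

    neuro_ops_per_joule = neuromorphic_ops_per_s / neuromorphic_power_w
    print(f"{neuro_ops_per_joule / 1e12:.0f} Tops/W")                 # 5 Tops/W
    print(f">{neuro_ops_per_joule / gpu_ops_per_joule:.0f}x vs GPU")  # >25x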


Non-neuromorphic (analog) deep learning chip startup here. We're forecasting AT LEAST ~50 TOPS/watt for inference.


Sure - I guess it's productive for me to answer why this doesn't disagree with my comment. By the time you get the software to hook up that kind of low-bit-precision (READ: neuromorphic) compute performance with extreme communication-minimizing strategies (READ: neuromorphic), which will invariably require compute-colocated, persistent storage (READ: neuromorphic) in any type of general AI application, you're not exactly making the argument that neuromorphic chips are a bad idea.

We literally have to start taking neuromorphic to mean some silly semantics like "exactly like the brain in every possible way" in order to disagree with it.

Edit: also, to ground this discussion, there are extremely concrete reasons why current neural net architectures will NOT work with the above optimizations. That's the primary motivation for talking about "neuromorphic", or any other synonym you want to coin, as fundamentally different hardware. AI software ppl need to have a term for hardware of the future, which simply won't be capable of running AlexNet well at all, in the same way that a GPU can't run CPU code well. I think the term "neuromorphic" to describe this hardware is as productive as any.


Which existing neuromorphic computers achieve 10^14 ops/s at 20 W? If you compare them to GPUs, those "ops" better be FP32 or at least FP16.

Also, you forgot to tell us what is that "extremely concrete reason why current neural net architectures will NOT work with the above optimizations".


>Which existing neuromorphic computers achieve 10^14 ops/s at 20 W? If you compare them to GPUs, those "ops" better be FP32 or at least FP16.

The comparison is of 3-bit neuromorphic synaptic ops against FP8 Pascal ops. That factor is important (as it means that the neuromorphic ops are less useful), but it turns out to be dwarfed by the answer to your second question:

> Also, you forgot to tell us what is that "extremely concrete reason why current neural net architectures will NOT work with the above optimizations".

This is rather difficult to justify in this margin. But the idea is that proposals such as those above (50 Tops) tend to be optimistic on the efficiency of the raw compute ops. But these proposals really don't have much to say about the costs of communication (e.g. reading from memory, transmitting along wires, storing in registers, using buses, etc.). It turns out that if you don't have good ways to reduce these costs directly (and there are some, such as changing out registers for SRAMs, but nothing like the 100x speedup from analog computing), you have to just change the ratio of ops / bit*mm of communication per second. There are lots of easy ways to do that (e.g. just spin your ops over and over on the same data), but the real question is how to get useful intelligence out of your compute when it is data-starved. This is an open question, and (sadly) very few ppl are working on it, compared to, say, low-bit-precision neural nets. But I predict this sentiment will change over the next few years.

Edit for below: no one is suggesting 50 Tops/W hardware running AlexNet software to my knowledge (though I would love to hear what they are proposing to run at that efficiency). Nvidia among others are squeezing efficiency for CV applications with current software, but this comes at the cost of generality (it's unlikely the communication tradeoffs they're making on that chip will make sense for generic AI research), and further improvements will rely on broader software changes, esp. revolving around reduced communication. There are a lot of interesting ways to reduce communication without sacrificing performance, such as using smaller matrix sizes, which would reverse the state-of-the-art trends.


Regarding your first answer, it sounds like you're doing an apples-to-oranges comparison here. What are those "synaptic ops"? The Xavier board is announced to be capable of 30 Tops (INT8) at 30W, so even if your neuromorphic chip does 100 Tops at 20W, assuming for a second those ops are equivalent to INT3 operations, this makes them very similar in efficiency.

And you still haven't answered my second question: what is the reason the future neuromorphic chips won't be able to run current neural net architectures?

I'm not even sure what you are talking about at the end of your comment. The 50Tops/W figure was promised for an analog chip, designed to run modern DL algorithms. Sounds pretty reasonable, and I don't see how your arguments apply to it. Are you saying we can't build an analog chip for DL? Why does it have to be data starved?


Our hardware can run AlexNet...


In an integrated system at 50 tops/watt? How are you going to even access memory at less than 20 fJ per op? Like, you're specifically trying to hide the catch here. If we were to take you at face value, we'd have to also believe that Nvidia is working on an energy optimized system that is 50x worse for no good reason.

For reference, reading 1 bit from a very small 1.5 kbit SRAM, which is much cheaper than the register caches in a GPU, costs more than 25 fJ per bit you read.
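
(To make the energy budget explicit, a quick sketch in Python; the 50 TOPS/W and 25 fJ/bit figures are the ones quoted in this thread, not my own measurements:)

    # What 50 TOPS/W implies per operation
    ops_per_joule   = 50e12
    fj_per_op       = 1e15 / ops_per_joule     # femtojoules per op
    print(fj_per_op)                           # 20 fJ total budget per op

    # Compare: a single bit read from a small 1.5 kbit SRAM is quoted above
    # at >25 fJ, i.e. already larger than the entire per-op budget.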


So this is locked up in "secret sauce". But as a hint, the analog aspect can be exploited.


Look, it sounds like you're implying compute-colocated storage in the analog properties of your system (which is exactly what a synaptic weight is, btw), on top of using extremely low bit precision. So explicitly calling your system totally non-neuromorphic is a little deceiving. But even then, I find this idea that you're going to be running the AlexNet communication protocol to pass around information in your system a little strange. If you're doing anything like passing digitized inputs through a fixed analog convolution then you're not going to beat the SRAM limit, which means that instead you have in mind keeping the data analog at all times, passing it through an increasing length of analog pipelines. Even if you get this working, I'm quite skeptical that by the time you have a complete system, you'll have reduced communication costs by even half the reduction you achieve in computation costs on a log scale. It's of course possible that I'm wrong there (and my entire argument hinges on the hypothesis that computation costs will fall faster than communication - which is true for CMOS but may be less true for optical), but this is really the only projection on which we disagree. If I'm right, then regardless of whether you can hit 50 Tops (or any value) on AlexNet, you'd be foolish not to reoptimize the architecture to reduce communication/compute ratios anyway.


Oh, I see what you meant now. Yes, when processing large amounts of data (e.g. HD video) on an analog chip, DRAM-to-SRAM data transfer can potentially be a significant fraction of the overall energy consumption. However, if this becomes a bottleneck, you can grab the analog input signal directly (e.g. current from a CCD), and this will reduce the communication costs dramatically (I don't have the numbers, but I believe Carver Mead built something called a "Silicon Retina" in the 80s, so you can look it up).

Power consumption is not the only reason to switch to analog. Density and speed are just as important for AI applications.


I should clarify: once data enters the chip, we provide 50 TOPS/W. The transfer from DRAM is not included.


We certainly have enough compute at this point. 10^15 flops should be more than enough to run the brain by pretty much any analysis. Part of the issue is that evolution had at least 1 million such creatures over 52,000,000 weeks to improve since monkeys. So while human intelligent design of AI will certainly be a better algorithm than evolution, we may actually be a bit shy computationally of easily training an AI system, in spite of having more than enough to realize one.
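
(To put rough numbers on that, a sketch using the commenter's own assumptions - per-brain compute, population size, and timescale are all taken from the comment, not measured:)

    brain_flops = 1e15                    # assumed compute to "run the brain"
    population  = 1e6                     # "at least 1 million such creatures"
    weeks       = 52_000_000              # i.e. about 1 million years since monkeys
    seconds     = weeks * 7 * 24 * 3600   # ~3.1e13 s

    # Crude "evolutionary compute budget" in flop-seconds of brain activity
    print(f"{brain_flops * population * seconds:.1e}")   # ~3e34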


Yes! Sometimes you know that a solution will take a particular mathematical form, without knowing what the parameters will be. So you can write down a program (function) that can express any solution of that form, and use an optimization algorithm e.g. gradient descent on labeled examples, to figure out which specific instance of your possible solutions works best.
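
(A minimal sketch of that idea in Python: assume the solution has a known parametric form, then let gradient descent on labeled examples find the parameters. All names and numbers here are made up for the example.)

    import random

    # Labeled examples secretly generated by y = 3x + 2 plus a little noise
    data = [(x, 3 * x + 2 + random.gauss(0, 0.1)) for x in [i / 10 for i in range(50)]]

    a, b = 0.0, 0.0          # parameters of the assumed form y = a*x + b
    lr = 0.01                # learning rate
    for _ in range(5000):
        # gradient of mean squared error with respect to a and b
        grad_a = sum(2 * (a * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (a * x + b - y) for x, y in data) / len(data)
        a -= lr * grad_a
        b -= lr * grad_b

    print(a, b)              # should land near 3 and 2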


Elon Musk hadn't started his simulation yet /s


I always come to these hyperloop criticisms expecting to find some sort of fatal flaw in the physics of energy-efficient supersonic travel. But to my surprise, they instead tend to be pessimists saying things like, "You'll never get past my friends: the regulators, the government bureaucrats, and especially the lawyers! We will drive up your costs and make you look foolish."

First, no one said that designing this thing in the USA means it has to be deployed in the USA. Countries without common-law legal systems get around these unnecessary costs much more easily.

Second, if these are seriously the only objections, then thank god we are actually building this thing. I could see complaints if it were some $100 billion publicly funded project, but the fact that less than $1 billion in private capital has already gone so far into demonstrating the technological feasibility of such an innovative transportation mechanism is a huge win.


Reading through the criticisms, the consensus is "sure, this can work if you allocate 10-100x more money than estimated." I'm not sure it's truly innovative if it costs more and places considerable stress (up to 0.5g!) on the rider/cargo (what the author calls a "barf ride"). The analysis picks apart the touted benefits of additional speed when ultimately the capacity is lower and it takes longer to depart. In the case of cargo, the author argues that the "last mile" of cargo takes the longest, and as such there's little benefit to sending it at 1200kph if it then needs to be offloaded and trucked around anyway.

To summarize:

- construction costs are underestimated or completely wrong

- capacity is lower than standard HSR

- Hyperloop claims power usage is higher for HSR than it actually is

- forces aren't adequately accounted for even with canting

- assuming perfect canting the pylons must take the additional force

- the majority of travel time is to/from the station + security screening.

To me, it seems like it's optimizing for the wrong section of the trip.


I think the biggest point the article makes is the great deal of care that needs to be taken in packing loads to meet weight requirements and avoid movement of goods in transit.

What makes sense for international air mail doesn't make sense for a four hour journey by truck, especially if that truck can take a container straight from the ship to the city depot without carefully repacking everything, or go door to door with smaller loads.


Yes. With what exactly does freight-hyperloop try to compete? Trucks are more flexible while freight rail is struggling on most routes and speed seems of least concern there.


Maybe I'm totally unaware, but don't trucking, air freight and sea freight have the same problems w/ regards to weight requirements and movement of goods in transit? Is this not a solved problem? Build Hyperloop Freight to fit a ULD and we're all set no?


As the article notes, goods would move a lot more when they're subjected to the G-forces of a hyperloop than the occasional rolling of a ship or even much milder G-forces of an aircraft taking off, which they have to be packed pretty carefully for as is. And whilst all transport methods have a weight limit and some degree of load-balancing requirement, it's much more of a restriction for the load of a capsule fired at supersonic speed through a curving tube that has to resist forces imposed upon it than a bigger lorry chugging along at a sedate 60mph.


Container ships will have a dozen plus containers loaded across. Even if all the cargo shifted to one side of each container, the general balance of the vessel would remain roughly the same.


A train that accelerates at 0.5g to 1200kph would be awesome and I would ride it at every available opportunity.


The issue is not acceleration front/back, which can be dealt with in design and is fairly stable and consistent, but acceleration of as much as 0.5G side to side, which is also more intermittent and against which it's difficult to build seats that would protect customers.


Really it's just a question of track design, as these are 100% predictable G forces. If you really need to turn you can just roll the cabin and keep these g's directly under you, and/or slow down approaching it. Elevators can also quickly hit a fairly wide range (0.8 to 1.2 g), and people don't really get sick in them.

Highways for example have fairly wide banks which people rarely notice. https://sterlingpearce.files.wordpress.com/2015/02/img_0054....


Aeroplanes are probably a more accurate approximation of what you'd feel. You get hardly any G force from banked highways; your speed is just too low. Now, I don't really get plane sick - I will quite happily sit through a few barrel rolls (which is far more extreme than the hyperloop (although that would be fun)) - but there are plenty of people who would get sick from a few 1.5 G banked turns.

Elevators are different because you don't spend more than about 1 minute in an elevator.


High-end cars can pull up to 1g lateral force, and many cars can pull up to 1g braking on a good road surface. A relatively low top speed just means you don't accelerate very long. I have hit weightlessness over a hill at high speed, followed by 1.x g on the other side, so 0.5 g does not seem bad.

Remember 1g * 1 second is only ~22mph.
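
(Checking that figure, a one-liner in Python; standard gravity and the mph conversion are the only inputs:)

    g = 9.81                     # m/s^2
    delta_v = g * 1.0            # m/s gained per second at 1 g
    print(delta_v * 2.23694)     # ~21.9 mph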


For sure, my car is a pile of junk and can hit 1g braking.

The problem is when it's sustained. That's why people get sick on roller coasters and when doing aerobatics (extreme examples, I know).


That doesn't really solve it; there are limits on how much you can rotate the cabin, and how quickly you can rotate it. Lots of math here: https://pedestrianobservations.wordpress.com/2013/08/13/loop...


What about one where you feel that, acting in various directions, at every curve, every speed transition, and every elevation change. Because that's the "barf ride" being discussed, not merely getting thrown to the back of your seat during initial acceleration.


That sounds even more fun. :D

Although as the poster mentioned below, a passenger service would gradually rotate the passenger compartment to keep the G-forces vertical, a la a banking aircraft, so all you'd likely notice would be feeling heavier for a few seconds.


The "barf ride" comments were the result of calculations which included rotating the passenger compartment.

See https://pedestrianobservations.wordpress.com/2013/08/13/loop... linked from the original article.


Interesting link, thanks! I should have clicked more on the original article. :)

This seems to be the comment in question:

> This is worse than sideways acceleration: track standards for vertical acceleration are tighter than for horizontal acceleration, about 0.5-0.67 m/s^2, one tenth to one seventh what Musk wants to subject his passengers to. It’s not transportation; it’s a barf ride.

I find the statement that vertical acceleration is 'worse than sideways acceleration' to be highly suspect (otherwise why would you bother to bank into turns?) - I would guess that the track standards are for +/- 0.5 m/s^2 and mostly target short-period up-and-down bouncing motions rather than smoothly applied upwards acceleration.


The analysis suggests that the speeds involved would require a few minutes of said g-forces, not seconds.


Yeah, seriously. I'd fly out to California just to take a trip on it.


To be fair, Musk's original paper advocating the concept was structured as a criticism of the supposed inefficiency of rail alternatives, and was heavy on deliberate cost underestimates and light on physics. So it's not exactly unreasonable that most critiques start off with the economics.

This critique, however, raises quite specific physics-related logistical reasons why Hyperloop is a ludicrously impractical solution for most forms of freight and uneconomical for the rest. Things it doesn't mention, even implicitly, include regulation, government and lawyers. Perhaps you'd get more out of these hyperloop criticisms if you read them?


The best legitimate criticisms I've seen are here: https://m.youtube.com/watch?v=RNFesa01llk

Basically expansion and contraction as well as just plain vulnerability of running a vacuum make it extremely difficult to do practically. The phrase he uses, "all the problems of space travel except in a gun barrel" is a pretty accurate description of the issue. Any loss of vacuum will almost guarantee loss of life. No realistic demo has been done even on a small scale as of yet.


Ahh yes, the infamous Thunderf00t "debunking" video. Never have so many Hyperloop myths been created by a single piece of content! I've debunked-the-debunking a half dozen times myself, and it shows no signs of slowing down.

The big problem with his air cannon theory is, pipes are not lossless! Friction and viscous interactions between the entering air and the tube wall will slow down that "shock front" to a highway speed wind in only a couple km. So unless it's so close that the pod can't stop in time (and actually derails at the breach), there should be no loss of life.

As far as thermal expansion, that can be solved with sliding interfaces every few hundred meters. The Hyperloop has switched to mag-lev designs for levitation, so the ultra-smooth interior is no longer necessary. There will be some air leakage at the joints of course, but this can easily be made up by the pumping stations installed at intervals along the track.

The pressure in the Hyperloop was specifically chosen so that simple mechanical vacuum pumps can be used -- no turbomolecular or cryopumps needed. The Hyperloop runs at 1/1000th of an atmosphere. True "vacuum trains" use vacuums at about 1/1,000,000th of an atmosphere, meaning the pumping is about 1000 times harder (not because the pressure difference is meaningfully different, but because you only get 1/1000th as much air per "stroke" or "cycle").

The Large Hadron Collider uses a harder vacuum still (about 1/1,000,000,000,000 of an atmosphere, and yes that's a trillionth of an atmosphere), which is why LordHumungous's experience mentioned earlier isn't terribly applicable. The LHC is doing something a lot harder than the Hyperloop.
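
(The pressure regimes being compared, worked through in a short Python sketch; the fractions of an atmosphere are the ones quoted above:)

    atm = 101_325                    # Pa
    hyperloop = atm / 1_000          # ~100 Pa
    vactrain  = atm / 1_000_000      # ~0.1 Pa
    lhc       = atm / 1e12           # ~1e-7 Pa

    # Air removed per pump stroke scales with absolute pressure, so reaching
    # vactrain levels means each stroke moves ~1000x less air than at Hyperloop pressure.
    print(hyperloop / vactrain)      # 1000
    print(hyperloop / lhc)           # 1e9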


You assume that Hyperloop's tube will run a vacuum, which is a misconception. The Hyperloop One design uses low-pressure tubes, which is completely different from being completely evacuated.


A vacuum doesn't exist for all practical purposes; every form of vacuum is just low pressure. So if you don't say anything about what level of vacuum, or how low the pressure is, your statement is meaningless.


Watch his second video, he provides extra explanations including that one.

https://www.youtube.com/watch?v=DDwe2M-LDZQ


Without a vacuum it's just a train with no need to be enclosed.


It's effectively a vacuum.


Why would loss of vacuum necessarily result in loss of life? I would expect loss of vacuum to result in sudden deceleration, but not with a deadly level of force.


Air being drawn in results in pressure on the capsule, propelling it. Check the video out for a demo.


There are three mitigating points for that issue:

First, you would get choked flow at the inlet, and if you do the math it actually takes a massive opening to really make a dangerous shockwave.

Eg, if a joint in the tube disconnects and the two halves pull apart, the inlet area is the tube circumference * gap between halves. The two halves would need to separate by the full diameter of the tube (~15 ft) to get full atmospheric pressure going in both sides. A realistic gap of a couple inches to a couple feet is far less scary.

Second, these incidents will be exceptionally rare. Gaps that big are equivalent to a freeway overpass collapsing. It's a total failure scenario, so you see a single digit number of them a year in the US.

Lastly, the speed of sound isn't all that quick on the scale of this thing. It's about 13 miles/min, a fifth of a mile a second.

If you place emergency fill valves (one time use things, dirt cheap, just explosive bolts in the simplest case) you can flood a whole section of the tube with atmosphere in a controlled manner over say 15 seconds, and your shockwave will be largely stopped after 3 miles.
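
(Rough numbers for the above, in Python. The ~15 ft diameter and 15 s fill window are the commenter's figures; the 1 ft gap is a made-up example:)

    import math

    d   = 4.6                              # tube diameter, m (~15 ft, as quoted)
    gap = 0.3                              # hypothetical 1 ft separation of tube halves

    inlet_area = math.pi * d * gap         # ring-shaped opening feeding both halves
    end_area   = math.pi * d**2 / 4        # cross-section of one open end
    print(inlet_area / (2 * end_area))     # ~0.13: far from a full-bore inlet

    c = 343.0                              # speed of sound, m/s
    print(c * 0.000621371 * 60)            # ~12.8 miles per minute
    print(c * 15 * 0.000621371)            # ~3.2 miles covered during a 15 s fill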

An emergency fill like that would also stop all pods in the region as they hit very high drag, so you don't even need to carry heavy emergency brakes on every vehicle.


Seems like you could also solve this with blast gates. If the tube breaks open behind the train, slam a gate closed to stop additional air from entering. There'd likely be air in the tube already, but with no additional supply, the propellant force would diminish rapidly.


Okay, I was assuming a failure in front of the train. If the failure is behind the train, it's still an extremely solvable problem. Blast gates to stop the inflow of new air. Or inject air in front as nickparker suggested. I don't see that this is a legitimate blocker to this technology.


That demo is unrealistic. The entire end of the tube isn't going to open all at once. Instead, any realistic leak would be pretty small in comparison.


Forces of nature or other things never suddenly disrupt infrastructure.

See: http://whns.images.worldnow.com/images/22787728_SA.jpg


It's an elevated track, so it really could be designed to deal with that kind of break. Water, oil, and gas pipelines, for example, have long dealt with those issues without breaking when designed correctly.



How do oil pipes do this? Automatic cutoffs every mile?


There are a wide range of solutions: https://sfwater.org/modules/showdocument.aspx?documentid=577... http://www.pbs.org/wgbh/amex/pipeline/peopleevents/e_quakes....

Sliders https://www.quora.com/Why-do-oil-pipelines-that-transfer-oil...

Now doing this without bending the long pipe is going to be harder, but you really could design this thing so riders don't notice a magnitude 9 earthquake.

PS: In the end you get into cost-benefit: designing it to survive an aircraft impact is probably not worth it, let alone a nuke, but predictable earthquakes are not that hard.


Big water infrastructure breaks too.

My wife was PM on an emergency project to replace a failed 72" main feeding a city.


I don't understand why a tube section couldn't come away? What if a bomb went off, a truck hit a support beam, or an earthquake caused a section to topple over...

I don't think it is unrealistic at all.


1) Because the pods are not designed to go that fast through the air.

2) Because in the case of an explosive recompression there might not be a tube anymore.


This is a hollow answer. "Not designed for" is an absurd claim for a thing still in prototyping. Same for the "explosive recompression" claim. You're making this claim about a system that doesn't exist.

I'm not bullish on hyperloop, but I see no reason to believe that engineers can't design a pod capable of not disintegrating in air, or a track with a failure mode better than completely falling apart.


Aren't the acceleration figures a criticism of the physics?

Same with the argument about centralization. It's a practical non-starter if sticking the load on a truck that just goes to the destination is faster than shifting the load off the truck onto the hyperloop and then back onto a truck.


I dunno... I'd feel much more comfortable if all of humanity's smartest technical people were trying to make the next WhatsApp clone /s


Maybe in elementary school people should get high marks for effort. I don't know.

But in the case of a company raising funding and claiming it wants to build a ton of subsidized infrastructure to service the public, it isn't the case that we should say, "This is better than some more vacuous application of the money." Because it's not at all clear that Hyperloop is actually a good plan to reintroduce high speed rail into America when so many problems keep coming to light about it.


Hyperloop is definitely vacuous..


Contrarily, I'd feel more comfortable if "idea guys" like Musk would leave the engineering to actual engineers.


Then you'll be glad to know that's what he's doing with Hyperloop - he has just released the idea as a paper, but he's not directly involved in its development, which is being done by independent companies.


You mean he released a paper renaming vactrains to hyperloop.


Don't challenge the Musk hero worship!


Dozens of engineers from Tesla and SpaceX contributed to the Hyperloop whitepaper. Plus, Elon considers himself an engineer, even though he has a degree in physics.


I consider myself a brain surgeon.


Elon Musk: I'll make the Wiki.


Maybe there is no single fatal flaw. The point is, though, that there are many, many issues that need to be addressed and it is very clear already now that the hyperloop will be uneconomical. In other words:

It might be possible to force it into existence, but it won't be worth it.


Just now, I read this piece and the linked New York Magazine piece (which, without mentioning regulators, managed to make the whole outfit sound like the kind of scam Hacker News usually delights in outing). I do not see any arguments of the form you're suggesting. It's not the author's friend "the regulators" that are going to make the Hyperloop fail; it's his friends "physics" and "economics".


> I always come to these hyperloop criticisms expecting to find some sort of fatal flaw in the physics of energy efficient supersonic travel. But to my surprise, they instead

Except, as the article points out, it's not actually faster, cheaper, or efficient than competing technologies. Those seem like really fatal flaws.

> Countries without common law legal systems get around these unnecessary costs much easier.

That same logic also applies to the competing technologies. The same magic wand that lets you build a hyperloop for 1/10th what it would cost in the US also lets you build the tracks for HSR for 1/10th what it would cost in the US. You can't compare the cost of building a hyperloop in China to the cost of building HSR in California and get a meaningful answer.

Nobody is saying that the hyperloop is impossible; they're saying that it's a plan to build something objectively worse than high speed rail at prices that seem broadly comparable.


That's a strawman. Most of the criticisms are technological and security related. How do you keep an enormous vacuum tube with fast-moving pods inside secure against even simple attacks?


?

Derail a train = put a rock on the tracks.


That will maybe derail one train. Versus the whole tube exploding at the speed of sound and/or the shockwave hitting every pod traveling at speeds designed for near vacuum.


Physics 303: "pipes are not lossless! Friction and viscous interactions between the entering air and the tube wall will slow down that "shock front" to a highway speed wind in only a couple km." (quote from schiffern's post). You may want to look at derivatives of Bernoulli's equation, like the Darcy–Weisbach equation.

Plus there is a planned emergency repressurization of the whole tube that would eliminate even that medium-speed shock front. Plus a small 10cm-diameter bullet hole would imply less than 1% of the flow of a completely broken tube (a much harder feat to achieve), per the laws of conservation of energy and matter.
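
(A quick check of that "<1% of the flow" claim, using areas as a crude proxy for flow; the 10 cm hole is from the comment, and the ~15 ft tube diameter is borrowed from elsewhere in this thread as an assumption:)

    import math

    hole_area = math.pi * (0.10 / 2) ** 2       # 10 cm diameter hole: ~0.008 m^2
    tube_area = math.pi * (4.6 / 2) ** 2        # ~15 ft (4.6 m) tube: ~16.6 m^2
    print(hole_area / tube_area)                # ~0.0005, i.e. ~0.05% - well under 1%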

As pods travel with 4km or so of separation, at most one pod will be affected. That is far fewer people than a derailed train.


What part of "acceleration will lead passengers to vomit all over themselves" is a bureaucratic issue?


So many are using the 'playing at moonshots with other people's money' criticism.

It's not like investors are blind or dumbstruck just because it's an idea from Musk.

They do their due diligence and if they think the plan has potential, they invest.

I'm sure there were countless technical issues and big challenges around airplanes in the initial stages. And cars. And trains. And, you know, every new technology.

Could Hyperloop (One) fail miserably? Sure. Could it succeed? Also.

Let's just wait and see.

Also, whenever there are vested interests of established competition, they usually drum up the PR machine against new ideas, just as a precaution.

Just look at the fascinating history of fridges vs ice box companies for one of countless examples.


One need look no farther than Silicon Valley to see a plethora of starry-eyed investors who are either blind, dumbstruck, or simply haven't done due diligence. Isn't the bubble about to pop... again?


I don't think we are in a bubble anything like the dot com debacle.

The industry as a whole is far more mature now, and paths to monetization are clearer. Competition is high, so is the competence of most VCs and investors.

I'd also place an engineering project like Hyperloop in a whole different space than the typical SV stuff.


"paths to monetization are clearer." Indeed. Try to generate enough hype to bilk IPO investors or failing that get bought out by a firm that actually makes money.


It's a criticism common to most future tech, not just the hyperloop.

For example, there was lots of hand-wringing over fusion power back in the 1970s, about how if the bureaucrats would just give it more funding then it would happen: https://upload.wikimedia.org/wikipedia/commons/a/ab/U.S._his...

I'm not saying any of this tech is without merit, just that it's common to need buy-in from many organizations to make it happen.


I think you are misinterpreting the criticism. There's a difference between "regulators, bureaucrats & lawyers will stop you" and "there is no compelling advantage with this approach."

It's flashy and all to say you're going to break all the rules, but you could just as easily do that with conventional means of transportation.


Statute-law systems are the socialism of law; common law, as collaborative law-making, is the 'free market' counterpart, and it does not have the calculation problem that comes with a legislator/statute maker.


Pessimism seems to be the default for the HN hivemind. Would be cool if we all just encouraged everyone working on moonshots.


Nobody here is trying to crush dreams. The author, and many others, are simply trying to point out that there are much better uses of time and money.

The factors (i.e. physics) underpinning the economics of freight shipping in all forms are well understood. Can a hyperloop be built? Sure. Is it better than existing intermodal cargo shipment infrastructure? Yes and no. It certainly offers speed, but fatally at the expense of economies of scale: https://www.usmma.edu/sites/usmma.edu/files/docs/CMA%20Paper...

Here's a thoughtful assessment by Olaf Merk posted last month that draws the same conclusion as the author: http://shippingtoday.eu/musk_maritime/

Bottom line, if you can find enough high value bulk cargo that benefits significantly from high speed transit to only two or three "ports", then hyperloop is your answer. However, that's a very (perhaps nonexistent) niche in the industry.

What we (transport engineering community) need to be expending resources on is: a.) eliminating greenhouse gas emissions, b.) designing a system of transport that scales with global trade demand (vice building 20K+ TEU megaships that can only be served by a few ports and promote trucking congestion in ports). Autonomous shipping, AI implementations in both freight forwarding (the logistics side) and transport control systems themselves, and applications of H2 fuel cells are technologies with great promise.


> Pessimism seems to be the default for the HN hivemind.

At the time of typing this, the pessimism has been entirely reasonable and, I'd say, in the minority, with Xorlev outlining the fundamental issues with Hyperloop (capacity, costs, solving the wrong end of the problem) and KirinDave pointing out that while yes, there are more vacuous ways of spending money, when you're mucking around with other people's dough there's an expectation of return on investment; otherwise it's a charity.

I'd like to see the Hyperloop built just as I'd like all kinds of cool, complex and not necessarily practical technologies develop. But it's not unfair to point out the economic and physical realities that will get in the way of those things.


Pessimism is an incredibly important tool, especially for engineers.

Positive thinking never solved anything.


Theranos.


The 2016 MacBook Pro uses the "shallow" keyboard.


Their hardware is not the best anymore.

Just visited the Palo Alto Apple Store, and I'm pretty sure a Microsoft Surface tablet has a better keyboard than the new MacBook Pro. They really smushed down the keyboard to make the whole configuration thinner, but whereas the old Pros were halfway between the quality of a dome keyboard and a mechanical keyboard IMO, the new ones are even worse than domes.


I certainly didn't like the shallow keyboard on the MacBook. I don't even particularly like the keyboard on the MacBook Air I'm typing this on, because of the short key travel. But I'm curious as to whether I could get used to the new butterfly keyboards. Also, it would be interesting to hear what an ergonomics expert would have to say about them.


Another crazy dream I have for a real developer laptop would be to have a port with real GPIO pins. Since parallel ports went the way of the dodo, there is no simple way to interface with home-built electronics anymore. I got a Raspberry Pi 3 recently and have been having a lot of fun with it. Also, a built-in FPGA would be cool. You would be able to do a lot of prototyping that way - basically make computers fun again, instead of a locked-down albeit polished appliance.
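
(The kind of thing that's trivial on a Pi but impossible on most laptops - blinking an LED from a GPIO pin. A sketch using the RPi.GPIO library; the choice of pin 18 and the LED wiring are just an assumed example setup:)

    import time
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)          # use Broadcom pin numbering
    GPIO.setup(18, GPIO.OUT)        # configure GPIO 18 as an output

    try:
        for _ in range(10):
            GPIO.output(18, GPIO.HIGH)   # LED on
            time.sleep(0.5)
            GPIO.output(18, GPIO.LOW)    # LED off
            time.sleep(0.5)
    finally:
        GPIO.cleanup()              # release the pins on exit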


She has voting control, which is why she is still there.

[1] http://www.forbes.com/sites/petercohan/2015/11/03/theranos-l...

