There's only the fact that we probably just CANNOT perceive extremely advanced civilizations.
An insect crawling around a 75-year-old brick house that is covered in ivy and moss will have NO idea that the object it is walking upon is NOT part of its natural environment. That brick house seems as natural to the environment as the grass, and trees, and rocks, and streams nearby it -- to the bug at least.
Similarly, we take our telescope out and see what looks like a natural organic universe with organic galaxies and normal-looking stars etc...
Because we don't have solar-system-sized brains and billion-year life spans we are absolutely hopeless to realize that there are a lot of massive artificial structures in this universe. We're too bug-like to even be able to perceive them from our natural environment.
*We do know of massive cosmic structures like filaments, voids, and the Great Wall. So it is possible we as humans are starting to notice the "house" in the woods, since our theories of physics cannot really explain why these structures exist at such massive scales (we would expect uniformity at those scales). See [here](https://en.wikipedia.org/wiki/List_of_largest_cosmic_structu...)
This idea that we can't perceive sufficiently advanced civilizations doesn't hold up to scrutiny. It comes down to energy and mass (ie resources). Both essentially necessitate a civilization becoming large. You can argue that not every civilization will be expansionist, but those that aren't tend to get swallowed up by those that are, at least based on Earth's history.
So for energy, a likely path will be the Dyson Swarm, meaning a cloud of orbitals. Many mistakenly think a Dyson Sphere is a rigid shell around a star. It never was. There's no material, actual or even theorized, with the rigidity to sustain that. Because of that confusion, many now prefer the nomenclature "Dyson Swarm" over "Dyson Sphere".
Dyson Swarms have the advantage of creating incredible amounts of living space and solving energy needs with relatively low tech (ie solar). They can also be built incrementally. A cloud of orbitals capturing the Sun's energy between the orbits of Venus and Mars will (IIRC) have a mean distance between neighbors of ~100,000 km.
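As a rough sanity check on that spacing figure, here's a back-of-envelope sketch. The 1 AU shell radius and the 100,000 km spacing are just illustrative assumptions, not a real orbital-mechanics model:

```python
import math

AU_KM = 1.496e8          # astronomical unit in km
shell_radius = AU_KM     # take a shell at roughly Earth's orbit

# Surface area of a sphere at 1 AU
area = 4 * math.pi * shell_radius**2   # km^2

# If orbitals are spread out with a mean spacing of ~100,000 km,
# each one "occupies" roughly spacing^2 of the shell's surface.
spacing = 1e5  # km
n_orbitals = area / spacing**2

print(f"Shell area: {area:.2e} km^2")
print(f"Implied number of orbitals: {n_orbitals:.1e}")
```

That works out to tens of millions of orbitals on a single shell, which gives a feel for how incremental the build-out can be.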
Why is this important? Because the only way to get rid of heat in space is by expelling mass or, more likely, radiating it away into space. You can reuse waste heat to some degree but it's not perfect (because thermodynamics) and you can't avoid radiating heat away entirely anyway. The wavelength of such radiation depends entirely on the temperature. At any likely temperature, that means infrared ("IR") radiation.
So a Dyson Swarm around our Sun would stick out like a sore thumb with a massive IR signature. There's really no hiding it. And we're capable of detecting it.
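For a rough sense of where that IR signature would sit, Wien's displacement law gives the peak emission wavelength of a blackbody radiator; the temperatures below are just plausible guesses for waste-heat radiators, not figures from any actual swarm design:

```python
def peak_wavelength_um(T):
    """Wien's displacement law: peak blackbody wavelength in micrometres."""
    b = 2.898e-3  # m*K, Wien's displacement constant
    return b / T * 1e6

for T in (300, 150, 50):  # plausible radiator temperatures in kelvin
    print(f"T = {T:3d} K -> peak emission at ~{peak_wavelength_um(T):.0f} um")
```

A room-temperature radiator peaks around 10 µm, squarely in the mid-infrared, which is exactly the band surveys like WISE have searched.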
Conversely, there's really no hiding from any civilization capable of such feats of engineering. Plus any such civilization would be capable of sterilizing the galaxy of any competition.
I don't understand why people fixate on Dyson spheres. Why would a Kardashev type 2 civilization rely on something that a type 0.7 civilization could come up with? Surely in the intervening 1.3 they could come up with something we can't. Dyson spheres are a type 0.7 imagination exercise. A type 2 civilization wouldn't just extend our technology - it would transcend it. Maybe they use dark energy, maybe they gather energy from simulations. We can't understand.
There are billions of engines sitting around the universe; might as well just use those.
The more advanced versions tend to simply use more radiative heat. They might arrange stellar nurseries, modify stars to be more efficient, or use black holes for engines, but even then you're still just feeding them stars anyway.
An extremely advanced civilization would be living around black holes because they are perfect heat dumps (they get colder as you add matter!) and would enable fundamentally better technology.
Chemistry and reactions would absolutely still be a thing. Reactions happen underwater all the time such as the complex decay of organic matter.
The fire meta gets postponed until trapping air inside bags happens (could be seaweed/skin-based bags).
Then you need to make a habit of collecting a bunch of air and trapping it and then can begin exploring chemical reactions in the air.
ex: take dead but not decomposed organic matter and dry it out in a hot air bag (maybe cover the bag in black squid ink and float the bag of air in the ocean in the sun's rays for a day to warm it up).
Then eventually you need to have the insight to do friction-based experiments in the bag with the dried materials, and then one discovers fire in a massive breakthrough not dissimilar to when humans created Bose–Einstein condensates for the first time in highly specialized environments.
Nothing here says "impossible" to me. I bet if whales had fingers to easily manipulate matter they might've already done all this by now.
While I agree that there is no logical reason that underwater organisms could not become highly intelligent or advance to the level of doing experiments with fire, it is clear that being underwater is an additional barrier.
As such, the number of intelligent underwater civilizations that could get near our present level of advancement would likely be significantly lower. Not impossible (because of how large the universe is), but some orders of magnitude less likely.
> While I agree that there is no logical reason that underwater organisms could not become highly intelligent or advance to the level of doing experiments with fire, it is clear that being underwater is an additional barrier.
Meanwhile, a few thousand lightyears away, some sort of talking crab is rubbishing the idea that industrial civilisation could arise on land; after all, they wouldn't even have access to hydrothermal vents! What would they do for energy, burn plants?
(I really think we're inclined to build a _lot_ of unwarranted assumptions into what industrial civilisation has to look like and how you have to get there, because it's what we did.)
> As such, the number of intelligent underwater civilizations, that could get near our present level of advancement, would likely be significantly lower.
Unless of course, having opposable thumbs and >50 year lifespan and intelligence in the water causes you to go through a completely different developmental path than land based creatures. We just don't know.
Why so complicated? There could be many 'mini-labs' in underwater caves: accidental discovery of an inverse diving bell, so to speak. Gases of any sort, trapped by whatever process (volcanism?) pushing the water out downwards while the gas is unable to escape upwards. Ready to explore and mess around with. Maybe even in something like free-floating coral reefs. Or below the ice.
Shareholders can vote and decide the direction of a company. They should also be held liable for any problems the company causes.
If the company is fined, it should come out of company and then shareholder pockets. I might even add that courts should be able to award damages by directly fining shareholders.
If a company does something severely illegal then very large shareholders should risk jail time.
It’s your company after all as a shareholder. You own it.
It’s no different than if your dog bites someone or your child breaks the law. You have to pay the fines.
Under that twisted logic Israel would be perfectly justified with nuking Palestine. They voted for terrorists, therefore they should be liable for everything their country caused.
If the rent in a city is too high you are not going to get the MOST interesting restaurants, bars, and clubs. You are going to get only the businesses that will DEFINITELY convince an investor to write a check to dump on such high rents, regardless of whether that is a good idea or not.
The PhD cohorts at R1 universities haven't really gotten any bigger than they were 50 years ago. The number of academic jobs hasn't really gotten any bigger either. The only people having success in the system are the types of people that seem low-risk to the system.
So of course we should expect a decline in innovative ideas as time rolls on.
The only way to reverse this is to literally create more tenured jobs (or perhaps temporary tenure, ex: 10 years of guaranteed employment) and increase the size of PhD student cohorts so that they are large enough that iconoclasts can fit in again.
You make a really good point. Defining risk in terms of financial ROI, trying to assess it algorithmically, and default-gating more and more endeavors behind such risk assessment is a surprisingly coherent model for a lot of widespread modern rot.
Thanks for being one of the few commenters questioning the microeconomic angle itself. In my view, the problem is that no rational economic agent will ever embrace the idea that research should be expensive. Well, some very few of us do: we realize that we are competing against Nature and the Universe (not other economic entities), so one of the sensible avenues left to us is to try and recuperate all the losses that have already gone into past research. (Maybe Kepler was trying desperately to salvage Hellenic heliocentrism? He also had a thing for Platonic solids, after all.)
Reinvent the way academic careers work. No need to even have the concept of tenure. No need to get a PhD degree to be recognized as someone researching something.
Anyone should be able to publish research without being on such restrictive terms. The article's main takeaway is that the institution of academia has perverse incentives that hold back innovation. I believe the way jobs and degrees are structured are part of that.
How do you decide which researchers to invest in? Ultimately you need some kind of institution to organize this, and I promise you it will end up very similar to a university.
Also, tenure is generally seen as a very important part of research. Without tenure your livelihood is dependent on your output and your relationships, which creates numerous perverse incentives. You can't research things which might lead nowhere, or you'll lose your job. You can't prove some other senior researcher wrong, or they might work to have you fired.
With tenure, this all goes away: you are free to choose your own path in research, and empowered by knowing you don't have any superiors who can cost you your livelihood if you step on their toes, or their friends' toes. Of course, this is the ideal. In practice, lots of tenured professors actually want to become ever more wealthy and powerful and are not content with the guarantees of tenure.
> I mean, anybody can do research and publish their papers in whatever journal.
Not really true. Publishing research in a reputable journal without an academic affiliation is many times more difficult. Sure you don't need a PhD to publish, but you pretty much need an academic affiliation.
Good or bad, researchers without an affiliation are often taken as crackpots.
I'd say if someone can't do calculus-based statistics then a lot of high-earning career paths (ex: machine learning, data science, actuary, quant) are not available to them.
That doesn't mean you won't be rich. It's just that some of the lowest-hanging fruit are not an option.
Yah, failure to understand statistics is a big risk financially.
I remember a rich man interviewed on TV who said he got his start making money in high school by running gambling games. He understood statistics while the other kids did not, and although the game was fair, he cleaned up regularly.
Take a walk through a Vegas casino, and you'll see legions of people who do not understand statistics and pay a heavy price for that.
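The house edge is easy to make concrete with a quick simulation. This sketch assumes a simple even-money bet on red in American roulette (18 red, 18 black, 2 green pockets); the function name and trial count are just for illustration:

```python
import random

def play_red(trials=500_000, seed=0):
    """Simulate betting 1 unit on red per spin of American roulette.
    Returns the mean profit per bet (negative = house edge)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pocket = rng.randrange(38)          # pockets 0..37
        total += 1 if pocket < 18 else -1   # 18 winning (red) pockets
    return total / trials

# The theoretical edge is -2/38, about -5.3 cents per dollar bet
print(f"Mean profit per 1-unit bet: {play_red():+.4f}")
```

Run long enough, that ~5% per spin compounds into the legions of losers you see on the casino floor.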
> you'll see legions of people who do not understand statistics and pay a heavy price for that.
At the risk of stating the obvious, and not adding to the conversation, I think we all know that people putting their life savings into slot machines aren't doing so because they don't understand expected value. They may or may not understand that they are going to lose all their money, but they are gambling because they are addicted/have some kind of mental health problem. Knowledge of statistics doesn't really affect things for problem gambling.
As for those putting modest amounts of money into gambling, most of them will tell you that card games/etc. are fun, and are therefore worth it.
Many of these people claim to have a "system" which will enable them to win. I've talked with some of them. None of them I've spoken to had money. Coincidence?
Watch people at the slots. Do they look like they're having fun? Not to me.
Personally, I've gambled a few times. Lost money. I don't like losing money, it is not entertaining to me in the slightest.
Tell me about people who play the lottery, picking their "lucky numbers". It's sad.
> Many of these people claim to have a "system" which will enable them to win. I've talked with some of them. None of them I've spoken to had money. Coincidence?
The original thread was about how statistics education will not cure people of gambling. Of course people almost always lose money gambling, except for very rare exceptions, but that doesn't really have anything to do with my point that people spending meaningful amounts of money on gambling are addicted. Addicts aren't going to just tell you that they gamble because they are addicted (maybe some will, but not in general).
> Personally, I've gambled a few times. Lost money. I don't like losing money, it is not entertaining to me in the slightest.
Some people could probably say the same thing about video games, but nobody disputes that some people enjoy video games.
This comes down to the old saying "everything is memorization at the end of the day".
Some people literally memorize answers. Other folks memorize algorithms. Yet other folks memorize general collections of axioms/proofs and key ideas. And perhaps at the very top of this hierarchy is memorizing just generic problem solving strategies/learning strategies.
And while naively we might believe that "understanding is everything," it really isn't. Consider if you are in the middle of a calculus exam and need to evaluate $7 \times 8$ by calculating $7+7+7+7...$ and then proceed to count on your fingers up to 56 because even $7+7$ wasn't memorized. You're almost certainly not going to make it past the first problem on your exam even though you really do understand exactly what's going on.
Similar things are true for software engineering. If you have to stackoverflow every single line of code that you are attempting to write, all the way down to each individual print statement and array access, it doesn't fucking matter HOW well you understand what's going on or how clear your mental models are. You are simply not going to be a productive/useful person on a team.
At some point in order to be effective in any field you need to eventually just KNOW the field, meaning have memorized shortcuts and paths so that you only spend time working on the "real problem".
To really drive the point home: this is the difference between being "intelligent" versus being "experienced".
> Consider if you are in the middle of a calculus exam and need to evaluate $7 \times 8$ by calculating $7+7+7+7...$ and then proceed to count on your fingers up to 56 because even $7+7$ wasn't memorized. You're almost certainly not going to make it past the first problem on your exam even though you really do understand exactly whats going on.
This is not a counterexample because exams aren't an end goal. The process of filling out exams isn't an activity that provides value to society.
If an exam poorly grades a student who would do great solving actual real-world problems, the exam is wrong. No ifs. No buts. The exam is wrong because it's failing the ultimate goal: school is supposed to increase people's value to society and help figure out where their unique abilities may be of most use.
> Similar things are true for software engineering. If you have to stackoverflow every single line of code that you are attempting to write all the way down to each individual print statement and array access it doesn't fucking matter HOW well you understand whats going on/how clear your mental models are. You are simply not going to be a productive/useful person on a team.
If their mental models are truly so amazing, they'd make a great (systems) architect without having to personally code much.
I can't totally agree with your counter-counterexample. Most non-trivial problems are time-bound, deadlines exist, and no matter how well ingrained you are in first-principles thinking, you won't be useful if it takes months to come up with a solution.
To know something includes speed of regurgitation. Consider a trauma surgeon. You want them to know, off the top of their head, lots of stuff. You don’t want them taking their time and looking things up. You don’t want them redefining everything from first principles each time they perform surgery.
Knowing a topic includes instant recall of a key body of knowledge.
I would say knowing and understanding are not necessarily the same. In this example, the surgeon having both understanding and memory/knowing is best/required. If I had to pick between the two, I want the one that understands my particular trauma, even if that means they have to give instructions for someone else, or a machine, to perform it.
I think an example closer to the above posts would be: if I needed CPR or defibrillation, I would much prefer a paramedic be next to me to make that call and perform it than a med student or a defibrillator manufacturer's electrical engineer.
I’m ABD in math. It was 30 years ago that I decided not to get a Ph.D. because I realized that I was never going to be decent at research. In the last 30 years I have forgotten a great deal of mathematics. It is no longer true that I know Galois Theory. I used to know it, I know the basic idea behind it, and I believe I can fairly easily relearn it. But right now I don’t know it.
That's wild: we all use AES ciphers with TLS/HTTPS every day -- and Galois fields are essential to AES -- but few people understand how HTTPS works.
The field is probably onto post-AES, PQ algos where Galois Theory is less relevant; but back then, it seemed like everyone needed to learn Galois Theory, which may or may not be a prerequisite to future study.
The problem-solving skills are what's still useful.
Perhaps problem-solving skills cannot be developed without such rote exercises; and perhaps the content of such rote exercises is not relevant to predicting career success.
> Re: "this is not a counterexample because exams aren't an end goal..."

For any end goal with a set end time there are habits that need to be second nature and information that one needs to know in order to achieve that goal. If you lack those habits and don't know those facts, it's going to be very hard to achieve that goal.
I used the example of a calculus test and not being able to do addition. But this really could be any example. It could even have been a wide receiver failing to read the play that's happening quickly enough, despite being physically fit enough to execute the right play in hindsight.
>Re: they'd make a great (systems) architect...
But you wouldn't hire them as a programmer. My sentence was biased in the sense that "team" meant "team of software engineers". You would hire them for a different job sure.
Also, a good mental model here just means "always knowing and being able to clearly articulate what I need to accomplish next to write my code". It doesn't even mean they are good at designing systems, but let's go with that example anyways below:
The Architect version of this is that they perhaps have perfectly clear mental models of exactly how to code (memorizing very obscure language shortcuts and syntactic sugar and writing very clear code when they know what to build) but they cannot for the love of god think critically about what a design should be BEFORE they implement it far enough to reach a major issue.
And you would rightly say "well, I would never hire that guy as an architect, but I might have hired them as a programmer that's led by more senior folks". At the end of the day you are only hiring people for the parts of their mental models that are useful.
And the ability to clearly recall facts about their domain is basically the fundamental detail here.
I agree with you that memorization is an optimization for getting daily tasks done (maybe not as optimal when novel solutions are needed; understanding/mental models might win out there). But we have tools to help take the load off memorization. The person that `understands` addition, not as 7 + 7 but as incrementing a number a certain number of times, can use a calculator to solve the problem in a more efficient way.
I would probably not make a developer who had great mental models but lacked coding chops my first hire. Nor the programmer that could make code do amazing things but cannot grasp the domain model. I would, however, probably consider them (the mental-model one) as the 100th hire, to clean up backlogged bug fixes, and the code whiz to implement the more technically difficult backend niche feature/optimization. As much as it pains me to say it, GitHub Copilot chat works surprisingly well IF you can give it a clear, concise description of the model and expectations. Then someone with an excellent mental model can create the smaller Lego pieces and put them together, minimal coding required. Not only for the popular languages; I play with it from time to time using Clojure.
To support your point, I think the role of memory in creative work is highly underrated.
I’ve seen up close a few people who could fairly be described as “most creative researchers in the world” (in my field at least) according to metrics such as h-index and Nobel prizes. It always strikes me how essential exceptional memory is to what they do — they have detailed, immediate recall of the problems in their field and, to the extent this recall is no longer present, then they are no longer producing groundbreaking work. Their recall of facts outside the field is often nothing special.
Imagination, creativity, intelligence all seem to rely on memory in order to operate.
> And perhaps at the very top of this hierarchy is memorizing just generic problem solving strategies/learning strategies.
I'm not sure this counts as memorization. I don't even think you can really "memorize" high level learning and problem solving strategies, even when explained by an expert. You kind of have to re-discover them internally. And then, there are people who "memorized" the explanation and are completely unable to put it into practice because to them it's just a word sequence, instead of an internalized change to the way you perceive and work with problems.
You absolutely can. I remember struggling with some problems on AoPS and then reading in a book "always consider smaller $n$ when dealing with a problem that is difficult because of large $n$", and ever since then that habit has stuck. Whenever I have a problem that's hard and involves numbers and I'm stuck, I just remember to ask "what if the numbers were smaller? what do we do then?"
If that isn't memorizing something and making a new habit as a kid then I don't know what memorizing means.
Said another way, the ability to remember to "____" when dealing with a problem of type "___" is what I mean by "memorize".
> Whenever I have a problem thats hard and involves numbers and i'm stuck I just remember to ask "what if the numbers were smaller? what do we do then?"
I think you underestimate the amount of internalized understanding of the "unblock yourself on a difficult problem by solving a simpler version of it" strategy that you possessed or unlocked at learn-time which allowed you to notice its effectiveness. Isn't the sentence more of an easily-retrievable mnemonic for a concept that's much more complicated (than just the information transferred by language) and requires a particular background to recognize how useful it is?
They’re called heuristics in the problem-solving literature. Both heuristics and meta-heuristics have been used in planning software. Heuristics from one system are sometimes reused in another system. So, you can memorize generic problem-solving strategies.
I don’t know how much human brains do in that area vs non-memorization approaches. I’ve read about practicing rational problem solving in specific domains to bake those heuristics into one’s intuition for faster responses. Most of us have done that, too. Any type of intuitive problem solving probably involves memorization for that reason.
Memorization is caching. You need it because otherwise you'd be too slow at anything, but you can't possibly memorize everything, and the whole point of understanding is so you don't have to. And like with any caching, the more you store, the more it costs to maintain it, and the longer the lookups become. If you want to cram a lot of stuff into it, you may need to start doing actual, expensive work - e.g. spaced repetition - to keep it all.
As for memorizing generic problem solving strategies - I don't think it's about not memorizing, but rather that understanding comes through examples, and if you learn high-level stuff without actually applying it in practice and experiencing the process, then you haven't actually learned the high-level stuff; you just think so, and will parrot the description without comprehending it.
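The "memorization is caching" framing can be made literal. A toy sketch of the same computation with and without a memo cache (Fibonacci is just a stand-in for any repeatedly-needed result):

```python
from functools import lru_cache

def slow_fib(n):
    # "Understanding only": re-derive everything from first principles
    return n if n < 2 else slow_fib(n - 1) + slow_fib(n - 2)

@lru_cache(maxsize=None)
def cached_fib(n):
    # Same understanding, plus a memory of results already derived
    return n if n < 2 else cached_fib(n - 1) + cached_fib(n - 2)

# Identical answers; the uncached version redoes exponentially more work,
# much like re-deriving 7*8 by repeated addition mid-exam.
print(slow_fib(28), cached_fib(28))
```

Same logic, same answers; the only difference is whether intermediate results are remembered or re-derived every time.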
Nah, there's such a thing as creative thinking, idea generation, and connecting existing ideas in new ways. I wouldn't mind a coder that has to look at stack overflow a lot but is able to figure out a new method to do something better.
You absolutely would never hire a coder that needs to google "how to access an array by index" every time they need to access an index of an array.
You can say a politically correct answer like "i don't care how they do it, as long as they get it done" but such a coder will DEFINITELY take months to finish what might take someone else hours.
Such a coder might still be able to suggest new methods to do something better, and if their job description was "organizational optimizer", perhaps that's fine. But as soon as you also expect software output from this person, you will quickly realize that you take for granted how valuable someone who has fully memorized a bunch of fundamentals, up to and including some problem strategies, truly is.
That makes no sense to me. If this coder has to access an array by index twenty times a day, then he is going to remember it eventually, no? If it is rare that he has to do it, then why memorize it?
You really think there is more value in remembering how to do something in some arbitrary, shitty, programming language than understanding the concept of doing it? With understanding the idea you can do it in any language, at any time, it is just a few seconds away.
It makes no sense because it indeed makes no sense. People who successfully solve realworld problems understand concepts and ideas and how to apply them, they understand how to iterate and extrapolate.
I've met too many people who can do a specific thing but actually have no idea what's going on for the GP's logic to hold any water at all.
It's not about the value in remembering syntax. It's the value in being able to recall a concept from memory.
Memory is a key part of learning. Understanding is great for learning new concepts, but you want to already know a concept. That way lies knowledge and experience.
It varies, but it often comes down to deep expertise combined with creativity, years of toil, and standing on the shoulders of giants. Cf. Fermat’s Last Theorem, bounded gaps between primes, the Weil conjectures, the Poincaré conjecture, etc.
> At some point in order to be effective in any field you need to eventually just KNOW the field, meaning have memorized shortcuts and paths so that you only spend time working on the "real problem".
Yes, there is a "habitus" to mastery. It becomes you, or you become it, so to speak.
But pedagogically speaking, I think what people miss is that you can't really use or think about something you don't remember.
I love this long detailed conversation with many people jumping in, and 0 references to philosophers of the mind… gee guys, I wonder how we could crack this code? Even the paper itself cites one cognitive psychologist then moves on! A bit of relatable intellectual arrogance from us SWEs/mathematicians, I think — we are “on top of the world” right now.
FWIW I think you in particular are exactly right. I always think of Schopenhauer’s quote, and I think any software engineer might appreciate it: human memory isn’t storing items received from the world in a database, it’s more like folding creases into a napkin so that it naturally tends to fall into that shape in the future. In other words: remembering an event is equivalent to developing the skill of imagining a scene/dataframe that relates to that event.
In specific math terms: math is a collection of intellectual tools building on one another. You can certainly practice the ability to apply tools in new situations, but if you don’t also practice the ability to recall the tools themselves, it’s useless.
But is that actually what human memory is like? AFAIK nobody actually understands the internals. The "philosophers of the mind" who claim to know are the ones guilty of arrogance, not those who don't cite them.
Well, we should collect some evidence and write a book! If we did, it would be filed into the philosophy of mind section, I believe ;)
We don’t know everything, but we have more evidence than “it’s a black box” - in fact, that’s basically the scholastic / Aristotelian view that was conquered by our friends Bacon, Hume and Kant a few hundred years ago.
> If you have to stackoverflow every single line of code that you are attempting to write all the way down to each individual print statement and array access
Then you may be a perfectly adequate programmer. This, what, doubles the length of time it takes to type out the program? Triples? Typing out the program is not what takes the time!
I've just spent a couple of days writing a plugin in a language I don't know. (The system documentation spends two paragraphs explaining how hard it is to solve the problem I solved.) Yes, I had to look up absolutely everything (including basic language syntax – repeatedly), and that was really annoying, but most of my time and effort went into figuring out how to do the thing.
"everything is memorization at the end of the day"
Only somebody who has never thought about or studied human cognition would memorize such a thing. ;)
But in all seriousness, memory isn't just memorization. Much of it is attention; some would even say attention is all you need. ;)
In all seriousness though, arguably, reducing the human mind down to a single dimension like "recall" (or attention) while ignoring other dimensions like emotion, creativity and so on is probably good evidence that human cognition is neither simple, nor unidimensional, for some of us humans at least. Ymmv
I'd say memorization and building expertise are orthogonal.
Expertise is lossy intuitive reasoning. It's pattern recognition based on practice and experience. Then there is logical reasoning based on memorized facts, which is a fallback mechanism people use when they don't have the necessary skills. It usually fails, because it's inefficient, it doesn't scale, and it doesn't generalize.
Sometimes memorization is necessary, but it's often not the actual point. When kids are asked to memorize the multiplication table, they are not really supposed to memorize it. They are supposed to build a mental model for multiplying numbers without resorting to first principles or memorized answers. Then if your model can calculate 7 * 8, you can also use it to calculate 7e10 * 8e11, even if you haven't memorized that specific fact.
The multiplication table doesn't have patterns, or it only has a few. You really do need to remember all of the 100 results. I know what 7*8 is, and I know the rules for exponents, so I can compute 7e10*8e11. But I can't "deduce" what 7*8 is by any rule, it's just a fact I remember. I have certainly not added 7 to itself 8 times in decades.
But you can break this into a different problem knowing that 2^3 = 8, and doing 7*2*2*2.
This isn't as fast, but it's more useful in a way: 7*8 is fairly easy to remember, while you're not going to remember 17*8 etc, yet you can problem-solve it fairly quickly.
There are other ways of seeing the multiplication table as well. For example 9 times something can be thought of as 9*x = 10*x-x.
I never learnt these, but simply realised over time that there are different approaches to doing calculations.
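Those two tricks can be written out directly (a small sketch; the helper names are mine, not the commenter's):

```python
def times8_by_doubling(n):
    # 8 = 2**3, so multiplying by 8 is just doubling three times
    return ((n * 2) * 2) * 2

def times9(n):
    # 9*x = 10*x - x
    return 10 * n - n

print(times8_by_doubling(7))   # 56
print(times8_by_doubling(17))  # 136
print(times9(7))               # 63
```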
> But you can break this into a different problem knowing that 2^3 = 8, and doing 7*2*2*2.
Doing that multiplication all the way through is super slow. When they said "can't" they meant in an effective sense, since they did mention repeated addition as an option. And that's not an effective way to get there.
> There are other ways of seeing the multiplication table as well. For example 9 times something can be thought of as 9*x = 10*x-x.
Yes, you can do that one. But that's just about the only fast trick there is.
> Yes, you can do that one. But that's just about the only fast trick there is.
I dunno about that. For division, anyway, there's a bunch of fast tricks that give you a decent approximation (i.e. decent precision, maybe to the nearest integer)
Someone recently was surprised that I worked out the VAT (Value Added Tax, 15%) on a very large number in a few seconds. It's because it's 10% of the number plus `(10% of the number)/2`.
It's easy to get 10% of any number. It's easy to halve any number. It's a fast trick because there's two easy operations.
There's a bunch of similar "tricks": 1%, 10%, 25% and 50% are fast to calculate in your head (at most 2 easy operations, like `(half of half of N)`). Then you add or subtract those results, or double one of them.
At most three easy operations gives you 1%, 2%, 4%, 5%, 10%, 11%, 12%, 14%, 15%, 20%, 21%, 24%, etc
To someone who doesn't know how you are getting the answer it might seem like you are a human calculator because you get so many of those quickly, and they don't see the ones you don't do in 3 easy operations (say, 13%, which is 10% + 1% + (1% * 2)).
IOW, it looks like a very impressive trick, but it isn't.
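All of the percentage tricks above reduce to compositions of two cheap operations: shifting the decimal point and halving. A small sketch of a couple of them (function names are my own):

```python
def pct10(n):
    # 10% is just shifting the decimal point
    return n / 10

def half(n):
    return n / 2

def vat15(n):
    # 15% = 10% + half of 10%  (the VAT trick from above)
    t = pct10(n)
    return t + half(t)

def pct25(n):
    # 25% = half of half
    return half(half(n))

print(vat15(2400))  # 360.0
print(pct25(2400))  # 600.0
```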
I completely disagree. First of all, at the time children learn the multiplication table, they definitely don't know the concept of exponentiation. Secondly, 7*2*2*2 is not some immediately obvious shortcut.
Also, learning multiplication with numbers higher than 10 still relies on knowing the multiplication table. 17*8 is 7*8=56, hold the 5, 1*8 + 5 = 13, so 136.
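The "hold the 5" procedure is the grade-school digit-by-digit algorithm with carries; a rough sketch (names are made up), where the `digit * d` step is exactly the memorized-table lookup:

```python
def mul_by_digit(n, d):
    # Multiply a multi-digit number n by a single digit d,
    # digit by digit, propagating carries -- each digit * d
    # product is a lookup in the memorized table.
    result, carry, place = 0, 0, 1
    while n or carry:
        n, digit = divmod(n, 10)
        carry, unit = divmod(digit * d + carry, 10)
        result += unit * place
        place *= 10
    return result

print(mul_by_digit(17, 8))  # 136
```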
> Also, learning multiplication with numbers higher than 10 still relies on knowing the multiplication table
You've actually just proved my point - you used a method of breaking down the problem into a different problem and then solving it rather than simply memorising.
If you give the same question to multiple people there will be numerous ways different people use to go about solving it.
As an example, I might solve this by doing
20*8 = 160
3*8 = 24
160 - 24 = 136
Or
10*8 = 80
7*8 = 56
80+56 = 136
And I might apply different tools like the one I originally mentioned within these calculations. I know that 80+20 is 100 and so "borrow" 20 from 56, so that I can easily add 100 and 36 together.
These ways of calculating happen in your mind very quickly if this is how you get used to calculating.
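Both decompositions above can be written as tiny functions (names are mine, purely illustrative):

```python
def round_up_method(n, m, base):
    # Round n up to `base`, multiply, then subtract the overshoot:
    # e.g. 17*8 = 20*8 - 3*8 = 160 - 24 = 136
    return base * m - (base - n) * m

def split_method(n, m):
    # Split n into tens and units:
    # e.g. 17*8 = 10*8 + 7*8 = 80 + 56 = 136
    tens, units = divmod(n, 10)
    return tens * 10 * m + units * m

print(round_up_method(17, 8, 20))  # 136
print(split_method(17, 8))         # 136
```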
Sure, but all of those work for numbers higher than 10, and all assume you know the multiplication table by heart. The multiplication table (the result of multiplying every number between 1 and 10 with each other) is something you have to memorize. You can get away with memorizing only some of these results and computing the others based on them, but it's basically impossible to do any more complex arithmetic if you don't know most of it by rote memorization.
> Also, learning multiplication with numbers higher than 10 still relies on knowing the multiplication table. 17*8 is 7*8=56, hold the 5, 1*8 + 5 = 13, so 136.
I think the reason I do it this way is that I get an approximation sooner when the numbers are very large, i.e. I get the most significant digit first, and can stop calculating when I reach the precision I require.
Of course there is a pattern: (n+1)x = nx+x. Your brain can learn it just fine, and then it can multiply numbers without having to burden your slow inefficient symbolic reasoning machinery with rules and facts.
How is that pattern useful for replacing memorization of the multiplication table? 7*8 = 6*8 + 8 - fine. I still need to memorize what 6*8 is, or go through the extraordinarily slow process of expanding this recursively: definitely not an option in school.
It's a linear function your brain can learn. Your brain, not the conscious you. A lot of learning is about bypassing the slow inefficient consciousness that thinks it's in charge.
In sports and other physical activities, you don't memorize the right moves. You practice them until you can do them automatically. The same approach also works with cognitive activities.
If that were how people learned the multiplication table, you would see that people take longer to come up with the result of 9*9 than it takes them to compute 3*5. I have never seen anyone work this way, and so I believe it's far more likely people just remember the results in a table. "9*9=81" is simply a fact you have stored, not a computation you perform, even subconsciously.
Edit: I should also note that it's pretty well known people learn arithmetic as symbol manipulation and not some higher order reasoning. The reason this is pretty well established is that historically, the switch from Roman numerals to Arabic numerals led to a huge flurry of arithmetic activity, because it was so much easier to do arithmetic with the new symbols. If people had learned by subconsciously calculating the underlying linear functions and not through symbolic manipulation, the switch would have been entirely irrelevant. Yet for most mathematicians in Europe at the time, doing 27*3 was much easier than doing XXVII*III.
People are more than their conscious minds. A neural network can compute a linear function with a small domain without having to store each case separately.
I never memorized the multiplication table, because I found it boring and unnecessary. When I had to multiply numbers, some answers just appeared automatically, while I could calculate the rest quickly enough. Over time, more and more answers would appear magically, until I no longer had to calculate at all.
Some other things I had to memorize. Those were usually lists of arbitrary names with no apparent logic behind them. And if I didn't need them often enough, they never became more than lists of random facts. For example, I often can't tell the difference between sine and cosine without recalling the memorized definitions.
Or, to give another example, the Finnish language has separate words for intercardinal directions (such as northeast). Usually when I need one of them, I have to iterate over the memorized list until I find the name for the direction I had in mind. Similarly, I had to iterate over the six locative cases in Finnish grammar whenever I needed a name for one of them.
Whether it happens consciously or unconsciously, computation takes time. So, if your theory that the brain computes the results instead of remembering them were true, it should take measurably longer to compute 9*9 than it takes to compute 2*3. I am certain that doesn't happen for me, but it could be measured for others as well.
> When I had to multiply numbers, some answers just appeared automatically, while I could calculate the rest quickly enough. Over time, more and more answers would appear magically, until I no longer had to calculate at all.
This is perfectly explained by some results becoming memorized as you see them more and more, and makes no sense if your unconscious mind were computing things. If your brain were computing these results unconsciously because it had learned the function to apply, it should have come up with results automatically for any (small) multiplication. That it didn't, and you had to consciously do the computation for some numbers, is pretty clear proof that you slowly memorized the same multiplication table, but only filled it in gradually.
Overall I'm not advocating for the importance of cramming the multiplication table. I'm just saying that people who want to do mental arithmetic, or even pen-and-paper arithmetic, can only realistically do it if and when they learn the multiplication table by heart. And, that the reason the multiplication table is taught to children is strictly to have them memorize it so that they can do arithmetic without a calculator at realistic speeds.
From my point of view, what happened with the multiplication table was practice without memorization, while the word lists were memorization without practice. Two different approaches to learning with two different outcomes.
Oh, absolutely, you learned it through practice instead of pure memorization, which overall tends to be a better strategy. But just like chess masters tend to mostly learn game positions (through practice, not through rote memorization) and not some advanced logic, I am pretty sure you learned the specific results and did not learn some subconscious algorithm deeper than "look up 9*5 in the stored table".
Being too early is a privilege. Not everyone CAN invent things that are valuable but ahead of their time.
If you can, then on the one hand don't expect tremendous accolades, but also do continue working on what you're doing if it feels obvious to you that it's important. Today, with the pace at which technology is advancing, people tend to receive credit in their lifetime. In Grassmann's day that was just NOT the case.
If more people took risks building out ideas they "felt" were the right thing to do the world would be a substantially better place.