> There is no body of research based on randomized, controlled experiments indicating that such teaching leads to better problem solving.
I'm sorry, but one doesn't exactly come across randomized controlled experiments in teaching very often... let alone ones that are well designed... so this isn't saying much.
This is only one piece of a larger argument; you need to read on to see the rest of it.
The form of the argument is this: there is no direct evidence for X, but there is a mountain of circumstantial evidence supporting "not X", so therefore, almost certainly, "not X."
X = "we can teach students how to solve problems in general, and that will make them good mathematicians able to discover novel solutions irrespective of the content"
I have read the rest of the argument. However, my take is that this is just one more contribution to a back-and-forth over every aspect of math education that has been studied. Even though this was published in 2010, the landscape in 2024 still points to "it's unclear" as the answer to "is [anything] effective?", at least for me, unfortunately.
Usually when there's a replication crisis, people talk about perverse incentives and p-hacking. But there are two things I want to mention that people don't talk about as much:
- Lack of adequate theoretical underpinnings.
- In the case of math education, we need to watch out for the differences in what researchers mean by "math proficiency." Is it fluency with tasks, or is it ability to make some progress on problems not similar to worked examples?
> Is it fluency with tasks, or is it ability to make some progress on problems not similar to worked examples?
That's an interesting point. Ideally students would have both. My impression is that the latter is far less trainable, and the best you can do is go through enough worked examples, spread out so that every problem in the space of expected learning is within a reasonably small distance to some worked example.
I.e., you can increase the number of balls (worked examples with problem-solving experiences) in a student's epsilon-cover (knowledge base), but you can't really increase epsilon itself (the student's generalization ability).
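To spell the analogy out (my own formalization, with d standing in for some hypothetical notion of distance between problems):

```latex
% epsilon-cover analogy: worked examples x_1, ..., x_n cover the problem space S
% if every problem p in S is within epsilon of some example.
\exists \, x_1, \dots, x_n \;\; \text{such that} \;\; \forall p \in S \;\; \exists i : \; d(p, x_i) \le \varepsilon
```

Teaching can grow n (more worked examples), but in my experience it can't do much about epsilon (how far from a known example a student can stray and still cope).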
But if you know of any research contradicting that, I'd love to hear about it.
> Lack of adequate theoretical underpinnings.
If you have time, would you mind elaborating a bit more on this?
My impression is that general problem-solving training falls into the category of lack of adequate theoretical underpinnings, but I doubt that's what you mean to refer to with this point.
> That's an interesting point. Ideally students would have both. My impression is that the latter is far less trainable, and the best you can do is go through enough worked examples, spread out so that every problem in the space of expected learning is within a reasonably small distance to some worked example.
I simply mean that researcher team A will claim a positive result for method A because their test measured task fluency, while team B will claim a positive result for method B because their test measured the ability to wade through new and confusing territory. (By the way, I think "generalization ability" is an unhelpful term here. The flip side of task fluency I think of more as debugging: turning confusing situations into unconfusing ones.)
> If you have time, would you mind elaborating a bit more on this?
I don't know what good theoretical underpinnings for human learning look like (I'm not a time traveler), but to make an analogy, imagine chemistry before the discovery of the periodic table, and specifically how off the mark both sides of arguments in chemistry must have been back then.
> My impression is that general problem-solving training falls into the category of lack of adequate theoretical underpinnings, but I doubt that's what you mean to refer to with this point.
By the way, I see problem solving as a goal, not as a theory. If your study measures mathematical knowledge without problem solving, your tests will look like the standardized tests given to high school students in the USA. The optimal way to teach a class for those tests will then be in the style Alan Schoenfeld wrote about in "When Good Teaching Leads to Bad Results," regarding NYC geometry teachers.
You seem to have linked a collection of general research on teaching and learning, which I'm aware exists. I'm talking about randomized controlled trials, where you assign one group of students to receive the intervention and another group not to receive it, and, if it's single- or double-blinded, without the students and/or the researchers knowing which group anyone is in. Even writing this brings up logistical questions about how you might get a reliable research result doing this for teaching (instead of, say, medicine, where it's easy to fool a patient into thinking a placebo is the drug).
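For what it's worth, the statistical core of such a trial is simple; here's a toy sketch in Python (made-up scores and effect size, purely illustrative, not a claim about any real study). The hard part is everything the code doesn't show: blinding, attrition, and keeping the two classrooms otherwise comparable.

```python
import random
import statistics

# Toy RCT sketch with hypothetical post-test scores (purely illustrative).
random.seed(0)

students = list(range(200))
random.shuffle(students)                      # random assignment
treatment, control = students[:100], students[100:]

def post_test_score(true_boost):
    # Hypothetical: scores ~ Normal(70 + boost, 10)
    return random.gauss(70 + true_boost, 10)

t_scores = [post_test_score(3) for _ in treatment]   # assumed small true effect
c_scores = [post_test_score(0) for _ in control]

observed = statistics.mean(t_scores) - statistics.mean(c_scores)

# Permutation test: how often does chance alone produce a gap this large?
pooled = t_scores + c_scores
extreme = 0
for _ in range(10_000):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:100]) - statistics.mean(pooled[100:])
    if abs(diff) >= abs(observed):
        extreme += 1

print(f"observed difference: {observed:.2f}, permutation p ~= {extreme / 10_000:.3f}")
```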
> Maybe you haven’t had reasons to come across such research before
Not OP, but I've "come across" a lot of education research. By "come across", I mean I've read so much of it that it makes my eyes bleed.
There is some good research that yields interesting and compelling results. Rare, but out there. Usually by an individual researcher and maybe with a team. Almost never by a school of education of significant size or by (almost?) any specific field in education.
Results in education are hard for a different researcher to replicate in a slightly different context, and it is often trivially easy to rerun a study and reach a competing/contrary conclusion just by controlling for a variable that the original researcher mentioned but did not control for (e.g., motivated subjects versus unmotivated subjects).
Additionally, much research in education is not well designed, or is well designed but on a relatively meaningless topic. There is a lot of touchy-feely research out there (like the idea that folks can learn math with just problem-solving skills), and folks p-hack the hell out of their data to support their a priori conclusions. It's a smart thing to do to maximize funding and/or visibility in academic journals, but it is absolutely irresponsible in the quest for "truth" and knowledge, which one would hope our education researchers would want (n.b., they largely don't).
I would agree there are a lot of problems with a lot of education research. Many purported findings do not replicate or are otherwise impossible to replicate.
However, there are also many findings that are actually legit. As you say, they're rare, but there are enough of them to paint a surprisingly complete picture when you pull them together.