There are no problems that can only be modeled with category theory. One of the most foundational results is the Yoneda Lemma, whose embedding form implies that any (locally small) category embeds faithfully into a category of set-valued functors; in other words, any problem phrased in the language of categories can be translated into the language of sets and functions. The same is true of every mathematical object defined in terms of sets: you can always replace the name with its definition.
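For reference, the lemma itself is a one-line statement: for a locally small category $\mathcal{C}$, an object $A$, and a functor $F \colon \mathcal{C} \to \mathbf{Set}$,

```latex
\mathrm{Nat}\big(\mathrm{Hom}_{\mathcal{C}}(A, -),\; F\big) \;\cong\; F(A)
```

Taking $F$ to be another hom-functor recovers the faithful embedding into set-valued functors, which is what grounds the "translate everything to sets and functions" reading.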
The contribution of category-theoretic language to the implicit framework of a theory can't be larger than the definition of a category, which is very small. You could just as well ask "why use groups when sets with a closed associative operation, an identity, and inverses are more approachable?" Abstract algebra is built on a library of definitions naming kinds of operations on sets that are simple enough to be common. A tool or a technique is not the kind of thing you can find in a definition.
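For scale: the whole definition (objects, morphisms, an identity per object, associative composition) fits in a few lines of code. A minimal sketch in Python, using finite sets and functions as the example category (all names here are illustrative, not from any library):

```python
# A category needs objects, morphisms, identities, and an
# associative composition. Here: finite sets, with functions
# between them represented as dicts.

def identity(obj):
    """Identity morphism on a set."""
    return {x: x for x in obj}

def compose(g, f):
    """Composition g . f (apply f first, then g)."""
    return {x: g[f[x]] for x in f}

A = {1, 2}
B = {"a", "b", "c"}
C = {True, False}
D = {0, 1}

f = {1: "a", 2: "c"}                      # f : A -> B
g = {"a": True, "b": True, "c": False}    # g : B -> C
h = {True: 0, False: 1}                   # h : C -> D

# The category laws hold:
assert compose(f, identity(A)) == f                    # right identity
assert compose(identity(B), f) == f                    # left identity
assert compose(h, compose(g, f)) == compose(compose(h, g), f)  # associativity
```

That really is all there is to the definition; everything else in category theory is built from this small base.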
Rings, vector spaces and modules get a sort of instant acceptance for what they are, but categories have believers and disbelievers. I am curious about how that can happen.
It is because without vector spaces, you could not do linear algebra, which you need for everything. Without categories, you cannot do category theory, which you need for ... what exactly?
Of course you can do linear algebra without vector spaces. Leibniz didn't know about vector spaces, yet he was doing linear algebra. It just happens that the use of vector spaces massively helps thinking about linear algebra problems.
CT is applied to many domains. For a concrete example look up ZX calculus, which is used to optimize quantum circuits.
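One of the core rewrite rules ZX calculus uses for circuit optimization is spider fusion: two connected Z-spiders fuse into one, and their phases add. The rule can be checked directly as a statement about tensor contraction (a toy sketch in pure Python; the tensor encoding of a Z-spider follows the standard definition, the helper names are illustrative):

```python
import cmath
from itertools import product

def z_spider(legs, phase):
    """Z-spider with `legs` wires, encoded as a tensor over bits:
    amplitude 1 on all-zeros, e^{i*phase} on all-ones, 0 elsewhere."""
    t = {}
    for bits in product((0, 1), repeat=legs):
        if all(b == 0 for b in bits):
            t[bits] = 1.0 + 0j
        elif all(b == 1 for b in bits):
            t[bits] = cmath.exp(1j * phase)
        else:
            t[bits] = 0j
    return t

def fuse_over_one_wire(s1, s2):
    """Contract the last leg of s1 with the first leg of s2."""
    out = {}
    for k1, v1 in s1.items():
        for k2, v2 in s2.items():
            if k1[-1] == k2[0]:
                key = k1[:-1] + k2[1:]
                out[key] = out.get(key, 0j) + v1 * v2
    return out

a, b = 0.7, 1.1
# Two Z-spiders joined by one wire equal a single spider
# whose phase is the sum of the two phases.
fused = fuse_over_one_wire(z_spider(2, a), z_spider(2, b))
expected = z_spider(2, a + b)
assert all(abs(fused[k] - expected[k]) < 1e-12 for k in expected)
```

The diagrammatic rule packages exactly this algebraic fact, which is why the same rewrites can be stated in ZX notation, in Penrose notation, or as plain tensor identities.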
> It just happens that the use of vector spaces massively helps thinking about linear algebra problems.
That is the point here.
> CT is applied to many domains.
Yes, but in which of those does it massively help? I just looked up ZX calculus, and I am sure you can formulate that better by not mentioning CT at all.
As usual, quick googling reveals that the ZX calculus is really just a CT repackaging of Penrose diagrams, which were discovered and used without any involvement of CT.