No, it's a representation of which actions lead to good outcomes given an observed state. Classic RL involves no explicit symbolic reasoning about causal factors or their outcomes, and it's very unlikely that any such symbolic representation emerges implicitly under the hood. A neural net in an RL system is just a souped-up version of the value tables used in the earliest RL systems.
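For concreteness, here's a minimal sketch of what that tabular representation looks like: a Q-table updated by one-step Q-learning (the constants and parameter names here are made up for illustration). A deep RL agent swaps the table for a neural network, but the quantity being estimated is the same.

    from collections import defaultdict

    ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor

    # Q-table: expected return for each (state, action) pair -- just
    # numbers recording which actions led to good outcomes, with no
    # explicit causal structure anywhere.
    Q = defaultdict(float)

    def q_update(state, action, reward, next_state, actions):
        """One-step Q-learning: nudge Q(s, a) toward the reward plus the
        discounted value of the best action available in the next state."""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])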
The reinforcement learning framework is perfect for representing cause and effect. An agent could learn that in a state of no fire, taking the action of rubbing sticks together leads to a state of having fire. This is formalized as learning the dynamics function: a model that predicts the next state from the current state and action.
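A minimal sketch of what learning that dynamics function could look like, using a simple count-based tabular model (the fire/sticks state names and the helper functions are illustrative, not from any particular library):

    from collections import defaultdict

    # Count observed transitions: counts[(s, a)][s'] = times s' followed (s, a).
    counts = defaultdict(lambda: defaultdict(int))

    def observe(state, action, next_state):
        """Record one (state, action, next_state) transition the agent experienced."""
        counts[(state, action)][next_state] += 1

    def dynamics(state, action):
        """Estimate P(next_state | state, action) from the empirical counts."""
        seen = counts[(state, action)]
        total = sum(seen.values())
        return {s2: n / total for s2, n in seen.items()} if total else {}

    # The agent tries rubbing sticks a few times and records what happens.
    for _ in range(9):
        observe("no_fire", "rub_sticks", "fire")
    observe("no_fire", "rub_sticks", "no_fire")  # occasionally it fails

    print(dynamics("no_fire", "rub_sticks"))
    # {'fire': 0.9, 'no_fire': 0.1} -- the learned cause-and-effect relation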