There has been research suggesting that AI doesn't actually think or reason its way to an answer. In retrospect, I suspect that what we are really saying is that AI doesn't simulate. A basic simulation iteratively applies rules to an initial state until it reaches an end state.

For us, there is intuition, the first thought or reaction, and then simulation: checking whether that first thought would match our simulated result before we finally do something in the real world. In software engineering we first write code, then take a moment to review it, stepping through in our heads what it does; only if that looks good do we run it to see whether it actually works. Reviewing our own code is simulating it before compiling and/or running it.

Reasoning models seem to try to follow this behavior, but they make mistakes in the simulation step. I suspect simulation requires far more attention, or a kind of ordering, that AI currently doesn't have.
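The definition of simulation above can be sketched as a loop. This is a toy illustration in Python, not a claim about how any AI system works; the `simulate` helper and its parameter names are hypothetical, invented here to make the idea concrete:

```python
def simulate(initial_state, rule, is_end_state, max_steps=1000):
    """Iteratively apply a rule to a state until an end state is reached."""
    state = initial_state
    for _ in range(max_steps):
        if is_end_state(state):
            return state
        state = rule(state)
    raise RuntimeError("no end state reached within the step budget")

# Toy example: "stepping through" a countdown loop in our head
# before running it for real.
final = simulate(
    initial_state=5,
    rule=lambda n: n - 1,           # the loop body we trace mentally
    is_end_state=lambda n: n == 0,  # the loop's exit condition
)
print(final)  # → 0
```

Reviewing code in your head is doing exactly this: holding the state, applying each line as a rule, and checking that the end state matches what you expected.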