I think I get part of what you're saying: the brain prefers what is known to be okay ("consistency") over changing to an unknown way of thinking or state that could lead to danger, even if it's better?
Precisely. This leads to far more evolutionary stability at the species level.
An axiom is an assumption we make in a logical system. Take standard mathematics: we have the underlying assumption that we are using the set of real numbers, or sometimes complex numbers. If you instead assume we are using the set of integers (whole numbers only), we move into a realm of math called discrete math, where, in modular arithmetic, 2 + 2 can equal 0.
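The "2 + 2 = 0" case comes from modular arithmetic, one corner of discrete math: working modulo 4, sums wrap around to 0. A minimal sketch in Python (the choice of modulus 4 is just for illustration):

```python
# Arithmetic modulo 4: values wrap around at the modulus,
# so 2 + 2 lands back on 0.
a, b, modulus = 2, 2, 4
result = (a + b) % modulus
print(result)  # 0
```

Change the axiom (the number system you assume) and the same symbols "2 + 2" denote a different answer.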
In geometry, when we use proof by contradiction, we make the assumption that the logical principle of the 'disjunctive syllogism' is true. In any sufficiently powerful, consistent mathematical system, there are assumptions we have to make that cannot be proved using the rules of the system itself. This is Gödel's Incompleteness Theorem.
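The disjunctive syllogism says: from "A or B" and "not A", conclude B. Since it only involves two propositions, you can verify it holds under every truth assignment with a brute-force truth table, a quick sketch:

```python
from itertools import product

# Disjunctive syllogism: (A or B) and (not A)  =>  B.
# Exhaustively check every assignment of truth values to A and B.
valid = True
for A, B in product([True, False], repeat=2):
    premises_hold = (A or B) and (not A)
    if premises_hold and not B:
        valid = False  # a counterexample would land here

print(valid)  # True: the inference is valid in classical logic
```

Note this check itself happens *inside* classical two-valued logic; per Gödel, at some point the rules of the game have to be assumed rather than proved.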
So when we develop AI, inherent in the program are mathematical axioms, logical assumptions we've made in order to serve as the foundation of the AI's logic.
The brain has no foundation in any logical axiom or system. It is simply an experience machine based on Hebbian learning, often summarized as "neurons that fire together wire together": neurons with coinciding activity form stronger connections. So the brain tends toward the connections it has already made in favor of other behavior. It has no logical basis for determining that a new experience might be more beneficial and strengthening those connections; new connections only get strengthened through repeated experience.
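The Hebbian idea can be sketched as a toy weight-update rule: the connection between two units grows in proportion to how often they are active at the same time. This is a minimal illustration, not a biological model; the learning rate and activity pattern are arbitrary choices:

```python
# Toy Hebbian learning: w[i][j] grows when units i and j are
# co-active ("fire together, wire together").
lr = 0.1                      # learning rate (illustrative value)
pattern = [1, 1, 0, 0]        # units 0 and 1 repeatedly fire together
n = len(pattern)
w = [[0.0] * n for _ in range(n)]  # connection strengths, all start at 0

for _ in range(10):           # repeated experience strengthens links
    for i in range(n):
        for j in range(n):
            w[i][j] += lr * pattern[i] * pattern[j]

print(round(w[0][1], 2))  # 1.0 -- the co-active pair is strongly wired
print(w[0][2])            # 0.0 -- a silent neuron gains no connection
```

Notice there is no "is this connection a good idea?" step anywhere: the only thing that changes a weight is repetition of coincident activity, which is exactly the point about the brain having no logical basis for preferring a new, untried experience.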
Yes, there are many kinds of meditation that make your brain more susceptible to suggestion, or rather, increase the rate of Hebbian learning (a term worth looking up). You can look up which kinds of meditation can do this; meditation that puts you in a trance-like state would be the area I would look into first.