Anchoring gives us a sense that in decision making we bring a great deal of parallel activity to bear to arrive at a conclusion. We also suspect that sequences of events must affect our deliberations.
Does HTM work this way too? Very much so.
Some key aspects of how an HTM works:
- Model hierarchy. Models are learned by building on previous models, and over time form a hierarchy of reusable models. One can even think of memory and model as one and the same. While the potential for “memory” exists in the HTM (and likely in our cortex), it is only once patterns have been built that we might consider them real memories.
- It’s all predictions. Likewise, memories and models are also simultaneously predictions. A model seeks opportunities for confirmation every chance it gets.
- Temporal structure. We often think of memories as static and spatial, like images, say of our cat, and our power of recognition is exemplary (i.e., satisfying the prediction “is that my cat?”). But what is also baked in, and less obvious, is that models exist in association with many contexts (other models), and significantly with those connected directly in time. Yes, I am good at recognizing my wife’s voice saying “Hi, I’m home,” but she typically returns around 5:30p, and of course I am expecting (predicting) it.
- Sparse representation. And finally, memories, because they can be built from other models, can also be represented in what we call a very “sparse” fashion. Pieces combine into entirely new models very compactly, which at the same time explains the power, speed, and efficiency of the human cortex.
In short:
- memory = model = prediction (and later also = action!),
- memories are always wired into a temporal structure, and
- memories can be incredibly reliable and efficient because they are based on sparse representations.
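The “sparse” idea above can be made concrete with a toy sketch. This is not real HTM code: the bit width and 2% sparsity are typical figures from HTM literature, and the names (`random_sdr`, `overlap`, `my_cat`) are purely illustrative. It shows how combining pieces stays compact, and how recognition can tolerate noise by checking overlapping active bits.

```python
# Toy sketch of sparse distributed representations (SDRs).
# Assumptions: 2048-bit vectors with ~2% active bits; all names illustrative.
import random

N_BITS = 2048    # total bits in a representation
N_ACTIVE = 40    # ~2% active: the "sparse" part

def random_sdr(rng):
    """A memory/model as a small set of active bit positions."""
    return frozenset(rng.sample(range(N_BITS), N_ACTIVE))

def overlap(a, b):
    """Shared active bits; a high overlap signals a semantic match."""
    return len(a & b)

rng = random.Random(42)
cat_image = random_sdr(rng)   # "what my cat looks like"
cat_sound = random_sdr(rng)   # "what my cat sounds like"

# A composite "my cat" model built by combining pieces stays compact:
my_cat = cat_image | cat_sound
print(len(my_cat), "of", N_BITS, "bits active")   # at most 80 of 2048

# Recognition = checking overlap against a noisy, partial input:
noisy = frozenset(list(cat_image)[:35]) | frozenset(rng.sample(range(N_BITS), 5))
print("match:", overlap(my_cat, noisy) >= 30)     # robust despite noise
```

The design point is that each piece uses only a sliver of the bit space, so unions of models remain cheap while overlap comparisons remain highly reliable.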
Now we can play out some of the hypothetical ways an HTM helps us understand anchoring. In most anchoring experiments the two tasks are related somehow, even though they technically have nothing to do with one another; for example, both may involve numbers. And of course, the two tasks are linked in time, one right after the other. With the first task invoking models related to numbers and forming predictions, perhaps it is not surprising that these models remain active and influential through the second estimation task.
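The “remain active and influential” idea can be caricatured in a few lines. This is a purely hypothetical toy, not HTM: the decay constant, the linear blend, and the function name `lingering_bias` are all made up, just to make the shape of the hypothesis explicit, i.e. that leftover activation from the first task decays with time and pulls a later estimate toward the anchor.

```python
# Purely hypothetical toy of sequential priming: activation from a first
# task decays over intervening steps and nudges a later estimate.
# The decay constant and the linear blend are invented for illustration.
def lingering_bias(anchor, true_guess, steps_between, decay=0.7):
    """Blend an unbiased guess with a decaying trace of the anchor."""
    trace = decay ** steps_between          # leftover anchor activation
    return (1 - trace) * true_guess + trace * anchor

# An anchor of 65 given right before an estimate of 30 pulls it upward;
# inserting more steps between the tasks weakens the pull.
print(lingering_bias(anchor=65, true_guess=30, steps_between=1))
print(lingering_bias(anchor=65, true_guess=30, steps_between=3))
```

One testable consequence of even this crude picture matches the prediction below: the closer in time (and in model space) the two tasks are, the stronger the bias should be.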
And one other aspect comes to light: the value of making a wrong prediction. As we said, these sparse representations can quickly form complex models, in part by allowing less-than-perfect predictions to proceed. In typical anchoring experiments the tasks at hand are not, for example, critical to survival. It would be interesting to explore the anchoring bias relative to the importance of a decision and see if the bias breaks down there. [Put another way, “It doesn’t really matter what I put out there as an estimate, but it sure will be interesting to see if these two tasks were really linked in some way when I hear the answer!”]
Considering these aspects of the HTM, then, might let us predict how anchoring plays out. It is likely that:
- The sequential nature of the tasks is essential to the observable effect.
- Finding further model links should strengthen the effect. For example, invoking the anchor not just with a number but with the same units as well (say, currencies) should make the bias stronger.
- We might hypothesize there is a model serving the notion of “estimation of numbers,” and that it creates the expectation of a series of related tasks. Changing estimation domains, such as asking for a non-numeric answer in the second task, would likely reduce the anchoring bias.
As HTM models can further inform and clarify how our minds engage these kinds of tasks, we expect them to also spur the development of new tools to help deliver information and inform decision making across a range of applications.