Darwin's Grove

Does Envy vs. Greed apply to Marketing?

10/23/2013

What if our assumptions are wrong? By a little? By a lot? These kinds of what-ifs can create interesting insights. A recent example of testing a big assumption in finance may offer an interesting new perspective for marketing.

It’s not greed…
At the risk of simplifying too much, financial models rely on the widely held assumption that greed drives the market. But one theory submitted by Eric Falkenstein argues Why Envy Dominates Greed. While he builds a compelling case in his book, one of the clearest arguments is based on evolution and cognitive science. Basically, the social nature of man is built on a capability to monitor relative status, hence envy, rather than on the unbounded demands of uncapped needs, or greed. Cognitively, if greed were dominant, we’d likely get mentally swamped trying to make decisions about allocating our resources and energies. And that little assumption can change a lot.

Falkenstein goes on to build new financial models based on the changed assumption of envy, with empirical testing that suggests this may be a real improvement over classical financial theory.

Are we making the same assumption in marketing?
Sales and Marketing have long used the model of the “funnel” to structure their efforts. This notion of the funnel has a couple of attributes that stand out:
  • It assumes the customer proceeds through the funnel sequentially in a process of satisfying the need for consumption (a.k.a. greed)
  • The structure is built from the organization’s perspective, and the customer is, well, fodder.

If these assumptions are incorrect, could that help us rethink how we structure sales and marketing? Beyond greed or envy, is there another way to view how we connect with the customer? Could this help us rethink the funnel?

More soon!


Anchoring, can we model it?

5/28/2013

We have decided to “warm up” on anchoring, since a lot of people have either heard of it or experienced it themselves. In decision making, anchoring occurs when an initial piece of information inappropriately influences subsequent decisions. And understanding Hierarchical Temporal Memory (HTM) takes us a long way toward developing a useful model.

Anchoring gives us a sense that in decision making we bring a lot of parallel activity to bear to arrive at a conclusion. We also suspect that the sequences of events we encounter must affect our deliberations.

Does HTM work this way too?  Very much so.

Some key aspects of how an HTM works:
  • Model Hierarchy. Models are learned by building on previous models, over time forming a hierarchy of reusable models. One can even think of memory and model as one and the same. While the potential for “memory” exists in the HTM (and likely our cortex), it is only once patterns have been built that we might really consider these real memories.
  • It’s all predictions. Likewise, memories and models are also simultaneously predictions. A model seeks opportunities for confirmation every chance it gets.
  • Temporal structure. We often think of memories as static and spatial, like images, say of our cat. And our power of recognition is exemplary (i.e., satisfying the prediction “is that my cat?”). But what is also baked in and less obvious is that models exist in association with many contexts (other models), and significantly with those connected directly in time. Yes, I am good at recognizing my wife’s voice saying “Hi, I’m home”, but she typically returns around 5:30p and of course I am expecting (predicting) it.
  • Sparse representation. And finally, memories, because they can be built from other models, can also be represented in what we call a very “sparse” way. Pieces combine into entirely new models very compactly, which at the same time explains the power, speed, and efficiency of the human cortex.

To simplify then:
  1. memory = model = prediction (and later also = action!),
  2. memories are also always wired into a temporal structure, and
  3. memories can be incredibly reliable and efficient because they are based on sparse representations (a toy sketch follows below).
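To make those three points concrete, here is a minimal toy sketch in Python. This is not Numenta’s HTM code, just an illustration under our own simplified assumptions: a memory is a small set of active bits out of a large space (sparse), recognition is bit overlap, and prediction is a learned link from one pattern to the pattern that tends to follow it. All the names and numbers are made up for the example.

```python
import random

SPACE = 2048   # size of the representation space
ACTIVE = 40    # roughly 2% of bits active: a "sparse" representation

def sparse_pattern():
    """A sparse distributed representation: a small random set of active bits."""
    return frozenset(random.sample(range(SPACE), ACTIVE))

def overlap(a, b):
    """Recognition is just overlap: how many active bits two patterns share."""
    return len(a & b)

# "Memories" for a few concepts, each stored as a sparse pattern.
cat_image  = sparse_pattern()
wife_voice = sparse_pattern()   # "Hi, I'm home"
front_door = sparse_pattern()   # the door opening around 5:30p

# Temporal structure: remember which pattern tends to follow which.
transitions = {front_door: wife_voice}

def predict(current):
    """Memory acting as prediction: given what is active now, expect what follows."""
    return transitions.get(current)

# Recognition ("is that my cat?") succeeds when overlap with the stored
# pattern is high, even for a noisy, partial input.
noisy_cat = frozenset(list(cat_image)[:35]) | frozenset(random.sample(range(SPACE), 5))
print("overlap with cat memory:  ", overlap(noisy_cat, cat_image))    # ~35, recognized
print("overlap with wife's voice:", overlap(noisy_cat, wife_voice))   # ~0, no confusion

# When the front-door pattern becomes active, the linked prediction fires.
print("door predicts wife's voice:", predict(front_door) is wife_voice)   # True
```

Even this toy hints at why sparsity is so economical: with 2,048 bits and only 40 active, the number of distinct patterns is astronomically large, while two unrelated patterns share almost no bits, so recognition and prediction stay reliable.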

Now we can play out some of the hypothetical ways an HTM helps us understand anchoring. In most anchoring experiments the two tasks are related somehow, even though they technically have nothing to do with one another. For example, both may involve numbers. And of course, the two tasks are linked in time, one right after the other. With the first task invoking models related to numbers and forming predictions, perhaps it is not surprising that these models remain active and influential through the second estimation task.
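As a purely hypothetical illustration of that last point (a toy blend, not an HTM), suppose the number-related activity from the first task lingers and gets averaged into the second, supposedly independent estimate. The weighting, the anchors, and the “unbiased” guess below are all invented for the example.

```python
# A deliberately simple toy of the idea above: activity left over from the first
# numeric task lingers and pulls the second, unrelated estimate toward the anchor.
# The lingering_activation weight is an invented parameter, not a measured one.

def anchored_estimate(unbiased_guess, anchor, lingering_activation=0.3):
    """Blend an unbiased belief with a still-active anchor model."""
    return (1 - lingering_activation) * unbiased_guess + lingering_activation * anchor

# Classic setup: "Is the Mississippi longer or shorter than X miles?"
# followed by "So, how long is it?"
unbiased_guess = 2300                 # what a subject might say with no first task
low_anchor, high_anchor = 500, 5000

print(anchored_estimate(unbiased_guess, low_anchor))    # ~1760, dragged down
print(anchored_estimate(unbiased_guess, high_anchor))   # ~3110, dragged up
```

The point is only structural: the bias falls out of still-active models plus the temporal link between the two tasks, not out of any defect in the second estimate itself.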

One other aspect comes to light: the value of making a wrong prediction. As we said, these sparse representations can quickly form complex models, in part by allowing less-than-perfect predictions to proceed. In typical anchoring experiments, for example, the tasks at hand are not critical to survival. It would be interesting to explore the anchoring bias relative to the importance of a decision and see if it breaks down there. [Put another way, “It doesn’t really matter what I put out there as an estimate, but it sure will be interesting to see if these two tasks were really linked in some way when I hear the answer!”]

Considering these aspects of the HTM, then, might let us predict how anchoring plays out. It is likely that:

  • The sequential nature of the tasks is essential to producing the observable effect.
  • Additional model links should strengthen the effect. For example, invoking the anchor not just with a number but with the same units as well (say, currencies) should amplify the bias.
  • We might hypothesize that there is a model serving the notion of “estimation of numbers”, and that it creates the expectation of a series of related tasks. Changing estimation domains, such as asking for a non-numeric answer in the second task, would likely reduce the anchoring bias.


As HTM models further inform and clarify how our minds engage these kinds of tasks, we expect them also to spur new tools that help deliver information and support decision making across a range of applications.

Catch something viral!

12/17/2008

One of the things we do is design, instrument, and execute viral marketing programs, and we find in working with our clients that breaking down the nature of viral marketing into its primary components helps get everyone on the same page.

Two separate ideas underlie every viral marketing effort out there. Looking at them separately can help you prioritize your efforts and improve chances of success.

The first requirement of any successful viral campaign is recognizing the need to meet each individual user's purchase value threshold (sometimes also thought of as “willingness to pay”). Let's call this the direct user value. In the old days, this used to be called things like "winning customers one at a time" or, even more vaguely, "know your customer". While it sounds blatantly obvious, we see over and over how it is simply overlooked. You might argue that once you have thousands of customers, the network effects will create fantastic customer value (more on this later), but it’s too easy to forget that you need to provide real value, early on, to every single adopter.

Keeping the early focus on those first adopters should make you reconsider each feature of your offering. Is your "AllYogaMats.com" site really going to grow virally from having social networking functionality, or would just making it easier to use, with a better "Find the Right Mat" search interface, make customers buy? Those first eBay adopters (was it Pez collectors?) were well served right out of the gate by the simple-to-understand auction model and easy-to-use site, even though it didn't yet have a big audience.

The second component of viral business models is what most people actually think of first: the network effect. While everyone is familiar with this effect by now, it simply states that the value of the network increases with the square of the number of users. The classic example is fax machines: it was not much use to be the first purchaser of a fax machine unless you had a thing for thermal paper rolls. Real viral models must provide some additional user value due to the size of the network: real network effect value.
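To see why both components matter, here is a back-of-the-envelope sketch. The dollar figures and the Metcalfe-style n-squared coefficient are invented placeholders, not benchmarks; only the shape of the result matters.

```python
# Back-of-the-envelope sketch of the two value components discussed above.
# direct_value and network_coefficient are invented placeholders.

def value_per_user(n_users, direct_value=10.0, network_coefficient=0.001):
    """Direct user value plus a Metcalfe-style n^2 network term, spread per user."""
    network_value = network_coefficient * n_users ** 2   # value of the whole network
    return direct_value + network_value / n_users        # what one user actually sees

for n in (1, 100, 10_000, 1_000_000):
    print(f"{n:>9,} users -> ${value_per_user(n):,.2f} of value per user")
```

Spread per user, the n-squared network term grows only linearly with n, so it is negligible for the first adopters; that is exactly why direct user value has to carry the product until the network is large enough to pull its weight.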

The problem is we are often optimistic about both the number of users our campaigns can reach (at a reasonable cost) and the actual value a user gets from the network effect. It’s also hard to convince users that there will be value for them at some future date, once they and lots of others have joined. Distinguishing between direct user value and network effect value helps focus where efforts should be applied at each stage of business growth.

direct user value + network effect = viral growth!

 





© 2014 Darwin's Grove LLC. All Rights Reserved.