Businesses seek to maximize the value they can obtain from their revenue models. Price is the key lever decision-makers can pull to influence revenue, and in recent years a growing number of businesses have sought to implement strategies for actively managing that lever – strategies such as demand management and revenue optimization. However, businesses are also highly sensitive to the perception among individual consumers and society at large that their prices are fair, in other words, that they do not violate widely held individual or societal norms. Fair pricing matters – it matters to me, and to you, and perhaps ever more so in a climate characterized by economic uncertainty, downward pressure on demand and a perceptible decline in the citizenry’s trust of public and private institutions.
Fortunately for business decision-makers, fair pricing and optimal pricing are not at odds with each other but can comfortably coexist. Over the coming weeks, my colleagues at Sentrana and I will be exploring the rich topic of fair pricing in a series of exchanges on this blog.
debating the age-old question of fair price
What is a fair price? This question has perplexed humanity throughout history. The leading thought of the ages, from Aristotle’s Nicomachean Ethics to the Summa Theologica of Thomas Aquinas, Pierre de Fermat’s probability arguments and Adam Smith’s classical economics, has weighed in with considered opinions on the fairness and justness of alternative ways to price economic goods and services, and the debate continues today. A series of letters exchanged between Blaise Pascal and Pierre de Fermat in 1654 is often regarded as the starting point of modern probability theory; that exchange was, in fact, an attempt to establish a scientific basis for the notion of a fair price. In his paper “The Unity and Diversity of Probability,” Rutgers professor Glenn Shafer shows how these letters constructed hypothetical games of value that we can today recognize as the application of probability methods to defend a price as ‘fair’ under conditions of uncertainty.
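Shafer’s reading can be made concrete with the “problem of points” at the center of the Pascal and Fermat letters: when a match is interrupted before either player has won, the fair division of the stake is each player’s probability of ultimately winning, found by enumerating the remaining hypothetical rounds. The short Python sketch below is a minimal illustration of that calculation; the function name and the example scores are my own illustrative choices, not figures taken from the correspondence itself.

from fractions import Fraction
from math import comb

def fair_share(rounds_needed_a, rounds_needed_b):
    # Probability that the player who still needs `rounds_needed_a` wins
    # ultimately takes the stake, assuming every remaining round is a fair
    # 50/50 game (the setting of the 1654 "problem of points").
    a, b = rounds_needed_a, rounds_needed_b
    # At most a + b - 1 further rounds decide the match; count the ways the
    # first player collects at least `a` of them.
    n = a + b - 1
    favourable = sum(comb(n, k) for k in range(a, n + 1))
    return Fraction(favourable, 2 ** n)

# Illustrative interruption: player A needs 1 more win, player B needs 3.
print(fair_share(1, 3))  # 7/8: A's fair share of the stake
print(fair_share(3, 1))  # 1/8: the remainder belongs to B

In these terms, the ‘fair price’ of each player’s position is simply the expected value of the stake under a hypothetical completion of the game, which is exactly the style of reasoning the letters pioneered.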
In a previous posting (“Quantitative Intuition: It’s Not Counterintuitive”) I described some of the advances that have been made in bringing together the disparate worlds of quantitative methods and human intuition. I ended on the rather happy note that advanced scientific micromarketing models are now capable of incorporating qualitative human judgment and experience, such that the models are able to “learn” from humans about important factors such as competitive threats, nuanced negotiation strategies and even meteorological vagaries – factors that traditionally have been difficult to crunch into the binary 1s and 0s of machine language. The human brain works in a hierarchical manner, embedding propositions within propositions to think a potentially infinite number of thoughts. In the example I used in that posting, a sales rep who reads about a national wholesaler coming to town to open a discount distribution center can almost instantaneously form a series of mental propositions to evaluate the importance of that news and the probability of potential outcomes that may (or may not) require decisive competitive action from the sales rep’s firm.
“Burn the mathematics,” wrote economist Alfred Marshall in a letter to a friend, musing about the proper role of mathematics and scientific inquiry in the field of economics. That 19th-century cogitation would seem to be a prêt-à-porter soundbite for these latter days of the 21st century’s first decade – a time in which the mathematical infrastructure that underpins longstanding economic and financial theories stands accused of all manner of malfeasance, particularly given its presumed role in the decade’s signature economic event – the financial market meltdown of 2008. The logic behind the accusation goes roughly thus: more complex (but not necessarily more “accurate”) models allow for more complex instruments to be created. Increased complexity means it takes more time to process and then fully comprehend what the numbers may be telling you. At the same time, though, technology allows buy and sell orders to be executed almost instantaneously through electronic trading systems. Time is of the essence, and ponderously complex computations simply won’t do. A seemingly elegant (and fast, and commercially viable) shortcut is discovered and becomes the currency of the day. The models’ outputs come to be trusted blindly simply because there is no time to question them (and too much money to be made by using them). The impenetrable Greek letters obfuscate the sensitivity of the models to changes in important assumptions – which is fine for a few years, because those assumptions (e.g., rising housing prices) don’t change – but then all of a sudden they do. The models start losing more money than they make. Then the chasm widens further as the high levels of leverage in the system make themselves felt. The losses accelerate dramatically, wiping out years of profits in just a few months. Burn the mathematics, indeed.
But let’s take a different look at this apparent tight coupling of mathematics and dire outcomes. In recent correspondence, an author who has been widely published on the subject of Wall Street’s use of mathematical models offered us an interesting opinion. His point was that the problem with the models was not so much their complexity, but rather that they were models in the first place; his argument was that you can never perfectly hedge model risk. Now, I agree with that observation: a model by definition selects some aspects of reality to represent and omits others, and the choice of what to include and what to omit is subject to human error, and therefore fallible and not perfectly hedgeable. But I take issue with the idea that the fault lies in the existence of the models themselves. Models can be misused – I think that much is clear. But the notion that models are all doomed to failure obscures a deeper truth about the goals of predictive modeling; namely, that you can seek either to reduce the world or to truly explain it. By trying to elegantly reduce the world to as few predictor variables as possible, you are more likely to be sowing the seeds of future failure, because complexity and the actual drivers of outcomes are taken out of the equations to make them more solvable (or perhaps more sellable, as in the case of the Gaussian copula function that was behind Wall Street’s demise, which we discussed in a previous posting, “You Can’t Punt Away the Dimensionality Curse”). Predictive modelers don’t have to go down that road, however: they can instead set out with the goal not of reducing an entire system to a single neat, tractable equation, but of quantifying and explaining, to the fullest extent possible, all of the relationships that dictate outcomes. Tractability and computability are things to address later in the process, through technological means, but they should not dictate the fundamental mathematical approach at the outset.
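To see what that kind of reduction looks like in practice, consider a minimal one-factor Gaussian copula sketch in Python. This is an illustrative toy, not the machinery from the earlier posting: the function name, parameter values and sample size are my own choices. The point is simply that a single correlation parameter is asked to carry the entire dependence structure between two credits, and the joint outcome is acutely sensitive to that one assumption.

import numpy as np
from scipy.stats import norm

def joint_default_prob(p1, p2, rho, n_samples=1_000_000, seed=0):
    # Monte Carlo estimate of the probability that two credits default
    # together under a one-factor Gaussian copula, where the single
    # parameter `rho` stands in for the whole dependence structure.
    rng = np.random.default_rng(seed)
    common = rng.standard_normal(n_samples)  # shared systematic factor
    z1 = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(n_samples)
    z2 = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(n_samples)
    # A credit defaults when its latent variable falls below the threshold
    # implied by its marginal default probability.
    defaults_1 = z1 < norm.ppf(p1)
    defaults_2 = z2 < norm.ppf(p2)
    return float(np.mean(defaults_1 & defaults_2))

# With 5% marginal default probabilities, the estimated joint default
# probability swings dramatically as the assumed correlation moves.
for rho in (0.0, 0.3, 0.7):
    print(rho, joint_default_prob(0.05, 0.05, rho))

That sensitivity to a single assumed parameter is precisely the kind of thing the impenetrable Greek letters made easy to overlook, and precisely what a fuller treatment of the actual drivers of outcomes is meant to expose rather than hide.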