Trade spend outlays continue to dominate the sales & marketing budgets of foodservice manufacturers, despite persistent dissatisfaction with the cumbersome administration of trade spend programs and their lack of measurable results. Manufacturers want a clearer understanding of how targeted trade promotions influence downstream demand, but instead they become enmeshed in unproductive administrative paperwork such as resolving and processing duplicate claims.
The current trade spend paradigm also does not work in the best interests of distributors. While they do benefit in the short term from the financial impact of the trade dollars they receive from their suppliers, distributors do not obtain insights from current trade spend practices that could help them more effectively grow demand across products and categories. Of more benefit would be product and assortment education from their suppliers, enabling them to identify tangible ways to tap into new sources of sales growth.
Category management, a standard practice in many retail sectors that is now gaining currency in foodservice, can be a way to attain this knowledge, to use it to drive growth for both manufacturers and distributors, and ultimately to phase out the unproductive aspects of the current trade spend paradigm.
SKU proliferation has been a fact of life in foodservice, much as it has been in other industries in recent years. Proliferation creates considerable pressure throughout the value chain to make tough decisions about SKU assortment across numerous product categories. In foodservice the problem is not shelf space, as it is in retail; rather, it is the limited amount of product information that a sales representative can manage in his or her head in order to match the right products with the right customers on a daily basis in real time. As managing assortment has grown more complex, manufacturers and their downstream partners have looked to SKU rationalization to streamline product offerings and manage inventory costs for improved category performance. While SKU rationalization can address these challenges to some extent, it does not get to the core of the problem. The most effective way to improve category performance is to increase demand for products in that category. In turn, the best way to grow demand is to seamlessly match unique customers with the products whose attributes they most highly value. This requires a holistic category management approach, supported by robust data analytics that can take into account the key levers of demand – assortment, promotions, pricing and purchase timing.
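To make that last point concrete, here is a minimal sketch of the kind of analysis that can quantify two of those demand levers – price and promotion – for a single category. It is an illustration under assumed inputs, not a description of any particular methodology: the transaction table, its column names (units, price, on_promo, month) and the use of Python's statsmodels library are all hypothetical.

```python
# Minimal sketch: estimate how price and promotion move demand for one
# category, using a log-linear regression. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_demand_model(txns: pd.DataFrame):
    """txns: one row per SKU/customer/week, with hypothetical columns
    'units', 'price', 'on_promo' (0/1) and 'month' (1-12)."""
    df = txns[(txns["units"] > 0) & (txns["price"] > 0)].copy()
    df["log_units"] = np.log(df["units"])
    df["log_price"] = np.log(df["price"])
    # Log-log specification: the log_price coefficient reads as a price
    # elasticity, on_promo as an approximate promotional lift, and the
    # month dummies capture purchase-timing (seasonal) effects.
    model = smf.ols("log_units ~ log_price + on_promo + C(month)", data=df)
    return model.fit()

# Usage (with a hypothetical transactions DataFrame `txns`):
# result = fit_demand_model(txns)
# print(result.params["log_price"])   # estimated price elasticity
```

A simple specification like this is only a starting point; richer assortment and cross-product effects would be layered on top, but even this form turns the demand levers into numbers that can be compared across categories.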
In large organizations pricing is everybody’s problem, but everybody looks at the problem in a different way. Salespeople earn a livelihood by offering their customers prices that result in completed sales. Account managers have to keep track of tens of thousands of price rules governing products, brands and customers. Bean counters in the finance department are concerned about the relationship between prices and costs. C-suite executives are motivated by how price contributes to the market share, revenue growth and profitability numbers they have to report to their shareholders every quarter. And somewhere in the organization somebody is clamoring for a “just this once!” exception to some pricing policy in order to achieve an immediately pressing milestone.
These are all valid concerns. The problem is that the decision makers are sitting in different parts of the organization, their objectives are often in conflict with each other (or at the very least require trade-offs and compromises), and they are not armed with sufficient information to understand the broader impact of each price decision on firmwide performance.
Veteran marketing managers can tell war stories of battles fought to secure marketing budgets – the pitches and cajoling to focus C-suite attention on the strategic and tactical importance of effective marketing campaigns. Getting something close to the budget you want may be just cause for heaving a big sigh of relief, but these days few marketing managers will be found clinking glasses of Veuve Clicquot in celebration. Once the budget is in hand the real work begins. The economic downturn has put constraints on the total number of dollars you have to spread among competing projects, but it has done nothing to constrain the nearly limitless ways those dollars can be allocated. “Do more with less” is the mantra of the day. Making those scarcer dollars go further means relying on more than traditional finger-in-the-wind gut instincts to tell you what campaigns will work and what campaigns won’t work. Campaign marketing – the art of pulling together targeted messages for specific geographic markets, consumer segments and product types – is in need of a healthy dose of scientific rigor.
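As a toy illustration of what that rigor might look like, the sketch below allocates a fixed budget across a handful of campaigns in small increments, always funding the campaign whose estimated response curve gains the most from the next dollar. The campaign names, response-curve shapes and parameter values are invented for illustration; in practice the curves would be estimated from campaign data.

```python
# Toy sketch: greedy allocation of a fixed budget across campaigns with
# diminishing returns. Campaigns and response curves are placeholders.
import math

# Hypothetical campaigns: expected response = scale * log(1 + spend / saturation)
CAMPAIGNS = {
    "midwest_foodservice_promo": {"scale": 120.0, "saturation": 40_000.0},
    "east_coast_display_ads":    {"scale": 80.0,  "saturation": 25_000.0},
    "national_email_push":       {"scale": 50.0,  "saturation": 10_000.0},
}

def response(name, spend):
    p = CAMPAIGNS[name]
    return p["scale"] * math.log1p(spend / p["saturation"])

def allocate(budget, step=1_000.0):
    """Greedy allocation: fund whichever campaign gains the most from the
    next increment of spend (reasonable here because the assumed response
    curves exhibit diminishing returns)."""
    spend = {name: 0.0 for name in CAMPAIGNS}
    remaining = budget
    while remaining >= step:
        best = max(
            CAMPAIGNS,
            key=lambda c: response(c, spend[c] + step) - response(c, spend[c]),
        )
        spend[best] += step
        remaining -= step
    return spend

print(allocate(100_000.0))
```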
Solving the micromarketing challenges of the Information Age
We live in the Age of Information, so we are told. Never before has so much raw data existed bearing testament to every pulsebeat of human commerce, every touchpoint between a customer and a good or service. The problem for decision-makers, according to the conventional wisdom, is Information Overload – volumes more data to analyze than the human brain can easily digest. But it is not that simple – there are deeper challenges below the surface.
Information is not always where you need it
While the conventional wisdom is right in the aggregate, the lush and dense information rainforest starts to turn remarkably arid and sparse as you drill down into the nuanced segments of your demand environment. At the micromarket level, infrequent transactional activity in the long tail of customers and SKUs yields little insight to inform decision making. Managers thus face challenges that go well beyond the simplistic construct of TMI (too much information). They need tools for managing the real information problems in their micromarkets. These tools need to address head-on the challenges posed by what we call the 4-Cs.
The other day I conducted a little thought exercise, and it brought me back to a question that often comes up in my line of work: the fleetingness of brand loyalty in the age of marketing message saturation and the daunting challenge for brand managers and other decision-makers whose livelihoods depend on the existence of such loyalty among their customers. Happily for those who walk the brand beat, there is a ray of hope in this otherwise cautionary tale.
Olay, Nivea, Neutrogena and L’Oreal are all established beauty products brands with a broad array of medium-priced product lines and multiple product offerings in each. More to the point, for purposes of this thought exercise of mine, is that each of them offers a range of good quality facial cleansers, a product I buy on average about once every two months. The exercise was to determine what, if any, brand loyalty existed in my facial cleanser purchases over the last 2 years. The answer appeared to be: none. Nada. At some point over those past 24 months and (give or take) 12 purchases, my domestic shelf space has been occupied by at least one representative facial cleanser SKU from each of those brands. I wondered why this was the case. And then I remembered that it was not always thus. Long ago (more years than I care to disclose) there was a rather splendid product by Neutrogena called the Facial Cleansing Bar.
“Burn the mathematics,” wrote economist Alfred Marshall in a letter to a friend, musing about the proper role of mathematics and scientific inquiry in the field of economics. That century-old cogitation would seem to be a prêt-à-porter sound bite for these latter days of the 21st century’s first decade – a time in which the mathematical infrastructure that underpins longstanding economic and financial theories stands accused of all manner of malfeasance, particularly given its presumed role in the decade’s signature economic event – the financial market meltdown of 2008. The logic behind the accusation goes roughly thus: More complex (but not necessarily more “accurate”) models allow for more complex instruments to be created. Increased complexity means it takes more time to process and then fully comprehend what the numbers may be telling you. At the same time, though, technology allows buy and sell orders to be executed almost instantaneously through electronic trading systems. Time is of the essence, and ponderously complex computations simply won’t do. A seemingly elegant (and fast, and commercially viable) shortcut is discovered and becomes the currency of the day. The models’ outputs come to be trusted blindly simply because there is no time to question them (and too much money to be made by using them). The impenetrable Greek letters obfuscate the sensitivity of the models to changes in important assumptions – which is fine for a few years because those assumptions (e.g. rising housing prices) don’t change – but then all of a sudden they do. The models start losing more money than they make. Then the chasm widens further as the high levels of leverage in the system make themselves felt. The losses accelerate dramatically, wiping out years of profits in just a few months. Burn the mathematics, indeed.
But let’s take a different look at this apparent tight coupling of mathematics and dire outcomes. In recent correspondence, an author who has been widely published on the subject of Wall Street’s use of mathematical models offered us an interesting opinion. His point was that the problem with the models was not so much their complexity, but rather that they were models in the first place. His argument was that you can’t ever perfectly hedge model risk. Now, I agree with that observation: a model by definition selects some aspects of reality to represent and omits others, and the choice of what to include and what to omit is subject to human error, therefore fallible and not perfectly hedgeable. But I take issue with the idea that the fault lies in the existence of the models themselves. Models can be misused – I think that much is clear. But the notion that models are all doomed to failure obscures a deeper truth about the goals of predictive modeling; namely, that you can seek either to reduce the world or to truly explain it. By trying to elegantly reduce the world to as few predictor variables as possible, you are more likely to be sowing the seeds of future failure, because complexity and the actual drivers of outcomes are taken out of the equations to make them more solvable (or perhaps more sellable, as in the case of the Gaussian copula function that was behind Wall Street’s demise, as we discussed in a previous posting, “You Can’t Punt Away the Dimensionality Curse”). Predictive modelers don’t have to go down that road, however: they can also set out with the goal not of reducing an entire system to a single neat, tractable equation, but of quantifying and explaining, to the fullest extent possible, all of the relationships that dictate outcomes. Tractability and computability are things to address later in the process, through technological means, but they should not dictate the fundamental mathematical approach at the outset.
A single mathematical formula brought ruin to the global financial markets. What happened was not a failure of quantitative methods per se but rather a lesson in the perils of ignoring real-world complexities in favor of deceptively elegant shortcuts.
The fault, dear investor, lies not in the head of AIG’s Financial Products Group or members of the Bear Stearns Investment Committee or any other anthropomorphic entity: rather it was a single mathematical formula that apparently felled the pillars of global finance. That’s the gist of a recent article in the 17.03 edition of Wired magazine entitled “Recipe for Disaster: The Formula that Killed Wall Street” by Felix Salmon. The formula, known as a Gaussian copula function (when was the last time that term was a fixture of the public discourse?), purported to solve the mother of all securitization problems: establishing default correlation factors between the many constituents of the pools of mortgages and other credit obligations whose cash flows served as the underpinning for the complex derivative securities known as collateralized debt obligations (CDOs). Awareness of the potential in this arcane formula helped power the CDO market to some $4.7 trillion in volume over the course of the housing bubble years of this decade. As the Wired article explains, the formula’s explosive commercial viability rested on a simple sleight of hand. Rather than modeling out the default correlation implications of pools of thousands upon thousands of individual mortgage obligations – an extremely complex undertaking requiring powerful algorithms and massively robust computational processing technology – the CDO market’s Wall Street practitioners used a shortcut that appeared elegant but proved deadly: using the market price of credit default swaps (CDSs) as a proxy for actual historical default data.
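For readers who have never seen one, here is a minimal sketch of the standard one-factor Gaussian copula construction for correlated defaults: each obligor’s latent variable mixes a common (systemic) factor with an idiosyncratic shock, and a default occurs when that variable falls below the threshold implied by the obligor’s marginal default probability. The parameter values are placeholders, and this is the textbook construction, not any firm’s production model.

```python
# Minimal sketch of a one-factor Gaussian copula used to simulate
# correlated defaults in a pool. Parameter values are illustrative only.
import numpy as np
from scipy.stats import norm

def simulate_pool_defaults(n_names=100, p_default=0.02, rho=0.3,
                           n_scenarios=10_000, seed=0):
    """Each name's latent variable mixes a common factor with an
    idiosyncratic shock; the name defaults when the latent variable
    falls below the threshold implied by its default probability."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(p_default)                     # N^-1(p_default)
    common = rng.standard_normal((n_scenarios, 1))      # shared factor
    idio = rng.standard_normal((n_scenarios, n_names))  # name-specific shocks
    latent = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idio
    defaults = latent < threshold                       # scenarios x names
    return defaults.sum(axis=1)                         # defaults per scenario

counts = simulate_pool_defaults()
print("mean defaults per pool:", counts.mean())
print("99th-percentile defaults:", np.percentile(counts, 99))
```

Even in this toy form, the output is acutely sensitive to the single correlation parameter rho, which is exactly the quantity the market chose to infer from CDS prices rather than estimate from historical experience.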
What happened in essence was that the CDO market ran up against one of the most challenging of quantitative modeling problems: the dimensionality curse. This refers to what happens in complex environments where numerous variables interact with each other and all of the resulting combinatorial possibilities influence economic value. Each incremental variable added to the pool exerts an exponential effect on the number of possible outcomes. Think of a simple case: if each variable can take just two values, a pool of two variables yields four possible combinations; add a third variable and the count doubles to eight, and so on. In an environment like pools of thousands of mortgage obligations or credit card receivables, influenced by a bevy of macro- and micro-economic, behavioral, seasonal and other random factors, there are literally billions of combinatorial outcomes that could affect the incidence, magnitude and frequency of default events and hence the price of the CDOs whose economic value derives from those pools. Getting to the right answers – and doing so with enough speed to satisfy the blistering pace of 24/7 investment markets – is a daunting challenge, to say the least. So when David X. Li, a quantitative analyst at JPMorgan Chase, posited the use of CDS prices as a proxy for historical data in a 2000 paper published in the Journal of Fixed Income, the CDO market rejoiced and basically punted away the dimensionality curse by adopting this shortcut. The reasoning and the assumptions employed proved to be flawed, and the disastrous results are entirely visible to the naked eye in all their graphic detail.
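The arithmetic in that simple case is easy to verify: with binary variables the number of joint outcomes doubles with each variable added, which is what makes pools of thousands of obligations so unforgiving. A few lines of Python make the point.

```python
# Joint outcomes for k binary variables: 2**k combinations.
from itertools import product

for k in (2, 3, 10, 30):
    # Enumerate explicitly for small k; count analytically for large k.
    count = len(list(product([0, 1], repeat=k))) if k <= 10 else 2 ** k
    print(f"{k:>2} variables -> {count:,} possible outcome combinations")
```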
In quantitative methods as in life there are no free lunches. You can’t simply punt away the dimensionality curse – you have to embrace it and try to achieve mastery over it using all the knowledge and technology tools at your disposal. At Sentrana we deal with dimensionality curse problems every day – the demand markets for the products and services our clients sell are highly complex environments: tens of thousands of products for thousands of customers in hundreds of locations reachable by any number of marketing vehicles and sales channels. Modeling these environments is not for the faint-hearted, but the problems are not insoluble. The computational technology does exist, as does the modeling science. The critical ingredient is the will and determination of those who practice quantitative methods in business to forgo the easy outs and stay focused on solving the real problems, however daunting.
Perhaps the field of quantitative methods needs a variation of the medical profession’s Hippocratic Oath: First of all, do no harm. Clearly the Wall Street experiment egregiously failed that standard. Let’s hope that the next time some arcane mathematical formula figures into the cultural Zeitgeist it will be for better, not for worse.