The Price Sensitivity Meter (PSM) is a market research technique for determining consumer price preferences. It was introduced in 1976 by Dutch economist Peter van Westendorp. The technique has been used by a wide variety of researchers in the market research industry, where it has been a staple technique for addressing pricing questions for decades. It has historically been promoted by many professional market research associations in their training and professional development programs, and descriptions of it can easily be found on many market research websites.
The assumption underlying PSM is that respondents can envision a pricing landscape and that price is an intrinsic measure of value or utility. Participants in a PSM exercise are asked to identify the price points at which they ascribe a particular value to the product or service under study. PSM claims to capture the extent to which a product has an inherent value denoted by price.
The traditional PSM approach asks four price-related questions, which are then evaluated as a series of four cumulative distributions, one distribution for each question. The standard question formats can vary, but generally take the following form:
- At what price would you consider the product to be so expensive that you would not consider buying it? (Too expensive)
- At what price would you consider the product to be priced so low that you would feel the quality couldn’t be very good? (Too cheap)
- At what price would you consider the product starting to get expensive, so that it is not out of the question, but you would have to give some thought to buying it? (Expensive/High Side)
- At what price would you consider the product to be a bargain—a great buy for the money? (Cheap/Good Value)
The cumulative frequencies are plotted, and PSM advocates ascribe interpretive significance to the points where the cumulative frequency curves for the four price categories intersect. Note that the standard method requires that two of the four cumulative frequencies be inverted in order for four intersection points to be possible. Conventional practice inverts the cumulative frequencies for "too cheap" and "cheap/good value".[1]
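The construction of the four curves, including the conventional inversion of the two "cheap" distributions, can be sketched as follows. All respondent answers below are hypothetical, made up purely for illustration:

```python
import numpy as np

# Hypothetical answers (in dollars) from ten respondents to each of the
# four PSM questions; the numbers are illustrative, not real survey data.
too_cheap     = np.array([5, 8, 10, 10, 12, 15, 15, 18, 20, 25])
cheap         = np.array([10, 12, 15, 15, 18, 20, 20, 22, 25, 30])
expensive     = np.array([20, 25, 25, 30, 30, 35, 35, 40, 45, 50])
too_expensive = np.array([30, 35, 40, 40, 45, 50, 55, 55, 60, 70])

price_grid = np.arange(5, 71)

# "Expensive" and "too expensive" accumulate upward: the share of
# respondents whose stated threshold is at or below each candidate price.
f_expensive     = np.array([(expensive <= p).mean() for p in price_grid])
f_too_expensive = np.array([(too_expensive <= p).mean() for p in price_grid])

# "Cheap" and "too cheap" are inverted, per convention: the share whose
# stated threshold is at or ABOVE the price, so that all four curves
# can cross one another.
f_cheap     = np.array([(cheap >= p).mean() for p in price_grid])
f_too_cheap = np.array([(too_cheap >= p).mean() for p in price_grid])
```

Plotting these four arrays against `price_grid` reproduces the familiar PSM chart of two rising and two falling curves.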
The general explanation of the intersecting cumulative frequencies varies. The crossing of the "too cheap" and "expensive" lines is commonly taken as the lower bound of an acceptable price range; some describe this as the "point of marginal cheapness" or PMC. Similarly, the intersection of the "too expensive" and "cheap" lines can be viewed as the upper bound of an acceptable price range; this point is described as the "point of marginal expensiveness" or PME.
The intersection on which there is generally more agreement is the point at which the "expensive" line crosses the "cheap" line. This is described as the "indifference price point" or IPP: the price at which an equal number of respondents rate the price point as "cheap" or "expensive".
Finally, the intersection of the "too cheap" and "too expensive" lines represents an "optimal price point" or OPP. This is the point at which an equal number of respondents describe the price as exceeding either their upper or lower limits. Optimal in this sense refers to the fact that there is an equal tradeoff in extreme sensitivities to the price at both ends of the price spectrum.
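Each of these four points (PMC, PME, IPP, OPP) is simply a crossing of two of the cumulative curves. A minimal sketch of how such a crossing can be located numerically, here via a sign change of the difference refined by linear interpolation, using hypothetical curve values:

```python
import numpy as np

# Illustrative cumulative curves on a shared price grid; the shapes and
# numbers are hypothetical, not taken from any real survey.
grid      = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
cheap_inv = np.array([0.9, 0.6, 0.3, 0.1, 0.0])  # inverted "cheap" curve
expensive = np.array([0.0, 0.2, 0.5, 0.8, 1.0])  # share rating each price "expensive"

def crossing_price(grid, curve_a, curve_b):
    """Price at which two curves cross, found from the first sign change
    of their difference and refined by linear interpolation."""
    diff = curve_a - curve_b
    idx = np.where(np.diff(np.sign(diff)) != 0)[0]
    if len(idx) == 0:
        return None  # the curves never cross on this grid
    i = idx[0]
    t = diff[i] / (diff[i] - diff[i + 1])  # fractional position of the crossing
    return grid[i] + t * (grid[i + 1] - grid[i])

# The IPP is the crossing of the (inverted) "cheap" and "expensive" curves;
# the same helper locates the OPP, PMC, and PME from the other curve pairs.
ipp = crossing_price(grid, cheap_inv, expensive)
```

The same `crossing_price` call applied to the "too cheap"/"too expensive" pair would yield the OPP, and so on for the two bounds.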
While van Westendorp himself addressed only pricing, not the demand estimation problem, three important extensions to the technique, Newton/Miller/Smith (NMS), Martin Rayner Interpolation (MRI), and Roll/Achterberg (RA), were subsequently developed to address this problem and estimate demand.
Newton/Miller/Smith assume no purchases on the "too expensive" or "too cheap" curves, adding instead two further questions about the probability of purchase at the expensive and cheap prices, for example:
- At the cheap/good-value price, how likely are you to purchase the product in the next six months? Scale: 1 (unlikely) to 5 (very likely).
- At the expensive price, how likely are you to purchase the product in the next six months? Scale: 1 (unlikely) to 5 (very likely).
Combining the price responses with the probability responses allows drawing a revenue curve to estimate the price point delivering the maximum revenue.
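A minimal sketch of this NMS-style combination follows. The respondent data, the mapping from the 1-to-5 likelihood scale to a purchase probability, and the linear interpolation between each respondent's cheap and expensive prices are all simplifying assumptions for illustration; real NMS implementations calibrate these differently:

```python
import numpy as np

# Hypothetical respondents: (cheap price, expensive price,
# likelihood at cheap, likelihood at expensive), likelihoods on a 1-5 scale.
respondents = [
    (10, 30, 5, 2),
    (15, 35, 4, 3),
    (12, 25, 5, 1),
    (20, 40, 3, 2),
    (18, 30, 4, 4),
]

# One possible (assumed, not canonical) calibration of the 1-5 scale
# to a purchase probability.
scale_to_prob = {1: 0.0, 2: 0.1, 3: 0.3, 4: 0.5, 5: 0.7}

def expected_demand(price):
    """Average purchase probability across respondents at `price`: full
    probability at or below the cheap price, declining linearly toward the
    expensive-price probability, and zero above the expensive price (a
    simplification of the NMS no-purchase assumption at the extremes)."""
    probs = []
    for cheap, expensive, l_cheap, l_exp in respondents:
        p_cheap, p_exp = scale_to_prob[l_cheap], scale_to_prob[l_exp]
        if price <= cheap:
            probs.append(p_cheap)
        elif price >= expensive:
            probs.append(0.0)
        else:
            frac = (price - cheap) / (expensive - cheap)
            probs.append(p_cheap + frac * (p_exp - p_cheap))
    return np.mean(probs)

# Revenue curve: price times expected demand; its peak is the estimated
# revenue-maximising price point.
price_grid = np.arange(10, 41)
revenue = np.array([p * expected_demand(p) for p in price_grid])
best_price = price_grid[np.argmax(revenue)]
```

The peak of `revenue` gives the price point delivering the maximum revenue under these assumed calibrations; sensitivity to the scale-to-probability mapping is worth checking in practice.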