Common Misinterpretations of E-commerce Analytics Data

This article delves into the frequent missteps e-commerce businesses make when interpreting user activity data.

On an average day, a modestly sized e-commerce platform accumulates over 200 GB of data from users browsing and placing orders. However, the sheer volume of data isn't as crucial as the insights derived from it.

This article was written in collaboration with the marketing managers and analysts of one of the successful e-commerce projects we've developed, who generously agreed to share their experience. They asked us to keep the actual product name confidential, so let's refer to this store as ExampleShopName. The discussion revolves around common mistakes that are easy to make when analyzing data in retail and e-commerce. Hopefully, it will give you a fresh perspective on your product or an insight into the questions product analysts deal with in our domain. We'll avoid delving into the numbers, focusing instead on the underlying reasons.

Beyond Averaging: The Myth of the "Average" Customer

Historically, we believed that ExampleShopName's customers shared a common trait: shopping provided them an endorphin rush. New clothing was their source of inspiration, making them feel more fashionable and unique.

We operated within this paradigm, targeting such customers, seeking them out, and figuring out how to engage them with our advertising. We prioritized the latest collections and new arrivals in our catalog rankings, successfully attracting these customers. However, in doing so, we overlooked a significant portion of potential buyers.

The issue lay in our approach to data: we viewed it in aggregate, focusing on overall monthly or daily figures. We tracked the number of orders placed, the average transaction value, and purchase frequency (say, one purchase every three months). Essentially, we were looking at the average characteristics of our "average" customer.

This approach gave us a glimpse of only a small fraction of our customer base. Diving deeper and segmenting it revealed a diverse range of behaviors: some customers placed three orders a day, while others only made a purchase every six months. The average figures we relied on didn't tell us much about our actual customers.

Now, we've identified 13 major customer segments, each with distinct purchasing habits, frequency, and product expectations. What matters to one segment, like discount size, may differ from what another values, such as next-day delivery. By recognizing these differences, we now tailor our offerings to meet the specific needs of each customer, moving beyond catering to just the "average" user.
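
To make the contrast concrete, here's a minimal sketch in Python of how one aggregate figure hides very different segments. The column names, thresholds, and values are all illustrative, not ExampleShopName's actual data:

```python
import pandas as pd

# Hypothetical order log; every value here is invented for illustration.
orders = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 3, 3, 4, 4, 4, 4],
    "order_value": [40, 55, 38, 220, 15, 18, 95, 80, 70, 90],
})

per_customer = orders.groupby("customer_id").agg(
    order_count=("order_value", "size"),
    avg_order_value=("order_value", "mean"),
)

# One number for the "average" customer...
print("overall average order value:", orders["order_value"].mean())

# ...versus segments with clearly different behavior.
per_customer["segment"] = pd.cut(
    per_customer["order_count"],
    bins=[0, 1, 3, float("inf")],
    labels=["one-off", "occasional", "frequent"],
)
print(per_customer.groupby("segment", observed=True)["avg_order_value"].mean())
```

A real segmentation would use recency, frequency, and monetary value over the full order history, but even this toy version shows why the single average told us so little.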

Survivorship Bias: Not Everyone Is Saved by Dolphins

Having addressed the fallacy of the "average user," we now meticulously explore various customer segments. However, a comprehensive perspective is essential: focusing solely on our existing clientele introduces the risk of survivorship bias.

A classic illustration of survivorship bias involves dolphins. Many have heard tales of dolphins aiding drowning individuals by pushing them to the surface with their noses—a playful behavior among these creatures. It's plausible that there are instances where dolphins, instead of rescuing, inadvertently drag people deeper underwater. Such cases go unreported; we only hear from those grateful survivors, leaving the fate of others unknown.

In business, this bias manifests when we analyze only our "survivors": the customers we've successfully attracted, while neglecting the broader potential market. For instance, if the majority of ExampleShopName's sales come from clothing and footwear, it might seem logical to conclude that our clientele has no interest in sports equipment, and to decide against expanding that category because it's currently less profitable than apparel and shoes. Following this logic, stocking up exclusively on clothing might appear to be the most lucrative strategy.

However, the reality is that products like dumbbells and yoga mats sell less not because of a lack of interest, but due to our neglect of this segment. It's unlikely that someone would visit ExampleShopName for fitness equipment, given they're accustomed to purchasing these items elsewhere.

Yet, the market for such goods may be larger than anticipated. Our failure to invest time, conduct market analysis, or understand consumer behavior and demand elasticity for these products has limited our perspective. The unit economics of sports equipment differ markedly from apparel—customers don't buy ten pairs of dumbbells to return the ones that don't fit, as they might with clothing.

Identifying such biases and distortions is where customer research becomes invaluable. By conducting surveys or focus groups, we ensure inclusion not just of our existing audience but also of those who have never purchased from ExampleShopName. This approach yields a more accurate representation of the overall market.
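
A toy simulation makes the distortion easy to quantify. Every number below is invented; the only assumption is that shoppers interested in sports gear rarely became our customers in the first place:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical market: 30% of all shoppers would buy sports equipment.
wants_sports_gear = rng.random(n) < 0.30

# Assumed conversion into "our customer": far lower for shoppers who
# are used to buying their sports gear elsewhere.
p_is_customer = np.where(wants_sports_gear, 0.02, 0.20)
is_customer = rng.random(n) < p_is_customer

print("true market interest:        ", wants_sports_gear.mean())
print("interest among our customers:", wants_sports_gear[is_customer].mean())
```

Measured only on existing customers, interest comes out around 4%, while the true market figure in this setup is 30%. That gap is exactly what a survey reaching non-customers is meant to expose.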

Causation vs. Correlation: Don't Mix Them Up

Observing two phenomena occurring simultaneously doesn't necessarily imply a cause-and-effect relationship between them. Take, for example, the rise in ice cream sales during summer, which coincides with an increase in sunburn incidents. It would be erroneous to conclude that banning ice cream would reduce sunburns.
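
The pattern is easy to reproduce in a quick simulation where a hidden common cause (temperature) drives both series; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Temperature is the hidden common cause of both series.
temperature = rng.normal(20, 8, size=365)

# Neither series depends on the other, only on temperature.
ice_cream_sales = 50 + 3.0 * temperature + rng.normal(0, 10, size=365)
sunburn_cases = 5 + 0.8 * temperature + rng.normal(0, 5, size=365)

r = np.corrcoef(ice_cream_sales, sunburn_cases)[0, 1]
print(f"correlation: {r:.2f}")  # strongly positive, with no causal link
```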

In our own work, we often make similar mistakes. We notice an uptick in purchases while a specific metric rises at the same time. When the metric falls, so do the sales, tempting us to believe the relationship is causal and that we've found a lever for influencing customer behavior.

Here's a scenario from our experience at ExampleShopName. The store offers a try-before-you-buy delivery option, allowing people to order multiple sizes and styles, keeping what fits and returning the rest. However, orders without this option see significantly fewer returns.

So, should we eliminate the try-before-you-buy feature to reduce returns and boost profits?

Not exactly. The no-try option was predominantly chosen by customers confident in their purchase, such as those buying hygiene products or accessories. Eliminating the try-on service would deter many users from shopping with us, ultimately reducing our profits.
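
A sketch with made-up order counts shows how the aggregate comparison misled us: the product type drives both the choice of delivery option and the return rate, so comparing options in aggregate mixes two very different kinds of purchases:

```python
import pandas as pd

# Invented counts. Apparel shoppers need to try sizes on; hygiene
# shoppers already know exactly what they're getting.
orders = pd.DataFrame([
    ("apparel", "try_on", 8000, 3200),
    ("apparel", "no_try",  500,  190),
    ("hygiene", "try_on",  200,    4),
    ("hygiene", "no_try", 6000,  120),
], columns=["product", "option", "orders", "returns"])

by_option = orders.groupby("option")[["orders", "returns"]].sum()
print(by_option["returns"] / by_option["orders"])
# In aggregate, no_try looks dramatically "better"...

by_both = orders.groupby(["product", "option"])[["orders", "returns"]].sum()
print(by_both["returns"] / by_both["orders"])
# ...but within each product category the gap nearly disappears.
```

The delivery option isn't causing the returns; the kind of product, and the customer's confidence in it, drives both the option choice and the likelihood of a return.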

Obvious Isn't Always Effective

Every hypothesis needs testing and critical evaluation, even those that seem logical and beneficial at first glance.

At ExampleShopName, we periodically introduce dynamic filters like "Picnic Attire" or "Beach Vacation Essentials." These seasonal filters sound reasonable, right? Yet, despite years of testing across various themes, they fail to take off.

Fortunately, these filters don't negatively impact our metrics, so we revisit the idea and keep experimenting at colleagues' requests. The real problem arises when a hypothesis consumes time, resources, and money, possibly at the expense of other opportunities, only to end up hurting the metrics.

A prime example of how an intuitive idea can hurt profits is the promotion of seasonal items. For instance, sales of t-shirts, shorts, and flip-flops increase in the summer. So, should we prioritize these items in our catalog and display them first, expecting a sales boost?

Not necessarily. If customers are actively seeking these items, they'll purchase them regardless. However, if we stop showcasing non-seasonal items like pants and bags, their sales will likely decline as customers simply forget about them.

Ramp Up, Ramp Down: Wait for Real Results

Changes in business metrics can sometimes take a while to manifest.

In new cities, we launch delivery points following a specific scheme. Initially, we start with courier delivery—through partners or our own services. It's expensive, but it allows us to gauge demand. Once we accumulate a critical mass of orders, we open a pickup point.

City residents, already accustomed to courier delivery, may take some time to learn about the new pickup point and decide on making another order. Meanwhile, the expenses for operating the new point start accruing immediately after its opening.

Had we not anticipated a temporary dip in profits following an opening, our expansion into many regions would have been impossible.
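
A back-of-the-envelope model shows the shape of that dip. All figures below are hypothetical; the point is only that fixed costs start on day one while orders ramp up gradually:

```python
# All figures are hypothetical, for illustration only.
fixed_cost_per_month = 4000   # rent and staff, accruing from day one
margin_per_order = 8          # contribution margin per order
monthly_orders = [100, 200, 350, 500, 650, 800, 900, 950]  # adoption ramp

cumulative = 0
for month, orders in enumerate(monthly_orders, start=1):
    cumulative += orders * margin_per_order - fixed_cost_per_month
    print(f"month {month}: cumulative profit {cumulative:+}")

# Judging the point by month two or three would shut down a location
# that, in this toy model, breaks even by month seven.
```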

The same effect appears when testing certain features. We make changes that affect repeat purchases or how often users come back. To see how such changes affect customers who buy once every six months, we need to run very long tests.
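
For a sense of scale, here's a standard two-proportion sample-size estimate applied to a slow metric like the six-month repeat-purchase rate; the baseline rate and the lift we want to detect are hypothetical:

```python
import math

# Hypothetical inputs: 10% of users buy again within six months, and we
# want to detect an improvement to 11% (a 10% relative lift).
p1, p2 = 0.10, 0.11
z_alpha, z_beta = 1.96, 0.84   # two-sided alpha = 0.05, power = 0.80

p_bar = (p1 + p2) / 2
n_per_arm = (
    (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
     + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
) / (p2 - p1) ** 2

print(f"~{math.ceil(n_per_arm):,} users per arm")
# On top of the sample size, each user's outcome is only known six
# months after they enter the test, so the experiment takes at least
# that long no matter how much traffic we have.
```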

Additionally, some changes may initially boost performance but eventually lead to a decrease in revenue—or vice versa. We're constantly learning to predict and analyze such situations.

Analysis Paralysis: Waiting for Data Can Be the Biggest Mistake

ExampleShopName proudly labels itself a data-driven company, with our own A/B testing platform, several analytics teams, and a user experience research team. All product decisions are backed by experiments. However, sometimes the collection and analysis of data can be the biggest mistake.

Imagine: the site's conversion rate suddenly drops. We have several hypotheses about the cause but can't gather all available information or conduct a comprehensive study because the issue needs an immediate resolution.

Therefore, we run quick studies and make decisions based on what we already know about the site and its users. We implement changes based on hypotheses without waiting for a complete picture. If the changes improve conversion, we keep them. If not, we review the results and try different approaches.

It's possible to analyze the reasons for and consequences of a conversion drop endlessly. However, it's more practical to propose hypotheses, select the most plausible one based on our experience and conversations with a small sample of users, and launch it in an A/B test.

Sometimes we lack the time and means to gather data, assess risks, and calculate profits. We have to rely solely on our experience and knowledge. Moreover, at a certain point, further data collection becomes costlier than making an incorrect decision. Then, it's better to make a mistake than do nothing.

Incidentally, this is why one of the core principles at ExampleShopName is not to fear mistakes. Continuous communication and sharing experiences within the team help us avoid repeating the same errors.

Need help setting up user action analytics for your product?

We hope the examples shared in this article are useful not only to us. If you have any questions, or if you'd like to integrate advanced user behavior analytics into your product, feel free to book a free call with our CTO or leave your contact details on our website, and we'll get in touch with you!

Or just drop a message: contact@idealogic.dev
