
How We Use Machine Learning at Olapic

Modern consumers expect more from brands than ever before. As a result of digital proliferation and advances in data science, today’s brands are tasked with delivering consistent, personalized, high-quality experiences across a growing set of relevant channels. Of course, this presents quite a challenge for brand marketers, who must develop enough on-brand content to address their various audiences, and do so both quickly and at scale.

This is called the “Content Crunch,” and at Olapic, we believe that user-generated, or “earned,” content is a potential solution to help brands drive both engagement and performance from their audiences. While user content presents an enormous opportunity for brands, collecting and moderating it can become entirely overwhelming without a proper process in place. Specifically, there are three main challenges that can impede progress, especially for brands with a high volume of assets:

  1. How do brands filter out off-brand content and surface imagery and videos that are most relevant?
  2. How can brands add context and relevance (the right products and the right metadata) to their content assets?
  3. How can brands identify content that is most likely to outperform other assets based on a variety of factors?

Based on our experience working with hundreds of the world’s top brands over several years, we’ve developed several machine-learning solutions to answer these questions for our clients and guide them through this process.

Filtering off-brand content

Photosafe

Marketing teams are often looking to amass a set of on-brand content that matches their desired aesthetic and gives their campaigns a consistent look and feel. While UGC can help scale this content growth, separating high-quality content from the rest of the noise can be challenging. To identify and surface on-brand content, we begin each engagement by working with our clients to gather a set of brand guidelines. We then apply those guidelines, along with training data from our moderation experts, and compare them with guidelines from similar brands to identify trends or standard exclusions (such as “not-safe-for-work” content or imagery from resellers). This mix of automation and human judgment is referred to in the industry as “human in the loop,” and it helps us realize far greater performance from our algorithms.

When these filters work within our platform, they take the burden off of brands and reduce the hours needed to moderate content. What is it that you don’t want in your pictures? Selfies? Text? Collages? Alcohol? Kids? Pets? You name it.
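To make the “human in the loop” idea concrete, here is a minimal sketch of how such a moderation queue might route assets. The exclusion labels, thresholds, and the `Asset`/`route` names are all hypothetical; the label scores are assumed to come from an upstream image classifier that is not shown.

```python
from dataclasses import dataclass

# Hypothetical brand exclusions gathered from guidelines; illustrative only.
BRAND_EXCLUSIONS = {"nsfw", "collage", "reseller_watermark"}

@dataclass
class Asset:
    asset_id: str
    label_scores: dict  # label -> model confidence in [0, 1]

def route(asset, reject_above=0.9, review_below=0.6):
    """Auto-reject confident exclusion hits, auto-approve clean assets,
    and send borderline cases to human moderators (the 'loop')."""
    hits = {l: s for l, s in asset.label_scores.items() if l in BRAND_EXCLUSIONS}
    if any(s >= reject_above for s in hits.values()):
        return "reject"
    if any(s >= review_below for s in hits.values()):
        return "human_review"
    return "approve"
```

The moderators’ decisions on the borderline cases can then be fed back as training data, which is what makes the loop pay off over time.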

Caption Translation

While reviewing images to determine their viability is a critical step, there are further considerations a brand must make to identify the best content. Specifically, brands must review the captions associated with each image to ensure the complete asset is a good fit for the brand. For brands with large international audiences, translating captions from various languages can be quite a challenge. At Olapic, we’ve incorporated a machine-learning solution that translates captions automatically for more efficient moderation.
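The moderation step described above can be sketched as a simple pipeline that pairs each caption with an English translation. The `translate` backend below is a stand-in (a tiny phrase table, purely for illustration); a real system would call a translation model or service.

```python
def translate(text, target="en"):
    """Stand-in translation backend; a real system would call an
    ML translation model here. The phrase table is illustrative."""
    phrase_table = {"me encanta esta chaqueta": "I love this jacket"}
    return phrase_table.get(text.lower(), text)

def prepare_for_moderation(captions, target="en"):
    """Attach a translation to each caption so moderators can
    review every asset in a single language."""
    return [{"original": c, "translated": translate(c, target)} for c in captions]
```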

Adding context and relevance

Photolens

Once we’ve identified a set of on-brand content, and gained permission from the creators, the next step is to map products and tags to each asset. We work with our clients to ingest product feeds (in the case of e-commerce brands) or location feeds (in the case of travel/hospitality brands), and merge that information with training data from our moderators, who manually tag photos to ensure accuracy. Again, this human-in-the-loop process can be intensive, but it is extremely valuable, as it enables brands to better activate their content, helping take consumers from a point of inspiration to a point of purchase with less friction. Our Photolens tool uses computer vision to identify the right objects in a picture, vastly simplifying the process of tagging pictures to the right products in your catalog.
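As a rough illustration of the matching step, the sketch below maps object labels to catalog products by keyword overlap. The labels are assumed to come from a computer-vision model upstream (not shown), and the feed entries, SKUs, and `min_overlap` threshold are invented for the example; Photolens itself is far more sophisticated.

```python
# Hypothetical product feed, as might be ingested from an e-commerce client.
PRODUCT_FEED = [
    {"sku": "JKT-01", "title": "Red Leather Jacket",
     "keywords": {"jacket", "leather", "red"}},
    {"sku": "SNK-42", "title": "Canvas Sneakers",
     "keywords": {"sneaker", "canvas", "shoe"}},
]

def tag_products(detected_labels, feed=PRODUCT_FEED, min_overlap=2):
    """Return SKUs whose keywords overlap the detected object labels
    by at least min_overlap terms."""
    labels = set(detected_labels)
    return [p["sku"] for p in feed if len(labels & p["keywords"]) >= min_overlap]
```

Ambiguous matches (too little overlap, or several candidate SKUs) are exactly the cases a human-in-the-loop workflow would hand to moderators.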

Image labels

Apart from product-specific information, it helps to add greater context to images being deployed by brands. This can include information such as the presence of animals, landscapes, and colors, enabling our client brands to create custom collections and segment their media libraries more effectively.

The addition of this metadata gives our customers the ability to search for pictures with specific content, greatly improving the search experience. Do you need some cats to prepare your promotion for National Cat Day? You got it.
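A label-based search like the National Cat Day example can be sketched in a few lines. The library contents and label names below are invented; the labels are assumed to be produced by an image-labeling model upstream.

```python
# Hypothetical media library: asset id -> set of labels from an
# image-labeling model (illustrative data only).
LIBRARY = {
    "img_001": {"cat", "indoor", "orange"},
    "img_002": {"dog", "beach", "blue"},
    "img_003": {"cat", "outdoor", "green"},
}

def search(required_labels, library=LIBRARY):
    """Return asset ids whose label set contains all required labels."""
    want = set(required_labels)
    return sorted(a for a, labels in library.items() if want <= labels)
```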

Identifying the best content

Photorank

Finally, once the content has been curated and tagged, we send it through one additional process powered by machine learning. Our Photorank algorithm has been trained on more than 20 billion unique data points, which inform how content impacts conversion rates across a variety of factors. As a result, it can analyze complex asset characteristics and recommend which content is most likely to perform.

This process yields a powerful recommendation set, sorting content by predicted performance to aid in marketers’ placement strategies. Once sorted by Photorank in the Olapic platform, content is of consistently high quality.
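In spirit, performance ranking reduces to scoring each asset’s features with a learned model and sorting best-first. The sketch below uses a toy linear model; the feature names and weights are invented for illustration, and Photorank’s actual model and training data are not public.

```python
# Illustrative learned weights; a real system would fit these from
# conversion data rather than hard-code them.
WEIGHTS = {"brightness": 0.2, "has_product": 0.5, "engagement_rate": 0.3}

def predicted_score(features):
    """Toy linear predictor of an asset's performance."""
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in features.items())

def rank_assets(assets):
    """Sort assets (dicts with a 'features' mapping) best-first."""
    return sorted(assets, key=lambda a: predicted_score(a["features"]), reverse=True)
```

The resulting order is what would feed a placement strategy: the top of the list is the content predicted most likely to convert.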


The compelling value in our machine-learning program is that it can help brands save time and make more intelligent decisions about their content strategy. Specifically, Olapic can help surface on-brand content most likely to perform, and quickly identify products featured within images to shorten the path to purchase. We are very proud of the work our engineering teams have done to build a platform utilizing best-in-class machine-learning strategies. Additionally, we believe that the combination of this technology and human experience is what separates our services from the rest of the visual marketing space. As technology continues to evolve, we are focused on adapting alongside it in order to continue delivering value to the brands we work with.
