Shorter and Sweeter: How Alibaba is Optimizing Product Titles for Mobile

This article is part of the Academic Alibaba series and is taken from the paper entitled “Multi-Source Pointer Network for Product Title Summarization” by Fei Sun, Peng Jiang, Hanxiao Sun, Changhua Pei, Wenwu Ou, and Xiaobo Wang, and accepted by CIKM 2018. The full paper can be read here.

For e-commerce platforms like Alibaba, the surging popularity of mobile app-based shopping means more than just revenue. Along with the opportunity to reach shoppers anywhere comes the need to fit crystal-clear product titles into the shrunken interfaces of mobile devices — a task demanding automated AI solutions, given the volume and variety of merchandise available.

While conventional sentence summarization technology offers the basis for a solution, shortening product titles involves stricter requirements than other summarization tasks. Most importantly, where some amount of error might be acceptable in other media, product titles allow zero tolerance for inaccuracies. Adding to the complexity, online merchants often load product titles with repetitive terms they believe will help them appear in web searches, challenging summarization technologies to extract key terms without losing or distorting important information.

Summarized (left) and original (right) product titles for a Nintendo Switch console in Taobao’s mobile app. Note the abundance of indirectly related terms in the original title, likely added by the merchant for search-engine optimization (SEO) purposes.

To overcome these hurdles, researchers at Alibaba have developed an original multi-source pointer network (MS-Pointer) to ensure that two constraints — not introducing irrelevant information and retaining all key information — can be effectively applied in all title summaries. Using a pointer mechanism to avoid introducing irrelevant information and a soft gating mechanism to retain key information from a novel knowledge encoder, the team built a model that outperformed competing methods and significantly improved click-through rates in online trials.

Realizing the MS-Pointer Model

Underlying the Alibaba team’s achievement, several existing technologies provided a conceptual basis for building the MS-Pointer model.

Previous summarization systems have relied on sequence-to-sequence (seq2seq) frameworks, which map the original text to a vector from which a decoder derives a summary. Departing from this “vanilla” approach, more recent efforts yielded the original pointer networks from which the MS-Pointer model directly evolved. In pointer networks, an attention mechanism serves as a pointer that selects tokens from the input as output, rather than picking tokens from a predefined vocabulary.
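The pointer idea can be illustrated with a short sketch. This is not the paper’s implementation — the dot-product attention, hidden size, and random states below are illustrative assumptions — but it shows how attention scores over the source tokens become the output distribution, so the decoder can only “copy” positions that already appear in the input:

```python
import numpy as np

def pointer_distribution(decoder_state, encoder_states):
    """Turn attention scores over the source tokens into the output
    distribution: the decoder can only point at (copy) input positions."""
    scores = encoder_states @ decoder_state        # one score per source token
    exp = np.exp(scores - scores.max())            # numerically stable softmax
    return exp / exp.sum()                         # probabilities over positions

rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(5, 8))   # 5 source-title tokens, hidden size 8
decoder_state = rng.normal(size=8)         # current decoder state
p = pointer_distribution(decoder_state, encoder_states)
# p is a probability distribution over the 5 input positions;
# the highest-probability position is the token to copy next
```

Because the output vocabulary is exactly the input, the model can never introduce a word that was absent from the original title — which is how the “no irrelevant information” constraint is enforced.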

While previous pointer networks have performed well in practice, they are prone to occasionally dropping a product brand or commodity name from a summary — a source of error that the Alibaba team expressly sought to eliminate. To do so, they introduced a novel knowledge encoder in addition to the encoder for the source title. This knowledge encoder encodes the brand name and commodity name using a long short-term memory (LSTM) unit, as is likewise done for the source title. This enables the decoder to generate a shortened title by copying words from not only the title encoder but also the knowledge encoder, in turn allowing the model to learn a data-driven method for decoding key information from the knowledge encoder.
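One practical detail: pointer probabilities are assigned per input position, so a term repeated for SEO purposes occupies several positions. A small illustrative sketch (the token list and probabilities are hypothetical, not from the paper) of pooling position probabilities into per-word copy probabilities:

```python
from collections import defaultdict

def copy_probs(tokens, position_probs):
    """Aggregate per-position pointer probabilities into per-word
    probabilities; repeated tokens pool their probability mass."""
    word_p = defaultdict(float)
    for tok, p in zip(tokens, position_probs):
        word_p[tok] += p
    return dict(word_p)

title = ["nintendo", "switch", "console", "switch", "game"]
probs = [0.1, 0.3, 0.2, 0.3, 0.1]       # pointer distribution over positions
word_probs = copy_probs(title, probs)   # "switch" pools 0.3 + 0.3 = 0.6
```

Pooling in this way means a word the merchant repeated for search ranking is copied once with its combined probability, rather than appearing twice in the shortened title.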

In effect, the MS-Pointer model learns to generate the brand name and commodity name (together, background knowledge) in a title by picking words from the knowledge encoder. The model also learns to copy words from different encoders by adjusting a gating weight factor, which functions as a classifier and instructs the decoder to extract information from the appropriate encoders.
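Conceptually, the soft gate blends the two copy distributions into a single output distribution. A minimal sketch — the gate value and word probabilities below are made up for illustration; in the model the gate is computed from the decoder’s internal state:

```python
def final_word_distribution(p_title, p_know, gate):
    """Blend the copy distributions from the title encoder and the
    knowledge encoder; gate in (0, 1) is a learned soft weight."""
    words = set(p_title) | set(p_know)
    return {w: gate * p_title.get(w, 0.0) + (1 - gate) * p_know.get(w, 0.0)
            for w in words}

p_title = {"nintendo": 0.1, "switch": 0.6, "console": 0.2, "game": 0.1}
p_know  = {"nintendo": 0.5, "switch": 0.5}   # brand + commodity name
out = final_word_distribution(p_title, p_know, gate=0.7)
# out["nintendo"] = 0.7*0.1 + 0.3*0.5 = 0.22; the blend still sums to 1
```

When the gate swings toward the knowledge encoder, the brand and commodity names dominate the output, which is what keeps them from being dropped.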

An overview of the MS-Pointer model. Here, Background Knowledge denotes brand and commodity name information for the Nintendo Switch console shown in the previous image.

Building a Dataset

To prepare a testing environment for the MS-Pointer model, the Alibaba team needed to build a new product title summarization dataset. The two key information groups for this dataset were the original product titles and their corresponding short titles, and the products’ brand names and commodity names.

To set a “gold standard” for evaluating machine-generated outputs, a group of professional editors worked to develop a set of titles and shortened titles based on items in Taobao’s product recommendation channel. The brand and commodity names for these products were then collected from the corresponding databases. In all, the resulting dataset included more than 400,000 pairs of original and shortened titles spanning 94 product categories.

Testing and Results

In testing, the MS-Pointer model faced off against a number of competing baselines, collectively grouped as abstractive and extractive methods. Leveraging three standard metrics, researchers were able to automatically evaluate each model’s performance and identify its specific weaknesses where needed. Results indicated that extractive methods like MS-Pointer outperformed abstractive methods, with MS-Pointer proving more flexible and effective than its nearest competitors.
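The three metrics are not named above; for summarization tasks the standard choices are ROUGE variants, so treat that specific reading as an assumption here. As a flavor of how such automatic evaluation works, a minimal ROUGE-1 recall computation against an editor-written short title:

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """Fraction of reference unigrams recovered by the candidate,
    with counts clipped to the reference counts."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)

reference = "nintendo switch console".split()       # editor's "gold" short title
candidate = "nintendo switch game console".split()  # a model's output
score = rouge1_recall(candidate, reference)         # 1.0: every reference word kept
```

A high recall score rewards exactly the property the task demands: no key word from the human-written short title goes missing from the machine-generated one.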

In addition to manual evaluation by human judges, online A/B testing in the live Taobao environment showed that shortened titles generated by MS-Pointer improved user click-through rates by a significant margin. Generally, goods like electronic devices showed larger click-through improvements than clothing or cosmetics, likely because shoppers in the latter categories place more weight on descriptive terms that had to be dropped from the shortened titles.

Click-through rate data from a week’s A/B testing in Taobao shows MS-Pointer’s clear advantage over its nearest rival.

In terms of the key requirement that shortened titles retain the brand names of original titles, MS-Pointer achieved a remarkably low error rate of 2.89%, with most errors caused by words outside the standard vocabulary. This rate proved reducible in online testing by mapping these out-of-vocabulary (OOV) words to unique embeddings.
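The exact mechanism is not detailed above; one common way to give OOV words “unique embeddings” is to hash each unseen token into a small pool of dedicated embedding slots, instead of collapsing them all into a single shared unknown token. A hypothetical sketch of that lookup (bucket count and vocabulary are invented for illustration):

```python
import zlib

def token_id(token, vocab, n_oov_buckets=50):
    """In-vocabulary tokens keep their id; OOV tokens are hashed into one
    of n_oov_buckets dedicated embedding slots. CRC32 is deterministic
    across runs, so an OOV word always lands in the same slot."""
    if token in vocab:
        return vocab[token]
    bucket = zlib.crc32(token.encode("utf-8")) % n_oov_buckets
    return len(vocab) + bucket

vocab = {"nintendo": 0, "switch": 1, "console": 2}
in_vocab_id = token_id("switch", vocab)   # 1, the ordinary vocabulary id
oov_id = token_id("joycon", vocab)        # some id in [3, 53)
```

With separate slots, two different rare brand names no longer share one embedding, which is consistent with the error reduction described above.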

The full paper can be read here.

Alibaba Tech

First-hand and in-depth information about Alibaba’s latest technology → Facebook: “Alibaba Tech”. Twitter: “AlibabaTech”.



