“Better Rewards Yield Better Summaries: Learning to Summarise Without References”, 2019-09-03:
Reinforcement Learning (RL) based document summarization systems yield state-of-the-art performance in terms of ROUGE scores, because they directly use ROUGE as the reward during training. However, summaries with high ROUGE scores often receive low ratings from human judges.
To find a better reward function that can guide RL to generate human-appealing summaries, we learn a reward function from human ratings on 2,500 summaries. Our reward function only takes the document and system summary as input. Hence, once trained, it can be used to train RL-based summarization systems without using any reference summaries.
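The setup described above can be sketched in miniature: fit a reward model to human ratings of (document, summary) pairs, then score new system summaries using only the document and the summary, with no reference. Everything below is illustrative, not the paper's actual model: the toy features, the linear regressor, and all names are assumptions made for the sketch.

```python
# Hypothetical sketch: learn a reward model from human ratings of
# (document, summary) pairs, then use it as a reference-free reward.
# Features and model are toy stand-ins, not the paper's architecture.
import numpy as np

def featurize(doc, summary):
    """Toy features: unigram overlap with the document and length ratio."""
    d, s = doc.lower().split(), summary.lower().split()
    overlap = len(set(d) & set(s)) / max(len(set(s)), 1)
    ratio = len(s) / max(len(d), 1)
    return np.array([1.0, overlap, ratio])  # bias + two features

def train_reward(pairs, ratings, lr=0.1, epochs=500):
    """Linear regression on human ratings via gradient descent."""
    X = np.stack([featurize(d, s) for d, s in pairs])
    y = np.array(ratings, dtype=float)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def reward(w, doc, summary):
    """Learned reward: needs only the document and the system summary."""
    return float(featurize(doc, summary) @ w)

doc = "the cat sat on the mat and watched the birds outside"
pairs = [(doc, "the cat sat on the mat"),      # faithful summary, rated high
         (doc, "dogs love running in parks")]  # unrelated text, rated low
w = train_reward(pairs, ratings=[5.0, 1.0])
assert reward(w, *pairs[0]) > reward(w, *pairs[1])
```

Once trained, `reward(w, doc, summary)` can score any candidate summary during RL training; the key property mirrored from the paper is that no reference summary appears anywhere in the scoring function.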
We show that our learned rewards correlate more strongly with human ratings than previous approaches. Human evaluation experiments show that, compared with state-of-the-art supervised-learning systems and RL summarization systems that use ROUGE as the reward, RL systems trained with our learned rewards generate summaries with higher human ratings.
The learned reward function and our source code are available on GitHub.