Document Summarization with Long-Document Transformers

An example use case for fine-tuned long-document transformers. The models are fine-tuned on book summaries; the architectures in this demo are LongT5-base and Pegasus-X-Large.
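For reference, loading a checkpoint like this outside the demo is straightforward with Hugging Face transformers. A minimal sketch, assuming the checkpoint id named below; swap in any fine-tuned LongT5-base or Pegasus-X-Large checkpoint you prefer:

```python
from transformers import pipeline

# Summarization pipeline for a long-document model.
# The checkpoint id is an assumption -- substitute your preferred
# LongT5-base or Pegasus-X-Large model fine-tuned on book summaries.
summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-16384-book-summary",
)
```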

Want more performance? Run this demo on a free Google Colab GPU.

Load Inputs & Select Parameters

Enter/paste text below, or upload a file. Pick a model & adjust params (optional), and press Summarize!

See the guide doc for details.

Model Name
Beam Search: # of Beams
Examples

Generate Summary

Summarization should take ~1-2 minutes for most settings, but may take up to 5-10 minutes in some cases.
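Roughly speaking, pressing Generate Summary amounts to a call like the one below (a sketch, assuming the `summarizer` pipeline from the snippet above; the file name and beam count are placeholders):

```python
# Read the input document and summarize it with beam search.
# `num_beams` corresponds to the "Beam Search: # of Beams" control.
with open("my_document.txt", encoding="utf-8") as f:
    long_text = f.read()

result = summarizer(
    long_text,
    num_beams=4,       # more beams: usually better summaries, slower generation
    truncation=True,   # guard against inputs longer than the model's context window
)
print(result[0]["summary_text"])
```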

Output will appear below:

Results & Scores

Download the summary as a text file, with parameters and scores.

Scores are a rough proxy for summary quality, reflecting the model's 'confidence': less-negative numbers (closer to 0) are better.
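For the curious: in transformers, beam search can return a length-normalized log-likelihood per generated sequence (`sequences_scores`), which is the kind of number reported here. It is at most 0 and closer to 0 for output the model is more confident about. A minimal sketch (the checkpoint id is an assumption):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "pszemraj/long-t5-tglobal-base-16384-book-summary"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Some long document text ...", return_tensors="pt", truncation=True)
out = model.generate(
    **inputs,
    num_beams=4,
    return_dict_in_generate=True,
    output_scores=True,
)
print(out.sequences_scores)  # per-sequence scores <= 0; less negative is better
print(tokenizer.decode(out.sequences[0], skip_special_tokens=True))
```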

Summary Output

Summary will appear here!

Aggregate Summary Batches

Aggregate the above batches into a cohesive summary.
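A plausible sketch of this two-stage flow (the demo's exact aggregation step may differ): summarize the document in token batches, then summarize the concatenated batch summaries. Assumes the `summarizer` pipeline and `long_text` from the earlier snippets:

```python
def summarize_in_batches(text: str, batch_token_length: int = 1024) -> list[str]:
    """Split the document into fixed-size token batches and summarize each one."""
    tokens = summarizer.tokenizer.encode(text, add_special_tokens=False)
    batches = [
        tokens[i : i + batch_token_length]
        for i in range(0, len(tokens), batch_token_length)
    ]
    return [
        summarizer(summarizer.tokenizer.decode(b), truncation=True)[0]["summary_text"]
        for b in batches
    ]

def aggregate(batch_summaries: list[str]) -> str:
    """Summarize the joined batch summaries into one cohesive summary."""
    return summarizer(" ".join(batch_summaries), truncation=True)[0]["summary_text"]

batch_summaries = summarize_in_batches(long_text)
final_summary = aggregate(batch_summaries)
print(final_summary)
```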

Aggregate summary will appear here!

Advanced Settings

Refer to the guide doc for what these are, and how they impact quality and speed.

Token batch length
No repeat ngram size
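As a rough illustration (not the app's exact wiring), these settings map onto tokenization and generation parameters. The values below are placeholders, and `summarizer`/`long_text` come from the sketches above:

```python
# Illustrative mapping of the advanced settings onto parameters.
token_batch_length = 1024  # tokens per batch fed to the model (see the aggregation sketch)

result = summarizer(
    long_text,
    truncation=True,
    num_beams=4,             # Beam Search: # of Beams
    no_repeat_ngram_size=3,  # never repeat the same 3-gram in the output
)
print(result[0]["summary_text"])
```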