An example use case for fine-tuned long-document transformers. The models are fine-tuned on book summaries; the architectures in this demo are LongT5-base and Pegasus-X-Large.
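For reference, a minimal sketch of loading a checkpoint like these with the `transformers` pipeline API. The model ID below is an assumption based on the architecture named above, not confirmed by this demo; substitute the actual checkpoint.

```python
# Minimal sketch: load a long-document summarization checkpoint.
# The model ID is an assumed example; swap in the demo's actual checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-16384-book-summary",  # assumed model ID
)

result = summarizer(
    "A very long document goes here...",
    max_length=256,            # cap on summary length in tokens
    no_repeat_ngram_size=3,    # block repeated trigrams in the output
)
print(result[0]["summary_text"])
```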
Want faster inference? Run this demo on a free Google Colab GPU.
Enter/paste text below, or upload a file. Pick a model & adjust params (optional), and press Summarize!
See the guide doc for details.
Summarization should take ~1-2 minutes for most settings, but can take 5-10 minutes in some cases.
Download the summary as a text file, with parameters and scores.
Scores are a rough proxy for summary quality, reflecting the model's 'confidence'. Less-negative numbers (closer to 0) are better.
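As a hedged sketch of where such a score can come from: with `transformers`, beam search via `generate()` returns length-normalized log-probabilities in `sequences_scores`, which are negative with values closer to 0 being better. The demo's exact scoring may differ; the model ID below is again an assumed example.

```python
# Sketch: obtain a beam-search sequence score from generate().
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "pszemraj/long-t5-tglobal-base-16384-book-summary"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("A very long document goes here...", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=4,
    max_new_tokens=256,
    return_dict_in_generate=True,
    output_scores=True,
)
# Length-normalized log-probabilities; negative, closer to 0 is better.
print(outputs.sequences_scores)
```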
Aggregate the above batches into a cohesive summary.
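One common way to implement this aggregation is a second summarization pass over the concatenated per-batch summaries; the sketch below assumes that approach and reuses the `summarizer` pipeline from the earlier example. The demo's actual aggregation logic may differ.

```python
# Sketch: fuse per-batch summaries into one cohesive summary
# by summarizing the concatenation of the batch outputs.
def aggregate_batches(summarizer, batch_summaries: list[str]) -> str:
    combined = "\n\n".join(batch_summaries)
    result = summarizer(combined, max_length=512, no_repeat_ngram_size=3)
    return result[0]["summary_text"]

# Usage:
# final = aggregate_batches(summarizer, ["batch 1 summary...", "batch 2 summary..."])
```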
Refer to the guide doc for what these parameters are and how they affect quality and speed.
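For illustration only, here are typical `generate()` parameters and the quality/speed trade-offs they control. The exact names, values, and defaults exposed by this demo may differ; the guide doc is authoritative.

```python
# Illustrative generation parameters; values are example assumptions.
params = {
    "num_beams": 4,              # more beams: better search, slower decoding
    "max_new_tokens": 512,       # longer summaries take longer to generate
    "no_repeat_ngram_size": 3,   # blocks repeated trigrams in the output
    "length_penalty": 0.8,       # <1 favors shorter text, >1 favors longer
    "early_stopping": True,      # stop beam search once candidates finish
    "repetition_penalty": 3.5,   # discourages repeated tokens
}
# result = summarizer(text, **params)
```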