Transformer-Based Text Summarization: A Deep Learning Approach with Hybrid Optimization

Research output: Contribution to journal › Article › peer-review

Abstract

The amount of data on the internet is expanding rapidly, so it is crucial to present essential information concisely: doing so reduces reading time and minimizes human effort. This paper therefore introduces a transformer-based text summarization technique. First, tokenization divides the text into words. Next, word embedding with Global Vectors for word representation (GloVe) maps each word to a vector. The embedded vectors are fed to a transformer encoder whose Multi-Head Attention (MHA) and positional encoding identify the context that matters for summarization and capture long-range dependencies. Each sentence is then scored by importance, and the top-scoring sentences are extracted. In the abstractive summarization stage, a Pointer-Generator Network (PGN) is introduced to generate new words from its vocabulary. Furthermore, the exploration phase of the Cheetah Optimizer is combined with the exploitation phase of the Hippopotamus Optimization Algorithm (HOA) to improve summary quality. Simulation analysis indicates that the proposed technique achieves higher Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores than existing summarization techniques.
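
A minimal sketch of the extractive stage described above: GloVe-style word embeddings feed a transformer encoder (multi-head attention plus sinusoidal positional encoding), and each sentence receives an importance score for selection. The class name SentenceScorer, the layer sizes, mean pooling, and the linear scoring head are illustrative assumptions, not the paper's exact configuration.

```python
import math
import torch
import torch.nn as nn

class SentenceScorer(nn.Module):
    def __init__(self, vocab_size=10000, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # The paper uses pretrained GloVe vectors; a trainable table
        # stands in for them in this sketch.
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.score = nn.Linear(d_model, 1)  # per-sentence importance score

    def positional_encoding(self, seq_len, d_model):
        # Standard sinusoidal positional encoding (Vaswani et al., 2017).
        pos = torch.arange(seq_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float()
                        * (-math.log(10000.0) / d_model))
        pe = torch.zeros(seq_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices, one "sentence" per row.
        x = self.embed(token_ids) + self.positional_encoding(
            token_ids.size(1), self.embed.embedding_dim)
        h = self.encoder(x)          # contextualized word vectors via MHA
        sent_vec = h.mean(dim=1)     # pool words into one sentence vector
        return self.score(sent_vec).squeeze(-1)

scorer = SentenceScorer()
sentences = torch.randint(0, 10000, (5, 12))  # 5 dummy sentences, 12 tokens each
scores = scorer(sentences)
top = scores.topk(2).indices                  # keep the top-scoring sentences
print(scores, top)
```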
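For the abstractive stage, a Pointer-Generator Network blends a generation distribution over a fixed vocabulary with a copy distribution taken from attention over the source, gated by a learned scalar p_gen (in the style of See et al., 2017). The sketch below shows only that final mixture; the function name and tensor shapes are assumptions for illustration.

```python
import torch

def final_distribution(p_gen, p_vocab, attention, src_ids):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum_{i: src_i = w} a_i.

    p_gen:     (batch, 1)          generation gate in [0, 1]
    p_vocab:   (batch, vocab_size) softmax over the fixed vocabulary
    attention: (batch, src_len)    attention weights over source tokens
    src_ids:   (batch, src_len)    vocabulary ids of the source tokens
    """
    gen_part = p_gen * p_vocab
    copy_part = torch.zeros_like(p_vocab)
    # Scatter attention mass onto the ids of the source words, letting
    # the decoder "point" at (copy) words from the input document.
    copy_part.scatter_add_(1, src_ids, (1.0 - p_gen) * attention)
    return gen_part + copy_part

batch, src_len, vocab_size = 2, 6, 50
p_gen = torch.rand(batch, 1)
p_vocab = torch.softmax(torch.randn(batch, vocab_size), dim=1)
attention = torch.softmax(torch.randn(batch, src_len), dim=1)
src_ids = torch.randint(0, vocab_size, (batch, src_len))
dist = final_distribution(p_gen, p_vocab, attention, src_ids)
print(dist.sum(dim=1))  # each row sums to 1, so it is a valid distribution
```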
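The hybrid optimizer pairs an exploration move (Cheetah Optimizer style) with an exploitation move toward the current best (HOA style), shifting from the former to the latter as iterations proceed. The abstract does not give the update equations, so the rules below are generic stand-ins, and the sphere objective is only a placeholder for the paper's summary-quality fitness.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    return np.sum(x ** 2)  # placeholder fitness (lower is better)

def hybrid_optimize(dim=5, pop=20, iters=100, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, dim))
    fit = np.apply_along_axis(objective, 1, X)
    best = X[fit.argmin()].copy()
    for t in range(iters):
        ratio = t / iters  # shifts weight from exploration to exploitation
        for i in range(pop):
            if rng.random() > ratio:
                # Exploration (cheetah-like): random step around a peer,
                # with a step size that shrinks over time.
                peer = X[rng.integers(pop)]
                cand = peer + rng.normal(0, (hi - lo) * (1 - ratio), dim)
            else:
                # Exploitation (hippopotamus-like): move toward the best.
                cand = X[i] + rng.random(dim) * (best - X[i])
            cand = np.clip(cand, lo, hi)
            f = objective(cand)
            if f < fit[i]:            # greedy replacement
                X[i], fit[i] = cand, f
                if f < objective(best):
                    best = cand.copy()
    return best, objective(best)

best, val = hybrid_optimize()
print("best fitness:", val)
```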

Original language: English
Pages (from-to): 962-971
Number of pages: 10
Journal: International Arab Journal of Information Technology
Volume: 22
Issue number: 5
State: Published - Sep 2025

Keywords

  • Deep learning
  • extractive and abstractive
  • optimization
  • text summarization
  • transformer
