TY - JOUR
T1 - Transformer-Based Text Summarization
T2 - A Deep Learning Approach with Hybrid Optimization
AU - Alboaneen, Dabiah
N1 - Publisher Copyright:
© 2025, Zarka Private University. All rights reserved.
PY - 2025/9
Y1 - 2025/9
N2 - The amount of data on the internet is expanding rapidly, so it is crucial to present essential information concisely; this reduces reading time and minimizes human effort. Therefore, a transformer-based text summarization technique is introduced in this paper. First, tokenization divides the text into words. Next, word embedding with Global Vectors for Word Representation (GloVe) maps the words to vectors. The embedded vectors are fed to a transformer encoder, whose Multi-Head Attention (MHA) and positional encoding help identify the context that is important for summarization and capture long-range dependencies. Each sentence is then scored by importance, and the top-scoring sentences are selected. In the abstractive summarization stage, a Pointer-Generator Network (PGN) generates new words from its vocabulary. Furthermore, the exploration phase of the Cheetah Optimizer is combined with the exploitation phase of the Hippopotamus Optimization Algorithm (HOA) to improve summary quality. Simulation analysis indicates that the proposed technique achieves higher Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores than existing summarization techniques.
AB - The amount of data on the internet is expanding rapidly, so it is crucial to present essential information concisely; this reduces reading time and minimizes human effort. Therefore, a transformer-based text summarization technique is introduced in this paper. First, tokenization divides the text into words. Next, word embedding with Global Vectors for Word Representation (GloVe) maps the words to vectors. The embedded vectors are fed to a transformer encoder, whose Multi-Head Attention (MHA) and positional encoding help identify the context that is important for summarization and capture long-range dependencies. Each sentence is then scored by importance, and the top-scoring sentences are selected. In the abstractive summarization stage, a Pointer-Generator Network (PGN) generates new words from its vocabulary. Furthermore, the exploration phase of the Cheetah Optimizer is combined with the exploitation phase of the Hippopotamus Optimization Algorithm (HOA) to improve summary quality. Simulation analysis indicates that the proposed technique achieves higher Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores than existing summarization techniques.
KW - Deep learning
KW - extractive and abstractive
KW - optimization
KW - text summarization
KW - transformer
UR - https://www.scopus.com/pages/publications/105015760577
U2 - 10.34028/iajit/22/5/10
DO - 10.34028/iajit/22/5/10
M3 - Article
AN - SCOPUS:105015760577
SN - 1683-3198
VL - 22
SP - 962
EP - 971
JO - International Arab Journal of Information Technology
JF - International Arab Journal of Information Technology
IS - 5
ER -