Improving paraphrase generation using supervised neural-based statistical machine translation framework

Document Type

Article

Source of Publication

Neural Computing and Applications

Publication Date

1-1-2023

Abstract

In paraphrase generation (PG), a sentence in natural language is transformed into a new one with a different syntactic structure but the same semantic meaning. Current sequence-to-sequence strategies tend to recall words and structures from the training dataset rather than learning the semantics of the words. As a result, the generated sentences are frequently grammatically correct but linguistically flawed. The neural machine translation approach struggles with rare words, domain mismatch, and unfamiliar words, but it captures context well. This work presents a novel model for creating paraphrases that uses neural-based statistical machine translation (NSMT). Our approach creates potential paraphrases for any source input, calculates the degree of semantic similarity between text segments of any length, and encodes paraphrases in a continuous space. To evaluate the proposed model, the Quora Question Pairs and Microsoft Common Objects in Context benchmark datasets are used. We demonstrate that the proposed technique achieves state-of-the-art performance on both datasets under automatic and human evaluation. Experimental findings across tasks and datasets demonstrate that the proposed NSMT-based PG outperforms traditional phrase-based techniques. We also show that the proposed technique can be applied automatically to generate paraphrases for a variety of languages.

ISSN

0941-0643

Publisher

Springer Science and Business Media LLC

Disciplines

Computer Sciences

Keywords

Neural machine translation, Neural-based statistical machine translation, Paraphrase generation, Statistical machine translation

Scopus ID

85164967749

Indexed in Scopus

yes

Open Access

no
