Abstract

Aspect-based sentiment analysis (ABSA) enables the systematic identification of user opinions on particular aspects, thereby improving idea generation in the early stages of product/service design. Large language models (LLMs) such as T5 and GPT have proven powerful for ABSA tasks owing to their inherent attention mechanism. However, key limitations remain. First, existing research focuses mainly on relatively simple ABSA subtasks, while extracting aspects, opinions, and sentiments within a unified model remains largely unaddressed. Second, current ABSA tasks overlook implicit opinions and sentiments. Third, most attention-based LLMs encode position either through linear projection or through split-position relations in word-distance schemes, which can introduce relation biases during training. This paper incorporates domain knowledge into LLMs through a new position encoding strategy for the transformer model, addressing these gaps by (1) introducing the ACOSI (Aspect, Category, Opinion, Sentiment, Implicit Indicator) analysis task and developing a unified model that extracts all five label types in the ACOSI analysis task simultaneously in a generative manner; (2) designing a new position encoding method for the attention-based model; and (3) introducing a new benchmark based on the ROUGE score that incorporates design domain knowledge. Numerical experiments on manually labeled data from three major e-commerce retail stores for apparel and footwear products demonstrate the domain-knowledge-inserted transformer method's performance, scalability, and potential.
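The paper's new position encoding strategy is not detailed in the abstract; for context, the standard sinusoidal position encoding used by most attention-based transformer models, which such a method would replace or extend, can be sketched as follows (a minimal illustration, not the authors' method):

```python
import math

def sinusoidal_position_encoding(seq_len, d_model):
    """Standard sinusoidal position encoding for a transformer.

    Returns a seq_len x d_model table where even dimensions hold
    sin(pos / 10000^(i/d_model)) and odd dimensions hold the
    matching cosine, so each position gets a unique, smoothly
    varying embedding.
    """
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

# Example: encodings for a 4-token sequence with 8-dimensional embeddings.
pe = sinusoidal_position_encoding(4, 8)
```

Because this scheme depends only on absolute position, it carries no notion of word relations or domain structure; a domain-knowledge-informed encoding, as proposed in the paper, modifies how positional information enters the attention computation.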
