Goal selection and feedback for solving math word problems


Applied Intelligence , Volume 53 (12) – Jun 1, 2023



Publisher
Springer Journals
Copyright
Copyright © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ISSN
0924-669X
eISSN
1573-7497
DOI
10.1007/s10489-022-04253-1

Abstract

Solving Math Word Problems (MWPs) automatically is a challenging task for AI tutoring in online education. Most existing State-Of-The-Art (SOTA) neural models for solving MWPs use a Goal-driven Tree-structured Solver (GTS) as their decoder. However, owing to the limitations of tree-structured recurrent neural networks, GTS cannot access the information of all generated nodes at each decoding time step, so its performance on long math expressions is unsatisfactory. To address these limitations, we propose a Goal Selection and Feedback (GSF) decoding module. At each time step of GSF, we first feed the latest result back to all goal vectors through a goal feedback operation, and then a goal selection operation based on an attention mechanism generates the new goal vector. Not only can the decoder collect historical information from all generated nodes through the goal selection operation, but these generated nodes are also kept up to date by the goal feedback operation. In addition, a Multilayer Fusion Network (MFN) is proposed to provide a better representation of each hidden state during decoding. Combining the ELECTRA language model with our novel decoder, experiments on the Math23k, Ape-clean, and MAWPS datasets show that our model outperforms the SOTA baselines, especially on complex samples with long math expressions. The ablation study and case study further verify that our model better solves samples with long expressions and that the proposed components indeed help enhance the model's performance.
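The abstract only sketches the mechanism, but the two operations it names (goal feedback, then attention-based goal selection) can be illustrated with a minimal, hypothetical PyTorch sketch of one decoding step. The module name `GSFStep`, the GRU-cell feedback update, and the additive-attention scoring below are assumptions for illustration, not the paper's actual architecture:

```python
# Hypothetical sketch of one GSF-style decoding step (not the authors' code).
# Assumptions: `goals` holds the goal vectors generated so far, and `latest`
# is the representation of the node produced at the previous time step.
import torch
import torch.nn as nn


class GSFStep(nn.Module):
    """One decoding step: feed back the latest result, then select a new goal."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Goal feedback: update every stored goal vector with the latest result.
        self.feedback = nn.GRUCell(hidden_size, hidden_size)
        # Goal selection: additive attention over the updated goal vectors.
        self.attn_score = nn.Linear(2 * hidden_size, 1)
        self.new_goal = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, goals: torch.Tensor, latest: torch.Tensor):
        # goals: (num_goals, hidden), latest: (hidden,)
        # 1) Goal feedback: every previously generated goal sees the newest result.
        latest_exp = latest.unsqueeze(0).expand_as(goals)
        goals = self.feedback(latest_exp, goals)
        # 2) Goal selection: attention pools history from all updated goals.
        scores = self.attn_score(torch.cat([goals, latest_exp], dim=-1))
        weights = torch.softmax(scores, dim=0)            # (num_goals, 1)
        context = (weights * goals).sum(dim=0)            # (hidden,)
        # New goal vector conditioned on the pooled history and the latest result.
        goal_t = torch.tanh(self.new_goal(torch.cat([context, latest], dim=-1)))
        return goals, goal_t


# Toy usage: three previously generated goals, one new result vector.
step = GSFStep(hidden_size=8)
goals = torch.randn(3, 8)
latest = torch.randn(8)
updated_goals, next_goal = step(goals, latest)
```

The point of the sketch is the ordering the abstract describes: all stored goals are refreshed with the latest result before attention selects over them, so the new goal is computed from up-to-date history rather than from stale node states.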

Journal

Applied Intelligence, Springer Journals

Published: Jun 1, 2023

Keywords: NLP; MWPs; Attention; Decoding; Goal selection and feedback
