|
|
Event-Driven Story Writing Based on Three-Act Structural Chain-of-Thought and Semantic Self-Consistency |
HUANG Yuxin1,2, ZHAO Yuan1,2, YU Zhengtao1,2, WU Lei1,2, MA Jiushun1,2 |
1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China; 2. Key Laboratory of Artificial Intelligence in Yunnan Province, Kunming University of Science and Technology, Kunming 650504, China
|
|
Abstract Event-driven story writing aims to create coherent stories that conform to given event content based on limited background and event information. However, existing methods often suffer from semantic incoherence and plot conflicts because they reason insufficiently about complex event relationships. To address these problems, an event-driven story writing method based on a three-act structural chain-of-thought and semantic self-consistency is proposed. Before generation, diverse story examples are selected so that the model can learn different storytelling styles. During generation, a chain-of-thought is designed around the three-act structure of setup, confrontation, and resolution, guiding the model to plan the story content reasonably and avoid plot inconsistencies. After generation, semantic self-consistency is introduced to simulate a writer's deliberation, selecting the most semantically consistent, coherent, and relevant story from multiple generated versions. Experiments show that the proposed method improves BLEU-4 and BERTScore and also performs favorably in human evaluations.
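As a reader's aid, a minimal sketch of the two core steps the abstract describes is given below, under stated assumptions: the prompt wording, the encoder choice, and the function name select_most_consistent are illustrative, not the authors' released implementation. The sketch plans a story with a three-act chain-of-thought prompt, then picks, among several sampled stories, the one most semantically consistent with the rest, approximated here by mean pairwise cosine similarity of sentence embeddings.

```python
# Illustrative sketch only; prompt wording, encoder, and function names
# are assumptions, not the paper's released code.
from sentence_transformers import SentenceTransformer
import numpy as np

# Three-act structural chain-of-thought: ask the model to plan the story
# as setup -> confrontation -> resolution before writing it out.
THREE_ACT_PROMPT = """Background: {background}
Events: {events}

Think step by step using the three-act structure:
1. Setup: introduce the characters and situation implied by the background.
2. Confrontation: develop the conflict driven by the given events.
3. Resolution: resolve the conflict consistently with the events.
Then write the full story."""

def select_most_consistent(candidates: list[str]) -> str:
    """Semantic self-consistency: of several sampled stories, return the
    one whose embedding is closest, on average, to all the others."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
    emb = model.encode(candidates, normalize_embeddings=True)  # (n, d), unit norm
    sim = emb @ emb.T                  # cosine similarity matrix
    np.fill_diagonal(sim, 0.0)         # exclude self-similarity
    return candidates[int(np.argmax(sim.mean(axis=1)))]
```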
|
Received: 13 June 2024
|
|
Fund: Supported by National Natural Science Foundation of China (Nos. 62266027, U21B2027, U23A20388), Science and Technology Major Special Projects of Yunnan Province (Nos. 202302AD080003, 202303AP140008), Fundamental Research Major Special Projects of Yunnan Province (No. 202401BC070021), and Kunming University of Science and Technology's "Double First-Class" Joint Special Project (No. 202201BE070001-021)
Corresponding Author:
YU Zhengtao, Ph.D., professor. His research interests include natural language processing and machine translation.
|
About authors: HUANG Yuxin, Ph.D., associate professor. His research interests include natural language processing and text summarization. ZHAO Yuan, master's student. His research interests include natural language processing and story generation. WU Lei, master's student. His research interests include natural language processing and comment generation. MA Jiushun, master's student. His research interests include natural language processing and text summarization.
|
|
|
|
|
|