Title: Intelligent Computer Mathematics: 17th International Conference, CICM 2024
Editors: Andrea Kohlhase, Laura Kovács
Publisher: Springer
Series: Lecture Notes in Artificial Intelligence
Year: 2024
Pages: 367
Language: English
Format: PDF (true), EPUB
Size: 50.2 MB
This book constitutes the refereed proceedings of the 17th International Conference on Intelligent Computer Mathematics, CICM 2024, held in Montréal, Québec, Canada, during August 5–9, 2024.
The 21 full papers presented were carefully reviewed and selected from 28 submissions. They are organized into the following sections: AI and LLM; Proof Assistants; Logical Frameworks and Transformations; Knowledge Representation and Certification; Proof Search and Formalization; and System Descriptions.
The Conference on Intelligent Computer Mathematics (CICM) brings together the many separate communities that have developed theoretical and practical solutions for mathematical applications in Artificial Intelligence, computation, deduction, knowledge management, or user interfaces.
This paper explores the potential of leveraging Large Language Models (LLMs) for the tasks of automated annotation and Part-of-Math (POM) tagging of equations. Traditional methods for math-term annotation and POM tagging rely heavily on manually crafted rules and limited datasets, which often leads to scalability issues and poor adaptability to new domains. In contrast, LLMs, with their vast knowledge and advanced natural language understanding capabilities, present a promising alternative. Our methodology involves crafting prompts for LLMs that elicit answers readable as key-value pairs, where the keys are math terms and the values are the corresponding annotations. We also investigate how LLM performance changes when the prompt includes different levels of context, such as the sentence or paragraph containing the input equation. Performance is evaluated by the consistency between the ground truth and the LLM output; consistency is assessed in a separate LLM session with a different prompt. We propose that LLMs could play a key role in automating the annotation and tagging of mathematical content, thereby enhancing the accessibility and utility of mathematical knowledge in digital libraries and beyond.
Recent advancements in Natural Language Processing (NLP) and Math Language Processing (MLP) can be attributed to advances in Large Language Models (LLMs). LLMs such as ChatGPT have shown impressive performance on many MLP tasks, like math reasoning and solving math word problems. In the specific context of mathematics, one of the significant challenges is the automated annotation and Part-of-Math (POM) tagging of equations. Part-of-Math tagging is the process of identifying and labeling the different components within a mathematical equation, such as variables, operators, functions, and constants, to understand their roles and relationships within the equation.
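To make the task concrete, here is a minimal illustration (not taken from the book) of the kind of output POM tagging aims at: each component of the equation `E = m * c**2` is mapped to its role. The specific label strings are invented for this sketch.

```python
# Hypothetical POM tags for the equation "E = m * c**2".
# Each key is a math term; each value describes its role.
pom_tags = {
    "E": "variable (energy)",
    "=": "relational operator (equality)",
    "m": "variable (mass)",
    "*": "binary operator (multiplication)",
    "c": "constant (speed of light)",
    "**2": "operator (exponentiation, power 2)",
}

# Print the tagging as "term: annotation" lines.
for term, tag in pom_tags.items():
    print(f"{term}: {tag}")
```

Rule-based taggers build tables like this from hand-written grammars; the paper's premise is that an LLM can produce such pairs directly from a prompt.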
In this research, we investigate the possibility of using LLMs to annotate math equations. The premise of our research is that the NLP prowess of LLMs, coupled with their extensive knowledge bases, can provide a superior alternative to conventional methods. By crafting targeted prompts, we aim to elicit responses from LLMs in the form of key-value pairs, where the keys represent mathematical terms and the values signify the corresponding annotations, thereby streamlining the annotation process and enhancing the math-term annotation of equations.
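The prompt-and-parse loop described above can be sketched as follows. This is an assumed implementation, not the authors' code: the prompt wording, the optional `context` parameter (standing in for the sentence or paragraph around the equation), and the canned reply used in place of a real LLM call are all illustrative.

```python
def build_prompt(equation: str, context: str = "") -> str:
    """Assemble a hypothetical annotation prompt; optionally include
    the sentence or paragraph surrounding the equation as context."""
    parts = [
        "Annotate every mathematical term in the equation below.",
        "Answer with one 'term: annotation' pair per line.",
    ]
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Equation: {equation}")
    return "\n".join(parts)


def parse_key_value_reply(reply: str) -> dict[str, str]:
    """Read an LLM reply as key-value pairs (math term -> annotation)."""
    pairs = {}
    for line in reply.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            pairs[key.strip()] = value.strip()
    return pairs


# A canned reply stands in for the actual LLM call here.
prompt = build_prompt("a**2 + b**2 = c**2",
                      context="the Pythagorean theorem")
reply = "a: leg of the triangle\nb: leg of the triangle\nc: hypotenuse"
annotations = parse_key_value_reply(reply)
print(annotations["c"])  # hypotenuse
```

The same parser could then be pointed at a second LLM session's reply to check consistency against the ground truth, as the abstract describes.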
Contents:
AI and LLM
- Using Large Language Models to Automate Annotation and Part-of-Math Tagging of Math Equations
- Automated Mathematical Discovery and Verification: Minimizing Pentagons in the Plane
- Using General Large Language Models to Classify Mathematical Documents
Proof Assistants
Logical Frameworks and Transformations
Knowledge Representation and Certification
Proof Search and Formalization
System Descriptions
Download Intelligent Computer Mathematics: 17th International Conference, CICM 2024