Pattern Recognition and Artificial Intelligence
2025 Vol.38 Issue.5, Published 2025-05-25

Special Topics of Academic Papers at the 27th Annual Meeting of the China Association for Science and Technology
385 Document-Level Neural Machine Translation with Target-Side Historical Information Fusion
WANG Xiaocong, YU Zhengtao, ZHANG Yuan, GAO Shengxiang, LAI Hua, LI Ying
Existing document-level neural machine translation methods struggle to capture long-distance contextual information on the target side, resulting in incoherent translations. To address this issue, a document-level neural machine translation method with target-side historical information fusion is proposed. First, contextual representations of the source language are derived via a multi-head self-attention mechanism. Second, preceding context representations of the target language are obtained using another multi-head self-attention mechanism. Next, attention with linear biases is employed to dynamically inject the historical information into the current target language representation. Finally, a higher-quality translation is obtained by integrating the source language representation with the enhanced preceding context representation of the target language. Experimental results on multiple datasets demonstrate that the proposed method outperforms existing document-level translation methods. Moreover, it effectively improves the coherence and completeness of document-level translations by incorporating long-sequence information modeled by recurrent mechanisms during decoding.
2025 Vol. 38 (5): 385-396 [Abstract] ( 68 ) [HTML 1KB] [ PDF 981KB] ( 60 )
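The history-injection step above resembles attention with linear biases: older target-side positions are penalized linearly before the softmax. A minimal plain-Python sketch (single head, hypothetical `slope` constant; an editorial illustration, not the authors' implementation) is:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def alibi_weights(scores, slope=0.5):
    """Attention weights with a linear distance bias (ALiBi-style).

    scores[i][j] is the raw attention score between current target
    position i and history position j. A penalty of slope * |i - j| is
    subtracted before the softmax, so recent target-side history is
    weighted more heavily while distant history still contributes.
    """
    return [softmax([s - slope * abs(i - j) for j, s in enumerate(row)])
            for i, row in enumerate(scores)]
```

With uniform raw scores, each position then attends most strongly to its nearest history, which is the intended recency preference.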
397 Edge-Texture Dual Feature Aggregation for Image Inpainting via Structural Transformation Completion
ZHANG Rongguo, WEN Yihao, HU Jing, WANG Lifang, LIU Xiaojun
Existing neural network-based approaches for image inpainting still struggle to restore plausible edge structures and complete textures within missing regions. To address these issues, a method for edge-texture dual feature aggregation for image inpainting via structural transformation completion (ETSTC) is proposed. First, a structure transform completer module integrating axial attention and a contextual transformer is designed. This module is combined with a structure smoother module to further complement and optimize edge structures. Thus, both local edge details and global structural patterns are effectively captured while edge noise and artifacts are suppressed. Second, an edge-guided feature aligner and an edge-texture dual-feature aggregator are developed. Scaling and offset parameters are adaptively learned to resolve scale and offset discrepancies in the dynamic aggregation of edge structural features and texture features across different feature space levels, thereby improving inpainting performance. Finally, experiments on three datasets verify the feasibility and effectiveness of ETSTC.
2025 Vol. 38 (5): 397-411 [Abstract] ( 48 ) [HTML 1KB] [ PDF 6268KB] ( 39 )
412 Network Formal Concepts Acquisition Based on Equiconcepts
AI Sensen, WAN Qing, LI Jinhai
In the network formal context induced by graph network data, global and local network formal concepts are obtained by introducing set connectivity on the basis of formal concepts and semiconcepts, respectively, and set connectivity is closely related to the equiconcepts of the formal context. Therefore, the two types of network formal concepts must be correlated with equiconcepts. In this paper, a method for obtaining all connected subsets of the object set of a network formal context by means of equiconcepts is first proposed, and some properties of connected sets are characterized through concept-induced operators. Next, a method is presented for deriving the equiconcepts of a subcontext from the equiconcepts of the original formal context. Subsequently, methods for acquiring global and local network formal concepts are obtained from the equiconcepts of the subcontext. Finally, numerical experiments illustrate the effectiveness and feasibility of the proposed acquisition methods for the two types of network formal concepts.
2025 Vol. 38 (5): 412-424 [Abstract] ( 30 ) [HTML 1KB] [ PDF 671KB] ( 45 )
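For readers unfamiliar with the notion, an equiconcept can be illustrated by brute force on a tiny context whose object and attribute sets coincide (e.g. the adjacency context of a graph): it is a set A with A′ = A, so (A, A) is a formal concept. The sketch below assumes this standard definition and the usual prime (derivation) operator; it enumerates all subsets, so it is illustrative only, not the paper's acquisition method.

```python
from itertools import combinations

def derive(objs, context, universe):
    """Prime operator: attributes related to every object in objs."""
    if not objs:
        return set(universe)
    return set.intersection(*(context[g] for g in objs))

def equiconcepts(context):
    """Enumerate equiconcepts of a context whose object and attribute
    sets coincide: sets A with A' = A, so (A, A) is a formal concept."""
    universe = sorted(context)
    found = []
    for r in range(len(universe) + 1):
        for combo in combinations(universe, r):
            a = set(combo)
            if derive(a, context, universe) == a:
                found.append(a)
    return found
```

On the context {1: {1, 2}, 2: {1, 2}, 3: {3}}, the equiconcepts are {1, 2} and {3}: each set is exactly the set of objects related to all of its own members.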
425 Personalized Federated Subgraph Learning with Embedding Alignment and Parameter Activation
LU Tianying, ZHONG Luying, LIAO Shiling, YU Zhengxin, MIAO Wang, CHEN Zheyi
By integrating subgraph learning and federated learning, federated subgraph learning achieves collaborative learning of subgraph information across multiple clients while protecting data privacy. However, due to differing data collection methods across clients, graph data typically exhibits non-independent and identically distributed (Non-IID) characteristics, and the structure and features of local graph data differ significantly across clients. These factors lead to difficult convergence and poor generalization during the training of federated subgraph learning. To solve these problems, a personalized federated subgraph learning framework with embedding alignment and parameter activation (FSL-EAPA) is proposed. First, personalized model aggregation is performed based on the similarity between clients to reduce the interference of Non-IID data with overall model performance. Next, selective parameter activation is introduced during model updates to handle the heterogeneity of subgraph structural features. Finally, the updated client models provide positive and negative clustering representations for local node embeddings, pulling together local nodes of the same class. Thus, FSL-EAPA fully learns node feature representations and thereby better adapts to the heterogeneous data distributions across clients. Experiments on real-world benchmark graph datasets validate the effectiveness of FSL-EAPA. The results show that FSL-EAPA achieves higher classification accuracy under various scenarios.
2025 Vol. 38 (5): 425-441 [Abstract] ( 32 ) [HTML 1KB] [ PDF 1138KB] ( 39 )
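Similarity-based personalized aggregation, the first step described above, can be sketched in one common form: weight each client's parameters by its softmax-normalized cosine similarity to the target client. This is a generic illustration under that assumption (the paper's exact similarity measure and weighting are not specified here).

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def personalized_aggregate(params, target):
    """Aggregate flattened model parameters for one target client.

    Every client's parameter vector is weighted by softmax-normalized
    cosine similarity to the target client's parameters, so clients with
    similar (likely similarly-distributed) models contribute more.
    """
    sims = [cosine(params[target], p) for p in params]
    m = max(sims)
    ws = [math.exp(s - m) for s in sims]
    z = sum(ws)
    ws = [w / z for w in ws]
    dim = len(params[target])
    n = len(params)
    return [sum(ws[c] * params[c][i] for c in range(n)) for i in range(dim)]
```

Each client thus receives its own aggregate rather than a single global average, which is what makes the aggregation "personalized".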
442 Heterogeneous Graph Structure Learning Based on Contrastive Learning and Structure Update Mechanism
GUO Ningyuan, SUN Guoyi, LI Chao
Heterogeneous graph neural networks hold significant advantages in complex graph data mining tasks. However, existing methods typically follow a supervised learning paradigm. Therefore, they are highly dependent on node label information and sensitive to noisy links in the original graph structure, and their application in label-scarce scenarios is limited. To address these issues, a method for heterogeneous graph structure learning based on contrastive learning and a structure update mechanism (HGSL-CL) is proposed. The learning target is first generated from the original data as the anchor view. Type-aware feature mapping and weighted multi-view similarity computation are combined to generate the learner view. Subsequently, the anchor view is iteratively optimized through the structure update mechanism, and node representations in the two views are obtained using semantic-level attention. Finally, node representations from both views are projected into a shared latent space via a multilayer perceptron. Graph structure optimization is achieved by a cross-view synergistic contrastive loss function, and a positive-sample filtering strategy fusing node topological similarity and attribute similarity is introduced to enhance the discriminative ability of contrastive learning. Experiments on three datasets show that HGSL-CL outperforms baseline models in node classification and clustering tasks. Moreover, the learned graph structure generalizes to semi-supervised scenarios, where HGSL-CL again outperforms the original baseline models, demonstrating the effectiveness of graph structure learning. The source code of HGSL-CL is available at https://github.com/desslie047/HGSL-CL.
2025 Vol. 38 (5): 442-456 [Abstract] ( 42 ) [HTML 1KB] [ PDF 1332KB] ( 46 )
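A cross-view contrastive loss with filtered positives, as described above, commonly takes an InfoNCE-style form. The sketch below assumes unit-normalized embeddings and a caller-supplied positive set per node (standing in for the paper's filtering strategy that fuses topological and attribute similarity); it is a generic illustration, not HGSL-CL's exact loss.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross_view_nce(anchor, learner, positives, tau=0.5):
    """InfoNCE-style cross-view contrastive loss.

    anchor[i] and learner[j] are unit-normalized node embeddings from the
    two views; positives[i] is the set of learner-view indices kept as
    positives for node i. The loss pulls each anchor embedding toward its
    positives relative to all learner-view embeddings.
    """
    total = 0.0
    for i, a in enumerate(anchor):
        sims = [math.exp(dot(a, l) / tau) for l in learner]
        pos = sum(sims[p] for p in positives[i])
        total -= math.log(pos / sum(sims))
    return total / len(anchor)
```

When the positive sets point at the matching nodes, the loss is low; mismatched positives raise it, which is the signal that drives both representation and structure optimization.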
457 Fine-Grained Face Detection Method Based on Anchor Loss Optimization
LIU Jialong, LI Guanghui, DAI Chenglong
In unconstrained environments, face images exhibit complex backgrounds and varying scales. Current face detectors suffer from an imbalanced number of anchors matched to faces during label assignment and from receptive field growth limited by convolutional kernels during feature extraction. These issues make fine-grained optimization of the network difficult. To address them, a fine-grained face detection method based on anchor loss optimization (FALO) is proposed. First, the relationship between the number of anchors matched to a face and the loss is analyzed, and an anchor loss optimization algorithm is introduced to fine-tune the classification and localization losses during training. Second, a context feature fusion module is designed to effectively extract multi-scale features from the background. Finally, convolutional neural networks and self-attention mechanisms are considered jointly, and a self-attention auxiliary branch is constructed to supplement the receptive field of the detector and improve attention to faces with different aspect ratios. Experiments on multiple datasets demonstrate that FALO achieves both real-time computational efficiency and high-precision detection, with certain advantages in hard sample mining.
2025 Vol. 38 (5): 457-471 [Abstract] ( 39 ) [HTML 1KB] [ PDF 3020KB] ( 40 )
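The anchor-matching imbalance described above can be made concrete by counting, per face, how many anchors exceed an IoU threshold; one plausible remedy is then to rescale each face's loss by the inverse of that count. The sketch below (threshold `thr` and the rescaling idea are assumptions for illustration, not FALO's exact algorithm):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def anchor_match_counts(faces, anchors, thr=0.35):
    """Number of anchors assigned to each face at an IoU threshold.

    A per-face loss could then be rescaled by 1 / max(count, 1) so that
    sparsely matched (typically small) faces are not under-weighted in
    the classification and localization losses.
    """
    return [sum(1 for anc in anchors if iou(f, anc) >= thr) for f in faces]
```

Large faces tend to overlap many anchors while tiny faces overlap few or none, which is exactly the imbalance the anchor loss optimization targets.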
472 Graph Neural Network Classifier Based on Decoupled Label Propagation and Multi-node Mixup Regularization
HE Wenwu, LIU Xiaoyu, MAO Guojun
Graph neural network-distilled multilayer perceptrons (MLPs) balance inference performance and efficiency in graph-related tasks to some extent. However, MLPs treat graph nodes independently and struggle to explicitly capture neighborhood information of target nodes, so their inference performance is limited. To solve this problem, a graph neural network classifier based on decoupled label propagation and multi-node mixup regularization (DLPMMR) is proposed. DLPMMR trains the MLP classifier under a knowledge distillation framework to ensure basic inference performance with high inference efficiency. During the training phase, a simple, hyperparameter-free double combination strategy is employed for multi-node mixup to enhance node diversity, and a mixup regularization term is constructed to explicitly control the complexity of the MLP, improving its generalization ability and robustness. During the inference phase, label propagation is introduced to incorporate missing neighborhood information into the predictions of the MLP. By decoupling target nodes from their neighboring nodes, the influence of neighbor information on the classification decision of a target node is effectively regulated, further enhancing the inference accuracy of the MLP. Experiments on five benchmark graph node classification datasets demonstrate that DLPMMR exhibits strong robustness and superior performance.
2025 Vol. 38 (5): 472-483 [Abstract] ( 37 ) [HTML 1KB] [ PDF 797KB] ( 32 )
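The decoupled inference-time step above can be sketched in an APPNP-style form: each propagation step mixes a node's own MLP prediction with the mean of its neighbours' current scores, with a coefficient `alpha` regulating neighbour influence. This is a minimal illustration under those assumptions, not DLPMMR's exact propagation rule.

```python
def propagate(preds, adj, alpha=0.7, steps=2):
    """Decoupled smoothing of MLP outputs over a graph.

    preds[v] is the MLP's class distribution for node v; adj[v] lists its
    neighbours. Each step mixes a node's own MLP prediction (weight
    alpha) with the mean of its neighbours' current scores (weight
    1 - alpha), so neighbour influence on the target node's decision is
    explicitly regulated by alpha.
    """
    cur = [row[:] for row in preds]
    n_cls = len(preds[0])
    for _ in range(steps):
        nxt = []
        for v, own in enumerate(preds):
            nbrs = adj[v]
            if nbrs:
                mean = [sum(cur[u][c] for u in nbrs) / len(nbrs)
                        for c in range(n_cls)]
            else:
                mean = own
            nxt.append([alpha * own[c] + (1 - alpha) * mean[c]
                        for c in range(n_cls)])
        cur = nxt
    return cur
```

A node whose neighbours confidently predict a different class is pulled toward that class, injecting the neighborhood information the standalone MLP lacks while keeping its own prediction dominant.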
Supervised by
China Association for Science and Technology
Sponsored by
Chinese Association of Automation
National Research Center for Intelligent Computing System
Institute of Intelligent Machines, Chinese Academy of Sciences
Published by
Science Press
 
Copyright © 2010 Editorial Office of Pattern Recognition and Artificial Intelligence
Address: No.350 Shushanhu Road, Hefei, Anhui Province, P.R. China Tel: 0551-65591176 Fax: 0551-65591176 Email: bjb@iim.ac.cn
Supported by Beijing Magtech Email: support@magtech.com.cn