25 May 2025, Volume 38 Issue 5
  
    Special Topics of Academic Papers at the 27th Annual Meeting of the China Association for Science and Technology
  • WANG Xiaocong, YU Zhengtao, ZHANG Yuan, GAO Shengxiang, LAI Hua, LI Ying
    2025, 38(5): 385-396.

    Existing document-level neural machine translation methods struggle to effectively capture long-distance contextual information on the target side, resulting in incoherent translations. To address this issue, a document-level neural machine translation method with target-side historical information fusion is proposed. First, contextual representations of the source language are derived via a multi-head self-attention mechanism. Second, preceding context representations of the target language are obtained using another multi-head self-attention mechanism. Next, attention with linear biases (ALiBi) is employed to dynamically inject the historical information into the current target-language representation. Finally, a higher-quality translation is obtained by integrating the source-language representation with the enhanced preceding-context representation of the target language. Experimental results on multiple datasets demonstrate that the proposed method outperforms comparison methods. Moreover, it effectively improves the coherence and completeness of document-level translations by incorporating long-sequence information modeled by recurrent mechanisms during decoding.
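The distance-penalized injection of target-side history can be illustrated with a minimal numpy sketch of attention with linear biases. This is a generic ALiBi formulation, not the paper's exact architecture; the slope schedule follows the common 2^(-8k/H) convention, and shapes and names are illustrative.

```python
import numpy as np

def alibi_slopes(num_heads):
    """Head-specific slopes, one per head: 2^(-8k/H) for head k."""
    return 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)

def alibi_attention_weights(q, k, slopes):
    """Causal attention weights with linear distance biases.

    q, k: arrays of shape (heads, seq, dim). Each key position j <= i
    is penalized by slope * (i - j), so nearby target-side history
    dominates while distant history still contributes.
    """
    h, n, d = q.shape
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)        # (h, n, n)
    dist = np.arange(n)[None, :] - np.arange(n)[:, None]  # j - i
    bias = slopes[:, None, None] * np.minimum(dist, 0)    # 0 on diagonal
    scores = scores + bias
    scores = np.where(dist[None, :, :] > 0, -np.inf, scores)  # causal mask
    scores -= scores.max(axis=-1, keepdims=True)          # stable softmax
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)
```

Because the bias grows linearly with distance rather than being learned per position, the same mechanism extrapolates to target-side histories longer than those seen in training.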

  • ZHANG Rongguo, WEN Yihao, HU Jing, WANG Lifang, LIU Xiaojun
    2025, 38(5): 397-411.

    Existing neural network-based image inpainting approaches still struggle to restore plausible edge structures and complete textures within missing regions. To address these issues, a method for edge-texture dual feature aggregation for image inpainting via structural transformation completion (ETSTC) is proposed. First, a structure transform completer module integrating axial attention and a contextual transformer is designed. The module is combined with a structure smoother module to further complement and optimize edge structures. Thus, both local edge details and global structural patterns are effectively captured while edge noise and artifacts are suppressed. Second, an edge-guided feature aligner and an edge-texture dual-feature aggregator are developed. Scaling and offset parameters are adaptively learned to resolve scale and offset discrepancies in the dynamic aggregation of edge structural features and texture features across different feature-space levels, thereby improving inpainting performance. Finally, experiments on three datasets verify the feasibility and effectiveness of ETSTC.
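The learned scale-and-offset alignment can be sketched as a FiLM-style modulation: this is one plausible form of an edge-guided aligner, assumed for illustration rather than taken from the paper, with hypothetical projection matrices `w_scale` and `w_shift`.

```python
import numpy as np

def edge_guided_align(texture, edge, w_scale, w_shift):
    """FiLM-style feature alignment (an illustrative assumption):
    per-channel scale and shift parameters are predicted from edge
    features and applied to texture features before aggregation, so
    the two feature spaces agree in scale and offset.

    texture, edge: (num_positions, channels)
    w_scale, w_shift: (channels, channels) learned projections.
    """
    scale = edge @ w_scale   # adaptive per-channel scaling
    shift = edge @ w_shift   # adaptive per-channel offset
    return texture * (1.0 + scale) + shift
```

With both projections at zero the texture features pass through unchanged, so the aligner can learn to deviate from identity only where edge and texture statistics disagree.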

  • AI Sensen, WAN Qing, LI Jinhai
    2025, 38(5): 412-424.

    In the network formal context induced by graph network data, global and local network formal concepts are obtained by introducing set connectivity on the basis of formal concepts and semiconcepts, respectively, and set connectivity is closely related to the equiconcepts of the formal context. Therefore, a correlation must exist between the two types of network formal concepts and equiconcepts. In this paper, for network formal contexts, a method for obtaining all connected subsets of the object set by means of equiconcepts is first proposed, and some properties of the connected sets are characterized through concept-induced operators. Next, a method is presented for deriving the equiconcepts of a subcontext from the equiconcepts of the original formal context. Subsequently, methods for acquiring global and local network formal concepts are obtained from the equiconcepts of the subcontext. Finally, numerical experiments illustrate the effectiveness and feasibility of the proposed acquisition methods for the two types of network formal concepts.
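The concept-forming operators underlying this construction can be sketched in a few lines. The context is represented as an object-to-attribute-set map; the equiconcept test below assumes the usual setting for network formal contexts, where objects and attributes are both the vertex set and an equiconcept is a formal concept whose extent equals its intent.

```python
def common_attrs(I, objs, all_attrs):
    """Attributes shared by every object in objs (derivation operator)."""
    out = set(all_attrs)
    for o in objs:
        out &= I[o]
    return out

def common_objs(I, attrs):
    """Objects possessing every attribute in attrs (dual operator)."""
    return {o for o, a in I.items() if attrs <= a}

def is_equiconcept(I, A, all_attrs):
    """Assumed definition: a formal concept (A, B) with A == B,
    meaningful when the object and attribute sets coincide."""
    B = common_attrs(I, A, all_attrs)
    return A == B and common_objs(I, B) == A
```

On a small reflexive adjacency context, the two mutually connected vertices {1, 2} form an equiconcept, while a single vertex of that pair does not.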

  • LU Tianying, ZHONG Luying, LIAO Shiling, YU Zhengxin, MIAO Wang, CHEN Zheyi
    2025, 38(5): 425-441.

    By integrating subgraph learning and federated learning, federated subgraph learning achieves collaborative learning of subgraph information across multiple clients while protecting data privacy. However, due to clients' different data collection methods, graph data typically exhibit non-independent and identically distributed (Non-IID) characteristics. Meanwhile, the structure and features of local graph data differ significantly across clients. These factors lead to difficult convergence and poor generalization during the training of federated subgraph learning. To solve these problems, a personalized federated subgraph learning framework with embedding alignment and parameter activation (FSL-EAPA) is proposed. First, personalized model aggregation is performed based on the similarity between clients to reduce the interference of Non-IID data with overall model performance. Next, selective parameter activation is introduced during model updates to handle the heterogeneity of subgraph structural features. Finally, the updated client models provide positive and negative clustering representations for local node embeddings to cluster local nodes of the same class. Thus, FSL-EAPA can fully learn feature representations of nodes and thereby better adapts to heterogeneous data distributions across clients. Experiments on real-world benchmark graph datasets validate the effectiveness of FSL-EAPA, showing that it achieves higher classification accuracy under various scenarios.
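Similarity-based personalized aggregation can be sketched as follows. This is a generic formulation, not the paper's exact rule: client similarity is assumed to be cosine similarity between flattened parameter vectors, and the softmax temperature `tau` is a hypothetical knob.

```python
import numpy as np

def personalized_aggregate(client_params, i, tau=1.0):
    """Similarity-weighted model aggregation for client i.

    Each client's personalized model is a softmax-weighted average of
    all clients' parameters, weighted by cosine similarity to client i,
    so dissimilar (Non-IID) clients contribute less.

    client_params: (num_clients, num_params) flattened model weights.
    """
    P = np.asarray(client_params, dtype=float)
    Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
    sims = Pn @ Pn[i]            # cosine similarity of each client to i
    w = np.exp(sims / tau)
    w /= w.sum()                 # convex combination weights
    return w @ P                 # personalized parameters for client i
```

When all clients are identical the aggregate reduces to the shared model, and as heterogeneity grows each client drifts toward the clients most similar to it.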

  • GUO Ningyuan, SUN Guoyi, LI Chao
    2025, 38(5): 442-456.

    Heterogeneous graph neural networks hold significant advantages in complex graph data mining tasks. However, existing methods typically follow a supervised learning paradigm, making them highly dependent on node labels and sensitive to noisy links in the original graph structure, which limits their application in label-scarce scenarios. To address these issues, a method for heterogeneous graph structure learning based on contrastive learning and a structure update mechanism (HGSL-CL) is proposed. The learning target is first generated as the anchor view from the original data. Type-aware feature mapping and weighted multi-view similarity computation are combined to generate the learner view. Subsequently, the anchor view is iteratively optimized through the structure update mechanism, and the node representations in the two views are obtained using semantic-level attention. Finally, node representations from both views are projected into a shared latent space via a multi-layer perceptron. Graph structure optimization is achieved by a cross-view synergistic contrastive loss function, and a positive-sample filtering strategy fusing node topological similarity and attribute similarity is introduced to enhance the discriminative ability of contrastive learning. Experiments on three datasets show that HGSL-CL outperforms baseline models in node classification and clustering tasks. Moreover, the learned graph structure generalizes to semi-supervised scenarios, where HGSL-CL again achieves better performance than the original baseline models, demonstrating the effectiveness of graph structure learning. The source code of HGSL-CL is available at https://github.com/desslie047/HGSL-CL.
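The cross-view contrastive objective can be sketched with a standard InfoNCE-style loss between the two views. This is a generic stand-in, assuming node i in one view is the positive for node i in the other and omitting the paper's positive-sample filtering; `tau` is an illustrative temperature.

```python
import numpy as np

def cross_view_nce(z1, z2, tau=0.5):
    """InfoNCE-style cross-view contrastive loss (a sketch).

    z1, z2: (num_nodes, dim) projected representations of the anchor
    view and the learner view. The matching rows form positive pairs;
    all other rows of the opposite view serve as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                 # pairwise cosine / temperature
    sim -= sim.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(sim)
    p /= p.sum(axis=1, keepdims=True)
    return float(-np.mean(np.log(np.diag(p))))
```

Minimizing this loss pulls the two views of each node together in the shared latent space while pushing apart representations of different nodes, which is what drives the structure update toward cleaner links.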

  • LIU Jialong, LI Guanghui, DAI Chenglong
    2025, 38(5): 457-471.

    In unconstrained environments, face images exhibit complex backgrounds and varying scales. Current face detectors suffer from an imbalance in the number of anchors matched to faces during label assignment and from receptive-field growth limited by convolutional kernels during feature extraction. These issues make fine-grained optimization of the network difficult. To address them, a fine-grained face detection method based on anchor loss optimization (FALO) is proposed. First, the relationship between the number of anchors matched to a face and the loss is analyzed, and an anchor loss optimization algorithm is introduced to fine-tune the classification and localization losses during training. Second, a context feature fusion module is designed to effectively extract multi-scale features from the background. Finally, convolutional neural networks and self-attention mechanisms are considered jointly, and a self-attention auxiliary branch is constructed to supplement the receptive field of the detector and improve attention to faces with different aspect ratios. Experiments on multiple datasets demonstrate that FALO achieves both real-time computational efficiency and high-precision detection, and it exhibits advantages in hard-sample mining.
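The anchor-matching imbalance the method targets can be made concrete with a small sketch: count how many anchors each face receives at a given IoU threshold. The threshold 0.35 and the box representation are illustrative assumptions, not the paper's settings.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match_counts(anchors, faces, thr=0.35):
    """Anchors matched to each face at IoU >= thr. Small faces usually
    receive far fewer matches than large ones; this per-face count is
    the quantity an anchor-loss reweighting scheme would act on."""
    return [sum(iou(a, f) >= thr for a in anchors) for f in faces]
```

Reweighting the classification and localization losses by such counts evens out the gradient contribution of sparsely matched (typically small) faces.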

  • HE Wenwu, LIU Xiaoyu, MAO Guojun
    2025, 38(5): 472-483.

    Graph neural network-distilled multilayer perceptrons (MLPs) balance inference performance and efficiency in graph-related tasks to some extent. However, MLPs treat graph nodes independently and struggle to explicitly capture neighborhood information of target nodes, so their inference performance is limited. To solve this problem, a graph neural network classifier based on decoupled label propagation and multi-node mixup regularization (DLPMMR) is proposed. DLPMMR trains the MLP classifier under a knowledge distillation framework to ensure basic inference performance with high inference efficiency. During the training phase, a naive, hyperparameter-free double-combination strategy is employed for multi-node mixup to enhance node diversity. A mixup regularization term is then constructed to explicitly control the complexity of the MLP and thus improve its generalization ability and robustness. During the inference phase, label propagation is introduced to incorporate missing neighborhood information into the MLP's predictions. By decoupling target nodes from their neighbors, the influence of neighbor information on the classification decision of the target node is effectively regulated, further enhancing inference accuracy. Experiments on five benchmark graph node classification datasets demonstrate that DLPMMR exhibits strong robustness and superior performance.
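The two phases can be sketched generically: a two-node mixup step and an inference-time label propagation that blends the MLP's own predictions with neighborhood averages. The fixed mixing coefficient, the blending weight `alpha`, and the iteration count are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def mixup_pair(x1, y1, x2, y2, lam=0.5):
    """Two-node mixup: convex combination of features and one-hot
    labels. A fixed lam = 0.5 keeps the combination hyperparameter-free;
    the paper's exact double-combination strategy may differ."""
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def label_propagate(A, preds, alpha=0.5, iters=10):
    """Decoupled label propagation at inference time.

    A: (n, n) adjacency matrix; preds: (n, c) MLP class scores.
    Each iteration blends row-normalized neighborhood averages with the
    MLP's own predictions, so neighbor influence is controlled by alpha
    rather than baked into the model."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0              # avoid division by zero for isolated nodes
    P = preds.copy()
    for _ in range(iters):
        P = alpha * (A @ P) / deg + (1 - alpha) * preds
    return P
```

With alpha = 0 the classifier falls back to the plain distilled MLP; larger alpha lets neighborhood evidence correct nodes the MLP misclassifies in isolation.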