模式识别与人工智能
Pattern Recognition and Artificial Intelligence
2023 Vol. 36 Issue 5, Published 2023-05-25

Papers and Reports
383 Skill Level Reduction and Necessary and Sufficient Conditions of Forward-Graded (Backward-Graded) Knowledge Structure in Fuzzy Formal Context
FENG Danlu, LI Jinjin, LI Zhaowen, ZHOU Yinfeng, YANG Taoli
Fuzzy skill mapping is a pathway to constructing knowledge structures. However, applying the basic local independence model to a forward-graded (backward-graded) knowledge structure results in an identifiability problem. Therefore, under the premise of fuzzy skill mapping, two problems are addressed in this paper: the excessive time consumption of skill reduction, and the search for necessary and sufficient conditions for the forward-graded (backward-graded) knowledge structure. Firstly, based on the fuzzy skill context, a pair of operators is constructed, and the simple closure space is acquired directly through the fuzzy skill concept lattice determined by this pair of operators. At the same time, the minimum skill proficiency corresponding to each knowledge state is obtained. Secondly, the concept of skill level reduction is proposed: redundant skill levels are removed by label skill reduction, and an algorithm for skill level reduction is provided. In addition, the necessary and sufficient conditions for inducing a forward-graded (backward-graded) simple closure space from a fuzzy skill mapping are presented, along with an algorithm for obtaining the forward-graded and backward-graded problem sets. Finally, comparative experiments on five UCI datasets verify the feasibility and effectiveness of the proposed algorithms, and the forward-graded and backward-graded problem sets are obtained.
2023 Vol. 36 (5): 383-406
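The derivation from a fuzzy skill mapping to a knowledge structure can be sketched in a toy example (our own illustration, not the paper's operators or reduction algorithm; the skill map, proficiency levels, and item names are all hypothetical):

```python
# Toy illustration: each item requires certain skills at minimum levels in
# [0, 1]; a competence state masters an item iff it meets every requirement.
# Enumerating all competence states yields the induced knowledge structure.
from itertools import product

# Hypothetical fuzzy skill map: item -> {skill: minimum required level}
skill_map = {
    "q1": {"s1": 0.5},
    "q2": {"s1": 0.5, "s2": 1.0},
    "q3": {"s2": 0.5},
}
skills = sorted({s for req in skill_map.values() for s in req})
levels = [0.0, 0.5, 1.0]  # assumed discrete proficiency levels

def knowledge_state(competence):
    """Items solvable under a given skill-level assignment."""
    return frozenset(
        item for item, req in skill_map.items()
        if all(competence.get(s, 0.0) >= lv for s, lv in req.items())
    )

# Enumerate all competence states to obtain the knowledge structure.
structure = {knowledge_state(dict(zip(skills, c)))
             for c in product(levels, repeat=len(skills))}
print(sorted(sorted(state) for state in structure))
```

Note that the empty state and the full item set always belong to the structure, as required of a knowledge structure.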
407 Underwater Image Enhancement Network Based on Visual Multi-head Attention and Skip-Layer Whitening
CONG Xiaofeng, GUI Jie, HE Lei, ZHANG Jun
Due to light absorption and scattering, as well as suspended small particles in underwater environments, underwater images suffer from color imbalance and detail distortion. To address these issues, an underwater image enhancement network based on visual multi-head attention and skip-layer whitening is proposed in this paper. The network adopts a hierarchical architecture: feature extraction is performed by the encoding path and image reconstruction by the decoding path, with visual multi-head self-attention blocks as the main components of both. Instance whitening is applied to shallow features, and the whitened shallow features are embedded into deep features through skip-layer connections, forming the skip-layer whitening path. Content loss and structure loss are employed during training. Comparative experiments on benchmark underwater image datasets demonstrate the effectiveness of visual multi-head self-attention and instance whitening for the underwater image enhancement task, both quantitatively and visually.
2023 Vol. 36 (5): 407-418
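The instance whitening step applied to shallow features can be sketched as follows (a minimal numpy construction under our own assumptions — here per-instance ZCA whitening across channels — not the authors' exact module):

```python
# Per-instance ZCA whitening: for each sample, decorrelate channel features
# so that color casts carried in channel statistics are suppressed before
# the features are fused through skip-layer connections.
import numpy as np

def instance_whiten(x, eps=1e-5):
    """x: (N, C, H, W) feature maps -> per-instance ZCA-whitened maps."""
    n, c, h, w = x.shape
    out = np.empty_like(x, dtype=np.float64)
    for i in range(n):
        f = x[i].reshape(c, h * w).astype(np.float64)
        f = f - f.mean(axis=1, keepdims=True)
        cov = f @ f.T / (h * w) + eps * np.eye(c)
        vals, vecs = np.linalg.eigh(cov)
        zca = vecs @ np.diag(vals ** -0.5) @ vecs.T  # cov^{-1/2}
        out[i] = (zca @ f).reshape(c, h, w)
    return out

rng = np.random.default_rng(0)
feat = rng.normal(size=(2, 4, 8, 8))
white = instance_whiten(feat)
# After whitening, each instance's channel covariance is ~ identity.
cov0 = white[0].reshape(4, -1) @ white[0].reshape(4, -1).T / 64
print(np.round(cov0, 2))
```

Unlike instance normalization, which only rescales each channel, ZCA whitening also removes cross-channel correlations.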
419 Lightweight Inverse Separable Residual Information Distillation Network for Image Super-Resolution Reconstruction
ZHAO Xiangqiang, LI Xiyao, SONG Zhaoyang
The application of deep learning-based image super-resolution reconstruction algorithms on mobile devices is limited by the sharp increase in parameters and the high computational cost that performance requirements entail. To solve this problem, a lightweight inverse separable residual information distillation network for image super-resolution reconstruction is proposed in this paper. Firstly, a progressive separable distillation shuffle module is designed to extract multi-level features while keeping the model lightweight. Multiple feature extraction connections are employed to learn a more discriminative feature representation, so the network acquires more useful information from distillation. Then, a contrast perception coordinate attention module is designed to fully leverage channel-aware and position-sensitive information, enhancing the feature selection capability. Finally, a progressive compensation residual connection is put forward to improve the utilization of shallow features and compensate for the texture details of the network. Experiments show that the proposed algorithm achieves a good balance between model complexity and reconstruction performance, yielding excellent subjective and objective quality in the reconstructed high-resolution images.
2023 Vol. 36 (5): 419-432
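The split-distill-and-shuffle pattern behind such distillation modules can be illustrated with a small numpy sketch (the split ratio, group count, and function names are our assumptions, not the paper's design):

```python
# Channel split + shuffle: part of the channels is "distilled" (kept as-is),
# the rest is passed on for further refinement, and a ShuffleNet-style
# channel shuffle mixes information between groups at negligible cost.
import numpy as np

def split_distill(x, ratio=0.5):
    """Split channels into a distilled part and a remaining part."""
    c = x.shape[1]
    k = int(c * ratio)
    return x[:, :k], x[:, k:]

def channel_shuffle(x, groups):
    """Permute channels of (N, C, H, W) features; requires C % groups == 0."""
    n, c, h, w = x.shape
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

x = np.arange(2 * 8 * 4 * 4, dtype=np.float32).reshape(2, 8, 4, 4)
distilled, remaining = split_distill(x)
mixed = channel_shuffle(x, groups=2)
print(distilled.shape, remaining.shape, mixed.shape)
```

The shuffle is a pure permutation of channels, so no information is lost, and successive blocks see cross-group features without any extra parameters.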
Researches and Applications
433 Attribute and Scale Selection Based on Test Cost in Consistent Multi-scale Decision Systems
WU Di, LIAO Shujiao, FAN Yiwen
The processing of multi-scale decision systems can simplify complex problems, and the simultaneous selection of attributes and scales is an important method in this process. Moreover, cost factors often need to be taken into account in practical data processing, yet they have not been studied in the simultaneous selection of attributes and scales. To fill this gap, a method of attribute and scale selection based on test cost in consistent multi-scale decision systems is proposed in this paper. Firstly, a corresponding rough set theoretical model is constructed, in which both attributes and scales are considered in the definitions and properties, and a test-cost-based attribute-scale significance function is provided. Then, building on the concepts and properties of rough sets applicable to multi-scale decision systems, a heuristic algorithm for the simultaneous selection of attributes and scales is proposed. Experiments on UCI datasets show that the proposed algorithm significantly reduces the total test cost and improves computational efficiency.
2023 Vol. 36 (5): 433-447
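A greedy heuristic of this flavor can be sketched on a toy table (our own simplification, not the paper's significance function: here significance is "newly consistent objects gained per unit test cost", and the coarse scale is just integer division of the fine value):

```python
# Greedy attribute-scale selection under test cost: keep adding the
# (attribute, scale) pair with the best consistency-gain/cost ratio until
# every object lies in a block with a unanimous decision.
from collections import defaultdict

# Toy table: each object has fine-scale values; coarse scale = value // 2.
objs = [
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 3, "b": 0, "d": "no"},
    {"a": 1, "b": 1, "d": "yes"},
    {"a": 2, "b": 1, "d": "no"},
]
# Candidate (attribute, scale) pairs with hypothetical test costs:
# finer scales cost more to measure.
candidates = {("a", "fine"): 4.0, ("a", "coarse"): 1.0,
              ("b", "fine"): 2.0, ("b", "coarse"): 1.0}

def view(o, attr, scale):
    v = o[attr]
    return v if scale == "fine" else v // 2

def consistent_count(selected):
    """Objects in blocks whose decision is unanimous (positive region size)."""
    blocks = defaultdict(list)
    for o in objs:
        key = tuple(view(o, a, s) for a, s in selected)
        blocks[key].append(o["d"])
    return sum(len(ds) for ds in blocks.values() if len(set(ds)) == 1)

selected, total_cost = [], 0.0
while consistent_count(selected) < len(objs):
    best = max((c for c in candidates if c not in selected),
               key=lambda c: (consistent_count(selected + [c])
                              - consistent_count(selected)) / candidates[c])
    selected.append(best)
    total_cost += candidates[best]
print(selected, total_cost)
```

In this toy case the cheap coarse scale of attribute `a` already separates the decisions perfectly, so the heuristic never pays for the fine scale.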
448 Unstructured Pruning Method Based on Neural Architecture Search
WANG Xianbao, LIU Pengfei, XIANG Sheng, WANG Xingang
Because it is difficult to remove redundant units in deep neural networks using objective criteria, pruned networks often exhibit a sharp decline in performance. To address this issue, an unstructured pruning method based on neural architecture search (UPNAS) is proposed. Firstly, a mask learning module is defined in the search space to remove redundant weight parameters. Then, layer-wise relevance propagation is introduced: during backward propagation, a layer-wise relevance score is assigned to each network weight to measure its contribution to the network output and to assist the update of the binary mask parameters. Finally, the network weights, architecture parameters and layer-wise relevance scores are updated jointly. Experiments on the CIFAR-10 and ImageNet classification datasets show that UPNAS maintains the generalization ability of the network at high pruning rates and meets the requirements of model deployment.
2023 Vol. 36 (5): 448-458
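The core idea of scoring weights and masking the least relevant ones can be sketched in numpy (our own construction, not UPNAS: the |weight x gradient| relevance proxy and the threshold rule are assumptions standing in for the learned mask and layer-wise relevance scores):

```python
# Unstructured pruning via relevance scores: every weight gets a score,
# and the lowest-scored fraction is zeroed out by a binary mask, leaving
# the layer's shape (and thus the architecture) unchanged.
import numpy as np

def prune_by_relevance(weights, grads, prune_rate):
    """Return a 0/1 mask keeping the top (1 - prune_rate) relevant weights."""
    scores = np.abs(weights * grads)
    k = int(scores.size * prune_rate)
    threshold = np.partition(scores.ravel(), k)[k] if k > 0 else -np.inf
    return (scores >= threshold).astype(weights.dtype)

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64))   # hypothetical layer weights
g = rng.normal(size=(64, 64))   # hypothetical accumulated gradients
mask = prune_by_relevance(w, g, prune_rate=0.9)
sparsity = 1.0 - mask.mean()
print(f"sparsity ~ {sparsity:.2f}")
```

Applying `w * mask` in the forward pass then realizes the pruned network; in a search-based method the mask entries would themselves be trainable parameters rather than fixed by a threshold.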
459 Chinese Event Extraction Method Based on Graph Attention and Table Pointer Network
LIU Wei, MA Yawei, PENG Yan, LI Weimin
Existing Chinese event extraction methods model the dependencies between an event trigger word and all its corresponding arguments inadequately, which weakens information interaction within an event and degrades argument extraction, especially when argument roles overlap. To address this issue, a Chinese event extraction method based on graph attention and table pointer network (ATCEE) is proposed in this paper. Firstly, pre-trained character vectors and part-of-speech tagging vectors are fused as feature inputs. Then, enhanced features of the event text are obtained by a bidirectional long short-term memory network. Next, a character-level dependency syntax graph is constructed and fed into a multi-layer graph attention network to capture long-range dependencies among the constituents of the event text. Subsequently, the dependencies between an event trigger word and all its corresponding arguments are further enhanced via a table filling strategy. Finally, the learned table features are input into a fully connected layer and a table pointer network layer for the joint extraction of triggers and arguments; decoding argument boundaries with the table pointer network identifies long argument entities better. Experimental results show that ATCEE significantly outperforms previous event extraction methods on the Chinese benchmark datasets ACE2005 and DuEE1.0, and that introducing character-level dependency features and the table filling strategy alleviates the argument role overlap problem to some extent. The source code of ATCEE is available at https://github.com/event6/ATCEE.
2023 Vol. 36 (5): 459-470
471 Dual View Contrastive Learning Networks for Multi-hop Reading Comprehension
CHEN Jinwen, CHEN Yuzhong
Multi-hop reading comprehension is an important task in machine reading comprehension. It aims to construct a multi-hop reasoning chain from multiple documents to answer questions that require combining evidence across documents. Graph neural networks are widely applied to this task, but two shortcomings remain: insufficient acquisition of contextual mutual information for the multi-document reasoning chain, and noise introduced when candidate answers are mistakenly judged correct merely because they are similar to the question. To address these issues, dual view contrastive learning networks (DVCGN) for multi-hop reading comprehension are proposed. Firstly, a heterogeneous-graph-based node-level contrastive learning method is employed. Positive and negative sample pairs are generated at the node level, and node-level and feature-level corruptions are applied to the heterogeneous graph to construct dual views, which are updated iteratively through a graph attention network. DVCGN maximizes the similarity of node representations across the dual views to learn node representations, obtain rich contextual semantic information, and accurately model each node representation and its relationships with the remaining nodes in the reasoning chain. Consequently, multi-granularity contextual information is effectively distinguished from interference, and richer mutual information is constructed for the reasoning chain. Furthermore, a question-guided graph node pruning method is proposed: it leverages question information to filter answer entity nodes, narrowing the range of candidate answers and mitigating the noise caused by similar expressions in evidence sentences. Finally, experimental results on the HOTPOTQA dataset demonstrate the superior performance of DVCGN.
2023 Vol. 36 (5): 471-482
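The dual-view contrastive objective can be sketched in numpy (a rough construction under our own assumptions: random embeddings, a feature-masking view, an edge-dropping view realized as neighbor averaging, and an InfoNCE-style loss — not DVCGN's actual encoder or graph):

```python
# Node-level contrastive learning over two corrupted views of a graph:
# view 1 corrupts features, view 2 corrupts structure; the loss pulls the
# two views of the same node together against all other nodes.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 6, 8
h = rng.normal(size=(n_nodes, dim))            # base node embeddings

# View 1: feature-level corruption (randomly zero ~30% of dimensions).
mask = rng.random((n_nodes, dim)) > 0.3
view1 = h * mask
# View 2: structure-level corruption (edge dropping), followed by a
# simple neighbor-averaging step on the sparsified adjacency.
adj = (rng.random((n_nodes, n_nodes)) > 0.5).astype(float)
np.fill_diagonal(adj, 1.0)
view2 = (adj / adj.sum(1, keepdims=True)) @ h

def info_nce(z1, z2, tau=0.5):
    z1 = z1 / (np.linalg.norm(z1, axis=1, keepdims=True) + 1e-12)
    z2 = z2 / (np.linalg.norm(z2, axis=1, keepdims=True) + 1e-12)
    sim = z1 @ z2.T / tau                      # pairwise similarities
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))         # positives on the diagonal

loss = info_nce(view1, view2)
print(round(loss, 4))
```

Minimizing this loss maximizes agreement between each node's two views, which is one standard surrogate for maximizing mutual information between them.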
Supervised by
China Association for Science and Technology
Sponsored by
Chinese Association of Automation
National Research Center for Intelligent Computing System
Institute of Intelligent Machines, Chinese Academy of Sciences
Published by
Science Press
 
Copyright © 2010 Editorial Office of Pattern Recognition and Artificial Intelligence
Address: No. 350 Shushanhu Road, Hefei, Anhui Province, P.R. China  Tel: 0551-65591176  Fax: 0551-65591176  Email: bjb@iim.ac.cn