Web intelligence-enhanced unmanned aerial vehicle target search model based on reinforcement learning for cooperative tasks

Mingke Gao (The 32nd Research Institute of China Electronics Technology Group Corporation, Shanghai, China)
Zhenyu Zhang (School of Computer Engineering and Science, Shanghai University, Shanghai, China)
Jinyuan Zhang (School of Computer Engineering and Science, Shanghai University, Shanghai, China)
Shihao Tang (School of Computer Engineering and Science, Shanghai University, Shanghai, China)
Han Zhang (School of Computer Engineering and Science, Shanghai University, Shanghai, China)
Tao Pang (The 32nd Research Institute of China Electronics Technology Group Corporation, Shanghai, China)

International Journal of Web Information Systems

ISSN: 1744-0084

Article publication date: 19 March 2024

Issue publication date: 30 April 2024

Abstract

Purpose

Drawing on the strengths of reinforcement learning (RL), this study uses RL to train unmanned aerial vehicles (UAVs) to perform two tasks: target search and cooperative obstacle avoidance.

Design/methodology/approach

Drawing inspiration from the recurrent state-space model and recurrent models (RPM), this study proposes a simpler yet highly effective model, the unmanned aerial vehicle prediction model (UAVPM). Its main objective is to assist in training the UAV representation model with a recurrent neural network, using the soft actor-critic algorithm.

Findings

This study proposes a generalized actor-critic framework consisting of three modules: representation, policy and value. This architecture serves as the foundation for training UAVPM, which is designed to aid in training the recurrent representation using a transition model, a reward recovery model and an observation recovery model. Unlike traditional approaches that rely solely on reward signals, UAVPM incorporates temporal information and also allows extra knowledge or information from virtual training environments to be included. This study designs UAV target search and UAV cooperative obstacle avoidance tasks, and the proposed algorithm outperforms baselines in both environments.
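The three-module framework described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all class and function names are assumptions, the layers are fixed random linear maps standing in for trained networks, and the dimensions are arbitrary. It shows only the structural split the abstract describes: a recurrent representation module feeding policy and value heads, with auxiliary transition/reward/observation recovery heads that exist solely to shape the hidden state during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Stand-in for a learned layer: a fixed random weight matrix with tanh.
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ W)

class UAVAgent:
    """Illustrative actor-critic with a recurrent representation module.

    representation: (obs, prev_hidden) -> hidden state
    policy:         hidden -> action
    value:          hidden -> scalar estimate
    The auxiliary recovery heads are used only at training time to give
    the hidden state a learning signal; they are not used at inference,
    mirroring the role the abstract assigns to UAVPM.
    """

    def __init__(self, obs_dim=8, hidden_dim=16, act_dim=2):
        self.rep = linear(obs_dim + hidden_dim, hidden_dim)   # representation
        self.policy = linear(hidden_dim, act_dim)             # actor
        self.value = linear(hidden_dim, 1)                    # critic
        # Training-time auxiliary heads (transition / reward / observation):
        self.transition = linear(hidden_dim + act_dim, hidden_dim)
        self.reward_head = linear(hidden_dim, 1)
        self.obs_head = linear(hidden_dim, obs_dim)
        self.hidden_dim = hidden_dim

    def step(self, obs, hidden):
        # Recurrent update: fold the new observation into the hidden state,
        # then read action and value from that state.
        hidden = self.rep(np.concatenate([obs, hidden]))
        action = self.policy(hidden)
        value = self.value(hidden)
        return action, value, hidden

agent = UAVAgent()
h = np.zeros(agent.hidden_dim)
obs = rng.standard_normal(8)
action, value, h = agent.step(obs, h)
print(action.shape, value.shape)  # (2,) (1,)
```

Because the policy and value heads read only from the hidden state, the auxiliary heads can be discarded after training without changing inference behavior, which is the design property the originality section emphasizes.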

Originality/value

It is important to note that UAVPM does not play a role in the inference phase. This means that the representation model and policy remain independent of UAVPM. Consequently, this study can introduce additional “cheating” information from virtual training environments to guide the UAV representation without concerns about its real-world existence. By leveraging historical information more effectively, this study enhances UAVs’ decision-making abilities, thus improving the performance of both tasks at hand.

Acknowledgements

The research reported in this paper was supported in part by the National Key Research and Development Program of China under the grant No. 2022YFB4500900.

Citation

Gao, M., Zhang, Z., Zhang, J., Tang, S., Zhang, H. and Pang, T. (2024), "Web intelligence-enhanced unmanned aerial vehicle target search model based on reinforcement learning for cooperative tasks", International Journal of Web Information Systems, Vol. 20 No. 3, pp. 289-303. https://doi.org/10.1108/IJWIS-10-2023-0184

Publisher

Emerald Publishing Limited

Copyright © 2024, Emerald Publishing Limited