
Deep understanding of radiology reports: leveraging dynamic convolution in chest X-ray images

Tarun Jaiswal (Department of Computer Application, National Institute of Technology Raipur, Raipur, India)
Manju Pandey (Department of Computer Application, National Institute of Technology Raipur, Raipur, India)
Priyanka Tripathi (Department of Computer Application, National Institute of Technology Raipur, Raipur, India)

Data Technologies and Applications

ISSN: 2514-9288

Article publication date: 29 November 2023


Abstract

Purpose

The purpose of this study is to investigate and demonstrate the advances that dynamic convolutional encoder–decoder networks (DyCNN) bring to chest X-ray image captioning. Typical convolutional neural networks (CNNs) apply a uniform operation to every pixel in an image and therefore struggle to capture local and global contextual information effectively. To address this, we propose an approach that integrates a dynamic convolution operation at the encoder stage, improving image encoding quality and disease detection. In addition, a decoder based on the gated recurrent unit (GRU) is used for language modeling, and an attention network is incorporated to enhance the coherence of the generated reports. This combination improves feature extraction and mimics the expertise of radiologists by selectively focusing on important areas, producing coherent captions with valuable clinical information.
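For readers unfamiliar with the operator, the sketch below shows one common formulation of dynamic convolution, in which a lightweight gate mixes K candidate kernels per input image instead of applying a single fixed kernel everywhere. The layer sizes, number of kernels and gating design are illustrative assumptions; the paper's operator follows Verelst and Tuytelaars (2019) and may differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Minimal dynamic-convolution sketch: a softmax-gated mixture of K
    candidate kernels, recomputed per input sample (an assumption for
    illustration, not the paper's exact operator)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4):
        super().__init__()
        # K candidate kernels to be mixed per input image
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch,
                        kernel_size, kernel_size) * 0.02)
        # Lightweight gate: global average pooling -> K attention scores
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_kernels))
        self.padding = kernel_size // 2

    def forward(self, x):
        b, c, h, w = x.shape
        attn = F.softmax(self.gate(x), dim=1)                 # (B, K)
        # Mix the K kernels per sample: (B, out_ch, in_ch, k, k)
        mixed = torch.einsum('bk,koihw->boihw', attn, self.weight)
        out_ch = mixed.shape[1]
        # Grouped-convolution trick: apply a different mixed kernel
        # to each sample in the batch with a single conv2d call
        y = F.conv2d(x.reshape(1, b * c, h, w),
                     mixed.reshape(b * out_ch, c, *mixed.shape[-2:]),
                     padding=self.padding, groups=b)
        return y.reshape(b, out_ch, h, w)
```

In this formulation the receptive behavior adapts per image through the gate, which is the property the abstract contrasts with the uniform operation of a standard CNN.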

Design/methodology/approach

In this study, we present a new report generation approach that uses a ResNet-101 encoder augmented with dynamic convolution (DyCNN) (Verelst and Tuytelaars, 2019) and a GRU decoder (Dey and Salem, 2017; Pan et al., 2020), along with an attention network (see Figure 1). This integration extends the capabilities of image encoding and sequential caption generation, representing a shift from conventional CNN architectures. With its ability to dynamically adapt receptive fields, the DyCNN excels at capturing features of varying scales within chest X-ray (CXR) images. This adaptability significantly enhances the granularity of feature extraction, enabling precise representation of localized abnormalities and structural intricacies. By incorporating this flexibility into the encoding process, our model can distil meaningful and contextually rich features from the radiographic data. The attention mechanism enables the model to selectively focus on different regions of the image during caption generation, assigning different importance weights to different regions and thereby mimicking human perception. In parallel, the GRU-based decoder ensures a smooth, sequential generation of captions.
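The sketch below illustrates a single decoding step of an attention-equipped GRU decoder of the kind described above: additive attention scores each spatial location of the encoder feature map against the current hidden state, and the attended context is fed into a GRU cell together with the previous word. All dimensions, layer names and the additive (Bahdanau-style) attention form are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AttentionGRUDecoder(nn.Module):
    """One step of a GRU decoder with additive attention over encoder
    features. Hypothetical dimensions; a sketch, not the paper's model."""
    def __init__(self, feat_dim=2048, embed_dim=256,
                 hidden_dim=512, vocab=10000):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed_dim)
        self.att_feat = nn.Linear(feat_dim, hidden_dim)
        self.att_hid = nn.Linear(hidden_dim, hidden_dim)
        self.att_score = nn.Linear(hidden_dim, 1)
        self.gru = nn.GRUCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab)

    def forward(self, prev_word, hidden, feats):
        # feats: (B, N, feat_dim) -- N spatial locations from the encoder
        # Additive attention: score each location against the hidden state
        scores = self.att_score(torch.tanh(
            self.att_feat(feats) + self.att_hid(hidden).unsqueeze(1)))
        alpha = torch.softmax(scores, dim=1)           # (B, N, 1) weights
        context = (alpha * feats).sum(dim=1)           # (B, feat_dim)
        # GRU step conditioned on the previous word and attended context
        hidden = self.gru(
            torch.cat([self.embed(prev_word), context], dim=1), hidden)
        return self.out(hidden), hidden, alpha.squeeze(-1)
```

At inference, this step is applied repeatedly, feeding each predicted word back in as `prev_word`; the returned attention weights indicate which image regions informed each generated word.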

Findings

The findings of this study highlight the significant advances achieved in chest X-ray image captioning through dynamic convolutional encoder–decoder networks (DyCNN). In experiments on the IU Chest X-ray dataset, the proposed model outperformed other state-of-the-art approaches, achieving a BLEU_1 score of 0.591, a BLEU_2 score of 0.347, a BLEU_3 score of 0.277 and a BLEU_4 score of 0.155. These results underline the efficiency and efficacy of the model in producing precise radiology reports, enhancing image interpretation and clinical decision-making.
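For context, BLEU_n measures n-gram overlap between a generated report and its reference. The snippet below shows how such scores can be computed with NLTK's corpus_bleu; the sentences and tokenization are hypothetical and unrelated to the reported results.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical reference report and generated report, pre-tokenized
reference = [["the", "heart", "size", "is", "normal", "and",
              "the", "lungs", "are", "clear"]]
candidate = ["heart", "size", "normal", "lungs", "are", "clear"]

smooth = SmoothingFunction().method1
weights = [(1, 0, 0, 0), (0.5, 0.5, 0, 0),
           (1/3, 1/3, 1/3, 0), (0.25, 0.25, 0.25, 0.25)]
for n, w in enumerate(weights, start=1):
    # Uniform weights over 1..n grams give the standard BLEU_n score
    score = corpus_bleu([reference], [candidate], weights=w,
                        smoothing_function=smooth)
    print(f"BLEU_{n}: {score:.3f}")
```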

Originality/value

This work is the first of its kind to employ a DyCNN as an encoder for extracting features from CXR images. In addition, a GRU is used as the decoder for language modeling, and attention mechanisms are incorporated into the model architecture.

Citation

Jaiswal, T., Pandey, M. and Tripathi, P. (2023), "Deep understanding of radiology reports: leveraging dynamic convolution in chest X-ray images", Data Technologies and Applications, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/DTA-07-2023-0307

Publisher

Emerald Publishing Limited

Copyright © 2023, Emerald Publishing Limited
