Authors
Megan Lantz, Emil Y Sidky, Ingrid S Reiser, Xiaochuan Pan, Gregory Ongie
Publication date
2024/2/15
Journal
arXiv preprint arXiv:2402.10010
Description
Deep neural networks used for reconstructing sparse-view CT data are typically trained by minimizing a pixel-wise mean-squared error or similar loss function over a set of training images. However, networks trained with such pixel-wise losses are prone to wiping out small, low-contrast features that are critical for screening and diagnosis. To remedy this issue, we introduce a novel training loss inspired by the model observer framework to enhance the detectability of weak signals in the reconstructions. We evaluate our approach on the reconstruction of synthetic sparse-view breast CT data, and demonstrate an improvement in signal detectability with the proposed loss.
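As a rough illustration of the general idea, and not the paper's actual loss, the sketch below shows one way a pixel-wise MSE term could be combined with a penalty derived from a fixed linear model-observer template, so that the observer's response to a weak signal is preserved in the reconstruction. The names observer_aware_loss, template, and the weight lam are assumptions introduced here for illustration only.

```python
# Illustrative sketch only (assumed names; not the authors' implementation):
# augment pixel-wise MSE with a term that keeps a linear model observer's
# test statistic on the reconstruction close to its value on the ground truth.
import torch
import torch.nn.functional as F

def observer_aware_loss(recon, target, template, lam=0.1):
    """recon, target: (B, 1, H, W) images; template: (1, 1, H, W) observer template."""
    mse = F.mse_loss(recon, target)
    # Linear observer test statistic: inner product of each image with the template.
    t_recon = (recon * template).sum(dim=(1, 2, 3))
    t_target = (target * template).sum(dim=(1, 2, 3))
    # Penalize changes in the observer response so weak signals stay detectable.
    detect = F.mse_loss(t_recon, t_target)
    return mse + lam * detect

if __name__ == "__main__":
    # Toy usage with random tensors standing in for reconstructions and targets.
    B, H, W = 4, 64, 64
    recon = torch.rand(B, 1, H, W, requires_grad=True)
    target = torch.rand(B, 1, H, W)
    template = torch.randn(1, 1, H, W)  # e.g., an assumed expected-signal profile
    loss = observer_aware_loss(recon, target, template)
    loss.backward()
    print(loss.item())
```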