Complex-domain neural network achieves state-of-the-art coherent imaging accuracy, reducing exposure time and data volume by more than one order of magnitude.
Computational imaging holds the promise of revolutionizing optical imaging with its wide field of view and high-resolution capabilities. Through the joint reconstruction of amplitude and phase — a technique known as "coherent imaging" or "holographic imaging" — the throughput of an optical system can expand to billions of optically resolvable spots. This breakthrough empowers researchers to gain crucial insights into cellular and molecular structures, making a significant impact on biomedical research.
Despite this potential, existing large-scale coherent imaging techniques face challenges that hinder their widespread clinical use. Many of these techniques require multiple scanning or modulation steps, resulting in long data collection times to achieve high resolution and signal-to-noise ratio. This slows down imaging and limits its feasibility in clinical settings, forcing tradeoffs among speed, resolution, and quality.
Recent image-denoising methods have offered a potential solution. They employ denoising algorithms during the iterative reconstruction process, aiming to enhance imaging quality even with sparse data. Traditional methods, however, are computationally complex, while deep learning-based techniques tend to have poor generalization and may sacrifice image details.
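The general idea behind such methods is often called "plug-and-play" reconstruction: an off-the-shelf denoiser is inserted between data-fidelity updates of an iterative solver, acting as an implicit image prior. The sketch below illustrates that loop on a toy 1-D signal; it is an assumption about the general approach, not the paper's algorithm, and the names `box_denoise` and `pnp_reconstruct` and the moving-average denoiser are stand-ins for illustration (real pipelines plug in BM3D or a learned network).

```python
import numpy as np

def box_denoise(x, k=3):
    # Stand-in denoiser: simple moving average over a window of k samples.
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for i in range(len(x)):
        out[i] = xp[i:i + k].mean()
    return out

def pnp_reconstruct(y, steps=50, tau=0.5):
    # Plug-and-play iteration: a gradient step on the data-fidelity term
    # 0.5 * ||x - y||^2, followed by a denoising step acting as the prior.
    x = y.copy()
    for _ in range(steps):
        x = x - tau * (x - y)   # gradient of 0.5 * ||x - y||^2
        x = box_denoise(x)      # denoiser enforces the implicit prior
    return x

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))
noisy = clean + 0.3 * rng.standard_normal(64)
recon = pnp_reconstruct(noisy)
```

Because the denoiser is swappable, the same loop accepts classical or learned denoisers; the generalization problems mentioned above arise when a learned denoiser has to cope with data unlike its training set.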
In a study published in the journal Advanced Photonics Nexus, a team of researchers demonstrated a complex-domain neural network that significantly enhances large-scale coherent imaging. This opens new possibilities for low-sampling and high-quality coherent imaging in various modalities. The technique exploits latent coupling information between amplitude and phase components, leading to multidimensional representations of complex wavefronts. The framework shows strong generalization and robustness across various coherent imaging modalities.
The research team, from the Beijing Institute of Technology, the California Institute of Technology, and the University of Connecticut, constructed a network using a two-dimensional complex convolution unit and a complex activation function. They also developed a comprehensive multi-source noise model for coherent imaging, encompassing speckle noise, Poisson noise, Gaussian noise, and super-resolution reconstruction noise. The multi-source noise model improves domain adaptation from synthetic training data to real data.
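To make the complex-convolution idea concrete: a complex kernel applied to a complex field splits, by linearity, into four real convolutions. The sketch below shows this plus one common complex activation (CReLU, which rectifies the real and imaginary parts independently); these are standard building blocks, assumed for illustration, and the paper's exact units and activation may differ.

```python
import numpy as np

def conv2d_valid(x, k):
    # Naive real-valued "valid" 2-D convolution (cross-correlation form,
    # as used by most deep-learning frameworks).
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def complex_conv2d(z, w):
    # (x + iy) * (a + ib) -> (x*a - y*b) + i(x*b + y*a), applied per-pixel
    # via four real convolutions; this keeps amplitude and phase coupled.
    x, y = z.real, z.imag
    a, b = w.real, w.imag
    return (conv2d_valid(x, a) - conv2d_valid(y, b)) \
        + 1j * (conv2d_valid(x, b) + conv2d_valid(y, a))

def crelu(z):
    # CReLU: ReLU applied separately to real and imaginary parts
    # (one common complex activation; other choices, e.g. modReLU, exist).
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

rng = np.random.default_rng(0)
field = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
feat = crelu(complex_conv2d(field, kernel))   # complex feature map, shape (6, 6)
```

Processing the wavefront as a single complex quantity, rather than as two unrelated real channels, is what lets such a network exploit the amplitude-phase coupling mentioned above.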
The reported technique was applied to several coherent imaging modalities, including Kramers-Kronig relations holography, Fourier ptychographic microscopy, and lensless coded ptychography. Extensive simulations and experiments showed that the technique maintains high-quality reconstructions and efficiency while reducing exposure time and data volume by an order of magnitude. The high-quality reconstructions offer significant implications for subsequent high-level semantic analysis, such as high-accuracy cell segmentation and virtual staining, potentially fostering the development of intelligent medical care.
The capability for rapid, high-resolution imaging with reduced exposure time and data volume provides immense potential for real-time cell observation. Moreover, the integration of this technology with artificial intelligence diagnosis could unlock the secrets of complex biological systems and push the boundaries of medical diagnostics.
Reference: “Complex-domain-enhancing neural network for large-scale coherent imaging” by Xuyang Chang, Rifa Zhao, Shaowei Jiang, Cheng Shen, Guoan Zheng, Changhuei Yang and Liheng Bian, 4 July 2023, Advanced Photonics Nexus.