Training with synthetic data

Our data set contains 200 images from the Hubble Space Telescope (HST) archive, divided into training, evaluation and validation sets. The images were captured by the UVIS (UV/Visible channel) detector of the Wide Field Camera 3 (WFC3), a ∼ 4000 × 4000 pixel detector made up of two CCDs (2051 × 4096 pixels each) separated by a 31-pixel gap. From the broad variety of available filters we select two wide filters, F555W and F606W, with pivot wavelengths of 530.8 nm and 588.7 nm respectively. Henceforth we refer to these images as the real data; they serve as the ground truth when training the network. As input to the network we use synthetic data, generated from the real data but with additional noise and shorter exposure times. For more information about the synthetic data, see Section 2.1 of the paper.

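The exact simulation procedure is described in Section 2.1 of the paper; purely as an illustration of the idea, a short exposure can be simulated from a long one by scaling the counts by the exposure-time ratio and re-applying a simple noise model. The sketch below assumes Poisson shot noise plus Gaussian read noise, with illustrative gain and read-noise values that are not taken from the paper:

    import numpy as np

    def make_synthetic_exposure(real_image, ratio, gain=1.5, read_noise=3.1, rng=None):
        """Simulate a short exposure from a long (ground-truth) one.

        real_image -- counts of the long exposure
        ratio      -- exposure-time ratio t_short / t_long, e.g. 0.25
        gain, read_noise -- illustrative detector parameters, not from the paper
        """
        if rng is None:
            rng = np.random.default_rng()
        # Scale the signal down to the shorter exposure time.
        scaled = np.clip(real_image, 0, None) * ratio
        # Shot noise: photon counts follow a Poisson distribution.
        noisy = rng.poisson(scaled * gain) / gain
        # Read noise: additive Gaussian, independent of exposure time.
        noisy = noisy + rng.normal(0.0, read_noise, size=real_image.shape)
        return noisy.astype(np.float32)
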
During this project we trained more than 30 different networks, varying the loss function, the number of input/output channels, the exposure-time ratio and the type of up-sampling. In comparison with current methods, which stack large numbers of exposures, our method, Astro U-net, is less time consuming and yields results of equivalent quality. Astro U-net is a fully-convolutional neural network for image de-noising and enhancement that requires only the exposure-time ratio as an additional input. Moreover, Astro U-net can handle images of different scales. For the results, see the paper.

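The full architecture, loss functions and training details are in the paper; the sketch below only illustrates the general shape of such a network, assuming a standard U-net encoder-decoder in PyTorch with the exposure-time ratio broadcast into an extra input channel (the layer sizes and names are illustrative, not those of Astro U-net):

    import torch
    import torch.nn as nn

    def block(c_in, c_out):
        # Two 3x3 convolutions with ReLU: the basic U-net building block.
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class TinyAstroUNet(nn.Module):
        # Illustrative model, much shallower than the real network.
        def __init__(self):
            super().__init__()
            self.enc1 = block(2, 32)                 # 2 channels: image + ratio map
            self.enc2 = block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.up   = nn.Upsample(scale_factor=2)  # one of several up-sampling choices
            self.dec1 = block(64 + 32, 32)           # skip connection from enc1
            self.out  = nn.Conv2d(32, 1, 1)

        def forward(self, image, ratio):
            # Broadcast the scalar exposure-time ratio to a constant channel.
            r = torch.full_like(image, ratio)
            x1 = self.enc1(torch.cat([image, r], dim=1))
            x2 = self.enc2(self.pool(x1))
            x = self.dec1(torch.cat([self.up(x2), x1], dim=1))
            return self.out(x)

    # e.g. denoised = TinyAstroUNet()(torch.randn(1, 1, 128, 128), ratio=0.25)

Because every layer is convolutional, the same weights apply to any input size (here, any height and width divisible by two), which is what lets such a network handle images of different scales.
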
Results

Paper: A. Vojtekova et al. 2020, MNRAS, Learning to Denoise Astronomical Images with U-nets 

Presentation: ESAC TechTalk