# Residual Dense Net De-Interlace Filter (RDNDIF)
Work-in-progress deep de-interlacing filter, based on the architecture proposed by Bernasconi et al. from Disney Research | Studios (original publication).
The publication appears to deliberately omit some implementation details, so the implementation presented here may not exactly match the one intended by the authors. First, the RDB does not add the convolved input feature maps to the output of the network. In image denoising, the input is added back because the RDB is expected to remove the noise of a still shot. Here, the network is trying to predict missing fields; adding unsuited temporal data back at the output would, intuitively, increase aliasing, which is undesirable.
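To make the omitted skip connection concrete, here is a minimal PyTorch sketch of a residual dense block without the input-to-output residual add. The channel count, depth, and the ReLU/1x1-fusion layout are illustrative assumptions, not the values used in the paper or in this repository:

```python
# Sketch of an RDB WITHOUT the global input-to-output skip connection.
# Channel sizes and depth are illustrative assumptions.
import torch
import torch.nn as nn

class RDB(nn.Module):
    def __init__(self, channels: int = 16, depth: int = 3, ks: int = 3):
        super().__init__()
        # Dense connectivity: each conv sees all previous feature maps.
        self.convs = nn.ModuleList(
            nn.Conv2d(channels * (i + 1), channels, ks, padding=ks // 2)
            for i in range(depth)
        )
        # 1x1 conv fuses the densely concatenated features.
        self.fusion = nn.Conv2d(channels * (depth + 1), channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        # NOTE: no `+ x` here -- the input is not added back, since the
        # network predicts missing fields rather than denoising its input.
        return self.fusion(torch.cat(feats, dim=1))
```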
Additional steps for training:
- Load an odd number N of consecutive fields to predict the complementary fields of frame ⌊N / 2⌋ + 1. The model expects frame ⌊N / 2⌋ + 1 to be a bottom field (flip the input vertically if it is not).
- Predict the complementary field of frame ⌊N / 2⌋ + 1 with NNEDI3(field=3), using the frame with fields ⌊N / 2⌋ (top) and ⌊N / 2⌋ + 1 (bottom), and take the second frame returned by NNEDI3 (= the estimate).
- Provide the set of fields to the network. Add the network's output to the even fields of the estimate.
- Visualize the output with `show_img(output + estimate)`.
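The steps above can be sketched with NumPy arrays. The stand-ins below (line doubling in place of NNEDI3, a zero residual in place of the network output) are illustrative assumptions, not this project's API:

```python
import numpy as np

h, w, n_fields = 8, 8, 5            # N = 5 consecutive fields (odd)
fields = np.random.rand(n_fields, h // 2, w)
mid = n_fields // 2                 # 0-based index of frame floor(N / 2) + 1

# NNEDI3 stand-in: interpolate the missing top field of the middle
# (bottom) field by line doubling -- a crude placeholder, not NNEDI3.
estimate = np.repeat(fields[mid], 2, axis=0)

# Network stand-in: a real net would predict this residual from the fields.
residual = np.zeros((h // 2, w))

# Weave: add the residual to the estimated (even) lines and keep the
# known bottom-field (odd) lines untouched.
out = estimate.copy()
out[0::2] += residual
out[1::2] = fields[mid]
```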
tl;dr for training:
```python
kernel_sizes = [[13, 7, 3], [9, 5, 3], [5, 3, 3]]
n_rdb_blocks = 3
depths_of_rdbs = 3  # provided again in case the kernel_size of a block is constant for all components
net = RDBlockNet(n_rdb=n_rdb_blocks, rdb_depths=depths_of_rdbs, kss=kernel_sizes)
train(net, './vimeo90k', n_epochs=10)
```
For evaluation with VapourSynth (VS):
```python
net = load_network('./model_path')
dndif_filter = Dndif(net)
clip_out = dndif_filter.dndif(clip, tff=True)
```