How to use nnedi3 to de-interlace video in FFmpeg
nnedi3 is a powerful de-interlacing and upscaling tool that uses a neural network to process video. It can produce very high quality results, at the cost of significantly more CPU time than conventional de-interlacing filters such as yadif or bwdif. Those filters are much faster and can run in real time, but they may introduce motion blur and other visual artifacts in the resulting video.
Note that the nnedi filter included in FFmpeg can only de-interlace video, not upscale it. If you want nnedi3's upscaling features, you will need a frameserver such as AviSynth+ or VapourSynth.
What is video interlacing?
Interlaced video is common in broadcast television, where bandwidth is limited. Each frame carries only half of the picture data: alternating horizontal lines are captured at two different moments in time and stored as two "fields". Displaying the fields in rapid alternation tricks the eye into perceiving motion at the full field rate while transmitting only half the data of an equivalent progressive signal.
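Before de-interlacing, it is worth confirming that the source is actually interlaced. One way to do this is with FFmpeg's idet filter, which inspects a run of frames and prints its interlacing estimate to stderr (the input filename here is a placeholder):

```shell
# Analyze the first 200 frames and print idet's interlacing statistics.
# "source-video.mkv" is a placeholder for your own file.
ffmpeg -hide_banner -i "source-video.mkv" \
  -vf idet -frames:v 200 -an -f null - 2>&1 | grep "detection"
```

If the "Single frame detection" and "Multi frame detection" counts are dominated by TFF or BFF rather than Progressive, the source is interlaced.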
How does nnedi3 work?
The nnedi3 library uses a neural network of configurable size to regenerate the lines missing from each field, reconstructing a full progressive frame.
Example of how to use nnedi3 on the command line:
First, download the nnedi3_weights.bin file, which contains the trained weights used by the filter's neural network. You can find it on GitHub. Place the file in the same directory as your source video, or modify the command below to point to the correct path.
ffmpeg \
-i "source-video.mkv" \
-map 0:0 -map 0:1 \
-vf "nnedi=weights=nnedi3_weights.bin" \
-c:v libx264 \
-preset slow \
-crf 19 \
-c:a aac \
-b:a 256k \
"output-video.mkv"
The above command de-interlaces the source video with the nnedi filter's default settings, encodes the video with libx264 using the slow preset at CRF 19, and encodes the audio as 256 kb/s AAC.
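You can sanity-check the result with ffprobe, which reports the video stream's field order:

```shell
# Prints "progressive" for de-interlaced output; values such as "tt",
# "bb", "tb", or "bt" mean the stream is still flagged as interlaced.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=field_order \
  -of default=noprint_wrappers=1:nokey=1 "output-video.mkv"
```

Note that this reads the container/stream flags, so it confirms how the output is tagged rather than visually inspecting the frames.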