Topic: motion compensated deinterlacing?

Hi there,

A wild idea: SVP has to fully understand the whole motion situation from one frame to the next in order to do its job, right? So I'm wondering if it would be feasible to make use of all the algorithms and knowledge in SVP to implement motion compensated deinterlacing as part of SVP?

How do you handle interlaced anime, for example? Do you currently require deinterlacing to be performed before SVP? I think if you moved deinterlacing into SVP, that could produce a nice quality/performance improvement, even over the best currently available AviSynth deinterlacing scripts. Or what do you think? I mean, the best deinterlacing algorithms out there use their own motion interpolation, but I suppose what SVP does is probably superior to those in terms of quality vs. performance ratio.

Just a wild thought, though. If you don't like the idea, just toss it. I'd love it, though. Have been wishing for high quality motion compensated deinterlacing in madVR for a long time.

Another thought: would it be hard for SVP to "export" motion information for other filters (or e.g. madVR) to reuse? Debanding, denoising and even sharpening algorithms might be tuned to benefit from knowing which part of the video moves where from one frame to the next. It might even be possible to "misuse" SVP to simply analyze a video sequence, without actually letting SVP modify it, just to use the motion vectors for other purposes. E.g. if you don't want to add deinterlacing, it might be possible to tell SVP to analyze a video sequence, gather the detected motion information from SVP, and then use that to implement motion compensated deinterlacing externally. Basically you could extend SVP into an open motion toolbox which other software could use for all kinds of algorithms.

What do you think?

Re: motion compensated deinterlacing?

P.S.: Just to explain, motion compensated deinterlacing done by SVP would work something like this:

-------

1)
Split the video into fields (e.g. 60 fields per second for NTSC). Field X now has the odd lines set and the even lines are missing. Field X+1 has the even lines set and the odd lines are missing.

2)
Use a good interpolation algorithm, e.g. NNEDI3, to interpolate the missing scanlines of each field, so each field is turned into a full frame.

3)
Now run SVP on the output of 2) and recalculate every frame. No new frames need to be added; every existing frame just needs to be recalculated using its neighbor frames.

4)
Now create the final frames by combining the known "good" scanlines from the original fields with the missing scanlines taken from the results of step 3).

-------
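The four steps above can be sketched in plain Python. This is a toy model only: lists of rows stand in for frames, a simple line average stands in for NNEDI3, and SVP's step-3 motion-compensated pass is omitted. All function names are illustrative assumptions, not SVP's actual API.

```python
def split_fields(frame):
    """Step 1: split an interlaced frame into its two fields."""
    top = frame[0::2]     # odd-numbered scanlines (field X)
    bottom = frame[1::2]  # even-numbered scanlines (field X+1)
    return top, bottom

def interpolate_field(field, parity):
    """Step 2: turn a field back into a full frame by filling the missing
    scanlines. Here a simple average of the neighboring lines stands in
    for a real edge-directed interpolator such as NNEDI3."""
    height = len(field) * 2
    frame = [None] * height
    for i, line in enumerate(field):
        frame[2 * i + parity] = line
    for y in range(height):
        if frame[y] is None:
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y + 1 < height else frame[y - 1]
            frame[y] = [(a + b) // 2 for a, b in zip(above, below)]
    return frame

def weave_known_lines(recalculated, field, parity):
    """Step 4: keep the known-good scanlines from the original field and
    take only the missing scanlines from the (step 3) recalculated frame."""
    out = [row[:] for row in recalculated]
    for i, line in enumerate(field):
        out[2 * i + parity] = line
    return out
```

The key point of step 4 is that the original scanlines are never modified, so the motion-compensated pass can only improve the lines that were missing anyway.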

There are 2 variants of this processing pipeline:

a) The simple variant would be for step 3) to use frames from step 2).
b) The more complicated, and probably slightly higher quality, variant would be to run 3) and 4) "interleaved", meaning that when step 3) wants to recalculate frame X, it would use frame X-1 from 4) and frame X+1 from 2).

-------

Variant a) might be possible to implement externally, if SVP could be used as a "toolbox". Variant b) would probably be hard to realize efficiently that way, so if we want variant b), it would probably have to be implemented inside of SVP.
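The difference between the two variants comes down to which stage each neighbor frame is read from. A minimal sketch, where `recalculate` is a placeholder standing in for SVP's motion-compensated pass (all names here are illustrative assumptions):

```python
def recalculate(prev_frame, frame, next_frame):
    # Placeholder for SVP's motion-compensated recalculation of `frame`
    # from its neighbors; just an average here to keep the sketch runnable.
    return [(p + n) / 2 for p, n in zip(prev_frame, next_frame)]

def variant_a(interpolated):
    """Variant a): frame X is recalculated purely from the step-2
    (field-interpolated) frames X-1 and X+1."""
    out = list(interpolated)
    for x in range(1, len(interpolated) - 1):
        out[x] = recalculate(interpolated[x - 1], interpolated[x],
                             interpolated[x + 1])
    return out

def variant_b(interpolated):
    """Variant b): frame X uses the already-finished frame X-1 (from the
    previous iteration) and the step-2 frame X+1 -- the 'interleaved'
    pipeline described above."""
    out = list(interpolated)
    for x in range(1, len(interpolated) - 1):
        out[x] = recalculate(out[x - 1],           # finished result
                             interpolated[x],
                             interpolated[x + 1])  # step-2 result
    return out
```

Variant b) creates a serial dependency between frames (X needs the finished X-1), which is why it would be hard to realize efficiently outside of SVP.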

Re: motion compensated deinterlacing?

I personally think that

1. interlaced video is a thing of the past, it's something like VHS now big_smile
2. every Intel or NVIDIA user has the option to do high quality HW deinterlacing right in the video decoder

Would it be hard for SVP to "export" motion information for other filters (or e.g. madVR) to reuse?

Right now SVP is just an Avisynth filter.
Link with avisynth.dll, run the script, feed it with video frames and you'll get motion vectors from the SVAnalyse function.

Re: motion compensated deinterlacing?

Pretty much all broadcasts here in Europe are 576i50 or 1080i50, and I think many broadcasts in the USA are still 480i60 or 1080i60. So while deinterlacing is a thing of the past for Blu-rays, UHD Blu-rays and custom encodes, it's still very important for TV watching, especially live sports events (e.g. soccer and such). GPU HW deinterlacing is decent quality, but can't really compete with good motion compensated deinterlacing.

I'm not sure how easy it will be for madVR to access motion vectors from "inside" of AviSynth. I'll see if I can get access to that. If that doesn't work, can we maybe talk about alternative APIs/interfaces?

Re: motion compensated deinterlacing?

can we maybe talk about alternative APIs/interfaces?

I think it's pointless unless madVR hosts some frame server by itself.

I vote for Vapoursynth (because it's licensed under the LGPL and has a working 64-bit version)

Re: motion compensated deinterlacing?

Just my 2c. I've thought about SVP doing deinterlacing many times myself, but I guess it won't happen sad

Re: motion compensated deinterlacing?

take a look at QTGMC performance
ok, let's say it will be 2 times faster with SVP's core libraries

(and it won't)

so what? even a 5 times performance increase will be too small for real-time usage

Re: motion compensated deinterlacing?

I believe that SVP's version would have much better quality. Heck, sometimes after I enable SVP I see better prediction of some particles than I could predict myself looking frame by frame! Just look at how SVP interpolates the frames of liquid dropping over the face here: https://www.youtube.com/watch?v=iRUSQm5ZskQ

9 (edited by vivan 07-09-2015 01:47:41)

Re: motion compensated deinterlacing?

Chainik wrote:

take a look at QTGMC performance
ok, let's say it will be 2 times faster with SVP's core libraries

(and it won't)

so what? even a 5 times performance increase will be too small for real-time usage

i5-4670k + 576i + QTGMC() = 16 fps in single-thread mode (and ~25% CPU load). So even with just MT it should be fast enough for SD video.
1080i has 5 times more pixels, so if it were 5 times faster (SVPlib, OpenCL NNEDI, a slightly faster preset)... would that sound realistic?
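The "5 times more pixels" figure checks out exactly, and linear scaling gives a rough idea of what the required speedup would look like:

```python
# Sanity check of the pixel-count arithmetic above.
sd = 720 * 576      # 576i (PAL SD) pixels per frame
hd = 1920 * 1080    # 1080i pixels per frame
ratio = hd / sd
print(ratio)        # 5.0 -- "5 times more pixels"

# If 576i QTGMC runs at 16 fps single-threaded and cost scales
# linearly with pixel count, 1080i would run at:
hd_fps = 16 / ratio
print(hd_fps)       # 3.2 fps -- hence the need for a ~5x+ speedup
```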

Re: motion compensated deinterlacing?

About performance: SVP itself is optimized for real-time usage, right? The only other thing eating performance for motion compensated deinterlacing would be NNEDI3. I'm not sure how many neurons QTGMC uses by default for NNEDI3. Running maybe 32 NNEDI3 neurons via OpenCL is faster than real time on a decent GPU today, even for 1080i60 content. OK, so copyback would have to be used, but LAV Video Decoder's latest copyback algorithm seems to be very fast and efficient. So I think doing all this in real time should be possible this way. Maybe some compromises on settings would have to be made. If all else fails, super-xbr could be used instead of NNEDI3, which is an order of magnitude faster and has very good edge interpolation quality, too.

Of course it's all your decision, Chainik; if you don't like the idea, I fully understand. You probably have enough work on your hands as it stands. I just thought it might be something worth looking into, because SVP already does most of the work necessary for this.

I'll be looking into Vapoursynth when I find some time. LGPL makes all the difference for me.

11 (edited by .m4jX 07-09-2015 22:22:52)

Re: motion compensated deinterlacing?

Does this even matter if madshi includes avisynth/vapoursynth in madVR? If we can use SVP inside of madVR, we can just let madVR deinterlace, which in my opinion is better deinterlacing than any single avisynth script out there.

It would look something like this:
LAV decodes -> madVR deinterlaces -> madVR upscales -> SVP interpolates -> madVR does the last steps of rendering.
The current problem with using madVR and SVP is not only that you can't use madVR's deinterlacing (which really sucks if you need to IVTC US TV), but also that if you use NNEDI3 (or any other algorithm, but it's more noticeable with NNEDI3 since it needs a lot more power than everything else) you need a lot more GPU power, since you need to upscale 60 frames instead of 24. Integration of SVP in madVR would fix that.
It would also make it possible to sharpen or deband with madVR before interpolating.

Re: motion compensated deinterlacing?

.m4jX wrote:

Does this even matter if madshi includes avisynth/vapoursynth in madVR? If we can use SVP inside of madVR, we can just let madVR deinterlace, which in my opinion is better deinterlacing than any single avisynth script out there.

Are you talking about telecined movies (IVTC / forced film mode) or about native interlaced content (sports, DXVA deinterlacing)?

I suppose my film mode is pretty good. But I always thought that something like QTGMC would beat the hell out of DXVA deinterlacing?

.m4jX wrote:

LAV decodes -> madVR deinterlaces -> madVR upscales -> SVP interpolates -> madVR does the last steps of rendering.

That would involve copyback. Not impossible, but I'm not sure how good that would perform.

13 (edited by .m4jX 09-09-2015 01:56:58)

Re: motion compensated deinterlacing?

madshi wrote:

Are you talking about telecined movies (IVTC / forced film mode) or about native interlaced content (sports, DXVA deinterlacing)?

I am mostly talking about telecined content; I very rarely encounter other interlaced content (except PsF, which doesn't need deinterlacing). When I do, I'm actually fine with ffdshow's deinterlacing, which can always be used with SVP anyway. But something like QTGMC would probably be better in those cases.

madshi wrote:

That would involve copyback. Not impossible, but I'm not sure how good that would perform.

It would surely need less GPU power than having to NNEDI3-upscale an additional 36 frames per second, or not?
Currently I can only use 16 neurons to stay under 16.6 ms rendering time, and my GPU is more powerful than the average user's. Lots of people can't use NNEDI3 with SVP at all.
Not everyone uses NNEDI3, of course. Maybe performance would be worse with super-xbr, Jinc or Lanczos, because you don't save as much power as with NNEDI3 when you have fewer frames to upscale.

Re: motion compensated deinterlacing?

Well, if you upscale before SVP, you do increase the amount of work SVP has to do. So while it would probably allow you to use higher settings in madVR, you might get SVP into trouble. There's no free lunch, unfortunately. But theoretically it would be possible to offer both options.

Re: motion compensated deinterlacing?

...also, SVP currently supports YV12 only.
Dealing with RGB after madVR's upscaling needs major modifications in the SVP libs themselves and would drop performance a lot.
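The raw data-rate gap alone is easy to quantify: YV12 (planar 4:2:0) stores 12 bits per pixel, full-resolution luma plus quarter-resolution chroma, while 8-bit RGB stores 24, so on top of the algorithmic changes SVP would be moving and searching twice as much data per frame:

```python
# Bytes per 1080p frame: YV12 (4:2:0 planar) vs 8-bit RGB.
w, h = 1920, 1080
yv12 = w * h + 2 * (w // 2) * (h // 2)  # Y plane + U and V at quarter size
rgb = w * h * 3                          # three full-size 8-bit planes
print(yv12)        # 3110400 bytes (12 bits/pixel)
print(rgb)         # 6220800 bytes (24 bits/pixel)
print(rgb / yv12)  # 2.0
```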

Re: motion compensated deinterlacing?

Well, I could output YV12 to SVP, that wouldn't be such a big problem for me. But then of course madVR would have to upscale chroma *again* after SVP. So it's all not ideal. Anyway, let's not get ahead of ourselves. The first step would be to add a Vapoursynth frame server to madVR, which sits in front of madVR's GPU rendering.