Yes. Those will be the main disadvantages of this filter. I am, however, currently preparing a long list of benefits of the new filter over the previous one. It is going to be a really long series of posts. I hope I will be able to convince you, and at the same time encourage other users to try the new filter. Please give me some time though: English is not my native language, so it will take me until late evening or maybe even tomorrow to write it all. I really care about this; I am a big fan of real-time motion interpolation.

But now I will briefly address the drawbacks. The large number of dependencies and the long loading time can probably be offset by using a fast NVMe SSD on a PCIe 4.0 x4 interface, for example the Seagate FireCuda 530 with 7300 MB/s sequential read. In a year's time we may have drives twice as fast, with the widespread adoption of the PCIe 5.0 x4 interface: maybe even 14600 MB/s!
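As a rough sanity check (my own back-of-envelope arithmetic, not measurements), even the ideal sequential read time for those 2.5 GB is a fraction of a second on such drives:

```python
# Back-of-envelope estimate: ideal sequential load time for the RIFE
# dependencies. The size and speeds below are assumed round numbers.
DEPS_MB = 2500  # ~2.5 GB of dependencies

def load_seconds(read_mb_per_s: float) -> float:
    """Best-case sequential read time in seconds."""
    return DEPS_MB / read_mb_per_s

print(f"PCIe 4.0 x4 (7300 MB/s):  {load_seconds(7300):.2f} s")
print(f"PCIe 5.0 x4 (14600 MB/s): {load_seconds(14600):.2f} s")
```

If even the best case is well under a second, a 5-10 second delay would point at initialization cost rather than raw disk throughput.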

But what is most important, and what I value most about SVP, is that it gives users choice. SVP offers countless configurations and options for each user, and thus leaves other programs like DmitriRender behind.

Please give users the choice to use the RIFE filter for VapourSynth based on ncnn or the RIFE filter for VapourSynth based on PyTorch in SVP.

And I'm back to preparing more posts....

As we all probably already know, since June 8 SVP has supported RIFE AI interpolation in BETA stage: https://www.svp-team.com/news/

For this interpolation SVP uses the RIFE filter for VapourSynth created by HolyWu:
https://github.com/HomeOfVapourSynthEvo … cnn-Vulkan

20 days ago HolyWu wrote:

The algorithm is not fast enough for playback, so you probably wanna go with SVP. Anyway I will have another CUDA version which is 3 times faster than Vulkan.

https://github.com/HomeOfVapourSynthEvo … n/issues/5


And here it is: a brand new RIFE filter for VapourSynth which beats its predecessor in performance and eliminates several other drawbacks: https://github.com/HolyWu/vs-rife

Hence my big request to Chainik and MAG79 to add this new RIFE filter for VapourSynth to SVP: https://github.com/HolyWu/vs-rife

Chainik wrote:

240p works great, 360p is too much big_smile

That is exactly what I was about to ask. Every GPU should have a resolution threshold: resolutions it can handle and resolutions it can't.

Here is what we know:
- NVIDIA GeForce RTX 2060 Mobile
- 240p works great
- 360p is too much

Can you share what your GPU, VRAM, and CPU utilization were at 240p and 360p?

I cannot believe it! You did it!

Tell me, did you somehow manage to match the performance of the CUDA version?

Is it possible to get smooth x2 playback with RIFE on your GPU at some resolution? For example 480p? 720p?

...and thank you very much for that:
https://github.com/nihui/rife-ncnn-vulkan/issues/22

I keep my fingers crossed!

Chainik wrote:

... plus very long (re-)start time. a video player will hang for 5-10 secs on each seek, for example.

Are you using a fast NVMe SSD? Maybe it takes that long to load those 2.5 GB of dependencies each time? Have you checked what the bottleneck is during those 5-10 seconds: GPU, CPU, or HDD/SSD?

Would it be possible to check whether it takes the same amount of time when all the data and dependencies are placed on a RAM disk?
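One simple way to measure this (a generic Python sketch of my own, not SVP code; the directory path is hypothetical) is to time a full read of the dependency tree from the SSD and then from a RAM disk:

```python
import pathlib
import time

def time_read(root: str) -> tuple[float, float]:
    """Read every file under `root` once; return (elapsed seconds, MB read)."""
    start = time.perf_counter()
    total_bytes = 0
    for p in pathlib.Path(root).rglob("*"):
        if p.is_file():
            total_bytes += len(p.read_bytes())
    return time.perf_counter() - start, total_bytes / 1e6

# Hypothetical usage: run once against the SSD copy, then against a RAM disk
# copy of the same tree. Similar timings would mean the disk is not the
# bottleneck. Beware of the OS file cache skewing the second run.
# elapsed, mb = time_read(r"C:\path\to\rife\dependencies")
# print(f"{mb:.0f} MB in {elapsed:.1f} s")
```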

Chainik wrote:

actually there no changes needed in the "RIFE filter for VapourSynth" itslef
the problem is "ncnn with CUDA" is not completed yet, some functions necessary for RIFE are not implemented... and the last commit was on Jan 18.

This is probably even better. ncnn has a much broader range of applications than motion interpolation, so there is a good chance that someone will want to improve its performance by optimizing it for CUDA. It is not clear whether the lack of updates is a temporary issue, a lack of time, or something else entirely. On the other hand, it is remarkable how many new tools dedicated to AI, and built on AI, have appeared in just the last year!

So we have 3 options:

1. Implement the "original" RIFE in SVP - disadvantage: 2.5 GB of dependencies
2. Implement the RIFE filter for VapourSynth in SVP - disadvantage: worse performance
3. Ask the developer of the RIFE filter for VapourSynth to implement the code from https://github.com/atanmarko/ncnn-with- … ocator.cpp

Alternatively there is a 4th option...

Chainik, maybe you could look into whether you could somehow use the code from point 3 above to speed up the RIFE filter for VapourSynth, so that we can watch motion-interpolated videos in real time, without transcoding, using RIFE AI interpolation and SVP? Just as an experiment. You are a motion interpolation enthusiast, just like all of us on this forum; the difference is that you understand how it all works, and we do not.

Still, a question arises. Flowframes performs two operations:

- interpolation of motion using, for example, the RIFE algorithms
- transcoding to the resulting video file or image files (PNG).

I am curious how much time could be saved if the second operation were omitted. What I am aiming at: sending the finished result of motion interpolation directly to the monitor, without transcoding and saving to HDD/SSD.

That is, it would be best if the author of the RIFE filter for VapourSynth implemented ncnn-with-cuda in his code. That 3x performance gain is worth it, even if it gets a bit complicated.

If I understand correctly, the RIFE filter for VapourSynth does not need these dependencies?

Great!!!

Does that mean: laptop rtx 2060 --> 704*528 @30->60 --> 72 fps transcoding speed?

That would already make sense!

On the other hand, the RIFE filter for VapourSynth without CUDA optimization might not be enough.

Now the question is: to watch motion-interpolated videos in real time, without transcoding, using RIFE AI interpolation and SVP...

...is the RIFE filter for VapourSynth necessary?

Or would RIFE alone be sufficient: https://github.com/hzwer/Arxiv2020-RIFE

"Q: What's the difference between RIFE CUDA and RIFE NCNN? Which one should I use?
A: The results should be identical, however, RIFE-NCNN also runs on AMD cards, CUDA only on Nvidia. If you have an Nvidia card, use CUDA as it's faster."
https://github.com/n00mkrad/flowframes

For now we have an attempt to create a universal version of the RIFE filter for VapourSynth based on RIFE ncnn Vulkan. If in the future there is a RIFE filter for VapourSynth dedicated to CUDA, then there is a chance for a performance increase. I also wonder whether it will be possible to increase performance with Tensor Cores.

Chainik wrote:

bottom line - I think we could integrate this for the SVPcode usage

Is it possible to implement RIFE so that we can watch motion-interpolated videos in real time, without transcoding? I care about this the most, and probably I am not the only one. We can transcode movies using Flowframes, but every transcoding means some loss of quality, unless we use -x265-params lossless=1, and then we need huge HDD space ;-)
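To put "huge HDD space" into rough numbers (my own estimate; the ~2.5x lossless compression ratio is an assumption, real ratios vary a lot with content):

```python
# Rough storage estimate for lossless HEVC output of a 1080p60 movie.
def raw_mb_per_s(width: int, height: int, fps: float) -> float:
    """Uncompressed 8-bit 4:2:0 data rate in MB/s (1.5 bytes per pixel)."""
    return width * height * 1.5 * fps / 1e6

raw = raw_mb_per_s(1920, 1080, 60)            # ~187 MB/s uncompressed
raw_gb_per_hour = raw * 3600 / 1000           # ~672 GB per hour of video
lossless_gb_per_hour = raw_gb_per_hour / 2.5  # assumed ~2.5x compression
print(f"raw: {raw_gb_per_hour:.0f} GB/h, lossless HEVC (guess): {lossless_gb_per_hour:.0f} GB/h")
```

Even with generous compression that is hundreds of gigabytes per hour, so the disk space concern is real.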

Yes, I know. For starters, only material at DVD resolution and below, and only with a powerful GPU.

Chainik wrote:

is it supposed to use all the GPU? hmm because I only see up to 10% GPU load regardless of gpu_thread value

I have no idea what this is about. However, if 10% GPU usage produces these results, then there is hope for up to a 10x increase in fps! I know nothing about programming, but I think programmers working together will quickly find ways to use the full power of the GPU.


It may sound strange in the context of what I am writing about and striving for, but at the moment I don't even have a way to check how Flowframes works. I was supposed to build a PC based on the AMD Ryzen Threadripper 3970X in early 2020, but had to postpone my plans... Now I know that a good GPU will still come in handy. Well, I'm looking forward to it, especially to running SVP on this giant CPU and comparing the results with RIFE on a good GPU. For the moment I'm relying on the opinions of others, and they are extremely enthusiastic about RIFE.

Chainik wrote:

laptop rtx 2060 --> 704*528 @30->60 --> 24 fps transcoding speed

I think more powerful desktop graphics cards, especially the latest GeForce RTX 30 series, should perform better. However, even this setup gives hope that at least DVD-quality material can be interpolated in real time. At least for now, as the RIFE algorithms are constantly being optimized.
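Those reports can be reduced to a simple real-time factor: transcoding speed divided by the target output frame rate. A quick check using the numbers from this thread (30 fps source, x2 interpolation):

```python
def realtime_factor(transcode_fps: float, source_fps: float, multiplier: int) -> float:
    """A value >= 1.0 means the GPU keeps up with real-time playback."""
    return transcode_fps / (source_fps * multiplier)

# Laptop RTX 2060 at 704x528, 30 -> 60 fps:
print(realtime_factor(24, 30, 2))  # 0.4: too slow for real time
print(realtime_factor(72, 30, 2))  # 1.2: would be fast enough
```

So the reported 24 fps is only 0.4x real time, while the hoped-for 72 fps would comfortably cross the 1.0x threshold.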

NVIDIA GeForce RTX 2060 Mobile - relative performance:
https://www.techpowerup.com/gpu-specs/g … bile.c3348

If anyone still doesn't know what RIFE is, here are links to the source:

RIFE: https://github.com/hzwer/Arxiv2020-RIFE

RIFE ncnn Vulkan: https://github.com/nihui/rife-ncnn-vulkan

Also, you can check for yourself the motion interpolation effects of several AI-based methods, including RIFE, in a Windows environment using:

Flowframes: https://nmkd.itch.io/flowframes
https://github.com/n00mkrad/flowframes

RIFE in SVP: https://www.svp-team.com/wiki/RIFE_AI_interpolation

RIFE filter for VapourSynth: https://github.com/HomeOfVapourSynthEvo … cnn-Vulkan

SVP with real-time RIFE support could provide better results and fewer artifacts than the motion interpolation algorithms in the best TVs!

Question to developers: is it now possible to try to combine RIFE with SVP?

I can pay $100 for SVP 5.0 beta with REAL TIME RIFE support. Even though I already paid for the current version.

You could even develop two versions at once: a standard SVP and an SVP AI version for super powerful GPUs.

Real-time 2x 720p, or 2x SD, with the best interpolation algorithm is enough for $100. I'm serious. I hope I'm not the only one who thinks so.

If not the creators of the SVP then who can do it? Just a rhetorical question, of course.

If a 2080 Ti can already interpolate 2x 720p at 30+ FPS, what can such monsters as the 3090 do? And this is just the beginning. After all, the algorithm is being optimized all the time: "2021.2.9 News: We have updated the RIFEv2 model, faster and much better!" https://github.com/hzwer/arXiv2020-RIFE

Also, there will be a strong emphasis on AI algorithm support in the new GPU models.

And finally, the most important point: SVP with real-time RIFE support can provide better results and fewer artifacts than the motion interpolation algorithms in the best TVs!


Maybe you could do some tests on 4K HDR 10-bit movies?

23.976 fps movie x 4 = 95.904 Hz (monitor)
25 fps movie x 4 = 100 Hz (monitor)
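The arithmetic above is just the source frame rate times the multiplier; sketched here for clarity:

```python
def target_hz(source_fps: float, multiplier: int) -> float:
    """Monitor refresh rate needed for an integer interpolation multiplier."""
    return source_fps * multiplier

print(target_hz(23.976, 4))  # 95.904 Hz
print(target_hz(25, 4))      # 100 Hz
```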


kathampy wrote:

I found that 2160p 10-bit Blu-rays stutter at 120 fps but are smooth at 90 fps in mpv. Other lower bit-rate 2160p 10-bit videos are smooth at 120 fps. I don't see any meaningful increase in CPU / GPU usage between the two, so I don't know what the bottleneck is.


Thanks kathampy for sharing information about your hardware and software settings with us.

Stutter at 120 fps but smooth at 90 fps? This reminds me of something...

With DP 1.4, 4K HDR10 can't go past 98 Hz without dropping to 4:2:2 chroma, unless DSC is supported. Maybe some conversion raises the 4:2:0 video to a higher chroma format, and DP 1.4 is the bottleneck at 120 Hz? The Predator X27 doesn't support DSC. Maybe this is the reason?
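My hypothesis can be sanity-checked with rough bandwidth arithmetic (assumed figures: DP 1.4 HBR3 carries roughly 25.92 Gbit/s of payload after 8b/10b encoding; blanking overhead is ignored, so real limits are somewhat lower):

```python
def gbit_per_s(width: int, height: int, hz: float, bits_per_px: int) -> float:
    """Approximate uncompressed video bandwidth, ignoring blanking intervals."""
    return width * height * hz * bits_per_px / 1e9

DP14_PAYLOAD = 25.92  # Gbit/s: HBR3, 4 lanes, after 8b/10b encoding

# 10-bit color: 4:4:4 = 30 bits/px, 4:2:2 = 20 bits/px
print(gbit_per_s(3840, 2160, 120, 30))  # ~29.9 Gbit/s: exceeds DP 1.4
print(gbit_per_s(3840, 2160, 120, 20))  # ~19.9 Gbit/s: 4:2:2 fits at 120 Hz
print(gbit_per_s(3840, 2160, 98, 30))   # ~24.4 Gbit/s: 4:4:4 fits at ~98 Hz
```

Which would match the observation: 120 Hz forces a chroma downgrade, while ~90-98 Hz stays within the link budget.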


MAG79 wrote:

Rules to remove artifacts:
1. You need to see maximum frames as possible. So, you need to see all source frames. It is possible only with integer multiplier. x2, x3 and so on.
2. You need to see calculated frames with minimum artifacts. It is possible if calculated frames are close to source frames. Best choice is x3 multiplier. You will see one original frame and two calculated with delta 1/3 of interframe time.
3. This rule is individual but I like block artifacts with maximum smoothness:
frame interpolation: uniform
SVP shader: 10. by blocks
artifacts masking: disabled
motion vectors precision: one pixel (or half pixel)
motion vectors grid: 14 px
decrese grid step: disabled
search radius: small and fast (or small)
wide search: disabled
width of coarse top level: small
processing of scene changes: blend adjacent frames

Great post. Thank you!
I am going to use an x5 multiplier on my next HTPC.
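Rule 2 above can be made concrete with frame timestamps: with an integer multiplier N, every Nth displayed frame is an untouched source frame, and the interpolated ones sit at even fractions of the inter-frame time. A small illustration (plain Python of my own, not SVP code):

```python
from fractions import Fraction

def output_timestamps(source_fps: int, multiplier: int, n_source: int) -> list:
    """Display timestamps in seconds; source frames land exactly on source times."""
    step = Fraction(1, source_fps * multiplier)
    return [i * step for i in range(n_source * multiplier)]

# x3 on a 24 fps source: indices 0, 3, 6, ... are source frames; the two
# frames in between sit at 1/3 and 2/3 of the inter-frame interval.
ts = output_timestamps(24, 3, 2)
print([str(t) for t in ts])  # ['0', '1/72', '1/36', '1/24', '1/18', '5/72']
```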


OK, thank you.


Thank you. Does that mean that even with the R7 3800X you get dropped frames during playback? Have you checked whether there are as many dropped frames with SVP as without it? You tested it 2 months ago; maybe now, with the new full HDR support, it will be better?


@kathampy

1. On which monitor/TV do you watch 4K/10-bit HDR movies and do you use x2 (48Hz), 60Hz, x3 (72Hz) or maybe x5 (120Hz) interpolation?

2. What’s your CPU?

3. Are you still using quad-channel 4 x 8 GB 4000 MHz C17 memory?

4. What’s your mpv configuration?

I won't be able to help you, because I'm only planning to buy an HTPC myself, but I think this information may be useful for solving the problem, and also for my own future configuration.