Hi dlr5668, thanks for taking a moment to test vs-rife v1.3.0.

If I understand correctly the file you are trying to interpolate is:
[LowPower-Raws] Boku no Hero Academia S4 - 08 (BD 1080P x265 Ma10p FLAC).mkv

RIFE does not currently support 10-bit colour depth, which may explain the problems with the Main 10 profile. Have you tried interpolating 8-bit files?


If you still have some time, could you check whether it is possible to interpolate smoothly in real time on your NVIDIA GeForce RTX 3070 Ti (GA104) @ 1830 MHz graphics card with the following configuration:

real-time playback, x2 interpolation
720p 23.976 video file
SVP with mpv and vs-rife (PyTorch)
RIFE model: 3.8
scale=1
FP32 and/or FP16
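As a quick sanity check for the configuration above: real-time x2 playback of a 23.976 fps source requires the filter to sustain the doubled output rate. A tiny helper (my own illustration, not part of SVP or vs-rife) makes the requirement explicit:

```python
# Hypothetical helper: does a measured interpolation throughput
# keep up with real-time playback at a given multiplier?
def required_fps(source_fps: float, factor: int) -> float:
    """Output frame rate the filter must sustain for real-time playback."""
    return source_fps * factor

def is_realtime(measured_fps: float, source_fps: float, factor: int) -> bool:
    """True if the measured throughput covers the required output rate."""
    return measured_fps >= required_fps(source_fps, factor)

# x2 interpolation of a 23.976 fps source needs ~47.952 fps sustained.
print(required_fps(23.976, 2))        # 47.952
print(is_realtime(55.5, 23.976, 2))   # True
```

So any benchmark number above roughly 48 fps at the target resolution suggests x2 real-time playback is within reach.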

I would love to test it myself, but I don't have a suitable graphics card yet. If I knew what video resolution could be interpolated with vs-rife in real time, it would be easier for me to choose a graphics card to buy.

Thanks for asking what I want. I think the answer to that question will shed some light on the context of my posts.

I want to:

1. Take advantage of the fact that hzwer is actively developing RIFE AI interpolation and actively communicating with the community: https://github.com/hzwer/arXiv2020-RIFE

2. Take advantage of the fact that HolyWu is actively developing the RIFE filter for VapourSynth based on PyTorch and is actively communicating with the community: https://github.com/HolyWu/vs-rife

Of course, I would very much like to see the RIFE filter for VapourSynth based on ncnn developed as well: https://github.com/HomeOfVapourSynthEvo … cnn-Vulkan The problem is that for this filter HolyWu depends on the nihui project: https://github.com/nihui/rife-ncnn-vulkan

And this is where the problem lies... nihui has not worked on this project for 4 months and does not respond to users' problems: https://github.com/nihui/rife-ncnn-vulkan/issues Of course, we should be grateful that he has made the code available for us all to use. We must remember that he offers his work for free and that any support depends on his good will and the time he has available.

However, this does not change the fact that we do not know if there will ever be another update to the code created by nihui. The situation is similar with the CUDA GPU inference support for the well-known Tencent NCNN inference engine: https://github.com/atanmarko/ncnn-with-cuda whose further development is essential for efficient real-time interpolation with the ncnn/Vulkan filter.

If nothing changes, the long-awaited RIFE interpolation model 4 prepared by hzwer will only be usable in SVP thanks to HolyWu and the actively developed RIFE filter for VapourSynth based on PyTorch: https://github.com/HolyWu/vs-rife

This is exactly what I want: I ask everyone, including you, Chainik, to test the RIFE filter for VapourSynth based on PyTorch with SVP for real-time interpolation. Let us take advantage of the fact that any problems and suggestions for improvement can be reported to HolyWu and hzwer, with a good chance that they will be addressed for the benefit of all of us who want frame interpolation without artifacts.

Good news!

On 20 September poisondeathray wrote on the FrameRateConverter thread:

The other issue is model availability - I'm finding 2.3 and 2.4 generally better than the 3.x versions. But we can probably ask HolyWu to make other models available for the Vapoursynth Pytorch version

https://forum.doom9.net/showthread.php? … mp;page=26

...and HolyWu has immediately prepared a new 1.3.0 version of the RIFE filter for VapourSynth based on PyTorch: https://github.com/HolyWu/vs-rife/releases/tag/v1.3.0

Please, if anyone has a suitable graphics card capable of working with this filter, try it with SVP for real-time interpolation and share your results in this thread.

As long as HolyWu is actively developing this filter, we can ask him to add or change something in the code of this filter.

Fadexz wrote:
Chainik wrote:

mpv just hangs after switching to RIFE profile in a real time
no idea why

transcoding seems to work

Only when not in windowed mode for some reason, maybe something to do with the sizing change

Could you please elaborate a bit on this topic?
Did you manage to interpolate a video in real time according to these descriptions:
https://www.svp-team.com/forum/viewtopi … 023#p79023
https://www.svp-team.com/forum/viewtopi … 183#p79183 

Which scale in RIFE? Which graphics card? What file resolution? Did it run without crashing?

Fadexz wrote:

Most say RIFE 3.1 is the better, 2.3 is the best though generally.

This is what they say on the FrameRateConverter thread: https://forum.doom9.net/showthread.php? … mp;page=25

They test RIFE extensively and work on its implementation in FrameRateConverter. Their developer is actively communicating with HolyWu https://github.com/HomeOfVapourSynthEvo … n/issues/7 so there is a chance the VapourSynth RIFE filters will be developed further, which SVP could also benefit from.

Unfortunately, FrameRateConverter, like other tools that have recently implemented RIFE AI, focuses on interpolation with re-encoding. In that case time is of no importance, so they can use any VapourSynth RIFE filter, for convenience even the one using ncnn with Vulkan.

SVP is famous for its real-time interpolation. Here, any refinement that affects interpolation time matters. Therefore, only the VapourSynth RIFE filter with PyTorch and CUDA https://github.com/HolyWu/vs-rife is currently fast enough for real-time interpolation.

This is also why I quoted the creator of RIFE himself, who claims that the 3.8 model is 2x faster than 2.4. That is a huge difference for anyone who wants to interpolate in real time.

However, I wouldn't get too attached to the 3.8 model and dwell too much on the differences to older models, because it's almost certain that we're in for a great future with AI RIFE:

To-do List

Multi-frame input of the model

Frame interpolation at any time location

Eliminate artifacts as much as possible

Make the model applicable under any resolution input

Provide models with lower calculation consumption

https://github.com/hzwer/Practical-RIFE

I can't help you with g-sync, but if you manage to use it, you will get the best possible sync and you can use x3 or x5 interpolation in SVP.

If g-sync does not work out and you want to reduce stuttering on panning shots to a minimum, then, as I wrote, use x5 in SVP and 119.88 Hz on the TV.

dlr5668 wrote:

At least my mpv is not crashing big_smile

Great!!! This is the message I was waiting for!


Could you please check this configuration:

transcoding (to measure the fps number): 720p 23.976, scale=1, FP16 and FP32, RIFE 3.8 with SVP

and then:

real-time playback x2: 720p 23.976, scale=1, FP16 and FP32, RIFE 3.8 with SVP with mpv

Chainik wrote:

> What then does scale=2

the opposite? upscale before, downscale after

So scale=1 will probably be the optimal solution. Somehow I don't believe that upscaling will bring any significant improvement, and downscaling is not only cheating but also a significant reduction in quality.

Chainik wrote:

if I understand it correctly, scale=0.5 just downscales the whole frame before the "AI" magic, then upscales the result back --> you just get lower resolution intermediate frames
this is why it's "cheating"

your video has very high resolution such as 4K, we recommend set --scale=0.5 (default 1.0). If you generate disordered pattern on your videos, try set --scale=2.0. This parameter control the process resolution for optical flow model.

https://github.com/hzwer/arXiv2020-RIFE … DME.md#L60
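As I read the quoted description (my own interpretation, not an authoritative statement about RIFE internals), scale only changes the resolution at which the optical flow is estimated, so for a given frame size the effect can be sketched as:

```python
def flow_resolution(width: int, height: int, scale: float) -> tuple:
    """Approximate resolution at which the optical-flow model would run
    for a given frame size and scale setting (illustrative only)."""
    return (int(width * scale), int(height * scale))

# scale=0.5 on 4K: flow is estimated at 1080p, then applied to the full frame.
print(flow_resolution(3840, 2160, 0.5))  # (1920, 1080)
# scale=2.0 on 1080p: flow is estimated at 4K resolution (slower, but per the
# README it can help with "disordered patterns").
print(flow_resolution(1920, 1080, 2.0))  # (3840, 2160)
```

If this reading is right, scale=2 and scale=4 would simply estimate the flow at 2x and 4x the frame resolution, at a corresponding performance cost.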

What then does scale=2 and scale=4 do?

I cannot believe it!

1920x1080 23.976 x2 and 54.0 fps with RIFE!!!

So, NVIDIA GeForce RTX 3070 Ti is capable of producing enough frames per second in 1080p files today to think about x2 real-time interpolation!!!

Please post more, you are doing a great job!

Now, the most important question: do you have the same problem as Chainik on the Turing microarchitecture when attempting real-time interpolation?

Chainik wrote:

mpv just hangs after switching to RIFE profile in a real time
no idea why

Thank you very much!!!

1. The second one - the refresh rate to which SVP is converting.

2A. If you are using a TV, check whether the TV's internal interpolation gives better results (fewer artifacts) than SVP. If it does not, see 2B.

2B. If you are using SVP with a TV, check if the TV has Adaptive-Sync. If not, see 2C.

2C. If your video source is 23.976 Hz and the TV allows 119.88 Hz, then use x5 interpolation in SVP (23.976 Hz x 5 = 119.88 Hz).
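The arithmetic in 2C generalises: pick the largest integer factor whose output rate exactly matches a refresh rate the display supports. A small sketch (my own helper, not SVP code):

```python
from typing import Optional

def pick_factor(source_fps: float, refresh_rates: list,
                max_factor: int = 5) -> Optional[int]:
    """Largest interpolation factor whose output rate matches a supported
    refresh rate (within a small tolerance), or None if nothing fits."""
    for factor in range(max_factor, 1, -1):
        target = source_fps * factor
        if any(abs(target - hz) < 0.01 for hz in refresh_rates):
            return factor
    return None

# 23.976 x 5 = 119.88, which a 119.88 Hz TV can display judder-free.
print(pick_factor(23.976, [59.94, 119.88]))  # 5
```

Without an exactly matching refresh rate (and without Adaptive-Sync), any factor will leave some residual judder, which is why 2A and 2B come first.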

There is now an even faster version of RIFE in SVP. Even 3 times faster. Feel free to test it and share your results:
https://www.svp-team.com/forum/viewtopic.php?id=6281

I would be very grateful if you could test SVP with the RIFE filter for VapourSynth (PyTorch) on the RTX 3070 Ti.

I would be particularly interested in how the following two issues look on the new Ampere microarchitecture:

1. Do you have the same problem as Chainik on the Turing microarchitecture when attempting real-time interpolation?

Chainik wrote:

mpv just hangs after switching to RIFE profile in a real time
no idea why

2. What is the performance when choosing half precision (FP16)? The reported results here differ considerably:

Chainik wrote:

fp16 vs. fp32 --> 64 fps vs. 53 fps (704*528 video); 31 fps vs. 28 fps (720p) hmm


the fp16 speed is phenomenal (~x4)

https://github.com/hzwer/arXiv2020-RIFE/issues/188
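For comparison, Chainik's measured numbers work out to far smaller speedups than the ~x4 claim; the arithmetic:

```python
def speedup(fp16_fps: float, fp32_fps: float) -> float:
    """fp16-over-fp32 speedup factor from two measured frame rates."""
    return fp16_fps / fp32_fps

# Chainik's measurements: 704x528 video and 720p video.
print(round(speedup(64, 53), 2))  # 1.21
print(round(speedup(31, 28), 2))  # 1.11
```

So roughly 1.1-1.2x in those tests versus the ~4x reported upstream, which is exactly the discrepancy worth checking on Ampere.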

The RIFE filter for VapourSynth (PyTorch) is already capable of producing enough frames per second with HD files to make real-time interpolation worth considering. Below are examples of using RIFE in Flowframes:

RTX 3080 - 1280x720p - 55.5 FPS
RTX 3090 - 1280x720p - 60 FPS

More examples here:
https://github.com/n00mkrad/flowframes/ … chmarks.md

It should be mentioned that these examples are 8 months old, and the latest 3.8 model is much faster, as I wrote above.

Are there any people reading this thread who have already tried the RIFE filter for VapourSynth (PyTorch) implemented in SVP?

HolyWu just released a new 1.2.0 version yesterday:
https://github.com/HolyWu/vs-rife/releases/tag/v1.2.0

Chainik, could you take a second look at running this filter in SVP in real time?

HolyWu is actively supporting the RIFE filter for VapourSynth based on PyTorch; maybe he will be able to help you with the implementation of this filter for real-time interpolation?

Thank you very much for the upgrade and sharing the test results.


Chainik wrote:

fp16 vs. fp32 --> 64 fps vs. 53 fps (704*528 video); 31 fps vs. 28 fps (720p) hmm

Strange...
cheesywheesy claims x4 on a 3090, while n00mkrad, the creator of Flowframes, claims that fp16 does not work with newer models:
https://github.com/hzwer/arXiv2020-RIFE/issues/188

Maybe it's a matter of the interpolation model used, or the GPU? Unfortunately I can't check this, as I don't have a compatible graphics card yet. Maybe someone else could compare the speed in SVP and in Flowframes?


Chainik wrote:

- "AI model": does nothing with Pytorch version;

Does this mean that you are using the default model in the PyTorch version? If so, it is version 3.5:
https://github.com/HolyWu/vs-rife/blob/ … t__.py#L10

If so, it is neither the latest model nor, unfortunately, the fastest.
The 3.8 model is the fastest!

Based on the evaluation of dozens of videos, the v3.8 model has achieved an acceleration effect of more than 2X while surpassing the effect of the RIFEv2.4 model.

https://github.com/hzwer/arXiv2020-RIFE/issues/176

new model (3.8). It sometimes give worse results, other times better. But it is a looot faster.

https://grisk.itch.io/rife-app/devlog/2 … 207-update


Chainik, could you take a second look at running this filter in SVP in real time?

Could you please try using VLC or another DirectShow player?
Could you please try some other very, very low resolution file?
Could you please try waiting a few more minutes?
Have you checked whether hardware is the bottleneck? VRAM? RAM?

I'd love to help, but I'm neither a programmer nor have a proper graphics card currently.

HolyWu is actively supporting the RIFE filter for VapourSynth based on PyTorch; maybe he will be able to help you with the implementation of this filter for real-time interpolation?

Chainik, you will know best what to ask. I can of course write a request myself, but apart from saying that it doesn't work I won't be able to add anything else, because I simply don't know anything about it :-(

Chainik wrote:

there's no "precision" param in the "vsrife" filter
https://github.com/HolyWu/vs-rife/blob/ … t__.py#L16

Now there is!!!

HolyWu just released a new version yesterday:
https://github.com/HolyWu/vs-rife/releases/tag/v1.1.0

I guess he reads this thread ;-) If so, thanks a lot, HolyWu!



Selecting half precision (FP16) increases the performance by up to 4x:

the fp16 speed is phenomenal (~x4)

https://github.com/hzwer/arXiv2020-RIFE/issues/188 


Chainik, this is yet another reason to take a second look at running this filter in SVP in real time. The speed should increase dramatically!

HolyWu is actively supporting the RIFE filter for VapourSynth based on PyTorch; maybe he will be able to help you with the implementation of this filter for real-time interpolation?

Chainik, you will know best what to ask. I can of course write a request myself, but apart from saying that it doesn't work I won't be able to add anything else, because I simply don't know anything about it :-(

Chainik, did you manage to solve the problem of mpv hanging after switching to the RIFE profile in real time?

What about other players?

"start playback in mpv (preferably) or VLC, DirectShow players may also work via Vapoursynth Filter (VPSF)"

https://www.svp-team.com/wiki/RIFE_AI_interpolation

Have you tried using VLC or another DirectShow player?
Have you tried some other very, very low resolution file?
Have you tried waiting a few more minutes?
Have you checked whether hardware is the bottleneck? VRAM? RAM?

I'd love to help, but I'm neither a programmer nor have a proper graphics card currently.

Chainik, am I perhaps being too impatient in asking about the problem of mpv hanging after switching to the RIFE profile in real time? Do you need a little more time to look for solutions?

At the moment we have 4 cases:

1. SVP + RIFE filter for VapourSynth (ncnn) - transcodes very slowly
2. SVP + RIFE filter for VapourSynth (ncnn) - runs very slowly in real time
3. SVP + RIFE filter for VapourSynth (PyTorch) - transcodes 2-3 times faster
4. SVP + RIFE filter for VapourSynth (PyTorch) - mpv hangs



"start playback in mpv (preferably) or VLC, DirectShow players may also work via Vapoursynth Filter (VPSF)"

https://www.svp-team.com/wiki/RIFE_AI_interpolation

Have you tried using VLC or any DirectShow player?
Have you tried some very, very low resolution file?
Have you tried waiting a few minutes?

I'd love to help, but I'm neither a programmer nor have a proper graphics card.

Maybe someone else besides Chainik could test the latest RIFE filter for VapourSynth based on PyTorch and share their impressions on this thread?

If anyone hasn't tried it yet, here are the advantages of RIFE over MVTools2:
http://forum.doom9.net/showpost.php?p=1 … stcount=17
http://forum.doom9.net/showpost.php?p=1 … stcount=24

This is an improvement on an aspect that has been raised many times on this forum:
https://www.svp-team.com/forum/viewtopic.php?id=5500
https://www.svp-team.com/forum/viewtopic.php?id=6139
https://www.svp-team.com/forum/viewtopic.php?id=4382

That's not good...

the fp16 speed is phenomenal (~x4)

https://github.com/hzwer/arXiv2020-RIFE/issues/188


And was it possible to solve the problem of mpv hanging after switching to the RIFE profile in real time?

Congratulations on the release of SVP for Android today!
I hope this will be another successful milestone in SVP development, alongside the RIFE implementation.

Hopefully there will now be a little more time to find a solution to the problem of mpv hanging after switching to the RIFE profile in real time. Or maybe something has already been done?


By the way I have another question:

In the original RIFE project, "scale" and "math precision" are two separate parameters to choose from.

The first one is at line 64:

https://github.com/hzwer/arXiv2020-RIFE … deo.py#L64

Whereas math precision is at line 62:

https://github.com/hzwer/arXiv2020-RIFE … deo.py#L62

Selecting half precision (FP16) increases the performance by up to 4x:

the fp16 speed is phenomenal (~x4)

https://github.com/hzwer/arXiv2020-RIFE/issues/188

Now my question, because I don't really understand this fragment:

Chainik wrote:

"scale: Controls the process resolution for optical flow model. Try scale=0.5 for 4K video."
"Math precision" = Single ----> scale=1.0
"Math precision" = Half ---> scale=0.5 - increases encoding speed by 2

What does the above setting refer to? I guess "scale", but then where do you set the calculation precision: full precision (FP32) or half precision (FP16)?
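To illustrate why I believe these are two independent settings: in the original inference_video.py, scale and precision are separate command-line flags. A minimal reconstruction (flag names taken from the linked script; treat the exact defaults and help text as my assumption):

```python
import argparse

# Minimal reconstruction of the two independent flags from the original
# RIFE script; it only demonstrates that scale and precision vary separately.
parser = argparse.ArgumentParser()
parser.add_argument('--scale', type=float, default=1.0,
                    help='process resolution for the optical flow model')
parser.add_argument('--fp16', action='store_true',
                    help='run inference in half precision')

# All four combinations are possible - choosing a precision does not
# imply any particular scale, and vice versa:
args = parser.parse_args(['--scale', '0.5', '--fp16'])
print(args.scale, args.fp16)  # 0.5 True
args = parser.parse_args(['--scale', '0.5'])
print(args.scale, args.fp16)  # 0.5 False
```

So the mapping in the quote above ("Half ---> scale=0.5") reads to me like a preset that changes both at once, not like one setting implying the other.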

I hope that a solution can be found. It is the real-time interpolation that I care about most. Of course, it could be tomorrow, the day after tomorrow, or next week, whenever you have some time to find the cause.

Thank you once again!

Thank you very much!

I was just writing the 2nd point of my essay, but it looks like it's not necessary. OK, maybe I'll still finish it and paste it here sometime. It is not important now!

Chainik, please write whether, using the 3.8 model, you were able to interpolate a higher resolution file in real time than last time.

Chainik wrote:

GPU; VRAM; CPU utilization; SVP index

424*240 - 23% (65% compute_1); 0.9 GB; zero; 1.0
640*360 - 9% (100% compute_1); 0.9 GB; zero; 0.76

https://www.svp-team.com/forum/viewtopi … 96&p=2


The 3.8 model is the fastest!

Based on the evaluation of dozens of videos, the v3.8 model has achieved an acceleration effect of more than 2X while surpassing the effect of the RIFEv2.4 model.

https://github.com/hzwer/arXiv2020-RIFE/issues/176

Below I present the main advantages of the new RIFE filter for VapourSynth over its predecessor:

1. The RIFE filter for VapourSynth based on PyTorch: https://github.com/HolyWu/vs-rife is at least 3 times faster than the RIFE filter for VapourSynth based on ncnn: https://github.com/HomeOfVapourSynthEvo … cnn-Vulkan

Confirmation from the developer of the above filters:
https://github.com/HomeOfVapourSynthEvo … n/issues/5

SVP developer confirmation:
https://github.com/nihui/rife-ncnn-vulkan/issues/22

Confirmation by one more person:
https://github.com/nihui/rife-ncnn-vulkan/issues/26


This is obviously a major advantage, especially if we think about real-time RIFE AI interpolation. The new PyTorch-based filter is especially useful for SVP users, as SVP is the only program on the market that allows real-time AI motion interpolation with a GUI that is friendly and easy to use for any user.

If someone wants to transcode video using RIFE AI motion interpolation, then of course they have a wider choice of tools: in addition to SVP they can choose from Flowframes, DVDFab or Rife-App, among others. When transcoding, speed does not matter so much, because whether the procedure takes 3 hours or 9 hours, you still have to wait patiently.

However, when we want to perform interpolation in real time, the situation is completely different. Either the interpolation keeps up or it does not. Either it is possible to watch or it is not.

The situation is harsher than in computer games, where, for example, we get either 120 fps or 40 fps. In the latter case it is still possible to play, although with much less comfort. Yet gamers spend a lot of money, time and overclocking effort to gain even an extra 10% fps!

And we have the chance to increase the performance by 3 times!!!

Chainik wrote:

we can't bundle PyTorch with SVP

I see. In that case, if someone wants to use the RIFE filter for VapourSynth based on PyTorch in SVP, there could be a message warning about the consequences of downloading this package, with instructions on how to do it.

I know that people with poor connections may have problems downloading, as well as old HDDs will take a long time to load data into RAM or VRAM, but let's give it a try.