776 (edited by Xenocyde 16-05-2023 21:02:23)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

zSoi wrote:

Some newbie questions here :

Owner of an RTX 3080 and a 12600K here. I like the smoothness SVP can give me, but sometimes the artifacts are annoying.

What can I expect from this RIFE AI engine? Is it worth the time to set it up? I'm watching movies on an ultrawide HDR display (3440x1440), usually at x5 interpolation. (I don't know if it's relevant, but my movies are 4K and about 30-50 GB each.)

From what I read, the RTX 3080 is not enough for 4K HDR with RIFE. Even the RTX 4090 struggles above x3. Also, RIFE v2 seems to be bugged for 4K videos, as it lowers the resolution for some reason. But there is supposedly something better than RIFE; we're waiting for UHD to unveil it.

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

zSoi wrote:

Some newbie questions here :

Owner of an RTX 3080 and a 12600K here. I like the smoothness SVP can give me, but sometimes the artifacts are annoying.

What can I expect from this RIFE AI engine? Is it worth the time to set it up? I'm watching movies on an ultrawide HDR display (3440x1440), usually at x5 interpolation. (I don't know if it's relevant, but my movies are 4K and about 30-50 GB each.)

Just download SVP and install the RIFE pack. With your PC specs you can easily do x2 on any 4K 21:9 video and x3 on a 16:9 4K video (downscale the video to your monitor's resolution, then use RIFE).

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Is there any way to get rid of this thing that pops up every time I start to play something with RIFE? Once I close the window, everything plays fine.

https://i.postimg.cc/htZWTCB2/Screenshot-2023-05-17-065328.png

779 (edited by Xenocyde 17-05-2023 22:06:09)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

FenceMan wrote:

Is there any way to get rid of this thing that pops up every time I start to play something with RIFE? Once I close the window, everything plays fine.

https://i.postimg.cc/htZWTCB2/Screenshot-2023-05-17-065328.png

It cannot be disabled if you use performance mode, but it only pops up when a given resolution is not yet stored in the timing cache. You need to let the timings build for ~3 minutes each time; don't close the window before it closes automatically. Only a few timings are needed for movies, but you may encounter specific ones for certain TV series.

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Xenocyde wrote:
FenceMan wrote:

Is there any way to get rid of this thing that pops up every time I start to play something with RIFE? Once I close the window, everything plays fine.

https://i.postimg.cc/htZWTCB2/Screenshot-2023-05-17-065328.png

It cannot be disabled if you use performance mode, but it only pops up when a given resolution is not yet stored in the timing cache. You need to let the timings build for ~3 minutes each time; don't close the window before it closes automatically. Only a few timings are needed for movies, but you may encounter specific ones for certain TV series.


Yeah, I wasn't letting it run. Thanks.

781 (edited by zSoi 17-05-2023 17:44:46)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

aloola wrote:
zSoi wrote:

Some newbie questions here :

Owner of an RTX 3080 and a 12600K here. I like the smoothness SVP can give me, but sometimes the artifacts are annoying.

What can I expect from this RIFE AI engine? Is it worth the time to set it up? I'm watching movies on an ultrawide HDR display (3440x1440), usually at x5 interpolation. (I don't know if it's relevant, but my movies are 4K and about 30-50 GB each.)

Just download SVP and install the RIFE pack. With your PC specs you can easily do x2 on any 4K 21:9 video and x3 on a 16:9 4K video (downscale the video to your monitor's resolution, then use RIFE).

Any clue on how to downscale before using RIFE?

Also, if I can only achieve x2 (~60 fps), I think I would rather stick with x5 (~150 fps) with VapourSynth and some minor artifacts...

782

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

zSoi wrote:

Any clue on how to downscale before using RIFE?

Image Scaling
https://www.svp-team.com/wiki/Manual:Resizing

If you want to change the settings precisely:
https://www.svp-team.com/wiki/Manual:Advanced_SVP
https://www.svp-team.com/wiki/Manual:All_settings
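
For illustration, a minimal VapourSynth sketch of the downscale-then-RIFE idea (the clip name and the RIFE filter call are assumptions: the actual filter name and parameters depend on which RIFE plugin build you have, and SVP generates its own script anyway):

Code:

    import vapoursynth as vs
    core = vs.core

    clip = video_in  # the clip SVP/mpv passes into the generated script

    # Downscale the 4K source to the 3440x1440 panel before interpolating,
    # so RIFE processes far fewer pixels (a 16:9 source would fit 2560x1440).
    clip = core.resize.Spline36(clip, width=3440, height=1440)

    # Hypothetical RIFE call: adjust to your installed plugin (TensorRT, ncnn/Vulkan...).
    clip = core.rife.RIFE(clip, factor_num=2)  # x2 interpolation

    clip.set_output()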

783 (edited by UHD 19-05-2023 18:57:10)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Xenocyde wrote:

I understand you are most likely busy, but May is here.

My deepest apologies to: Blackfyre, DragonicPrime, Fortune424, Xenocyde and everyone else who waited for my answer.

784 (edited by UHD 19-05-2023 19:41:27)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

What could be much better than RIFE for interpolating video frames?

Let me perhaps start with the basics. Why do we use RIFE and SVP?

Simple: a typical video has 24 frames per second, and with RIFE and SVP we can create intermediate frames and, for example, with x5 interpolation, have 120 frames per second and a smooth experience watching a movie or series.

The problem is that this 120 fps is far from what we would experience if the same movie had been natively recorded at 120 fps. My point is not that RIFE isn't perfect; no algorithm can losslessly restore what was happening in front of the camera.

The point is that in typical 24 fps footage, artefacts in the form of motion blur have been artificially added to trick our brains into an impression of movement. In video, as in photography, each frame has a specific exposure time. In photography, however, apart from a few artistic exceptions, we want the image to be clear, without any blurring, and therefore the exposure time to be as short as possible. In 24 fps video, the opposite is true: the long exposure time of each of the 24 frames is intended to simulate a larger number of frames through blurring. The effect is that during a running scene, for example, you don't see the runner's hand at all, just a blurry patch. This is the case at 24 fps with a typical film exposure time of 1/48 s. You can read more about this here: https://www.red.com/red-101/shutter-angle-tutorial
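
The arithmetic behind those numbers is simple (a quick sketch of the shutter-angle formula from the linked tutorial):

Code:

    # exposure time = (shutter angle / 360) * (1 / fps)
    def exposure_time_s(shutter_angle_deg: float, fps: float) -> float:
        return (shutter_angle_deg / 360.0) / fps

    print(exposure_time_s(180, 24))   # ~0.0208 s = 1/48 s, the typical film look
    print(exposure_time_s(180, 120))  # ~0.0042 s = 1/240 s for native 120 fps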

If the above example run had been recorded at 120 or 240 fps, the runner's hand would be sharp and clear in every frame. RIFE and other typical AI algorithms do well interpolating 24 fps to 120 fps when the exposure time of each frame is very short, say 1/240 s, and the details in each frame are sharp. However, such videos don't exist, unless we create them ourselves; but then the question arises: why not record at 120 or 240 fps in the first place? Then no RIFE would be necessary. Unfortunately, in most cases we have to make do with what we have, which is 24 fps and heavily blurred motion. In such a 24 fps video, the runner's hand is blurred in the original, and it will still be blurred in the 120 fps created by RIFE.

This is where the problem lies. We have a smooth 120 fps video obtained by interpolation with RIFE, and yet our brain perceives that something is wrong. Something is artificial, and many people hate interpolation precisely for this. Many explain it to themselves as the magic of cinema being lost, the director's intent being betrayed, the 'soap opera effect'... but these are just attempts to rationalize the fact that our brain hates what it sees. Can it be explained somehow?

Yes, it can be explained by the 'uncanny valley' effect: https://en.wikipedia.org/wiki/Uncanny_valley The whole magic of cinema is nothing more than the fact that 24 fps is so far from reality that our brain immediately knows it's a movie, entertainment, not reality. It is like the industrial robot on the 'uncanny valley' curve: we're 100% sure it's a robot, so it doesn't cause us concern or uncertainty about what we're dealing with.

Now let's get back to what video frame interpolation does: 120 fps takes us away from the magic of cinema and brings us closer to reality. Of course, we all want movies to be more immersive and as real as possible: 3D, higher resolution, more frames per second. The problem is that too much artificiality remains in video frames interpolated with RIFE (motion blur even at 120 or 240 fps), which makes us fall right into that famous 'uncanny valley' as we approach reality. The film is smooth at 120 fps, but there is something wrong with it: every frame carries blur that is unnaturally long for 120 fps. Despite the 120 fps, certain details, instead of being clear and sharp, are smeared and distorted by motion blur.

Of course, we accept the compromises because we know that RIFE is still the best way to get closer to the ideal and we have no alternative, in the same way that for many years we accepted artifacts from the basic SVP algorithm because there was no alternative. However, there are some people who, shown the result of RIFE or SVP interpolation, will still prefer the original 24 fps. Often this is due not to stubbornness or conservatism, but to the simple fact that there is something wrong with the interpolated image.

My dream is to interpolate the video frames of a 24 fps movie in such a way that we get the effect of watching that movie shot natively at 120, 240 or 500 fps. Yes, we already have 500 Hz monitors, and this effect can be achieved!

785 (edited by UHD 19-05-2023 19:42:32)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

What do we need?

An AI algorithm that will simultaneously perform Video Deblurring and Video Frame Interpolation. Simple, right?

Not necessarily. Algorithms that could do this already existed at the inception of RIFE. So what is the problem, and why have they not become popular?

Crucial to the inference quality of an AI model is its design, as well as the dataset on which it was trained. The latter is why all Joint Video Deblurring and Frame Interpolation algorithms introduced more artefacts than plain Video Frame Interpolation models, making them completely unsuitable for practical use.

The problem was the dataset needed to train the Joint Video Deblurring and Frame Interpolation models. The dataset usually used for this was Adobe240, which is based on 240 fps videos. The frames from these videos were ready-made targets for the algorithms, yielding exactly what we would like to have: 240 fps without motion blur. The problem was the input frames for training. To obtain 24 fps, several original frames were averaged, creating new frames with synthetic motion blur. Models trained on such data gave great results, but only on datasets with synthetic motion blur. On videos with real motion blur they created unacceptable artefacts, ruling out their practical use.
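
That averaging step is trivial to sketch (my own NumPy illustration, not the papers' actual code): every group of 10 consecutive 240 fps frames is collapsed into one mean frame, giving 24 fps with synthetic blur:

Code:

    import numpy as np

    def synthesize_blur(sharp: np.ndarray, group: int = 10) -> np.ndarray:
        """sharp: (T, H, W, C) uint8 frames at 240 fps.
        Returns (T // group, H, W, C) frames at 24 fps with synthetic motion blur."""
        t = (sharp.shape[0] // group) * group            # drop the ragged tail
        windows = sharp[:t].reshape(-1, group, *sharp.shape[1:])
        return windows.mean(axis=1).astype(np.uint8)     # temporal mean = fake blur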

This was the case until November 2022, when an academic paper describing a new training dataset was published; in March 2023 the dataset was made available to all interested parties.

What is special about this dataset? I will quote from the paper:

Specifically, two BITRAN CS-700C cameras are physically aligned to the beam-splitter by laser calibration. During shooting, the light is split in half and goes into the camera with high and low frame rate modes. The low-frame-rate camera adopts long exposure scheme to capture the blurred video. Besides, a ND filter with about 10% transmittance is installed before the low-frame-rate camera to ensure photometric balance between the blurred frames and the corresponding sharp frames from the high-frame-rate camera.

RBI dataset. We use this customized hybrid camera system to collect 55 video pairs as real-world blur interpolation (RBI) dataset. The frame-rate of blurred video and the corresponding sharp video are 25 fps and 500 fps, respectively. The exposure time of blurred image is 18 ms, while the exposure time of sharp image is nearly 2 ms. This means that there are 9 sharp frames corresponding to one blurred frame, and 11 sharp frames exist in the readout deadtime between adjacent blurred frames.

This is the first dataset of its kind, where the same scene is recorded by two cameras aligned to less than one pixel:
- the first camera records 25 fps with 18 ms motion blur
- the second camera records 500 fps with 2 ms motion blur.
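
The quoted numbers are self-consistent, as a quick check shows:

Code:

    blur_period_ms  = 1000 / 25    # 40 ms between consecutive blurred frames
    sharp_period_ms = 1000 / 500   # 2 ms between consecutive sharp frames
    exposure_ms     = 18           # exposure of each blurred frame

    print(exposure_ms / sharp_period_ms)                     # 9.0  sharp frames during the exposure
    print((blur_period_ms - exposure_ms) / sharp_period_ms)  # 11.0 sharp frames in the deadtime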

We also have models trained on this dataset.

Do you want to see the effect of interpolation with BiT?

Turn on the animation of the mountain scene in the official BiT repository: https://github.com/zzh-tech/BiT
The same sequence, broken down into 30 interpolated frames, can be found in the paper quoted above: https://arxiv.org/abs/2211.11423

Before you write that you see artefacts near the woman's face or the man's hand, first look at the motion-blurred input frame from which these 30 interpolated frames were created. Yes, the input frame is one big artefact: neither the girl's face nor the man's hand exists, there are only blurry smears. This is what we get in a typical fast-action video. No super-resolution algorithm like Real-ESRGAN will help here. RIFE, too, will only move the blurry contours around.

I have nothing against super-resolution algorithms. They are great for blur-free static photos, and for movies where there is no motion. However, if there is no motion then there is nothing to interpolate, and if there is motion then there is motion blur. The more motion, the more motion blur. Motion blur is responsible for the biggest artefacts in every movie.

786 (edited by UHD 19-05-2023 19:47:32)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

What do we have and what more do we need?

We have a BiT++ model trained on the RBI dataset with real motion blur: https://drive.google.com/drive/folders/ … vnDo9C04HO

However, it turns out that the RBI dataset is too small, and better results are obtained by a model trained first on Adobe240 and then on RBI. This is indicated by the data in the paper: https://arxiv.org/abs/2211.11423 and I have further confirmed it with the author of the BiT model: https://github.com/zzh-tech/BiT/issues/4

Another point is that models trained to maximise PSNR and SSIM are not optimal for practical purposes, as I mentioned earlier: https://www.svp-team.com/forum/viewtopi … 250#p82250

What we need is a model trained with a perceptual loss that achieves the best (lowest) LPIPS score in tests on RBI with real motion blur. You can read why this is so important in the introduction to my rankings: https://github.com/AIVFI/Video-Frame-In … g-Rankings

In summary, we need a model that:
- performs Joint Video Deblurring and Frame Interpolation
- is trained with a perceptual loss
- is trained on Adobe240 and then on RBI

I am going to make a request to zzh-tech for such a model. I haven't had time to look into this yet, and I know that the way I phrase this request may determine whether we get such a model or not. So it is not such a simple matter.

I intended to write the request first, and then ask here on the forum for everyone who cares about the best possible video frame interpolation to support it.

I know that some people just want ready-made, working solutions in SVP, and I remember well the indifference when I first wrote here about RIFE: https://www.svp-team.com/forum/viewtopic.php?id=6196

However, back then we were in a much better position, because we had both practical models and a ready-made RIFE filter for VapourSynth. In general, we were very lucky that the developer of RIFE was interested in building practical models. Unfortunately, most researchers who develop AI models care mostly about publishing, presenting their paper at a prestigious scientific conference, and then collecting citations. Practical application often does not matter at all. This is the sad truth.

Therefore, it is up to us to push for the development of the models that will be most useful to us. It is for this purpose that I have started, and want to keep updating, my repository on GitHub: https://github.com/AIVFI/Video-Frame-In … g-Rankings

Unfortunately, I am not a programmer, and this repository is the pinnacle of my abilities. I want it to be visible on GitHub, because I believe that through it some talented researchers may develop something that is most useful to us and suitable for implementation in SVP. My repository is meant to show what the real SOTA in frame interpolation currently is and what we need most.

Interestingly, 2 of the 25 stars my repository has received on GitHub already come from authors of Video Frame Interpolation models:
https://github.com/kinoud
https://github.com/danier97

It means that what I am doing makes sense. Life has taught me that if you want something, you have to do everything you can to achieve it. It often seems we have no influence, and yet there is always something that can be done. I am not a programmer, but here on this forum and on GitHub I am trying to do something that may one day let me enjoy the best possible interpolation. I'm already happy with how far RIFE has come, but I want more and better, because I know it's possible.

I will now try to prepare a request for a more practical BiT model, and I hope you will support it on GitHub. This may have an impact on whether we get such a model or not.

I would like to prepare the request later this weekend, but I know that writing about deadlines is, in my case, inappropriate to say the least. I've taken on an extra job to earn money for an NVIDIA GeForce RTX 4090, and apparently it has overwhelmed me a bit, time-wise and otherwise. However, I am not giving up.

Before I prepare the request, please write what you think about all of the above.

787 (edited by Xenocyde 19-05-2023 20:12:32)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Wow... just wow... looks like it was well worth waiting for UHD's latest developments. I'm all for this new deblurring model, but how can we help exactly? What coding skills are needed? Maybe I can ask some coders I know, but what exactly do I tell them?

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Thanks for the post. This sounds interesting. I'm not a programmer or anything, so idk how much I can help, but I'd love to test it and share my results when possible. The thing I'm most curious about is probably the performance. A big benefit of RIFE is being able to run it in real time now, so I'd love to see how this new model performs and what the quality is like in different videos. I'm looking forward to seeing more on this.

789

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Xenocyde wrote:

Wow... just wow... looks like it was well worth waiting for UHD's latest developments. I'm all for this new deblurring model, but how can we help exactly? What coding skills are needed? Maybe I can ask some coders I know, but what exactly do I tell them?

Thanks for the offer of help.

I will first try to ask zzh-tech to train a practical model for us. This requires not only the ability to train the models he has, but also time, a lot of computing power and a willingness to help us. The latter is very important, and for now I think the best help might simply be to join in with the request I will write. I will try to write something on Saturday.

The response will determine what we need to do next.

790

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

DragonicPrime wrote:

Thanks for the post. This sounds interesting. I'm not a programmer or anything, so idk how much I can help, but I'd love to test it and share my results when possible. The thing I'm most curious about is probably the performance. A big benefit of RIFE is being able to run it in real time now, so I'd love to see how this new model performs and what the quality is like in different videos. I'm looking forward to seeing more on this.

Unfortunately, this method is relatively slow, at least compared to RIFE; see Table 1: https://arxiv.org/pdf/2211.11423.pdf

Tensor Core optimisation will probably speed up inference significantly. For now, however, we need a model that is most useful to us; then we will think about what to do next.

I don't think anything is going to replace RIFE in terms of speed in the near future, but for lower-resolution video, or for video encoding where the highest possible quality is needed, we may be looking at something better, albeit slower.

If graphics cards keep doubling in power every 2 years, who knows whether BiT won't replace RIFE for real-time interpolation eventually, even though it is slower. After all, for 4K the bottleneck for RIFE is probably no longer the graphics card but RAM.

Then there is the issue of steering researchers towards Joint Video Deblurring and Frame Interpolation, which I want to do through my project (it still needs a lot of work): https://github.com/AIVFI/Video-Frame-In … g-Rankings

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

UHD wrote:
DragonicPrime wrote:

Thanks for the post. This sounds interesting. I'm not a programmer or anything, so idk how much I can help, but I'd love to test it and share my results when possible. The thing I'm most curious about is probably the performance. A big benefit of RIFE is being able to run it in real time now, so I'd love to see how this new model performs and what the quality is like in different videos. I'm looking forward to seeing more on this.

Unfortunately, this method is relatively slow, at least compared to RIFE; see Table 1: https://arxiv.org/pdf/2211.11423.pdf

Tensor Core optimisation will probably speed up inference significantly. For now, however, we need a model that is most useful to us; then we will think about what to do next.

I don't think anything is going to replace RIFE in terms of speed in the near future, but for lower-resolution video, or for video encoding where the highest possible quality is needed, we may be looking at something better, albeit slower.

If graphics cards keep doubling in power every 2 years, who knows whether BiT won't replace RIFE for real-time interpolation eventually, even though it is slower. After all, for 4K the bottleneck for RIFE is probably no longer the graphics card but RAM.

Then there is the issue of steering researchers towards Joint Video Deblurring and Frame Interpolation, which I want to do through my project (it still needs a lot of work): https://github.com/AIVFI/Video-Frame-In … g-Rankings

Ya, I figured it would be slow; I just didn't look through all the info since there was a lot to go through lol. Like I said, though, I'm willing to help and share results when the time comes. Looking forward to it.

792 (edited by UHD 19-05-2023 22:50:57)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

A little bonus to whet your appetite ;)

zzh-tech previously created a similar dataset with real motion blur, but only for Video Deblurring. You can see the results of the AI Video Deblurring model here:
https://github.com/sollynoay/MMP-RNN/tr … NN#results

Go to:
Deblurring results on BSD

There is a link there to the file BSD_MMPRNN_test.zip (3.6 GB).

Inside are 3 versions of the video frames:

- _input: a frame of video recorded with motion blur
- _gt: a frame of the same video recorded by a second, synchronised camera without motion blur
- _mmprnn: the result of the Video Deblurring algorithm applied to the _input files

Here:
- the first camera records 15 fps with 16 ms motion blur
- the second camera records 15 fps with 2 ms motion blur.

This is an example of what the RIFE algorithm is missing.

793 (edited by UHD 20-05-2023 18:04:21)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

I've had a bit of a look at the inference times, and I think I'm going to change my plans a bit...

A little clarification.

Here are the 3 best Joint Video Deblurring and Frame Interpolation methods:

DeMFI - 19 November 2021 https://arxiv.org/abs/2111.09985
BiT - 21 November 2022 https://arxiv.org/abs/2211.11423
VIDUE - 27 March 2023 https://arxiv.org/abs/2303.15043

The developers of DeMFI demonstrated in their paper the superiority of their method over all the other best Joint Video Deblurring and Frame Interpolation methods to date, so it can be said that it was the best method at the time.

Now both BiT and VIDUE have shown superiority over DeMFI, so it is these two that should interest us most. However, we don't have a test on the same dataset showing which method is better, BiT or VIDUE. I assume the differences in quality may be minimal.

There is, however, a very large difference in inference speed between BiT and VIDUE: VIDUE is significantly faster.

VIDUE's inference time is:

0.27s for 8 new frames 1280x720 with Tesla V100 GPU & PyTorch

RIFE's inference time, for comparison, is:

0.031s for 1 new frame 720p with NVIDIA TITAN X(Pascal) GPU & PyTorch
https://arxiv.org/abs/2011.06294

For me, real-time interpolation is a priority. Therefore, I would first like to prepare a request for the author to train the VIDUE model for practical applications.

Before that, however, I need to update my rankings so that VIDUE is there too. I guess that's the way to go, right?

I don't know how long it will take me, because I only now actually looked at the interpolation times and discovered all this. Previously I thought the 0.27 s was for one frame, and at a lower resolution, so this is a pleasant surprise :-)

By then I hope more people will have read my posts and perhaps a larger group will support my request. Of course, I also look forward to your feedback on what you think of all this.

!!! Update !!!

VIDUE cannot be faster than RIFE because:

0.27s for VIDUE (8x720p)
0.16s for MIMOUNetPlus+RIFE (8x720p)

whereas MIMO-UNet+ is really fast (although it should be noted that it is only suitable for images and not video):

0.017s or 0.014s for MIMO-UNet+ (1x720p)
https://arxiv.org/abs/2108.05054

If we assume from this result that VIDUE is 3 times slower than RIFE, it is still a revelation!

Unfortunately, a little doubt is raised by another comparative result:

0.40s for CDVD-TSP+RIFE (8x720p)

CDVD-TSP is in fact one of the slowest Video Deblurring algorithms:

0.729s for CDVD-TSP (1x720p)
https://arxiv.org/abs/2207.13374

2.241s (!) for CDVD-TSP (1x720p)
https://arxiv.org/abs/2205.12634
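
To make the doubt concrete (my own arithmetic on the quoted figures):

Code:

    combined   = 0.40    # s quoted for CDVD-TSP+RIFE, 8 new 720p frames
    cdvd_alone = 0.729   # s quoted for a single CDVD-TSP pass on one 720p frame
    # One deblurring pass alone already takes longer than the whole quoted
    # deblur+interpolate pipeline, so the figures can hardly describe the same workload.
    print(cdvd_alone > combined)  # True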

!!! End of update !!!

794

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

One more thing that makes it less rosy: the number of input frames each method requires:

- 2 frames: RIFE
- 3 frames: BiT
- 4 frames: VIDUE
- 4 frames: DeMFI

There may be complications with scene detection and with deciding what to interpolate and what not. This is of course something to worry about in the future, and I think it can be solved somehow; see the sketch below. Using more than 2 frames obviously has advantages, for example when motion accelerates. Using more input frames was also planned by the developer of RIFE: https://github.com/hzwer/Practical-RIFE#to-do-list
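
A minimal sketch of that complication (a hypothetical helper, not from any of the papers): a model that takes 4 input frames needs the whole window to come from one scene, so every frame boundary inside it must be cut-free:

Code:

    def can_interpolate(cut_after: list, i: int, inputs: int = 4) -> bool:
        """Interpolating between frames i and i+1 with a model that takes
        `inputs` frames (i-1 .. i+2 for inputs=4) is only safe when no scene
        cut falls inside that window. cut_after[j] is True if a cut lies
        between frames j and j+1."""
        half = inputs // 2
        first, last = i - half + 1, i + half   # the window spans frames first..last
        return not any(cut_after[j] for j in range(first, last))

    # RIFE (2 inputs) only needs the i/i+1 boundary to be cut-free;
    # VIDUE/DeMFI (4 inputs) additionally need i-1/i and i+1/i+2.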

795 (edited by UHD 20-05-2023 19:43:11)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

For now I will be busy updating my rankings; however, I do have one more tidbit.

I know there are some people on this forum who are fluent in VapourSynth.

We can already test one of the AI Video Deblurring models in VapourSynth:

BasicVSR++ function for VapourSynth
https://github.com/HolyWu/vs-basicvsrpp

Many people probably associate it with one of the best Video Super-Resolution methods. However, not everyone knows that it ships additional models, including one for Video Deblurring.

We are specifically interested in this one:
7 = Video Deblurring (DVD)
https://github.com/HolyWu/vs-basicvsrpp … t__.py#L47

Be warned: this is a model trained on data with synthetic motion blur, so it will generate very unpleasant artefacts.

Be warned: this is a model trained with a Charbonnier loss, so it may lose fine detail.

Be warned: this is a very slow model, without optimisation for Tensor Cores.

Other than that, this is one of the best Video Deblurring methods, with the BasicVSR++↓2 model achieving the highest PSNR value I have ever found on the DVD dataset: 34.78 dB
https://arxiv.org/abs/2204.05308
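
For anyone who wants to try it, a minimal sketch (the function name and the RGBS requirement are my reading of the vs-basicvsrpp repository; check your installed version, and the source file is hypothetical):

Code:

    import vapoursynth as vs
    from vsbasicvsrpp import BasicVSRPP  # HolyWu's plugin linked above

    core = vs.core
    clip = core.ffms2.Source('fast_action_clip.mkv')                     # hypothetical test file
    clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s='709')  # the model expects RGBS
    clip = BasicVSRPP(clip, model=7)                                     # 7 = Video Deblurring (DVD)
    clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s='709')
    clip.set_output()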

If anyone could run this model and share the results of how it performs in scenes with dynamic action, I would be very grateful.

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

UHD wrote:

If anyone could run this model and share the results of how it performs in scenes with dynamic action, I would be very grateful.

I'm getting a new desktop with an i7-13700K and an RTX 4080 next week, so I can test it then.

797

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Xenocyde wrote:
UHD wrote:

If anyone could run this model and share the results of how it performs in scenes with dynamic action, I would be very grateful.

I'm getting a new desktop with an i7-13700K and an RTX 4080 next week, so I can test it then.

Super!

Hopefully with the fastest RAM too; that will come in handy with RIFE in 4K.

798 (edited by Xenocyde 20-05-2023 21:38:38)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

UHD wrote:
Xenocyde wrote:
UHD wrote:

If anyone could run this model and share the results of how it performs in scenes with dynamic action, I would be very grateful.

I'm getting a new desktop with an i7-13700K and an RTX 4080 next week, so I can test it then.

Super!

Hopefully with the fastest RAM too; that will come in handy with RIFE in 4K.

It's Corsair Vengeance DDR5-5600, but it can be OC'd to 6000 or even more; I'll see the exact limits.

799 (edited by UHD 20-05-2023 22:09:33)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Xenocyde wrote:

It's Corsair Vengeance DDR5-5600, but it can be OC'd to 6000 or even more; I'll see the exact limits.

I ask because Intel's CPUs can work with really fast memory, which cannot be said of AMD's CPUs. At the moment, the fastest memory you can buy for an Intel CPU is probably DDR5-8000.

OK, I just checked, and DDR5-8000 is 3x more expensive than Corsair Vengeance DDR5-5600...

I am curious to see what the RTX 4080 can do, because if the bottleneck for 4K is indeed RAM, then perhaps the results will be similar to the RTX 4090.

800 (edited by Quaternions 22-05-2023 06:41:33)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

UHD wrote:

This is the first dataset of its kind, where the same scene is recorded by two cameras aligned to less than one pixel:
- the first camera records 25 fps with 18 ms motion blur
- the second camera records 500 fps with 2 ms motion blur.

This achieves the desired data, but it is absurdly overcomplicated when digital cameras and ffmpeg exist. As long as the camera doesn't lose any light between frames (no dead time with the shutter closed), you can simply blend groups of frames from the high-fps video using a filter to get video identical to the low-fps sample.
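
For example, a VapourSynth sketch of that blending (std.AverageFrames wants an odd window, so this averages 9 of every 10 frames of a hypothetical 240 fps capture into one 24 fps frame; ffmpeg's tmix filter can do much the same):

Code:

    import vapoursynth as vs
    core = vs.core

    sharp = core.ffms2.Source('capture_240fps.mkv')  # hypothetical high-fps, no-deadtime source

    # 9-frame temporal mean centred on each frame (AverageFrames requires an
    # odd number of weights), then keep the centre frame of every 10-frame group.
    blurred = core.std.AverageFrames(sharp, weights=[1] * 9)
    blurred = core.std.SelectEvery(blurred, cycle=10, offsets=[4])
    blurred = core.std.AssumeFPS(blurred, fpsnum=24, fpsden=1)
    blurred.set_output()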