https://www.svp-team.com/wiki/RIFE_AI_interpolation

It's Corsair Vengeance DDR5-5600 but it can be OC-ed to 6000 or even more, I'll see the exact limits.

I ask because Intel's CPUs can work with really fast memory, which cannot be said of AMD's CPUs. At the moment, the fastest memory you can probably buy for an Intel CPU is DDR5-8000.

OK I just checked and DDR5-8000 is 3x more expensive than Corsair Vengeance DDR5-5600...

I am curious to see what RTX 4080 can do, because if indeed the bottleneck for 4K is RAM, then perhaps the results will be similar to RTX 4090.

Xenocyde wrote:
UHD wrote:

If anyone could run this model and share the results of its performance in scenes with dynamic action, I would be very grateful.

I'm getting a new desktop setup with an i7-13700K and RTX 4080 next week, so I can test by then.

Super!

Hopefully also with the fastest RAM - it will come in handy with RIFE in 4K.

For now I will be busy updating my rankings, but I do have one more tidbit.

I know there are some people on this forum who are fluent in VapourSynth.

We can already test one of the AI Video Deblurring models in VapourSynth:

BasicVSR++ function for VapourSynth
https://github.com/HolyWu/vs-basicvsrpp

Many people probably associate it with one of the best Video Super-Resolution methods. However, not many know that there are additional models available, including one for Video Deblurring.

We are specifically interested in this one:
7 = Video Deblurring (DVD)
https://github.com/HolyWu/vs-basicvsrpp … t__.py#L47

Be warned, this is a model trained on data with synthetic motion blur, so it will generate very unpleasant artefacts on footage with real motion blur.

Be warned, this is a model trained on Charbonnier loss so it may lose fine detail.

Be warned, this is a very slow model without optimisation for Tensor Cores.

Other than that, this is one of the best Video Deblurring methods, with the BasicVSR++↓2 model achieving the highest PSNR value on the DVD dataset I've ever found: 34.78 dB
https://arxiv.org/abs/2204.05308

If anyone could run this model and share the results of its performance in scenes with dynamic action, I would be very grateful.
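
To make trying it easier, here is a minimal VapourSynth sketch. I am assuming the package exposes a BasicVSRPP function that takes the model index from the __init__.py linked above and an RGB float clip; please check the repository for the exact parameter names and requirements before relying on it.

```python
# Minimal sketch, not tested: assumes vs-basicvsrpp exposes BasicVSRPP(clip, model=...)
# with model 7 = Video Deblurring (DVD), as listed in the __init__.py linked above.
import vapoursynth as vs
from vsbasicvsrpp import BasicVSRPP

core = vs.core

clip = core.lsmas.LWLibavSource("blurry_clip.mkv")  # placeholder source; any source filter works
# the model works on RGB float frames (assumption; check the repository for the exact requirement)
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")

deblurred = BasicVSRPP(clip, model=7)  # 7 = Video Deblurring (DVD)

deblurred = core.resize.Bicubic(deblurred, format=vs.YUV420P16, matrix_s="709")
deblurred.set_output()
```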

One more thing that makes it less rosy: the number of input frames required for interpolation:

2 frames RIFE
3 frames BiT
4 frames VIDUE
4 frames DeMFI

There may be complications with scene detection and with deciding what to interpolate and what not. This is of course something we will worry about in the future, and I think it can be solved somehow. Using more than 2 frames obviously has advantages, for example when the motion accelerates. Using more input frames was also planned by the developer of RIFE: https://github.com/hzwer/Practical-RIFE#to-do-list
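
To illustrate the complication I mean, here is a rough sketch (pure illustration, not based on any existing SVP or VapourSynth code): with a model that needs N input frames, any window of input frames that crosses a scene cut has to be skipped or handled separately.

```python
# Rough illustration of why multi-frame-input models complicate scene-cut handling.
# "scene_cuts" is assumed to come from some scene-change detector (hypothetical input).

def interpolation_windows(num_frames, window, scene_cuts):
    """Yield (start, end) ranges of input frames that do not straddle a scene cut.

    window = 2 for RIFE, 3 for BiT, 4 for VIDUE/DeMFI.
    A cut at index c means frame c starts a new scene.
    """
    for start in range(num_frames - window + 1):
        end = start + window  # frames start .. end-1 would feed the model
        if any(start < c < end for c in scene_cuts):
            continue  # window mixes frames from two scenes: skip or handle separately
        yield (start, end)

# Example: 10 frames, a cut before frame 5, a 4-frame-input model (VIDUE-like)
print(list(interpolation_windows(10, 4, {5})))
# -> [(0, 4), (1, 5), (5, 9), (6, 10)]: no window mixes frames 4 and 5 across the cut
```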

I've had a bit of a look at the inference times and I think I'm going to change my plans a bit...

A little clarification,

here are the 3 best Joint Video Deblurring and Frame Interpolation methods:

DeMFI - 19 November 2021 https://arxiv.org/abs/2111.09985
BiT - 21 November 2022 https://arxiv.org/abs/2211.11423
VIDUE - 27 March 2023 https://arxiv.org/abs/2303.15043

In their paper, the developers of DeMFI demonstrated the superiority of their method over all the other best Joint Video Deblurring and Frame Interpolation methods to date, so it can be said that it was the best method at the time.

Now both BiT and VIDUE have shown superiority over DeMFI, so it is these two that should interest us most. However, we don't have a test on the same dataset showing which of the two is better: BiT or VIDUE. I assume the differences in quality may be minimal.

There is, however, a very large difference in inference speed between BiT and VIDUE: VIDUE is significantly faster.

VIDUE's inference time is:

0.27s for 8 new frames 1280x720 with Tesla V100 GPU & PyTorch

RIFE inference time for comparison is:

0.031s for 1 new frame 720p with NVIDIA TITAN X(Pascal) GPU & PyTorch
https://arxiv.org/abs/2011.06294
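
A rough per-frame comparison of those two numbers (different GPUs, so treat it only as an indication):

```python
# Per-frame arithmetic from the papers' numbers; different GPUs, so only indicative.
vidue_8_frames = 0.27   # s for 8 new 1280x720 frames (Tesla V100)
rife_1_frame   = 0.031  # s for 1 new 720p frame (TITAN X Pascal)

vidue_per_frame = vidue_8_frames / 8
print(f"VIDUE: {vidue_per_frame:.4f} s per new frame")          # ~0.034 s
print(f"RIFE:  {rife_1_frame:.4f} s per new frame")             # 0.031 s
print(f"Ratio: {vidue_per_frame / rife_1_frame:.2f}x slower")   # ~1.09x, on paper
```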

For me, real-time interpolation is a priority. Therefore, I would first like to prepare a request for the author to train the VIDUE model for practical applications.

Before that, however, I need to update my rankings so that VIDUE is there too. I guess that's the way to go, right?

I don't know how long it will take me, because I actually only now looked at the interpolation time and discovered all this. Previously I thought the 0.27s was for one frame and at a lower resolution, and here's such a pleasant surprise :-)

By this time I hope more people will have read my posts and perhaps a larger group will support my request. Of course, I also look forward to your feedback as to what you think of all this.

!!! Update !!!

VIDUE cannot be faster than RIFE because:

0.27s for VIDUE (8x720p)
0.16s for MIMOUNetPlus+RIFE (8x720p)

whereas MIMO-UNet+ is really fast (although it should be noted that it is only suitable for images and not video):

0.017s or 0.014s for MIMO-UNet+ (1x720p)
https://arxiv.org/abs/2108.05054

If we assume from this result that VIDUE is 3 times slower than RIFE, it is still a revelation!

Unfortunately a little doubt is raised by another comparative result:

0.40s for CDVD-TSP+RIFE (8x720p)

CDVD-TSP is in fact one of the slowest Video Deblurring algorithms:

0.729s for CDVD-TSP (1x720p)
https://arxiv.org/abs/2207.13374

2.241s !!!!!!!!!!!!!!!! for CDVD-TSP (1x720p)
https://arxiv.org/abs/2205.12634

!!! End of update !!!

A little bonus to whet your appetite ;)

zzh-tech previously created a similar database with real motion blur, but only for Video Deblurring. You can see the results of an AI Video Deblurring model here:
https://github.com/sollynoay/MMP-RNN/tr … NN#results

Go to:
Deblurring results on BSD

There is a link to the file there: BSD_MMPRNN_test.zip (3.6G)

Inside are 3 versions of the video frames:

_input - this is a frame of video recorded with motion blur
_gt - this is a frame of video recorded with a second synchronised camera without motion blur
_mmprnn - this is the effect of the algorithm performing Video Deblurring on _input files

the first camera records: 15 fps with 16 ms motion blur
the second camera records:  15 fps with 2 ms motion blur.

This is an example of what the RIFE algorithm is missing.

Ultray wrote:

I couldn't find any more troubleshooting posts about TensorRT+RIFE

https://www.svp-team.com/wiki/RIFE_AI_interpolation

DragonicPrime wrote:

Thanks for the post. This sounds interesting. I'm not a programmer or anything so idk how much I can help, but I'd love to test it and share my results when possible. The thing I'm most curious about is probably the performance. A big benefit of Rife is being able to run it in real time now, so would love to see how this new model would perform and how the quality is in different videos. I'm looking forward to seeing more on this.

Unfortunately, this method is relatively slow, at least compared to RIFE, see Table 1: https://arxiv.org/pdf/2211.11423.pdf

Optimisation for Tensor Cores will probably speed up inference significantly. For now, however, we need a model that is most useful to us; then we will think about what to do next.

I don't think anything is going to replace RIFE in the near future in terms of speed, but for lower-resolution video, or for video encoding where the highest possible quality is needed, we may be looking for something better, albeit slower.

If graphics cards double in power every 2 years, who knows if BiT won't replace RIFE for real-time interpolation in a while, even though it will be slower. After all, for 4K the bottleneck for RIFE is probably no longer the graphics card but RAM.

Then there is the issue of steering researchers towards Joint Video Deblurring and Frame Interpolation, which I want to do through my project, which still needs a lot of work: https://github.com/AIVFI/Video-Frame-In … g-Rankings

Xenocyde wrote:

Wow... just wow... looks like it was well worth waiting for UHD's latest developments. I'm all for this new deblurring model, but how can we help exactly? What coding skills are needed? Maybe I can ask some coders I know, but what exactly do I tell them?

Thanks for the offer of help.

I will first try to ask zzh-tech to train a practical model for us. That requires not only the ability to train the models he has, but also time, a lot of computing power and a willingness to help us. The latter is very important, and for the time being I think the best help might simply be to join in with the request I will write. I will try to write something on Saturday.

The response will determine what we need next.

What do we have and what more do we need?

We have a BiT++ model trained on the RBI database with real motion blur: https://drive.google.com/drive/folders/ … vnDo9C04HO

However, it turns out that the RBI database is too small and better results are obtained by the model trained on Adobe240 and then on RBI. This is indicated by the data in the paper: https://arxiv.org/abs/2211.11423 which I further confirmed with the author of the BiT model: https://github.com/zzh-tech/BiT/issues/4

Another point is that models trained to maximise PSNR and SSIM are not the optimal ones for practical purposes, as I mentioned earlier: https://www.svp-team.com/forum/viewtopi … 250#p82250

What we need is a model trained with a perceptual loss that achieves the best (lowest) LPIPS score in the tests on RBI with real motion blur. You can read why this is so important in the introduction to my rankings: https://github.com/AIVFI/Video-Frame-In … g-Rankings
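
For anyone who wants to see what LPIPS measures in practice, here is a minimal sketch using the reference lpips PyTorch package (pip install lpips); the file names are placeholders.

```python
# Minimal LPIPS example with the reference "lpips" package. Lower value = perceptually
# closer to the reference frame. File names are placeholders.
import lpips
import torch

loss_fn = lpips.LPIPS(net='alex')  # AlexNet backbone, the variant usually reported in papers

# load_image/im2tensor convert an image file to a (1, 3, H, W) tensor scaled to [-1, 1]
ref = lpips.im2tensor(lpips.load_image('ground_truth_frame.png'))
out = lpips.im2tensor(lpips.load_image('interpolated_frame.png'))

with torch.no_grad():
    d = loss_fn(ref, out)
print(f"LPIPS: {d.item():.4f}")
```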

In summary, we need a model:
- Joint Video Deblurring and Frame Interpolation
- trained on perceptual loss
- trained on Adobe240 and then on RBI

I am going to make a request to zzh-tech for such a model. I haven't had time to look into this, and I know that the way I make this request may determine whether we get such a model or not. So it is not such a simple matter.

I intended to write a request first, and then ask here on the forum for support of my request by all those who care about the best possible video frame interpolation.

I know that some people just want ready-made, working solutions in SVP and I remember well what indifference there was when I first wrote here about RIFE: https://www.svp-team.com/forum/viewtopic.php?id=6196

However, back then we were in a much better position, because we had both practical models and a ready-made RIFE filter for VapourSynth. In general, we were very lucky that the developer of RIFE was interested in building practical models. Unfortunately, most researchers who develop AI models mostly care about publishing and presenting the paper at a prestigious scientific conference and then getting citations. Practical application often does not matter at all. This is the sad truth.

Therefore, it is up to us to push for the development of models that will be most useful to us. It is for this purpose that I have started and want to update my repository on GitHub: https://github.com/AIVFI/Video-Frame-In … g-Rankings

Unfortunately, I am not a programmer and this repository is the pinnacle of my abilities. I want it to be visible on GitHub, because I believe that through it maybe some talented researchers will develop something that will be most useful to us and suitable for implementation in SVP. My repository is to show what is currently the real SOTA in the field of interpolation and what we need most.

Interestingly, already 2 of the 25 stars that my repository has received on GitHub are from the authors of Video Frame Interpolation models:
https://github.com/kinoud
https://github.com/danier97

It means that what I am doing makes sense. Life has taught me that if you want something you have to try to do everything you can to achieve it. Often it seems like we have no influence and yet there is always something that can be done. I am not a programmer, but I am trying here on this forum and on GitHub to do something that can maybe make me enjoy the best possible interpolation. I'm already happy with how far it's come with RIFE, but I want more and better because I know it's possible.

I will now try to prepare a request for a more practical BiT model and hope you will support my request on GitHub. This may have an impact on whether we get such a model or not.

I would like to prepare such a request later this weekend, but I know that writing about deadlines in my case is inappropriate, to say the least. I've taken on an extra job to earn money for an NVIDIA GeForce RTX 4090, but apparently I've been a bit overwhelmed by it all, time-wise and otherwise. However, I am not giving up.

Before I prepare a request, please write what you think about all the above.

What do we need?

An AI algorithm that will simultaneously perform Video Deblurring and Video Frame Interpolation. Simple, right?

Not necessarily. Algorithms that could do this were already available at the inception of RIFE. So what is the problem and why have they not become popular?

Crucial to the inference quality of an AI model is its design as well as the database on which it was trained. The latter is why all Joint Video Deblurring and Frame Interpolation algorithms introduced more artefacts than plain Video Frame Interpolation models, making them completely unsuitable for practical use.

The problem was the database needed to train Joint Video Deblurring and Frame Interpolation models. The database usually used for this was Adobe240, which is based on 240 fps videos. The frames from these videos were ready-made targets for the algorithms and gave exactly what we would also like to have: 240 fps without motion blur. The problem was the input frames for training. In order to obtain 24 fps, several original frames were averaged, creating new frames with synthetic motion blur. Models trained on such data gave great results, but only on datasets with synthetic motion blur. On videos with real motion blur they created unacceptable artefacts, ruling out their practical use.
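
To make the Adobe240-style synthetic blur concrete, the averaging looks roughly like this (a sketch of the general idea only, not the exact recipe of any particular paper):

```python
# Sketch of Adobe240-style synthetic blur: average a run of consecutive sharp high-fps
# frames to fake one long-exposure low-fps frame. General idea only, not any paper's
# exact recipe (real pipelines may average in linear light, skip frames, etc.).
import numpy as np

def synthesize_blur(sharp_frames: np.ndarray, frames_per_blur: int = 10) -> np.ndarray:
    """sharp_frames: (N, H, W, 3) uint8 frames from a 240 fps clip.
    Returns (N // frames_per_blur, H, W, 3) synthetically blurred ~24 fps frames."""
    n = (len(sharp_frames) // frames_per_blur) * frames_per_blur
    groups = sharp_frames[:n].reshape(-1, frames_per_blur, *sharp_frames.shape[1:])
    return groups.mean(axis=1).astype(np.uint8)  # each blurred frame = mean of the sharp frames
```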

This was the case until November 2022, when an academic paper describing the new training database was published, and in March 2023 the database was made available to all interested parties.

What is special about this database? I will use quotes from this paper:

Specifically, two BITRAN CS-700C cameras are physically aligned to the beam-splitter by laser calibration. During shooting, the light is split in half and goes into the camera with high and low frame rate modes. The low-frame-rate camera adopts long exposure scheme to capture the blurred video. Besides, a ND filter with about 10% transmittance is installed before the low-frame-rate camera to ensure photometric balance between the blurred frames and the corresponding sharp frames from the high-frame-rate camera.

RBI dataset. We use this customized hybrid camera system to collect 55 video pairs as real-world blur interpolation (RBI) dataset. The frame-rate of blurred video and the corresponding sharp video are 25 fps and 500 fps, respectively. The exposure time of blurred image is 18 ms, while the exposure time of sharp image is nearly 2 ms. This means that there are 9 sharp frames corresponding to one blurred frame, and 11 sharp frames exist in the readout deadtime between adjacent blurred frames.

This is the first database of its kind where the same scene is recorded by two cameras calibrated to less than one pixel, where:
the first camera records: 25 fps with 18 ms motion blur
the second camera records: 500 fps with 2 ms motion blur.
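
The numbers in that quote are internally consistent; a quick check:

```python
# Quick check of the RBI timing described in the quote above.
blur_fps, sharp_fps = 25, 500
blur_exposure_ms = 18
sharp_frame_ms = 1000 / sharp_fps   # 2 ms per sharp frame
blur_period_ms = 1000 / blur_fps    # 40 ms between blurred frames

print(blur_exposure_ms / sharp_frame_ms)                      # 9 sharp frames inside one blurred exposure
print((blur_period_ms - blur_exposure_ms) / sharp_frame_ms)   # 11 sharp frames in the readout deadtime
```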

We also have models trained on the above database.

Do you want to see the effect of interpolation with BiT?

Turn on the animation of the mountain scene on the Official repository of BiT: https://github.com/zzh-tech/BiT
The same thing broken down into 30 interpolated frames can be found in the paper I quoted above: https://arxiv.org/abs/2211.11423

Before you write that you see artefacts near the woman's face or the man's hand, look first at the motion-blurred input frame from which these 30 interpolated frames were created. Yes, the input frame is one big artefact: neither the girl's face nor the man's hand exists, there are only blurry patches. This is what we have in a typical fast-action video. No super-resolution algorithm like Real-ESRGAN will help here. RIFE will only move the blurry contours around too.

I have nothing against super resolution algorithms. They are great for blur-free, static photos and for movies when there is no motion. However, if there is no motion then there is nothing to interpolate, and if there is motion then there is motion blur. The more motion, the more motion blur. Motion blur is responsible for the biggest artefacts in every movie.

What could be much better than RIFE for interpolating video frames?

Let me perhaps start with the basics. Why do we use RIFE and SVP?

Simple: a typical video has 24 frames per second, and with RIFE and SVP we can create intermediate frames and, for example, with x5 interpolation, have 120 frames per second and a smooth experience watching a movie or series.

The problem is that this 120 fps is far from what we would experience if we had the same movie natively recorded in 120 fps. My point is not that RIFE isn't perfect, because no algorithm can losslessly restore what was happening in front of the camera.

The point is that in typical 24 fps footage, artefacts in the form of motion blur have been artificially added to trick our brains into the impression of movement. As in photography, so in video: each frame has a specific exposure time. In photography, however, apart from a few artistic exceptions, we want the image to be sharp without any blurring, and therefore the exposure time to be as short as possible. In 24 fps video the opposite is true: the long exposure time of each of the 24 frames is intended to simulate the effect of a larger number of frames through blurring. The result is that during a running scene, for example, you don't see the runner's hand at all, just a blurry patch. This is the case at 24 fps with a typical film exposure time of 1/48 s. You can read more about this here: https://www.red.com/red-101/shutter-angle-tutorial
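
In shutter-angle terms (see the RED tutorial linked above) the relationship is simple:

```python
# Exposure time from frame rate and shutter angle (the classic 180-degree shutter).
def exposure_time_s(fps: float, shutter_angle_deg: float = 180.0) -> float:
    return (shutter_angle_deg / 360.0) / fps

print(exposure_time_s(24))    # 1/48 s  -> the typical "film look" blur per frame
print(exposure_time_s(120))   # 1/240 s -> the much shorter blur of native 120 fps
```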

If the run in the above example had been recorded at 120 fps or 240 fps, the runner's hand would be sharp and clear in every frame. RIFE and other typical AI algorithms do well interpolating 24 fps to 120 fps when the exposure time of each frame is very short, say 1/240 s, and the details in each frame are sharp. However, there are no such videos, unless we create them ourselves, but then the question arises: why not record straight away at 120 fps or 240 fps? Then no RIFE would be necessary. Unfortunately, in most cases we have to make do with what we have, which is 24 fps and heavily blurred motion. In such a 24 fps video the runner's hand will be blurred in the original and it will be blurred in the 120 fps created by RIFE.

This is where the problem lies. We have a smooth 120 fps video obtained by interpolation with RIFE, but our brain nevertheless perceives that something is wrong. Something is artificial, and many people hate interpolation precisely for this. Many explain it to themselves by saying that the magic of cinema is gone, that this was not the director's intention, that the result is a soap opera... but this is just our brain looking for an explanation of why it hates what it sees. Can it be explained somehow?

Yes, it can be explained by the 'uncanny valley' effect: https://en.wikipedia.org/wiki/Uncanny_valley The whole magic of cinema is nothing more than the fact that 24 fps is so far from reality that our brain immediately knows it's a movie, entertainment, and not reality. It is the equivalent of an industrial robot on the 'uncanny valley' curve: we're 100% sure it's a robot, so it doesn't cause us concern or uncertainty about what we're dealing with.

Now let's get back to what video frame interpolation does: 120 fps takes us away from the magic of cinema and brings us closer to reality. Of course, we all want movies to be more immersive and as real as possible: 3D, higher resolution and more frames per second. The problem is that too much artificiality remains in video frames interpolated with RIFE (motion blur even at 120 or 240 fps), which makes us fall right into that famous 'uncanny valley' as we approach reality. The film is smooth, 120 fps, but there is something wrong with it. Every frame has an unnaturally long blur for 120 fps. Despite the 120 fps, certain details, instead of being clear and sharp, are blurred and distorted by motion blur.

Of course, we accept the compromises because we know that RIFE is still the best way to get closer to the ideal and we have no alternative, in the same way that for many years we accepted artefacts from the basic SVP algorithm because there was no alternative. However, there are some people who, if we show them the result of RIFE or SVP interpolation, will still prefer the original 24 fps. Often this is due not to stubbornness or conservatism, but to the simple fact that there is something wrong with the interpolated image.

My dream is to interpolate the video frames of a 24 fps movie in such a way that we get the effect we would get from watching that movie shot natively in 120, 240 or 500 fps. Yes, we already have 500 Hz monitors and this effect can be achieved!

Xenocyde wrote:

I understand you are most likely busy, but May is here.

My deepest apologies to: Blackfyre, DragonicPrime, Fortune424, Xenocyde and everyone else who waited for my answer.

zSoi wrote:

Any clue on the method to downscale before using RIFE?

Image Scaling
https://www.svp-team.com/wiki/Manual:Resizing

If you want to change the settings precisely:
https://www.svp-team.com/wiki/Manual:Advanced_SVP
https://www.svp-team.com/wiki/Manual:All_settings
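
If you script it yourself in plain VapourSynth rather than through SVP's GUI, the downscale is simply a resize call placed before the RIFE step; a minimal sketch (file name and target resolution are just examples):

```python
# Minimal sketch: downscale before the interpolation step in a hand-written VapourSynth
# script. In SVP itself, use the image scaling option from the wiki pages above instead.
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource("movie_2160p.mkv")           # placeholder source
clip = core.resize.Spline36(clip, width=3200, height=1800)   # downscale before RIFE
# ... pass `clip` on to the RIFE / vsmlrt step here ...
clip.set_output()
```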

Xenocyde wrote:
UHD wrote:

If anyone is interested in much, much better quality than RIFE interpolation I will write about it soon, here on this forum. Please give me another week, maybe two weeks, until I update my repository on GitHub.

2 weeks are up, UHD! Time to spill the beans, don't keep us boiling here :roll:

Thanks for remembering. Sorry, some unforeseen work-related overtime came up. I'll be back with information by the end of April for sure.

Blackfyre wrote:

EDIT:

I backed up, then replaced the ones you linked with the ones inside (C:\Program Files (x86)\SVP 4\rife\models\rife), but it did not work. I will wait for the update via the app.


Chainik wrote:

replace SVP 4\rife\vsmlrt.py

https://www.svp-team.com/forum/viewtopi … 114#p82114


Blackfyre wrote:

Not sure if versions above have fixed such issues

v2 only improves performance. Quality remains the same, as it is still based on the current RIFE v4.6 model.

Blackfyre, of course not, and there is a good reason for this. The RIFE models described in the paper you linked to (RIFE, RIFE-Large and RIFEm) are optimised to achieve the best possible PSNR and SSIM. Unfortunately, many researchers stop there, because that is enough for publications and citations. However, there are researchers who also provide models optimised for human perception and for metrics that better reflect it: LPIPS, FloLPIPS, VFIPS.

The RIFE developer hzwer did not include models optimised for human perception in the paper you linked to, but he did in a separate repository, linked here: https://github.com/hzwer/Practical-RIFE The practical models are, of course, based on the models from that scientific paper, so, for example, the v4.0-v4.6 practical models are based on the RIFEm model. The difference is that the v4.0-v4.6 practical models are better optimised for human perception and inference performance, and are trained on more data. SVP uses these models, including the latest v4.6, but not directly.

To further improve performance and take advantage of the fastest Tensor cores of NVIDIA graphics cards for AI inference, SVP uses versions of these models from this repository: https://github.com/AmusementClub/vs-mlrt The authors of this repository have recently included a second version of the implementation of these models, and described the changes in this update: https://github.com/AmusementClub/vs-mlr … c649dfb212

To summarise:

The best current model is:
rife_v4.6_ensemble.onnx from the v2 package: https://github.com/AmusementClub/vs-mlr … e_v2_v1.7z

If someone is not able to drive the full resolution of their screen with this model, for example when interpolating 4K files, it is worth using a faster model, with a small loss of quality:
rife_v4.6.onnx from the v2 package: https://github.com/AmusementClub/vs-mlr … e_v2_v1.7z
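
For anyone calling these models outside SVP, vsmlrt.py from the same repository wraps them. Below is a rough sketch; I am assuming the RIFE wrapper, the Backend.TRT helper and the model/ensemble parameters behave as described in the vs-mlrt README, so check scripts/vsmlrt.py for the exact names and model numbering.

```python
# Rough sketch only: parameter names and values here are my assumptions based on the
# vs-mlrt documentation; check scripts/vsmlrt.py in that repository for the real signature.
import vapoursynth as vs
from vsmlrt import RIFE, Backend   # vsmlrt.py must be on the Python path

core = vs.core

clip = core.lsmas.LWLibavSource("movie.mkv")
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")  # RGB float input (assumption)

# multi=5: 24 fps x5 = 120 fps; model=46 / ensemble=True are assumed to select
# the rife_v4.6_ensemble model mentioned above
smooth = RIFE(clip, multi=5, model=46, ensemble=True, backend=Backend.TRT(fp16=True))

smooth = core.resize.Bicubic(smooth, format=vs.YUV420P10, matrix_s="709")
smooth.set_output()
```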

If anyone is interested in much, much better quality than RIFE interpolation I will write about it soon, here on this forum. Please give me another week, maybe two weeks, until I update my repository on GitHub.

Mardon85 wrote:
UHD wrote:
zerosoul9901 wrote:

From my tests, using ensemble models improves frame interpolation quality, while v2 models significantly reduce the playback seek time, improve performance and should also reduce RAM bandwidth requirements.

Thanks, that's very good news :)

Please can you elaborate on "And maybe remove explicit padding, I am not a programmer" so I can get this installed and tested?

Cheers

Update: I tried the files above (it's probably user error) but I couldn't get them to work. Hopefully the SVP Team will issue an official update with them included?



IPaddingLayer is deprecated in TensorRT 8.2 and will be removed in TensorRT 10.0. Use ISliceLayer to pad the tensor, which supports new non-constant, reflects padding mode and clamp, and supports padding output with dynamic shape.

https://docs.nvidia.com/deeplearning/te … dding.html

Chainik will probably know better what this is all about.

zerosoul9901 wrote:

From my tests, using ensemble models improves frame interpolation quality, while v2 models significantly reduce the playback seek time, improve performance and should also reduce RAM bandwidth requirements.

Thanks, that's very good news :)

Thanks, Mardon85, for the further tests, and I'm very glad you found a way to interpolate x3 in real time. I think the settings you showed might also help someone else.

One more thing can increase efficiency:

scripts/vsmlrt.py: added support for rife v2 implementation
(experimental) rife v2 models can be downloaded on https://github.com/AmusementClub/vs-mlr … nal-models ("rife_v2_v{version}.7z"). It leverages onnx's shape tensor to reduce memory transaction from cpu to gpu by 36.4%. It also handles padding internally so explicit padding is not required.

This update came out a few days ago: https://github.com/AmusementClub/vs-mlr … c649dfb212

I guess all you need to do is replace the vsmlrt.py file and download the rife v2 models. And maybe remove explicit padding, I am not a programmer.

Share your impressions of how this modification, and what Mardon85 suggests, affect performance.

Traditional performance DDR4 UDIMM does not come with ECC capabilities. The CPU is responsible for handling the error correction. Due to the frequency limitations, we’ve only seen ECC on lower-spec kits.

https://www.overclockers.com/ddr5-overclocking-guide/


Mushkin Redline ECC Black DIMM Kit 32GB, DDR4-3600, CL16-19-19-39, ECC (MRC4E360GKKP16GX2)

Mushkin Redline ECC White DIMM Kit 32GB, DDR4-3600, CL16-19-19-39, ECC (MRD4E360GKKP16GX2)

30.5% increase in performance with HAGS OFF during encoding:
https://www.svp-team.com/forum/viewtopi … 819#p81819

On the other hand, with real-time playback we have:

10.8% increase in performance with HAGS OFF:
(3200*1800) / (3040*1710) ≈ 1.108

over 59.6% performance increase with only 6.7% more RAM bandwidth:
(3840*2160) / (3040*1710) ≈ 1.596

I know that with a more precisely run test the performance increase is unlikely to exceed the increase in RAM bandwidth, but I think this sufficiently confirms that with a top-of-the-line PC for the present times the bottleneck for 4K HDR files is RAM bandwidth.
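
The arithmetic behind those percentages, for clarity:

```python
# Pixel-count arithmetic behind the percentages above.
base   = 3040 * 1710   # resolution that worked before (HAGS on, DDR5-6000)
hags   = 3200 * 1800   # what HAGS OFF alone allowed
full4k = 3840 * 2160   # what the RAM overclock (6000 -> 6400) unlocked

print(f"HAGS OFF gain:        {hags / base - 1:.1%}")    # ~10.8%
print(f"full 4K vs baseline:  {full4k / base - 1:.1%}")  # ~59.6%
print(f"RAM bandwidth gain:   {6400 / 6000 - 1:.1%}")    # ~6.7%
```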

Thanks for clarifying the test information and sharing additional interesting information. I think many people reading this forum will benefit from it.
Regarding ECC, below is a short quote that I think explains the matter in the shortest and best way:

DDR5's on-die ECC doesn't correct for DDR channel errors and enterprise will continue to use the typical side-band ECC alongside DDR5's additional on-die ECC.

https://www.quora.com/What-is-the-diffe … d-real-ECC

Back to the tests. Here's a summary, if I've misunderstood something then correct me.

6000mhz 40 40 40, hags on,
Fail 3840x2160
OK 3840x2160 with downscaling to 3040x1710

6000mhz 40 40 40, hags off,
Fail 3840x2160
OK 3840x2160 with downscaling to 3200x1800

6400mhz 36 38 38, hags off,
OK 3840x2160 without downscaling

6400mhz 36 38 38, hags on,
OK 3840x2160 without downscaling

onurco wrote:

I am using 12900K at 5.2 ghz all core overclock and I overclocked my ram from 6000mhz 40 40 40 xmp 3 profile to 6400 36 38 38 after reading your text. I was using RIFE tensor rt with 3040x1710 downscaling for 60 fps interpolation, after overclocking the RAM, with my rtx 4090 and hags off, I can interpolate 3840x2160 60 fps with no frame drops! Thanks for the heads up.

Thanks for sharing the test results.

Did I understand correctly?

6000mhz 40 40 40, hags off,
Fail 3840x2160
OK 3840x2160 with downscaling to 3040x1710

6400mhz 36 38 38, hags off,
OK 3840x2160 without downscaling

I mean, was the only change an overclocking of the RAM? Just hags off with 6000mhz was not enough?

Second question.

Are you writing about interpolation:
24 fps → 60 fps
25 fps → 60 fps
30 fps → 60 fps
60 fps → 120 fps

?