Rife-App Update 2.70


Here is the new version. Hooray!

  • 3 new animation models
  • 2 new, very WIP Real Life models
  • I'm still going to try to improve all models, but the Real Life models need more work than the animation ones.
  • Improvements to bugs and memory usage
  • The app now saves your configuration once you hit Interpolate. You can restore the default values by hitting "Restore default App Values".

Feel free to drop any feedback on any of the models.

Files

RIFE-App 2.7 2 GB
Nov 09, 2021


Comments


What does the "Use less memory" option do? I don't notice it reducing VRAM usage. How does it impact quality?

Changing Scale from 1.0 to 0.5 does seem to significantly reduce VRAM usage. How much does it impact quality?

Thanks for this magical app. :)

"Use less memory" tries to keep only a single frame at a time in memory, but the difference is usually pretty small. For now I keep the option there, since from time to time I try to improve on it. Scale does change memory usage a lot, but it can give pretty bad results. 0.5 should work fine for high-res input.

Thanks for the compliment!
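As a rough illustration of why Scale 0.5 cuts VRAM so much: the model runs on downscaled frames and the result is upsampled back, so halving the resolution roughly quarters the per-frame activation memory. This sketch uses hypothetical names and a toy blend in place of the real flow model; it is not Rife-App's actual code.

```python
import numpy as np

def interpolate_at_scale(frame0, frame1, model, scale=1.0):
    """frame0/frame1: (H, W, 3) float arrays; model takes two frames, returns one."""
    if scale == 0.5:
        # Nearest-neighbor downsample stands in for the app's real resampler.
        mid = model(frame0[::2, ::2], frame1[::2, ::2])
        # Upsample the interpolated frame back to the source resolution.
        return np.repeat(np.repeat(mid, 2, axis=0), 2, axis=1)
    return model(frame0, frame1)

# Toy stand-in "model": averages the two frames (real models estimate optical flow).
blend = lambda a, b: (a + b) / 2
a = np.random.rand(64, 64, 3)
b = np.random.rand(64, 64, 3)
print(interpolate_at_scale(a, b, blend, scale=0.5).shape)  # (64, 64, 3)
```

The quality cost comes from the upsampling step: fine detail lost in the downscale cannot be recovered, which matters less when the input is already high resolution.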

I'm seeing vertical judders (screen seems to shake up and down) from interpolating a 5K video. Here are small clips (12MB each) to show what I mean:

Here were my settings:


I tested turning off "Interpolate in YUV". The results are even worse, with black block artifacts: Interpolated Clip in RGB

I can't seem to be able to run the file, even in VLC. But it may be a problem on my computer; I will try again later. For now, can you try using "[New] Quality" and disabling YUV, and see if that helps? You may need to use a smaller scale so you don't run out of memory.

Looks like "[New] Balance" with Scale 0.5 got rid of the vertical judder. ("[New] Quality" ran out of VRAM even with Scale 0.5.)

Interpolated Clip (New Balance Scale 0.5)

I can't seem to be able to run the file, even in VLC.

Hmm, not sure why. I tried VLC and it worked. What about PotPlayer?

If this model fixed it, then it's all good. The next update will have a model that uses less VRAM and gives better results than "[New] Balance", so it should fix your problem.

Good afternoon. Please tell me, is an RTX 3090 capable of interpolating video in 4K, and is there enough memory? On the (new) Quality preset, of course.

How is the quality preset in 2.70 fundamentally different from any other preset?

Should we expect a video comparison of the quality mode, for example, with balance or 3.1?

Each quality setting is a different model. But I'm almost finished with the next version; it will use less memory, and its lower "Quality" settings will give almost as good results as the best quality while using way less memory. I'm not sure if the current model works at 4K on a 3090, but the next one will. Even if you need to turn the quality down a little, it won't affect the result that much.

I checked it on an RTX 3090, and there is not enough memory with both Balance and Quality. I look forward to the new version; thanks for your reply.

Ran into the following error during interpolation:

Exception ignored in thread started by: <function queue_file_save at 0x000001D9FF244C10>
Traceback (most recent call last):
  File "my_DAIN_class.py", line 754, in queue_file_save
  File "my_DAIN_class.py", line 665, in PipeFrame
BrokenPipeError: [Errno 32] Broken pipe

Logs prior to the error:

FPS: 24000/1001
FPS Eval: 23.976023976023978
Using Benchmark: True
Batch Size: -1
Input FPS: 23.976023976023978
Use all GPUS: False
Scale: 1.0
Render Mode: 0
Interpolations: 2X
Use Smooth: 0
Use Alpha: 0
Use YUV: 1
Encode: libsvtav1
Device: cuda:0
Using Half-Precision: False
Resolution: 3840x2160
Using Model: 3_1
Selected auto batch size, testing a good batch size.
Setting new batch size to 1
Resolution: 3840x2160
RunTime: 2453.803000
Total Frames: 58832
  0%|                         | 2/58832 [00:04<43:53:18,  2.69s/it, file=File 0]
-------------------------------------------
SVT [version]:  SVT-AV1 Encoder Lib v0.8.6-72-gec7ac87f
SVT [build]  :  GCC 10.2.0       64 bit
LIB Build date: Feb  7 2021 13:13:12
-------------------------------------------
Number of logical cores available: 24
Number of PPCS 53
[asm level on system : up to avx2]
[asm level selected : up to avx2]
-------------------------------------------
SVT [config]: Main Profile      Tier (auto)     Level (auto)
SVT [config]: Preset                                                    : 7
SVT [config]: EncoderBitDepth / EncoderColorFormat / CompressedTenBitFormat     : 10 / 1 / 0
SVT [config]: SourceWidth / SourceHeight                                        : 3840 / 2160
SVT [config]: Fps_Numerator / Fps_Denominator / Gop Size / IntraRefreshType     : 48000 / 1001 / 49 / 2
SVT [config]: HierarchicalLevels  / PredStructure                               : 4 / 2
SVT [config]: BRC Mode / QP  / LookaheadDistance / SceneChange                  : CQP / 50 / 0 / 0
-------------------------------------------
 54%|████████▌       | 31489/58832 [3:39:56<2:49:33,  2.69it/s, file=File 31487]
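For what it's worth, a BrokenPipeError in a function like PipeFrame usually means the encoder process on the other end of the pipe exited mid-encode (here, SVT-AV1 at 54%), so the next frame write fails with errno 32. A minimal sketch of that pattern, with hypothetical names rather than Rife-App's actual code:

```python
import subprocess
import sys

def pipe_frames(cmd, frames):
    """Write raw frames to an encoder subprocess's stdin, tolerating early exit."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        for frame in frames:
            proc.stdin.write(frame)  # raises BrokenPipeError if the encoder died
        proc.stdin.close()
    except BrokenPipeError:
        # The pipe error is only a symptom; the encoder's stderr has the real cause.
        print("Encoder exited early:", proc.stderr.read().decode(), file=sys.stderr)
    return proc.wait()

# Demo with a stand-in "encoder" that just drains stdin and exits cleanly.
rc = pipe_frames([sys.executable, "-c", "import sys; sys.stdin.buffer.read()"],
                 [b"\x00" * 1024] * 4)
print(rc)  # 0
```

Catching the exception this way lets the app surface the encoder's own error message instead of dying in the frame-writing thread.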


I'm not totally sure, but I think the Data Precision should be at Normal. Can you try setting it to Normal?

Yep, will try. I've seen artifacts when interpolating high resolution video with Normal Data Precision, and the artifacts go away when bumping Data Precision up, so I have been using Data Precision as high as my VRAM will allow.

I'll stop that. :)

On older versions it really was necessary to use it to avoid artifacts. That should not occur in this version with regular precision, unless I messed something up. If you see artifacts, let me know.


If filenames contain dots, then the output will be overwritten :(

for example select files:

test.1.mp4
test.2.mp4
test.3.mp4

all output will go to one folder, test

Yeah, that is a good point. Going to fix it, thanks.
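A plausible (hypothetical) cause is deriving the output folder by splitting the filename on its first dot; `os.path.splitext` strips only the final extension and avoids the collision. The function names below are illustrative, not Rife-App's actual code.

```python
import os

def output_dir_buggy(filename):
    # Splitting on the first dot: "test.1.mp4" -> "test", so all three collide.
    return filename.split(".")[0]

def output_dir_fixed(filename):
    # splitext removes only the last extension: "test.1.mp4" -> "test.1".
    return os.path.splitext(filename)[0]

files = ["test.1.mp4", "test.2.mp4", "test.3.mp4"]
print([output_dir_buggy(f) for f in files])  # ['test', 'test', 'test']
print([output_dir_fixed(f) for f in files])  # ['test.1', 'test.2', 'test.3']
```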

The 2 new Real Life Models seem to use far more VRAM than the 3.1 model. I couldn't interpolate a 4K video using the new models with 10GB of VRAM.

Yes, I'm still working on them to improve quality and use less VRAM; at high resolutions you most likely still want to use 3.1.

Error when starting interpolation:

Using Benchmark: True
Batch Size: -1
Input FPS: 23.976023976023978
Use all GPUS: False
Scale: 1.0
Render Mode: 0
Interpolations: 4X
Use Smooth: 0
Use Alpha: 0
Use YUV: 0
Encode: libsvtav1
Device: cuda:0
Using Half-Precision: False
Resolution: 3840x2160
Loading Pre-train
Using Model: rl_regular2
Traceback (most recent call last):
  File "my_design.py", line 86, in run
  File "my_DAIN_class.py", line 1436, in RenderVideo
  File "my_DAIN_class.py", line 1495, in RenderVideoWithModel
  File "my_DAIN_class.py", line 143, in make_inference
  File "model\flow_slow2.py", line 442, in inference
  File "torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "model\flow_slow2.py", line 165, in forward
  File "torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "torch\nn\modules\conv.py", line 443, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "torch\nn\modules\conv.py", line 439, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same
QObject::setParent: Cannot set parent, new parent is in a different thread

Ah, is your "Data Precision" set to something other than "Normal"? If so, try putting it on Normal. This is indeed a bug, but with the latest changes, Normal should work for all types of input.
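For context, this kind of mismatch typically happens when a frame is decoded into a float64 array (NumPy's default), which becomes a DoubleTensor, while the model weights are float32. A small NumPy sketch of the dtype issue and the usual cast fix; this is illustrative only, not the app's code, and PyTorch raises the RuntimeError above instead of promoting.

```python
import numpy as np

weights = np.ones((4, 4), dtype=np.float32)  # model weights: float32
frame = np.random.rand(4, 4)                 # np.random.rand returns float64

# NumPy silently promotes mixed dtypes (float64 "wins"), which is how the
# mismatch can hide until a stricter framework like PyTorch refuses it.
print((frame * weights).dtype)  # float64

frame32 = frame.astype(np.float32)  # the fix: cast the input to match the weights
print((frame32 * weights).dtype)    # float32
```

In PyTorch the equivalent fix is casting the input tensor with `.float()` before inference.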