<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<title><![CDATA[SmoothVideo Project — Error ONNX model, TensorRT]]></title>
		<link>https://www.svp-team.com/forum/viewtopic.php?id=6805</link>
		<atom:link href="https://www.svp-team.com/forum/extern.php?action=feed&amp;tid=6805&amp;type=rss" rel="self" type="application/rss+xml" />
		<description><![CDATA[The most recent posts in Error ONNX model, TensorRT.]]></description>
		<lastBuildDate>Wed, 18 Jan 2023 17:50:48 +0000</lastBuildDate>
		<generator>PunBB</generator>
		<item>
			<title><![CDATA[Re: Error ONNX model, TensorRT]]></title>
			<link>https://www.svp-team.com/forum/viewtopic.php?pid=81766#p81766</link>
			<description><![CDATA[<p>there are no errors here<br />just wait for the process to finish</p>]]></description>
			<author><![CDATA[null@example.com (Chainik)]]></author>
			<pubDate>Wed, 18 Jan 2023 17:50:48 +0000</pubDate>
			<guid>https://www.svp-team.com/forum/viewtopic.php?pid=81766#p81766</guid>
		</item>
		<item>
			<title><![CDATA[Re: Error ONNX model, TensorRT]]></title>
			<link>https://www.svp-team.com/forum/viewtopic.php?pid=81762#p81762</link>
			<description><![CDATA[<p>the 1st and 3rd lines are not errors</p>]]></description>
			<author><![CDATA[null@example.com (dlr5668)]]></author>
			<pubDate>Wed, 18 Jan 2023 16:24:50 +0000</pubDate>
			<guid>https://www.svp-team.com/forum/viewtopic.php?pid=81762#p81762</guid>
		</item>
		<item>
			<title><![CDATA[Error ONNX model, TensorRT]]></title>
			<link>https://www.svp-team.com/forum/viewtopic.php?pid=81761#p81761</link>
			<description><![CDATA[<p>[W] [TRT] onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.</p><p>[W] Could not read timing cache from: C:\Users\......\AppData\Roaming\SVP4\cache\Program Files (x86)/SVP 4/rife\models\rife\rife_v4.4.onnx.min64x64_opt2560x1440_max2560x1440_fp16_trt-8502_cudnn_I-fp16_O-fp16_NVIDIA-GeForce-GTX-1070_a8b3b7a9.engine.cache.</p><br /><p>[TRT] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in <a href="https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars">https://docs.nvidia.com/cuda/cuda-c-pro … l#env-vars</a></p><br /><br /><p>Is something wrong with NVIDIA TensorRT?<br />What should I do?<br />I have read the following:<br />Module loading</p><p>CUDA_MODULE_LOADING</p><p>DEFAULT, LAZY, EAGER</p><p>Specifies the module loading mode for the application. When set to EAGER, all kernels from a cubin, fatbin or a PTX file are fully loaded upon the corresponding cuModuleLoad* API call. This is the same behavior as in all preceding CUDA releases. When set to LAZY, loading of a specific kernel is delayed until a CUfunc handle is extracted with the cuModuleGetFunction API call. This mode allows for lowering initial module loading latency and decreasing initial module-related device memory consumption, at the cost of higher latency of the cuModuleGetFunction API call. Default behavior is EAGER. Default behavior may change in future CUDA releases.</p><p>But I don't understand!</p>]]></description>
			<author><![CDATA[null@example.com (anders.nilsson)]]></author>
			<pubDate>Wed, 18 Jan 2023 15:46:04 +0000</pubDate>
			<guid>https://www.svp-team.com/forum/viewtopic.php?pid=81761#p81761</guid>
		</item>
	</channel>
</rss>
