{"id":780,"date":"2025-07-20T17:57:07","date_gmt":"2025-07-20T17:57:07","guid":{"rendered":"https:\/\/ccds.ai\/?p=780"},"modified":"2025-08-10T18:15:35","modified_gmt":"2025-08-10T18:15:35","slug":"uflow-net-a-unified-approach-for-improved-video-frame-interpolation","status":"publish","type":"post","link":"https:\/\/ccds.ai\/?p=780","title":{"rendered":"UFlow-Net: A Unified Approach for Improved Video Frame Interpolation"},"content":{"rendered":"\n<figure class=\"wp-block-image size-full\"><img fetchpriority=\"high\" decoding=\"async\" width=\"975\" height=\"490\" src=\"https:\/\/ccds.ai\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-20-235627.png\" alt=\"\" class=\"wp-image-781\" srcset=\"https:\/\/ccds.ai\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-20-235627.png 975w, https:\/\/ccds.ai\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-20-235627-300x151.png 300w, https:\/\/ccds.ai\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-20-235627-768x386.png 768w, https:\/\/ccds.ai\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-20-235627-705x354.png 705w\" sizes=\"(max-width: 975px) 100vw, 975px\" \/><\/figure>\n\n\n\n<p>In computer vision, video frame interpolation&nbsp; plays a significant role in video enhancement by synthesizing intermediate frames to improve temporal resolution and visual quality. This techniques help reduce motion blur, create smoother slow motion videos and enhancing total viewing experience, especially in low frame rate video. This is vital for application like video processing, streaming and video restoration. We are developing <strong>UFlow-Net<\/strong>, a deep learning-based model that improves frame interpolation accuracy.<\/p>\n\n\n\n<p>The process starts with a dataset of three consecutive video frames. The first and third frames are used as inputs, while the second frame is used as a reference for evaluation. 
These frames undergo preprocessing steps such as resizing, normalization, and stacking.<\/p>\n\n\n\n<p>Next, the preprocessed frames are passed into UFlow-Net, which consists of two key steps. <strong>The Flow-Enhanced Encoder-Decoder<\/strong> captures motion and spatial details from the input frames and reconstructs the features while keeping the motion consistent. The <strong>Refined Frame Synthesis<\/strong> step further refines the features and generates the missing middle frame using the learned motion patterns and spatial relationships. We evaluate our model using <strong>PSNR<\/strong> (Peak Signal-to-Noise Ratio) and <strong>SSIM<\/strong> (Structural Similarity Index Measure). Our model achieved a PSNR of <strong>35.65 dB<\/strong> and an SSIM of <strong>0.97<\/strong>.<\/p>\n\n\n\n<p><strong>Relevant publications:<\/strong><\/p>\n\n\n\n<p>F. Israq, S. B. Alam, H. Khatun, S. S. Sarker, S. T. Bhuiyan, M. Haque, R. Rahman, and S. Kobashi, &#8220;UFlow-Net: A Unified Approach for Improved Video Frame Interpolation,&#8221; in Proc. 2024 27th International Conference on Computer and Information Technology (ICCIT), Cox\u2019s Bazar, Bangladesh, Dec. 20-22, 2024.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In computer vision, video frame interpolation plays a significant role in video enhancement by synthesizing intermediate frames to improve temporal resolution and visual quality. 
This technique helps reduce motion blur, [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":781,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[93],"tags":[],"class_list":["post-780","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-mira_projects"],"acf":[],"jetpack_featured_media_url":"https:\/\/ccds.ai\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-20-235627.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/ccds.ai\/index.php?rest_route=\/wp\/v2\/posts\/780","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ccds.ai\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ccds.ai\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ccds.ai\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ccds.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=780"}],"version-history":[{"count":1,"href":"https:\/\/ccds.ai\/index.php?rest_route=\/wp\/v2\/posts\/780\/revisions"}],"predecessor-version":[{"id":782,"href":"https:\/\/ccds.ai\/index.php?rest_route=\/wp\/v2\/posts\/780\/revisions\/782"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ccds.ai\/index.php?rest_route=\/wp\/v2\/media\/781"}],"wp:attachment":[{"href":"https:\/\/ccds.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=780"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ccds.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=780"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ccds.ai\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=780"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}