An open-source AI video toolkit engineered for anime. Interpolation, upscaling, depth, segmentation and restoration — all in one pipeline. Runs on CUDA, TensorRT, DirectML and OpenVINO.
01 · Upscale Demo
Drag the handle. The left side is the untouched source frame; the right is the TAS-upscaled result using fallin_strong on TensorRT: sharper line art pixel for pixel, preserved cel colours, no haloing.
02 · Pipeline
Multiply frames between keyframes with RIFE 4.22-lite, RIFE 4.25-heavy and GMFSS for fluid in-betweens. Scene-change detection built in.
2× resolution with anime-tuned models. ShuffleCugan, SPAN, Fallin Strong and Compact variants all supported.
Monocular depth estimation via Depth Anything V2. Small / Base / Large variants for After Effects 2.5D parallax and shader work.
Alpha-channel mattes for character isolation. Clean edges on hair, armor and line art — ready for compositing in Resolve or AE.
SCUNet denoising, anime-specific sharpening and deblock. Rescue old broadcast masters without destroying cel detail.
Drops redundant or near-identical cels before the GPU touches them — fewer frames in means faster passes out. Switch detection methods with --dedup_method: SSIM, MSE, FlowNetS or VMAF.
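The idea behind metric-based dedup can be sketched in a few lines of Python. This is a toy illustration using MSE over flat pixel lists with an arbitrary threshold, not TAS's actual implementation:

```python
def mse(frame_a, frame_b):
    """Mean squared error between two equally sized frames (flat pixel lists)."""
    return sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / len(frame_a)

def dedup(frames, threshold=1.0):
    """Keep a frame only if it differs enough from the last kept frame."""
    kept = [frames[0]]
    for frame in frames[1:]:
        if mse(kept[-1], frame) > threshold:
            kept.append(frame)
    return kept

# Two pairs of near-identical cels: only one frame per pair survives.
frames = [[0, 0, 0, 0], [0, 0, 0, 1], [90, 90, 90, 90], [90, 90, 91, 90]]
print(len(dedup(frames)))  # 2 of the 4 frames survive
```

SSIM, FlowNetS and VMAF slot into the same comparison step; only the distance function changes.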
Drop TAS directly into your motion-graphics workflow, or script every pass from the terminal. Same models, same flags, same results.
A CEP panel that sits inside AE. Queue shots, pick a preset, render — without leaving your comp.
Every pass is a flag on main.py. Script it, batch it, drop it in a Makefile — it's just a CLI.
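Because every pass is a flag, batching is just string assembly. A minimal sketch of scripting main.py from Python — the preset dict and the `episodes/` directory are assumptions for illustration:

```python
from pathlib import Path
import subprocess

def tas_command(src: Path, preset: dict) -> list:
    """Build one main.py invocation; every pass is just a flag."""
    cmd = ["python", "main.py", "--input", str(src)]
    for flag, value in preset.items():
        cmd.append(f"--{flag}")
        if value is not True:          # boolean flags take no argument
            cmd.append(str(value))
    return cmd

preset = {"upscale_method": "shufflecugan-tensorrt", "sharpen": True}
for episode in sorted(Path("episodes").glob("*.mkv")):
    subprocess.run(tas_command(episode, preset), check=True)
```

The same preset dict gives the same flags every run — same models, same flags, same results.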
03 · By the Numbers
04 · Get TAS
Prebuilt releases bundle all models and dependencies. No Python install required on Windows.
Latest build · v2.6.0 · ships with all models bundled
05 · How it works
Every pass TAS runs shares a single in-memory frame queue — no redundant disk writes between stages. Flags you pass on the command line decide which nodes light up.
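The shared-queue idea can be sketched with stdlib threads and queues. A toy model of the architecture — pass-through "dedup" and a doubling "upscale" stand in for the real passes, and none of this is TAS's actual code:

```python
import queue
import threading

def stage(name, fn, inbox, outbox):
    """One pipeline node: pull frames from a shared queue, push results on."""
    def run():
        while (frame := inbox.get()) is not None:
            outbox.put(fn(frame))
        outbox.put(None)               # propagate end-of-stream
    threading.Thread(target=run, name=name).start()

q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
stage("dedup", lambda f: f, q_in, q_mid)              # placeholder pass-through
stage("upscale", lambda f: [p * 2 for p in f], q_mid, q_out)

for frame in ([1, 2], [3, 4], None):                  # None terminates the stream
    q_in.put(frame)
results = list(iter(q_out.get, None))
print(results)  # [[2, 4], [6, 8]]
```

Frames flow stage to stage entirely in memory; disabled stages simply never get wired into the chain.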
Input: Local file, batch list or YouTube URL. mp4 · mkv · mov · webm
Dedup: Drops redundant cels before the GPU ever sees them. SSIM · MSE · FlowNetS · VMAF
Interpolate: Synthesises in-between frames from neighbouring keyframes. RIFE 4.6 → 4.25-heavy · GMFSS · Elexor
Upscale: 2× with anime-tuned super-resolution architectures. ShuffleCugan · SPAN · Compact · OpenProteus · AniScale 2
Restore: Denoise, deblock, dejpeg, sharpen and darken line art — chainable. SCUNet · NAFNet · Anime1080Fixer · FastLineDarken
Encode: FFmpeg hand-off with animation-tuned x264/x265 or NVENC. x264_animation_10bit · x265 · NVENC · AV1 · ProRes
06 · Supported models
Real model names, real backends. Swap any weight in via --custom_model (Spandrel on CUDA, ONNX on TensorRT / DirectML / OpenVINO).
07 · Pipeline in 30 seconds
Every pass is a flag. Chain them freely. Auto-enable is built in: specifying any *_method flag turns that feature on for you.
$ python main.py --input anime_episode.mkv \
--upscale_method shufflecugan-tensorrt \
--interpolate_method rife4.25-tensorrt --ensemble \
--restore_method anime1080fixer-tensorrt \
--dedup_method ssim-cuda \
--sharpen --sharpen_sens 30 \
--encode_method x264_animation_10bit --bit_depth 10bit
› resolving backends · tensorrt engines cached
› stage: dedup · ssim-cuda
› stage: interpolate · rife4.25-tensorrt (ensemble)
› stage: upscale · shufflecugan-tensorrt (2×)
› stage: restore · anime1080fixer-tensorrt + sharpen
› encoder · libx264 · animation · 10-bit
✓ done in 12m 47s
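The auto-enable behaviour ("specifying a *_method turns the feature on") can be sketched with argparse. Flag names come from the example above; the wiring is an assumption, not TAS's actual argument handling:

```python
import argparse

FEATURES = ("upscale", "interpolate", "restore", "dedup")

parser = argparse.ArgumentParser()
for feature in FEATURES:
    parser.add_argument(f"--{feature}_method")

args = parser.parse_args(["--upscale_method", "shufflecugan-tensorrt"])

# A pass runs iff its *_method flag was given -- no separate on/off switch.
enabled = {f for f in FEATURES if getattr(args, f"{f}_method") is not None}
print(enabled)  # {'upscale'}
```

One flag both selects the model and switches the stage on, so a command line never needs a redundant `--enable_x` alongside `--x_method`.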
08 · FAQ
If you have an RTX 20/30/40/50 card, use the CUDA + TensorRT build for the best speed. GTX 16 series works on CUDA. GTX 10 series (Pascal) and AMD cards run on the DirectML backend. Intel iGPUs and dGPUs run on OpenVINO. Pick the matching -tensorrt, -directml or -openvino model suffix.
Yes. The --custom_model flag accepts .pt, .pth, .ckpt and .safetensors for the CUDA compact path via Spandrel, and .onnx for the -directml, -openvino and -tensorrt variants. Just match the backend suffix to the file format.
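The format-to-backend mapping in that answer can be expressed as a small dispatch table. A sketch only — `backend_for` is a hypothetical helper, with the extension rules taken from the FAQ text above:

```python
from pathlib import Path

# Formats per backend, as documented: Spandrel on CUDA, ONNX everywhere else.
SPANDREL_EXTS = {".pt", ".pth", ".ckpt", ".safetensors"}
ONNX_BACKENDS = {"tensorrt", "directml", "openvino"}

def backend_for(model_path: str, backend: str) -> str:
    """Pick the loader for a --custom_model file, or reject a mismatch."""
    ext = Path(model_path).suffix.lower()
    if backend == "cuda" and ext in SPANDREL_EXTS:
        return "spandrel"
    if backend in ONNX_BACKENDS and ext == ".onnx":
        return "onnx"
    raise ValueError(f"{ext} models are not supported on the {backend} backend")

print(backend_for("my_upscaler.safetensors", "cuda"))  # spandrel
```

In short: PyTorch-family weights pair with the CUDA path, .onnx pairs with everything carrying a -tensorrt, -directml or -openvino suffix.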
Not for the prebuilt Windows release — everything you need is bundled. On first run TAS will fetch FFmpeg automatically if it isn't already present. You only need Python if you're cloning the repo to develop, in which case pip install -r requirements.txt plus one of the extra-requirements-* profiles will get you going.
Yes — new models, backends and restoration passes land regularly. The standalone Windows app and the After Effects panel are both under active development. Follow releases on GitHub or join the Discord for nightly builds.
09 · Ship it
Open-source. No paywall. Just frames.
AGPL-3.0 · Windows · Built by the community, for the community.