v2.7.0 · Open Source · AGPL-3.0

ENHANCE ANIME. FRAME BY FRAME.

An open-source AI video toolkit engineered for anime. Interpolation, upscaling, depth, segmentation and restoration — all in one pipeline. Runs on CUDA, TensorRT, DirectML and OpenVINO.

01 · Upscale Demo

From source frame to anime-tuned SR.

Drag the handle. The left side is the untouched source frame; the right is the TAS-upscaled result using fallin_strong on TensorRT. Pixel-for-pixel sharper lines, preserved cel colors, no halo.

Fallin Strong · TensorRT · FP16 · sharper enhancement with denoising · 2× scale factor

02 · Pipeline

One toolkit. Every step of the post-production chain.

Frame Interpolation

Multiply frames between keyframes with RIFE 4.22-lite, RIFE 4.25-heavy and GMFSS for fluid in-betweens. Scene-change detection built in.

  • RIFE
  • GMFSS
  • Scene Cut
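RIFE and GMFSS synthesise the in-between by warping pixels along estimated optical flow. As an intuition pump only, here is a hypothetical pure-Python helper that produces a naive linear blend between two keyframes; real interpolators are flow-based, not blends, which is why a plain mix like this ghosts on motion:

```python
def blend_frames(a, b, t=0.5):
    """Naive in-between: per-pixel linear mix of frames a and b at time t.

    Frames are 2-D lists of grayscale values. Flow-based models (RIFE,
    GMFSS) move pixels instead of mixing them, but the goal is the same:
    a new frame at time t between two keyframes."""
    return [[(1 - t) * pa + t * pb for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

mid = blend_frames([[0, 100]], [[100, 200]], t=0.5)  # -> [[50.0, 150.0]]
```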

AI Upscaling

2× resolution with anime-tuned models. ShuffleCugan, SPAN, Fallin Strong and Compact variants all supported.

  • ShuffleCugan
  • SPAN
  • Compact
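The 2× geometry is easy to picture with a naive nearest-neighbour upscale; SR models like ShuffleCugan and SPAN predict the new pixels instead of copying them. A hypothetical pure-Python sketch:

```python
def upscale2x_nearest(frame):
    """2x nearest-neighbour upscale: duplicate every pixel on both axes.

    `frame` is a 2-D list of pixel values. Anime-tuned SR architectures
    hallucinate plausible detail for the new pixels; this copy-based
    version only shows the 2x width/height relationship."""
    out = []
    for row in frame:
        doubled = [px for px in row for _ in (0, 1)]  # double the width
        out.append(doubled)
        out.append(list(doubled))                     # double the height
    return out
```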

Depth Maps

Monocular depth estimation via Depth Anything V2. Small / Base / Large variants for After Effects 2.5D parallax and shader work.

  • DA V2
  • Small/Base/Large
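A depth pass usually lands in After Effects as a grayscale map, so raw float depth has to be normalised to the display range first. A minimal sketch with a hypothetical pure-Python helper:

```python
def depth_to_8bit(depth):
    """Normalise a raw float depth map to 0-255 for an 8-bit grayscale
    export (e.g. to drive a 2.5D parallax rig in After Effects).

    `depth` is a 2-D list of arbitrary float depth values."""
    flat = [d for row in depth for d in row]
    lo, hi = min(flat), max(flat)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0  # guard flat maps
    return [[round((d - lo) * scale) for d in row] for row in depth]
```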

BG / FG Segmentation

Alpha-channel mattes for character isolation. Clean edges on hair, armor and line art — ready for compositing in Resolve or AE.

  • ANIMESEGMENT
  • Alpha Out
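Once you have a matte, compositing is per-pixel straight-alpha blending. A minimal pure-Python sketch of what Resolve or AE does with the alpha output (hypothetical helper, scalar grayscale pixels for brevity):

```python
def composite(fg, bg, alpha):
    """Straight alpha over: out = a*fg + (1-a)*bg, per pixel.

    `alpha` is the segmentation matte in 0..1 -- 1.0 keeps the isolated
    character, 0.0 shows the new background, fractional values feather
    edges on hair and line art."""
    return [[a * f + (1 - a) * b
             for f, b, a in zip(fr, br, ar)]
            for fr, br, ar in zip(fg, bg, alpha)]
```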

Restoration

SCUNet denoising, anime-specific sharpening and deblock. Rescue old broadcast masters without destroying cel detail.

  • SCUNet
  • NAFNet
  • Sharpen
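Of these passes, sharpening is the simplest to sketch: an unsharp mask adds back a scaled difference between each pixel and a local blur. The helper below is hypothetical and 1-D; the shipped sharpener is anime-tuned and 2-D, but the mechanism is the same, and `sens` echoes the spirit of a 0-100 sensitivity like --sharpen_sens:

```python
def sharpen_1d(row, sens=30):
    """Unsharp mask on a 1-D scanline: out = px + k * (px - blur).

    The blur is a 3-tap mean with clamped edges; `sens` (0-100) scales
    how much of the high-frequency difference is added back."""
    k = sens / 100.0
    out = []
    for i, px in enumerate(row):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, len(row) - 1)]
        blur = (left + px + right) / 3.0
        out.append(px + k * (px - blur))
    return out
```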

Deduplication

Drops redundant or near-identical cels before the GPU touches them — fewer frames in means faster passes out. Pick the detection method with --dedup_method: SSIM, MSE, FlowNetS or VMAF.

  • SSIM
  • FlowNetS
  • VMAF
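The gatekeeping logic is simple: compare each frame against the last one kept, and drop it when the difference metric stays under a threshold. A minimal sketch using plain MSE (hypothetical helper; SSIM, FlowNetS and VMAF are stronger metrics playing the same role):

```python
def dedup_mse(frames, threshold=1.0):
    """Drop near-duplicate frames by mean squared error against the
    last kept frame. Frames are flat lists of pixel values."""
    kept = [frames[0]]
    for frame in frames[1:]:
        prev = kept[-1]
        mse = sum((a - b) ** 2 for a, b in zip(frame, prev)) / len(frame)
        if mse > threshold:   # changed enough: keep it for the GPU
            kept.append(frame)
    return kept
```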

Two frontends. One engine.

Drop TAS directly into your motion-graphics workflow, or script every pass from the terminal. Same models, same flags, same results.

After Effects Panel

A CEP panel that sits inside AE. Queue shots, pick a preset, render — without leaving your comp.

  • AE CEP Panel
  • Windows

Python CLI

Every pass is a flag on main.py. Script it, batch it, drop it in a Makefile — it's just a CLI.

  • main.py
  • Python 3.13
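Batching is ordinary scripting: build one argv per episode and hand it to subprocess. A minimal sketch; the flags mirror the ones documented on this page, but the helper name and the preset-dict shape are hypothetical, not a TAS config format:

```python
import sys

def tas_command(episode, preset):
    """Build one main.py invocation for a single episode.

    Boolean True values become bare flags (e.g. --sharpen); everything
    else becomes a flag/value pair."""
    cmd = [sys.executable, "main.py", "--input", str(episode)]
    for flag, value in preset.items():
        cmd += [flag] if value is True else [flag, str(value)]
    return cmd

preset = {"--upscale_method": "shufflecugan-tensorrt",
          "--encode_method": "x264_animation_10bit",
          "--sharpen": True}
# e.g. for ep in pathlib.Path("episodes").glob("*.mkv"):
#          subprocess.run(tas_command(ep, preset), check=True)
```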

03 · By the Numbers

Numbers that actually ship.

30+
AI Models
interp · upscale · depth · restore
20+
Video Encoders
x264 · x265 · NVENC · AV1 · ProRes
4
Inference Backends
CUDA · TensorRT · DirectML · OpenVINO
2×
Upscale Factor
anime-tuned super-resolution
1
Windows
Standalone + After Effects
100% OSS
AGPL-3.0
fork it, contribute back

04 · Get TAS

Install once. Run everywhere.

Prebuilt releases bundle all models and dependencies. No Python install required on Windows.

Latest build · v2.7.0 · ships with all models bundled

05 · How it works

One file in. Six stages of inference out.

Every pass TAS runs shares a single in-memory frame queue — no redundant disk writes between stages. Flags you pass on the command line decide which nodes light up.
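A toy model of that design, not TAS's actual internals: each stage is a thread reading frames from one in-memory queue and writing to the next, with None as an end-of-stream sentinel, so nothing ever hits disk between stages:

```python
import queue
import threading

def run_pipeline(frames, stages):
    """Chain per-frame stage functions through bounded in-memory queues,
    one worker thread per stage. None marks end-of-stream."""
    first = q = queue.Queue(maxsize=8)
    threads = []
    for stage in stages:
        nxt = queue.Queue(maxsize=8)
        def worker(src=q, dst=nxt, fn=stage):  # bind loop vars per thread
            while (item := src.get()) is not None:
                dst.put(fn(item))
            dst.put(None)                      # propagate the sentinel
        t = threading.Thread(target=worker)
        t.start()
        threads.append(t)
        q = nxt
    for f in frames:
        first.put(f)
    first.put(None)
    out = []
    while (item := q.get()) is not None:
        out.append(item)
    for t in threads:
        t.join()
    return out

result = run_pipeline([1, 2, 3], [lambda x: x * 2, lambda x: x + 1])
# result == [3, 5, 7]
```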

  1. Input · Local file, batch list or YouTube URL. (mp4 · mkv · mov · webm)
  2. Deduplication · Drops redundant cels before the GPU ever sees them. (SSIM · MSE · FlowNetS · VMAF)
  3. Interpolation · Synthesises in-between frames from neighbouring keyframes. (RIFE 4.6 → 4.25-heavy · GMFSS · Elexor)
  4. Upscaling · 2× with anime-tuned super-resolution architectures. (ShuffleCugan · SPAN · Compact · OpenProteus · AniScale 2)
  5. Restoration · Denoise, deblock, dejpeg, sharpen and darken line art — chainable. (SCUNet · NAFNet · Anime1080Fixer · FastLineDarken)
  6. Encoded Output · FFmpeg hand-off with animation-tuned x264/x265 or NVENC. (x264_animation_10bit · x265 · NVENC · AV1 · ProRes)

06 · Supported models

Pick the model. Pick the backend.

Real model names, real backends. Swap any weight in via --custom_model (Spandrel on CUDA, ONNX on TensorRT / DirectML / OpenVINO).

Interpolation

RIFE family · GMFSS · Elexor
RIFE 4.22 · RIFE 4.22-lite · RIFE 4.25 · RIFE 4.25-lite · RIFE 4.25-heavy · RIFE 4.6 · RIFE 4.15 · RIFE 4.17 · RIFE 4.18 · RIFE 4.20 · RIFE 4.21 · GMFSS · Rife_Elexor

Upscaling

SPAN · Compact · ShuffleCugan · AniScale · Proteus
ShuffleCugan · SPAN · Compact · UltraCompact · SuperUltraCompact · OpenProteus · AniScale 2 · RTMOSR · Saryn · Gauss · Adore · Fallin Soft · Fallin Strong · AnimeSR

Restoration & Depth

Denoise · Line art · Depth Anything V2
SCUNet · NAFNet · DPIR · Anime1080Fixer · FastLineDarken · GaterV3 · DeH264 (Real-PLKSR) · DeJpeg (Real-PLKSR) · HurrDeblur · Depth Anything V2 (Small / Base / Large / Giant) · Distill Small v2 · Distill Base v2 · Distill Large v2

07 · Pipeline in 30 seconds

One invocation. Whole chain.

Every pass is a flag. Chain them. Auto-enable is real — specifying a *_method turns the feature on for you.

main.py · full anime restoration pipeline
$ python main.py --input anime_episode.mkv \
    --upscale_method shufflecugan-tensorrt \
    --interpolate_method rife4.25-tensorrt --ensemble \
    --restore_method anime1080fixer-tensorrt \
    --dedup_method ssim-cuda \
    --sharpen --sharpen_sens 30 \
    --encode_method x264_animation_10bit --bit_depth 10bit
 resolving backends · tensorrt engines cached
 stage: dedup        · ssim-cuda
 stage: interpolate  · rife4.25-tensorrt (ensemble)
 stage: upscale      · shufflecugan-tensorrt (2×)
 stage: restore      · anime1080fixer-tensorrt + sharpen
 encoder · libx264 · animation · 10-bit
✓ done in 12m 47s

08 · FAQ

The honest answers.

Does TAS work with my GPU?

If you have an RTX 20/30/40/50 card, use the CUDA + TensorRT build for the best speed. GTX 16 series works on CUDA. GTX 10 series (Pascal) and AMD cards run on the DirectML backend. Intel iGPUs and dGPUs run on OpenVINO. Pick the matching -tensorrt, -directml or -openvino model suffix.

Can I use my own models?

Yes. The --custom_model flag accepts .pt, .pth, .ckpt and .safetensors for the CUDA compact path via Spandrel, and .onnx for the -directml, -openvino and -tensorrt variants. Just match the backend suffix to the file format.
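That routing rule reduces to a file-extension check. A hypothetical sketch of the dispatch (helper name and error messages are illustrative, not TAS's real code):

```python
SPANDREL_EXTS = {".pt", ".pth", ".ckpt", ".safetensors"}

def backend_for_custom_model(path, method_suffix):
    """Route a --custom_model file to a loader family: torch-style
    weights go through Spandrel on the CUDA path, .onnx goes to the
    -tensorrt / -directml / -openvino paths."""
    ext = "." + path.rsplit(".", 1)[-1].lower()
    if ext in SPANDREL_EXTS:
        if method_suffix != "cuda":
            raise ValueError(f"{ext} weights need the CUDA (Spandrel) path")
        return "spandrel-cuda"
    if ext == ".onnx":
        if method_suffix not in ("tensorrt", "directml", "openvino"):
            raise ValueError(".onnx needs -tensorrt, -directml or -openvino")
        return f"onnx-{method_suffix}"
    raise ValueError(f"unsupported weight format: {ext}")
```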

Do I need Python installed?

Not for the prebuilt Windows release — everything you need is bundled. On first run TAS will fetch FFmpeg automatically if it isn't already present. You only need Python if you're cloning the repo to develop, in which case pip install -r requirements.txt plus one of the extra-requirements-* profiles will get you going.

Is TAS really free and open source?

Yes — the source is on GitHub under the AGPL-3.0 license. Fork it, modify it, redistribute it — just keep any network-service derivatives open under the same license.

Will there be new features?

Yes — new models, backends and restoration passes land regularly. The standalone Windows app and the After Effects panel are both under active development. Follow releases on GitHub or join the Discord for nightly builds.

09 · Ship it

READY TO ENHANCE?

Open-source. No paywall. Just frames.

AGPL-3.0 · Windows · Built by the community, for the community.