MotionAE

Project Preview

MotionAE: Motion Autoencoder on HumanML3D

A minimal autoencoder trained on HumanML3D for motion compression and reconstruction. This page presents the architecture, validation metrics (MSE/MAE/L2), and example visualizations, with links to the code and environment setup.

Issam Alzouby

UNC Charlotte

Figure: Original vs reconstructed joint trajectories (teaser).
Figure: Latent-space preview (e.g., t-SNE/UMAP scatter).

Abstract

MotionAE is a lightweight autoencoder for human motion sequences built for clarity and speed. Trained on HumanML3D (SMPL-H, 22 joints, 196 frames, 263-D features), MotionAE learns a compact latent representation that reconstructs trajectories with low error while remaining straightforward to extend to VAE/VQ-VAE variants.

Contributions

  • Simple, reproducible baseline for motion reconstruction on HumanML3D.
  • Clean PyTorch code with environment.yml and scripts to train/validate/visualize.
  • Ready-made figures (joint curves, latent previews) and a one-page project site with light/dark mode.

Architecture

A standard MLP autoencoder with the following configuration:

  • Input: 196 × 263 features, flattened
  • Latent: 512-D
  • Loss: MSE (reconstruction)
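
For concreteness, here is a minimal PyTorch sketch of the encoder/decoder described above, assuming the flattened 196 × 263 input, a 512-D latent, and an MSE reconstruction loss. The hidden width (2048) and layer count are illustrative guesses, not necessarily what MotionAE.py uses.

Python — Architecture sketch (illustrative)
import torch
import torch.nn as nn

SEQ_LEN, FEAT_DIM, LATENT_DIM = 196, 263, 512
INPUT_DIM = SEQ_LEN * FEAT_DIM  # 51,548 values per flattened sequence

class MotionAE(nn.Module):
    def __init__(self, input_dim=INPUT_DIM, latent_dim=LATENT_DIM, hidden_dim=2048):
        super().__init__()
        # Encoder: flattened motion features -> compact latent code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )
        # Decoder: latent code -> reconstructed motion features
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):                    # x: (batch, 196, 263)
        z = self.encoder(x.flatten(1))       # latent: (batch, 512)
        recon = self.decoder(z)              # (batch, 51548)
        return recon.view_as(x), z

model = MotionAE()
x = torch.randn(4, SEQ_LEN, FEAT_DIM)
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)      # MSE reconstruction loss
print(loss.item(), z.shape)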

Validation Metrics

From aevalidate.py (averaged over the held-out split).

Metric    Value    Notes
MSE       —        Mean Squared Error (per-sample mean)
MAE       —        Mean Absolute Error (per-sample mean)
L2        —        Euclidean distance

Tip: aevalidate.py writes assets/metrics.json, which the page fetches to auto-fill these numbers.
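
For reference, a sketch of how these per-sample metrics can be computed and exported; the exact reductions in the released aevalidate.py may differ.

Python — Metrics sketch (illustrative)
import json
import torch

def motion_metrics(recon: torch.Tensor, target: torch.Tensor) -> dict:
    """recon/target: (batch, 196, 263). Per-sample metrics, averaged over the batch."""
    diff = recon - target
    mse = diff.pow(2).mean(dim=(1, 2))       # Mean Squared Error per sample
    mae = diff.abs().mean(dim=(1, 2))        # Mean Absolute Error per sample
    l2 = diff.flatten(1).norm(dim=1)         # Euclidean distance per sample
    return {"MSE": mse.mean().item(), "MAE": mae.mean().item(), "L2": l2.mean().item()}

# Write assets/metrics.json so the page can fetch and display the numbers.
metrics = motion_metrics(torch.randn(8, 196, 263), torch.randn(8, 196, 263))
with open("assets/metrics.json", "w") as f:
    json.dump(metrics, f, indent=2)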

Results & Visualizations

The figures below are generated into assets/ by visualize.py.

Figure: Joint 0 — X position vs time (original vs reconstructed).
Figure: Joint 5 — Y position vs time.
Figure: Joint 10 — Z position vs time.
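
A sketch of how one such figure can be produced with matplotlib. Which feature index corresponds to "joint 0, X position" depends on the HumanML3D feature layout, so the channel used here is only a stand-in.

Python — Trajectory plot sketch (illustrative)
import numpy as np
import matplotlib.pyplot as plt

def plot_channel(original, recon, channel, title, out_path):
    """original/recon: (196, 263) arrays; plots one feature channel over time."""
    t = np.arange(original.shape[0])
    plt.figure(figsize=(8, 3))
    plt.plot(t, original[:, channel], label="original")
    plt.plot(t, recon[:, channel], "--", label="reconstructed")
    plt.xlabel("frame")
    plt.ylabel("feature value")
    plt.title(title)
    plt.legend()
    plt.tight_layout()
    plt.savefig(out_path, dpi=150)
    plt.close()

# Example with synthetic data; visualize.py uses real validation sequences.
orig = np.random.randn(196, 263)
rec = orig + 0.05 * np.random.randn(196, 263)
plot_channel(orig, rec, channel=4, title="Joint 0: X position vs time",
             out_path="assets/joint0_x.png")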

Pretrained Weights

Download the pretrained MotionAE checkpoint from Google Drive, either in the browser or via gdown as shown below. The Drive link also appears in the hero badges at the top of the page.

Terminal — Download weights
$ pip install gdown
$ mkdir -p weights && cd weights
$ gdown --fuzzy "https://drive.google.com/file/d/1dYFW_9yYElH_7etZhHoHO2bsn7DB_cQI/view?usp=sharing"
# (optional) verify checksum if provided
$ cd ..
# Example load (adjust path to your script):
$ python -c "import torch; m=torch.load('weights/autoencoder_humanml3d.pth', map_location='cpu'); print(type(m))"
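
Beyond the quick type check above, a hedged loading sketch: it assumes MotionAE.py defines a MotionAE module class and that the .pth file holds either a state_dict or a full pickled model; adjust to the actual layout of the released code.

Python — Load checkpoint (sketch)
import torch
from MotionAE import MotionAE    # assumed class name; adjust to the actual script

ckpt = torch.load("weights/autoencoder_humanml3d.pth", map_location="cpu")

model = MotionAE()
if isinstance(ckpt, dict):        # state_dict-style checkpoint
    model.load_state_dict(ckpt)
else:                             # full pickled module
    model = ckpt
model.eval()

with torch.no_grad():
    x = torch.randn(1, 196, 263)  # one dummy motion clip
    out = model(x)                # output structure depends on the forward signature
print(type(out))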

Reproduce

Terminal — Create env
$ conda env create -f environment.yml
$ conda activate momask
$ python MotionAE.py      # trains & saves autoencoder_humanml3d.pth
$ python aevalidate.py    # prints MSE / MAE / L2 and writes assets/metrics.json
$ python visualize.py     # generates plots into assets/
Terminal — GitHub Pages
# Option A: GitHub Pages (static)
# Put index.html and assets/ in your repo root
# Settings → Pages → Source: main /(root)

$ git add .
$ git commit -m "Deploy MotionAE page"
$ git push origin main
Terminal — Vercel / Netlify
# Option B: Vercel (CLI)
$ npm i -g vercel
$ vercel deploy

# Option C: Netlify (drag-and-drop)
# Drop the folder in the Netlify dashboard

BibTeX

@misc{alzouby2025motionae,
  title  = {MotionAE: A Minimal Motion Autoencoder Baseline},
  author = {Issam Alzouby},
  year   = {2025},
  url    = {https://ialzouby.github.io/Motion-AutoEncoder/}
}