Words in Motion: Extracting Interpretable Control Vectors for Motion Transformers
Ömer Şahin Taş*, Royden Wagner*
ICLR, 2025
arXiv / code / poster
Control vectors enable modifying the hidden states of transformer models. We show that a high degree of neural collapse and concept-based interpretability are necessary for fitting control vectors. Furthermore, we optimize them using sparse autoencoders.
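At its core, control-vector steering adds a learned direction to a model's hidden states. A minimal numpy sketch of that mechanism; all names, shapes, and the scaling factor are hypothetical, not the paper's implementation:

```python
import numpy as np

def steer(hidden_states, control_vector, alpha=1.0):
    """Shift every token's hidden state along a control direction."""
    return hidden_states + alpha * control_vector  # broadcasts over tokens

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))   # (tokens, hidden_dim), hypothetical sizes
v = rng.standard_normal(8)        # e.g. a "faster" minus "slower" direction
h_fast = steer(h, v, alpha=2.0)   # steered hidden states
```

Applied inside a forward pass, such a vector nudges the model's representation of a concept (e.g. speed) without retraining.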
JointMotion: Joint Self-supervision for Joint Motion Prediction
Royden Wagner*, Ömer Şahin Taş*, Marvin Klemp, Carlos Fernandez Lopez
CoRL, 2024
arXiv / code / video / poster
JointMotion connects scene-level motion and environment embeddings via a non-contrastive alignment objective, then applies masked polyline modeling to unify global context and instance-level representations.
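A non-contrastive alignment objective of this kind can be illustrated with a negative-cosine-similarity loss over paired embeddings. A hedged sketch; the actual JointMotion objective may differ in details such as projection heads or stop-gradients:

```python
import numpy as np

def alignment_loss(z_motion, z_env):
    """Negative mean cosine similarity between paired embeddings;
    no negative pairs are required (non-contrastive)."""
    a = z_motion / np.linalg.norm(z_motion, axis=1, keepdims=True)
    b = z_env / np.linalg.norm(z_env, axis=1, keepdims=True)
    return -float(np.mean(np.sum(a * b, axis=1)))

rng = np.random.default_rng(0)
z1 = rng.standard_normal((16, 32))  # hypothetical scene-level embeddings
loss = alignment_loss(z1, z1)       # identical pairs give a loss near -1.0
```

Minimizing this loss pulls each motion embedding toward its paired environment embedding without contrasting against other samples in the batch.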
RedMotion: Motion Prediction via Redundancy Reduction
Royden Wagner, Ömer Şahin Taş, Marvin Klemp, Carlos Fernandez Lopez, Christoph Stiller
TMLR, 2024
arXiv / code
RedMotion fuses local road features into a global embedding via an internal decoder, then applies self-supervised redundancy reduction across augmented views to unify local and global road representations.
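Redundancy reduction across augmented views is commonly implemented Barlow-Twins-style, by driving the cross-correlation matrix of the two views' embeddings toward the identity. A sketch under that assumption, not RedMotion's exact loss:

```python
import numpy as np

def redundancy_reduction_loss(z_a, z_b, lam=5e-3):
    """Push the cross-correlation of two augmented views' embeddings
    toward the identity: diagonal -> 1 (invariance), off-diagonal -> 0
    (redundancy reduction)."""
    n, _ = z_a.shape
    # standardize each embedding dimension across the batch
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = z_a.T @ z_b / n                              # (d, d) cross-correlation
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    off_diag = np.sum((c - np.diag(np.diag(c))) ** 2)
    return float(on_diag + lam * off_diag)
```

The diagonal term makes matched dimensions agree across views, while the off-diagonal term decorrelates distinct dimensions, so the embedding carries less redundant information.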