Paul Voigtlaender
Research Scientist at Google
Verified email at google.com
Title · Cited by · Year
MOTS: Multi-Object Tracking and Segmentation
P Voigtlaender, M Krause, A Osep, J Luiten, BBG Sekar, A Geiger, ...
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019
684* · 2019
Siam R-CNN: Visual Tracking by Re-Detection
P Voigtlaender, J Luiten, PHS Torr, B Leibe
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2020
654 · 2020
FEELVOS: Fast End-to-End Embedding Learning for Video Object Segmentation
P Voigtlaender, Y Chai, F Schroff, H Adam, B Leibe, LC Chen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019
460 · 2019
Online adaptation of convolutional neural networks for video object segmentation
P Voigtlaender, B Leibe
British Machine Vision Conference (BMVC), 2017
448 · 2017
PReMVOS: Proposal-generation, refinement and merging for video object segmentation
J Luiten, P Voigtlaender, B Leibe
Asian Conference on Computer Vision, 2018
339 · 2018
Handwriting Recognition with Large Multidimensional Long Short-Term Memory Recurrent Neural Networks
P Voigtlaender, P Doetsch, H Ney
International Conference on Frontiers in Handwriting Recognition (ICFHR …, 2016
274 · 2016
A comprehensive study of deep bidirectional LSTM RNNs for acoustic modeling in speech recognition
A Zeyer, P Doetsch, P Voigtlaender, R Schlüter, H Ney
2017 IEEE international conference on acoustics, speech and signal …, 2017
216 · 2017
Iteratively Trained Interactive Segmentation
S Mahadevan, P Voigtlaender, B Leibe
British Machine Vision Conference (BMVC), 2018
144 · 2018
Online adaptation of convolutional neural networks for the 2017 DAVIS challenge on video object segmentation
P Voigtlaender, B Leibe
The 2017 DAVIS Challenge on Video Object Segmentation-CVPR Workshops 5 (6), 2017
85 · 2017
RETURNN: The RWTH extensible training framework for universal recurrent neural networks
P Doetsch, A Zeyer, P Voigtlaender, I Kulikov, R Schlüter, H Ney
2017 IEEE International Conference on Acoustics, Speech and Signal …, 2017
81 · 2017
STEP: Segmenting and Tracking Every Pixel
M Weber, J Xie, M Collins, Y Zhu, P Voigtlaender, H Adam, B Green, ...
arXiv preprint arXiv:2102.11859, 2021
70 · 2021
Track, then decide: Category-agnostic vision-based multi-object tracking
A Ošep, W Mehner, P Voigtlaender, B Leibe
2018 IEEE International Conference on Robotics and Automation (ICRA), 3494-3501, 2018
70 · 2018
PReMVOS: Proposal-generation, refinement and merging for the YouTube-VOS challenge on video object segmentation 2018
J Luiten, P Voigtlaender, B Leibe
The 1st Large-scale Video Object Segmentation Challenge-ECCV Workshops 1 (2), 6, 2018
46* · 2018
Sequence-discriminative training of recurrent neural networks
P Voigtlaender, P Doetsch, S Wiesler, R Schlüter, H Ney
2015 IEEE International Conference on Acoustics, Speech and Signal …, 2015
44 · 2015
PReMVOS: Proposal-generation, Refinement and Merging for the DAVIS Challenge on Video Object Segmentation 2018
J Luiten, P Voigtlaender, B Leibe
The 2018 DAVIS Challenge on Video Object Segmentation - CVPR Workshops, 2018
38 · 2018
BoltVOS: Box-Level Tracking for Video Object Segmentation
P Voigtlaender, J Luiten, B Leibe
arXiv preprint arXiv:1904.04552, 2019
32 · 2019
BURST: A Benchmark for Unifying Object Recognition, Segmentation and Tracking in Video
A Athar, J Luiten, P Voigtlaender, T Khurana, A Dave, B Leibe, ...
Proceedings of the IEEE/CVF winter conference on applications of computer …, 2023
31 · 2023
Large-Scale Object Mining for Object Discovery from Unlabeled Video
A Osep, P Voigtlaender, J Luiten, S Breuers, B Leibe
International Conference on Robotics and Automation, 2019
28 · 2019
PaLI-3 Vision Language Models: Smaller, Faster, Stronger
X Chen, X Wang, L Beyer, A Kolesnikov, J Wu, P Voigtlaender, B Mustafa, ...
arXiv preprint arXiv:2310.09199, 2023
24 · 2023
Reducing the annotation effort for video object segmentation datasets
P Voigtlaender, L Luo, C Yuan, Y Jiang, B Leibe
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2021
23 · 2021
Articles 1–20