Publications

For the most up-to-date list, check my Google Scholar profile.

Preprints

Preprint, 2024
Ruchit Rawal, Khalid Saifullah, Ronen Basri, David Jacobs, Gowthami Somepalli, Tom Goldstein
Introduces a dataset and benchmark for long-video question answering.

Conference Papers

International Conference on Software Engineering (ICSE), 2025
Ruchit Rawal, Victor-Alexandru Pădurean, Sven Apel, Adish Singla, Mariya Toneva
Explores how natural language representations and different types of hints influence end-user debugging accuracy in AI-assisted programming.

Findings of the Association for Computational Linguistics (ACL), 2024
Ruchit Rawal, Mariya Toneva
A framework to compare NLP models by analyzing their shared invariances to linguistic perturbations, offering insights into model evolution and performance.

Conference on Lifelong Learning Agents (CoLLAs), 2023
Gabriele Merlin, Vedant Nanda, Ruchit Rawal, Mariya Toneva
Pretraining induces transferable invariances, which are retained in shallow layers and compressed during finetuning, shedding light on why pretrained models excel.

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023
Gaurav Kumar Nayak*, Ruchit Rawal*, Anirban Chakraborty
DE-CROP enhances certified robustness of pretrained models with limited data by generating diverse samples and optimizing denoiser training in logit space.

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022
Gaurav Kumar Nayak*, Ruchit Rawal*, Anirban Chakraborty
A novel test-time adversarial defense method that detects and corrects adversarial samples without requiring training data, significantly improving robustness.

Journal Articles

IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024
Gaurav Kumar Nayak, Ruchit Rawal, Inder Khatri, Anirban Chakraborty
A simple yet effective self-distillation approach enhances adversarial robustness in few-shot learning without requiring adversarial samples, achieving significant improvements with minimal computational overhead.