Shashank Sule

I’m Shashank, a sixth-year PhD student in the Applied Mathematics, Applied Statistics and Scientific Computation program at the University of Maryland, College Park, jointly advised by Dr. Wojciech Czaja and Dr. Maria Cameron. I use tools from applied analysis to design and analyse algorithms for manifold learning, deep learning, and explainability, with applications in molecular dynamics and medical imaging. Recently I’ve been interested in equivariant graph neural networks, neural collapse, and bilevel optimization. Before Maryland, I graduated from Amherst College with a degree in mathematics.
Find my latest CV, find me on LinkedIn, or email me at ssule25[at]umd[dot]edu.
Selected publications
- Learning collective variables that preserve transition rates (2025). Shashank Sule, Arnav Mehta, Maria Cameron. In review at SIAM Multiscale Modeling and Simulation. arXiv.
- Sharp estimates for target measure diffusion maps and applications to the committor problem (2025). Shashank Sule, Luke Evans, Maria K. Cameron. Applied and Computational Harmonic Analysis, Volume 79, 101803, ISSN 1063-5203.
- On the limits of neural network explainability via descrambling (2025). Shashank Sule, Richard G. Spencer, Wojciech Czaja. Applied and Computational Harmonic Analysis, Volume 79, 101793, ISSN 1063-5203.
Recent posts
Collective variable discovery
posted on 5 Jul 2025
Last month marked the culmination of two major projects on collective variable discovery, a fundamental interdisciplinary problem in drug discovery, computational statistical physics, and stochastic processes. From a probabilistic perspective, the problem asks: how can we map a stochastic process to low dimensions while preserving its statistics? In two case-study-style papers on the butane molecule and Lennard-Jones clusters, we provide some answers by resorting to quantitative coarse-graining theory and proposing algorithms that use some of my favourite tools from geometric data science.
Two new papers
posted on 12 Feb 2025
Two papers I’ve been working on are out. One tells a story about how you can combine classical and deep learning methods for magnetic resonance imaging in the brain. The other shows how to use the Neumann eigenvectors of subgraphs for dimension reduction, with nearly isometric embeddings!
To access older posts, click here.