- Carnegie
- radevs@rpi.edu
- https://orcid.org/0000-0002-6702-9559
Research
My research follows two streams. The first focuses on developing new Bayesian methods built on the emerging generation of generative neural networks. The second focuses on building and applying computational models of complex processes (e.g., cognition, disease outbreaks) to gain insights from data (and sometimes Big Data). These two streams converge in the BayesFlow framework for Bayesian inference with modern deep learning (GitHub)(project page), of which I am the core developer and maintainer.
In the modern computational era, Bayesian inference allows us to estimate, validate, and draw substantive conclusions from high-fidelity probabilistic models. Typical problems in a Bayesian analysis are:
- Estimate the posterior distribution of hidden parameters from noisy data (i.e., inverse inference; formalized below);
- Compare competing models in terms of their complexity and predictive performance;
- Emulate the observable behavior of a system or predict future behavior under uncertainty.
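For concreteness, the first of these tasks amounts to computing Bayes' rule (in generic notation, where θ denotes the hidden parameters and x the observed data):

```latex
% Posterior over hidden parameters \theta given observed data x
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, \mathrm{d}\theta'}
```

The integral in the denominator (the marginal likelihood) is rarely available in closed form. It is also the very quantity needed for comparing models by predictive performance, so both tasks run into the same computational wall.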
However, despite their theoretical appeal and utility, Bayesian workflows face severe computational bottlenecks: analyzing even a single data set can consume days of computation, so model validation and calibration quickly become infeasible.
BayesFlow addresses these challenges by training custom generative neural networks on model simulations. Researchers can then reuse and share these networks for any subsequent application of the model. Since the trained networks perform inference almost instantaneously (typically well below one second), the upfront training cost amortizes quickly. For instance, amortized inference allows us to test a model's ability to recover its parameters, or to assess its uncertainty calibration across different data set sizes, in a matter of seconds, even though this may require estimating thousands of posterior distributions.
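To make the idea concrete, here is a minimal sketch of amortized posterior estimation in PyTorch. It is not the BayesFlow API; the simulator, network architecture, and Gaussian posterior approximation are toy assumptions chosen for brevity:

```python
# Minimal sketch of amortized posterior estimation (NOT the BayesFlow API):
# train a conditional density network once on simulations, then reuse it
# for near-instant inference on any new data set from the same model.
import torch
import torch.nn as nn

# Toy simulator: theta ~ N(0, 1); each data set x holds n_obs noisy draws of theta.
def simulate(batch_size, n_obs=50):
    theta = torch.randn(batch_size, 1)
    x = theta.unsqueeze(1) + torch.randn(batch_size, n_obs, 1)
    return theta, x

# The network maps a whole data set to a Gaussian approximation q(theta | x).
class AmortizedPosterior(nn.Module):
    def __init__(self):
        super().__init__()
        self.summary = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 32))
        self.head = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        s = self.summary(x).mean(dim=1)  # permutation-invariant summary of the data set
        mu, log_sigma = self.head(s).chunk(2, dim=-1)
        return mu, log_sigma.exp()

net = AmortizedPosterior()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Upfront training on simulations only: the one-time cost that amortizes later.
for _ in range(2000):
    theta, x = simulate(128)
    mu, sigma = net(x)
    loss = -torch.distributions.Normal(mu, sigma).log_prob(theta).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference on a new "observed" data set is a single forward pass.
_, x_obs = simulate(1)
with torch.no_grad():
    mu, sigma = net(x_obs)
print(f"posterior approximation: N({mu.item():.2f}, {sigma.item():.2f})")
```

Because inference is a single forward pass, repeating it for thousands of simulated data sets (e.g., to check parameter recovery or uncertainty calibration) takes seconds, while the training loop above is the upfront cost that amortizes.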
If you are a student who wants to do research at the frontier of probabilistic modeling, drop me an email or simply stop by my office.
Teaching
I teach courses revolving around computational modeling, probabilistic inference, and statistics.
- Thursday 2:00 - 3:00 PM
COGS 6960 / PSYC 4960: Bayesian Data Analysis
Description: Bayesian statistics provides a principled framework for inference and uncertainty quantification. The course builds a strong foundation in the basic principles and methods of Bayesian statistics and probabilistic modeling, introduces students to the vast landscape of computational methods for Bayesian inference, and enables them to apply these methods to challenging real-world problems. It includes multiple hands-on sessions and applied problems using the popular R programming language.
Recognition
- More expressive amortized Bayesian inference via joint learning and self-consistency? (2023). Talk at the One World Approximate Bayesian Computation (ABC) Seminar. Virtual event (Department of Statistics, Warwick, UK).
- JANA: Jointly amortized neural approximation of complex Bayesian models. (2023). Spotlight presentation and poster at the 39th Conference on Uncertainty in Artificial Intelligence (UAI 2023). Pittsburgh, PA, USA.
- Principled amortized Bayesian inference with deep learning. (2023). Conference workshop at MathPsych/ICCM/EMPG 2023. Amsterdam, Netherlands.
- Deep learning for cognitive modeling. (2023). Symposium organized at MathPsych/ICCM/EMPG 2023. Amsterdam, Netherlands.
- Compressing Bayesian inference with information maximization. (2023). Talk at MathPsych/ICCM/EMPG 2023. Amsterdam, Netherlands.
- I simulate, therefore I understand? Simulacrum and explanation. (2023). Talk at the Conference on Interdisciplinary Research in Philosophy and Psychology (AG Philosophie und Psychologie). Cologne, Germany.
- One framework to learn them all: Amortizing Bayes’ rule. (2022). Talk at the Meeting of the European Mathematical Psychology Group (EMPG 2022). Rovereto (TN), Italy.
- BayesFlow: New advances from the frontier of simulation-based inference. (2022). Talk at the SIAM Conference on Uncertainty Quantification (UQ 2022). Atlanta, Georgia, USA.
- BayesFlow: Scalable amortized Bayesian inference with invertible networks. (2020). Poster at NeurIPS Europe Meetup on Bayesian Deep Learning. Virtual event.
- Amortized Bayesian inference for models of cognition. (2020). Talk and conference paper at MathPsych/ICCM 2020. Virtual event.
Publications
Recent Preprints
- Pogorelyuk, L., & Radev, S. T. (2024). Aligning Motion-Blurred Images Using Contrastive Learning on Overcomplete Pixels. arXiv preprint arXiv:2410.07410. (arXiv)
- Habermann, D., Schmitt, M., Kühmichel, L., Bulling, A., Radev, S. T., & Bürkner, P. C. (2024). Amortized Bayesian Multilevel Models. arXiv preprint arXiv:2408.13230. (arXiv)
- Müller, J., Kühmichel, L., Rohbeck, M., Radev, S. T., & Köthe, U. (2024). Towards context-aware domain generalization: Understanding the benefits and limits of marginal transfer learning. arXiv preprint arXiv:2312.10107. (arXiv)
- Schmitt, M., Radev, S. T., & Bürkner, P. C. (2023). Fuse it or lose it: Deep fusion for multimodal simulation-based inference. arXiv preprint arXiv:2311.10671. (arXiv)
Recent Conference Proceedings
- Schmitt, M., Pratz, V., Köthe, U., Bürkner, P. C., & Radev, S. T. (2024). Consistency models for scalable and fast simulation-based inference. NeurIPS. (arXiv)(OpenReview)
- Schmitt, M., Li, C., Vehtari, A., Acerbi, L., Bürkner, P. C., & Radev, S. T. (2024). Amortized Bayesian Workflow. Bayesian Decision-making and Uncertainty Workshop, NeurIPS. (arXiv)
- Schmitt, M., Habermann, D., Bürkner, P. C., Köthe, U., & Radev, S. T. (2024). Leveraging self-consistency for data-efficient amortized Bayesian inference. Proceedings of the 41st International Conference on Machine Learning (ICML), 43723-43741. (arXiv)(OpenReview)
- Schmitt, M., Habermann, D., Bürkner, P. C., Köthe, U., & Radev, S. T. (2023). Leveraging self-consistency for data-efficient amortized Bayesian inference. UniReps Workshop, NeurIPS, New Orleans. (arXiv)
- Radev, S. T., Schmitt, M., Pratz, V., Picchini, U., Köthe, U., & Bürkner, P. C. (2023). JANA: Jointly amortized neural approximation of complex Bayesian models. Proceedings of the 39th Conference on Uncertainty in Artificial Intelligence (UAI), 216, 1695-1706. (arXiv)(PMLR)
- Schmitt, M., Bürkner, P. C., Köthe, U., & Radev, S. T. (2023). Detecting model misspecification in amortized Bayesian inference with neural networks. Proceedings of the 45th German Conference on Pattern Recognition (GCPR), 1-9. (arXiv)(TBA)
Recent Journal Papers
- Schumacher, L., Schnuerch, M., Voss, A., & Radev, S. T. (2024). Validation and comparison of non-stationary cognitive models: A diffusion model application. Computational Brain & Behavior.
- Prillinger, K., de Lara, G. A., Klöbl, M., Lanzenberger, R., Plener, P. L., Poustka, L., ... & Radev, S. T. (2024). Multisession tDCS combined with intrastimulation training improves emotion recognition in adolescents with autism spectrum disorder. Neurotherapeutics, e00460.
- Elsemüller, L., Schnuerch, M., Bürkner, P. C., & Radev, S. T. (2024). A deep learning method for comparing Bayesian hierarchical models. Psychological Methods.
- Bockting, F., Radev, S. T., & Bürkner, P. C. (2024). Simulation-based prior knowledge elicitation for parametric Bayesian models. Scientific Reports, 14(1), 17330.
- Bürkner, P. C., Scholz, M., & Radev, S. T. (2023). Some models are useful, but how do we know which ones? Towards a unified Bayesian model taxonomy. Statistics Surveys, 17, 216-310.
- Schumacher, L., Bürkner, P. C., Voss, A., Köthe, U., & Radev, S. T. (2023). Neural superstatistics for Bayesian estimation of dynamic cognitive models. Scientific Reports, 13, 13778.
- von Krause*, M., Radev*, S. T., & Voss, A. (2022). Mental speed is high until age 60 as revealed by analysis of over a million participants. Nature Human Behaviour, 6(5), 700-708.