Reyhane Askari

I'm a PhD candidate at Mila, Université de Montréal. I work under the supervision of Ioannis Mitliagkas (UdeM) and Nicolas Le Roux (Google Brain Montreal). Prior to my PhD, I received my Master's in Computer Science from Université de Montréal and worked for two years at Mila on several open-source deep learning software projects, such as Theano, Oríon, and Cortex. I also did my Bachelor's in Computer Engineering at Amirkabir University of Technology (Tehran Polytechnic).

My research interests lie at the intersection of machine learning, large-scale optimization, and game theory. I am a recipient of the 2020 Borealis AI Graduate Fellowship.

I co-organized the Bridging Game Theory and Deep Learning workshop at NeurIPS 2019. I also co-organize Mila's Deep Learning Theory Group and MTL MLOpt, a bi-weekly meeting in Montreal that brings together researchers from Université de Montréal, McGill, Google Brain, Samsung SAIT AI Lab (SAIL) Montreal, Facebook AI Research Montréal (FAIR), and Microsoft Research Montréal.

Twitter  /  Email  /  CV  /  GitHub  /  Google Scholar

Projects

LEAD: Least-Action Dynamics for Min-Max Optimization


We propose LEAD (Least-Action Dynamics), a second-order optimizer for min-max games derived from the principle of least action in physics. We also provide a convergence analysis of LEAD on quadratic min-max games using Lyapunov theory.
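For context, a quadratic min-max game (the setting of the convergence analysis) can be written in one standard notation as follows; the symbols here are a generic choice, not necessarily the paper's:

```latex
\min_{x \in \mathbb{R}^n} \; \max_{y \in \mathbb{R}^m} \;
f(x, y) \;=\; \tfrac{1}{2}\, x^{\top} A\, x \;+\; x^{\top} B\, y \;-\; \tfrac{1}{2}\, y^{\top} C\, y
```

with A and C symmetric (and, in the convex-concave case, positive semi-definite).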

Negative Momentum for Improved Game Dynamics


In this paper, we analyze gradient-based methods with momentum on simple games. We prove that alternating updates are more stable than simultaneous updates. We then show, both theoretically and empirically, that alternating gradient updates with a negative momentum term achieve convergence not only on a difficult toy adversarial problem but also on notoriously difficult-to-train saturating GANs.
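As a toy illustration, here is a minimal Python sketch of alternating gradient updates with a negative momentum coefficient on the bilinear game min_x max_y x·y; the hyperparameters are illustrative choices, not the paper's tuned values:

```python
# Bilinear game: min_x max_y f(x, y) = x * y.
# Simultaneous gradient descent-ascent spirals away from the
# equilibrium (0, 0) on this game; the sketch below uses
# alternating updates with a *negative* momentum coefficient.
# Hyperparameters are illustrative, not tuned.

eta, beta = 0.1, -0.5    # step size, (negative) momentum coefficient
x, y = 1.0, 1.0          # min-player and max-player parameters
vx, vy = 0.0, 0.0        # momentum buffers

for _ in range(1000):
    vx = beta * vx + y   # grad_x f(x, y) = y
    x -= eta * vx        # x-player descends
    vy = beta * vy + x   # grad_y f(x, y) = x, at the fresh x (alternating)
    y += eta * vy        # y-player ascends

print(x, y)              # both coordinates spiral in toward (0, 0)
```

The point of the sketch is the interplay of the two choices: on this game, alternation alone only keeps the iterates bounded, and it is the negative momentum term that turns the rotation into a contraction.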

Oríon: Experiment Version Control for Efficient Hyperparameter Optimization


Oríon is a black-box optimization tool, currently in development, designed to adapt to the workflow of machine learning researchers with minimal friction.
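A rough sketch of the intended workflow, based on Oríon's documented interface: a training script exposes its hyperparameters as command-line arguments with priors attached via Oríon's `~'prior(...)'` syntax, and reports a final objective for Oríon to minimize. The experiment name and the training routine below are placeholders:

```python
# train.py -- sketch of a training script wired into Oríon.
# Launched for hyperparameter search with, e.g.:
#   orion hunt -n my-exp python train.py --lr~'loguniform(1e-5, 1.0)'
import argparse

from orion.client import report_objective


def train_and_evaluate(lr):
    # Stand-in for a real training loop; returns a validation error.
    return (lr - 1e-2) ** 2


parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=1e-3)
args = parser.parse_args()

valid_error = train_and_evaluate(lr=args.lr)
report_objective(valid_error)  # Oríon minimizes the reported objective
```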

Auto Encoders in PyTorch


A quick PyTorch implementation of auto encoders, denoising auto encoders, and variational auto encoders.
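For a sense of the repository's contents, here is a minimal sketch of the plain auto encoder variant (layer sizes and the fake batch are illustrative):

```python
import torch
import torch.nn as nn


class AutoEncoder(nn.Module):
    """Plain auto encoder: compress to a bottleneck, then reconstruct."""

    def __init__(self, dim_in=784, dim_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(dim_hidden, dim_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))


model = AutoEncoder()
x = torch.rand(16, 784)                     # fake batch of flattened images
loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
loss.backward()
```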

