
Aaron Sidford CV

My CV.

I am an assistant professor in the departments of Management Science and Engineering and Computer Science at Stanford University. My research is on the design and theoretical analysis of efficient algorithms and data structures, and focuses on efficient algorithms based on graph theory, convex optimization, and high-dimensional geometry (CV). My broader interests span algorithms, optimization, and numerical analysis. I received my PhD from the department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where I was advised by Professor Jonathan Kelner; an Honorable Mention for the 2015 ACM Doctoral Dissertation Award went to Aaron Sidford of the Massachusetts Institute of Technology and to Siavash Mirarab of the University of Texas at Austin. I regularly advise Stanford students from a variety of departments.

Selected results and paper summaries:

- With Hilal Asi, Yair Carmon, and Arun Jambulapati: in particular, the method achieves nearly linear time for DP-SCO (differentially private stochastic convex optimization) in low-dimension settings.
- "A low-bias, low-cost estimator of the subproblem solution suffices for acceleration!" AISTATS, 2021. [pdf] [talk]
- "Collection of new upper and lower sample complexity bounds for solving average-reward MDPs."
- Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence. FOCS 2022. [pdf] [talk]
- Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems. Best Paper Award.

Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in Õ(√n · L) iterations, each consisting of solving Õ(1) linear systems and additional nearly-linear-time computation.
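As a worked restatement of the preceding bound (a sketch of the arithmetic only; the per-iteration solve cost \(T_{\mathrm{solve}}\) and the use of \(\mathrm{nnz}(A)\) for the input size of the constraint matrix are notational assumptions introduced here, not taken from the paper):

\[
\text{total work} \;=\; \underbrace{\tilde{O}\big(\sqrt{n}\,L\big)}_{\text{iterations}} \cdot \Big(\underbrace{\tilde{O}(1)\cdot T_{\mathrm{solve}}}_{\text{linear-system solves}} \;+\; \underbrace{\tilde{O}\big(\mathrm{nnz}(A)\big)}_{\text{additional nearly linear work}}\Big),
\]

so the per-iteration bottleneck is solving a constant (up to polylogarithmic factors) number of linear systems.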
Highly cited articles (via Google Scholar):

- Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow
- Accelerated methods for nonconvex optimization
- An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations
- A faster cutting plane method and its implications for combinatorial and convex optimization
- Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems
- A simple, combinatorial algorithm for solving SDD systems in nearly-linear time
- Uniform sampling for matrix approximation
- Near-optimal time and sample complexities for solving Markov decision processes with a generative model
- Single pass spectral sparsification in dynamic streams
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
- Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization
- Accelerating stochastic gradient descent for least squares regression
- Efficient inverse maintenance and faster algorithms for linear programming
- Lower bounds for finding stationary points I
- Streaming PCA: matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm
- Convex Until Proven Guilty: dimension-free acceleration of gradient descent on non-convex functions
- Competing with the empirical risk minimizer in a single pass
- Variance reduced value iteration and faster algorithms for solving Markov decision processes
- Robust shift-and-invert preconditioning: faster and more sample efficient algorithms for eigenvector computation
Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)! The winning paper, Efficient Convex Optimization Requires Superlinear Memory (Annie Marsden, Vatsal Sharan, Aaron Sidford, and Gregory Valiant), was co-authored with Stanford professor Gregory Valiant as well as current Stanford student Annie Marsden and alumnus Vatsal Sharan. Faculty Spotlight: Aaron Sidford. We are excited to have Professor Sidford join the Management Science & Engineering faculty starting Fall 2016. A full CV is available here.

One research focus is dynamic algorithms, i.e., data structures that maintain properties of dynamically changing graphs and matrices, such as distances in a graph or the solution of a linear system.

Our algorithm combines the derandomized square graph operation (Rozenman and Vadhan, 2005), which we recently used for solving Laplacian systems in nearly logarithmic space (Murtagh, Reingold, Sidford, and Vadhan, 2017), with ideas from (Cheng, Cheng, Liu, Peng, and Teng, 2015), which gave an algorithm that is time-efficient (while ours is space-efficient).

We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM). Applying this technique, we prove lower bounds for deterministic SFM algorithms.

"Collection of variance-reduced / coordinate methods for solving matrix games, with simplex or Euclidean ball domains."

Stability of the Lanczos Method for Matrix Function Approximation. Cameron Musco, Christopher Musco, and Aaron Sidford. ACM-SIAM Symposium on Discrete Algorithms (SODA) 2018.
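To illustrate the Lanczos method referenced in that last citation, here is a minimal numpy sketch of the textbook algorithm in exact arithmetic (an illustration only, not the finite-precision analysis or the algorithm from the paper): it approximates \(f(A)b\) for symmetric \(A\) by \(\|b\|\,V_k f(T_k) e_1\), where \(T_k\) is the tridiagonal matrix produced after \(k\) iterations.

```python
import numpy as np

def lanczos_fA_b(A, b, k, f):
    """Approximate f(A) @ b for symmetric A using k Lanczos iterations.

    Builds an orthonormal basis V of the Krylov subspace span{b, Ab, ..., A^(k-1) b}
    and a tridiagonal T = V.T @ A @ V, then returns ||b|| * V @ f(T) @ e1.
    """
    n = b.shape[0]
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)                  # beta[j] links v_j and v_{j+1}
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0:            # Krylov space exhausted; truncate
                V, alpha, beta, k = V[:, :j + 1], alpha[:j + 1], beta[:j], j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    # Apply f to T via its eigendecomposition (T is symmetric tridiagonal).
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])     # f(T) @ e1
    return np.linalg.norm(b) * (V @ fT_e1)

# Example: approximate exp(A) @ b on a random symmetric matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2
b = rng.standard_normal(50)
print(np.linalg.norm(lanczos_fA_b(A, b, 20, np.exp)))
```

In exact arithmetic this is as accurate as applying the best degree-\(k\) polynomial approximation of \(f\) to \(A\); the stability question the paper's title refers to is how such guarantees behave in finite precision.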
I am a fifth-and-final-year PhD student in the Department of Management Science and Engineering at Stanford, in the Operations Research group. I am fortunate to be advised by Aaron Sidford. Before attending Stanford, I graduated from MIT in May 2018. My research was supported by the National Defense Science and Engineering Graduate (NDSEG) Fellowship from 2018-2021, and by a Google PhD Fellowship from 2022-2023. Summer 2022: I am currently a research scientist intern at DeepMind in London.

Further results and paper summaries:

- Given an independence oracle, we provide an exact \(O(nr \log r \cdot T_{\mathrm{ind}})\) time algorithm (where \(T_{\mathrm{ind}}\) denotes the cost of one independence-oracle query).
- "Sample complexity for average-reward MDPs?"
- "An attempt to make Monteiro-Svaiter acceleration practical: no binary search and no need to know smoothness parameter!"
- "A new Catalyst framework with relaxed error condition for faster finite-sum and minimax solvers."
- "Faster algorithms for separable minimax, finite-sum and separable finite-sum minimax."
- Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness. Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, and David P. Woodruff. Innovations in Theoretical Computer Science (ITCS) 2018.
- Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG. Neural Information Processing Systems (NeurIPS, Oral).

Sidford's dissertation is titled Iterative Methods, Combinatorial Optimization, and Linear Programming Beyond the Universal Barrier. [pdf]

Notes: Szemerédi Regularity Lemma and Arithmetic Progressions, Annie Marsden, July 2015 (pdf); selected for oral presentation. Annie Marsden and R. Stephen Berry, arXiv pre-print (arXiv | pdf).

"How many \(\epsilon\)-length segments do you need to look at for finding an \(\epsilon\)-optimal minimizer of a convex function on a line?"
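As a baseline for the one-dimensional query-complexity question just quoted (a classical ternary-search sketch under the assumption of exact function evaluations; it is not the algorithm or the bound from the paper), \(O(\log(1/\epsilon))\) evaluations suffice to shrink the search interval to length \(\epsilon\):

```python
def ternary_search_min(f, lo, hi, eps):
    """Locate an eps-approximate minimizer of a convex function f on [lo, hi].

    Convexity implies f(m1) < f(m2) rules out a minimizer in (m2, hi],
    so each round shrinks the interval by a factor of 2/3.
    """
    while hi - lo > eps:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2          # a minimizer cannot lie in (m2, hi]
        else:
            lo = m1          # a minimizer cannot lie in [lo, m1)
    return (lo + hi) / 2

# Example: minimize a convex quadratic on [-10, 10].
print(ternary_search_min(lambda x: (x - 3.14) ** 2, -10, 10, 1e-8))
```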
Aaron Sidford, Assistant Professor of Management Science and Engineering and of Computer Science. Contact information: administrative contact Jackie Nguyen, Administrative Associate. Office: 380-T. Email: sidford@stanford.edu. I often do not respond to emails about applications. MS&E welcomes new faculty member, Aaron Sidford!

I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures. We organize regular talks, and if you are interested and are Stanford affiliated, feel free to reach out (from a Stanford email).

Welcome! This page has information and lecture notes from the course "Introduction to Optimization Theory" (MS&E 213 / CS 269O), which I taught in Fall 2019. Related notes and courses: Aaron Sidford, Introduction to Optimization Theory; Lap Chi Lau, Convexity and Optimization; Nisheeth Vishnoi, Algorithms for Convex Optimization. I used variants of these notes to accompany the courses Introduction to Optimization Theory and Optimization Algorithms.

This is the academic homepage of Yang Liu (I publish under Yang P. Liu). Email: [name]@stanford.edu, where [name] = yangpliu. Selected papers: Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence; Maximum Flow and Minimum-Cost Flow in Almost Linear Time; Online Edge Coloring via Tree Recurrences and Correlation Decay; Fully Dynamic Electrical Flows: Sparse Maxflow Faster Than Goldberg-Rao; Discrepancy Minimization via a Self-Balancing Walk; Faster Divergence Maximization for Faster Maximum Flow. In each setting we provide faster exact and approximate algorithms.

More paper summaries:

- "Team-convex-optimization for solving discounted and average-reward MDPs!"
- "Computing stationary solutions for multi-agent RL is hard: indeed, CCE for simultaneous games and NE for turn-based games are both PPAD-hard." BayLearn, 2019.
- A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions.
- Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More. Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Aaron Sidford, and Adrian Vladu. In Symposium on Foundations of Computer Science (FOCS 2016). DOI: 10.1109/FOCS.2016.69.

We establish lower bounds on the complexity of finding \(\epsilon\)-stationary points (points \(x\) with \(\|\nabla f(x)\| \le \epsilon\)) of smooth, non-convex high-dimensional functions using first-order methods.
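To put such lower bounds in context, here is the classical first-order upper bound they are measured against (a minimal numpy sketch of plain gradient descent, not the constructions from the paper; the example function and constants are assumptions for illustration): for an \(L\)-smooth \(f\), gradient descent with step size \(1/L\) finds a point with \(\|\nabla f(x)\| \le \epsilon\) within \(O(L(f(x_0)-f^{*})/\epsilon^{2})\) iterations.

```python
import numpy as np

def find_stationary_point(grad_f, x0, L, eps, max_iters=100_000):
    """Run gradient descent with step size 1/L until the gradient norm is <= eps.

    For an L-smooth f, each step decreases f by at least ||grad||^2 / (2L),
    so at most O(L * (f(x0) - f*) / eps^2) iterations are needed.
    """
    x = x0.astype(float)
    for _ in range(max_iters):
        g = grad_f(x)
        if np.linalg.norm(g) <= eps:
            return x
        x = x - g / L
    return x

# Example on the smooth non-convex function f(x) = sum(x^2) + cos(sum(x)),
# whose gradient is 2x - sin(sum(x)) * ones; its smoothness is at most 2 + dim.
grad = lambda x: 2 * x - np.sin(np.sum(x))
x = find_stationary_point(grad, np.ones(10), L=12.0, eps=1e-6)
print(np.linalg.norm(grad(x)))
```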
Winter 2020: teaching assistant for EE364a: Convex Optimization I, taught by John Duchi. Fall 2018 and Fall 2019: teaching assistant for CS265/CME309: Randomized Algorithms and Probabilistic Analysis, taught by Greg Valiant.

Teaching: Optimization and Algorithmic Paradigms (CS 261): Winter '23. Optimization Algorithms (CS 369O / CME 334 / MS&E 312): Fall '22. Discrete Mathematics and Algorithms (CME 305 / MS&E 315): Winter '22, '21, '20, '19, '18. Introduction to Optimization Theory (CS 269O / MS&E 213): Fall '20, '19; Spring '19, '18, '17. Almost Linear Time Graph Algorithms (CS 269G / MS&E 313): Fall '18, Winter '17. If you see any typos or issues, feel free to email me; some of these notes I am still actively improving, and all of them I am happy to continue polishing.

We will start with a primer week to learn the very basics of continuous optimization (July 26 - July 30), followed by two weeks of talks by the speakers on more advanced topics.

Fully Dynamic Electrical Flows: Sparse Maxflow Faster Than Goldberg-Rao. FOCS 2021.

- "We characterize when solving the max \(\min_{x}\max_{i\in[n]}f_i(x)\) is (not) harder than solving the average \(\min_{x}\frac{1}{n}\sum_{i\in[n]}f_i(x)\)."
- "A general continuous optimization framework for better dynamic (decremental) matching algorithms."

Many of these algorithms are iterative and solve a sequence of smaller subproblems, whose solution can be maintained via the aforementioned dynamic algorithms. Variance Reduction for Matrix Games. NeurIPS Smooth Games Optimization and Machine Learning Workshop, 2019.
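As an illustration of the variance-reduction theme in the entries above, here is a minimal SVRG-style sketch for finite-sum minimization \(\min_x \frac{1}{n}\sum_i f_i(x)\) (a generic textbook estimator on a least-squares example; it is not the matrix-game or value-iteration methods from these papers, and the function names and constants are assumptions):

```python
import numpy as np

def svrg(A, b, lr=0.01, epochs=50, seed=0):
    """Minimize (1/2n) * ||A x - b||^2 with SVRG.

    Each epoch computes one full gradient at a snapshot x_snap; inner steps use
    the unbiased, low-variance estimator g_i(x) - g_i(x_snap) + full_grad.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])   # gradient of (1/2)(a_i.x - b_i)^2
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n
        for _ in range(n):
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(x_snap, i) + full_grad
            x = x - lr * g
    return x

# Example: recover a planted solution of an overdetermined least-squares problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
x_hat = svrg(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))
```

The design point is that the correction term keeps the estimator unbiased while its variance vanishes as \(x\) approaches \(x_{\mathrm{snap}}\), which is what allows fast linear convergence without full gradients at every step.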
Publications and Preprints

- ReSQueing Parallel and Private Stochastic Convex Optimization. Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, and Kevin Tian. (arXiv)
- The Complexity of Infinite-Horizon General-Sum Stochastic Games. With Yujia Jin and Vidya Muthukumar. To appear in Innovations in Theoretical Computer Science (ITCS 2023). (arXiv)
- Optimal and Adaptive Monteiro-Svaiter Acceleration. With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin. To appear in Advances in Neural Information Processing Systems (NeurIPS 2022). (arXiv)
- On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood. With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur.
- Improved Lower Bounds for Submodular Function Minimization. With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang. In Symposium on Foundations of Computer Science (FOCS 2022). (arXiv)
- RECAPP: Crafting a More Efficient Catalyst for Convex Optimization. With Yair Carmon, Arun Jambulapati, and Yujia Jin. International Conference on Machine Learning (ICML 2022). (arXiv)
- Efficient Convex Optimization Requires Superlinear Memory. With Annie Marsden, Vatsal Sharan, and Gregory Valiant. Conference on Learning Theory (COLT 2022).
- Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Method. Conference on Learning Theory (COLT 2022). (arXiv)
- Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales. With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan.
- Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching. With Arun Jambulapati, Yujia Jin, and Kevin Tian. International Colloquium on Automata, Languages and Programming (ICALP 2022). (arXiv)
- Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary. With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun.
- Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers. With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng. In Symposium on Theory of Computing (STOC 2022). (arXiv)
- Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space. With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian. In Symposium on Discrete Algorithms (SODA 2022). (arXiv)
- Algorithmic Trade-offs for Girth Approximation in Undirected Graphs. With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick. In Symposium on Discrete Algorithms (SODA 2022).
- Computing Lewis Weights to High Precision. With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan.
- With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin. In Advances in Neural Information Processing Systems (NeurIPS 2021). (arXiv)
- Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss. In Conference on Learning Theory (COLT 2021). (arXiv)
- The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood. With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
- Towards Tight Bounds on the Sample Complexity of Average-reward MDPs. In International Conference on Machine Learning (ICML 2021). (arXiv)
- Minimum Cost Flows, MDPs, and ℓ1-Regression in Nearly Linear Time for Dense Instances. With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Theory of Computing (STOC 2021). (arXiv)
- Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers. In Symposium on Discrete Algorithms (SODA 2021). (arXiv)
- Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration. In Innovations in Theoretical Computer Science (ITCS 2021). (arXiv)
- Acceleration with a Ball Optimization Oracle. With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian. In Conference on Neural Information Processing Systems (NeurIPS 2020).
- Instance Based Approximations to Profile Maximum Likelihood. In Conference on Neural Information Processing Systems (NeurIPS 2020). (arXiv)
- Large-Scale Methods for Distributionally Robust Optimization. With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution).
- High-precision Estimation of Random Walks in Small Space. With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan. In Symposium on Foundations of Computer Science (FOCS 2020). (arXiv)
- Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs. With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Foundations of Computer Science (FOCS 2020).
- Coordinate Methods for Matrix Games. With Yair Carmon, Yujia Jin, and Kevin Tian. In Symposium on Foundations of Computer Science (FOCS 2020).
- Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time. Invited to the special issue. (arXiv before merge)
- Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity. In International Conference on Artificial Intelligence and Statistics (AISTATS 2020). (arXiv)
- Efficiently Solving MDPs with Stochastic Mirror Descent. In International Conference on Machine Learning (ICML 2020). (arXiv)
- Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond. With Oliver Hinder and Nimit Sharad Sohoni. In Conference on Learning Theory (COLT 2020). (arXiv)
- Solving Tall Dense Linear Programs in Nearly Linear Time. With Jan van den Brand, Yin Tat Lee, and Zhao Song. In Symposium on Theory of Computing (STOC 2020).
- A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization. In Symposium on Foundations of Computer Science (FOCS 2015). Machtey Award for Best Student Paper. (arXiv)
- Efficient Inverse Maintenance and Faster Algorithms for Linear Programming. In Symposium on Foundations of Computer Science (FOCS 2015). (arXiv)
- Competing with the Empirical Risk Minimizer in a Single Pass. With Roy Frostig, Rong Ge, and Sham Kakade. In Conference on Learning Theory (COLT 2015). (arXiv)
- Un-regularizing: Approximate Proximal Point and Faster Stochastic Algorithms for Empirical Risk Minimization. In International Conference on Machine Learning (ICML 2015). (arXiv)
- Uniform Sampling for Matrix Approximation. With Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng. In Innovations in Theoretical Computer Science (ITCS 2015). (arXiv)
- Path-Finding Methods for Linear Programming: Solving Linear Programs in Õ(√rank) Iterations and Faster Algorithms for Maximum Flow. In Symposium on Foundations of Computer Science (FOCS 2014). Best Paper Award and Machtey Award for Best Student Paper. (arXiv)
- Single Pass Spectral Sparsification in Dynamic Streams. With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
- An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations. With Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia. In Symposium on Discrete Algorithms (SODA 2014).
- Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems. In Symposium on Foundations of Computer Science (FOCS 2013). (arXiv)
- A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time. With Jonathan A. Kelner, Lorenzo Orecchia, and Zeyuan Allen Zhu. In Symposium on the Theory of Computing (STOC 2013). (arXiv)
- Derandomization beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space. With Jack Murtagh, Omer Reingold, and Salil Vadhan. SIAM Journal on Computing (arXiv before merge).
- Book chapter in Building Bridges II: Mathematics of Laszlo Lovasz, 2020. (arXiv)
- Lower Bounds for Finding Stationary Points II: First-Order Methods.

