Monotonicity is All You Need
Alan Lujan
Proves that monotonicity makes the upper envelope sufficient for finding globally optimal solutions in discrete choice models, providing theoretical foundations for applying the Endogenous Grid Method to these problems.
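As an illustration of the upper-envelope step this result concerns, the sketch below assumes conditional value functions for two discrete choices have already been computed on a common grid; the grid, functional forms, and variable names are placeholders for illustration, not the paper's.

    import numpy as np

    # Illustrative conditional value functions v_d(m) for two discrete
    # choices (e.g., adjust vs. keep), evaluated on a shared grid of
    # market resources. The functional forms are placeholders.
    m_grid = np.linspace(0.1, 10.0, 200)
    v_adjust = np.log(m_grid) - 0.5          # value conditional on adjusting
    v_keep = 0.9 * np.log(m_grid + 0.5)      # value conditional on keeping

    # Upper envelope: the pointwise maximum over discrete choices.
    # Under monotonicity, this envelope is sufficient to recover the
    # globally optimal discrete and continuous policies in an EGM solution.
    v_stacked = np.vstack([v_adjust, v_keep])
    v_upper = v_stacked.max(axis=0)          # envelope of the value functions
    d_policy = v_stacked.argmax(axis=0)      # index of the optimal discrete choice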
Solving Heterogeneous Agent Models with Gaussian Process Reinforcement Learning
Alan Lujan
Agents learn their environment's transition dynamics in a model-free way by combining Gaussian Process dynamic programming with the Endogenous Grid Method. In the key application, Krusell-Smith agents learn the aggregate law of motion online, delivering smooth policy functions with uncertainty estimates.
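A minimal sketch of the law-of-motion learning step described above, using scikit-learn's Gaussian Process regressor as a stand-in; the simulated data, kernel choice, and variable names are assumptions for illustration, not the paper's implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Simulated observations of aggregate capital K_t, the aggregate shock
    # z_t, and next-period capital K_{t+1}: a stand-in for the data an
    # agent accumulates while simulating a Krusell-Smith economy.
    K_t = rng.uniform(30.0, 45.0, size=100)
    z_t = rng.choice([0.99, 1.01], size=100)
    K_next = 0.9 * K_t + 3.0 * z_t + rng.normal(0.0, 0.1, size=100)
    X = np.column_stack([K_t, z_t])

    # GP regression of the aggregate law of motion K_{t+1} = f(K_t, z_t).
    # The posterior mean gives a smooth forecast; the posterior standard
    # deviation gives the uncertainty estimate mentioned in the abstract.
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X, K_next)
    K_pred, K_std = gp.predict(np.array([[38.0, 1.01]]), return_std=True)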
Sequential Programming
Alan Lujan
Provides formal conditions under which sequential solutions to multi-decision problems converge to optimal policies, establishing convergence theory for the Sequential Endogenous Grid Method.
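The kind of decomposition this convergence theory concerns can be illustrated with a generic two-decision Bellman problem split into nested stages; the notation below is illustrative rather than the paper's.

    % Joint problem: both decisions (c, d) are chosen at once.
    V(m) = \max_{c,\, d} \Big\{ u(c) + \beta\, \mathbb{E}\big[ V\big(m'(m - c,\; d)\big) \big] \Big\}

    % Sequential reformulation: one decision per stage, each stage amenable to EGM.
    V(m) = \max_{c} \big\{ u(c) + W(m - c) \big\},
    \qquad
    W(a) = \max_{d} \beta\, \mathbb{E}\big[ V\big(m'(a, d)\big) \big]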
Exploring Effects of Consumer Finance Regulation with Deep Learning
Sebastian Benthall (NYU), Alan Lujan, Ngozi Nwanta (NYU), Zixuan Pan, Katherine J. Strandburg (NYU)
Applies deep learning methods to analyze and predict the effects of consumer finance regulation. Supported by the National Science Foundation through Award No. 2131532.