A large amount of effort has recently been put into understanding the barren plateau phenomenon. In this perspective talk, we face the increasingly loud elephant in the room and ask a question that has been hinted at by many but not explicitly addressed: Can the structure that allows one to avoid barren plateaus also be leveraged to efficiently simulate the loss classically? We present a case-by-case argument that commonly used models with a provable absence of barren plateaus are also, in a sense, classically simulable, provided that one can collect some classical data from quantum devices during an initial data acquisition phase. This follows from the observation that barren plateaus result from a curse of dimensionality, and that current approaches for avoiding them end up encoding the problem into small, classically simulable subspaces. We end by discussing caveats in our arguments, including the limitations of average-case arguments, the role of smart initializations, models that fall outside our assumptions, the potential for provably superpolynomial advantages, and the possibility that, once larger devices become available, parametrized quantum circuits could heuristically outperform our analytic expectations.
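As background for the "curse of dimensionality" claim, the barren plateau phenomenon is usually stated as an exponential concentration of loss gradients. A hedged sketch of the standard statement, in notation not fixed by the abstract: for a parametrized circuit $U(\boldsymbol{\theta})$ whose distribution over parameters forms an approximate 2-design on $n$ qubits, and a loss $C(\boldsymbol{\theta}) = \mathrm{Tr}\!\left[O\, U(\boldsymbol{\theta}) \rho\, U^\dagger(\boldsymbol{\theta})\right]$,

$$
\mathbb{E}_{\boldsymbol{\theta}}\!\left[\partial_k C\right] = 0,
\qquad
\mathrm{Var}_{\boldsymbol{\theta}}\!\left[\partial_k C\right] \in O\!\left(b^{-n}\right) \text{ for some } b > 1,
$$

so gradients vanish exponentially in the number of qubits, and exponentially many measurement shots are needed to resolve them. Structures that avoid this decay typically restrict the dynamics to a polynomially sized subspace, which is the starting point of the classical-simulability argument above.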
The 3rd talk of a monthly webinar series jointly hosted by Perimeter, IVADO, and Institut Courtois.
Rare event sampling in dynamical systems is a fundamental problem arising in the natural sciences, and it poses significant computational challenges due to an exponentially large space of trajectories. For settings where the dynamical system of interest follows a Brownian motion with known drift, the question of conditioning the process to reach a given endpoint or desired rare event is definitively answered by Doob's h-transform. However, naive estimation of this transform is infeasible, as it requires simulating sufficiently many forward trajectories to estimate rare event probabilities. In this talk, I'll present our recent findings on the variational formulation of Doob's h-transform as an optimization problem over trajectories between a given initial point and the desired endpoint. To solve this optimization, we propose a simulation-free training objective with a model parameterization that imposes the desired boundary conditions by design. Our approach significantly reduces the search space over trajectories and avoids the expensive trajectory simulation and inefficient importance sampling estimators that existing methods require. We demonstrate the ability of our method to find feasible transition paths on real-world molecular simulation and protein folding tasks.
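For readers unfamiliar with the h-transform, a brief sketch in standard notation (the abstract itself fixes none): given a diffusion with known drift $b$,

$$
\mathrm{d}X_t = b(X_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t,
$$

and a rare event $A$ at terminal time $T$, define $h(x, t) = \mathbb{P}\!\left(X_T \in A \mid X_t = x\right)$. Doob's h-transform says the process conditioned on the event is again a diffusion, with a tilted drift

$$
\mathrm{d}X_t = \left[\, b(X_t) + \sigma \sigma^{\top} \nabla_x \log h(X_t, t) \,\right] \mathrm{d}t + \sigma\,\mathrm{d}W_t .
$$

The difficulty referenced above is that $h$ involves exactly the rare event probability one cannot estimate by naive forward simulation, which motivates replacing it with a variational objective over trajectories.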
The 2nd talk of a monthly webinar series jointly hosted by Perimeter, IVADO, and Institut Courtois.
Speaker: David Kremer Garcia, AI Engineer & Lead Data Scientist, IBM Quantum, Yorktown Heights, NY, USA.
In this session, I will talk about how we are using AI to improve quantum circuit transpiling and optimization. I will show some of our recent work, where we apply AI methods such as reinforcement learning to different transpiling tasks and achieve a strong balance between speed and quality of the results. I will also talk about how we integrate these methods with other heuristics to provide "AI-enhanced transpiling" through our Qiskit Transpiler Service.
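To make the transpiling task concrete, here is a minimal toy sketch, not IBM's implementation and not the Qiskit API, of the kind of circuit-rewriting objective an RL-based transpiler pass optimizes: reducing gate count by applying local rewrite rules. The gate representation and function name are hypothetical, chosen only for illustration.

```python
# Toy sketch of a transpiler peephole pass (hypothetical, illustrative only):
# a circuit is a list of (gate_name, qubits) tuples, and the pass cancels
# adjacent identical self-inverse gates (G . G = I for H, X, CX).
# An RL transpiler learns *which* such rewrites to apply and in what order;
# this fixed rule just shows the gate-count objective being optimized.

def cancel_adjacent_inverses(circuit):
    """Remove adjacent identical self-inverse gates acting on the same qubits."""
    out = []
    for gate in circuit:
        name, _qubits = gate
        if out and out[-1] == gate and name in {"h", "x", "cx"}:
            out.pop()  # the pair cancels to the identity
        else:
            out.append(gate)
    return out

circuit = [
    ("h", (0,)),
    ("h", (0,)),      # cancels with the previous H on qubit 0
    ("cx", (0, 1)),
    ("cx", (0, 1)),   # cancels with the previous CX on (0, 1)
    ("x", (1,)),
]
optimized = cancel_adjacent_inverses(circuit)
print(len(circuit), "->", len(optimized))  # 5 -> 1
```

The stack-based scan also catches cancellations that only appear after an inner pair is removed (e.g. H, CX, CX, H on the same qubits collapses entirely), which is the sort of compounding reward an RL agent can exploit.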