Guillaume Pourcel
📢 @NeurIPS25 | Finishing my PhD and seeking an industry internship or a full-time position as an AI Research Scientist. At NeurIPS, I’ll present:
- RHEL (first-author, oral)
- RHEL for neuroscience (last-author, workshop poster)
- Training LLMs for persuasion improves generalization (last-author, workshop poster)
I am a PhD candidate in the AI department at the University of Groningen, where I work in the MINDS group under the supervision of Prof. Herbert Jaeger. My research is affiliated with CogniGron and supported by the Post-Digital project. Throughout my doctoral studies, I have been a visiting researcher at ETH Zurich and IFISC, and I completed research internships at Inria (SCOOL and Flowers teams). I have also collaborated with Maxence Ernoult from Google DeepMind.
Research Interests
Hardware-aware learning algorithms

Modern computing is increasingly dominated by just two programs: neural network inference and gradient computation. Designing efficient specialized hardware therefore means exploiting the specific structure of these two programs. Going further, it has been proposed to redesign them so that they are unified into a single process. This approach was called “physics-grounded deep learning”, with algorithms like equilibrium propagation, where inference and gradient computation are carried out by the same physical process. However, these methods were limited to equilibrium systems that can only process static data, missing the temporal sequences that dominate real-world applications.
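To make this concrete, below is a minimal sketch of equilibrium propagation on a toy Hopfield-style energy network, in the spirit of Scellier and Bengio (2017). The layer sizes, hard-sigmoid activation, relaxation schedule, and quadratic cost are illustrative assumptions, not the setup of any particular published experiment.

```python
# Minimal sketch of equilibrium propagation on a toy energy-based network.
# Free phase: relax to an energy minimum (inference). Nudged phase: relax the
# SAME dynamics with a weak pull of the output toward the target. The weight
# update is a local contrast between the two equilibria.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.1, (n_in, n_hid))   # input -> hidden couplings
W2 = rng.normal(0, 0.1, (n_hid, n_out))  # hidden -> output couplings

rho = lambda u: np.clip(u, 0.0, 1.0)     # hard-sigmoid activation

def relax(x, y, beta, h=None, o=None, steps=100, dt=0.1):
    """Descend the total energy E + beta * C (beta = 0 gives the free phase)."""
    h = np.zeros(n_hid) if h is None else h.copy()
    o = np.zeros(n_out) if o is None else o.copy()
    for _ in range(steps):
        # Leaky dynamics from a Hopfield-style energy (rho' = 1 is assumed
        # inside the active region, for brevity).
        dh = -h + rho(x) @ W1 + rho(o) @ W2.T
        do = -o + rho(h) @ W2 + beta * (y - o)   # beta*(y - o) is the nudge
        h, o = h + dt * dh, o + dt * do
    return h, o

def ep_update(x, y, beta=0.5, lr=0.1):
    """One equilibrium-propagation step: two relaxations, one local update."""
    global W1, W2
    h0, o0 = relax(x, y, beta=0.0)        # free phase
    hb, ob = relax(x, y, beta, h0, o0)    # nudged phase, from the free fixed point
    W1 += lr / beta * (np.outer(rho(x), rho(hb)) - np.outer(rho(x), rho(h0)))
    W2 += lr / beta * (np.outer(rho(hb), rho(ob)) - np.outer(rho(h0), rho(o0)))
    return 0.5 * np.sum((y - o0) ** 2)    # free-phase loss, for monitoring

x, y = rng.random(n_in), np.array([0.0, 1.0])
for _ in range(200):
    loss = ep_update(x, y)
print(f"free-phase loss after training: {loss:.4f}")
```

The point of the sketch is that the nudged phase reuses exactly the same relaxation dynamics as inference, and the weight update needs only locally available activity from the two equilibria.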
My research addresses this gap through RHEL (Recurrent Hamiltonian Echo Learning), which extends physics-grounded learning to temporal data. RHEL computes exact gradients through time using the same forward dynamics: no separate backward pass and no stored activations are needed. This makes it naturally suited to analog and neuromorphic hardware, where both inference and learning obey the same physical process, potentially achieving better efficiency than its conventional autodiff counterpart, backpropagation through time.
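The physical ingredient behind this is time-reversal symmetry: flip the momenta of a Hamiltonian system and the same dynamics retrace the trajectory, so past states can be revisited without storing them. The sketch below demonstrates only this echo property on a toy network of coupled oscillators; the coupling matrix, integrator settings, and system size are illustrative assumptions, and it deliberately omits the loss-dependent perturbation that RHEL reads out during the echo to obtain gradients.

```python
# Toy demonstration of the Hamiltonian "echo": after a momentum flip, the same
# forward integrator retraces the trajectory back to the initial state.
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)  # symmetric positive-definite couplings

def leapfrog(q, p, steps=1000, dt=1e-3):
    """Integrate H(q, p) = 0.5 p.p + 0.5 q.K.q with the time-reversible leapfrog scheme."""
    for _ in range(steps):
        p = p - 0.5 * dt * (K @ q)  # half kick
        q = q + dt * p              # drift
        p = p - 0.5 * dt * (K @ q)  # half kick
    return q, p

q0, p0 = rng.normal(size=n), rng.normal(size=n)
qT, pT = leapfrog(q0, p0)     # forward pass: ordinary physical evolution
qE, pE = leapfrog(qT, -pT)    # echo: flip momenta, run the SAME dynamics
print("max echo error:", np.max(np.abs(qE - q0)))  # ~0, up to float roundoff
```

Because the echo reuses the forward dynamics, a hardware implementation needs only one physical process for both inference and learning, which is the efficiency argument above.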
Agency in AI systems

My cognitive science background drew me to the fundamental gap between natural and artificial agency. Current approaches to agentic AI fall into two inadequate categories: rigid, hand-engineered reward functions in RL, and the poorly understood emergent behaviors of LLMs that mimic human motivations from their training corpora, often with systematic pathologies. Understanding and bridging this gap is crucial for building AI systems that can act autonomously in complex, open-ended environments. My contributions to this problem include studies on:
- Persuasion as an intrinsic motivation for training LLMs.
- Interest-based curriculum learning for deep RL (a minimal sketch follows below).
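As a rough illustration of the second item, here is a minimal sketch of interest-based task sampling, where a curriculum favors tasks with high absolute learning progress (in the spirit of learning-progress curricula). The two-window progress estimate, softmax sampler, window size, temperature, and the toy stand-in for an RL loop are all illustrative assumptions.

```python
# Minimal sketch of interest-based curriculum learning: sample tasks in
# proportion to absolute learning progress (ALP), the change in recent returns.
import numpy as np
from collections import deque

rng = np.random.default_rng(2)
n_tasks = 3
returns = [deque(maxlen=20) for _ in range(n_tasks)]  # recent returns per task

def alp(history):
    """Absolute learning progress: |mean of newer half - mean of older half|."""
    if len(history) < 4:
        return 1.0  # optimistic default so new tasks get explored
    h = np.array(history)
    m = len(h) // 2
    return abs(h[m:].mean() - h[:m].mean())

def sample_task(temperature=0.1):
    """Softmax over per-task ALP scores: focus on whatever is being learned."""
    scores = np.array([alp(r) for r in returns])
    probs = np.exp(scores / temperature)
    return rng.choice(n_tasks, p=probs / probs.sum())

# Toy stand-in for the RL inner loop: only task 1 is currently learnable
# (its return improves each time it is practiced); tasks 0 and 2 are flat.
progress = [0.0, 0.0, 0.0]
for step in range(500):
    t = sample_task()
    progress[t] += 0.01 if t == 1 else 0.0
    returns[t].append(progress[t] + 0.05 * rng.normal())
print("ALP per task:", [round(alp(r), 3) for r in returns])
```

The sampler concentrates on task 1, the only task whose returns are improving, instead of wasting interaction on tasks that are already mastered or currently unlearnable.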
News
| Date | Update |
|---|---|
| Nov 2025 | Paper “Optimizing for Persuasion Improves LLM Generalization” accepted at the NeurIPS 2025 Workshop on Multi-Turn Interactions in Large Language Models! |
| Oct 2025 | Our paper “Learning long range dependencies through time reversal symmetry breaking” was accepted as an oral presentation (top 0.3%) at NeurIPS 2025! |
| Sep 2025 | Awarded the Edward N. Lorenz Early Career Award for our paper “Adaptive control of recurrent neural networks using conceptors”, accepted for publication in Chaos. |
| Nov 2024 | Won first prize at the hack1robot hackathon for our work on optimizing prompts for persuasion in multi-agent LLM debates! |
| Jun 2023 | Invited talk at the Santa Fe Workshop on “Sensory Prediction: Engineered and Evolved”: Controlling the geometry of neural dynamics for robust predictions. |