FluidWorld: Reaction-Diffusion Dynamics as a Predictive Substrate for World Models
Abstract
FluidWorld demonstrates that partial differential equations can serve as an efficient alternative to attention mechanisms and convolutional recurrent networks in world modeling, achieving better spatial structure preservation and coherent multi-step predictions with reduced computational complexity.
World models learn to predict future states of an environment, enabling planning and mental simulation. Current approaches default to Transformer-based predictors operating in learned latent spaces. This comes at a cost: O(N²) computation and no explicit spatial inductive bias. This paper asks a foundational question: is self-attention necessary for predictive world modeling, or can alternative computational substrates achieve comparable or superior results? I introduce FluidWorld, a proof-of-concept world model whose predictive dynamics are governed by partial differential equations (PDEs) of reaction-diffusion type. Instead of using a separate neural network predictor, the PDE integration itself produces the future state prediction. In a strictly parameter-matched three-way ablation on unconditional UCF-101 video prediction (64×64, ~800K parameters, identical encoder, decoder, losses, and data), FluidWorld is compared against both a Transformer baseline (self-attention) and a ConvLSTM baseline (convolutional recurrence). While all three models converge to comparable single-step prediction loss, FluidWorld achieves 2× lower reconstruction error, produces representations with 10-15% higher spatial structure preservation and 18-25% more effective dimensionality, and, critically, maintains coherent multi-step rollouts where both baselines degrade rapidly. All experiments were conducted on a single consumer-grade PC (Intel Core i5, NVIDIA RTX 4070 Ti), without any large-scale compute. These results establish that PDE-based dynamics, which natively provide O(N) spatial complexity, adaptive computation, and global spatial coherence through diffusion, are a viable and parameter-efficient alternative to both attention and convolutional recurrence for world modeling.
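The reaction-diffusion predictor described above can be sketched as an explicit Euler step of du/dt = D∇²u + R(u). This is a minimal illustration under stated assumptions, not the paper's implementation: the `tanh` reaction term, the diffusion coefficient `D`, and the step size `dt` are placeholders, and the actual model couples the PDE with a learned encoder and decoder.

```python
import numpy as np

def laplacian(u):
    # 5-point stencil with periodic boundaries; each cell is touched a
    # constant number of times, hence O(N) in the number of cells.
    return (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
            + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)

def rd_step(u, D=0.2, dt=0.1):
    # One explicit Euler step of du/dt = D * lap(u) + R(u).
    # The bounded tanh reaction is an illustrative stand-in for a
    # learned reaction term.
    return u + dt * (D * laplacian(u) + np.tanh(u))

# Integrating the PDE forward *is* the prediction: no separate
# attention or recurrent predictor network is involved.
rng = np.random.default_rng(0)
state = rng.standard_normal((64, 64))
for _ in range(10):
    state = rd_step(state)
```

The constant-stencil Laplacian is the source of the O(N) spatial complexity contrasted with O(N²) attention in the abstract.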
Community
Deeply inspired by Yann LeCun's AMI vision and the progress of JEPA models. A known open challenge in scaling world models is maintaining stable long-term rollouts without compounding errors.
What if we replaced Self-Attention with continuous physics to solve this? Enter FLUIDWORLD
Instead of O(N²) attention, FLUIDWORLD explores Reaction-Diffusion PDEs as the predictive engine (O(N) complexity).
The fascinating part? The Laplacian diffusion natively acts as a low-pass filter, dissipating prediction errors at each step.
This leads to autopoietic self-repair: even with 50% state corruption, the PDE dynamics naturally recover spatial coherence.
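A toy demonstration of this self-repair effect, under simplifying assumptions: pure diffusion with no reaction term, a synthetic smooth field standing in for a latent state, and 50% of cells overwritten with noise. Because the corruption is mostly high-frequency, the Laplacian damps it far faster than the underlying low-frequency signal.

```python
import numpy as np

def diffuse(u, D=0.2, dt=0.1, steps=20):
    # Repeated explicit diffusion steps; high spatial frequencies decay
    # fastest, so noise dissipates before the smooth signal does.
    for _ in range(steps):
        lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
               + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)
        u = u + dt * D * lap
    return u

rng = np.random.default_rng(0)
# Smooth synthetic field (a stand-in for a coherent latent state).
clean = np.sin(np.linspace(0, 4 * np.pi, 64))[:, None] * np.ones((64, 64))
# Corrupt 50% of cells with Gaussian noise.
mask = rng.random(clean.shape) < 0.5
corrupted = np.where(mask, rng.standard_normal(clean.shape), clean)

err_before = np.abs(corrupted - clean).mean()
err_after = np.abs(diffuse(corrupted) - clean).mean()
```

After diffusion, `err_after` is well below `err_before`: the dynamics pull the corrupted state back toward spatial coherence without any explicit denoiser.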
Now I need Labs to scale this physics-grounded substrate!