The first two lines are premises, which we can regard as existing data. The third line, the conclusion, is derived through reasoning and represents new data.
AlphaFold 2, in predicting the 3D structures of proteins, incorporates elements that parallel human spatial reasoning.12 There are also mathematical reasoning models, most notably Google DeepMind’s AlphaProof and AlphaGeometry 2. These two models together solved four out of six problems at the 2024 International Mathematical Olympiad (IMO), achieving a score in the high end of the silver medal category.13 Among large language models with advanced reasoning capabilities are OpenAI’s o3 and DeepSeek’s R1.
Generative AI as an Abstraction Layer
The predictive power of deep learning has significant implications for the role of quantitative methods in decision-making. For example, AlphaFold solved the 50‑year-old problem of protein folding without understanding the folding process itself.14 Similarly, the Artificial Intelligence Forecasting System (AIFS), a deep learning system for weather forecasting introduced by the European Centre for Medium-Range Weather Forecasts (ECMWF) in February 2025, outperforms the most accurate physics-based models for many target variables without understanding meteorology.15
Just as these generative AI systems lack an understanding of biology or physics, we do not fully comprehend how these models arrive at their predictions.16 Deep learning systems provide predictions without the opportunity for causal interpretation (“why”)17 and related counterfactual inspection (“what if”). However, causal interpretation is crucial for forming narratives, which help us organize information and communicate it coherently. Narrative plays a key role in corporate decision‑making.18
Generative AI can be viewed as an abstraction layer. In software engineering, abstraction layers are designed to hide the inner workings of subsystems. In science, too, abstraction layers simplify complex systems. For instance, chemistry serves as an abstraction layer over physics, allowing us to understand chemical processes without fully grasping the underlying physics.19
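The software-engineering notion of an abstraction layer mentioned above can be sketched in code. The following is a minimal, hypothetical example (the `Storage` interface and its in-memory backend are inventions for illustration, not from the original text): client code depends only on the interface, so the subsystem's inner workings stay hidden and can be swapped without changing the caller.

```python
from abc import ABC, abstractmethod


class Storage(ABC):
    """Abstraction layer: callers see only this interface,
    never the mechanics of how data is actually stored."""

    @abstractmethod
    def get(self, key: str):
        ...

    @abstractmethod
    def put(self, key: str, value: str) -> None:
        ...


class InMemoryStorage(Storage):
    """One concrete subsystem; its internals (a plain dict)
    are invisible to code written against Storage."""

    def __init__(self) -> None:
        self._data = {}

    def get(self, key: str):
        return self._data.get(key)

    def put(self, key: str, value: str) -> None:
        self._data[key] = value


def cache_lookup(store: Storage, key: str, default: str) -> str:
    """Client code works against the abstraction: swapping in a
    disk- or network-backed Storage requires no changes here."""
    value = store.get(key)
    if value is None:
        store.put(key, default)
        return default
    return value
```

As with the driving analogy that follows, the user of `Storage` verifies the abstraction only by its behavior (predictable results from `get` and `put`), not by inspecting the implementation beneath it.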
Consider how we drive motor vehicles without fully understanding the physics that powers them. When we learned to drive, we formed an abstraction layer, and we verify that abstraction by its predictive benefits. For example, we predict that stepping on the accelerator will make the car move faster, and that stepping on the brakes will slow it down. When driving an unfamiliar car, our predictions may be slightly off, and we adjust our abstraction accordingly.
Conclusion
The Nobel Prizes for AI marked a milestone, not only in scientific discovery but also in decision-making. Generative AI systems provide powerful predictions that resist causal interpretation and the formation of accompanying narratives. These systems require us to abstract from the underlying causal forces and focus on verifying the predictive benefits. Abstraction is a well-known concept in both science and everyday life.