In a paper published on February 10, 2026, researchers from the University of North Carolina at Chapel Hill and Nanyang Technological University introduce AVIC, an adaptive framework designed to improve how AI models handle visual spatial reasoning by selectively using world models to "imagine" scenes. While traditional systems often rely on always-on imagination, which is computationally expensive and frequently produces misleading or redundant data, AVIC employs a gating policy to decide whether additional visual evidence is truly necessary. When it is, the system generates targeted action plans to simulate specific viewpoints, which are then verified for accuracy and relevance before being used to answer the question. Testing across benchmarks such as SAT and R2R shows that this selective approach reaches state-of-the-art performance with significantly greater efficiency. Ultimately, the research demonstrates that visual imagination is most effective when applied sparingly to action-conditioned tasks rather than static observations.

Source: "When and How Much to Imagine: Adaptive Test-Time Scaling with World Models for Visual Spatial Reasoning" (February 10, 2026). Shoubin Yu, Yue Zhang, Zun Wang, Jaehong Yoon, Huaxiu Yao, Mingyu Ding, Mohit Bansal. University of North Carolina, Chapel Hill; Nanyang Technological University. https://arxiv.org/pdf/2602.08236
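The gate-then-imagine-then-verify control flow described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the gating policy, world model, and verifier are stand-in stubs (the real AVIC components are learned models), and all function names here are invented for the example.

```python
# Hypothetical sketch of an AVIC-style adaptive imagination loop.
# gate_needs_imagination, imagine_views, verify, and answer are
# placeholder stubs standing in for the learned components.

def gate_needs_imagination(question: str) -> bool:
    """Stand-in gating policy: imagine only for action-conditioned
    queries, mirroring the paper's finding that imagination helps
    action-conditioned tasks rather than static observations."""
    action_cues = ("if i turn", "after moving", "behind", "from the other side")
    return any(cue in question.lower() for cue in action_cues)

def imagine_views(observation: str, max_views: int = 2) -> list[str]:
    """Stand-in world model: propose candidate imagined viewpoints
    conditioned on a targeted action plan."""
    return [f"{observation}+imagined_view{i}" for i in range(max_views)]

def verify(view: str) -> bool:
    """Stand-in verifier: keep only views judged accurate and relevant."""
    return "imagined_view0" in view  # toy relevance check

def answer(question: str, evidence: list[str]) -> str:
    """Stand-in answerer: consume the question plus retained evidence."""
    return f"answer({question!r}, evidence={evidence})"

def avic_answer(question: str, observation: str) -> str:
    """Adaptive loop: imagine only when the gate says extra visual
    evidence is needed, and filter imagined views through the verifier."""
    evidence = [observation]
    if gate_needs_imagination(question):
        evidence += [v for v in imagine_views(observation) if verify(v)]
    return answer(question, evidence)

# Action-conditioned question triggers (verified) imagination;
# a static question answers directly from the observation.
print(avic_answer("What is behind the chair?", "rgb_frame"))
print(avic_answer("What color is the wall?", "rgb_frame"))
```

The design choice this illustrates is that the expensive world-model call sits behind a cheap gate, so the common case (a question answerable from the current view) pays nothing for imagination.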