Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution

Zuyan Liu¹˒²˒* Yuhao Dong²˒³˒* Ziwei Liu³ Winston Hu² Jiwen Lu¹ Yongming Rao²˒¹
¹Tsinghua University  ²Tencent  ³S-Lab, NTU  *Equal Contribution

Abstract

Visual data comes in diverse forms, ranging from small icons of just a few pixels to long videos spanning hours. Existing multimodal LLMs usually standardize these diverse visual inputs to a fixed resolution for the visual encoder and yield a similar number of tokens for the LLM, which is suboptimal for multimodal understanding and inefficient for processing long visual content. To solve this problem, we propose Oryx, a unified multimodal architecture for the spatial-temporal understanding of images, videos, and multi-view 3D scenes. Oryx offers an on-demand solution that seamlessly and efficiently processes visual inputs of arbitrary spatial size and temporal length through two core designs: 1) a pre-trained OryxViT model that encodes images at any resolution into LLM-friendly visual representations; 2) a dynamic compressor module that supports 1x to 16x compression of visual tokens on request. Thanks to these designs, Oryx accommodates extremely long visual contexts, such as videos, at low resolution and high compression, while maintaining high recognition precision for tasks like document understanding at native resolution with no compression. Beyond the architectural improvements, enhanced data curation and specialized training on long-context retrieval and spatial-aware data help Oryx achieve strong capabilities in image, video, and 3D multimodal understanding simultaneously.

Examples of Oryx

[Demo gallery from the project page: qualitative examples of Oryx on images, videos, and 3D scenes.]
On-Demand Visual Perception


As illustrated in Figure 1, jointly tuning resolution and compression improves efficiency and meets practical needs: high resolution is crucial for text-heavy tasks, while object-level tasks may require only low-resolution images; some applications need to summarize extremely long videos, while others must maintain high precision for every frame.
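As a conceptual sketch (not the released Oryx interface; the task categories, pixel caps, and ratios below are illustrative assumptions), on-demand perception amounts to choosing a resolution/compression trade-off per request:

# Hypothetical illustration of on-demand visual perception: each task type
# requests its own (resolution, compression) trade-off. The categories and
# ratios are illustrative assumptions, not the released Oryx API.

def perception_config(task: str) -> dict:
    """Pick an input resolution cap and a visual-token compression ratio per task."""
    if task == "document":       # text-dense inputs: native resolution, no compression
        return {"max_pixels": None, "compression": 1}
    if task == "object":         # object-level queries tolerate smaller images
        return {"max_pixels": 448 * 448, "compression": 4}
    if task == "long_video":     # hour-long videos: low resolution, heavy compression
        return {"max_pixels": 256 * 256, "compression": 16}
    return {"max_pixels": 1024 * 1024, "compression": 2}  # general default

print(perception_config("long_video"))  # {'max_pixels': 65536, 'compression': 16}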

Oryx Architecture

Oryx consists of two components: 1) a well-trained visual encoder, OryxViT, which adapts the positional embedding layer and employs variable-length self-attention to process visual tokens of different sizes in batches, generating LLM-friendly visual representations at native resolution; 2) a dynamic compression module that adjusts the downsampling ratio on demand and fuses information through a shared projector, supporting extremely long inputs with up to 16x downsampling while preserving precision at low compression ratios, all without increasing the overall token length.
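To make these designs concrete, here is a minimal PyTorch sketch of both ideas: positional embeddings interpolated to each image's native token grid, and a compressor that pools tokens by a per-sample ratio before a single shared projector. The module names, dimensions, and the use of average pooling are simplifying assumptions, not the paper's exact implementation.

import torch
import torch.nn.functional as F
from torch import nn

def interpolate_pos_embed(pos: torch.Tensor, h: int, w: int) -> torch.Tensor:
    """Resize a (1, H0*W0, C) square grid of positional embeddings to an
    h x w grid, so the encoder accepts images at arbitrary native resolution."""
    n, c = pos.shape[1], pos.shape[2]
    h0 = w0 = int(n ** 0.5)
    grid = pos.reshape(1, h0, w0, c).permute(0, 3, 1, 2)           # (1, C, H0, W0)
    grid = F.interpolate(grid, size=(h, w), mode="bicubic", align_corners=False)
    return grid.permute(0, 2, 3, 1).reshape(1, h * w, c)

class DynamicCompressor(nn.Module):
    """Downsample visual tokens by a per-sample area ratio (1x..16x), then map
    them to LLM space with a single projector shared across all ratios."""
    def __init__(self, vis_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(vis_dim, llm_dim), nn.GELU(),
                                  nn.Linear(llm_dim, llm_dim))

    def forward(self, tokens, h, w, ratio):
        # tokens: (1, h*w, C); `ratio` is the area compression (1, 4, 16, ...)
        x = tokens.reshape(1, h, w, -1).permute(0, 3, 1, 2)        # (1, C, h, w)
        s = int(ratio ** 0.5)                                      # per-side stride
        if s > 1:
            x = F.avg_pool2d(x, kernel_size=s, stride=s)           # spatial pooling
        x = x.flatten(2).transpose(1, 2)                           # (1, h'*w', C)
        return self.proj(x)

# Usage: a 32x32 token grid compressed 16x yields 64 tokens for the LLM.
comp = DynamicCompressor(vis_dim=1024, llm_dim=4096)
vis = torch.randn(1, 32 * 32, 1024)
print(comp(vis, 32, 32, ratio=16).shape)  # torch.Size([1, 64, 4096])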

General Temporal Understanding

We conduct comprehensive experiments on four multiple-choice benchmarks and three generation benchmarks, reporting the main score for each dataset. Oryx exhibits superior performance among a wide range of open-source video MLLMs.

Long-Form Temporal Understanding

We report results on three mainstream long-form temporal understanding datasets, each featuring video inputs tens of minutes in duration. Oryx achieves state-of-the-art results, surpassing several proprietary models across these benchmarks.

Image Understanding

We evaluate 2D spatial understanding on six representative image benchmarks, covering both general and task-specific settings. Oryx achieves tier-1 performance among a wide range of MLLMs.

3D Spatial Understanding

We evaluate on the popular ScanQA dataset and report the standard metrics, comparing Oryx with both 3D-specific models and general open-source MLLMs. Oryx excels in 3D spatial understanding tasks, highlighting its versatility across applications.

Citation (BibTeX)


@article{liu2024oryx,
  title={Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution},
  author={Liu, Zuyan and Dong, Yuhao and Liu, Ziwei and Hu, Winston and Lu, Jiwen and Rao, Yongming},
  journal={arXiv preprint arXiv:2409.12961},
  year={2024}
}