Collections
Discover the best community collections!
Collections including paper arxiv:2504.13805

- LearnAct: Few-Shot Mobile GUI Agent with a Unified Demonstration Benchmark
  Paper • 2504.13805 • Published • 11
- Towards Agentic Recommender Systems in the Era of Multimodal Large Language Models
  Paper • 2503.16734 • Published • 1
- LLM-Powered GUI Agents in Phone Automation: Surveying Progress and Prospects
  Paper • 2504.19838 • Published • 22

- Mobile-Agent-V: Learning Mobile Device Operation Through Video-Guided Multi-Agent Collaboration
  Paper • 2502.17110 • Published • 13
- WebGames: Challenging General-Purpose Web-Browsing AI Agents
  Paper • 2502.18356 • Published • 14
- VEM: Environment-Free Exploration for Training GUI Agent with Value Environment Model
  Paper • 2502.18906 • Published • 12
- AppAgentX: Evolving GUI Agents as Proficient Smartphone Users
  Paper • 2503.02268 • Published • 11

- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 29
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 14
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 44
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 23

- R1-Onevision: Advancing Generalized Multimodal Reasoning through Cross-Modal Formalization
  Paper • 2503.10615 • Published • 17
- UniGoal: Towards Universal Zero-shot Goal-oriented Navigation
  Paper • 2503.10630 • Published • 6
- Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
  Paper • 2503.09516 • Published • 36
- LMM-R1: Empowering 3B LMMs with Strong Reasoning Abilities Through Two-Stage Rule-Based RL
  Paper • 2503.07536 • Published • 88

- Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model
  Paper • 2407.07053 • Published • 47
- LMMs-Eval: Reality Check on the Evaluation of Large Multimodal Models
  Paper • 2407.12772 • Published • 35
- VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models
  Paper • 2407.11691 • Published • 15
- MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models
  Paper • 2408.02718 • Published • 62