Robotic Environmental State Recognition with Pre-Trained Vision-Language Models and Black-Box Optimization

Advanced Robotics

  • Kento Kawaharazuka
  • Yoshiki Obinata
  • Naoaki Kanazawa
  • Kei Okada
  • Masayuki Inaba
  • JSK Robotics Laboratory, The University of Tokyo, Japan

In order for robots to autonomously navigate and operate in diverse environments, it is essential that they recognize the state of their surroundings. However, environmental state recognition has traditionally required distinct methods tailored to each state to be recognized. In this study, we perform unified environmental state recognition for robots through spoken language, using pre-trained large-scale vision-language models. We apply Visual Question Answering (VQA) and Image-to-Text Retrieval (ITR), two standard tasks of vision-language models. We show that our method can recognize not only whether a room door is open or closed, but also whether a transparent door is open or closed and whether water is running in a sink, without training neural networks or manual programming. In addition, recognition accuracy can be improved by weighting appropriate texts from a prepared text set via black-box optimization. For each state to be recognized, only the text set and its weighting need to be changed; there is no need to prepare multiple different models and programs, which simplifies the management of source code and computational resources. We experimentally demonstrate the effectiveness of our method and apply it to a recognition behavior on the mobile robot Fetch.


Robotic Environmental State Recognition with Pre-Trained Vision-Language Models and Black-Box Optimization

The concept of this study: for robotic environmental state recognition, we use the pre-trained vision-language models BLIP-2 and OFA for Visual Question Answering (VQA), and CLIP and ImageBind for Image-to-Text Retrieval (ITR), with black-box optimization of the weighting of prepared text prompts.
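To make the ITR branch concrete, the sketch below scores an image against a prepared set of text prompts with CLIP and makes a binary state decision from a weighted sum of the similarities. This is a minimal sketch, not the exact implementation from the paper: the Hugging Face checkpoint, the example prompts, and the simple weighted-sum decision rule are all illustrative assumptions.

# Minimal sketch of ITR-based state recognition with CLIP.
# Assumptions: Hugging Face checkpoint, example prompts, and a simple
# weighted-sum decision rule; the paper's exact prompts and weights differ.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Prepared text set for one binary state (door open vs. closed).
texts = [
    "an opened door",        # supports "open"
    "a door that is open",   # supports "open"
    "a closed door",         # supports "closed"
    "a door that is shut",   # supports "closed"
]
# Per-prompt weights: positive prompts vote "open", negative vote "closed".
# These values are placeholders to be tuned by black-box optimization.
weights = np.array([1.0, 1.0, -1.0, -1.0])
threshold = 0.0

def recognize(image):
    """Return True if the target state (e.g., 'door open') is recognized."""
    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image[0].numpy()  # image-text similarities
    return float(weights @ sims) > threshold

print(recognize(Image.open("door.jpg")))

For the VQA branch, the same interface applies in spirit: a model such as BLIP-2 or OFA is queried with a question prompt (e.g., "Is the door open?") and the yes/no answer replaces the similarity comparison.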


Basic Experiments

The sets of text prompts and representative images for the Room, Elevator, Cabinet, Refrigerator, Microwave, Various Doors, Transparent Door, Light, Display, Handbag, Water, and Kitchen experiments.

The results of the state recognition experiments. The percentage of correct responses is shown for each of the four models.
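To give a sense of how the prompt weighting can be tuned, the sketch below fits the weights and threshold on a small labeled image set by maximizing recognition accuracy with a derivative-free optimizer. SciPy's differential evolution is used purely as a stand-in black-box optimizer, and the data files are hypothetical; the paper's optimizer, objective details, and data differ.

# Sketch: tune prompt weights and threshold with a black-box optimizer.
# Differential evolution is a stand-in; any derivative-free optimizer
# (e.g., CMA-ES) fits the same interface.
import numpy as np
from scipy.optimize import differential_evolution

# S[i, j]: precomputed similarity of image i to prompt j (see the ITR
# sketch above); y[i]: ground-truth binary label. Hypothetical files.
S = np.load("similarities.npy")  # shape (n_images, n_prompts)
y = np.load("labels.npy")        # shape (n_images,), values in {0, 1}

def neg_accuracy(params):
    weights, threshold = params[:-1], params[-1]
    pred = (S @ weights > threshold).astype(int)
    return -np.mean(pred == y)   # minimize negative accuracy

n_prompts = S.shape[1]
bounds = [(-1.0, 1.0)] * n_prompts + [(-5.0, 5.0)]  # weights, then threshold
result = differential_evolution(neg_accuracy, bounds, seed=0, maxiter=200)
print("best accuracy:", -result.fun)
print("weights:", result.x[:-1], "threshold:", result.x[-1])

Because the objective is evaluated only through classification accuracy, the same tuning loop works unchanged for any of the four models: only the precomputed score matrix S changes.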


Advanced Experiment

A navigation experiment including recognition of the refrigerator door, cabinet door, and room door states.
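For a robot experiment of this kind, the recognizer only needs to be wired into the camera stream. The snippet below is a hedged ROS 1 sketch: the camera topic name and the door_recognizer module (wrapping the ITR sketch above) are assumptions, not the authors' released code.

# Sketch: run the state recognizer on the robot's camera stream (ROS 1).
import rospy
from cv_bridge import CvBridge
from PIL import Image as PILImage
from sensor_msgs.msg import Image

# Hypothetical module wrapping the ITR sketch above.
from door_recognizer import recognize

bridge = CvBridge()

def callback(msg):
    # Convert the ROS image to a PIL image and query the recognizer.
    rgb = bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
    is_open = recognize(PILImage.fromarray(rgb))
    rospy.loginfo("door open: %s", is_open)  # navigation can branch on this

rospy.init_node("state_recognition")
# Assumed Fetch head camera topic; adjust to the actual robot configuration.
rospy.Subscriber("/head_camera/rgb/image_raw", Image, callback, queue_size=1)
rospy.spin()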


Bibtex

@article{kawaharazuka2024vlmbbo,
  author={K. Kawaharazuka and Y. Obinata and N. Kanazawa and K. Okada and M. Inaba},
  title={{Robotic Environmental State Recognition with Pre-Trained Vision-Language Models and Black-Box Optimization}},
  journal={Advanced Robotics},
  pages={1--10},
  year={2024},
  doi={10.1080/01691864.2024.2366995},
}

Contact

If you have any questions, please feel free to contact Kento Kawaharazuka.