<aside> <img src="/icons/mail_gray.svg" alt="Email icon" width="40px" /> Email
</aside>
<aside> <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/d1771b69-4bbd-4546-9b9c-048c64f78304/be754098-d3e9-4e54-b426-cc1940ceb2d0/git.png" alt="GitHub icon" width="40px" /> GitHub
</aside>
<aside> <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/d1771b69-4bbd-4546-9b9c-048c64f78304/fed292de-c921-4cb6-a0ff-b00099421b20/linkedin.png" alt="LinkedIn icon" width="40px" /> LinkedIn
</aside>
I’m currently a Ph.D. student at the Autonomous Robot Intelligence Lab, Seoul National University. I’m broadly interested in (i) LLM-based robotics, (ii) vision-and-language navigation, (iii) multimodal learning, and (iv) natural language processing (especially dialogue models). My overall research goal is to build fully autonomous embodied agents by closing the sensory gap between humans and robots.
**[paper] [code] [project page]**
https://www.youtube.com/watch?v=mVvqgM88dE0