Joey Hong

I am a PhD student advised by Professors Anca Dragan and Sergey Levine, working on offline reinforcement learning.

Prior to my PhD, I was an AI Resident at Google Research, where I worked on multi-task bandits and program synthesis.

Before that, I graduated from Caltech, where I worked with Professor Yisong Yue.

Email  /  Google Scholar  /  Github

profile photo

Current Research

I am currently interested in advancing the capabilities of offline reinforcement learning, particularly in applications that involve interacting with humans, through a mix of theoretical and applied work.

Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations
Joey Hong, Sergey Levine, Anca Dragan
arXiv, slides
Offline RL with Observation Histories: Analyzing and Improving Sample Complexity
Joey Hong, Anca Dragan, Sergey Levine
Learning to Influence Human Behavior with Offline Reinforcement Learning
Joey Hong, Sergey Levine, Anca Dragan
NeurIPS, 2023
arXiv, website
Confidence-Conditioned Value Functions for Offline Reinforcement Learning
Joey Hong, Aviral Kumar, Sergey Levine
ICLR, 2023 (oral)
On the Sensitivity of Reward Inference to Misspecified Human Models
Joey Hong, Kush Bhatia, Anca Dragan
ICLR, 2023 (oral)
When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?
Aviral Kumar*, Joey Hong*, Anikait Singh, Sergey Levine
ICLR, 2022
arXiv, blog

This website uses this template.