Artificial agents (e.g., animated avatars, robots) are expected to have an increasing presence in our lives, acting as assistants in consumer, education, and healthcare settings. Much work on maximising engagement with artificial agents has focused on how these agents look and behave, but very little has focused on understanding how the fundamental neurocognitive mechanisms of human social perception and expectation come into play. This is, in part, because we still know relatively little about these mechanisms, owing to a dearth of experimental paradigms that offer ecological validity, experimental control, and objective measures of attention, behaviour, and neural processing during dynamic social interactions. Artificial agents in virtual reality offer a solution: they can realistically simulate dyadic interactions in a context that affords experimental control. I combine neurophysiology, eye-tracking, and motion capture measures across various virtual interaction paradigms to objectively measure social attention and behaviour during interactions with other humans and artificial agents. I also investigate how our beliefs and expectations about artificial agents influence our strategies for social information processing. In doing so, I hope to advance our understanding of the neurocognitive mechanisms of social interaction and to inform how best to design and position artificial agents to promote intuitive interactions.