Towards a Planning-Based Neural-Symbolic Framework for Egocentric Agent Design

Abstract

In the last decade, deep neural networks have given embodied agents many tools for extracting information from environmental data. However, deep neural networks remain ineffective at tasks such as reasoning and long-term planning. Symbolic methods, such as automated planning, remain the methods of choice for embodied agents in most applications. Integrating a planning system with deep neural networks therefore seems to be the natural next step in embodied agent design. However, such hybrid systems must overcome many challenges on both the planning and the deep learning sides. This work proposes a neural-symbolic framework for constructing embodied agents, capable of semantic navigation, that can take advantage of both neural and symbolic algorithms. The framework uses neural networks for low-level information extraction and automated planners for high-level reasoning. The main challenges our approach addresses are converting traditional pansophical (all-knowing) planners to an egocentric perspective and finding an appropriate factored representation for the environmental information such agents discover. The thesis makes two main contributions towards building neural-symbolic embodied agents. Our first contribution is a method for converting classic pansophical planning problems into egocentric alternatives. Our second is a spatial semantic graph structure that stores environmental information for autonomous object navigation.
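
To make the second contribution concrete, the sketch below shows one plausible shape such a spatial semantic graph could take: nodes are detected objects with semantic labels and estimated positions, edges are spatial relations, and a nearest-object query serves as a goal-selection step for navigation. This is a minimal illustrative sketch under those assumptions; all class and method names (SpatialSemanticGraph, ObjectNode, nearest) are hypothetical and not taken from the thesis.

```python
# Illustrative sketch only: a possible spatial semantic graph for
# object navigation. Names and fields are assumptions, not the
# thesis's actual data structure.
import math
from dataclasses import dataclass, field


@dataclass
class ObjectNode:
    """An observed object: a semantic label plus an estimated position."""
    node_id: int
    label: str                      # e.g. "chair", "fridge"
    position: tuple[float, float]   # estimated (x, y) in the agent's map frame


@dataclass
class SpatialSemanticGraph:
    """Stores objects discovered during exploration and spatial relations
    between them, so a planner can reason over symbols instead of pixels."""
    nodes: dict[int, ObjectNode] = field(default_factory=dict)
    edges: dict[int, set[int]] = field(default_factory=dict)  # adjacency by id

    def add_object(self, node: ObjectNode) -> None:
        self.nodes[node.node_id] = node
        self.edges.setdefault(node.node_id, set())

    def add_relation(self, a: int, b: int) -> None:
        """Record a symmetric spatial relation (e.g. 'near') between objects."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def nearest(self, label: str, from_pos: tuple[float, float]) -> ObjectNode | None:
        """Return the closest known object with the requested semantic label;
        a navigation planner could use this as its goal-selection step."""
        candidates = [n for n in self.nodes.values() if n.label == label]
        if not candidates:
            return None
        return min(candidates, key=lambda n: math.dist(n.position, from_pos))


# Example: the perception module adds detections during exploration;
# navigation then queries the graph for a goal object.
g = SpatialSemanticGraph()
g.add_object(ObjectNode(0, "chair", (1.0, 2.0)))
g.add_object(ObjectNode(1, "fridge", (4.0, -1.0)))
g.add_relation(0, 1)
goal = g.nearest("fridge", from_pos=(0.0, 0.0))
```

Keeping the graph factored in this way, with objects as discrete symbols rather than raw sensor data, is what lets a classical planner consume it directly.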

Publication
MSc Thesis