
Porr B. & Miller P. (2019) Forward propagation closed loop learning. Adaptive Behavior 28(3): 181–194.
For an autonomous agent, the inputs are the sensory data that inform the agent of the state of the world, and the outputs are its actions, which act on the world and consequently produce new sensory inputs. The agent only knows of its own actions via their effect on future inputs; therefore desired states, and error signals, are most naturally defined in terms of the inputs. Most machine learning algorithms, however, operate in terms of desired outputs. For example, backpropagation takes target output values and propagates the corresponding error backwards through the network in order to change the weights. In closed loop settings, however, it is far more obvious how to define desired sensory inputs than desired actions. Training a deep network using errors defined in the input space calls for an algorithm that can propagate those errors forwards through the network, from input layer to output layer, in much the same way that activations are propagated. In this article, we present a novel learning algorithm which performs such ‘forward propagation’ of errors. We demonstrate its performance, first on a simple line follower and then in a first-person shooter game.
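The core idea in the abstract — an input-space error that travels through the network alongside the activations, layer by layer, driving local weight updates — can be sketched as follows. This is a minimal illustrative sketch, not the published algorithm: the two-layer architecture, the tanh nonlinearity, and the Hebbian-style outer-product update rule are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network: 3 sensory inputs -> 4 hidden units -> 2 motor outputs.
W1 = rng.normal(scale=0.1, size=(4, 3))
W2 = rng.normal(scale=0.1, size=(2, 4))

def forward_step(x, e, eta=0.01):
    """Propagate the activation x and the input-space error e forward together.

    The error e is defined at the inputs (e.g. deviation from a desired
    sensory state) and is pushed through the same weights as the activation,
    rather than being propagated backwards from a target output. Each layer
    applies a local update using its forward-propagated error (an assumed
    Hebbian-style rule, chosen here only for illustration).
    """
    global W1, W2
    h = np.tanh(W1 @ x)            # hidden activation
    e1 = W1 @ e                    # error forwarded through layer 1
    W1 += eta * np.outer(e1, x)    # local update from forwarded error
    y = np.tanh(W2 @ h)            # output activation (the action)
    e2 = W2 @ e1                   # error forwarded through layer 2
    W2 += eta * np.outer(e2, h)
    return y, e2

# Example: one closed-loop step with a small sensory error signal.
x = np.array([1.0, 0.5, -0.5])     # sensory input
e = np.array([0.1, 0.0, -0.1])     # desired-input error, defined in input space
y, e_out = forward_step(x, e)
```

The contrast with backpropagation is the direction of travel: here no target output is ever specified, and the error enters the network at the same place the sensor data do.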

