Olivier Georgeon's research blog—also known as the story of little Ernest, the developmental agent.

Keywords: situated cognition, constructivist learning, intrinsic motivation, bottom-up self-programming, individuation, theory of enaction, developmental learning, artificial sense-making, biologically inspired cognitive architectures, agnostic agents (without ontological assumptions about the environment).

Friday, December 6, 2013

Radical Interactionism

Olivier L. Georgeon and David W. Aha 2013. The Radical Interactionism Conceptual Commitment. Journal of Artificial General Intelligence 4(2): 31-36.

This is where I turn radical and eliminate perceptions and actions as primitive notions of cognitive models. Now I wonder why it took so many years to come up with such an obvious and elegant formalism.
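In a nutshell, the loop looks like this (a minimal sketch with made-up names and valences, not the paper's formalism): the agent and the environment exchange whole sensorimotor interactions rather than separate percepts and actions. The agent selects an intended interaction; the environment returns the interaction that was actually enacted, which may differ.

```python
import random

# Minimal sketch of a Radical Interactionism loop (all names hypothetical).
# No percept and no action appear as primitives: the only primitive notion
# exchanged between the agent and the environment is the interaction.

VALENCE = {"step": 5, "bump": -10, "turn": -1}  # innate valence of each interaction

def decide(history):
    """Agent: choose the intended interaction (here, naively the best valence)."""
    return max(VALENCE, key=VALENCE.get)

def enact(intended):
    """Environment: return the interaction actually enacted, which may
    differ from the intended one (e.g., an intended 'step' becomes 'bump')."""
    if intended == "step" and random.random() < 0.3:
        return "bump"
    return intended

history = []
for t in range(10):
    intended = decide(history)
    enacted = enact(intended)
    history.append(enacted)  # the agent only ever experiences enacted interactions
```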

Friday, November 15, 2013

Single Agents Can Be Constructivist Too

Olivier L. Georgeon and Salima Hassas 2013. Single Agents Can Be Constructivist too. Constructivist Foundations 9(1): 40-42.

In this open peer commentary on an article by Roesch et al., we argue that multi-agent systems are not the only alternative to cognitivism. We present Ernest in Roesch et al.'s environment to show that Ernest is constructivist too.

Thursday, September 12, 2013

Ernest 12

Ernest 12 categorizes the entities in its environment based on the possibilities of interaction that they afford, and adjusts its behavior to these categories.

Top-left: Ernest in its environment. The "eye" (half-circle) takes the color of the entity that gets Ernest's attention at any given time.

Top-right: Ernest's spatial memory. Interactions are localized in space, and Ernest updates their positions as it moves. Entities are constructed where interactions overlap. Rectangles and trapezoids represent interactions; circles represent entities.

Bottom: activity trace. Its bottom row shows the interactions (rectangles and trapezoids) enacted to the left, in front of, or to the right of Ernest; its middle row shows the motivational value of each enacted interaction as a bar graph (green when positive, red when negative); its top row shows the actions (half-circles for turning, triangles for trying to step forward) and the entities (blue and green circles) learned over time.
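To make this mechanism concrete, here is a toy sketch of such an egocentric spatial memory (the names and geometry are illustrative, not the actual Ernest/ECA code): each enacted interaction is stored at a position relative to the agent, all stored positions are transformed inversely whenever the agent moves or turns, and an entity is hypothesized wherever different interactions overlap.

```python
import math

class SpatialMemory:
    """Toy egocentric spatial memory (illustrative; not the actual ECA code)."""

    def __init__(self):
        self.places = []  # (interaction, x, y) in agent-centered coordinates

    def add(self, interaction, x, y):
        self.places.append((interaction, x, y))

    def translate(self, dx, dy):
        # The agent stepped by (dx, dy): shift every remembered place the other way.
        self.places = [(i, x - dx, y - dy) for (i, x, y) in self.places]

    def rotate(self, angle):
        # The agent turned by `angle` radians: rotate remembered places inversely.
        c, s = math.cos(-angle), math.sin(-angle)
        self.places = [(i, x * c - y * s, x * s + y * c) for (i, x, y) in self.places]

    def entities(self, radius=0.5):
        # Hypothesize an entity wherever two different interactions overlap.
        found = []
        for k, (i1, x1, y1) in enumerate(self.places):
            for i2, x2, y2 in self.places[k + 1:]:
                if i1 != i2 and math.hypot(x1 - x2, y1 - y2) < radius:
                    found.append(((x1 + x2) / 2, (y1 + y2) / 2, {i1, i2}))
        return found
```

The key design point is that nothing here is allocentric: the memory only ever encodes where interactions can be re-enacted relative to the agent's own body.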

In this run, Ernest learns the "bishop behavior" during the first 50 steps. On step 78, we introduce two targets in a row. The spatial memory shows that Ernest registers interactions with both targets at the same time, while its rudimentary attentional system allows it to focus on one target at a time.

On step 110, we introduce a "wall brick", and Ernest learns that this kind of entity affords the interaction "bumping". Subsequently, when we introduce a target, Ernest preferentially moves towards the target rather than towards the wall brick, because it has learned that targets are edible.
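The bookkeeping this implies can be sketched in a few lines (hypothetical names and valences, not the actual implementation): the agent records which interactions each kind of entity has afforded so far, and sums the valences of those interactions to decide which entity to approach.

```python
# Hypothetical sketch of affordance learning: which interactions has each
# kind of entity afforded so far, and how attractive is it as a result?
VALENCE = {"eat": 10, "bump": -10}

afforded = {}  # entity kind -> set of interactions observed on it

def observe(kind, interaction):
    afforded.setdefault(kind, set()).add(interaction)

def attractiveness(kind):
    return sum(VALENCE[i] for i in afforded.get(kind, ()))

observe("wall_brick", "bump")   # step 110: bumping into a wall brick
observe("target", "eat")        # targets turn out to be edible
# The agent now prefers the entity with the highest learned attractiveness.
best = max(afforded, key=attractiveness)   # -> "target"
```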

Ernest 12 implements ECA, the Enactive Cognitive Architecture.
(Demo implemented with Ernest r439 and Vacuum r392)

Tuesday, June 18, 2013

Enactive Robot Learning

Olivier L. Georgeon, Christian Wolf, and Simon Gay 2013. An Enactive Approach to Autonomous Agent and Robot Learning. IEEE Third Joint International Conference on Development and Learning and on Epigenetic Robotics (EPIROB2013). Osaka, Japan. August 18-22, 2013.

This paper is a short introductory version of our ECA paper. It also presents the experiment with Ernest 7 running on an e-puck robot.

Tuesday, May 14, 2013

Enactive Cognitive Architecture

Olivier L. Georgeon, James B. Marshall, and Riccardo Manzotti 2013. ECA: An enactivist cognitive architecture based on sensorimotor modeling. Biologically Inspired Cognitive Architectures, Volume 6, pp. 46-57, doi: 10.1016/j.bica.2013.05.006. Also presented at BICA2013.

This paper introduces the Enactive Markov Decision Process, a new way of modeling an agent interacting with an environment, inspired by the Theory of Enaction. It also describes Ernest's motivational principle in relation to the autotelic principle (Steels, 2004) and the optimal experience principle (Csikszentmihalyi, 1990). It introduces ECA, the Enactive Cognitive Architecture that drives Ernest 11, and it reports the Ernest 11.2 experiment.
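One ingredient of this sensorimotor modeling can be caricatured as the bottom-up learning of composite interactions: when one interaction reliably follows another, the pair is recorded as a higher-level interaction whose valence is the sum of its parts, and can later be proposed as a whole. A toy rendering (the names and the scoring rule are mine, not the paper's):

```python
# Toy sketch of hierarchical sequence learning over interactions
# (illustrative only; not the actual ECA implementation).

valence = {"step": 5, "turn": -1, "eat": 10}
weights = {}  # composite (pre, post) -> how often post followed pre

def learn(pre, post):
    """Record that interaction `post` was enacted right after `pre`."""
    weights[(pre, post)] = weights.get((pre, post), 0) + 1
    valence[(pre, post)] = valence[pre] + valence[post]  # composite valence

def propose(last_enacted):
    """Propose a learned composite that starts with `last_enacted`,
    weighted by past frequency and composite valence."""
    candidates = {c: w * valence[c] for c, w in weights.items()
                  if c[0] == last_enacted}
    return max(candidates, key=candidates.get, default=None)

learn("turn", "step")
learn("step", "eat")
print(propose("step"))  # ('step', 'eat'): re-enact the rewarding sequence
```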

Monday, February 11, 2013

Sensemaking emergence demonstration

Olivier L. Georgeon and James B. Marshall 2013. Demonstrating sensemaking emergence in artificial agents: A method and an example. International Journal of Machine Consciousness, 5(2), pp. 131-144, doi: 10.1142/S1793843013500029.

This paper addresses the sensemaking demonstration problem: the problem of demonstrating that an agent gives meaning to, or understands, its experience. We present a methodology for producing empirical evidence to support or contradict the claim that an agent is capable of a rudimentary form of sensemaking, based on an analysis of the agent's behavior.

As an example, we report an analysis of Ernest's behavior in the Small Loop Problem and we conclude that Ernest is capable of a rudimentary form of sensemaking. This paper is an extended version of our previous paper presented at BICA2012.
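To give a flavor of what such behavioral evidence can look like (a deliberately simplified sketch, not the paper's actual criteria): one indicator that can be read off an activity trace is that interactions with negative valence become rarer as the run proceeds.

```python
# Simplified sketch of trace analysis (not the paper's actual criterion):
# does the agent enact fewer negative-valence interactions over time?
VALENCE = {"step": 5, "bump": -10, "turn": -1, "touch": -1}  # assumed values

def negative_rate(trace, start, end):
    """Fraction of enacted interactions with negative valence in a window."""
    window = trace[start:end]
    return sum(1 for i in window if VALENCE[i] < 0) / len(window)

trace = ["bump", "turn", "bump", "touch", "step", "turn", "step", "step"]
early, late = negative_rate(trace, 0, 4), negative_rate(trace, 4, 8)
print(early > late)  # True here: negative interactions become rarer
```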