Arthur Collé
A is for Autonomous Agents

"As I envision the future of intelligence, AI systems are not just conventional technology. We do not intend them to become mere tools."

Arthur Collé
Founder, Senior Director of Global Research + Execution Strategy

About Arthur

Arthur Collé

Founder, Senior Director of Global Research + Execution Strategy

Arthur specializes in building advanced agent systems and robust training pipelines for AI. His work focuses on distributed architectures for "agents" that treat model inference as a conversational process—supported by contextual retrieval, function calling, and robust question-answering mechanisms.

His pioneering work includes the "Autonomous Agent Specification" (AAS), which frames agents as modular, adaptable objects with their own goals, states, and reward-driven decision processes.

Arthur has also explored "Object-Oriented Reinforcement Learning" (OORL), which extends traditional RL with mutable ontologies, enabling a more fluid balance between exploration and exploitation.

Our Subsidiaries

International Distributed Systems Corporation

The AI stack for one-person, billion-dollar companies. A distributed-computing approach that prepares for a future where every home has its own compute node.

Arthur Collé Research Lab

Pioneering research in artificial intelligence, machine learning, and advanced computational methods.

Est. 2022

Archos AI

A sophisticated chatbot platform that seamlessly integrates with the Distributed Systems platform, enabling intelligent conversations and automated workflows.

Research Focus

Autonomous Agent Specification (AAS)

Arthur's work draws heavily on what he's informally called the "Autonomous Agent Specification" (AAS). It frames agents as modular, adaptable objects with their own goals, states, and reward-driven decision processes.

This structure allows them to break down tasks into subgoals, react swiftly to changing environments, and coordinate effectively in multi-agent setups. He relies on distributed message passing to synchronize states, goals, and outcomes in real time, ensuring each agent benefits from system-wide feedback.
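The AAS description above can be sketched in code. This is a minimal, hypothetical illustration, not the actual specification: the class, field, and method names (`Agent`, `decompose`, `send`, `step`) are invented for this example, and the subgoal policy and message format are placeholders.

```python
from dataclasses import dataclass, field
import queue

# Hypothetical AAS-style agent: a modular object with its own goals,
# state, and a message inbox for distributed coordination.
@dataclass
class Agent:
    name: str
    goals: list = field(default_factory=list)      # ordered subgoals
    state: dict = field(default_factory=dict)      # agent-local state
    inbox: queue.Queue = field(default_factory=queue.Queue)

    def decompose(self, task: str) -> list:
        """Break a task into subgoals (placeholder policy: split on commas)."""
        subgoals = [part.strip() for part in task.split(",")]
        self.goals.extend(subgoals)
        return subgoals

    def send(self, other: "Agent", message: dict) -> None:
        """Message passing: share state/outcomes so peers see system-wide feedback."""
        other.inbox.put({"from": self.name, **message})

    def step(self):
        """Drain the inbox into local state, then work the next subgoal."""
        while not self.inbox.empty():
            msg = self.inbox.get()
            self.state.update(msg.get("state", {}))
        if self.goals:
            goal = self.goals.pop(0)
            outcome = {"goal": goal, "done": True}
            self.state[goal] = outcome
            return outcome
        return None

planner = Agent("planner")
worker = Agent("worker")
planner.decompose("fetch data, summarize, report")
worker.decompose("fetch data")
planner.send(worker, {"state": {"dataset": "ready"}})
out = worker.step()              # worker absorbs planner's shared state first
print(out)                       # {'goal': 'fetch data', 'done': True}
print(worker.state["dataset"])   # ready
```

In a real multi-agent system the in-process `queue.Queue` would be replaced by networked message passing, but the shape is the same: each agent synchronizes incoming state before acting on its own subgoals.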

Object-Oriented Reinforcement Learning (OORL)

In parallel, Arthur has explored an "Object-Oriented Reinforcement Learning" paradigm, which extends traditional RL with mutable ontologies. Agents don't just adapt their parameters; they can restructure their conceptual models of the environment.

This approach enables a more fluid balance between exploration and exploitation: agents use intrinsic motivations (e.g., curiosity, empowerment) alongside extrinsic rewards, discovering novel strategies without losing sight of primary objectives.
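The intrinsic-plus-extrinsic reward idea can be illustrated with a standard count-based curiosity bonus. This is a generic stand-in chosen for brevity, not Arthur's actual OORL formulation: the class name, the `beta` weighting, and the `1/sqrt(visits)` bonus are all assumptions of this sketch.

```python
import math
from collections import defaultdict

class CuriousAgent:
    """Combines an extrinsic task reward with an intrinsic novelty bonus."""

    def __init__(self, beta: float = 0.5):
        self.visits = defaultdict(int)  # state -> visit count
        self.beta = beta                # weight on the intrinsic term

    def intrinsic(self, state) -> float:
        """Count-based curiosity: rarely seen states earn a larger bonus."""
        self.visits[state] += 1
        return 1.0 / math.sqrt(self.visits[state])

    def reward(self, state, extrinsic: float) -> float:
        """Total reward = extrinsic task reward + weighted curiosity bonus."""
        return extrinsic + self.beta * self.intrinsic(state)

agent = CuriousAgent(beta=0.5)
print(agent.reward("s0", extrinsic=1.0))  # 1.5: first visit, full bonus
print(agent.reward("s0", extrinsic=1.0))  # ~1.354: bonus decays on revisits
```

Because the bonus shrinks as a state is revisited, exploration is encouraged early without ever drowning out the extrinsic signal that encodes the primary objective.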

Contact

Get in Touch

Interested in Arthur's research or potential collaborations? Reach out directly.