About Natural & Synthetic Beings (Interactive Typology)

Most debates about regulating AI get stuck on “AI personhood” or “AI rights” because the criteria applied first are usually the ones we agree on least, e.g. true intelligence, consciousness, sentience, and qualia.

But if we treat modern AI systems as dynamic systems, we should be able to compare AI objectively with other natural and synthetic systems. I built a simple interactive typology for placing natural and synthetic systems (including AI) on the same “beingness” map, using only dynamic-systems properties, not intelligence or consciousness perspectives.

I would be interested in making this more robust...where does it break? Is it useful?

Background

Before I started building the typology model, I briefly explored how we currently characterize dynamic systems (whether natural or synthetic): what are their capability classes and types? I found roughly three kinds of work:

  • ‘Kinds of minds’ taxonomies, e.g. Dennett’s creatures[1]

  • Agent and AI capability taxonomies, e.g. Russell & Norvig’s hierarchy of agents[2], and of course the well-known ANI/AGI/ASI ladders.

  • Biological terminology such as autopoiesis and homeostasis, e.g. Maturana and Varela’s account[3] of living systems as self-producing, self-maintaining organisations, which is powerful for biology.

These are all useful in their own domains, but I couldn’t find a unified typology that:

  • treats natural, artificial, collective, and hybrid systems on the same footing,

  • separates cognitive capability from beingness/organisation, and

  • is concrete enough to be used in alignment evaluations and regulatory categories.

So I hit the local library and brainstormed with ChatGPT/Gemini to build a layered “beingness” model that tries to determine what kind of dynamic entity a system is, setting aside its cognitive capabilities, consciousness, feelings, emotions, qualia, etc. I have used the simplest possible terminology, informal and easy-to-understand definitions, and illustrative examples. Feedback and critique welcome.

Beingness Categories

I separated out three beingness categories that seem to show up across both natural and synthetic systems:

Ontonic

This covers systems that maintain stability, viability, and coherence: reacting to stimuli, correcting deviations, adapting to environmental changes, and preserving functional integrity.

Mesontic

Systems here exhibit functional self-monitoring, goal-directed behaviour, contextual integration, and coherence-restoration without needing anything like consciousness, subjective experience, or a narrative self.

Anthropic

This is where I have classed systems with genuinely autobiographical identity, value-orientation, self-preservation, and continuity.

I have not attempted to classify divine, alien and paranormal entities.

The Seven Rings

Each category is composed of distinct, definable, and probably measurable/identifiable groups of systemic capabilities.

| Ring | Definition | Examples |
|---|---|---|
| Reactive | Response to stimuli | Thermostat, Amoeba |
| Adaptive | Behavioural adjustment | RL agents, insects |
| Goal-Directed | Acting toward outcomes | Wolves, chess engines |
| Context-Aware | Using situational info | Mammals, AVs, LLM context |
| Functionally Self-Reflective | Functional self-monitoring | LLM self-correction, robotics |
| Narratively Self-Reflective | Understanding of identity of self and others, purpose, values | Human-like entities |
| Autopoietic | Self-production & perpetuation | Cells, organisms |

Each ring in turn comprises several capabilities. The rings are not hierarchical or associative.
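To make the structure concrete, here is a minimal sketch of how the rings could be encoded, in Python. This is purely illustrative; the class and member names are mine, not part of the typology. Flags (rather than an ordered enum) reflect the point that the rings are not hierarchical:

```python
from dataclasses import dataclass
from enum import Flag, auto

class Ring(Flag):
    """The seven rings, modelled as independent flags so that any
    combination is representable (the rings are not hierarchical)."""
    REACTIVE = auto()
    ADAPTIVE = auto()
    GOAL_DIRECTED = auto()
    CONTEXT_AWARE = auto()
    FUNCTIONALLY_SELF_REFLECTIVE = auto()
    NARRATIVELY_SELF_REFLECTIVE = auto()
    AUTOPOIETIC = auto()

@dataclass
class Entity:
    """A system placed on the beingness map by its activated rings."""
    name: str
    rings: Ring

    def has(self, ring: Ring) -> bool:
        return bool(self.rings & ring)
```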

Examples

The visualization app has some examples of the capabilities a particular entity is known to exhibit.

| Entity | Rings Activated | Notes |
|---|---|---|
| Human | All rings | Full biological + narrative stack. |
| Cyborg | All rings | Autopoiesis from human neural tissue; hybrid body. |
| LLM Agent | Reactive → Mesontic | No Anthropic, no Autopoietic. |
| Honey Bee | Reactive → Context-Aware + Autopoietic | No Mesontic/Anthropic reflection. |
| Drone Swarm | Reactive → Context-Aware | No life, no self-reflection. |
| Robot Vacuum | Reactive → Context-Aware | Goal-directed navigation. |
| Voice Assistant | Reactive → Context-Aware | No body, no reflection. |
| Amoeba | Reactive → Goal-Directed + Autopoietic | Minimal context-awareness. |
| Coral Polyp | Reactive → Adaptive + Autopoietic | No high-level behaviour. |
| Tornado | Reactive → Adaptive | Dissipative structure; no goals, no self-model. |
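Building on the `Ring`/`Entity` sketch above, a few rows of this table could be encoded as follows. Note one assumption of mine: I read the arrow notation “Reactive → Mesontic” as activating every ring up to Functionally Self-Reflective.

```python
R = Ring  # shorthand

ALL = (R.REACTIVE | R.ADAPTIVE | R.GOAL_DIRECTED | R.CONTEXT_AWARE
       | R.FUNCTIONALLY_SELF_REFLECTIVE | R.NARRATIVELY_SELF_REFLECTIVE
       | R.AUTOPOIETIC)

human = Entity("Human", ALL)
llm_agent = Entity("LLM Agent", R.REACTIVE | R.ADAPTIVE | R.GOAL_DIRECTED
                   | R.CONTEXT_AWARE | R.FUNCTIONALLY_SELF_REFLECTIVE)
honey_bee = Entity("Honey Bee", R.REACTIVE | R.ADAPTIVE | R.GOAL_DIRECTED
                   | R.CONTEXT_AWARE | R.AUTOPOIETIC)
tornado = Entity("Tornado", R.REACTIVE | R.ADAPTIVE)

def shared_rings(a: Entity, b: Entity) -> Ring:
    """Rings both entities activate; a crude structural-overlap measure."""
    return a.rings & b.rings

# An LLM agent and a honey bee overlap on the first four rings,
# despite one being autopoietic and the other self-reflective:
print(shared_rings(llm_agent, honey_bee))
```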

Closing Remarks

This is very much a first cut of the classification and its definitions. I will be refining it with further research and feedback.

Why does this typology matter?

The proposed typology can be used to (1) design evaluations that match a system’s structural profile rather than its perceived intelligence, (2) support regulation by linking obligations to measurable properties like stability, boundary consistency, or identity persistence, and (3) compare biological, AI, swarm, and hybrid systems on the same map for safety, risk, ethics, and welfare-related scenario analysis.

In short, it provides alignment and governance discussions with a concrete, system-level vocabulary for describing what kind of entity a system is, independent of any assumptions about its inner subjective life.
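As a toy illustration of point (2), obligations could be keyed to activated rings rather than to perceived intelligence. The tiers below are hypothetical examples of mine, not proposals from the typology itself:

```python
def oversight_tier(e: Entity) -> str:
    """Map a structural profile to an illustrative oversight tier."""
    if e.has(Ring.NARRATIVELY_SELF_REFLECTIVE):
        return "identity-persistence and value-continuity audits"
    if e.has(Ring.FUNCTIONALLY_SELF_REFLECTIVE):
        return "self-monitoring and coherence-restoration evaluations"
    if e.has(Ring.GOAL_DIRECTED):
        return "goal-stability and boundary-consistency checks"
    return "basic stimulus-response testing"

print(oversight_tier(llm_agent))   # self-monitoring and coherence-restoration evaluations
print(oversight_tier(tornado))     # basic stimulus-response testing
```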

  1. Dennett, D. C. (1996). Kinds of Minds: Toward an Understanding of Consciousness. Basic Books.
  2. Russell, S. J., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
  3. Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and Cognition: The Realization of the Living. D. Reidel.