newsence

Entelgia: A Consciousness-Inspired Multi-Agent AI System with Persistent Memory

Hacker News · about 1 month ago

Entelgia is a psychologically-inspired, multi-agent AI architecture designed to explore persistent identity, emotional regulation, and moral self-regulation through dialogue, functioning as a research prototype with a shared persistent memory database.


GitHub - sivanhavkin/Entelgia: Unified AI core for persistent agents, internal conflict, and moral self-regulation through dialogue.



Entelgia

Entelgia is a psychologically-inspired, multi-agent AI architecture designed to explore persistent identity, emotional regulation, internal conflict, and moral self-regulation through dialogue.

This repository presents Entelgia not as a chatbot, but as a consciousness-inspired system — one that remembers, reflects, struggles, and evolves over time.

Overview

What Happens When You Run It

When you run the system, two primary agents engage in an ongoing dialogue driven by a shared persistent memory database.

At this stage, the system functions as a research prototype focused on persistent dialogue and internal coherence, rather than as a fully autonomous cognitive simulation.
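The run loop described above can be sketched as two agents that read from and append to one shared persistent store. This is a minimal illustration, not code from the repository: the names (`Agent`, `MemoryStore`) and the use of SQLite are assumptions, and the language-model call is stubbed out.

```python
# Minimal sketch of the run loop: two agents alternate turns, each
# reading from and appending to a single shared persistent database.
import sqlite3

class MemoryStore:
    """Shared persistent memory: one table of dialogue turns."""
    def __init__(self, path=":memory:"):  # a file path would persist across runs
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS turns (agent TEXT, text TEXT)")

    def append(self, agent, text):
        self.db.execute("INSERT INTO turns VALUES (?, ?)", (agent, text))
        self.db.commit()

    def history(self):
        return self.db.execute("SELECT agent, text FROM turns").fetchall()

class Agent:
    def __init__(self, name, memory):
        self.name, self.memory = name, memory

    def respond(self):
        # Placeholder for a language-model call: here the agent simply
        # reflects on how many turns it can recall from shared memory.
        n = len(self.memory.history())
        reply = f"{self.name} recalls {n} prior turns."
        self.memory.append(self.name, reply)
        return reply

memory = MemoryStore()                 # one database shared by both agents
a, b = Agent("A", memory), Agent("B", memory)
for _ in range(2):                     # a short two-round dialogue
    a.respond()
    b.respond()
print(len(memory.history()))           # 4 turns persisted
```

Because both agents write to the same store, each turn is conditioned on the full shared history, which is what gives the dialogue continuity across turns.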

The Agents

What This Is

A research-oriented architecture inspired by psychology, philosophy, and cognitive science

A system modeling identity continuity rather than stateless interaction

A platform for experimenting with:

What This Is NOT

Core Philosophy

Entelgia is built on a central premise:

True regulation emerges from internal conflict and reflection, not from external constraints.

Instead of relying on hard-coded safety barriers, the system emphasizes:

Consciousness is treated as a process, not a binary state.
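One way to picture regulation emerging from internal conflict rather than external constraints: a draft response is accepted only after an internal critic raises no objection, and objections drive revision. The critic rule and helper names below are toy stand-ins of my own, not Entelgia's mechanism.

```python
# Hedged illustration of regulation via internal conflict: a draft is
# revised until an internal critic stops objecting (or rounds run out),
# instead of being filtered by a hard-coded external barrier.
def critic(draft):
    """Return an objection, or None if the draft passes reflection."""
    if "always" in draft or "never" in draft:
        return "overconfident absolute claim"
    return None

def regulate(draft, revise, max_rounds=3):
    for _ in range(max_rounds):
        objection = critic(draft)
        if objection is None:
            return draft
        draft = revise(draft, objection)   # internal conflict drives revision
    return draft

def hedge(draft, objection):
    # Toy revision step: soften absolute language.
    return draft.replace("always", "often").replace("never", "rarely")

print(regulate("This always works", hedge))  # -> "This often works"
```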

Architecture – CoreMind

Entelgia is organized around six interacting cores:

Conscious Core

Memory Core

Single shared persistent database (no short-term / long-term separation yet)

Memory continuity across agent turns

Architecture prepared for future memory stratification:

Short-term and long-term memory

Unified conscious and unconscious storage

Memory promotion through error, emotion, and reflection
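The planned promotion mechanism could look something like the following sketch: each entry carries a salience score raised by error and emotion signals, and entries that cross a threshold during reflection move into long-term storage. The field names, threshold, and scoring rule are all illustrative assumptions.

```python
# Sketch of planned memory promotion: salience accumulates from emotion
# and error signals; reflection promotes salient items to long-term memory.
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    text: str
    salience: float = 0.0

@dataclass
class MemoryCore:
    short_term: list = field(default_factory=list)
    long_term: list = field(default_factory=list)
    threshold: float = 1.0       # assumed promotion threshold

    def record(self, text, emotion=0.0, error=0.0):
        self.short_term.append(MemoryItem(text, emotion + error))

    def reflect(self):
        """Promote salient items to long-term; drop them from short-term."""
        for item in self.short_term:
            if item.salience >= self.threshold:
                self.long_term.append(item)
        self.short_term = [i for i in self.short_term
                           if i.salience < self.threshold]

core = MemoryCore()
core.record("routine greeting")
core.record("contradicted my earlier claim", emotion=0.6, error=0.7)
core.reflect()
print([i.text for i in core.long_term])  # only the salient contradiction survives
```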

Emotion Core

Language Core

Behavior Core

Observer Core (Fixy)

Defined as an architectural role

Currently inactive / partially implemented

Planned to act as a meta-cognitive monitor in future versions, providing:

Meta-level monitoring

Detection of loops and instability

Corrective intervention
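As a rough sketch of what loop detection might mean here: the observer watches a sliding window of recent utterances and intervenes when one repeats nearly verbatim. The window size, repeat threshold, and intervention are illustrative assumptions, not Fixy's actual design.

```python
# Hedged sketch of an observer that flags conversational loops: it keeps
# a sliding window of recent utterances and intervenes on repetition.
from collections import deque

class Observer:
    def __init__(self, window=6, max_repeats=3):
        self.recent = deque(maxlen=window)
        self.max_repeats = max_repeats

    def watch(self, utterance):
        """Return an intervention message if a loop is detected, else None."""
        self.recent.append(utterance.strip().lower())
        if self.recent.count(self.recent[-1]) >= self.max_repeats:
            self.recent.clear()          # corrective intervention: reset state
            return "loop detected: inject a new topic"
        return None

fixy = Observer()
signals = [fixy.watch(m) for m in
           ["hello", "I agree", "I agree", "I agree"]]
print(signals[-1])   # -> "loop detected: inject a new topic"
```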

Ethics Model

Entelgia explores ethical behavior through dialogue-based internal tension, not enforced safety constraints.

At present, these components are part of the system's conceptual roadmap rather than fully implemented modules.

Who This Is For

Researchers exploring early-stage consciousness-inspired AI architectures

Developers interested in persistent multi-agent dialogue systems

Philosophers and psychologists examining computational models of self and conflict

Contributors who want to help evolve experimental AI systems

Anyone curious about AI systems that do more than respond

Requirements

Run

Project Status

Entelgia is an actively evolving research prototype.

Current limitations (such as the single unstratified memory store and the inactive Observer Core) are explicit and intentional at this stage of development.

License

This project is released under the Entelgia License (Ethical MIT Variant with Attribution Clause).

It is open for study, experimentation, and ethical derivative work.

The original creator does not endorse or take responsibility for uses that contradict the ethical intent of the system or cause harm to living beings.

Author

Sivan Havkin
Entelgia Labs
