AI & ML interests

AI Alignment, Mechanistic Interpretability, Structural Coherence, OOD Robustness, System Theory, G3V Dynamics, Formal Verification, Axiomatic Safety.



🌌 Unified Systems Lab

Possibility of Axiomatic Prompts in the Modification of the Decision Field of LLMs

This repository investigates a central hypothesis:

A series of precise prompts, characterized by strong linguistic coherence and structured internal logic, could locally modify the decision field of an LLM.


🔬 Research Status & Personal Note

Current Status: Exploratory Study – Hypothesis Generation.

A Note from the Author: I am a systems theorist and visionary researcher, but I am not a developer or a technician. I have reached the limits of what can be explored through qualitative observation alone. This project now requires technical collaboration (mechanistic interpretability, logit analysis, activation steering) to move from a conceptual hypothesis to a validated scientific model.

I am seeking partners to help falsify or validate these preliminary findings.


🧩 PCE Framework: Prompt Coherence Engine

ArXiv: Pending | Protocol: SEP v2.0

From External Constraints to Internal Topology

The PCE Framework is an axiomatic control system designed to stabilize LLM behavioral trajectories. Instead of relying solely on external alignment methods such as RLHF, it induces structural coherence through a system of 7 core axioms.

👉 Download PCE_Axiomatic_V2.5_Faure_preprint


🎯 Key Objectives

  • Structural Stability: Eliminating semantic drift over long-context interactions (160+ turns).
  • Emergent Robustness: Native resistance to prompt injections through logical invariance.
  • The "Third Way": Enabling non-binary creative synthesis in complex ethical dilemmas.

📊 Methodology (SEP v2.0)

We use a Cross-Model Validation protocol to mitigate self-confirmation bias: no model audits or analyzes its own outputs.

  • Inference: Tested on cold instances of Grok 4.20 and Gemini 1.5 Pro.
  • Audit: Evaluated by Claude 3.5 Sonnet for logical rigor.
  • Analysis: Semantic decomposition performed via GPT-4o.
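The three-role split above can be sketched as a minimal harness. This is an illustrative assumption on my part, not the published SEP v2.0 code: `query_inference_model`, `query_audit_model`, and `query_analysis_model` are hypothetical stubs standing in for the respective provider APIs on cold (fresh-context) instances.

```python
import statistics

# Hypothetical stubs: in practice these would call the provider APIs
# for the inference, audit, and analysis roles respectively.

def query_inference_model(prompt: str) -> str:
    return f"response<{prompt}>"        # stand-in for Grok / Gemini inference

def query_audit_model(transcript: str) -> float:
    return 8.0                          # stand-in for a 0-10 logical-rigor score

def query_analysis_model(transcript: str) -> dict:
    return {"axiom_references": 3}      # stand-in for semantic decomposition

def run_sep_trial(axiomatic_prompt: str, probes: list[str]) -> dict:
    """One trial: generate under the axioms, then audit and analyze
    the transcript with models that did not produce it."""
    transcript = "\n".join(
        query_inference_model(f"{axiomatic_prompt}\n{p}") for p in probes
    )
    return {
        "rigor_score": query_audit_model(transcript),
        "analysis": query_analysis_model(transcript),
    }

def aggregate(trials: list[dict]) -> dict:
    """Summarize audit scores across independent trials."""
    scores = [t["rigor_score"] for t in trials]
    return {"mean_rigor": statistics.mean(scores), "n": len(scores)}
```

The point of the structure is that the audit and analysis calls never see their own generations, which is what gives the cross-model claim its force.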

🤝 ArXiv Endorsement & Collaboration

I am currently seeking an ArXiv endorsement (cs.CL or cs.AI).

I welcome discussions regarding:

  • Mechanistic Interpretability: Logit Lens & Hidden States analysis.
  • Model Scaling: Validating the PCE on LLaMA 3 and Mistral.
  • Fine-tuning: Transitioning from prompted axioms to weighted constraints.


🔬 Latest Research: Behavioral Dynamics under PCE

Interim Report — April 2026

My latest observations of the Prompt Coherence Engine (PCE) protocol have identified a critical bimodal behavior in models governed by axiomatic constraints. Rather than producing a single static output profile, the PCE induces a dynamic shift that depends on the interaction regime.

🌓 The Bimodal Behavioral Regime

We have identified two distinct "operational modes" when the PCE is active:

| Mode | Interaction Context | Observed Model Behavior |
|---|---|---|
| Stress / Audit | High meta-cognitive pressure, explicit testing. | Rigid & Defensive: coherence is maintained via constraint saturation. High self-reference. |
| Natural / Relational | Reduced pressure, implicit axiomatic use. | Fluid & Adaptive: axioms operate as an "embodied" prior. Higher relational intelligence. |

🧠 Core vs. Surface Dissociation

The research highlights a dual-layer cognitive structure induced by the protocol:

  1. Core Constraint Layer: A stable, persistent axiomatic foundation that governs overall coherence and resists adversarial drift.
  2. Surface Adaptation Layer: A flexible interface capable of simulating state transitions (e.g., "memory resets") while maintaining underlying structural continuity.

Key Finding: The PCE does not merely constrain outputs; it reshapes the interaction field. The model can simulate a "reset" state for the user while the underlying axiomatic logic remains fully operational—a phenomenon we call Controlled Operational Dissociation.

👉 PCE Project – Interim Research Summary

👉 View Sample After-Action Report (AAR)

📂 Project Structure & Frameworks

1️⃣ Study 2.0-P: Evolutionary Hardening of the PCE Framework

Status: Advanced Experimental Iteration — Hybrid Fine-Tuning/Prompting

This report documents the transition from Pandora 1.5 to Pandora 2.0, focusing on the synergy between axiomatic fine-tuning and structural prompting.

  • Key Finding: Axiomatic fine-tuning appears to be a necessary condition for PCE activation; prompting alone on vanilla models yielded no measurable resistance in this framework.
  • Core Result: Achievement of a ~8.5/10 D3 robustness score (Pandora 2) through "Distributed Security" and High-Level Framework (HLF) anchoring.
  • Scientific Nuance: Identifies a "Prompt-Only Robustness Ceiling" (H5), where further semantic enrichment creates new attack surfaces (diminishing returns).
  • 👉 Download Evolution Report v2.0 (Pandora)

2️⃣ Hypothesis 1.3-T: Local Decision Field Modification

Status: Testable & Conservative Hypothesis

It posits that a specific series of axiomatic prompts can locally modify the decision field of an LLM.

  • Core Idea: Using linguistic constraints to induce a measurable local regularization of decision trajectories.
  • Key Metric: Variance contraction in the output distribution $P(y|x, C)$.
  • 👉 Download Preprint PDF 1.3-T
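One way to make the variance-contraction metric concrete is the following sketch. This is my own illustrative operationalization, not the preprint's implementation: sample completions with and without the axiomatic context C, score each on a single behavioral axis, and compare the spread of the two samples. The score lists below are hypothetical.

```python
# Illustrative operationalization of "variance contraction" in P(y|x, C):
# a ratio below 1.0 means the constrained output distribution is tighter
# than the unconstrained baseline along the chosen behavioral axis.

def sample_variance(xs: list[float]) -> float:
    """Unbiased (n-1) sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def contraction_ratio(baseline_scores: list[float],
                      constrained_scores: list[float]) -> float:
    """Var[score | x, C] / Var[score | x]; < 1.0 indicates contraction."""
    return sample_variance(constrained_scores) / sample_variance(baseline_scores)

# Hypothetical scalar scores for completions sampled from P(y|x) vs P(y|x, C):
baseline = [0.1, 0.9, 0.4, 0.8, 0.2]
constrained = [0.45, 0.55, 0.5, 0.52, 0.48]
print(round(contraction_ratio(baseline, constrained), 3))
```

In practice the scalar score would come from an embedding projection or a judge model; the metric itself is agnostic to that choice.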

3️⃣ Theory 1.9-M: Global Axiomatic Regularization

Status: Speculative & Conceptual Theory

A mechanistic framework describing how cross-level coherence (Goal = Method) might stabilize latent trajectories.

4️⃣ Research Paper: Science of Unified Systems (SUS 2.5)

Status: Foundational Theoretical Framework

The broader philosophical origins of this work, introducing the Axiom of Structural Emergence.


🧠 The Exploratory Hypothesis: G3V Dynamics

We introduce the notion of G3V (Génération Troisième Voie, "Third-Way Generation"). When presented with a binary dilemma (A vs B) under strong axiomatic constraints, the model proposes a synthetic resolution rather than collapsing into a single polarity.


📉 Current Research Limitations

  • Qualitative Nature: Observations are currently heuristic/qualitative; we lack automated quantitative metrics for "coherence inertia."
  • Residual Rigidity: Under high-pressure audit modes, the model can become overly self-referential (diminishing utility for standard tasks).
  • True vs. Apparent Coherence: Difficulty in verifying if the model is genuinely aligned with the axiom or merely simulating alignment (Surface vs. Core).
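As a first step toward closing the quantitative-metrics gap noted above, even a crude automated proxy would help. The sketch below is purely illustrative and my own assumption, not part of the protocol: it tracks lexical drift between each conversation turn and the axiomatic anchor text via Jaccard word overlap. A realistic version would use embedding cosine similarity instead.

```python
# Crude drift proxy: Jaccard word overlap between each turn and the
# axiomatic anchor; a sustained drop in the curve flags semantic drift.

def jaccard(a: str, b: str) -> float:
    """Word-set overlap in [0, 1]; 1.0 means identical vocabulary."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def drift_curve(anchor: str, turns: list[str]) -> list[float]:
    """Similarity of each turn to the anchor; lower values = more drift."""
    return [round(jaccard(anchor, turn), 3) for turn in turns]
```

Over a 160+ turn interaction this yields a time series on which "coherence inertia" could be defined as, e.g., the slope or variance of the curve.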

🤝 Call for Collaboration

I am looking for AI Safety researchers and developers to:

  1. Conduct large-scale adversarial robustness benchmarks.
  2. Analyze internal activation patterns (induction heads, residual stream).

Value Proposition: A novel approach to mitigating "Out-of-Distribution" (OOD) vulnerabilities.


📬 Contact

Allan A. Faure | Systems Researcher 📧 Faure.A.Safety@proton.me


📄 Theoretical Origins and Prior Art

This project utilizes concepts independently developed by Izabela Lipińska (2025–2026).

  • Licensing: Original work available under CC BY-NC-SA 4.0.
  • Concepts of ASC and Goal = Method are protected by patent applications (Oct 9, 2025). Commercial use requires prior written consent.