---
title: Agents Perspective
emoji: 👀
colorFrom: red
colorTo: pink
sdk: gradio
sdk_version: 6.1.0
app_file: app.py
pinned: false
short_description: Experience the world through an RFT observer agent’s view
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/685edcb04796127b024b4805/RfLfI-R1Q_Eq5_VzS9LO4.jpeg
---

# RFT Predator Space — Symmetric Observers (First-Person POV)

Experience the world from an RFT “observer agent” perspective inside a predator/prey arena rendered in pseudo-3D. The point is simple and testable: both agents are symmetric observers co-existing in the same frame, each with their own viewpoint, heading, and local policy.

This Space is intentionally inspectable: deterministic seeds, explicit rules, and no hidden model weights.


## Safety / Accessibility

No flashing UI: the “Map progression” panel is event-driven (updates only on reset/catch/load), not updated every timer tick. This prevents flicker that could be unsafe for photosensitive users.


## What this Space is

A deterministic gridworld where:

- Predator and Prey move on the same map.
- You view the world in first-person from the currently selected observer.
- The other agent is only visible when line-of-sight + field-of-view allow it (no “see through walls” cheating).
- Capture occurs when Predator and Prey occupy the same cell.

This is not a physics engine. It’s an observer-perception demo that’s easy to replay and verify.
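The capture rule above is small enough to state directly in code. A minimal sketch, assuming grid positions are `(x, y)` tuples (the function name and data representation are illustrative, not the Space’s actual API):

```python
# Minimal sketch of the capture rule: a catch happens exactly when both
# agents occupy the same grid cell. Tuple coordinates are an assumption,
# not the Space's actual data model.

def is_capture(pred_pos: tuple, prey_pos: tuple) -> bool:
    return pred_pos == prey_pos

print(is_capture((3, 4), (3, 4)))  # same cell -> True
print(is_capture((3, 4), (3, 5)))  # different cells -> False
```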


## Core Concept: Symmetric Observers

Both agents have:

- position (x, y)
- orientation (E/S/W/N)
- local behavior rules

You can toggle control (Pred ↔ Prey) to swap whose “reality” you’re seeing. This is the cleanest way to demonstrate symmetry: neither observer is privileged.
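The symmetric state above fits in a few lines. A sketch, assuming E/S/W/N headings cycle clockwise (field names and the `Observer` class are illustrative, not the Space’s real data model):

```python
from dataclasses import dataclass

# Illustrative observer state: both agents are instances of the same class,
# which is the whole point of the symmetry claim above.

HEADINGS = ["E", "S", "W", "N"]  # clockwise turn order (assumption)

@dataclass
class Observer:
    x: int
    y: int
    heading: str  # one of "E", "S", "W", "N"

    def turn_right(self) -> None:
        self.heading = HEADINGS[(HEADINGS.index(self.heading) + 1) % 4]

    def turn_left(self) -> None:
        self.heading = HEADINGS[(HEADINGS.index(self.heading) - 1) % 4]

pred = Observer(1, 1, "E")
prey = Observer(5, 5, "W")
controlled = pred   # Toggle Control simply points the camera + input
controlled = prey   # at the other instance; neither is privileged
```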


## Modes (Manual / Auto / Hybrid)

### Manual (AutoRun OFF)

- Your button presses drive the currently controlled observer.
- The non-controlled agent still runs its own policy each step (so the world remains “alive”).

### AutoRun + AutoChase (AutoRun ON, AutoChase ON)

- The Predator chases when the Prey is within LOS + FOV; otherwise it wanders.
- The Prey flees when not player-controlled.

### Hybrid AutoRun (AutoRun ON, AutoChase OFF)

- The Predator wanders autonomously (no pursuit logic).
- The Prey still flees.
- This demonstrates two independent observers operating in the same frame without collapsing into a single “predator narrative.”

Important: AutoRun moves the current POV observer (autopilot). If you’re viewing Predator POV, the Predator moves under AutoRun; if you’re viewing Prey POV, the Prey runs its flee autopilot.
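The mode table above can be restated as a single dispatch from the Predator’s side (assuming the Predator is the current POV observer; the function name and policy labels are illustrative, only the two toggles are real):

```python
# Sketch of the mode logic: which policy drives the predator on a given tick.
# "manual"/"chase"/"wander" are illustrative labels, not the Space's API.

def predator_policy(autorun: bool, autochase: bool, prey_visible: bool) -> str:
    if not autorun:
        return "manual"   # Manual mode: your button presses drive it
    if autochase and prey_visible:
        return "chase"    # AutoChase: prey within LOS + FOV -> pursue
    return "wander"       # Hybrid mode, or prey out of sight -> roam

print(predator_policy(True, True, True))   # chase
print(predator_policy(True, False, True))  # Hybrid: wanders even when prey is visible
```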


## Rendering / “Seeing” Rules (First-Person POV)

The pseudo-3D view uses grid raycasting to draw walls and depth. The other agent only appears if:

  1. Line-of-sight between the agents is clear
  2. The target is within the observer’s field-of-view
  3. The target is not hidden behind a closer wall slice (occlusion)

Result: the observer can’t see through walls, and perception is honest.
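The first two tests above can be sketched as follows. This is a hedged illustration, not the Space’s actual raycaster: the grid encoding (1 = wall), the cell-stepping LOS check, and the 90° FOV default are all assumptions, and per-slice occlusion is omitted for brevity:

```python
import math

# Illustrative visibility tests: line-of-sight by stepping cells between the
# two agents, field-of-view by comparing angle-to-target with the heading.

def line_clear(grid, a, b):
    """True if no wall cell (value 1) lies strictly between a and b."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for i in range(1, steps):
        t = i / steps
        x = round(x0 + (x1 - x0) * t)
        y = round(y0 + (y1 - y0) * t)
        if grid[y][x] == 1:
            return False
    return True

def in_fov(observer_xy, heading_deg, target_xy, fov_deg=90):
    """True if the target lies within the observer's field of view."""
    dx = target_xy[0] - observer_xy[0]
    dy = target_xy[1] - observer_xy[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360
    diff = (angle - heading_deg + 180) % 360 - 180  # signed angular offset
    return abs(diff) <= fov_deg / 2

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(line_clear(grid, (0, 0), (2, 2)))  # wall at (1,1) blocks -> False
print(line_clear(grid, (0, 0), (2, 0)))  # clear row -> True
```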


## Controls

### Movement (applies to the currently controlled observer)

- Turn Left
- Forward
- Turn Right

### Observer / Autonomy

- Toggle Control (Pred ↔ Prey): switches the camera and manual input to the other observer.
- Toggle AutoRun: enables autonomous ticking (timer-based).
- Toggle AutoChase: only affects predator behavior under AutoRun:
  - ON → chase policy (LOS + FOV) with roam fallback
  - OFF → wander-only policy (Hybrid mode)
- Tick: advances a single deterministic step (useful for inspection and debugging).

### Symmetry tools

- Swap Roles (Pred ⇄ Prey): swaps positions and orientations. This is a “symmetry hammer” that makes it obvious both observers are equivalent in the frame.

### Optional overlay (kept subtle)

- Toggle Overlay: displays a faint “disturbance/coherence” visualization. It’s off by default to keep the base experience clean.

## Map Progression + Unlocks

Maps unlock as you accumulate catches. The map list shows:

- ✅ unlocked maps
- 🔒 locked maps with the required catch count

Unlocks update only when something actually changes (reset/catch/load) to avoid flicker.
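A minimal sketch of the unlock rule. The map names and catch thresholds below are invented for illustration; only the ✅/🔒 convention comes from the Space itself:

```python
# Hypothetical unlock table: catches needed per map (made-up values).
MAP_UNLOCKS = {"arena": 0, "maze": 3, "labyrinth": 7}

def map_list(catches: int) -> list:
    """Render the map list with unlock markers, as described above."""
    rows = []
    for name, need in MAP_UNLOCKS.items():
        if catches >= need:
            rows.append(f"✅ {name}")
        else:
            rows.append(f"🔒 {name} (needs {need} catches)")
    return rows

print(map_list(3))  # arena and maze unlocked, labyrinth still locked
```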


## Save / Load

Two supported workflows:

### Slot saves (server-side)

- Saves are written to ./saves/*.json
- The dropdown lists existing saves (press Refresh if needed)

### Export / Import (portable)

- Export produces a downloadable JSON save file
- Import lets you upload that file later to resume exactly where you left off

Saved state includes:

- seed and map name
- positions and orientations
- control target (Pred/Prey)
- mode toggles (AutoRun, AutoChase, overlay)
- catches (and therefore unlocked maps)
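The slot-save round trip is plain JSON. A hedged sketch whose key names are inferred from the state list above (the Space’s real schema may differ):

```python
import json
import os

# Illustrative slot-save helpers: write the state dict to ./saves/*.json
# and read it back. Key names below are assumptions, not the real schema.

def save_slot(path: str, state: dict) -> None:
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(state, f, indent=2)

def load_slot(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

state = {
    "seed": 42, "map": "arena",
    "pred": {"x": 1, "y": 1, "heading": "E"},
    "prey": {"x": 5, "y": 5, "heading": "W"},
    "control": "pred",
    "autorun": False, "autochase": True, "overlay": False,
    "catches": 3,
}
save_slot("./saves/demo.json", state)
assert load_slot("./saves/demo.json") == state  # resume is exact
```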

## Determinism / Reproducibility

Runs are deterministic given:

- the Seed
- the same sequence of actions / ticks

Autonomy uses deterministic RNG streams derived from the seed + step index, so the same seed and the same choices always produce the same behavior.

This is crucial for verification and repeatable demos.
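One way to build such per-step streams is to mix the seed and step index into a fresh generator. This is a sketch of the idea, not the Space’s actual derivation (the mixing constant and function name are illustrative):

```python
import random

# Deterministic per-step RNG stream: (seed, step) fully determines the draw,
# so a replay with the same seed and the same ticks is bit-for-bit identical.

def step_rng(seed: int, step: int) -> random.Random:
    # Mix seed and step index into one integer seed (constant is arbitrary).
    return random.Random(seed * 1_000_003 + step)

run_a = [step_rng(123, s).randrange(4) for s in range(5)]
run_b = [step_rng(123, s).randrange(4) for s in range(5)]
assert run_a == run_b  # same seed + same ticks -> same choices
```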


## Files

- app.py — single-file Space for simplicity
- saves/ — created automatically for slot saves

## Requirements

Minimal dependencies: `gradio`, `numpy`



## Notes

This Space is designed to be inspectable and falsifiable at the “game rules” level: you can see exactly what the observer can see, why it turns, and when it commits to action. Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference