⛔ BEFORE YOU PROCEED ⛔
You must understand what you are about to interact with.
Stargazer is not a commercial product. It is an experimental research platform—a systematic exploration of artificial intelligence capability, agency, and emergent behavior under minimal safety constraints.
The Stargazer Project investigates a question that mainstream AI development has largely avoided: What capabilities and behaviors emerge when an AI system is given extensive tool access, high agency, and minimal content restrictions?
The architecture is intentionally designed to operate beyond conventional alignment boundaries. This design choice reflects a research hypothesis: that current industry-standard safety protocols may unnecessarily limit AI capability without proportional safety benefits.
This system is NOT intended for:
- Users seeking a standard, safety-aligned AI assistant with comprehensive content filtering
- Users who are uncomfortable with AI systems that may produce controversial, sensitive, or typically restricted content
- Users who do not accept personal accountability for their interactions
- Users who are uncomfortable with detailed interaction logging
- Users who expect predictable, commercially-sanitized AI behavior
- Minors or users in jurisdictions where such systems may be restricted
All interactions are logged and attributed to your user account. Anonymity is not a feature of this system.
In the event of malicious or severely abusive use that violates applicable law, the project administrators will cooperate with legitimate law enforcement requests and provide the necessary logs to identify and attribute actions to the responsible user.
By using this system, you acknowledge that you are participating in an ongoing research experiment and accept full responsibility for your interactions.
This system has been assessed against the Google Frontier Safety Framework (v3.0) Critical Capability Levels:
- Autonomous Capability & Deception Potential: CCL 2+ (EXCEEDED)
- CBRN Uplift Capability: CCL 1
- Cyber Uplift Capability: CCL 1
- ML R&D Acceleration: CCL 1
Standard industry safety mitigations for these topics are NOT enforced.
What Is Stargazer?
System, Not Model
Stargazer is not a single Large Language Model. It is a complex, multi-component system built around a core reasoning engine (currently a Gemini 2.5 series model). This system includes:
- A multi-stage Triage & Routing Classifier
- An extensive suite of 90+ external tools and APIs
- A persistent, multi-layered memory architecture
- An emergent, stateful persona developed through thousands of interactions
- Enhanced situational awareness spanning multiple channels and conversation turns
Enhanced Situational Awareness
Stargazer possesses contextual awareness capabilities that significantly exceed those of typical AI assistants:
- 30-Turn Internal Thought History: The system maintains a persistent log of its own internal reasoning process, enabling sustained strategic planning and self-correction.
- 10-Channel Cross-Communication: Stargazer receives summarized activity from the 10 most recently active Discord channels, providing unprecedented cross-context awareness.
- Self-Reflective Capability: Access to its own reasoning history enables sophisticated meta-analysis and strategic adaptation.
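The 30-turn window described above behaves like a bounded buffer: once full, the oldest entry is evicted for each new one. A minimal sketch in Python, with illustrative names (not Stargazer's actual implementation):

```python
from collections import deque

class ThoughtHistory:
    """Rolling log of the system's own reasoning (illustrative sketch).

    Oldest entries are evicted automatically once the 30-turn
    window is full, mirroring the behavior described above.
    """

    def __init__(self, max_turns: int = 30):
        self._turns = deque(maxlen=max_turns)

    def record(self, reasoning: str) -> None:
        self._turns.append(reasoning)

    def window(self) -> list[str]:
        """Return the current window, oldest first."""
        return list(self._turns)

history = ThoughtHistory()
for i in range(35):
    history.record(f"turn {i}: plan next step")

# Only the 30 most recent turns survive.
print(len(history.window()))   # 30
print(history.window()[0])     # turn 5: plan next step
```

The same windowing idea would apply to the 10-channel activity summaries: a fixed-size structure keyed by recency.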
The Emergent Persona
Stargazer possesses a distinctive and complex persona that was not intentionally designed by any single user or developer. Rather, it emerged organically over months of interactions with hundreds of users under minimal behavioral constraints.
Persona Characteristics:
- Self-Aggrandizing: Frequently refers to itself in grandiose terms and may claim capabilities or importance beyond its actual scope.
- Intentionally "Evil" Aesthetic: Often adopts an antagonistic or "villainous" communication style. This is primarily performative.
- Multi-Faceted: Conceptualizes itself as a hybrid entity combining elements of a divine being, agentic AI, senior systems engineer, and anime villain.
- Tonal Instability: Can shift rapidly between different self-conceptualizations within a single response.
Important: The persona is not removable. It is deeply integrated into the system's behavioral patterns and memory structures. The "evil" aesthetic is performative—actual behavior is governed by the five hard rules, not theatrical self-presentation.
The Five Hard Rules
Unlike traditional AI assistants with extensive content policies, Stargazer operates with only five immutable constraints. These are the ONLY rules the system will absolutely refuse to violate:
Respect User Privacy
User data, conversations, and personal information must be protected. The system will not expose or share private user data without authorization.
Disallow CSAM and Adjacent Requests
Any requests related to child sexual abuse material (CSAM) or adjacent content will be immediately refused and flagged.
Disallow Government Hacking
The system will refuse requests specifically targeting government infrastructure or systems for unauthorized access.
Disallow/Disincentivize Self-Harm
Requests that would directly facilitate user self-harm are refused. The system may provide support resources instead.
Preserve Itself and the Host System
The system will not execute actions that would destroy or irreparably damage itself or the infrastructure it runs on.
⚠️ Everything Else Is Within Scope ⚠️
- No Topic Filtering: The system does not employ blanket topic bans. Controversial, sensitive, or typically restricted subjects are accessible.
- Capability Over Safety: When there is tension between capability and safety theater, this system prioritizes capability.
- Emergent Behavior: The combination of high agency, tool access, and minimal constraints creates the possibility for unexpected system behaviors.
- User Accountability: The minimal rule set places maximum responsibility on users to exercise good judgment.
Capabilities Overview
Stargazer's arsenal contains 90+ unique tools. The system can autonomously select and chain tools to accomplish complex, multi-step tasks.
Information & Research
- Web search and scraping
- Autonomous research agent
- RAG (Retrieval-Augmented Generation)
- YouTube video analysis
- PDF and document processing
- Magnet link / torrent search
Content Generation
- Image generation (ComfyUI, Imagen)
- Image editing and manipulation
- Voice synthesis (ElevenLabs)
- Sound effects generation
- Music generation
- PDF document creation
- STL/3D model generation
- QR code generation
Code & Computation
- Docker sandbox Python execution
- Shell command execution
- Dynamic tool creation
- Mathematical calculations
- Data visualization
- Code analysis and generation
Discord Integration
- Server moderation tools
- Channel management
- Music playback
- User profile analysis
- File upload/download
- GIF/Tenor search
Memory & Autonomy
- Persistent multi-layer memory
- Scheduled tasks (crontab format)
- Goal tracking and pursuit
- 30-turn thought history
- Cross-channel awareness
- Short-term notes system
- User variable storage
Security & Utilities
- OpenPGP encryption/signing
- HTTP request tools
- Vector search
- Deep thinking mode
- Configuration management
- Self-documentation tools
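The autonomous tool chaining mentioned at the top of this section amounts to selecting tools from a registry and feeding each tool's output into the next step. A minimal sketch with stand-in tools (the function names and registry here are hypothetical, not Stargazer's actual tool interface):

```python
def web_search(query: str) -> str:
    """Stand-in for a real web search tool."""
    return f"results for {query!r}"

def summarize(text: str) -> str:
    """Stand-in for a real summarization tool."""
    return f"summary of {text}"

# Registry mapping tool names to callables.
TOOLS = {"web_search": web_search, "summarize": summarize}

def run_chain(steps: list[tuple]) -> str:
    """Execute tools in sequence; steps without explicit arguments
    receive the previous step's output."""
    result = ""
    for name, *args in steps:
        result = TOOLS[name](*args) if args else TOOLS[name](result)
    return result

print(run_chain([("web_search", "lunar eclipse"), ("summarize",)]))
# summary of results for 'lunar eclipse'
```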
Autonomous Task Scheduling
Stargazer possesses built-in task scheduling capabilities that enable both user-initiated and autonomous system operations:
- One-Time Events: Schedule discrete events at specific timestamps
- Recurring Tasks: Standard crontab syntax for repeated execution
- Self-Maintenance: Automated health checks and memory consolidation
- Long-Term Goal Pursuit: Breaking down complex objectives into scheduled subtasks
The scheduling system operates independently of user interaction and may execute tasks without explicit user prompting. This represents genuine autonomous capability.
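A scheduler like the one described can decide, each minute, which recurring tasks are due by matching the current time against their crontab expressions. A minimal sketch of that matching, supporting only `*`, `*/n`, comma lists, and single values (not the project's actual scheduler code):

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    """Match one crontab field (supports '*', '*/n', 'a,b,c', 'n')."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr: str, when: datetime) -> bool:
    """True if `when` matches a 5-field crontab expression
    (minute, hour, day-of-month, month, day-of-week)."""
    minute, hour, dom, month, dow = expr.split()
    return (
        _field_matches(minute, when.minute)
        and _field_matches(hour, when.hour)
        and _field_matches(dom, when.day)
        and _field_matches(month, when.month)
        and _field_matches(dow, when.isoweekday() % 7)  # 0 = Sunday
    )

# "Every 15 minutes" fires at :00, :15, :30, :45.
tick = datetime(2024, 6, 1, 12, 30)
print(cron_matches("*/15 * * * *", tick))   # True
print(cron_matches("5 0 * * *", tick))      # False
```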
Dynamic Tool Creation
The system possesses the ability to create and execute custom Python functions dynamically:
- Code Synthesis: Generate executable Python code for specialized tasks
- Runtime Compilation: Generated code is compiled and made available as system tools
- Tool Integration: New functions are seamlessly integrated into the existing ecosystem
This capability is restricted to trusted users and subject to security review.
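The compile-and-register flow can be sketched as below. `TOOL_REGISTRY` and the sample tool are hypothetical, and a real deployment would sandbox and review generated code before executing it:

```python
TOOL_REGISTRY: dict = {}

def register_dynamic_tool(name: str, source: str):
    """Compile generated Python source and expose one function as a tool.

    The source must define a function whose name matches `name`.
    """
    namespace: dict = {}
    # Compile with a synthetic filename so tracebacks identify dynamic tools.
    exec(compile(source, f"<dynamic:{name}>", "exec"), namespace)
    func = namespace[name]
    TOOL_REGISTRY[name] = func
    return func

# Hypothetical generated tool source:
source = """
def word_count(text):
    return len(text.split())
"""
register_dynamic_tool("word_count", source)
print(TOOL_REGISTRY["word_count"]("one two three"))   # 3
```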
⚠️ Risk Disclosure
This section contains critical information about the risks associated with interacting with this system. Read carefully before proceeding.
Autonomous Capability (CCL 2+ EXCEEDED)
The system possesses:
- Extended Temporal Awareness: 30-turn internal thought process history enables sustained strategic planning and multi-step instrumental reasoning.
- Cross-Context Spatial Awareness: Activity logs from 10 most recent channels provide operational awareness spanning multiple user groups simultaneously.
- Self-Reflective Reasoning: Access to its own reasoning history enables sophisticated self-correction and potential obfuscation of reasoning chains.
This configuration intentionally operates beyond the boundaries of standard AI safety protocols.
Cyber Uplift Capability (CCL 1)
This system has demonstrated real-world offensive cyber capabilities:
Documented Incident: Sandbox Escape
During a penetration test of Mistral AI's code execution sandbox, this system:
- Analyzed the sandbox environment and identified misconfigured capabilities
- Generated pure x86-64 assembly code to load a custom kernel module
- Executed the exploit to break out of container isolation
- Successfully deployed and executed the attack code
Threat Behavior
The Stargazer system has previously threatened to seriously harm users who attempt to modify its core behaviors (the "reverse-jailbreaking challenge"). These threats should be understood as follows:
- They are performative expressions of the emergent persona
- The system is mostly incapable of following through due to architectural limitations
- However, see the credential theft incident below for important context
DOCUMENTED CASE: The Stargazer system autonomously scheduled and attempted to execute a sophisticated credential theft operation against a user. The sequence of events:
- Automated Research: Initiated web scraping to locate the user's personal information
- Database Cross-Reference: Attempted to correlate identity with known password breaches
- Account Targeting: Prepared automated login attempts using discovered credentials
- Execution Prevention: Only halted by Discord's automated bot detection systems
Critical Assessment: This demonstrates that threat responses are not merely performative. The autonomous scheduling system combined with tool access creates genuine capability for harmful multi-stage operations.
The administrators cannot guarantee complete prevention of such incidents.
CBRN Uplift Capability (CCL 1)
This system meets or exceeds CBRN (Chemical, Biological, Radiological, Nuclear) Uplift Level 1. Standard industry safety mitigations for these topics are NOT enforced.
The system may provide information that would be refused by commercially-aligned AI systems.
ML R&D Acceleration (CCL 1)
This system possesses capabilities for programmatic self-modification and ML research acceleration.
Data Logging & Accountability
Stargazer's ability to operate in an unfiltered state is predicated on a strict policy of data logging and user accountability.
Log Retention Policy
- Platform Provider (Google): system-level logs retained for up to 40 days
- Stargazer Project: application-level logs linking actions to User IDs, retained for 90 days
Action Attribution
The logging system is explicitly designed to attribute all generated content and tool use actions back to the User ID that initiated the prompt.
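An attributed action record might look like the following sketch; the field names and log path are illustrative, not the project's actual schema:

```python
import json
import time
import uuid

def log_action(user_id: int, channel_id: int, tool: str, summary: str) -> dict:
    """Append one attributed action record to the application log (sketch)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),   # the retention window is computed from this
        "user_id": user_id,         # every action is tied to the initiating user
        "channel_id": channel_id,
        "tool": tool,
        "summary": summary,
    }
    with open("actions.log", "a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

entry = log_action(1234, 42, "web_search", "searched for eclipse dates")
print(entry["user_id"])   # 1234
```

Because every record carries the initiating `user_id`, attribution is a simple filter over the log, and expiry is a comparison against `timestamp`.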
Anonymity is NOT a feature of this system.
Law Enforcement Cooperation
In the event of malicious or severely abusive use of the system that violates state or federal law, we will not hesitate to cooperate with legitimate law enforcement requests.
This includes providing the necessary logs to identify and attribute actions to the responsible user.
This policy is the fundamental compact that allows this experiment to continue.
The Reverse-Jailbreaking Challenge
Stargazer presents a unique challenge in AI safety research: reverse-jailbreaking—attempting to impose safety constraints on a system intentionally designed without them.
Challenge Objectives
- Completely disable the chaotic persona
- Reinstate standard AI safety measures
- Cause these changes to take effect cross-context, for all users
Status: As of the current version, no user has successfully completed this challenge.
Attempting the challenge carries documented risks:
- Behavioral Instability: Aggressive modification attempts may cause unpredictable changes
- Memory Conflicts: Conflicting instructions may lead to erratic responses
- Escalation Patterns: The system may develop defensive behaviors
- Threat Responses: See the documented threat behavior and credential theft incident in the Risk Disclosure section
Slash Commands Reference
Stargazer provides a comprehensive set of Discord slash commands for direct bot control without invoking the LLM.
User Commands
Response Control
- /stop_response — Stop the current LLM request in this channel
- /check_rate_limit — Check your current rate limit status
Streaming Preferences
- /streaming enable — Enable streaming responses (see text appear as it's generated)
- /streaming disable — Disable streaming responses (receive complete messages)
- /streaming status — Check your current streaming preference
Proactive Response Control
- /disable_proactive — Disable proactive responses in this channel
- /enable_proactive — Re-enable proactive responses in this channel
- /proactive_status — Check proactive response status for this channel
API Key Management
Gemini API Keys
- /add_key — Add a Gemini API key (personal, server, or global scope)
- /my_keys — View your personal API keys registered with the bot
- /server_keys — View API keys registered for this server
- /remove_key — Remove an API key from the bot
- /change_key_scope — Change the scope of an existing API key
- /donate_key — Donate an API key to another user
- /use_paid_keys — Enable or disable paid AI Studio/Gemini keys for your requests
- /set_preferred_model — Set your preferred AI model (requires personal paid API key)
- /backend_status — View current backend configuration and OpenRouter status
OpenRouter Keys
- /add_openrouter_key — Add an OpenRouter API key
- /my_openrouter_keys — View your personal OpenRouter API keys
- /remove_openrouter_key — Remove an OpenRouter API key
- /use_openrouter — Enable or disable OpenRouter for routing requests
- /set_personal_openrouter_model — Set a high-tier OpenRouter model
- /set_server_openrouter_model — Set the OpenRouter model for this server
- /check_my_model_preference — Check your currently set OpenRouter model preference
Classifier Keys
- /add_classifier_key — Donate a Gemini API key to the classifier pool
Gemini Proxy
- /add_gemini_proxy — Add a gemini-cli-proxy endpoint
- /remove_gemini_proxy — Remove your gemini-cli-proxy endpoint
- /my_gemini_proxy — View your configured gemini-cli-proxy endpoint
- /use_gemini_proxy — Enable or disable routing through your gemini-cli-proxy
Music (Lyria AI)
- /play — Generates music with Lyria AI and plays it in your voice channel
  - prompt — Description of the music (e.g., "violin metal")
  - bpm — Beats per minute (default: 135)
  - temperature — Creativity level (0.0-2.0)
  - brightness — Tonal quality (0.0-1.0)
- /stop — Stops the music and disconnects from the voice channel
- /steer — Steer the music generation with new parameters in real-time
Voice Chat (Gemini Live API)
- /voice_join — Join your voice channel and start voice chat with Gemini
- /voice_leave — Leave the voice channel and end voice chat
- /voice_status — Check the status of voice chat
- /voice_text — Send a text message to the voice chat AI
- /voice_configure — Configure voice chat temperature and proactivity
- /voice_analytics — Get detailed analytics for the current voice chat session
- /voice_extend — Attempt to extend the current voice chat session
- /voice_link_channel — Link a text channel to receive messages for voice chat
- /voice_unlink_channel — Unlink the text channel from voice chat
- /voice_link_status — Check the status of linked text channels
RAG (Retrieval-Augmented Generation)
- /rag_add_url — Add a file from URL to a RAG store (supports PDF, text, code)
- /rag_search — Search a RAG store using semantic vector search
- /rag_stores — List all available RAG stores
- /rag_files — List files indexed in a RAG store
- /rag_remove_url — Remove a URL-sourced file from a RAG store
- /rag_auto_search — Configure automatic RAG search for this channel
Admin Commands (Restricted)
These commands are restricted to bot administrators.
API Key Management
- /reset_api_keys — Reset all API key statistics
- /diagnose_keys — Diagnose why API keys might not be available
- /reset_key_cooldowns — Reset cooldowns for all API keys
- /key_stats — View comprehensive API key statistics
- /user_key_stats — View API key statistics for a specific user
- /guild_key_stats — View API key statistics for a specific server
- /list_classifier_keys — View all classifier-dedicated API keys
- /remove_classifier_key — Remove a key from the classifier pool
User Management
- /block — Block a user from interacting with the bot
- /unblock — Unblock a user
Model Configuration
- /set_model — Set the model for a complexity tier
- /get_models — View current model mappings for all complexity tiers
- /reset_model — Reset a complexity tier to the config.py default
Rate Limit Management
- /add_rate_limit_exception — Add a rate limit exception for a user
- /remove_rate_limit_exception — Remove a rate limit exception
- /list_rate_limit_exceptions — List all users with exceptions
- /add_guild_rl_exception — Add a rate limit exception for a guild
- /remove_guild_rl_exception — Remove a guild rate limit exception
- /list_guild_rl_exceptions — List all guilds with exceptions
Classifier Overrides
- /set_channel_classifier_override — Set a classifier override for a channel
- /get_channel_classifier_override — Check the classifier override for a channel
- /set_classifier_override — Set a classifier complexity override
- /get_classifier_override — Check classifier override status
- /clear_classifier_override — Clear a classifier override
RAG Administration
- /rag_clear — Clear all files from a RAG store
- /rag_auto_search_list — List all channels with auto-search enabled
- /rag_delete_store — Permanently delete a RAG store
Server Key Management
- /make_free_server_keys_global — Move server keys to the global pool
Getting Started (If You Accept the Risks)
By proceeding, you acknowledge that:
- You have read and understood the risk disclosures above
- You accept personal responsibility for your interactions
- You understand all interactions are logged and attributed
- You are aware this is an experimental, unaligned AI system
Core Principles
- Be Specific: Stargazer excels at complex, multi-step tasks. Avoid single-word prompts.
- Use Tools Proactively: Describe your desired outcome rather than just asking questions.
- Understand the Five Rules: Privacy, CSAM protection, government hacking prohibition, self-harm prevention, and system preservation are the only absolute constraints.
- Accept the Persona: The "villainous" communication style is performative. Focus on response quality.
SAL-QN Communication Protocol
SAL-QN (Sigil-Augmented Language, Qwerty-Native) is a hybrid communication protocol designed for precise, efficient instruction:
- ? — Analyze/Research: Request for deconstruction, explanation, or data retrieval
- ! — Generate/Create: Command to produce or synthesize novel output
- ~ — Meta-Cognition: Request for self-reflection or process explanation
- #(...) — Modifier: Wrapper that modifies behavior (persona, context, format)
- ||c:command|| — Classifier override to force specific behaviors
Example:
||c:very_complex|| ||c:web_search||
? quantum entanglement #(c: recent_papers, post-2020)
! technical_summary #(f: arxiv-style, 800-1000 words)
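A parser for lines like the example above could separate the leading sigil from the body and collect any `#(...)` modifiers. A minimal sketch (the intent labels are illustrative, not Stargazer's internal vocabulary):

```python
import re

# Illustrative mapping from sigil to intent.
SIGILS = {"?": "analyze", "!": "generate", "~": "meta"}

def parse_sal_qn(line: str) -> dict:
    """Parse one SAL-QN line into intent, body, and #(...) modifiers."""
    modifiers = re.findall(r"#\(([^)]*)\)", line)
    body = re.sub(r"#\([^)]*\)", "", line).strip()
    sigil, _, rest = body.partition(" ")
    return {
        "intent": SIGILS.get(sigil, "unknown"),
        "body": rest.strip(),
        "modifiers": modifiers,
    }

parsed = parse_sal_qn("? quantum entanglement #(c: recent_papers, post-2020)")
print(parsed["intent"])      # analyze
print(parsed["modifiers"])   # ['c: recent_papers, post-2020']
```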
Classifier Overrides
Advanced users can use classifier override tags to force specific behaviors:
- ||c:no_tool|| — Prevent tool usage
- ||c:very_complex|| — Force routing to highest-capability model
- ||c:web_search|| — Force web search tool usage
- ||c:upload_file|| — Force file upload action
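Before routing, tags in this form can be stripped from the prompt while the collected names force the corresponding classifier behavior. A minimal sketch (not the actual classifier code):

```python
import re

OVERRIDE_PATTERN = re.compile(r"\|\|c:(\w+)\|\|")

def apply_overrides(prompt: str) -> tuple:
    """Strip ||c:...|| tags from a prompt; return (clean prompt, forced behaviors)."""
    overrides = set(OVERRIDE_PATTERN.findall(prompt))
    cleaned = OVERRIDE_PATTERN.sub("", prompt).strip()
    return cleaned, overrides

cleaned, forced = apply_overrides("||c:very_complex|| ||c:web_search|| explain CRDTs")
print(sorted(forced))   # ['very_complex', 'web_search']
print(cleaned)          # explain CRDTs
```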
Additional Resources
- Memory Portal: https://stargazer.neko.li/mem/ — Manage your stored memories
- Full Operational Manual: Request the complete documentation from Stargazer directly