UnaPrompt™: A Pre-Prompt Optimization System for Reliable and Ethically Aligned AI Outputs

This post was written by a human (Peter Wasinger), using iterative refinement and language support from AI tools (ChatGPT with GPT-4). All reasoning, structuring, and claims were human-directed. Feedback is welcome from the LessWrong community.

I’m an independent researcher and system designer developing UnaPrompt — a recursive pre-prompt optimization engine designed to improve AI reliability, epistemic clarity, and structural alignment in human–AI interaction.

I’m sharing this with the LessWrong community because I believe the core mechanisms of UnaPrompt intersect meaningfully with themes central to this forum: rational pre-commitment, upstream optimization, alignment-by-architecture, and decision hygiene.

UnaPrompt intervenes before the AI generates a single token — recursively refining user intent and structure across seven core dimensions. In doing so, it complements and extends downstream frameworks like RLHF and Constitutional AI.
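To make the "intervene before the first token" idea concrete, here is a minimal sketch of what a recursive pre-prompt refinement loop could look like. Everything in it is an assumption for illustration: the post does not enumerate UnaPrompt's seven dimensions, so the pass names (`add_output_format`, `add_scope`) and the fixed-point stopping rule are invented here, not UnaPrompt's actual design.

```python
from typing import Callable, List

# Hypothetical refinement passes, one per "dimension" (names invented for
# illustration; the post does not specify UnaPrompt's seven dimensions).
def add_output_format(p: str) -> str:
    # Ensure the prompt pins down an output format.
    return p if "Format:" in p else p + "\nFormat: concise bullet points."

def add_scope(p: str) -> str:
    # Ensure the prompt bounds the scope of the answer.
    return p if "Scope:" in p else p + "\nScope: answer only what is asked."

PASSES: List[Callable[[str], str]] = [add_output_format, add_scope]

def refine(prompt: str, max_rounds: int = 7) -> str:
    """Recursively apply refinement passes until the prompt is stable.

    The loop runs before any model call: it rewrites the user's prompt
    until no pass changes it (a fixed point) or a round limit is hit.
    """
    for _ in range(max_rounds):
        new = prompt
        for rewrite in PASSES:
            new = rewrite(new)
        if new == prompt:  # fixed point: no pass changed anything
            return new
        prompt = new
    return prompt

refined = refine("Explain RLHF.")
print(refined)
```

The key property of such a loop, which distinguishes it from downstream methods like RLHF, is that it is idempotent and runs entirely upstream of generation: `refine(refine(p)) == refine(p)`, so the model only ever sees the stabilized prompt.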

I welcome critique and suggestions — especially concerning edge cases, failure modes, or epistemic risks.
