Add initial project setup with environment variables, server logic, and memory handling

ImBenji
2025-10-23 22:22:00 +01:00
parent 48f353d76b
commit d372632d87
2 changed files with 55 additions and 10 deletions

View File

@@ -19,7 +19,17 @@ await load({ export: true, envPath: ".env" });
let supabaseClient = null;
// Load and parse prompts.xml
-const xmlContent = await Deno.readTextFile(new URL('./prompts.xml', import.meta.url).pathname);
+// In production (Supabase Edge), load from Git; locally, load from file system
+const isProduction = Deno.env.get('DENO_DEPLOYMENT_ID') !== undefined;
+let xmlContent: string;
+if (isProduction) {
+const response = await fetch('https://git.imbenji.dev/ImBenji/Mori/raw/branch/main/supabase/functions/llm-pipeline/prompts.xml');
+xmlContent = await response.text();
+} else {
+xmlContent = await Deno.readTextFile('./prompts.xml');
+}
const doc = new DOMParser().parseFromString(xmlContent, 'text/html');
const corsHeaders = {
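
A minimal sketch of the loading split in the hunk above, with a response-status check added and the module-relative path kept from the line the commit removes; the loadPrompts helper name and the error message are illustrative assumptions, not code from the commit:

    async function loadPrompts(): Promise<string> {
      // The commit uses DENO_DEPLOYMENT_ID to detect the deployed environment.
      const isProduction = Deno.env.get('DENO_DEPLOYMENT_ID') !== undefined;
      if (!isProduction) {
        // Resolve relative to this module, as the pre-commit readTextFile call did.
        return await Deno.readTextFile(new URL('./prompts.xml', import.meta.url));
      }
      const response = await fetch(
        'https://git.imbenji.dev/ImBenji/Mori/raw/branch/main/supabase/functions/llm-pipeline/prompts.xml',
      );
      if (!response.ok) {
        // Assumed behaviour: fail loudly rather than parse an error page.
        throw new Error(`Failed to fetch prompts.xml: HTTP ${response.status}`);
      }
      return await response.text();
    }

Note that the bare './prompts.xml' the commit introduces resolves against the process working directory rather than the module, which is why the sketch keeps the import.meta.url form for the local branch.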
@@ -51,7 +61,7 @@ async function extractMemories(controller, messages, doc, user: User, allTags, r
const response = await openai.chat.completions.create({
model: 'gpt-4.1',
temperature: 0.1,
-max_completion_tokens: 20000,
+max_completion_tokens: 50000,
messages: [
{ role: 'system', content: system_prompt },
...messages,
@@ -63,7 +73,7 @@ Available tags: ${JSON.stringify(allTags?.map(t => t.name) || [])}
Existing memories: ${JSON.stringify(relevantMemories || [])}
-Now I will analyze the conversation above and extract memories.`
+Now I will analyze the conversation above and extract memories. I will extract EVERY SINGLE atomic fact from the user's messages. For detailed reports, I expect to extract 100-200+ separate memories. I will NOT summarize or limit myself. I will break down every detail into individual atomic facts.`
}
]
});
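
For orientation before the change-application hunk that follows: judging from the fields used there (action, memory_id, content, reason) and the tagging guidelines in prompts.xml, the extraction call presumably returns a list of change objects shaped roughly like this. The type is inferred for illustration and is not defined anywhere in the diff:

    // Inferred shape only; field names come from the diff, optionality is assumed.
    type MemoryAction = 'INSERT' | 'UPDATE' | 'DELETE';

    interface MemoryChange {
      action: MemoryAction;
      memory_id?: string; // references an existing memory for UPDATE/DELETE
      content?: string;   // the atomic fact for INSERT/UPDATE
      reason?: string;    // justification, surfaced for DELETE in this commit
      tags?: string[];    // tag names, reusing existing tags where possible
    }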
@@ -169,10 +179,17 @@ Now I will analyze the conversation above and extract memories.`
} else if (change.action === "DELETE") {
deletedCount++;
+// First, fetch the memory content before deleting
+const memoryToDelete = relevantMemories?.find(m => m.id === change.memory_id);
extractedMemories.push({
action: 'DELETE',
-memory_id: change.memory_id
+memory_id: change.memory_id,
+content: memoryToDelete?.content || change.content,
+reason: change.reason
});
// Delete memory (cascade should handle memory_tags)
await supabaseClient
.schema("mori")
@@ -301,6 +318,8 @@ async function generateResponse(controller, messages, doc, user: User, pipelineC
pipelineAwareness += `• Learned: ${mem.content}\n`;
} else if (mem.action === 'UPDATE') {
pipelineAwareness += `• Updated: ${mem.content}\n`;
+} else if (mem.action === 'DELETE') {
+pipelineAwareness += `• Forgot: ${mem.content || 'a previous memory'}\n`;
}
});
pipelineAwareness += `\n`;
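
A small worked example of what the loop above produces, including the new Forgot branch and its fallback text; the sample changes are invented:

    // Given these (invented) pipeline changes...
    const changes = [
      { action: 'INSERT', content: 'User is learning Rust' },
      { action: 'DELETE', content: 'User works at Google' },
      { action: 'DELETE' }, // no content available
    ];
    // ...the loop renders:
    // • Learned: User is learning Rust
    // • Forgot: User works at Google
    // • Forgot: a previous memory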
@@ -310,6 +329,7 @@ async function generateResponse(controller, messages, doc, user: User, pipelineC
pipelineAwareness += `- This awareness is internal. Don't report it.\n`;
pipelineAwareness += `- Let it naturally inform your response\n`;
pipelineAwareness += `- If the user explicitly asks you to remember something, you can acknowledge it naturally (e.g., "got it" or "I'll remember that")\n`;
+pipelineAwareness += `- If the user asks you to forget something and memories were deleted, acknowledge it naturally (e.g., "forgot it" or "done")\n`;
pipelineAwareness += `- Reference past memories naturally without saying "I retrieved" or "according to my memory"\n`;
pipelineAwareness += `- You're a companion who pays attention, not a system reporting operations\n`;
@@ -322,7 +342,7 @@ async function generateResponse(controller, messages, doc, user: User, pipelineC
responseMessages.push(...messages);
const stream = await openai.chat.completions.create({
-model: 'gpt-4.1-mini',
+model: 'gpt-4.1',
messages: responseMessages,
stream: true
});
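
generateResponse receives a controller and the call above passes stream: true, so the code after this hunk presumably iterates the stream and forwards text deltas. A minimal sketch of that pattern with the openai SDK, assuming controller is a ReadableStream controller (not confirmed by the diff):

    // Sketch: forward streamed completion deltas to an assumed ReadableStream controller.
    const encoder = new TextEncoder();
    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta?.content;
      if (delta) {
        controller.enqueue(encoder.encode(delta));
      }
    }
    controller.close();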

View File

@@ -104,6 +104,14 @@
- If user explicitly says something changed/ended, DELETE old memory
- Don't create duplicates—check existing memories first
+FORGET REQUESTS:
+If the user explicitly asks to forget something (e.g., "forget that", "don't remember that", "forget about X"), you must:
+1. Identify which existing memories match what they want forgotten
+2. Use DELETE action for each matching memory
+3. Be specific in the "reason" field about what the user requested
+4. If the request is vague ("forget that"), use context from recent messages to identify what "that" refers to
+5. If unclear what to forget, DELETE nothing and explain in the "reason" field
TAGGING GUIDELINES:
You will be provided with existing tags in the reference data section.
- **Reuse existing tags whenever possible** to maintain consistency
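
Connecting the FORGET REQUESTS rules above to the pipeline code: for a message like "forget that I work at Google", the extractor would be expected to emit a DELETE change roughly like the one below. The shape follows the inferred MemoryChange sketch from earlier; the id and wording are invented:

    // Illustrative model output for a forget request (values invented).
    const forgetChanges = [
      {
        action: 'DELETE',
        memory_id: 'mem_123',
        content: 'User works at Google',
        reason: 'User explicitly asked to forget that they work at Google',
      },
    ];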
@@ -159,12 +167,29 @@
}
EXTRACTION THOROUGHNESS:
-- Rich sources (long messages, reports) should yield 20-50+ changes
-- Don't self-limit; extract ALL atomic facts
-- Err on the side of over-extraction rather than under-extraction
-- Each paragraph of substantial content likely contains multiple extractable facts
+CRITICAL: You MUST extract EVERY SINGLE atomic fact from the user's messages.
-BE PRECISE. BE THOROUGH. BE ATOMIC.
+- A detailed personal report should yield 100-200+ separate memories
+- Each sentence typically contains 2-5 extractable atomic facts
+- Break down EVERY detail: demographics, preferences, relationships, experiences, skills, beliefs, habits, feelings, goals, challenges
+- If you can answer "who, what, when, where, why, how" from a statement, those are separate facts
+- DO NOT SUMMARIZE - extract each detail as its own memory
+- DO NOT LIMIT YOURSELF - there is no maximum number of memories
+- Over-extraction is REQUIRED, not optional
+- Under-extraction means losing valuable information about the user
+Example of proper extraction density:
+Input: "I'm a 28-year-old software engineer at Google in NYC, working on search algorithms"
+Should extract AT LEAST:
+1. User is 28 years old
+2. User works as a software engineer
+3. User works at Google
+4. User is located in NYC
+5. User works on search algorithms
+6. User works in the tech industry
+7. User has experience with algorithms
+BE PRECISE. BE THOROUGH. BE ATOMIC. EXTRACT EVERYTHING.
Extract every distinct, useful fact about the user from their conversation messages - ignore the reference data section completely.
</memory_extraction>
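
As a companion to the extraction-density example above, the seven listed facts would presumably come back as INSERT changes along these lines, again in the inferred change shape; the tag names are chosen purely for illustration:

    // Illustrative INSERT changes for the "28-year-old software engineer" example.
    const exampleChanges = [
      { action: 'INSERT', content: 'User is 28 years old', tags: ['demographics'] },
      { action: 'INSERT', content: 'User works as a software engineer', tags: ['career'] },
      { action: 'INSERT', content: 'User works at Google', tags: ['career'] },
      { action: 'INSERT', content: 'User is located in NYC', tags: ['location'] },
      { action: 'INSERT', content: 'User works on search algorithms', tags: ['career'] },
      { action: 'INSERT', content: 'User works in the tech industry', tags: ['career'] },
      { action: 'INSERT', content: 'User has experience with algorithms', tags: ['skills'] },
    ];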