# Undercurrent Kit: prompt-engineering

> Generated 2026-05-14 | Status: rising | Emergence: 8.0
> Platforms: reddit

## What's Happening

"prompt-engineering" has 4 mentions across 1 platform in the last 7 days.
Velocity: 4.0x week-over-week.

## Key Signals

### How I keep my AI’s context window under 3K tokens even with 200+ lessons stored.
- **Source:** reddit | [link](https://reddit.com/r/ClaudeAI/comments/1tcl813/how_i_keep_my_ais_context_window_under_3k_tokens/)
- **Author:** sms021 ([profile](https://reddit.com/u/sms021))
- **Scores:** originality:72% depth:58% psychosis:15%
- **Engagement:** 1 upvote, 1 comment, 0 stars
- **Summary:** Practitioner documents a working system for compressing large knowledge bases into smaller context windows by restructuring how rules are stored and retrieved, addressing a real pain point in long-running AI projects.

**Implementation:**
The author solved the common problem of bloated context windows by moving away from monolithic markdown rule files toward a structured retrieval system. To implement: (1) parse your existing CLAUDE.md/system prompt into discrete, tagged facts with metadata; (2) build a simple relevance filter that scores incoming queries against stored rules and selects only the top-N most relevant ones (semantic search via embeddings, BM25, or even keyword matching all work); (3) inject only the retrieved rules into the context window rather than the full file. Tools: Pinecone or Weaviate for embedding-based retrieval, or a lightweight SQLite + TF-IDF approach. The author likely tested this against their own workflow; look for follow-up comments describing the specific retrieval mechanism and false-positive/false-negative rates.
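Steps (1) and (2) can be sketched with a minimal keyword-overlap scorer, a stand-in for embeddings or BM25. The rules and tags below are hypothetical, not taken from the post:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    text: str
    tags: set[str]

# Hypothetical rules parsed out of a monolithic CLAUDE.md.
RULES = [
    Rule("Always run the test suite before committing.", {"testing", "git", "commit"}),
    Rule("Prefer small, focused functions.", {"style", "refactor"}),
    Rule("Never log secrets or API keys.", {"security", "logging"}),
]

def tokenize(text: str) -> set[str]:
    """Lowercase, split on whitespace, strip trailing punctuation."""
    return {w.strip(".,!?").lower() for w in text.split()}

def top_rules(query: str, rules: list[Rule], n: int = 2) -> list[Rule]:
    """Score each rule by keyword overlap with the query; keep the top n."""
    q = tokenize(query)
    scored = [(len(q & (tokenize(r.text) | r.tags)), r) for r in rules]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [r for score, r in scored[:n] if score > 0]

# Only the matching rules get injected into the context window (step 3).
selected = top_rules("how should I structure this commit?", RULES)
```

Swapping `top_rules` for an embedding similarity search changes the scorer, not the shape of the pipeline.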

### A Claude Code skill for ADHD brains. No diagnosis required.
- **Source:** reddit | [link](https://reddit.com/r/ClaudeAI/comments/1tcdeix/a_claude_code_skill_for_adhd_brains_no_diagnosis/)
- **Author:** ayghri ([profile](https://reddit.com/u/ayghri))
- **Scores:** originality:70% depth:60% psychosis:20%
- **Engagement:** 24 upvotes, 9 comments, 0 stars
- **Summary:** A practical Claude skill that enforces structured, no-nonsense interaction patterns (action-first, numbered steps, no filler) based on ADHD productivity principles, with open-source implementation and specific, testable rules.

**Implementation:**
The author has published a GitHub repo (ayghri/i-have-adhd) containing a Claude skill that can be activated via the `/i-have-adhd` command. The skill encodes ten rules derived from ADHD management literature (Ramsay & Rostain) into Claude's behavior: it prioritizes action over preamble, numbers steps, restates state each turn, and eliminates pleasantries and closers. To use: clone the repo, review the rule set, activate the skill in Claude (likely via system prompt or instructions API), and test against typical interaction patterns. The approach is generalizable: the same structural constraints (numbering, state-reset, action-first) could be implemented for other LLMs via system prompts or fine-tuning, or adapted for other neurodivergent workflows.
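The generalizable version, structural constraints rendered as a system prompt for any LLM, can be sketched like this. The rule wordings are paraphrased assumptions; the actual ten rules live in the ayghri/i-have-adhd repo:

```python
# Hypothetical restatement of the skill's structural constraints; the
# exact rules in ayghri/i-have-adhd differ.
ADHD_RULES = [
    "Lead with the next concrete action; no preamble.",
    "Number every step.",
    "Restate current state at the start of each turn (Done: X. Now: Y.).",
    "No pleasantries, no closers, no filler.",
    "Ask one question at a time, and only when blocked.",
]

def build_system_prompt(rules: list[str]) -> str:
    """Render the rule set as a numbered system-prompt block."""
    lines = ["Follow these interaction rules strictly:"]
    lines += [f"{i}. {rule}" for i, rule in enumerate(rules, start=1)]
    return "\n".join(lines)

# Pass this string as the system prompt of whatever LLM API you use.
prompt = build_system_prompt(ADHD_RULES)
```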

### My AI runs 24/7 on Claude Code without -p. Here's the hook to do it yourself.
- **Source:** reddit | [link](https://reddit.com/r/ClaudeAI/comments/1tcicvb/my_ai_runs_247_on_claude_code_without_p_heres_the/)
- **Author:** Siigari ([profile](https://reddit.com/u/Siigari))
- **Scores:** originality:70% depth:60% psychosis:30%
- **Engagement:** 13 upvotes, 4 comments, 0 stars
- **Summary:** A practitioner-built workaround that achieves always-on operation without Claude Code's `-p` (non-interactive print) flag, using stop hooks and file-based message polling to run a 24/7 autonomous agent within interactive-mode subscription costs.

**Implementation:**
The implementation uses Claude Code's stop hooks to create a supervisor-agent pattern: a main process runs Claude Code in interactive mode, while a stop hook polls an inbox file (e.g., a local JSON or text file) for incoming tasks. When messages arrive, they're fed to Claude as fresh stateless interactions, avoiding the need for context persistence. To implement: (1) set up a file-based message queue in your project directory, (2) write a stop hook that checks this queue on each execution pause, (3) configure Claude Code to process and delete queued messages after handling, (4) wrap the supervisor in a long-running process (systemd service, Docker container, or cloud function with a cron trigger). The repo likely contains the hook code and supervisor scaffolding; check the referenced GitHub for concrete examples. This trades `-p`'s one-shot credit model for regular interactive usage but maintains the stateless, one-request-at-a-time safety model.
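The queue half of steps (1)-(3) might look like this minimal sketch. The file name, JSON format, and function names are assumptions; the actual hook from the post may differ:

```python
import json
from pathlib import Path

INBOX = Path("inbox.json")  # hypothetical queue file in the project directory

def enqueue(task: str, inbox: Path = INBOX) -> None:
    """Supervisor side: append a task to the shared queue file."""
    pending = json.loads(inbox.read_text()) if inbox.exists() else []
    pending.append(task)
    inbox.write_text(json.dumps(pending))

def drain_inbox(inbox: Path = INBOX) -> list[str]:
    """Hook side: read pending tasks, then truncate the queue.

    A stop hook would call this on each pause and feed any returned
    tasks back to Claude as fresh, stateless requests.
    """
    if not inbox.exists():
        return []
    tasks = json.loads(inbox.read_text() or "[]")
    inbox.write_text("[]")  # mark everything as consumed
    return tasks
```

Truncating on read keeps each task one-shot, matching the stateless, one-request-at-a-time model described above; a production version would want file locking to avoid a write racing a drain.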

### Claude is overly complimentary, how do I make it more objective?
- **Source:** reddit | [link](https://reddit.com/r/ClaudeAI/comments/1tcin15/claude_is_overly_complimentary_how_do_i_make_it/)
- **Author:** Bed-After ([profile](https://reddit.com/u/Bed-After))
- **Scores:** originality:60% depth:30% psychosis:20%
- **Engagement:** 7 upvotes, 24 comments, 0 stars
- **Summary:** User surfaces a genuine tension in LLM design (safety-tuned models trade objectivity for agreeableness) and seeks practical workarounds; it is a real problem with actionable solutions.

**Implementation:**
To extract sharper feedback from Claude: (1) Use system prompts that explicitly request brutally honest critique without hedging (e.g., 'You are a ruthless technical reviewer. Do not soften feedback.'). (2) Prime with examples of harsh-but-fair reviews before asking for yours. (3) Use adversarial framing: 'What would a competitor say is wrong with this?' or 'Pretend you're writing a scathing review.' (4) Ask for ranked lists of flaws rather than open-ended feedback. (5) Have feedback separated into severity tiers (critical, medium, minor) so praise can't crowd out the critical items. Prompt-template collections (e.g., the various Awesome-Prompts repos) document these patterns. The underlying issue is RLHF tuning toward helpfulness/harmlessness; directly specifying 'objective, not agreeable' in the system context often works.
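Points (1), (3), (4), and (5) combine into a single reusable system prompt. This is an illustrative template, not a recipe quoted from the thread:

```
You are a ruthless technical reviewer. Do not soften feedback or add praise.
For the work below:
1. List every flaw, ranked most to least severe.
2. For each flaw, state the concrete failure mode it causes.
3. Label each flaw CRITICAL, MEDIUM, or MINOR.
4. Then answer: if a competitor were attacking this work, what would they target first?
Do not summarize strengths unless explicitly asked.
```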

## People to Watch

- **ayghri**
- **Siigari**
- **Bed-After**
- **sms021**

## Prompt: Hand This to Claude Code

```
I want to explore "prompt-engineering" based on these emerging signals from the AI community.

Key implementations I've found:
- How I keep my AI’s context window under 3K tokens even with 200+ lessons stored.: The author solved the common problem of bloated context windows by moving away from monolithic markdown rule files toward a structured retrieval system. To implement: (1) parse your existing CLAUDE.md...
- A Claude Code skill for ADHD brains. No diagnosis required.: The author has published a GitHub repo (ayghri/i-have-adhd) containing a Claude skill that can be activated via `/i-have-adhd` command. The skill encodes ten rules derived from ADHD management literat...
- My AI runs 24/7 on Claude Code without -p. Here's the hook to do it yourself.: The implementation uses Claude Code's stop hooks to create a supervisor-agent pattern: a main process runs Claude Code in interactive mode, while a stop hook polls an inbox file (e.g., local JSON or t...
- Claude is overly complimentary, how do I make it more objective?: To extract sharper feedback from Claude: (1) Use system prompts that explicitly request brutally honest critique without hedging (e.g., 'You are a ruthless technical reviewer. Do not soften feedback.'...

Help me:
1. Assess which of these approaches fits my current stack
2. Build a minimal working prototype of the most promising one
3. Identify what these practitioners learned that isn't in the docs yet
```

---
*Kit generated by [Undercurrent](https://undercurrent-dashboard.pages.dev). Trend detection is people detection.*
