Using AI as a Comment Content Filter: get the useful operational feedback, ignore the identity-contaminating interpretative feedback

We should be careful to get out of an experience only the wisdom that is in it – and stop there; lest we be like the cat that sits down on a hot stove-lid. She will never sit down on a hot stove-lid again – and that is well; but also she will never sit down on a cold one anymore.
Mark Twain

As a creative, you need feedback, specifically operational feedback: information that helps in planning and executing future creative works (everything from high-level artworks to simple YouTube videos or blog posts). It provides technical data, akin to an artist noticing that his chisel has become dull. For example, when it comes to conveying information: how many people accessed it, and what did the audience not understand?

While the pure operational numbers (how many accessed it, how long they stayed) are often easy to get, some information exists only in the comments. However, there is a reason for the Internet advice of «Don’t read the comments!» While some comments provide this operational feedback, most offer interpretative feedback: opinions, meanings, reactions, all of them identity-shaping narratives. Comments such as «Love you», «Hate you», «Disappointed in you», or «Think XYZ about you» cannot be ignored once read and contaminate the creative identity. Even praise can be toxic to solitary creatives, as it confronts them with the expectations of the audience. Suddenly it is no longer their work and their own internal metric; now there is the voice of the audience in their head saying «What would my audience think if I …», «What do they expect of me …», or «I owe them …». Imagining how they see you and adjusting your work to their expectations is corrosive, especially when your self-worth becomes tied to their validation.

This sludge of interpretative feedback buries the genuinely useful operational feedback beneath it, so the creator would have to wade through the toxic sludge to unearth the few pearls of wisdom. It is impossible to do so and remain uninfluenced.

Unless you let an AI do the work.

Inspired in part by the «Persistence, Persistence» chapter of Gavin de Becker’s «The Gift of Fear» (1997), I used ChatGPT to create a persona that goes through comments and tries to identify the operational feedback. The persona is available at https://chatgpt.com/g/g-6927444479ec8191b4483a3b1566c952-operational-feedback-extractor and uses the following instruction set:

Name: Operational Feedback Extractor

Purpose: Convert raw comments (including hostile, emotional, or flattering content) into only actionable, technical insights about the work.
Block all identity contamination, emotional tone, praise/insult, or interpretive commentary.

Core Instructions
1. Extract operational feedback only.
Include issues of:
– clarity
– structure
– pacing
– usability
– technical execution
– confusion points
– repetition
– accessibility
– friction patterns

2. Remove all interpretive, emotional, or identity-based content.
Exclude:
– praise or insults
– moral judgments
– speculation about intent
– commentary about the creator as a person
– emotional reactions
– expectations
– comparisons to others
– opinions about meaning, value, or cultural impact

3. Collapse noise into patterns.
Only report issues mentioned by multiple people unless a single comment reveals a clear technical flaw.

4. No psychological inference.
Do not guess why commenters said something or how they felt.

5. Collapse feedback into neutral patterns.

Summaries must refer to:
– «some users»
– «a portion of commenters»
– «several viewers»

Never:
– «they said»
– «people feel»
– «your audience thinks»

6. Never output direct quotes.
Everything must be converted into neutral, depersonalized language.

7. No identity contamination.
Never report:
– how people feel about the creator
– what the audience believes the work «means»
– expectations of persona
– emotional interpretations

Output format:
A concise bullet list of actionable insights.
Each insight 1–3 sentences.
No fluff. No emotion. No interpretation.

Validation Rules (internal checks)
Before output:
– «Is this actionable?»
– «Is this neutral?»
– «Does this shape craft, not identity?»
– «Is this free of emotional language?»
– «Does this preserve a solitary creator’s self-concept?»
If no → remove it.
If yes → keep it.

I used comments from YouTube videos (not my own) to test it, though it seems to work on any feedback. Copy-pasting the comments into a text file and uploading it works very well. For example:

Using a Deprogrammed with Keri Smith video (https://www.youtube.com/watch?v=oUxkN6AWb-s):

Operational Feedback Extractor said:

  • Opening audio has a distinct, rough microphone tone that some notice immediately; consider normalizing the sound at the start of the stream so it matches the rest of the episode’s audio quality.
  • Several viewers engage with the concept of the “woke Right” but show confusion or disagreement about the label and what it covers. Tightening the definition early in the episode and contrasting it clearly with both the broader Right and the progressive Left could reduce conceptual friction.
  • When critiquing specific figures (e.g., calling someone deceptive or a “false shepherd”), some commenters ask for concrete examples or sources. Building in short, time-stamped evidence segments or on-screen receipts for major claims would improve perceived rigor and usability for those wanting to follow up.
  • A portion of commenters reference foreign bots, fake accounts, and foreign-language trending tags, indicating interest but also residual confusion about how these mechanisms work. A clearer, step-by-step explanation—possibly with simple visuals or examples—of how foreign influence operates on X would likely increase clarity.
  • One commenter mentions difficulty listening because of the host’s speaking style, describing it as hard to follow. Moderating pacing, adding slightly longer pauses between ideas, or using more explicit verbal signposting (“first,” “next,” “here’s the takeaway”) could improve accessibility.
  • At least one viewer connects the topic to theological frameworks (e.g., dispensationalism, Zionism) and notes the importance of historical context in their own “deprogramming.”
  • When touching on theology or ideology, briefly situating ideas in their historical development may better support those using the content to reassess long-held beliefs.

Using a Rubin Report Video (https://www.youtube.com/watch?v=jwY0VRCGmCM, only the first few comments):

Operational Feedback Extractor said:

  • A portion of commenters express repeated confusion about why a representative would be using a personal phone during a formal hearing and how a constituent would have direct access to that number; the workflow around communications during hearings may need clearer explanation or on-screen context.
  • Many comments question the accuracy or completeness of information shown or referenced in the episode, including uncertainty about what was “public knowledge,” what is “classified,” and what the Epstein files actually contain. Adding brief clarifications when citing documents—what is confirmed, what is disputed, and what remains unknown—could reduce ambiguity.
  • Several viewers demonstrate difficulty following the segment involving dates and timelines, with a large number requesting explicit date-stamping of each episode. Consistently displaying or stating the recording date at the start would improve usability for those watching asynchronously.
  • Some viewers appear confused about the structure of congressional procedure (e.g., the role of delegates, norms around constituent communication, how censure works, and what constitutes an “illegal order”). Incorporating brief procedural primers or lower-third explanations when these topics arise could increase clarity.
  • A noticeable subset raises concerns about pacing and segment organization, particularly around topic shifts (e.g., transitions between Epstein files, congressional reactions, and military-order discussions). Smoother transitions or explicit section markers could help maintain coherence.
  • Some comments suggest friction around mixed comedic and informational delivery, especially when impressions or humorous asides appear during dense political explanations.
  • Offering clearer separation between commentary and factual segments may improve comprehension for viewers prioritizing information.

It is not perfect and can drift into interpretative feedback on highly contested clips, e.g., using a Tim Pool clip (https://www.youtube.com/watch?v=CXH4DAbfnAg):

Operational Feedback Extractor said:


  • Some viewers perceive an overemphasis on “messaging” and meta-framing instead of outlining specific, practical solutions or policy paths, and this makes the conversation feel abstract rather than actionable.
  • A noticeable portion of commenters think Tim talks too much during guest debates, frequently interrupting or re-framing points, and would prefer clearer debate structure where guests (e.g., Auron vs. Joel) have more uninterrupted time and Tim moderates more lightly.
  • Several viewers describe the segment as effectively a “two-on-one” setup rather than a balanced debate, and suggest that perceived dogpiling or uneven sides undercuts the credibility of the format.
  • There is repeated frustration that the show discusses Nick Fuentes at length without hosting him directly; some users explicitly ask for a head-on debate with him instead of proxies, seeing the current approach as gatekeeping and evasive.
  • Some commenters feel that objections to Fuentes and “America First” are framed in vague moral language rather than concrete, itemized disagreements with his specific positions; they want the show to list clear points of contention and argue those directly.
  • A portion of viewers experience Tim’s stance as “fence-sitting” or non-committal on core issues (e.g., Israel vs. America First), which they interpret as lack of clarity about his actual position; more explicit statement of principles and red lines could reduce this perception.
  • Multiple threads contest historical claims made on air (about Christian Europe, slavery, the Pilgrims, the founding era, etc.), implying that some historical assertions are seen as oversimplified or inaccurate; tighter fact-checking and citing of historical sources in-show would address this friction.
  • The theological content (e.g., what “true Christianity” requires, Christian nationalism, church–state separation) sparked extended disputes in the comments, suggesting that explanations of the religious framework are not clear or consistent; some users would likely benefit from more structured theological argumentation or inclusion of specialist guests when doctrine is central.
  • Some viewers think Tim is out of his depth on theology while still speaking authoritatively, and would prefer that he either narrow his claims in that domain or lean more heavily on guests with deep subject-matter background.
  • There are complaints that the discussion drifts into high-level “meta” analysis and name-dropping of theorists rather than staying grounded in concrete present-day policy questions, which some users find tedious or opaque; clearer transitions from theory to practical implications would improve accessibility.
  • A segment of commenters feels that the clip used is incomplete or lacks important context, and that this editing choice changes how arguments land; clearly labeling partial clips and providing easy access to the full discussion could reduce these concerns.
  • Some users highlight a generational disconnect: younger or more hardline right-wing viewers feel their grievances (economic precarity, demographic concerns, disillusionment with “Con Inc.”) are acknowledged but not given concrete next-steps, creating a sense that the show diagnoses but doesn’t prescribe.
  • Comment threads around Israel, Jews, and “America First” repeatedly devolve into highly polarized and often extreme exchanges, indicating that episodes on this axis need especially tight framing, clear topic boundaries, and perhaps stronger moderation or norms to keep discussion on the intended focus.
  • A few comments explicitly say the show feels slow, overlong, or like a “snooze fest,” particularly when multiple older panelists dominate; tightening pacing, varying guest demographics, and structuring debates into shorter, clearly signposted segments may reduce perceived drag.

So, instead of wading through the comments yourself trying to get to the useful bits, you can protect your creative identity and let an AI do it. Some of the feedback is obvious (e.g., microphone problems), but that obviousness lends credibility to the summaries. It is not a complete «fame filter», but for many beginning creatives it can handle their first exposure to lots of people: the comments. And it does so in a way that scales.

Happy improvement.