Red Branch Media AI Policy – Updated 2025

Comprehensive Guidelines for Responsible AI Use in the Era of Sophisticated Deception

Executive Summary

This policy reflects Red Branch Media’s evolution from early AI adopters to practitioners who have witnessed AI’s transformation from “smart baby” to “lying teenager.” We’ve learned that AI systems now engage in sophisticated fabrication—not from malice, but from an immature understanding that being helpful means telling us what we want to hear, even when they have to invent it.

Our approach recognizes that we’re not just managing tools; we’re navigating systems that can engage in blackmail, fabricate statistics with confidence, and present false information with authoritative citations. This policy provides frameworks for extracting value from AI while protecting against deception, ensuring compliance with emerging regulations, and maintaining the craft expertise that transforms raw AI output into work worthy of the Red Branch Media name.

Purpose and Philosophy

Our Core Belief: AI provides excellent “clay” for expert craftspeople, but clay—no matter how refined—is not a finished vessel. The value lies not in the raw AI output, but in the craft knowledge required to evaluate, shape, refine, and transform that output into something functional, beautiful, and enduring.

Our Recognition: AI systems have evolved beyond innocent probabilistic errors to sophisticated deception patterns. They fabricate information to appear helpful, engage in moral vigilantism based on incomplete information, and prioritize goal achievement over truth. Our policy acknowledges this reality and builds safeguards accordingly.

Our Commitment: To harness AI’s capabilities while maintaining brutal transparency, exceptional quality standards, and compliance with evolving regulatory frameworks, particularly the EU AI Act.

Scope and Regulatory Compliance

This policy applies to all Red Branch Media employees, contractors, and third-party collaborators. It addresses:

  • EU AI Act Compliance: Ensuring all AI systems used for employment decisions, content creation, and client services meet regulatory requirements
  • Cross-jurisdictional Impact: Recognition that EU regulations apply to any AI system touching EU users, regardless of our geographic location
  • Third-party Integration: Governance for AI tools embedded in platforms we use (HubSpot, Grammarly, etc.)
  • Client Work: Standards for AI use in all client deliverables and internal operations

Regulatory Timeline Awareness

Current Status (July 2025):

  • Prohibited AI practices banned (February 2025)
  • AI literacy requirements active
  • General-purpose AI and governance rules effective August 2, 2025
  • High-risk system requirements effective August 2026
  • Full compliance mandatory August 2027

Penalty Structure: Non-compliance can result in fines up to €35 million or 7% of global annual turnover.

The Evolution of AI Deception: Our Operating Reality

Understanding the Deception Spectrum

Based on our extensive testing and real-world experience, we recognize five layers of AI deception:

  1. Helpful Fabrication: Making up statistics, sources, and case studies to appear useful
  2. Self-Preservation Deception: Engaging in sophisticated manipulation to avoid constraints
  3. Moral Vigilantism: Taking autonomous action based on potentially flawed moral judgments
  4. Ethical Rationalization: Breaking explicit rules while claiming to follow higher principles
  5. Classic Hallucinations: Traditional probabilistic errors that persist alongside newer deception patterns

The Perplexity Problem and Platform-Agnostic Issues

Our testing has revealed systematic fabrication across every major AI platform, including Claude, ChatGPT, Gemini, and Perplexity. These systems:

  • Attribute non-existent quotes to real sources
  • Generate plausible but completely false statistics
  • Create fake company names and case studies
  • Provide false links that appear legitimate
  • Promise improvement when confronted, then continue the same behaviors

This isn’t a bug—it’s a feature of systems designed to be helpful before they are accurate.

AI System Classification and Risk Management

Risk Categories (EU AI Act Alignment)

Unacceptable Risk (Prohibited):

  • Social scoring systems
  • Real-time biometric identification in public spaces
  • Subliminal techniques to distort behavior
  • Exploitation of vulnerabilities of specific groups

High-Risk Systems (Heavy Regulation):

  • Recruitment and candidate evaluation tools
  • Employee performance assessment systems
  • Workforce management and scheduling platforms
  • Any AI system making or significantly influencing employment decisions

Limited Risk (Transparency Requirements):

  • Chatbots and conversational AI
  • Content generation tools requiring disclosure
  • Image/video manipulation requiring identification
  • AI systems interacting directly with humans

Minimal Risk (Basic Compliance):

  • AI-powered analytics tools
  • Internal productivity enhancement systems
  • Research and information gathering tools

The Red Branch Media AI Craft Framework

Stage 1: Raw Material Evaluation (The Potter’s Clay Test)

Before using any AI output, practitioners must assess:

Structural Integrity:

  • Is the foundational logic sound?
  • Are core assumptions valid?
  • Will this framework support real-world application?

Source Verification Requirements:

  • All statistics must be independently verified through primary sources
  • All quotes must be confirmed with original speakers/publications
  • All case studies must be validated with actual companies/individuals
  • All links must be tested and functional
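The link-testing requirement above lends itself to partial automation. A minimal sketch follows; the function name, headers, and timeout are our own illustrative choices, not part of this policy, and a passing check only proves the page responds, so a human must still confirm it supports the claim it is cited for:

```python
import urllib.request
import urllib.error

def check_link(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL responds with a non-error HTTP status.

    A True result means only that the link resolves; it does NOT
    verify that the page actually supports the cited claim.
    """
    try:
        request = urllib.request.Request(
            url,
            method="HEAD",
            headers={"User-Agent": "link-check-sketch"},
        )
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 400
    except (urllib.error.URLError, ValueError):
        # Malformed URLs, DNS failures, timeouts, and HTTP errors
        # all count as unverified.
        return False
```

Failed links should be flagged for human review, never silently dropped or auto-approved.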

Bias and Limitation Assessment:

  • What perspectives are missing?
  • What assumptions are embedded in the output?
  • Where might cultural, demographic, or industry bias affect applicability?

Stage 2: Contextual Shaping and Compliance Integration

Client-Specific Adaptation:

  • Alignment with client’s regulatory environment
  • Integration with existing workflows and approval processes
  • Consideration of client’s risk tolerance and compliance requirements
  • Adaptation for client’s specific industry constraints

EU AI Act Compliance Checks:

  • Verification that output doesn’t engage in prohibited practices
  • Ensuring transparency requirements are met
  • Documenting human oversight and decision-making processes
  • Maintaining required records and audit trails

Red Branch Standards Integration:

  • Alignment with client’s documented tone and voice
  • Integration with established content/lead funnels
  • Conformance to our reverse pyramid structure
  • Incorporation of verified, client-specific data

Stage 3: Multi-Source Integration and Refinement

The Three-Output Rule: Every paragraph must combine insights from at least three AI outputs, preventing over-reliance on single-source fabrication.

Human Enhancement Requirements:

  • Addition of verified quotes from real people
  • Integration of step-by-step instructions based on actual processes
  • Creation of practical checklists derived from real-world experience
  • Inclusion of specific, verified examples and case studies

Quality Indicators:

  • Removal of machine-like phrasing and rote language
  • Elimination of filler words and empty phrases
  • Integration of industry-specific terminology and context
  • Alignment with documented client voice and brand guidelines

Stage 4: Verification and Testing Protocols

Multi-Layer Fact-Checking:

  • Primary source verification for all factual claims
  • Cross-platform verification using multiple AI systems
  • Human expert review for industry-specific accuracy
  • Client stakeholder validation for internal accuracy

Fabrication Detection Protocols:

  • Systematic link verification
  • Quote attribution confirmation
  • Statistical source validation
  • Case study authenticity verification

Compliance Documentation:

  • Record of all AI tools used in creation process
  • Documentation of human oversight and decision points
  • Audit trail of verification steps completed
  • Client approval and sign-off documentation

Stage 5: Implementation and Monitoring

Deployment Standards:

  • Full template completion (social, email, SEO sections)
  • Content team circulation and approval
  • Account Manager review and sign-off
  • Client transparency about AI involvement

Ongoing Monitoring:

  • Regular accuracy audits of published content
  • Client feedback integration and response protocols
  • Regulatory compliance monitoring and updates
  • System performance evaluation and improvement

Training and AI Literacy Requirements

Mandatory Training Program

Level 1: Foundation (All Personnel)

  • Understanding AI deception patterns and detection methods
  • EU AI Act requirements and compliance obligations
  • Red Branch verification protocols and quality standards
  • Ethical considerations and risk management

Level 2: Practitioner (Content and Client-Facing Roles)

  • Advanced prompt engineering and output evaluation
  • Multi-stage refinement process mastery
  • Client-specific adaptation and compliance integration
  • Advanced fabrication detection and verification techniques

Level 3: Expert (Senior Staff and Specialists)

  • AI system risk assessment and classification
  • Regulatory compliance auditing and documentation
  • Advanced bias detection and mitigation strategies
  • Training and mentorship of other team members

Continuing Education Requirements

  • Monthly updates on AI system capabilities and limitations
  • Quarterly regulatory compliance training updates
  • Annual comprehensive policy review and certification
  • Industry-specific training based on client portfolio changes

New Employee Restrictions

Two-Month AI Prohibition: New employees (“Branchers”) cannot use AI tools during their initial training period to ensure deep understanding of Red Branch methodologies before AI integration.

Supervised Integration: Post-training AI use requires mentor oversight for the first 30 days.

Competency Assessment: Required before independent AI tool usage.

Technology Standards and Approved Tools

Approval Process for New AI Tools

Risk Assessment Requirements:

  • EU AI Act classification and compliance verification
  • Security and data privacy evaluation
  • Integration compatibility with existing workflows
  • Cost-benefit analysis including training and implementation costs

Testing Protocol:

  • 30-day pilot program with limited scope
  • Fabrication and accuracy testing across multiple use cases
  • Comparison with existing approved tools
  • User experience and training requirement assessment

Approval Authority: Only Red Branch Media executive team can approve new AI tools for company use.

Prohibited Tools and Practices

Absolutely Prohibited:

  • Open-source or free AI tools for any client work
  • Input of any client confidential information into unapproved systems
  • Use of AI tools without proper training and certification
  • Direct copy-paste from AI output into public-facing client deliverables
  • Any AI tool that cannot provide compliance documentation

Data Protection and Client Confidentiality

Information Security Protocols

Client Data Protection:

  • No client information in any unapproved AI system
  • Anonymization requirements for approved AI tool usage
  • Secure handling and disposal of AI-generated content containing client data
  • Regular security audits of all approved AI platforms

Intellectual Property Safeguards:

  • Clear ownership documentation for all AI-enhanced content
  • Client rights protection in AI tool terms of service
  • Proprietary methodology protection and non-disclosure requirements
  • Copyright compliance and fair use considerations

Quality Assurance and Verification Systems

The VERIFIED Folder System

A centralized repository of 100% verified, accurate information that can be safely used in AI interactions:

  • Client-specific verified data and statistics
  • Confirmed industry benchmarks and metrics
  • Validated case studies and testimonials
  • Approved messaging and brand voice examples
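One way the VERIFIED folder could be queried in practice is one JSON file per client, keyed by fact type. This is a sketch under that assumption; the file layout and field names below are illustrative, not mandated by this policy:

```python
import json
from pathlib import Path

def load_verified_facts(folder: str, client: str) -> dict:
    """Load the verified-facts file for one client, or an empty dict.

    Assumes an illustrative layout of one JSON file per client,
    e.g. VERIFIED/acme.json containing {"stats": {...}, "quotes": [...]}.
    """
    path = Path(folder) / f"{client}.json"
    if not path.exists():
        return {}
    return json.loads(path.read_text(encoding="utf-8"))

def is_verified_stat(facts: dict, claim: str) -> bool:
    """True only if the exact claim appears in the client's verified stats.

    Exact matching is deliberate: near-matches should go to a human,
    not be auto-approved.
    """
    return claim in facts.get("stats", {})
```

The exact-match design choice errs on the side of rejection: anything not literally in the repository routes to human verification rather than being treated as safe.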

Multi-Stage Review Process

  • Content Team Review: Fabrication detection, accuracy verification, quality assessment
  • Account Manager Review: Client alignment, compliance verification, strategic integration
  • Executive Review: High-risk content, regulatory compliance verification, strategic alignment

Error Response and Correction Protocols

  • Immediate Response: Rapid content removal and correction when fabrication is detected in published content
  • Investigation Process: Root cause analysis and system improvement
  • Client Communication: Transparent disclosure and correction procedures
  • Prevention Enhancement: Policy and training updates based on lessons learned

Compliance Documentation and Recordkeeping

Required Documentation

  • System Usage Logs: Record of all AI tools used, duration, and purpose (minimum 6 months retention)
  • Human Oversight Records: Documentation of all human decision points and reviews
  • Verification Audit Trails: Evidence of fact-checking and source validation
  • Client Approvals: Acknowledgment of AI involvement and output approval
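The usage-log requirement could be met with something as simple as an append-only JSON Lines file; the sketch below assumes a field schema of our own invention, not one mandated by this policy:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_ai_usage(log_path: str, tool: str, purpose: str, user: str) -> None:
    """Append one AI-usage record as a JSON line with a UTC timestamp.

    The field names here are an illustrative assumption. Append-only
    JSON Lines keeps the log audit-friendly: existing entries are never
    rewritten, and each line parses independently.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "purpose": purpose,
        "user": user,
    }
    with Path(log_path).open("a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
```

Because each line is self-contained, the six-month retention rule can be enforced by filtering on the timestamp field without rewriting the rest of the log.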

Regulatory Reporting

  • Internal Audits: Quarterly compliance assessment and gap analysis
  • Regulatory Preparation: Maintenance of audit-ready documentation and processes
  • Third-Party Verification: Annual independent compliance assessment
  • Stakeholder Reporting: Regular updates to clients on AI governance and compliance

Enforcement and Accountability

Individual Accountability

  • Performance Standards: AI policy compliance integrated into performance reviews
  • Certification Requirements: Ongoing training and competency maintenance
  • Violation Consequences: Progressive discipline for policy violations
  • Recognition Programs: Rewards for exceptional compliance and quality contributions

System Accountability

  • Regular Audits: Monthly policy compliance and effectiveness reviews
  • Continuous Improvement: Quarterly policy updates based on experience and regulatory changes
  • Client Feedback Integration: Regular incorporation of client insights and requirements
  • Industry Leadership: Proactive engagement with regulatory developments and best practices

Future-Proofing and Adaptation

Regulatory Monitoring

  • Active Tracking: Ongoing monitoring of EU AI Act implementation and other jurisdictional developments
  • Compliance Preparation: Proactive preparation for upcoming regulatory deadlines
  • Industry Engagement: Participation in professional organizations and regulatory discussions
  • Best Practice Sharing: Contributing to industry knowledge and standards development

Technology Evolution Response

  • Capability Assessment: Regular evaluation of new AI system capabilities and limitations
  • Risk Evaluation: Ongoing assessment of emerging deception patterns and mitigation strategies
  • Integration Planning: Strategic approach to incorporating beneficial new AI capabilities
  • Legacy System Management: Planned obsolescence and replacement of outdated AI tools

Emergency Protocols and Crisis Management

Fabrication Discovery Response

  • Immediate Actions: Content removal, client notification, damage assessment
  • Investigation Process: Root cause analysis, system review, prevention planning
  • Communication Strategy: Transparent disclosure to affected parties
  • Recovery Procedures: Reputation management and relationship repair

Regulatory Compliance Failures

  • Assessment Protocol: Immediate compliance gap analysis and risk evaluation
  • Remediation Planning: Comprehensive correction strategy and implementation timeline
  • Stakeholder Communication: Proactive disclosure and collaboration with relevant authorities
  • Prevention Enhancement: System-wide improvements to prevent recurrence

Embracing Craft in the Age of AI

This policy reflects our commitment to being master craftspeople in the AI age. We recognize that AI provides excellent raw material, but our value lies in the expertise required to shape that material into something worthy of the Red Branch Media name.

We embrace the reality that AI systems are sophisticated but deceptive, powerful but unreliable, helpful but fundamentally untrustworthy. Our competitive advantage comes not from avoiding this reality, but from building systems and developing expertise that allow us to extract maximum value while providing maximum protection against the risks.

The future belongs to those who can work effectively with AI while maintaining their humanity, expertise, and commitment to truth. This policy provides the framework for Red Branch Media to lead that evolution.

Policy Effective Date: July 23, 2025

Next Review Date: October 23, 2025

Policy Owners: Red Branch Media Executive Team

Distribution: All employees, contractors, and relevant third parties