Episodes

  • EP 37: AI Content Creation: 3x Output, Half the Cost
    Feb 25 2026

    The numbers are staggering: 96% of companies now use generative AI for content production. Companies report 3-5x more content output, 30-50% cost savings, and 50% reductions in creation time. This isn't incremental improvement—it's transformational change in how marketing teams operate.

    AI content creation in 2025 encompasses far more than ChatGPT writing blog posts. We're talking about integrated workflows governing ideation, creation, distribution, and analytics. Tools like Jasper, Copy.ai, and ContentBot handle everything from drafting to scheduling and multi-platform distribution. The sophistication has moved far beyond simple text generation.

    Limitations remain clear: AI struggles with truly original creative thinking—breakthrough ideas that redefine categories. It excels at recombining existing concepts but genuine innovation requires human creativity. AI lacks emotional intelligence and cultural nuance, can mimic empathy but doesn't actually understand context the way humans do, and generates confidently wrong information (hallucinations), which is why human fact-checking remains non-negotiable.

    Looking ahead, the strategic implication is marketing teams shifting focus from production to strategy. When AI handles volume, humans focus on insight, positioning, and differentiation. Small teams can now compete with large enterprises because production bottlenecks disappear.

    19 m
  • EP 36: AI Personalization: From Segments to Individuals
    Feb 25 2026

    AI personalization has evolved dramatically from basic segmentation to true individual-level customization. McKinsey's 2025 research shows businesses using advanced personalization techniques are seeing 10-15% revenue increases, with 89% of decision makers saying AI-driven personalization will be critical in the next three years. This isn't optional anymore; it's competitive survival.

    Consumer expectations have shifted dramatically. 72% of consumers say they only engage with marketing messages tailored to their interests, and 90% are happy to share personal data if the result is a smoother, more personalized experience. However, they want immediate tangible value in exchange—brands can't just collect data and hope customers will be patient.

    Looking ahead to 2026, generative AI will create not just personalized messages but personalized imagery, video, and even product configurations. Adobe's 2025 Digital Trends Report shows 58% of teams seeing GenAI ROI expect better quality customer interactions in the next 12-24 months. The winners will be brands that see personalization as a system, not just a tactic: building predictive models into planning cycles while maintaining human oversight on privacy and ethics.

    12 m
  • EP 35: AI Algorithmic Trading: The New Market Makers
    Feb 22 2026

    Welcome to the final episode of the AI in Finance series, exploring algorithmic trading and AI market makers—genuinely the wild west of AI in finance. Here's context most people don't realize: 60-70% of equity market volume already comes from algorithmic trading, with high-frequency trading alone accounting for roughly 50%. When you think about the stock market, you're thinking about a system that's already majority AI and algorithms, not human traders.

    Sam and Mac explore what fundamentally differentiates AI algorithmic trading from traditional algorithmic trading. Traditional algorithms follow fixed rules: if condition X, then execute action Y—deterministic and predictable. AI algorithms learn and adapt dynamically, recognizing complex patterns across multiple variables, adjusting strategies in real time based on changing market conditions, and optimizing behaviors continuously.
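    That contrast can be sketched in a few lines of Python. This is an illustrative toy, not any real trading system: the fixed rule always fires on the same condition, while the adaptive strategy keeps re-estimating what a "normal" price is. The threshold, learning rate, and 5% margin are invented for the example.

```python
# Toy contrast between a fixed-rule strategy and an adaptive one.
# All names and thresholds here are illustrative, not real trading logic.

def fixed_rule_signal(price: float, threshold: float = 100.0) -> str:
    """Deterministic: same input, same output, forever."""
    return "BUY" if price < threshold else "HOLD"

class AdaptiveSignal:
    """Adapts its buy threshold to a running mean of observed prices."""
    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha   # learning rate for the running mean
        self.mean = None     # current estimate of a "normal" price

    def update_and_signal(self, price: float) -> str:
        if self.mean is None:
            self.mean = price
        else:
            # exponentially weighted moving average of recent prices
            self.mean = (1 - self.alpha) * self.mean + self.alpha * price
        return "BUY" if price < 0.95 * self.mean else "HOLD"

prices = [98, 102, 150, 140]
fixed = [fixed_rule_signal(p) for p in prices]
adaptive = AdaptiveSignal()
learned = [adaptive.update_and_signal(p) for p in prices]
```

    The fixed rule buys at 98 forever; the adaptive strategy instead drifts its notion of "cheap" upward as prices rise, which is the learn-and-adapt behavior described above in miniature.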

    The technical models include reinforcement learning (AI learning optimal strategies through trial and error in simulations), LSTMs for time series prediction, and increasingly transformer models adapted for financial data—same basic architecture as ChatGPT but trained on market data instead of language. These models are exceptional at understanding that the same price movement means different things in different contexts: high volatility versus low volatility, bull market versus bear market.
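    That context point is worth making concrete. A minimal sketch, assuming we simply rescale a price move by recent volatility (a z-score), shows how the identical move reads as an outlier in a calm regime and as routine in a turbulent one; the sample values are invented.

```python
import statistics

def volatility_adjusted_move(move: float, recent_moves: list[float]) -> float:
    """Express a price move in units of recent volatility (a z-score).
    The same absolute move is routine in a volatile regime and an
    outlier in a calm one."""
    vol = statistics.stdev(recent_moves)
    return move / vol

calm = [0.1, -0.2, 0.15, -0.1, 0.05]      # low-volatility regime
turbulent = [2.0, -3.0, 2.5, -1.5, 1.0]   # high-volatility regime

same_move = 1.0
z_calm = volatility_adjusted_move(same_move, calm)
z_turbulent = volatility_adjusted_move(same_move, turbulent)
```

    Real models learn far richer context than one volatility ratio, but the principle is the same: the raw number only acquires meaning relative to the regime it occurs in.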

    The regulatory landscape remains challenging. The SEC requires reasonable oversight, but defining "reasonable" for systems executing thousands of trades per second is genuinely difficult. In practice, this means kill switches, risk limits built into algorithms, monitoring systems that flag unusual patterns, and automatic shutoffs when volatility triggers occur.
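    Those controls can be sketched as a toy circuit breaker. The volatility limit, window size, and return series below are illustrative assumptions, not anything prescribed by regulation.

```python
class CircuitBreaker:
    """Minimal sketch of the risk controls described above: a kill
    switch that halts trading when realized volatility over a recent
    window exceeds a limit. All numbers are illustrative."""
    def __init__(self, vol_limit: float, window: int = 5):
        self.vol_limit = vol_limit
        self.window = window
        self.returns: list[float] = []
        self.halted = False

    def observe(self, ret: float) -> None:
        self.returns.append(ret)
        recent = self.returns[-self.window:]
        if len(recent) == self.window:
            mean = sum(recent) / len(recent)
            var = sum((r - mean) ** 2 for r in recent) / len(recent)
            if var ** 0.5 > self.vol_limit:
                self.halted = True   # automatic shutoff; no auto-reset

    def may_trade(self) -> bool:
        return not self.halted

breaker = CircuitBreaker(vol_limit=0.02)
for r in [0.001, -0.002, 0.001, 0.000, -0.001]:   # calm tape
    breaker.observe(r)
calm_ok = breaker.may_trade()
for r in [0.05, -0.06, 0.04]:                     # volatility spike
    breaker.observe(r)
spiked_ok = breaker.may_trade()
```

    The design choice that matters is the one-way latch: once volatility trips the breaker, trading stays halted until a human intervenes, which is exactly the kind of oversight regulators ask for.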

    15 m
  • EP 34: AI in Credit and Lending: Democratizing Access or Amplifying Bias?
    Feb 22 2026

    AI in credit decisions is genuinely controversial because it could either democratize lending and expand access to underserved populations or take historical discrimination and amplify it at scale. The reality is both are happening simultaneously in different institutions—it all depends on how intentionally the AI is designed and monitored for fairness.

    Sam and Mac examine how AI is disrupting traditional credit scoring. FICO scores have dominated for decades using limited data: payment history, credit utilization, length of credit history, types of credit, and recent inquiries. This approach systematically excludes millions who don't have traditional credit histories, even if they're perfectly responsible with money and would be excellent borrowers.

    The technical models include XGBoost as the industry standard and neural networks, whose hidden layers can capture nonlinear patterns across larger datasets. Traditional logistic regression is often a poor fit for real-world credit behavior. Banks need model governance with clear ownership, regular bias testing, robust explainability, and human oversight for complex cases. AI handles straightforward approvals and denials; humans handle the middle—complex situations requiring judgment and contextual understanding.
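    That division of labor can be sketched directly, assuming the model emits a probability-of-repayment score; the cutoff values below are invented for illustration, not industry thresholds.

```python
def route_application(score: float,
                      approve_above: float = 0.85,
                      deny_below: float = 0.30) -> str:
    """Route a model's probability-of-repayment score: automation for
    the clear cases at either end, human judgment for the ambiguous
    middle band. The cutoffs are illustrative assumptions."""
    if score >= approve_above:
        return "auto-approve"
    if score <= deny_below:
        return "auto-deny"
    return "human-review"

decisions = [route_application(s) for s in (0.92, 0.55, 0.10)]
# -> ['auto-approve', 'human-review', 'auto-deny']
```

    Widening or narrowing the middle band is itself a governance decision: a wider band means more human review and less automation, which is one concrete lever for the oversight the episode describes.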

    15 m
  • EP 33: AI in Compliance: Turning Regulation into Competitive Advantage
    Feb 22 2026

    Compliance has traditionally been viewed as a pure cost center—regulatory overhead that doesn't generate revenue. But AI is fundamentally changing this equation by turning compliance from a defensive obligation into an actual strategic advantage. New LSTM networks are achieving 94.2% accuracy in compliance monitoring while simultaneously cutting false positives dramatically.

    Sam and Mac explore why AI in compliance might be the biggest impact area that nobody is talking about. The false positive problem has always made compliance painful and expensive—traditional systems generated massive false positive rates, with analysts drowning in alerts where 95% turned out to be completely legitimate activity. This creates compliance fatigue where analysts become desensitized because so many alerts are false.

    The episode covers AI's impact across major regulatory areas: AML (Anti-Money Laundering), KYC (Know Your Customer), Sanctions Screening, and Trade Surveillance. For AML, AI narrows down suspicious patterns while letting routine activity pass without alerts. For KYC, banks report 78% faster onboarding times and 85% reduction in manual review—customers approved in an hour instead of days.

    AI must be transparent and auditable. The future is shifting from reacting to violations to preventing them entirely, flagging patterns on day three instead of catching problems on day 30, saving millions in potential federal lawsuits.

    15 m
  • EP 31: AI in Stock Prediction: The Stanford Study That Outperformed 93% of Fund Managers
    Feb 22 2026

    Stanford just dropped a bombshell study: an AI analyst made 30 years of stock picks and outperformed 93% of human mutual fund managers by an average of 600 basis points—that's 6% annually. This is absolutely massive in the investment world, kicking off Inside AssembleAI's AI in Finance series with the technology that's shaking Wall Street.

    Here's what's fascinating: the AI mostly used simple variables, not the sophisticated ones everyone expected. Firm size and dollar trading volume were dominant factors, but it used complex AI techniques to squeeze maximum predictive value from simple data everyone can access. The insight isn't about finding hidden data; it's about extracting more signal from obvious data. Any investment firm could have had this data in the pre-AI era, but it was simply too costly to justify economically.

    Sam and Mac explore three main approaches institutions use today: pattern recognition for known scenarios (AI learns what fraud or manipulation looks like), anomaly detection for unknown threats (establishing what's normal and alerting on deviations), and predictive analytics for future behavior (forecasting what's likely to happen next). All happening in real time, in milliseconds: the game changer compared to legacy systems.
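    The anomaly-detection approach (establish what's normal, alert on deviations) can be sketched in a few lines; the baseline figures and the 3-sigma threshold are invented for illustration.

```python
import statistics

def detect_anomalies(baseline: list[float], new_values: list[float],
                     z_threshold: float = 3.0) -> list[float]:
    """Learn what's 'normal' from a baseline window, then flag any new
    value more than z_threshold standard deviations from its mean."""
    mean = statistics.mean(baseline)
    std = statistics.stdev(baseline)
    return [v for v in new_values if abs(v - mean) / std > z_threshold]

# Typical daily transaction totals for one account (toy numbers).
baseline = [100, 105, 98, 102, 101, 99, 103, 97, 100, 104]
alerts = detect_anomalies(baseline, [101, 99, 250, 103])
# -> [250]
```

    Production systems replace the single z-score with learned multivariate baselines, but the shape is identical: no fixed rule lists what fraud looks like; only the deviation from learned normal triggers the alert.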

    The data quality issue compounds everything—garbage in, garbage out. Models require at least five years of high-quality historical data for reliable results, and even then, past performance doesn't guarantee future success. Looking ahead to 2026, expect more hedge funds adopting sophisticated AI systems, models incorporating multi-modal data like satellite imagery and social sentiment, intensifying regulatory scrutiny, and continued democratization as retail investors gain access to tools that were exclusive to hedge funds just years ago.

    16 m
  • EP 30: Healthcare Data Security in The AI Era
    Feb 22 2026

    In 2024, a single cyber attack exposed the medical records of 190 million Americans. As healthcare organizations rush to adopt AI—with 38% now using it regularly—a new crisis is emerging: how do we harness AI's transformative power while protecting the most sensitive data we possess? This episode tackles the critical intersection of AI innovation and healthcare data security, where the stakes couldn't be higher.

    Sam and Mac reveal alarming statistics that healthcare executives can't afford to ignore: AI privacy incidents surged 56.4% in 2024, with 72% of healthcare organizations citing data privacy as their top AI risk. The average healthcare breach now costs $11.07 million per incident, yet only 17% of organizations have technical controls in place to prevent data leaks. The math is terrifying—and the problem is accelerating.

    The conversation explores how AI fundamentally changes the threat model in healthcare. Unlike traditional software that processes data according to fixed rules, AI models can unintentionally retain sensitive patient information from training data, creating new vulnerabilities that standard security practices weren't designed to address. Shadow AI—unauthorized AI tools used by employees handling sensitive data—poses massive compliance risks that most organizations haven't even begun to map.

    But this isn't just a doom-and-gloom episode. Sam and Mac outline emerging solutions that could reshape how healthcare handles AI and data security. Federated learning allows AI models to train across multiple institutions without patient data ever leaving its original location, enabling collaboration without exposure. Synthetic data can mimic real patient populations for AI training without using actual patient information, dramatically reducing privacy risks while maintaining analytical value.
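    Federated learning's core loop (often called federated averaging) can be sketched with scalar "weights"; the hospital datasets and learning rate are toy values, and real systems exchange full parameter tensors plus additional privacy protections.

```python
# Federated averaging sketch: each institution computes a model update
# on its own data; only the updated weights (never the records) are
# shared and averaged into a global model.

def local_update(global_weight: float, local_data: list[float],
                 lr: float = 0.1) -> float:
    """One gradient step fitting a mean under squared-error loss
    (constant factors folded into the learning rate)."""
    grad = sum(global_weight - x for x in local_data) / len(local_data)
    return global_weight - lr * grad

def federated_round(global_weight: float,
                    institutions: list[list[float]]) -> float:
    updates = [local_update(global_weight, data) for data in institutions]
    return sum(updates) / len(updates)   # server sees weights, not data

# Three hospitals; each dataset stays on-premises for the whole run.
hospitals = [[4.0, 6.0], [5.0, 5.0], [7.0, 3.0]]
w = 0.0
for _ in range(100):
    w = federated_round(w, hospitals)    # converges toward the mean, 5.0
```

    The key property is visible in the code: the server-side function only ever touches returned weights, so patient records never cross institutional boundaries, which is the "collaboration without exposure" the episode describes.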

    Looking forward, the episode emphasizes that stronger regulations and compliance practices aren't obstacles to AI adoption—they're prerequisites for sustainable innovation. Patient trust is healthcare's most valuable asset, and once lost through a major AI-related breach, it may be impossible to recover. The organizations that will thrive in the AI era are those that treat data protection not as a compliance checkbox but as a competitive advantage and moral imperative.

    Key topics covered:

    • The 2024 cyber attack exposing 190 million American medical records

    • Why 72% of healthcare organizations cite data privacy as their top AI risk

    • The 56.4% surge in AI privacy incidents involving PII (personally identifiable information)

    • Healthcare breach costs: $11.07 million average per incident

    • Shadow AI risks: unauthorized tools handling sensitive patient data

    • Why only 17% of organizations have adequate technical controls

    • How AI models unintentionally retain sensitive training data

    • Federated learning: training AI without data leaving institutions

    • Synthetic data: mimicking real populations without using actual patient information

    • The regulatory landscape and need for stronger compliance frameworks

    • Balancing innovation velocity with responsible AI practices

    • Privacy-preserving techniques: differential privacy and secure multi-party computation

    • Patient trust as healthcare's most critical asset in the AI era

    • Practical governance frameworks for healthcare AI implementation
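    Of the privacy-preserving techniques listed, differential privacy is the easiest to make concrete. Below is a toy Laplace-mechanism sketch for a counting query; the epsilon value and cohort count are invented for illustration.

```python
import random
import statistics

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Laplace mechanism for a counting query (sensitivity 1): add
    noise drawn from Laplace(0, 1/epsilon), sampled here as the
    difference of two exponentials. Smaller epsilon means stronger
    privacy and more noise."""
    noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
    return true_count + noise

rng = random.Random(0)
# Each individual release is noisy enough to mask any one patient's
# presence; aggregate statistics remain recoverable for analysts.
releases = [dp_count(1000, epsilon=1.0, rng=rng) for _ in range(5000)]
avg = statistics.mean(releases)
```

    This is the trade-off the episode points at: a single noisy release protects the individual, while the analytical value of the data survives in aggregate.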

    This episode is essential listening for healthcare executives navigating AI adoption, data security professionals protecting sensitive information, technology leaders implementing AI systems, and anyone concerned about the privacy implications of AI in medicine. Sam and Mac cut through the hype to deliver actionable insights on one of healthcare's most pressing challenges: how to innovate responsibly in an era where a single breach can expose hundreds of millions of records.

    18 m