MLOps.community Podcast

By: Demetrios

Relaxed conversations about getting AI into production, in whatever shape that comes (agentic systems, traditional ML, LLMs, vibes, etc.).
Episodios
  • The Modern Software Engineer
    Apr 14 2026

    A conversation with Mihail Eric.

    54 m
  • How We Cut LLM Latency 70% With TensorRT in Production
    Apr 10 2026

    Maher Hanafi is an engineering leader who went from zero AI experience to self-hosting LLMs at enterprise scale — managing GPU costs, optimizing inference with TensorRT LLM, and building an AI platform for HR tech. In this conversation, he breaks down exactly how his team cut latency by 70%, reduced GPU spend through counterintuitive scaling strategies, and navigated the messy reality of taking AI from proof-of-concept to production.


    How We Cut LLM Latency 70% With TensorRT in Production // MLOps Podcast #369 with Maher Hanafi, SVP of Engineering at Betterworks


    Key topics covered:

    The AI Iceberg — Why the invisible work behind AI (performance, latency, throughput, cost, accuracy) is harder than building the features themselves

    GPU Cost Optimization — How upgrading to more expensive GPUs actually saved money by reducing total runtime hours

    TensorRT LLM Deep Dive — Rewiring neural networks to match GPU architecture for 50-70% latency reduction

    Cold Start Solutions — Using AWS FSx, baking models into container images, and cutting minutes off spin-up times

    KV Cache & In-Flight Batching — Why using one model per GPU with maximum KV cache beats cramming multiple models together

    Scheduled & Dynamic Scaling — Pattern-based scaling for HR tech workloads (nights, weekends, end-of-quarter spikes)

    Verticalized AI Platform — Building horizontal AI infrastructure that serves multiple HR product verticals

    AI Engineering Lab — How junior vs. senior engineers adopted AI coding tools differently, and the cultural shift that followed

    Agentic Coding in Practice — Navigating AI coding agent costs, quality control, and redefining the SDLC

    Chinese Models & Compliance — Why enterprise customers block DeepSeek/Qwen and the geopolitics of model training data
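
    The GPU cost point above comes down to back-of-the-envelope arithmetic: a pricier GPU can cost less overall if it finishes the same workload in proportionally fewer hours. The sketch below illustrates the idea with hypothetical rates and throughputs, not figures from the episode.

```python
def job_cost(hourly_rate: float, tokens: int, tokens_per_hour: float) -> float:
    """Total cost = hourly rate x hours needed to process the workload."""
    hours = tokens / tokens_per_hour
    return hourly_rate * hours

# Hypothetical instances: the bigger GPU costs 2.5x more per hour
# but runs inference 4x faster, so the same job ends up cheaper.
small = job_cost(hourly_rate=1.0, tokens=1_000_000, tokens_per_hour=100_000)  # 10 h -> 10.0
large = job_cost(hourly_rate=2.5, tokens=1_000_000, tokens_per_hour=400_000)  # 2.5 h -> 6.25
print(small, large)
```

    The crossover only holds when the speedup outpaces the price premium; otherwise the cheaper GPU wins.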


    This episode is for engineering leaders building AI in production, MLOps engineers optimizing GPU infrastructure, and anyone navigating the gap between AI demos and enterprise-scale deployment.
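
    The scheduled-scaling topic above can be sketched in a few lines: HR workloads follow calendar patterns (quiet nights and weekends, end-of-quarter review spikes), so a baseline replica count can be derived from the clock before any reactive autoscaler takes over. The thresholds and replica counts here are hypothetical illustrations, not the policy described in the episode.

```python
from datetime import datetime

def scheduled_replicas(now: datetime) -> int:
    """Pick a baseline GPU replica count from calendar patterns.

    Hypothetical policy: scale down nights and weekends, scale up at
    the end of each quarter; a dynamic autoscaler would adjust from here.
    """
    end_of_quarter = now.month in (3, 6, 9, 12) and now.day >= 25
    weekend = now.weekday() >= 5          # Saturday=5, Sunday=6
    night = now.hour < 7 or now.hour >= 22

    if end_of_quarter:
        return 8   # review-cycle spike
    if weekend or night:
        return 1   # keep one warm replica to avoid cold starts
    return 4       # normal business hours

print(scheduled_replicas(datetime(2026, 3, 30, 14)))  # end-of-quarter weekday
```

    In practice this schedule would set the floor for a metric-driven autoscaler rather than replace it.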


    Links & Resources:

    TensorRT LLM: https://github.com/NVIDIA/TensorRT-LLM

    NVIDIA Run:ai Model Streamer (cold start optimization): https://developer.nvidia.com/blog/reducing-cold-start-latency-for-llm-inference-with-nvidia-runai-model-streamer/

    vLLM vs TensorRT-LLM comparison: https://northflank.com/blog/vllm-vs-tensorrt-llm-and-how-to-run-them


    Timestamps:

    0:00 — Intro & teaser clips

    1:00 — Maher's journey from traditional engineering to AI leadership

    4:30 — The AI iceberg: cost, performance, latency, throughput, accuracy

    8:00 — Managing AI coding agent costs & premium token budgets

    12:00 — GPU scaling strategies: scheduled, dynamic, and proactive

    16:00 — Cold start problem: FSx, baked images, and container optimization

    20:00 — TensorRT LLM: 50-70% latency reduction explained

    25:00 — KV cache, in-flight batching, and throughput optimization

    30:00 — The counterintuitive math: bigger GPUs = lower cost

    35:00 — Verticalized AI products for HR tech

    40:00 — Building a horizontal AI platform with preprocessing layers

    45:00 — AI feedback polishing: the feature that needed guardrails

    50:00 — AI Engineering Lab: adoption curves by seniority

    55:00 — Redefining the SDLC for AI-assisted development

    60:00 — Self-hosting coding agents & leveraging internal AI platform

    63:00 — Chinese models, compliance, and training data bias

    1 h 5 min
  • Getting Humans Out of the Way: How to Work with Teams of Agents
    Apr 7 2026

    Rob Ennals is a Staff Software Engineer at Uber, working on large-scale distributed systems and core backend infrastructure.


    Getting Humans Out of the Way: How to Work with Teams of Agents // MLOps Podcast #368 with Rob Ennals, the Creator of Broomy


    Join the Community: https://go.mlops.community/YTJoinIn

    Get the newsletter: https://go.mlops.community/YTNewsletter

    MLOps GPU Guide: https://go.mlops.community/gpuguide


    Abstract:

    Most people cripple coding agents by micromanaging them: reviewing every step and becoming the bottleneck. The shift isn't to supervise agents better, but to design systems where they work well on their own: parallelized, self-validating, and guided by strong processes. Done right, you don't lose control; you gain leverage. Like paving roads for cars, the real unlock is reshaping the environment so AI can move fast.


    Bio:

    Rob Ennals is the creator of Broomy, an open-source IDE designed for working effectively with many agents in parallel. He previously worked at Meta, Quora, Google Search, and Intel Research. He has a PhD in Computer Science from the University of Cambridge.


    Related Links:

    Website: https://robennals.org/

    https://broomy.org/

    https://learnai.robennals.org/ (not yet announced, but should be by the time of the podcast)


    Connect With Us:

    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore

    Join our Slack community: https://go.mlops.community/slack

    Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin

    Sign up for the next meetup: https://go.mlops.community/register

    MLOps Swag/Merch: https://shop.mlops.community/

    Connect with Demetrios on LinkedIn: /dpbrinkm

    Connect with Rob on LinkedIn: /robennals/

    Timestamps:

    [00:00] Agent Optimization Strategies

    [00:21] Visual Regression Explanation

    [05:35] Automated QA for Videos

    [13:05] Verification System Design

    [19:48] Agent Selection Strategies

    [30:48] Parallel Agent Management

    [35:30] Containerization and Cost Estimation

    [42:48] Shifting to Agent Orchestration

    [50:10] Wrap up

    51 min