
EDGE AI POD


By: EDGE AI FOUNDATION

Discover the cutting-edge world of energy-efficient machine learning, edge AI, hardware accelerators, software algorithms, and real-world use cases with this podcast feed covering all things edge AI, from the world's largest edge AI community.

The feed includes shows like EDGE AI Talks and EDGE AI Blueprints, as well as EDGE AI FOUNDATION event talks on a range of research, product, and business topics.

Join us to stay informed and inspired!

© 2026 EDGE AI FOUNDATION
Episodes
  • A Unified Neuromorphic Platform for Sparse, Low Power Computation
    Apr 2 2026

    Sensors are flooding the edge with data while CPUs juggle denoising, formatting, and inference. We built ADA to flip that script: a Turing-complete neuromorphic processor that computes with time-encoded spikes, slashing power, latency, and memory movement by keeping work inside an event-driven pipeline.

    We start by unpacking why conventional embedded architectures stall under modern workloads, from pre-processing bottlenecks to compromised security on battery-powered devices. Then we break down neuromorphic fundamentals—how spikes encode information and why sparsity matters—and compare general-purpose frameworks, highlighting the trade-offs that often inflate spiking activity or force manual network design. From there, we explain why we chose interval coding and how we solved its biggest flaw. By predicting future spike times, ADA avoids per-tick updates, reducing complexity from linear to logarithmic in the encoding precision and mapping neatly to simple add, multiply, and shift hardware.
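    To make the event-driven idea concrete, here is a minimal sketch (not the ADA instruction set, and the encoding details are assumptions) of interval coding: a value becomes a spike time, and an "add" neuron predicts its output spike time analytically instead of integrating state tick by tick.

```python
# Illustrative sketch of interval coding, NOT the actual ADA ISA:
# a value x is encoded as a spike at time t_ref + x, and an event-driven
# neuron computes its output spike time directly from its input times,
# skipping per-tick state updates entirely.

def encode(value, t_ref=0.0):
    """Interval coding: value x -> spike at time t_ref + x."""
    return t_ref + value

def decode(spike_time, t_ref=0.0):
    """Recover the value from the spike time."""
    return spike_time - t_ref

def predicted_sum_spike(t_a, t_b, t_ref=0.0):
    """Event-driven 'add' neuron: predict the output spike time
    analytically from the two input spike times."""
    return t_ref + (decode(t_a, t_ref) + decode(t_b, t_ref))

t_out = predicted_sum_spike(encode(3.0), encode(4.0))
print(decode(t_out))  # 7.0
```

    The point of the sketch is the prediction step: because the output time is computed in closed form, no work is done between events, which is what keeps the pipeline sparse.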

    You’ll hear how the architecture comes together: a tiny neuron core that fits in modest FPGAs, standard interfaces like UART and AER for DVS cameras, and our Axon SDK that compiles Python, NumPy, or C algorithms into deployable binaries—no neuron micromanagement required. We demo a three-tap FIR filter built from modular primitives and show ADA acting as a programmable pre-processing element for event vision. On the DVS128 gesture dataset, ADA’s spatial-temporal denoising cut downstream compute by over 50%, keeping the pipeline sparse and fast.
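    For reference, the behavior of the three-tap FIR demo can be written in a few lines of NumPy (the tap values here are illustrative assumptions; on ADA the filter is built from spiking primitives via the Axon SDK, not NumPy):

```python
import numpy as np

# Reference behavior of a 3-tap FIR filter like the one demoed on ADA.
taps = np.array([0.25, 0.5, 0.25])        # example coefficients (assumed)
x = np.array([0.0, 1.0, 0.0, 0.0, 0.0])   # unit impulse input

# y[n] = sum_k taps[k] * x[n - k]
y = np.convolve(x, taps)[:len(x)]
print(y)  # the impulse response reproduces the taps
```

    Feeding an impulse through the filter returns the tap values themselves, which is a quick sanity check for any FIR implementation, spiking or not.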

    Security gets equal attention. We extended the primitive set with modular arithmetic to support the polynomial math central to post-quantum cryptography such as Kyber. The result: 5x better power efficiency and a 2.5x improvement in energy-latency product over MCU baselines, with clear paths to reduce latency further. It points to neuromorphic cryptography that protects implants and IoT sensors without sacrificing battery life.
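    The polynomial math in question is multiplication in the ring Z_q[x]/(x^n + 1) that Kyber is built on (q = 3329, n = 256). A schoolbook sketch of that operation, purely for illustration and unrelated to how ADA's primitives realize it:

```python
Q = 3329   # Kyber's prime modulus
N = 256    # ring degree: Z_q[x] / (x^N + 1)

def polymul_negacyclic(a, b, q=Q, n=N):
    """Schoolbook multiplication in Z_q[x]/(x^n + 1): any product term
    whose degree reaches n wraps around with a sign flip (x^n ≡ -1)."""
    res = [0] * n
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        for j, bj in enumerate(b):
            if bj == 0:
                continue
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:
                res[k - n] = (res[k - n] - ai * bj) % q
    return res

# x^255 * x = x^256 ≡ -1 in this ring:
a = [0] * N; a[255] = 1
b = [0] * N; b[1] = 1
print(polymul_negacyclic(a, b)[0])  # 3328, i.e. -1 mod 3329
```

    Real implementations use the number-theoretic transform rather than this O(n^2) loop, but the modular-reduction structure is the part the extended primitive set has to support.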

    Ready to try it? The Axon SDK is publicly available. Give ADA a spin, share your toughest edge workload, and subscribe for more deep dives into neuromorphic computing. If this sparked ideas, leave a review and pass it to a friend building at the edge.

    Send us Fan Mail

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    20 m
  • From Fragments to Foundation: The Sound of Progress in Edge Audio AI
    Mar 26 2026

    What if your printer didn’t just spit out pages, but actually understood them? We walk through a hands-on look at multimodal AI on the edge—how visual-language models read layouts, extract tables, translate content, and reformat documents right where data lives, without shipping sensitive files to the cloud. It’s a practical tour from passive peripherals to active intelligence, with real workflows and measurable speedups.

    We share the architecture behind on-device document intelligence: pre-processing that stabilizes inputs, VLMs that localize and reason over text and images, and post-processing that converts outputs into CSVs, charts, and accessibility-friendly layouts. You’ll hear how Qwen 2.5-VL handles complex visual inputs while maintaining strong language performance, and how a Flux-based diffusion setup enables creative generation and targeted edits—from updating dates in greeting cards to changing borders and colors by prompt. Along the way, we unpack quantization with GGUF to run 7B-class models in tight memory, diffusion sampler and scheduler tuning for latency, and NVIDIA-optimized libraries to squeeze more from modest GPUs.
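    The memory argument for GGUF quantization is easy to check on the back of an envelope. A weights-only estimate (real GGUF files add per-block scales, and the runtime needs extra room for the KV cache, so treat these numbers as rough):

```python
# Rough weights-only memory for a 7B-parameter model at different
# quantization levels. 4-bit GGUF formats effectively spend ~4.5 bits
# per weight once per-block scales are included (approximate figure).
params = 7e9

def weights_gb(bits_per_weight):
    return params * bits_per_weight / 8 / 1e9

fp16 = weights_gb(16)   # ~14 GB: too big for most edge GPUs
q4   = weights_gb(4.5)  # ~3.9 GB: fits in tight memory
print(f"fp16: {fp16:.1f} GB, 4-bit: {q4:.1f} GB")
```

    That roughly 3.5x reduction is what moves a 7B-class VLM from datacenter hardware onto a modest companion device.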

    Beyond demos, we dig into business and engineering realities: fine-tuning with enterprise data to reduce hallucinations, building guardrails and fallback paths for reliability, and segmenting large documents to manage VRAM. We also discuss why a companion device—AI PC or smartphone—can orchestrate heavy lifting until printer SOCs catch up, keeping data private and workflows responsive. If you care about document AI, privacy by design, or accessibility features like dynamic type and contrast, this conversation makes the path concrete and actionable.
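    Segmenting large documents to bound VRAM can be sketched as overlapping page windows, each small enough for one VLM call, with overlap so content spanning a boundary is seen twice. The window and overlap sizes below are arbitrary assumptions for illustration:

```python
# Hypothetical page-window segmentation for a long document: each
# window is processed independently by the VLM, keeping peak VRAM
# bounded; the overlap preserves context across window boundaries.

def segment(pages, window=4, overlap=1):
    """Yield overlapping windows of pages (sizes are assumptions)."""
    step = window - overlap
    for start in range(0, len(pages), step):
        yield pages[start:start + window]
        if start + window >= len(pages):
            break

doc = list(range(10))          # stand-in for 10 page objects
chunks = list(segment(doc))
print(chunks)                  # 3 windows covering all 10 pages
```

    Stitching the per-window outputs back together (and deduplicating the overlap) is where the guardrail and fallback logic mentioned above earns its keep.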

    Enjoy the deep dive? Subscribe, share with a colleague who lives in PDFs, and leave a review with the one edge use case you want us to test next.


    29 m
  • Empowering at the Edge: the "Arduino way" to AI
    Mar 19 2026

    What if AI felt like a door you could open, not a wall you had to climb? We dig into how Arduino’s approach—accessibility first, power when you need it—turns the edge AI buzz into a concrete path you can follow, whether you’re a student with a starter kit or an engineer shipping to a fleet.

    We walk through a practical four-step journey: try AI through no-code experiments, understand it with pre-trained models, train by fine-tuning or starting from scratch with your data, and build something real that lives beyond a demo. Along the way, we unpack a core principle we call “abstraction without obfuscation”—removing friction while keeping the logic transparent—so you can inspect, modify, and truly own the systems you create. That design philosophy shapes everything from our open hardware portfolio (TinyML-friendly MCUs up to Linux-capable MPUs) to our integrations with popular AI frameworks and community-driven libraries.

    You’ll also hear how cloud-native developer tools streamline the messy middle: browser-based workflows, single-device to fleet deployments, secure OTA updates, data collection for predictive insights, and closed-loop model improvement. Plus, we introduce our AI assistant as a coach that explains code, diagnoses bugs, and helps optimize for memory and speed—turning dead ends into learning moments. Real-world validation from a 35-million-strong community and enterprise teams, including automotive innovators, shows how openness and cohesion accelerate the leap from idea to production.

    If you care about AI that empowers rather than intimidates, this conversation lays out the playbook. Subscribe, share with a teammate who loves to build, and leave a review telling us the project you’re dreaming about—we might feature it next.


    20 m
No reviews yet