Stepfunction Podcast

By: Jeff Hwang and Seymour Duncker

A new podcast about the world of generative AI, including ChatGPT, Large Language Models (LLMs), DALL-E, Stable Diffusion, and more. (2023)

Episodes
  • Episode 1 - What is ChatGPT?
    Feb 15 2023

    Jeff and Seymour kick off the podcast with an exploration of ChatGPT. What is it and how might it impact our careers and lives? They use ChatGPT and large language models (LLMs) as an entry point to the larger topic of generative AI.

    Questions and comments? Talk to us.

    26 min
  • Episode 2 - Who Opened The Floodgates?
    Feb 23 2023

    Jeff and Seymour discuss the unexpected impact of ChatGPT and how Bing Chat may not be ready for prime time. Did OpenAI unintentionally open Pandora's Box because they were worried someone else would beat them to it? Plus some reassurances that Sydney is definitely not sentient or emotional.

    • Kevin Roose's article in The New York Times and transcript of his long chat session with Bing / Sydney.
    • Podcast episode detailing the behind-the-scenes story at OpenAI in the weeks leading up to the launch of ChatGPT in November 2022, as discussed by Roose and Casey Newton on their show Hard Fork.

    Questions and comments? Talk to us.

    22 min
  • Episode 3 - Elevators Up, Stairways Down
    Feb 28 2023

    Jeff and Seymour use stories and analogies to explain the two main approaches to AI: Bottom-Up and Top-Down. The recent wave of AI success is mostly based on the bottom-up path, which includes machine learning, neural networks, and deep learning. Related links:

    • The case of the construction worker with a nail in his boot
    • Murray Shanahan's excellent December 2022 paper Talking About Large Language Models
    • Meta / Facebook AI research's September 2020 blog post on Retrieval Augmented Generation and Segmentation
    • DeepMind / Google AI's December 2021 paper on using a Retrieval-Enhanced Transformer (aka RETRO) and a database of 2 trillion tokens for improved LLM capabilities.

    Questions and comments? Talk to us.

    21 min