Mistral: Voxtral TTS, Forge, Leanstral, & what's next for Mistral 4 — w/ Pavan Kumar Reddy & Guillaume Lample

Mistral has been on an absolute tear: with frequent successful model launches, it is easy to forget that they raised the largest European AI round in history last year. We were long overdue for a Mistral episode, and we were very fortunate to work with Sophia and Howard to catch up with Pavan (Voxtral lead) and Guillaume (Chief Scientist, Co-founder) on the occasion of this week's Voxtral TTS launch.

Mistral can't directly say it, but the benchmarks do imply that this is basically an open-weights, ElevenLabs-level TTS model (technically, it is a 4B Ministral-based multilingual low-latency TTS open-weights model with a 68.4% win rate vs ElevenLabs Flash v2.5). The contributions are not just in the open weights but also in open research: we also spend a decent amount of the pod talking about their architecture, which combines auto-regressive generation of semantic speech tokens with flow matching for acoustic tokens (a technique typically only applied in the image generation space, as seen in the Flow Matching NeurIPS workshop from the principal authors that we reference in the pod).

You can catch up on the paper here, and the full episode is live on YouTube!

Timestamps

00:00 Welcome and Guests
00:22 Announcing Voxtral TTS
01:41 Architecture and Codec
02:53 Understanding vs Generation
05:39 Flow Matching for Audio
07:27 Real Time Voice Agents
13:40 Efficiency and Model Strategy
14:53 Voice Agents Vision
17:56 Enterprise Deployment and Privacy
23:39 Fine Tuning and Personalization
25:22 Enterprise Voice Personalization
26:09 Long-Form Speech Models
26:58 Real-Time Encoder Advances
27:45 Scaling Context for TTS
28:53 What Makes Small Models
30:37 Merging Modalities Tradeoffs
33:05 Open Source Mission
35:51 Lean and Formal Proofs
38:40 Reasoning Transfer and Agents
40:25 Next Frontiers in Training
42:20 Hiring and AI for Science
44:19 Forward Deployed Engineering
46:22 Customer Feedback Loop
48:29 Wrap Up and Thanks

Transcript

swyx: Okay, welcome to Latent Space.
We're here in the studio with our guest co-host Vibhu. Welcome.

Vibhu: Thanks. Excited for this one.

swyx: As well as Guillaume and Pavan from Mistral. Welcome. Excited to be here.

Guillaume: Thank you.

swyx: Pavan, you are leading audio research at Mistral, and Guillaume, you're chief scientist.

Announcing Voxtral TTS

swyx: What are we announcing today? We're coordinating this release with you guys.

Guillaume: Yeah, so we are releasing Voxtral TTS. It's our first audio model that generates speech. It's not our first audio model: we had a couple of releases before. We had one in the summer, Voxtral, our first audio model, but those were transcription models, ASR. A few months later we released an update on top of this, supporting more languages, and also a lot of table-stakes features for our customers: context biasing, diarization, timestamping in the transcription. We also released a real-time model that can transcribe not just at the end (you don't need to feed in your entire audio file) but as the audio comes in, in [00:01:00] real time. And this is the natural extension in audio: speech generation. So we support nine languages, and this is a pretty small model, 3B, so very fast. Quality is at the same level as the best models, but it's much more efficient: only a fraction of the cost of competitors. And we are also releasing the weights of this model.

swyx: What was the deciding factor?

Guillaume: It's a good question.

Pavan: Ooh.

swyx: Yeah, Pavan, any other sort of research notes to add?

Pavan: Maybe we'll dive into it later in the podcast too.

Architecture and Codec

Pavan: But it's a novel architecture that we developed in-house. We iterated on several internal architectures and ended up with an auto-regressive flow-matching architecture, and also a new in-house neural audio codec, which converts the audio into [00:02:00] latent tokens, semantic and acoustic tokens. And yeah, that's the new part about this model, and we're pretty excited that it came out with such good quality, as Guillaume was mentioning. Yeah, it's a 3B model. It's based off of the Ministral model that we actually released just a few months back, mainly meant for the TTS stuff, but the text capabilities are also there. Yeah.

swyx: So there's a lot to cover. I love anything to do with novel encodings, because obviously it creates a lot of efficiency, but also maybe bugs that sometimes happen. You were previously at Gemini, and you worked on post-training for language models, and maybe a lot of people will have less experience with audio models in general compared to pure language. What did you find that you had to revisit from scratch as you joined Mistral and started doing this?

Understanding...
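The flow-matching idea discussed in the episode can be illustrated with a toy sketch. This is a generic conditional flow-matching example in NumPy, not Mistral's implementation; the `oracle_velocity` field is the closed-form optimum under the simplifying assumption that the data distribution is a single known point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conditional flow matching (generic sketch, not Mistral's code).
# Path: x_t = (1 - t) * x0 + t * x1 with noise x0 ~ N(0, I);
# the regression target for the velocity model is v* = x1 - x0.

def fm_training_example(x1, rng):
    """Sample one (x_t, t, target_velocity) training triple."""
    x0 = rng.standard_normal(x1.shape)
    t = rng.uniform()
    xt = (1 - t) * x0 + t * x1
    return xt, t, x1 - x0

def oracle_velocity(xt, t, x1):
    """Closed-form optimal velocity when the data is a single point x1."""
    return (x1 - xt) / (1 - t)

def sample(x1, steps=100, rng=rng):
    """Euler-integrate dx/dt = v(x, t) from noise at t=0 to data at t=1."""
    x = rng.standard_normal(x1.shape)
    for i in range(steps):
        t = i / steps
        x = x + oracle_velocity(x, t, x1) / steps
    return x

x1 = np.array([1.5, -2.0, 0.3])
print(np.allclose(sample(x1), x1))  # True: the ODE flow lands on the data point
```

In the real model, a neural network replaces the oracle and is trained by regressing the sampled target velocities; at inference it maps noise to acoustic latents in a few ODE steps, which is what makes the approach attractive for low-latency TTS.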
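The neural audio codec described here turns continuous audio into discrete semantic and acoustic tokens. A common building block for such codecs is residual vector quantization (RVQ); the sketch below is a generic illustration with made-up codebook sizes, not Voxtral's actual codec:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic residual vector quantization (RVQ): each stage quantizes the
# residual left over by the previous stage. Sizes here are illustrative.
N_CODEBOOKS, CODEBOOK_SIZE, DIM = 4, 256, 16
codebooks = rng.standard_normal((N_CODEBOOKS, CODEBOOK_SIZE, DIM))

def rvq_encode(frame):
    """Map one continuous latent frame to N_CODEBOOKS discrete token ids."""
    residual, ids = frame.copy(), []
    for cb in codebooks:
        # Pick the nearest codebook entry for the current residual.
        idx = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))
        ids.append(idx)
        residual = residual - cb[idx]  # next stage quantizes what is left
    return ids

def rvq_decode(ids):
    """Reconstruct the frame by summing the chosen codebook entries."""
    return sum(codebooks[i][idx] for i, idx in enumerate(ids))

frame = rng.standard_normal(DIM)
ids = rvq_encode(frame)
recon = rvq_decode(ids)
print(ids)
```

Each frame of audio thus becomes a small tuple of integer ids, which is what lets an autoregressive language-model backbone consume and emit audio the same way it handles text tokens.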