Building a Local Large Language Model (LLM)
Another #ComputingWeek talk turned into a podcast! Two Red Hat software engineers, both recent graduates of SETU, returned to discuss the issues around running your own LLM on a local machine, and how models and datasets are built and reduced (quantised) so that they run on a laptop rather than an array of servers. Mark Campbell and Dimitri Saridakis provided excellent insight into the technical issues surrounding this topic, before getting into some of the ethical and moral questions with host Rob O'Connor at the end.
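To give a flavour of the quantisation idea mentioned above: the sketch below is a hypothetical illustration (not from the podcast) of simple symmetric 8-bit quantisation, the basic trick behind shrinking model weights from 32-bit floats to compact integers so a model fits in laptop memory.

```python
import numpy as np

def quantise_int8(weights: np.ndarray):
    """Map float32 weights onto the int8 range [-127, 127]."""
    scale = np.abs(weights).max() / 127.0  # one scale factor per tensor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# A toy weight matrix stands in for a real model layer.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantise_int8(w)
w_hat = dequantise(q, s)
print(np.abs(w - w_hat).max())  # reconstruction error is small (bounded by half a quantisation step)
```

Real tools such as Ollama and LMStudio use more sophisticated schemes (per-block scales, 4-bit formats), but the trade-off is the same: each weight drops from 4 bytes to 1 byte or less, at the cost of a small rounding error.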
You can connect with all the people on this podcast on LinkedIn at:
- Mark Campbell https://www.linkedin.com/in/mark-campbell-76846b194/
- Dimitri Saridakis https://www.linkedin.com/in/dimitri-saridakis-32a087139/
- Rob O'Connor https://www.linkedin.com/in/robertoconnorirl/
Here are links to some of the tools referenced in the podcast:
- Red Hat OpenShift AI https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai
- LMStudio https://lmstudio.ai/
- Ollama https://ollama.ai/
- HuggingFace https://huggingface.co/