Machine Learning Tech Brief By HackerNoon

By: HackerNoon

Learn the latest machine learning updates in the tech world. © 2025 HackerNoon
Episodes
  • Can ChatGPT Outperform the Market? Week 10
    Oct 21 2025

    This story was originally published on HackerNoon at: https://hackernoon.com/can-chatgpt-outperform-the-market-week-10.
    New high of 32%...
    Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #ai, #ai-controls-stock-account, #ai-stock-portfolio, #can-chatgpt-outperform-market, #chatgpt-outperform-traders, #chatgpt-outperform-russell, #ai-outperform-the-market, #hackernoon-top-story, and more.

    This story was written by: @nathanbsmith729. Learn more about this writer by checking @nathanbsmith729's about page, and for more stories, please visit hackernoon.com.


7 min
  • The Illusion of Scale: Why LLMs Are Vulnerable to Data Poisoning, Regardless of Size
    Oct 19 2025

    This story was originally published on HackerNoon at: https://hackernoon.com/the-illusion-of-scale-why-llms-are-vulnerable-to-data-poisoning-regardless-of-size.
    New research shatters AI security assumptions, showing that poisoning large models is easier than believed and requires a very small number of documents.
    Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #adversarial-machine-learning, #ai-safety, #generative-ai, #llm-security, #data-poisoning, #backdoor-attacks, #enterprise-ai-security, #hackernoon-top-story, and more.

    This story was written by: @hacker-Antho. Learn more about this writer by checking @hacker-Antho's about page, and for more stories, please visit hackernoon.com.

The research challenges the conventional wisdom that an attacker needs to control a specific percentage of the training data (e.g., 0.1% or 0.27%) to succeed. A fixed set of just 250 poisoned documents was enough across every model scale tested; for the largest model (13B parameters), those 250 samples represented a minuscule 0.00016% of the total training tokens. The attack success rate remained nearly identical across all tested model scales for that fixed number of poisoned documents.
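The intuition behind the finding can be sketched with simple arithmetic: a fixed number of poisoned documents shrinks to a vanishing fraction of the corpus as training sets grow, yet (per the research summarized above) the attack still succeeds. The token counts below are illustrative assumptions for the sketch, not figures from the research itself:

```python
# Sketch, under assumed corpus sizes: how 250 poisoned documents become a
# tiny fraction of the training data as the training corpus scales up.
# tokens_per_doc and the per-scale token totals are hypothetical.

def poison_fraction(n_poisoned_docs, tokens_per_doc, total_training_tokens):
    """Fraction of training tokens contributed by the poisoned documents."""
    return (n_poisoned_docs * tokens_per_doc) / total_training_tokens

# Assumed total training tokens for three hypothetical model scales.
training_tokens_by_scale = {
    "small":  12e9,   # ~12B tokens
    "medium": 40e9,   # ~40B tokens
    "large":  260e9,  # ~260B tokens (roughly a 13B-parameter regime)
}

for scale, total_tokens in training_tokens_by_scale.items():
    frac = poison_fraction(250, tokens_per_doc=1_000,
                           total_training_tokens=total_tokens)
    print(f"{scale}: 250 poisoned docs = {frac:.6%} of training tokens")
```

Under these assumed numbers, the poisoned share at the largest scale lands around a ten-thousandth of a percent, the same order of magnitude as the 0.00016% quoted above, which is why a percentage-based threat model understates the risk.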

8 min
  • 7 Major Learnings from The AI Engineering SF World Fair 2025
    Oct 19 2025

    This story was originally published on HackerNoon at: https://hackernoon.com/7-major-learnings-from-the-ai-engineering-sf-world-fair-2025.
    AI coding agents dominated the 2025 SF World’s Fair. From spec-driven dev to cloud agents, here are 7 takeaways shaping AI-native engineering.
    Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #ai, #ai-engineering, #ai-native-development, #ai-engineering-sf-world-fair, #sf-world-fair-2025, #major-ai-trends, #ai-trends-2025, #ai-coding, and more.

    This story was written by: @ainativedev. Learn more about this writer by checking @ainativedev's about page, and for more stories, please visit hackernoon.com.


8 min