
> The biggest lever to achieving 42% accuracy was fine-tuning a Llama 2 (7B) model

42% accuracy on a tiny, outdated model - surely it would improve significantly by fine-tuning Llama 3.1 405B!



Yes, very interesting potential. It looks like accuracy could be increased considerably, since Llama 3.1 405B performs on par with the latest GPT-4o.
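For context, here is a minimal sketch of what swapping the base checkpoint in a fine-tuning run could look like. The article doesn't say which tooling was used; this assumes Hugging Face transformers + peft with LoRA adapters, and the 405B model ID is purely illustrative (in practice a model that size needs a multi-node setup rather than a single-GPU script).

    # Hedged sketch: LoRA fine-tuning setup where the base model is a one-line swap.
    import torch
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base_model = "meta-llama/Llama-2-7b-hf"      # what the quoted article fine-tuned
    # base_model = "meta-llama/Llama-3.1-405B"   # the swap being discussed (illustrative)

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(
        base_model, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # Low-rank adapters keep the fine-tune cheap relative to full-parameter training.
    lora = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only the adapter weights are trainable

Whether the extra parameters translate into better task accuracy would still have to be measured on the same eval set as the original 42% number.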



