Hacker News
fancyfredbot | 11 days ago | on: Claude Opus 4.7
Wow, that is terrible. In my memory GPT-2 was more interesting than that. I remember thinking it could pass a Turing test, but that output is barely better than a Markov chain. I guess I was using the large model?
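For context, the "Markov chain" baseline being compared against is just an n-gram model: count which words follow each word in some text, then sample the next word from those counts. A minimal word-level sketch (the corpus here is made up for illustration):

```python
import random
from collections import defaultdict

def train_bigram(text):
    # Map each word to the list of words observed immediately after it.
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, n=10, seed=0):
    # Walk the chain: repeatedly sample a successor of the last word.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # dead end: the last word was never followed by anything
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = "the model wrote the text and the model read the text"
chain = train_bigram(corpus)
sample = generate(chain, "the")
```

Output like this is locally plausible word-to-word but has no memory beyond one word back, which is the bar early GPT-2 samples were being measured against.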
daveguy | 11 days ago
Here is the XL model, about 4x the size of the medium model. Still just 1.5B parameters, but on the bright side it was trained pre-wordslop.
https://huggingface.co/openai-community/gpt2-xl
sillysaurusx | 11 days ago
There’s an art to GPT sampling. You have to use temperature 0.7. People never believe it makes such a massive difference, but it does.
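Temperature scaling means dividing the model's logits by T before the softmax: T < 1 sharpens the distribution toward the likeliest tokens, T > 1 flattens it toward uniform noise. A self-contained sketch of the mechanism (the example logits are made up; real GPT-2 logits come from the model):

```python
import numpy as np

def sample_with_temperature(logits, temperature=0.7, rng=None):
    # Divide logits by the temperature, then softmax and sample.
    # Lower temperature -> sharper distribution -> fewer wild tokens.
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    token = rng.choice(len(probs), p=probs)
    return token, probs

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]
_, sharp = sample_with_temperature(logits, temperature=0.7)
_, flat = sample_with_temperature(logits, temperature=1.5)
```

Here `sharp` puts noticeably more mass on the top token than `flat` does, which is why a modest change like 1.0 → 0.7 visibly cleans up GPT-2 samples.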
wat10000 | 11 days ago
Probably a much better prompt, too. I just literally pasted in the top part of my comment and let fly to see what would happen.