Hey y'all, just wanted to share the little Mac app I made to wrap llama.cpp's server.
It's called FreeChat. The idea is to make an app you can send to someone who knows nothing about LLMs, and they'll be up and running with a local model as soon as their download completes. At the same time, I want it to be my daily driver for testing new models, so I'll be adding more advanced parameter knobs soon.
The inspiration was just playing with the incredible llama.cpp and realizing just how capable both the Llama 2 models and M1/M2 Macs are.
Happy for any and all feedback. I hope this is useful for others.