LLM-LLaMA
My activities playing with LLaMA on home PCs
Attempt 2 - Using llama.cpp on a macOS Intel Laptop
Attempt 1 - Fail:
Using https://github.com/antimatter15/alpaca.cpp failed. It appears the model file format has been updated since the project was written, so following its setup instructions on the Mac ends with:
bash-3.2$ /Users/daetabit/dalai/alpaca/main --seed -1 --threads 4 --n_predict 200 --model models/13B/ggml-model-q4_0.bin --top_k 40 --top_p 0.9 --temp 0.8 --repeat_last_n 64 --repeat_penalty 1.3 -p "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
>
> ### Instruction:
> >Tell me about bears
>
> ### Response:
> "
main: seed = 1683786839
llama_model_load: loading model from 'models/13B/ggml-model-q4_0.bin' - please wait ...
llama_model_load: invalid model file 'models/13B/ggml-model-q4_0.bin' (bad magic)
main: failed to load model from 'models/13B/ggml-model-q4_0.bin'
bash-3.2$ exit
exit
The key error is llama_model_load: invalid model file 'models/13B/ggml-model-q4_0.bin' (bad magic). "Bad magic" means the first bytes of the file (the magic number identifying its format) are not what alpaca.cpp expects, which suggests the model was converted with a newer GGML format than the one alpaca.cpp can read.
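A quick way to see this for yourself (just a sketch; the interpretation of the bytes is my assumption, not something the alpaca.cpp page documents) is to dump the first four bytes of the model file:

# Show the 4-byte magic at the start of the model file.
# Older unversioned GGML files and newer format revisions use different
# magic values, and a mismatch here is what triggers the "bad magic" message.
xxd -l 4 models/13B/ggml-model-q4_0.bin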
The GitHub page says: "Consider using llama.cpp instead. The changes from alpaca.cpp have since been upstreamed in llama.cpp."
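For reference, here is a rough sketch of what switching to llama.cpp looked like around that time, based on its README as of mid-2023 (the paths are illustrative, and the scripts and flags may have changed in later versions):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
# Convert the original LLaMA 13B weights to ggml f16, then quantize to q4_0.
# Assumes the raw weights have been copied into llama.cpp's models/13B directory.
python3 convert.py models/13B/
./quantize models/13B/ggml-model-f16.bin models/13B/ggml-model-q4_0.bin q4_0
# Run a quick test prompt.
./main -m models/13B/ggml-model-q4_0.bin -n 200 -p "Tell me about bears"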