PC specs: Ryzen 7 5700X, 32 GB RAM, 100 GB free SSD space, RTX 3060 (12 GB VRAM)
I'm trying to run the Llama-2-7B-Chat model locally. I followed every instruction step. First I converted the model to GGML FP16 format:
python convert.py .\models\llama-2-7b-chat\
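(In case it matters: I believe convert.py also accepts an explicit --outtype flag, and I'm assuming f16 is the default, so this should be equivalent:)

python convert.py .\models\llama-2-7b-chat\ --outtype f16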
then quantized it:
.\quantize .\models\llama-2-7b-chat\ggml-model-f16.gguf .\models\llama-2-7b-chat\ggml-model-q4_0.gguf q4_0
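(As a sanity check that the quantized file isn't truncated: if I understand q4_0 right, a 7B model should come out around 3.5-4 GB, which matches the ~3648 MB "mem required" in the log below. The size can be checked with:)

dir .\models\llama-2-7b-chat\ggml-model-q4_0.gguf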
Now when I run
.\main -m .\models\llama-2-7b-chat\ggml-model-q4_0.gguf -n 512
it gives me this error:
```
main: build = 1019 (ef3f333)
main: seed = 1692713716
llama_model_loader: loaded meta data with 15 key-value pairs and 291 tensors from .\models\llama-2-7b-chat\ggml-model-q4_0.gguf (version GGUF V1 (latest))
// skipped this part
llama_model_loader: - kv 0: general.architecture str
llama_model_loader: - kv 1: general.name str
llama_model_loader: - kv 2: llama.context_length u32
llama_model_loader: - kv 3: llama.embedding_length u32
llama_model_loader: - kv 4: llama.block_count u32
llama_model_loader: - kv 5: llama.feed_forward_length u32
llama_model_loader: - kv 6: llama.rope.dimension_count u32
llama_model_loader: - kv 7: llama.attention.head_count u32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32
llama_model_loader: - kv 10: tokenizer.ggml.model str
llama_model_loader: - kv 11: tokenizer.ggml.tokens arr
llama_model_loader: - kv 12: tokenizer.ggml.scores arr
llama_model_loader: - kv 13: tokenizer.ggml.token_type arr
llama_model_loader: - kv 14: general.quantization_version u32
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llama_model_load_internal: format = GGUF V1 (latest)
llama_model_load_internal: arch = llama
llama_model_load_internal: vocab type = SPM
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx_train = 2048
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_head_kv = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 1
llama_model_load_internal: f_norm_eps = 1.0e-06
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: model type = 7B
llama_model_load_internal: model ftype = mostly Q4_0
llama_model_load_internal: model size = 6.74 B
llama_model_load_internal: general.name = LLaMA
llama_model_load_internal: BOS token = 1 '<s>'
llama_model_load_internal: EOS token = 2 '</s>'
llama_model_load_internal: LF token = 13 '<0x0A>'
llama_model_load_internal: ggml ctx size = 0.07 MB
llama_model_load_internal: mem required = 3647.94 MB (+ 256.00 MB per state)
error loading model: MapViewOfFile failed: Not enough memory resources are available to process this command.
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '.\models\llama-2-7b-chat\ggml-model-q4_0.gguf'
main: error: unable to load model
```
I tried changing the context length and also redid every step from scratch; neither helped.
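I don't remember every context value I tried, but it was along these lines (-c 256 is just an example):

.\main -m .\models\llama-2-7b-chat\ggml-model-q4_0.gguf -c 256 -n 512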
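One thing I haven't tried yet: since MapViewOfFile is the Windows memory-mapping call, could this be an mmap/page-file issue rather than the machine actually running out of RAM? If I'm reading the README right, memory mapping can be disabled with --no-mmap, i.e. something like:

.\main -m .\models\llama-2-7b-chat\ggml-model-q4_0.gguf -n 512 --no-mmap

Would that be worth trying, or does the error point at something else?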