llama-2-13b-chat-hf-gptq / gptq_model-4bit-128g.bin
build: AutoGPTQ for meta-llama/Llama-2-13b-chat-hf: 4bits, gr128, desc_act=False
fe61be3
7.26 GB
This file is stored with Xet. It is too large to preview in the browser, but it can still be downloaded.
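The build line above records the quantization settings (4 bits, group size 128, `desc_act=False`) produced by AutoGPTQ. Below is a minimal loading sketch using the AutoGPTQ library; the repo id `seonglae/llama-2-13b-chat-hf-gptq` is inferred from the file path above and is an assumption, as is the availability of a CUDA device.

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

# Assumed repo id, inferred from the file path shown above; adjust if it differs.
repo = "seonglae/llama-2-13b-chat-hf-gptq"

tokenizer = AutoTokenizer.from_pretrained(repo)

# model_basename matches the checkpoint file name without the .bin extension;
# use_safetensors=False because the weights here are a .bin checkpoint.
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="gptq_model-4bit-128g",
    use_safetensors=False,
    device="cuda:0",
)
```

Loading requires downloading the full 7.26 GB checkpoint and a GPU with enough memory for the 4-bit 13B weights (roughly 8-10 GB of VRAM).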

Large File Pointer Details (raw pointer file)

SHA256: e78da402d57ed928bd4ce95f9c8a40bbf6cac7055f54bed221d73ad78ed76d77
Pointer size: 135 bytes
Size of remote file: 7.26 GB
Xet hash: 82381c03ebca5757726e8dd3ac50411b43f22403705c783edd8ade895a8cd0db
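After downloading a multi-gigabyte checkpoint, it is worth checking the file against the SHA256 listed in the pointer details. A small sketch that hashes the file in fixed-size blocks so the whole 7.26 GB never has to fit in memory (the local path is a placeholder):

```python
import hashlib

# SHA256 from the pointer details above.
EXPECTED = "e78da402d57ed928bd4ce95f9c8a40bbf6cac7055f54bed221d73ad78ed76d77"

def sha256_file(path: str, block_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB blocks to keep memory use constant."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            h.update(block)
    return h.hexdigest()

def verify(path: str) -> bool:
    """Return True if the local file matches the published checksum."""
    return sha256_file(path) == EXPECTED
```

Usage: `verify("gptq_model-4bit-128g.bin")` after the download completes.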

Xet stores large files efficiently inside Git by splitting them into unique chunks, deduplicating repeated content and accelerating uploads and downloads.
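The idea behind chunk-based storage can be illustrated with a generic content-defined chunking sketch: boundaries are chosen from the bytes themselves via a rolling hash, so identical regions produce identical chunks that can be stored once. This is an illustration of the general technique only, not Xet's actual chunking algorithm or parameters.

```python
import hashlib

def cdc_chunks(data: bytes, mask: int = 0x3FF, min_size: int = 64) -> list[bytes]:
    """Split data at content-defined boundaries.

    A boundary is declared when the low bits of a simple rolling hash are
    all zero (mask 0x3FF gives ~1 KiB average chunks). Hash and boundary
    choice here are illustrative, not Xet's real scheme.
    """
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * 31 + b) & 0xFFFFFFFF
        if i - start + 1 >= min_size and (h & mask) == 0:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedup_store(chunks: list[bytes]):
    """Index chunks by hash so identical content is stored only once."""
    store, order = {}, []
    for c in chunks:
        key = hashlib.sha256(c).hexdigest()
        store.setdefault(key, c)
        order.append(key)
    return store, order
```

Because boundaries depend on content rather than fixed offsets, inserting bytes near the start of a file only changes the chunks it touches, which is what makes incremental uploads cheap.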