I’ve uploaded two models, and I’m pretty sure they’ve had at least a few downloads by now. Yet, since I uploaded them back in July, the download stats have still remained at zero. Any idea what might be causing this?
In addition to the specifics below, there are rare cases where the Hub itself is buggy. (It has happened.)
Your counters are stuck at 0 because the Hub only increments “Downloads last month” when a request hits one of the repository’s query files. If users only fetch weights or clone the repo and your repos don’t have a recognized query file (or your loader never requests it), nothing is counted. By default the Hub counts requests to a small set of files like config.json; many libraries override this with their own rule. GGUF repos are a special case where each .gguf file counts. (Hugging Face)
What the Hub actually counts
- Counting is server-side. Every HTTP `GET` or `HEAD` to a library's configured query file(s) increments the metric. The default fallback is `config.json` if no library is set. Library overrides are defined in the open-source registry. (Hugging Face)
- Libraries can register custom rules like `countDownloads: path_extension:"safetensors"` so that downloads of a single representative file type count. See the VFIMamba addition for a concrete example. (GitHub)
- GGUF models count downloads of `.gguf` files directly; cloning a whole repo can overcount in that special case. (Hugging Face)
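Concretely, the counted request is just an HTTP `GET`/`HEAD` against the repo's query file on the resolve endpoint. A minimal sketch of the URL shape (no network call is made; the repo ID is the one from this thread, and `main` is assumed as the revision):

```python
# Shape of the request the Hub's counter sees: a GET or HEAD to the
# query file (config.json by default) on the resolve endpoint.
# This only builds the URL; it does not contact the Hub.
repo_id = "OpenSearch-AI/Ops-MM-embedding-v1-2B"
query_file = "config.json"  # default query file when no library rule is set
revision = "main"

url = f"https://huggingface.co/{repo_id}/resolve/{revision}/{query_file}"
print(url)
# https://huggingface.co/OpenSearch-AI/Ops-MM-embedding-v1-2B/resolve/main/config.json
```

If a user's workflow never issues a request of this shape, the counter never moves.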
Why your two repos show zero
Common causes that fit your situation since July:
- No recognized query file in the repo. If `config.json` (or another recognized config) isn't present, default counting has nothing to hit, so the model appears unused even if weights were fetched. (Hugging Face Forums)
- Your loader bypasses counted files. Custom code that calls `hf_hub_download` or `snapshot_download` for only `*.safetensors` or `pytorch_model.bin`, without pulling the configured query file, will not increment the counter unless your library's rule has been registered. (Hugging Face)
- Library not registered with a count rule. If the models aren't tied to a known library, the Hub falls back to `config.json`. Without that file, counts stay at zero. (Hugging Face)
- Short-term Hub glitches don't explain months of zeros. There have been periodic stats outages, but they lasted days, not months. A persistent zero almost always means "no counted queries." (Hugging Face Forums)
Fast fixes
Pick one. Any single change is enough to start counting.
A) Add a tiny `config.json` and make sure clients fetch it once.
Include a minimal config at repo root and ensure your usage path touches it (directly or via from_pretrained). Example call that increments count:
```python
# docs: https://huggingface.co/docs/huggingface_hub/en/guides/download
from huggingface_hub import hf_hub_download

# Fetching the query file is what increments the counter (if it exists).
hf_hub_download(repo_id="OpenSearch-AI/Ops-MM-embedding-v1-2B", filename="config.json")

# then load weights
```
This aligns with how the Hub counts by default. (Hugging Face)
B) Register your library so the right file counts.
If users only download `*.safetensors`, submit a small PR to the Hub library registry to set `countDownloads` for your library, e.g. `path_extension:"safetensors"` (one counted file pattern to avoid double-counting). The VFIMamba PR shows the exact field to add. (GitHub)
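As a rough illustration of what such a rule means in practice (the real rule lives in the `huggingface.js` registry and is evaluated server-side; this Python stand-in only mirrors its effect):

```python
# Effect of a countDownloads rule like path_extension:"safetensors":
# only requests for files with that extension increment the counter.
# Stand-in logic; the actual rule is applied by the Hub, not the client.
requested = ["model.safetensors", "model-00002.safetensors", "README.md"]

counted = [f for f in requested if f.endswith(".safetensors")]
print(len(counted))  # 2 counted requests
```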
C) Publish a GGUF variant if appropriate.
For GGUF, the Hub already counts .gguf requests. If your user base uses GGML/llama.cpp tools, a GGUF release will “just count.” (Hugging Face)
Quick checks
- Does a counted file exist? If not, add `config.json` (or another default config). (Hugging Face)
- Does your loader request it? If you use `snapshot_download(..., allow_patterns="*.safetensors")`, you may be skipping the config. Ensure at least one call fetches the counted file. (Hugging Face)
- Can you read the numbers programmatically? The Hub API exposes `downloads` (last 30 days) and sometimes `downloads_all_time`, but it doesn't provide a time series. (Hugging Face)
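For the programmatic check, the per-model endpoint `https://huggingface.co/api/models/<repo_id>` returns the counters as JSON. A sketch against a hand-written stand-in payload (not a live response; the `downloadsAllTime` field name is an assumption tied to the API's expand parameter):

```python
# Reading download counters from the JSON shape the Hub's model API returns.
# The payload below is a hand-written stand-in, not a live response.
import json

payload = json.loads(
    '{"id": "OpenSearch-AI/Ops-MM-embedding-v1-2B", "downloads": 0}'
)

# "downloads" is the rolling 30-day figure; an all-time figure only
# appears when explicitly requested (assumed field name: downloadsAllTime).
print(payload["downloads"])                           # 0
print(payload.get("downloadsAllTime", "not requested"))
```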
Notes and pitfalls
- Caches suppress repeat hits. After the first download on a machine, subsequent runs hit the local cache and won’t create new counted requests unless a new revision is fetched. That’s expected. (Hugging Face)
- Cloning doesn't guarantee a count. A `git lfs` clone only increments the counter if it actually fetches the configured query file; pulling only weights without a registered rule won't count. (Hugging Face)
- Rollups can lag for days, not months. If you see "0" for multiple months, assume configuration, not a transient issue. (Hugging Face Forums)
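The cache pitfall above can be modeled in a few lines. This is stand-in code, not `huggingface_hub` internals: it just shows that only cache misses produce a new, countable request.

```python
# Toy model of the caching pitfall: after the first fetch of a given
# (repo, revision, file), later runs read the local cache and send no new
# request, so they add nothing to the counter. Stand-in code only; the
# real cache lives under ~/.cache/huggingface/hub.
cache = {}
requests_sent = 0

def fetch(repo, revision, filename):
    global requests_sent
    key = (repo, revision, filename)
    if key not in cache:  # only a cache miss actually hits the Hub
        requests_sent += 1
        cache[key] = f"/local/cache/{repo}/{revision}/{filename}"
    return cache[key]

fetch("my/repo", "main", "config.json")
fetch("my/repo", "main", "config.json")  # cache hit -> not counted again
fetch("my/repo", "v2", "config.json")    # new revision -> counted
print(requests_sent)  # 2
```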
Minimal action plan for your repos
- Add a small `config.json` to each repo and update your README's first-run snippet to fetch `config.json` before weights. (Hugging Face)
- Optionally submit a library rule PR so your typical weight filename counts, mirroring the VFIMamba example. (GitHub)
- If your users are tooling around GGUF, ship GGUF files so they’re counted automatically. (Hugging Face)
References and good guides
- How model downloads are counted: official Hub doc with defaults, library overrides, GGUF details. (Hugging Face)
- Why configs matter (forum explanation): concise thread clarifying that counts track config-like files unless a library override is set. (Hugging Face Forums)
- Registering a library + `countDownloads` example: VFIMamba PR in `huggingface.js`. (GitHub)
- Programmatic downloads and caching: `hf_hub_download` and `snapshot_download` guide. (Hugging Face)
- API fields for `downloads` and `downloads_all_time`: HfApi reference and issues discussing availability and the lack of a time series. (Hugging Face)
- Known "stats not updating" reports: forum threads showing short-term rollup delays. (Hugging Face Forums)
Both model repos include a `config.json` file, and the model's loading script downloads that `config.json` automatically.
@pierric Maybe Hub error again?
any progress on this?
No progress so far. Probably a Hub bug, I think…
Maybe you should email HF Support. [email protected]
After I emailed them, they helped identify the issue. Once I removed the “colpali” tag, the model download count displayed correctly. Thanks for providing the contact email address!