Hi there,
I signed up for Hugging Face last week and tried to load medgemma-27b-text-it on Google Colab, but I keep getting an HTTPError: 403 Client Error.
I have checked my HF token, edited it to read all public repos, added the model under repository permissions, agreed to the license, and been granted access to the model according to the website, which reads "Gated model: You have been granted access to this model".
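In case it helps, this is roughly how I am providing the token in Colab (the HF_TOKEN secret name is just what I called it, and I am writing this part from memory, so I may be doing it wrong too):

```python
# Pull my fine-grained token from a Colab secret and log in with it.
from google.colab import userdata
from huggingface_hub import login

hf_token = userdata.get("HF_TOKEN")  # the fine-grained token created on the website
login(token=hf_token)
```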
The model's list of files loads when I run print(api.list_repo_files(repo_id)), and the token is valid when I run the whoami script. But when I run the example code that starts with # pip install accelerate, it keeps giving me the 403 error:

403 Forbidden: Please enable access to public gated repositories in your fine-grained token settings to view this repository. Cannot access content at: https://huggingface.co/google/medgemma-27b-text-it/resolve/main/config.json. Make sure your token has the correct permissions.
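To make the difference clearer, here is a simplified version of what succeeds versus what fails. I have paraphrased the loading part from memory rather than pasting it exactly, so the arguments may differ slightly from what I actually ran:

```python
import torch
from google.colab import userdata
from huggingface_hub import HfApi
from transformers import AutoModelForCausalLM, AutoTokenizer

hf_token = userdata.get("HF_TOKEN")
repo_id = "google/medgemma-27b-text-it"
api = HfApi(token=hf_token)

# Both of these succeed:
print(api.whoami()["name"])          # the token is recognised
print(api.list_repo_files(repo_id))  # the file list comes back fine

# This is where the 403 is raised (paraphrased from the example
# code block that begins with "# pip install accelerate"):
tokenizer = AutoTokenizer.from_pretrained(repo_id, token=hf_token)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    token=hf_token,
)
```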
Sorry, I am a Python and Hugging Face (and general AI) novice, so if someone could point out what I am doing incorrectly, it would be much appreciated. I would also welcome any general advice about alternative transformer models for medical text classification.
Thank you!
