Unable to load medgemma-27b-text-it to google colab

Hi there,

I signed up to Hugging Face last week and tried to load medgemma-27b-text-it in Google Colab, but I keep getting an HTTPError: 403 Client Error.

I have checked my HF token: I edited it to read all public repos, added the model under repository permissions, and agreed to the license, and according to the website I have been granted access to the model ("Gated model: You have been granted access to this model").

The model's file list loads when I run print(api.list_repo_files(repo_id)), and the token is valid when I run the whoami script. But when I run the model-loading snippet from the model card (the one that starts with # pip install accelerate), I keep getting the 403 error:

403 Forbidden: Please enable access to public gated repositories in your fine-grained token settings to view this repository. Cannot access content at: https://huggingface.co/google/medgemma-27b-text-it/resolve/main/config.json. Make sure your token has the correct permissions.

Sorry, I am a Python and Hugging Face (and general AI) novice, so if someone could point out what I am doing incorrectly, it would be much appreciated. Any general advice about alternative transformer models for medical text classification would also be welcome.

Thank you!


PS: I have already checked the "Read access to contents of all public gated repos you can access" box on the token.


The most common cause is a permissions error in a fine-grained token, but that doesn't seem to be the case here. It might be a stale cached token, which occasionally happens in Google Colab.
You should be able to clear the cache as follows:

from huggingface_hub import logout, login
logout()                        # discard the currently stored token
!rm -rf ~/.cache/huggingface    # clear the Hub cache (Colab shell command)
login()                         # log in again and paste a fresh token
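For context on where the 403 comes from: when transformers downloads config.json, huggingface_hub sends your token as a Bearer header on the resolve request, and the Hub answers 403 when that token is not allowed to read the gated repo. A minimal sketch of that authentication (build_auth_header is a made-up helper for illustration, not part of huggingface_hub):

```python
import os

def build_auth_header(token):
    """Return the Authorization header sent with Hub requests, or {} if no token."""
    if not token:
        return {}
    return {"Authorization": f"Bearer {token}"}

# The URL from the error message; the token would normally come from
# your environment or from login() rather than being hard-coded.
url = "https://huggingface.co/google/medgemma-27b-text-it/resolve/main/config.json"
headers = build_auth_header(os.environ.get("HF_TOKEN"))
```

If no token reaches the request at all (empty headers), a gated repo returns 401/403 even when the website says you have access, which is why a stale or wrong cached token produces exactly this error.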

Ah ok, thanks for this, John. I think I figured it out: unchecking write access to public repos seemed to re-enable the "read public gated repos" checkbox, and it works now. I will keep your code handy to clear the cache, as I seem to have trouble keeping track of all my tokens!
