Runtime error
Exit code: 1. Reason:
tokenizer_config.json: 100%|██████████| 2.54k/2.54k [00:00<00:00, 14.0MB/s]
spiece.model: 100%|██████████| 792k/792k [00:00<00:00, 3.37MB/s]
tokenizer.json: 100%|██████████| 2.42M/2.42M [00:00<00:00, 130MB/s]
special_tokens_map.json: 100%|██████████| 2.20k/2.20k [00:00<00:00, 14.3MB/s]

Traceback (most recent call last):
  File "/app/agent.py", line 29, in __init__
    self.model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2157, in __getattribute__
    requires_backends(cls, cls._backends)
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2143, in requires_backends
    raise ImportError("".join(failed))
ImportError: AutoModelForSeq2SeqLM requires the PyTorch library but it was not found in your environment. Check out the instructions on the installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment. Please note that you may need to restart your runtime after installation.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/app.py", line 12, in <module>
    agent = SimpleAgent()  # loads model (may take a few seconds)
  File "/app/agent.py", line 31, in __init__
    raise RuntimeError(f"Failed to load model {model_name}: {e}")
RuntimeError: Failed to load model google/flan-t5-small: AutoModelForSeq2SeqLM requires the PyTorch library but it was not found in your environment. Check out the instructions on the installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment. Please note that you may need to restart your runtime after installation.
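The traceback shows that transformers was installed without a deep-learning backend: the tokenizer files download fine, but `AutoModelForSeq2SeqLM.from_pretrained` needs PyTorch. Installing it in the container (for example `pip install torch`, or the CPU-only wheel from https://download.pytorch.org/whl/cpu to keep the image small) should resolve the error. As a minimal sketch (the helper names here are hypothetical, not part of the app), the agent could also check for the backend up front and fail with an actionable message instead of the nested ImportError:

```python
import importlib.util

def backend_available(name: str = "torch") -> bool:
    """Return True if the named package can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

def require_backend(name: str = "torch") -> None:
    # Fail early, before downloading model weights, with a hint on how to fix it.
    if not backend_available(name):
        raise RuntimeError(
            f"{name} is not installed; install a CPU-only build with: "
            f"pip install {name} --index-url https://download.pytorch.org/whl/cpu"
        )
```

Calling `require_backend()` at the top of `SimpleAgent.__init__` would surface the missing dependency immediately, before any Hugging Face downloads start.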