Gertie01 / light-bot-89: Report

Job failed with exit code: 1. Reason:

cache miss: [run 3/3] LINK COPY --from=pipfreeze --link /pipfreeze/ /pipfreeze/
cache miss: [run 1/3] COPY --link ./ /app
cache miss: [pipfreeze 2/2] RUN pip freeze > /pipfreeze/freeze.txt
cache miss: [pipfreeze 1/2] RUN mkdir -p /pipfreeze
cache miss: [run 2/3] RUN mkdir -p /home/user && ( [ -e /home/user/app ] || ln -s /app/ /home/user/app ) || true
cache miss: [run 3/3] COPY --from=pipfreeze --link /pipfreeze/ /pipfreeze/
cache miss: [base 6/7] RUN --mount=target=/tmp/requirements.txt,source=requirements.txt pip install --no-cache-dir -r /tmp/requirements.txt
cache miss: [run 1/3] LINK COPY --link ./ /app
cache miss: [base 7/7] RUN pip install --no-cache-dir gradio[oauth,mcp]==5.49.1 "uvicorn>=0.14.0" spaces

{"total":26,"completed":19,"user_total":15,"user_cached":5,"user_completed":8,"user_cacheable":14,"from":1,"miss":9,"client_duration_ms":13102}

Build logs:

===== Build Queued at 2025-11-15 16:52:39 / Commit SHA: 185331b =====

--> FROM docker.io/library/python:3.10@sha256:e944d95e7277b5888479cc4bcd6cec63e3128fec9ec4c6ef099be470d89b54d4
DONE 0.0s

--> RUN pip install --no-cache-dir pip -U && 	pip install --no-cache-dir 	datasets 	"huggingface-hub>=0.30" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1" "pydantic~=1.0"
CACHED

--> COPY --from=root / /
CACHED

--> WORKDIR /app
CACHED

--> RUN apt-get update && apt-get install -y 	git 	git-lfs 	ffmpeg 	libsm6 	libxext6 	cmake 	rsync 	libgl1 	&& rm -rf /var/lib/apt/lists/* 	&& git lfs install
CACHED

--> RUN 	apt-get update && 	apt-get install -y curl && 	curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && 	apt-get install -y nodejs && 	rm -rf /var/lib/apt/lists/* && apt-get clean
CACHED

--> Restoring cache
DONE 12.2s

--> RUN --mount=target=/tmp/requirements.txt,source=requirements.txt     pip install --no-cache-dir -r /tmp/requirements.txt
Collecting git+https://github.com/huggingface/spaces@main (from -r /tmp/requirements.txt (line 8))
  Cloning https://github.com/huggingface/spaces (to revision main) to /tmp/pip-req-build-w7w5wnsw
  Running command git clone --filter=blob:none --quiet https://github.com/huggingface/spaces /tmp/pip-req-build-w7w5wnsw
  fatal: could not read Username for 'https://github.com': No such device or address
  error: subprocess-exited-with-error
  
  Γ— git clone --filter=blob:none --quiet https://github.com/huggingface/spaces /tmp/pip-req-build-w7w5wnsw did not run successfully.
  β”‚ exit code: 128
  ╰─> No available output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed to build 'git+https://github.com/huggingface/spaces@main' when git clone --filter=blob:none --quiet https://github.com/huggingface/spaces /tmp/pip-req-build-w7w5wnsw

--> ERROR: process "/bin/sh -c pip install --no-cache-dir -r /tmp/requirements.txt" did not complete successfully: exit code: 1
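The root cause of this build failure is the pinned git dependency on line 8 of requirements.txt: the build container has no GitHub credentials (hence `could not read Username for 'https://github.com'`), and `github.com/huggingface/spaces` is apparently not a repository that can be cloned anonymously. A hedged fix, assuming the app only needs the `spaces` ZeroGPU helper package, is to install the published PyPI release instead of the git URL:

```
# requirements.txt — replace the failing line
#   git+https://github.com/huggingface/spaces@main
# with the PyPI release of the same helper package:
spaces
```

The build log also shows the base image already installing `spaces` in step [base 7/7], so the git line may be removable outright.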

Here: Light Bot 89 - a Hugging Face Space by Gertie01

@John6666


Try this: https://huggingface.co/spaces/Gertie01/light-bot-89/discussions/1

First, I think you should learn how to debug using generative AI and give it a try…

I don’t think AI (or other people) exist to take over someone’s tasks 100% for free… let’s use them appropriately…

Exit code: 1. Reason: ImportError: StableDiffusionPipeline requires the transformers library but it was not found in your environment. You can install it with pip: `pip install transformers` (full traceback in the container logs below)

Container logs:

===== Application Startup at 2025-11-16 12:27:27 =====

Loading Models...


model_index.json:   0%|          | 0.00/541 [00:00<?, ?B/s]
model_index.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 541/541 [00:00<00:00, 2.46MB/s]
Traceback (most recent call last):
  File "/app/app.py", line 21, in <module>
    pipe_v1 = DiffusionPipeline.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 833, in from_pretrained
    cached_folder = cls.download(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1575, in download
    pipeline_class._is_onnx,
  File "/usr/local/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 645, in __getattr__
    requires_backends(cls, cls._backends)
  File "/usr/local/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 613, in requires_backends
    raise ImportError("".join(failed))
ImportError: StableDiffusionPipeline requires the transformers library but it was not found in your environment. You can install it with pip: `pip install transformers`
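This first runtime failure is just a missing dependency: `diffusers` loads `StableDiffusionPipeline`'s text encoder through `transformers`, so adding `transformers` to requirements.txt should clear it. As a sketch, a small preflight check at the top of app.py can surface every missing backend at once instead of failing mid-download (the package list below is an assumption based on the traceback, not taken from the actual app):

```python
# Preflight sketch: report all missing backends up front.
# The package list is an assumption inferred from the traceback above.
import importlib.util

def missing_backends(packages=("transformers", "diffusers", "torch")):
    """Return the names in `packages` that cannot be imported."""
    return [p for p in packages if importlib.util.find_spec(p) is None]
```

Called at startup — e.g. `if missing_backends(): raise SystemExit(...)` — this turns the mid-download ImportError into a one-line message listing everything to add to requirements.txt.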

Exit code: 1. Reason: RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (full traceback in the container logs below)

Container logs:

===== Application Startup at 2025-11-16 13:51:46 =====

Loading Models...


model_index.json:   0%|          | 0.00/541 [00:00<?, ?B/s]
model_index.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 541/541 [00:00<00:00, 5.08MB/s]


preprocessor_config.json:   0%|          | 0.00/342 [00:00<?, ?B/s]
preprocessor_config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 342/342 [00:00<00:00, 4.27MB/s]


scheduler_config-checkpoint.json:   0%|          | 0.00/209 [00:00<?, ?B/s]
scheduler_config-checkpoint.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 209/209 [00:00<00:00, 2.48MB/s]


scheduler_config.json:   0%|          | 0.00/313 [00:00<?, ?B/s]
scheduler_config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 313/313 [00:00<00:00, 3.16MB/s]


config.json:   0%|          | 0.00/592 [00:00<?, ?B/s]
config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 592/592 [00:00<00:00, 7.18MB/s]


text_encoder/model.safetensors:   0%|          | 0.00/492M [00:00<?, ?B/s]

text_encoder/model.safetensors:  60%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰    | 293M/492M [00:01<00:00, 292MB/s]
text_encoder/model.safetensors: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 492M/492M [00:01<00:00, 396MB/s]


merges.txt:   0%|          | 0.00/525k [00:00<?, ?B/s]
merges.txt: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 525k/525k [00:00<00:00, 66.7MB/s]


special_tokens_map.json:   0%|          | 0.00/472 [00:00<?, ?B/s]
special_tokens_map.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 472/472 [00:00<00:00, 4.05MB/s]


tokenizer_config.json:   0%|          | 0.00/806 [00:00<?, ?B/s]
tokenizer_config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 806/806 [00:00<00:00, 7.46MB/s]


vocab.json:   0%|          | 0.00/1.06M [00:00<?, ?B/s]
vocab.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.06M/1.06M [00:00<00:00, 151MB/s]


config.json:   0%|          | 0.00/743 [00:00<?, ?B/s]
config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 743/743 [00:00<00:00, 7.20MB/s]


unet/diffusion_pytorch_model.safetensors:   0%|          | 0.00/3.44G [00:00<?, ?B/s]

unet/diffusion_pytorch_model.safetensors:   1%|          | 19.9M/3.44G [00:01<04:55, 11.6MB/s]

unet/diffusion_pytorch_model.safetensors:   3%|β–Ž         | 87.0M/3.44G [00:03<01:48, 30.9MB/s]

unet/diffusion_pytorch_model.safetensors:  10%|β–ˆ         | 355M/3.44G [00:04<00:32, 93.6MB/s] 

unet/diffusion_pytorch_model.safetensors:  49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰     | 1.70G/3.44G [00:05<00:03, 455MB/s]
unet/diffusion_pytorch_model.safetensors: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3.44G/3.44G [00:06<00:00, 528MB/s]


config.json:   0%|          | 0.00/551 [00:00<?, ?B/s]
config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 551/551 [00:00<00:00, 5.90MB/s]


vae/diffusion_pytorch_model.safetensors:   0%|          | 0.00/335M [00:00<?, ?B/s]

vae/diffusion_pytorch_model.safetensors:  20%|β–ˆβ–‰        | 66.6M/335M [00:02<00:10, 25.7MB/s]
vae/diffusion_pytorch_model.safetensors: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 335M/335M [00:02<00:00, 124MB/s]  


Loading pipeline components...:   0%|          | 0/6 [00:00<?, ?it/s]`torch_dtype` is deprecated! Use `dtype` instead!


Loading pipeline components...:  83%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 5/6 [00:01<00:00,  3.18it/s]
Loading pipeline components...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [00:01<00:00,  3.58it/s]
Traceback (most recent call last):
  File "/app/app.py", line 27, in <module>
    ).to(DEVICE)
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 541, in to
    module.to(device, dtype)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4343, in to
    return super().to(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1371, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 930, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 930, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 930, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 957, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1357, in convert
    return t.to(
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 410, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
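This second runtime failure comes from hardcoding `"cuda"` on hardware with no GPU: the Space is running on CPU (or on ZeroGPU outside a `@spaces.GPU`-decorated call), so `.to(DEVICE)` reaches `torch._C._cuda_init()` and aborts. A minimal sketch, assuming app.py defines a `DEVICE` constant (the name is inferred from the traceback, not the real source):

```python
# Sketch: choose the device at runtime instead of hardcoding "cuda".
# Falls back to CPU when torch is absent or no CUDA driver is available.
import importlib.util

def pick_device() -> str:
    if importlib.util.find_spec("torch") is None:
        return "cpu"
    import torch
    return "cuda" if torch.cuda.is_available() else "cpu"
```

With `DEVICE = pick_device()`, pair it with a matching precision — float16 on CUDA, float32 on CPU — and note the warning in the log above: diffusers now wants `dtype=` rather than the deprecated `torch_dtype=` in `from_pretrained`.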