Gertie01 / app-gb199hq1: Report

I’ve added my API keys in the “Repository secrets” section under the “Settings” tab. After that, I followed the example and pressed the Remix button, and nothing happened.

Here: App Gb199hq1 - a Hugging Face Space by Gertie01

@John6666


Points 1 and 2 below are probably what’s causing the error.


The “Remix” click is bound to a Python function that expects a gr.Progress argument but Gradio doesn’t inject it because you didn’t give it a default. That raises an exception on click, so the UI appears to do nothing. You also call OpenAI with an invalid model id (gpt-image-1-low) and use a previous-generation image model for synthesis (dall-e-3). Fix the function signature, use a supported multimodal model for the analysis step, and restart the Space after adding secrets.

What’s actually breaking

  1. Gradio progress parameter is required but not provided
    Your click handler calls remixer_wrapper(...) with five inputs, but the function signature has a sixth parameter progress: gr.Progress and no default. Gradio only auto-injects progress when the parameter has a default value. Your code then calls progress(...), which throws before any output is rendered.
    Docs: functions must declare progress=gr.Progress(...) to receive the injected object. (Gradio)

  2. Unsupported OpenAI model id in analysis path
    You send image+text to Chat Completions with model="gpt-image-1-low". That id does not exist. “Low/High” is a quality setting for the image generation tool, not a model suffix. Use a vision-capable chat model such as gpt-4o-mini (or gpt-4o) with image_url parts, or call the Images tool with gpt-image-1. (OpenAI Platform)

  3. Previous-generation image model for the final render
    You call images.generate(model="dall-e-3"). It still works, but it is listed as “previous generation.” Prefer gpt-image-1 or gpt-image-1-mini. (OpenAI Platform)

  4. Secrets are environment variables; a restart is needed after adding them
    Your code expects OPENAI_API_KEY and GEMINI_API_KEY from the environment (Hugging Face “Repository secrets” surface them at runtime). If you add secrets after a Space is running, restart the Space to load them. (Hugging Face)

  5. Version drift risk
    Your Space YAML pins Gradio to 5.49.1, but requirements.txt floats package versions. Keep them consistent to avoid mismatches with v5 APIs like gr.Progress.
    Config reference for sdk/sdk_version: (Hugging Face)
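
The failure in point 1 can be reproduced without Gradio at all: the click supplies only the declared inputs, so a handler whose progress parameter lacks a default demands one argument too many. A plain-Python sketch of the mismatch:

```python
import inspect

def handler_broken(a, b, progress):       # progress has no default
    return a + b

def handler_fixed(a, b, progress=None):   # stand-in for progress=gr.Progress()
    return a + b

def required_params(fn):
    # List the parameters a caller must supply explicitly.
    sig = inspect.signature(fn)
    return [n for n, p in sig.parameters.items()
            if p.default is inspect.Parameter.empty]

# Gradio wires only the declared inputs (here: a, b). The broken handler
# still demands a third argument, so the call raises TypeError and the
# click appears to do nothing.
```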


Exact fixes to apply

A. Make the click handler compatible with Gradio

Add a default to the progress parameter so Gradio injects it.

# app.py
def remixer_wrapper(
    model_choice: str,
    prompt: str,
    img1_path: Optional[str],
    img2_path: Optional[str],
    img3_path: Optional[str],
    progress: gr.Progress = gr.Progress()  # <-- add default
) -> Tuple[str, Image.Image]:
    ...

This matches the documented pattern (progress=gr.Progress(...)). (Gradio)

B. Use a supported model for the analysis step (image+text → prompt)

Replace the invalid gpt-image-1-low Chat Completions call with a vision-capable chat model (example: gpt-4o-mini). Keep your base64 image_url parts.

# models.py  (analysis branch)
response = OPENAI_CLIENT.chat.completions.create(
    model="gpt-4o-mini",  # vision-capable chat model
    messages=[{"role": "user", "content": contents}],
    max_tokens=500
)
expanded_prompt = response.choices[0].message.content.strip()

OpenAI vision in Chat Completions: send {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,...", "detail": "low"}} items in messages. (OpenAI Platform)
Why: gpt-image-1 is for the Images tool, not a Chat Completions model. (OpenAI Platform)
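
For completeness, a minimal sketch of assembling those base64 image_url parts (the helper names are illustrative, not from the Space’s code):

```python
import base64

def to_image_part(path: str, detail: str = "low") -> dict:
    # Encode a local image as a base64 data URL for Chat Completions vision input.
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:image/jpeg;base64,{b64}", "detail": detail},
    }

def build_contents(prompt: str, image_paths: list) -> list:
    # Text part first, then one image part per uploaded file.
    parts = [{"type": "text", "text": prompt}]
    parts += [to_image_part(p) for p in image_paths]
    return parts
```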

C. Modernize the final image generation call

Prefer the GPT Image tool.

# models.py  (synthesis step)
dalle_response = OPENAI_CLIENT.images.generate(
    model="gpt-image-1",         # replace "dall-e-3"
    prompt=final_prompt,
    size="1024x1024",
    quality="standard",
    n=1
)

dall-e-3 is “previous generation”; gpt-image-1 is current. (OpenAI Platform)

D. Confirm secrets and restart

  • In your Space: Settings → Repository secrets. Add:

    • OPENAI_API_KEY
    • GEMINI_API_KEY
  • Click Restart this Space to reload env vars.
    Secrets usage and monitoring logs: (Hugging Face)
    Google SDK accepts either GEMINI_API_KEY or GOOGLE_API_KEY. Your code uses the default genai.Client() which reads them automatically. (Google AI for Developers)
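
A small fail-fast check on app load can make a missing secret obvious in the logs. A sketch (the function name is illustrative):

```python
import os

def check_required_secrets() -> list:
    # Return the names of any required env vars that are missing or empty.
    # GEMINI_API_KEY and GOOGLE_API_KEY are interchangeable for google-genai.
    missing = []
    if not os.environ.get("OPENAI_API_KEY"):
        missing.append("OPENAI_API_KEY")
    if not (os.environ.get("GEMINI_API_KEY") or os.environ.get("GOOGLE_API_KEY")):
        missing.append("GEMINI_API_KEY (or GOOGLE_API_KEY)")
    return missing
```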

E. Keep package versions aligned

Make requirements.txt consistent with your Space SDK:

# requirements.txt
gradio==5.49.1     # match README sdk_version
Pillow>=10.4,<11
google-genai>=0.5
openai>=1.51       # new Images + Chat Completions APIs
requests>=2.32

Space config reference for sdk_version: (Hugging Face)


Quick validation checklist (beginner-safe)

  1. Reload the Space
    After pushing the code changes, open the Space page and press Restart. Then press View logs in the right panel. If anything fails, the Python traceback appears there. (Hugging Face)

  2. Test without API calls

  • Leave model_selector as “gpt image-1”.
  • Upload 1 dummy image and type a short prompt.
  • You should see the progress bar step from 0.1 to 1.0 and a generated image.
    If you see an error, the log will point to the line number. Your code raises Gradio errors for missing API keys or no images.
  3. Switch analysis models
  • The “gemini-2” path calls models.generate_content(model='gemini-2.0-flash-live', ...). Consider a current model id such as gemini-2.5-flash with the same API. (Google AI for Developers)
  4. If the button still “does nothing”
    Open the browser console for front-end errors and check the Space logs for Python exceptions. Most often it is a TypeError from the mismatched progress parameter or an invalid model id. (Hugging Face)

Why the fixes map to your code

  • Your handler signature requires progress but the click wires only five inputs. Adding a default lets Gradio inject it and unblocks the call. (Gradio)
  • Your Chat Completions call uses an invalid model id; the image_url content parts themselves are correctly formed. Swap to gpt-4o-mini (vision chat), or move the analysis into the Images tool with gpt-image-1 if you want single-tool processing. (OpenAI Platform)
  • The final render prefers gpt-image-1; dall-e-3 is legacy. (OpenAI Platform)
  • Secrets: your modules create clients from env on import. A restart is required after adding secrets in Settings. (Hugging Face)

Optional improvements

  • Guardrails in UI: Disable the “Remix” button until a prompt and at least one image are present.
  • Inline secret check: On app load, test for OPENAI_API_KEY and show a banner if missing.
  • Model ids: Expose a dropdown of supported ids and keep them in a constants list so typos cannot reach the API.
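
The model-id idea can be sketched as a single mapping from UI labels (like “gpt image-1” above) to real API ids, so a typo can never reach the API. The exact labels and file name are illustrative:

```python
# constants.py (illustrative): one source of truth for the dropdown.
SUPPORTED_MODELS = {
    "gpt image-1": "gpt-image-1",     # UI label -> real API model id
    "gemini-2": "gemini-2.5-flash",
}

def resolve_model(label: str) -> str:
    # Map a dropdown label to a supported API model id, or fail loudly.
    try:
        return SUPPORTED_MODELS[label]
    except KeyError:
        raise ValueError(f"Unsupported model choice: {label!r}") from None
```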

Short, curated references

Gradio

  • Progress usage pattern (progress=gr.Progress(...)). (Gradio)
  • Blocks/ChatInterface and examples shape. (Gradio)



Here’s a minimal patch set that unblocks the Remix button and fixes the model IDs.

diff --git a/app.py b/app.py
index 0000000..1111111 100644
--- a/app.py
+++ b/app.py
@@
-def remixer_wrapper(
+def remixer_wrapper(
     model_choice: str,
     prompt: str,
     img1_path: Optional[str],
     img2_path: Optional[str],
     img3_path: Optional[str],
-    progress: gr.Progress
+    progress: gr.Progress = gr.Progress()
 ) -> Tuple[str, Image.Image]:
diff --git a/models.py b/models.py
index 0000000..2222222 100644
--- a/models.py
+++ b/models.py
@@
-            response = OPENAI_CLIENT.chat.completions.create(
-                model="gpt-image-1-low",
+            response = OPENAI_CLIENT.chat.completions.create(
+                model="gpt-4o-mini",
                 messages=[
                     {"role": "user", "content": contents}
                 ],
                 max_tokens=500
             )
@@
-        dalle_response = OPENAI_CLIENT.images.generate(
-            model="dall-e-3",
+        dalle_response = OPENAI_CLIENT.images.generate(
+            model="gpt-image-1",
             prompt=final_prompt,
             size="1024x1024",
             quality="standard",
             n=1
         )

After applying

  • Restart the Space so secrets and code changes load. (Hugging Face)

Why these exact changes

  • Gradio injects a progress tracker only when the parameter has a default of gr.Progress(). Without the default, your click raises and the UI looks inert. (Gradio)
  • Vision analysis must use a chat model (e.g., gpt-4o/gpt-4o-mini) with image_url content. gpt-image-1 is the Images API model, not a Chat Completions model. (OpenAI Platform)
  • dall-e-3 is listed as previous generation; use gpt-image-1 on the Images API. (OpenAI Platform)

That’s it. Apply these diffs and restart. If anything still stalls, check the Space logs for tracebacks.

FAILED. Error: Image generation failed: Error code: 400 - {'error': {'message': "Invalid value: 'standard'. Supported values are: 'low', 'medium', 'high', and 'auto'.", 'type': 'invalid_request_error', 'param': 'quality', 'code': 'invalid_value'}}


With external endpoints, the parameters actually accepted differ per endpoint, so you’ll likely hit errors unless you read the documentation thoroughly first. And even the same endpoint may undergo specification changes over time. :sweat_smile:


Use quality="medium" or drop the param. "standard" is for DALL·E 3, not gpt-image-1. Valid values now: low, medium, high, auto (default). (OpenAI Cookbook) Also keep your existing URL-based download by requesting URLs explicitly.

Minimal patch

diff --git a/models.py b/models.py
@@
-        dalle_response = OPENAI_CLIENT.images.generate(
-            model="gpt-image-1",
-            prompt=final_prompt,
-            size="1024x1024",
-            quality="standard",
-            n=1
-        )
+        dalle_response = OPENAI_CLIENT.images.generate(
+            model="gpt-image-1",
+            prompt=final_prompt,
+            size="1024x1024",
+            quality="medium",           # valid: low | medium | high | auto
+            response_format="url",      # return URLs, so the next lines keep working
+            n=1
+        )

Why this fixes it

  • gpt-image-1 rejects quality="standard". Use low|medium|high|auto. (OpenAI Cookbook)
  • If you prefer not to pin quality, remove the line and let it default to auto. (OpenAI Cookbook)
  • Asking for response_format="url" preserves your existing dalle_response.data[0].url download path. OpenAI notes URL links are temporary, so save the bytes locally. (OpenAI Help Center)

Reference confirming gpt-image-1 quality levels and examples: OpenAI Cookbook. (OpenAI Cookbook)

FAILED. Error: Image generation failed: Error code: 400 - {'error': {'message': "Unknown parameter: 'response_format'.", 'type': 'invalid_request_error', 'param': 'response_format', 'code': 'unknown_parameter'}}


Sorry, I got the API spec wrong… :sweat_smile: response_format doesn’t seem to be accepted by gpt-image-1. I committed a fix. Try merging it.
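
For anyone merging it: without response_format, gpt-image-1 returns the image as base64 in data[0].b64_json rather than a URL, so the download path changes roughly like this (a sketch, not the exact committed code):

```python
import base64

def decode_image_b64(b64_json: str) -> bytes:
    # gpt-image-1 returns base64-encoded bytes in data[0].b64_json, not a URL.
    return base64.b64decode(b64_json)

# Usage inside models.py (sketch):
# from io import BytesIO
# from PIL import Image
# img_bytes = decode_image_b64(dalle_response.data[0].b64_json)
# image = Image.open(BytesIO(img_bytes))
```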

FAILED. Error: Image generation failed: Error code: 400 - {'error': {'message': 'Billing hard limit has been reached.', 'type': 'billing_limit_user_error', 'param': None, 'code': 'billing_hard_limit_reached'}}


Billing hard limit has been reached

This definitely looks like a payment-related error on the endpoint side…
Using it any further will probably incur charges. There’s nothing we can do about this in the code.