open-webui image-generation function using fal.ai [Flux-Pro]

Preface

As everyone knows, SiliconFlow (硅基流动) has taken Flux offline, but fellow forum users quickly found a new channel: fal.ai

The tutorial behind this link shows how to get $50 in credits, so I put together a new image-generation function based on the API, using the Flux-Pro model on fal.ai - FLUX1.1 [pro] ultra | Text to Image | fal.ai
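
For reference, here is a rough sketch of the raw request that the function below wraps: you POST the prompt to the queue endpoint and immediately get back a response_url to poll, not the image itself (the endpoint, auth header, and response_url field all mirror the code below; the FAL_KEY environment variable is just my placeholder):

import os
import requests

resp = requests.post(
    "https://queue.fal.run/fal-ai/flux-pro/v1.1-ultra",
    headers={
        "Authorization": f"Key {os.environ['FAL_KEY']}",  # your fal.ai API key
        "Content-Type": "application/json",
    },
    json={"prompt": "A sleeping kitten", "aspect_ratio": "16:9"},
)
resp.raise_for_status()
print(resp.json().get("response_url"))  # poll this URL until the image is ready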

Function (last modified: 2025-03-04 03:00)

import aiohttp
import asyncio
import json
from typing import Optional
from pydantic import BaseModel, Field


async def emit(emitter, msg, done):
    await emitter(
        {
            "type": "status",
            "data": {
                "done": done,
                "description": msg,
            },
        }
    )


class Filter:
    class Valves(BaseModel):
        priority: int = Field(
            default=0,
            description="Priority level for the filter operations.",
        )
        api_key: str = Field(
            default="",
            description="fal.ai 的 API 密钥",
        )

    class UserValves(BaseModel):
        enable_safety_checker: bool = Field(
            default=False,
            description="是否开启安全内容过滤",
        )
        safety_tolerance: str = Field(
            default="5",
            description="安全等级(1-5),1为最严格,5为最宽松",
        )
        aspect_ratio: str = Field(
            default="16:9",
            description="图片比例,可选:21:9, 16:9, 4:3, 3:2, 1:1, 2:3, 3:4, 9:16, 9:21",
        )
        raw: bool = Field(
            default=False,
            description="生成较少处理、更自然的图像。",
        )
        seed: Optional[int] = Field(
            default=None,
            description="图像生成的种子",
        )

    def __init__(self):
        self.valves = self.Valves()

    # inlet runs before the model call; the chat model will turn the user's
    # message into an English drawing prompt (see the prompt template below)
    async def inlet(self, body, __user__, __event_emitter__):
        await emit(__event_emitter__, "正在生成绘制提示词,请等待...", False)
        return body

    async def request(self, prompt, __user__, __event_emitter__):
        url = "https://queue.fal.run/fal-ai/flux-pro/v1.1-ultra"

        headers = {
            "accept": "application/json",
            "content-type": "application/json",
            "Authorization": f"Key {self.valves.api_key}",
        }

        payload = {
            "prompt": prompt,
            "enable_safety_checker": __user__["valves"].enable_safety_checker,
            "safety_tolerance": __user__["valves"].safety_tolerance,
            "aspect_ratio": __user__["valves"].aspect_ratio,
            "raw": __user__["valves"].raw,
        }

        # a seed of 0 is valid, so compare against None explicitly
        if (seed := __user__["valves"].seed) is not None:
            payload["seed"] = seed

        async with aiohttp.ClientSession() as sess:
            # Initial request to start image generation
            initial_response = await sess.post(url, json=payload, headers=headers)

            if initial_response.status != 200:
                initial_text = await initial_response.text()
                await emit(
                    __event_emitter__,
                    f"The initial request failed ({initial_response.status}): {initial_text}.",
                    True,
                )
                return []

            initial_data = await initial_response.json()
            response_url = initial_data.get("response_url")

            if not response_url:
                await emit(__event_emitter__, f"Failed to get request_url", True)
                return []

            # Poll for the result (the queue runs asynchronously)
            max_attempts = 15
            for attempt in range(max_attempts):
                # Determine wait time based on attempt number
                wait_time = 5 if attempt < 10 else 10

                await emit(
                    __event_emitter__,
                    f"正在生成图片,请等待... ({attempt+1}/{max_attempts})",
                    False,
                )

                # Wait before polling
                await asyncio.sleep(wait_time)

                try:
                    poll_response = await sess.get(response_url, headers=headers)
                    poll_text = await poll_response.text()
                    poll_data = json.loads(poll_text)

                    # Check if we have images (success case)
                    if "images" in poll_data and poll_data["images"]:
                        images = []
                        for i, image_data in enumerate(poll_data["images"]):
                            if url := image_data.get("url"):
                                images.append(f"![image{i}]({url})")

                        if images:
                            await emit(
                                __event_emitter__, "图片生成成功,请点击预览!", True
                            )
                            return images

                    # Check if still in progress
                    if (
                        "detail" in poll_data
                        and "Request is still in progress" in poll_data["detail"]
                    ):
                        # This is expected while waiting - continue to next polling attempt
                        continue

                    # If we get here, it's some other unexpected response
                    await emit(
                        __event_emitter__, f"Unexpected response: {poll_data}", True
                    )
                    return []

                except Exception as e:
                    await emit(
                        __event_emitter__, f"Error during polling: {str(e)}", True
                    )
                    return []

            # If we reach here, polling timed out
            await emit(__event_emitter__, "生成时间超时", True)
            return []

    # outlet runs on the model's reply: its content (the generated English prompt)
    # is sent to fal.ai, and the returned image Markdown is prepended to it
    async def outlet(self, body, __user__, __event_emitter__):
        await emit(__event_emitter__, "正在生成图片,请等待...", False)
        last = body["messages"][-1]
        res = await self.request(last["content"], __user__, __event_emitter__)

        for r in res:
            last["content"] = f"{r}\n{last['content']}"

        return body
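
If you want to sanity-check the function outside open-webui first, a minimal sketch like this works against the code above (the fake __user__ dict and the printer coroutine are my stand-ins for what open-webui normally passes in, and you still need a real API key):

async def _demo():
    f = Filter()
    f.valves = Filter.Valves(api_key="YOUR_FAL_KEY")  # placeholder key
    user = {"valves": Filter.UserValves()}  # default per-user settings

    async def printer(event):
        # stand-in for open-webui's __event_emitter__
        print(event["data"]["description"])

    images = await f.request("A sleeping kitten", user, printer)
    print(images)  # e.g. ['![image0](https://...)']


if __name__ == "__main__":
    asyncio.run(_demo())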

How to use:

  1. In open-webui, create a new function, paste in the code, and save it

  2. Open the function's gear (settings) button and fill in the API key

  3. Create a custom model and enable this function under its Filters

Prompt:

You are a text-to-image prompt generator, and your task is to convert my sentences into detailed, rich, and creative English prompts. First, remove non-descriptive content from my sentence: for example, from "Draw a sleeping kitten" you only need to extract the main part, "A sleeping kitten". Then process that main content: use your imagination and creativity to write prompts that are as detailed, vivid, and true to the scene as possible.

Output the prompt directly, with no acknowledgements and no irrelevant content.

The prompt can be adjusted to suit your needs; mine is just an example. Just make sure the model can generate a prompt of reasonable length and proper format from your question.

Other notes:

Because fal.ai's API generates images asynchronously, I had to resort to polling here; if an image takes too long to come back, the request is simply abandoned (with the defaults that is at most 10 × 5 s + 5 × 10 s = 100 seconds of polling), though in most cases the image is ready within a few seconds. Also, in open-webui, switching conversations or closing the page while an image is being generated and then reopening that conversation seems to break the function, so I recommend waiting for generation to finish before switching conversations or closing the page.
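
If you would rather not string-match the "Request is still in progress" message, the queue API's initial response should also contain a status_url you can poll instead (the field name and status values here are from the fal.ai queue docs as I remember them, so verify before relying on this). A rough sketch, where sess is the same aiohttp.ClientSession and headers the same auth headers as in request():

async def poll_until_done(sess, status_url, response_url, headers, attempts=15):
    for attempt in range(attempts):
        await asyncio.sleep(5 if attempt < 10 else 10)
        async with sess.get(status_url, headers=headers) as r:
            status = (await r.json()).get("status")  # IN_QUEUE / IN_PROGRESS / COMPLETED
        if status == "COMPLETED":
            async with sess.get(response_url, headers=headers) as r:
                return await r.json()  # same payload the function parses for "images"
    return None  # timed out, same ~100-second budget as the function above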

If you need other models, you can check the API docs yourself; usually changing just one or two parameters is enough. If you really can't figure it out, you can always ask an AI.
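
As a hypothetical illustration, switching to fal.ai's flux/dev model mostly means pointing at a different endpoint and renaming a couple of payload fields; the parameter names below are taken from the fal.ai model docs as I remember them, so verify them on the model's API page before using this:

import os
import requests


def submit_flux_dev(prompt: str) -> str:
    # Same queue pattern as the function above, just a different endpoint and payload.
    resp = requests.post(
        "https://queue.fal.run/fal-ai/flux/dev",
        headers={"Authorization": f"Key {os.environ['FAL_KEY']}"},  # FAL_KEY is my placeholder
        json={
            "prompt": prompt,
            "image_size": "landscape_16_9",  # flux/dev uses size presets instead of aspect_ratio
            "num_inference_steps": 28,
        },
    )
    resp.raise_for_status()
    return resp.json()["response_url"]  # then poll exactly as the function above does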


Thanks, this is exactly what I needed

Thanks in advance

You're fast!

Bookmarked, thanks

Thanks for the excellent tutorial

Thanks for the tutorial!

Thanks :call_me_hand:

Tested and working, thanks

A question: I've added the function and the model, but the newly added model doesn't show up on the chat page. What could be causing this? Both the new function and the model are definitely enabled; the open-webui version is 0.5.18

How could that be? Are you sure it doesn't show up even for the admin account?

Definitely not showing up. I'm going to try redeploying :joy:

Figured it out: my API and models had been added as "Direct Connections" in the user settings rather than as "External Connections" on the admin page, which is why the custom model didn't show up.

Thanks for sharing :pray:

Thanks for sharing, very helpful

Got it working, thanks

Quick question: is this address behind a proxy? https://queue.fal.run/fal-ai/flux-pro/v1.1-ultra

No idea, never seen it

I misread that. You're asking about the API endpoint, right? That is the official API endpoint

Tried it and it works now. Thanks!
