SiliconFlow isn't blocked, you can hit the official site directly; their servers are in Beijing. In NextChat I fill it in like this: https://api.siliconflow.cn/v1/chat/completions#
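For reference, a minimal sketch of what that endpoint expects (standard OpenAI chat format; the model name below is only an assumption, substitute one you have enabled in your SiliconFlow console):

import requests

API_KEY = "sk-..."  # your SiliconFlow key

resp = requests.post(
    "https://api.siliconflow.cn/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "Qwen/Qwen2.5-7B-Instruct",  # assumption: replace with a model you actually enabled
        "messages": [{"role": "user", "content": "ping"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])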
I want to use SiliconFlow's embedding model in OpenWebUI; when I test it the way I test other SiliconFlow models, it throws an error.
Isn't there a Functions setting? Just write a custom one yourself; you can reuse the OpenAI-style function as-is.
I'm a total newbie and don't know how to do that; I've never used Functions before.
Just use it directly, the format is the same as OpenAI's.
Here, it's in the official docs too.
Just point the request at the URL of your deployment and you're good.
But I haven't figured out document vectorization yet; I only found out today that this API even has an embeddings endpoint.
But it throws an error, even though the SiliconFlow dashboard still shows the usage records.
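One way to isolate that: hit the embeddings endpoint directly with a minimal sketch like the one below (the model name BAAI/bge-m3 is just an assumption, swap in whichever embedding model you enabled). If this succeeds but OpenWebUI still errors, the problem is on the OpenWebUI configuration side rather than the API key.

import requests

API_KEY = "sk-..."  # your SiliconFlow key

resp = requests.post(
    "https://api.siliconflow.cn/v1/embeddings",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "BAAI/bge-m3",  # assumption: any embedding model available to your account
        "input": "hello world",
    },
)
resp.raise_for_status()
print(len(resp.json()["data"][0]["embedding"]))  # prints the vector dimension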
import asyncio
import aiohttp
from typing import Optional
from pydantic import BaseModel, Field


async def emit(emitter, msg, done):
    # Push a status update to the chat UI.
    await emitter(
        {
            "type": "status",
            "data": {
                "done": done,
                "description": msg,
            },
        }
    )


class Filter:
    class Valves(BaseModel):
        priority: int = Field(
            default=0,
            description="Priority level for the filter operations.",
        )
        api_url: str = Field(
            default="https://api.siliconflow.cn/v1",
            description="Base URL for the Siliconflow API.",
        )
        api_key: str = Field(
            default="",
            description="API Key for the Siliconflow API.",
        )

    class UserValves(BaseModel):
        size: str = Field(
            default="1024x1024",
            description="1024x1024, 512x1024, 768x512, 768x1024, 1024x576, 576x1024.",
        )
        steps: int = Field(
            default=35,
            description="The number of inference steps to be performed (1-50).",
        )
        model: str = Field(
            default="black-forest-labs/FLUX.1-dev",
            description="The name of the model. (Pro/black-forest-labs/FLUX.1-schnell, black-forest-labs/FLUX.1-schnell, black-forest-labs/FLUX.1-dev, stabilityai/stable-diffusion-3-medium, stabilityai/stable-diffusion-3-5-large)",
        )
        pnum: int = Field(
            default=3,
            description="The number of pictures.",
        )
        seed: Optional[int] = Field(
            default=None,
            description="The seed.",
        )

    def __init__(self):
        self.valves = self.Valves()

    async def inlet(self, body, __user__, __event_emitter__):
        await emit(__event_emitter__, "Generating prompt, please wait...", False)
        return body

    async def request(self, prompt, __user__):
        url = f"{self.valves.api_url}/image/generations"
        headers = {
            "accept": "application/json",
            "content-type": "application/json",
            "authorization": f"Bearer {self.valves.api_key}",
        }
        payload = {
            "prompt": prompt,
            "model": __user__["valves"].model,
            "image_size": __user__["valves"].size,
            "num_inference_steps": __user__["valves"].steps,
        }
        if seed := __user__["valves"].seed:
            payload["seed"] = seed
        pnum = __user__["valves"].pnum
        # Fire off pnum generation requests concurrently.
        async with aiohttp.ClientSession() as sess:
            tasks = [sess.post(url, json=payload, headers=headers) for _ in range(pnum)]
            res = await asyncio.gather(*tasks)
            ret = []
            for i, r in enumerate(res):
                if (s := r.status) == 200:
                    data = await r.json()
                    img_url = data["images"][0]["url"]
                    # Embed the returned picture as inline Markdown.
                    ret.append(f"![image]({img_url})")
                else:
                    text = await r.text()
                    ret.append(f"> Request {i} failed ({s}): {text}.")
            return ret

    async def outlet(self, body, __user__, __event_emitter__):
        await emit(__event_emitter__, "Generating picture(s), please wait...", False)
        last = body["messages"][-1]
        res = await self.request(last["content"], __user__)
        for r in res:
            last["content"] += f"\n\n{r}"
        await emit(
            __event_emitter__, "Generated successfully, click to preview!", True
        )
        return body
Got it fixed. There really are a lot of models, though.
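If you want to see exactly which models your key can use, something like this should list them (assuming SiliconFlow exposes the OpenAI-compatible /v1/models endpoint, consistent with how the rest of the API behaves):

import requests

API_KEY = "sk-..."  # your SiliconFlow key

resp = requests.get(
    "https://api.siliconflow.cn/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
for m in resp.json().get("data", []):
    print(m["id"])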
How do I use SiliconFlow for image generation? Chat works, but image generation just says it can't connect.
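One way to narrow that down is to call the generation endpoint directly, outside the chat UI, using the same path and payload fields as the filter above; if this succeeds, the connection itself is fine and the issue is in the front-end configuration. (The model name is just one of the options listed in the filter's valves.)

import requests

API_KEY = "sk-..."  # your SiliconFlow key

resp = requests.post(
    "https://api.siliconflow.cn/v1/image/generations",  # same path the filter above uses
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "black-forest-labs/FLUX.1-dev",
        "prompt": "a cat in a spacesuit",
        "image_size": "1024x1024",
        "num_inference_steps": 35,
    },
)
print(resp.status_code)
print(resp.json()["images"][0]["url"] if resp.ok else resp.text)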
I've been rushing a draft all afternoon. So did you get new-api connected? In one-api, when I create a channel there's a SiliconFlow option (SiliconFlow is 硅基流动's official English name), and then you fill it in like any other channel.
For image generation, click my avatar and open that shameless ad post of mine... just follow the screenshots there to configure it in the plugin settings. (But I use NextChat; if you're on OpenWebUI you'll have to search for the OI post, I haven't used that one.)
OK, thanks! Got it set up.
Mine works directly.