LibreChat Deployment Guide (Beginner-Friendly)

Foreword
Why write this tutorial?
A few days ago, while setting up LibreChat's custom endpoints, I went looking through the official documentation. It reads like a mess, with old and new material jumbled together.
What is LibreChat?
LibreChat is a third-party, open-source AI chat interface. It provides tools for managing and interacting with many different models, supports integrations with AI services such as Azure OpenAI and OpenAI, and offers custom configuration options and plugin support.

Prerequisites
A server with reasonably decent performance, a keyboard with Ctrl, C, and V keys, and a pair of nimble hands.

Deployment

1. First, clone the project to your machine

git clone https://github.com/danny-avila/LibreChat.git

2. Edit the configuration file (required)

After cloning you will see a file named .env.example; rename it to .env. Everything you need to edit at this stage lives in that file. It is stuffed with options that are of basically no use to those of us in mainland China, so only enable what you actually need.
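From the command line, the rename can be done like this (cp keeps the example file around for reference):

cd LibreChat
cp .env.example .env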

Editing the configuration file

We'll go through it from top to bottom.

First, uncomment the following lines:

# UID=1000
# GID=1000
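These are the user and group IDs the containers run with; they generally should match your host user so that files created by the containers end up owned by you (1000 is the default first user on most Linux distributions). A quick way to check your own values; if they differ from 1000, use yours instead:

id -u   # your user ID
id -g   # your group ID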

Configuring the model providers

Claude

ANTHROPIC_API_KEY=user_provided
# ANTHROPIC_MODELS=claude-3-opus-20240229,claude-3-sonnet-20240229,claude-2.1,claude-2,claude-1.2,claude-1,claude-1-100k,claude-instant-1,claude-instant-1-100k
# ANTHROPIC_REVERSE_PROXY=

This section is of little use to most people: official Claude accounts are hard to register and quick to get banned, so I comment this line out and configure a relay another way instead.
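That is, add a # so the line reads:

# ANTHROPIC_API_KEY=user_provided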

Azure and BingAI: comment them all out

Why comment them all out? In the latest version, Azure has a new configuration method (covered later in librechat.yaml), and the way BingAI connects makes it essentially unusable when deployed on a server, so don't bother with it; there are better ways to integrate Bing now.

Google configuration

You can get a GOOGLE_KEY from Google AI Studio (use a US node/IP).
GOOGLE_MODELS sets the available models; most people only use gemini-pro and gemini-pro-vision.
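For example (the key value is a placeholder; replace it with the key from Google AI Studio):

GOOGLE_KEY=your-google-ai-studio-key
GOOGLE_MODELS=gemini-pro,gemini-pro-vision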

OpenAI configuration

OPENAI_API_KEY=user_provided
# OPENAI_MODELS=gpt-3.5-turbo-0125,gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k

If OPENAI_API_KEY is set to user_provided, you can enter the API key later in the web UI; if only you and your friends will use the instance, you can put the key directly here.
After OPENAI_MODELS, list the models you can and want to use, and feel free to drop the outdated ones. A commonly used list you can adapt:
OPENAI_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-1106,gpt-3.5-turbo-0125,gpt-3.5-turbo-16k,gpt-4,gpt-4-1106-preview,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-32k,gpt-4-vision-preview


# TITLE_CONVO=false
# OPENAI_TITLE_MODEL=gpt-3.5-turbo

These settings control automatic conversation title generation. Uncomment them and change false to true to enable it. I recommend turning it on, since titled conversations are much nicer to browse, and gpt-3.5-turbo is plenty for generating titles.
Note: this incurs extra API usage; don't enable it if that bothers you.
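With the comments removed and the flag flipped, the two lines become:

TITLE_CONVO=true
OPENAI_TITLE_MODEL=gpt-3.5-turbo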


# OPENAI_FORCE_PROMPT=true

If you are using a relayed/proxied API, I recommend uncommenting this line and changing true to false.


# OPENAI_REVERSE_PROXY=

This is the address of your relay API, which the provider gives you when you buy access. Format: http(s)://xxxxxxx.xxxxx/v1
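For example (the URL is a placeholder; use the base URL from your relay provider, ending in /v1):

OPENAI_FORCE_PROMPT=false
OPENAI_REVERSE_PROXY=https://your-relay.example.com/v1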

Assistants API

ASSISTANTS_API_KEY=user_provided
# ASSISTANTS_BASE_URL=
# ASSISTANTS_MODELS=gpt-3.5-turbo-0125,gpt-3.5-turbo-16k-0613,gpt-3.5-turbo-16k,gpt-3.5-turbo,gpt-4,gpt-4-0314,gpt-4-32k-0314,gpt-4-0613,gpt-3.5-turbo-0613,gpt-3.5-turbo-1106,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview

Fill this in as needed. Both OpenAI and Azure now offer an Assistants API; I haven't seen any domestic relays for it yet, so fill this in only if you have official OpenAI access or a relay that supports it. Note that the Azure Assistants API is configured elsewhere (in librechat.yaml).
If you don't have it, don't need it, or you are using the Azure Assistants API, comment these lines out.

Common plugin settings

Remember: to use the plugin feature, your API must go through the official API, whether Azure or OpenAI; reverse-engineered endpoints do not support Function Calling.

  1. DALL-E
    DALL-E comes in two flavors, OpenAI and Azure; a relayed dall-e-3 counts as the OpenAI flavor.

OpenAI first:

# DALLE_API_KEY=
# DALLE3_API_KEY=
# DALLE2_API_KEY=
# DALLE_REVERSE_PROXY=

These are the values to fill in: fill in whichever generation(s) you have and ignore the rest. The relay address goes in DALLE_REVERSE_PROXY and should end in /v1. Remember to uncomment the lines you fill in.
LibreChat calls the DALL-E 3 model by the name dall-e-3; some relays use a different model name, which makes the call fail. The same goes for dall-e-2.
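For example, for a relayed dall-e-3 (placeholder values; fill in your own key and relay base URL):

DALLE3_API_KEY=sk-xxxxxxxx
DALLE_REVERSE_PROXY=https://your-relay.example.com/v1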

Azure DALL-E 3 configuration

You need to configure the following:

# DALLE3_API_KEY=
# DALLE2_API_KEY=
# DALLE3_BASEURL=
# DALLE2_BASEURL=
# DALLE3_AZURE_API_VERSION=
# DALLE2_AZURE_API_VERSION=

Same principle as before: fill in what you need and skip what you don't. All of these values come from Microsoft (the Azure portal).
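A sketch with placeholders only (the exact base URL and API version depend on your Azure resource and deployment, so check them against the Azure portal and the official LibreChat docs):

DALLE3_API_KEY=<your Azure OpenAI key>
DALLE3_BASEURL=<endpoint of your dall-e-3 deployment>
DALLE3_AZURE_API_VERSION=<API version shown in the portal>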
2. Below are some other commonly used plugins. I'll just point to the official documentation links; get the API keys and fill them in.

# Google
#-----------------
GOOGLE_API_KEY=
GOOGLE_CSE_ID=
# https://docs.librechat.ai/features/plugins/google_search.html

# SerpAPI
#-----------------
SERPAPI_API_KEY=
# https://docs.librechat.ai/install/configuration/dotenv.html#serpapi

# Stable Diffusion
#-----------------
SD_WEBUI_URL=http://host.docker.internal:7860

# Tavily
#-----------------
TAVILY_API_KEY=
# https://app.tavily.com

# Traversaal
#-----------------
TRAVERSAAL_API_KEY=
# https://api.traversaal.ai

# WolframAlpha
#-----------------
WOLFRAM_APP_ID=
# https://docs.librechat.ai/features/plugins/wolfram.html

3. Configuring other AI models (optional)

Configuration for LibreChat's other AI endpoints lives in a second file, librechat.example.yaml; rename it to librechat.yaml to use it (see the command below).
Here are two official documentation links first, for those who want to dig deeper; I'll only cover the simplest, most direct setup.
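The rename:

cp librechat.example.yaml librechat.yaml   # or mv, if you don't want to keep the example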

Links

Configuration template

Template:
version: 1.0.5
cache: true
# fileStrategy: "firebase"  # If using Firebase CDN
endpoints:
  azureOpenAI:
    titleModel: "gpt-3.5-turbo"
    assistants: true
    plugins: true
    groups:
    - group: "assistants"
      apiKey: ""
      instanceName: ""
    # Mark this group as assistants compatible
      assistants: true
    # version must be "2024-02-15-preview" or later
      version: "2024-03-01-preview"
      models:
        gpt-3.5-turbo:
          deploymentName: "gpt-35-turbo-0125"
        gpt-3.5-turbo-16k: 
          deploymentName: "gpt-35-turbo-16k"
        gpt-4-1106-preview:
          deploymentName: "gpt-4-1106-preview"
        gpt-4-vision-preview:
          deploymentName: "gpt-4-vision-preview"
          version: "2024-03-01-preview"
  custom:
    - name: "Mistral"
      apiKey: "${MISTRAL_API_KEY}"
      baseURL: "https://api.mistral.ai/v1"
      models:
        default: ["mistral-tiny", "mistral-small", "mistral-medium", "mistral-large-latest"]
        # Attempt to dynamically fetch available models
        fetch: true  
        userIdQuery: false
      iconURL: "https://example.com/mistral-icon.png"
      titleConvo: true
      titleModel: "mistral-tiny"
      modelDisplayLabel: "Mistral AI"
      # addParams:
      # Mistral API specific value for moderating messages
      #   safe_prompt: true 
      dropParams:
        - "stop"
        - "user"
        - "presence_penalty"
        - "frequency_penalty"
      # headers:
      #    x-custom-header: "${CUSTOM_HEADER_VALUE}"
    - name: "OpenRouter"
      apiKey: "${OPENROUTER_API_KEY}"
      baseURL: "https://openrouter.ai/api/v1"
      models:
        default: ["gpt-3.5-turbo"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      modelDisplayLabel: "OpenRouter"
      dropParams:
        - "stop"
        - "frequency_penalty"
In the reference template, the Azure section can be deleted or commented out if you don't need it.
Azure
azureOpenAI:
    titleModel: "gpt-3.5-turbo"
    assistants: true
# whether to enable the Assistants API
    plugins: true
# whether plugins can be used
    groups:
    - group: "assistants"
# the group name can be anything you like
      apiKey: ""
# the key Microsoft gives you
      instanceName: ""
# the name of your Azure OpenAI resource (instance)
    # Mark this group as assistants compatible
      assistants: true
    # version must be "2024-02-15-preview" or later
      version: "2024-03-01-preview"
      models:
        gpt-3.5-turbo:
          deploymentName: "gpt-35-turbo-0125"
        gpt-3.5-turbo-16k: 
          deploymentName: "gpt-35-turbo-16k"
        gpt-4-1106-preview:
          deploymentName: "gpt-4-1106-preview"
        gpt-4-vision-preview:
          deploymentName: "gpt-4-vision-preview"
          version: "2024-03-01-preview"

Since many people deploy more than one Azure service, you can add another group in the same format, e.g. - group: "assistants2".

Other models
custom:
    - name: "Mistral"
      apiKey: "${MISTRAL_API_KEY}"
      baseURL: "https://api.mistral.ai/v1"
      models:
        default: ["mistral-tiny", "mistral-small", "mistral-medium", "mistral-large-latest"]
        # Attempt to dynamically fetch available models
        fetch: true  
        userIdQuery: false
      iconURL: "https://example.com/mistral-icon.png"
      titleConvo: true
      titleModel: "mistral-tiny"
      modelDisplayLabel: "Mistral AI"
      # addParams:
      # Mistral API specific value for moderating messages
      #   safe_prompt: true 
      dropParams:
        - "stop"
        - "user"
        - "presence_penalty"
        - "frequency_penalty"
      # headers:
      #    x-custom-header: "${CUSTOM_HEADER_VALUE}"
    - name: "OpenRouter"
      apiKey: "${OPENROUTER_API_KEY}"
      baseURL: "https://openrouter.ai/api/v1"
      models:
        default: ["gpt-3.5-turbo"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      modelDisplayLabel: "OpenRouter"
      dropParams:
        - "stop"
        - "frequency_penalty"

A quick explanation:
- name: anything you like
- apiKey: "${OPENROUTER_API_KEY}" is the API key. The part inside ${...} is a variable name; you can define your own variable in this format and set the actual key in .env (an example of the matching .env line follows a little further below), or simply paste the key directly here.
- baseURL: your relay API address, ending in /v1
- iconURL: URL of an icon image
- models: the models you can use; list whatever you have (Chinese names are supported), one entry per the format shown
- fetch: whether to fetch all available models from the endpoint; false is recommended

titleConvo: true
titleModel: "gpt-3.5-turbo"

These control title generation and the model used for it; the title model doesn't have to be one of the models listed under models:, as long as your relay API can serve it.
- modelDisplayLabel: anything; usually the same as name

dropParams:
        - "stop"
        - "user"
        - "presence_penalty"
        - "frequency_penalty"

These can generally be left exactly as-is.
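For example, if the yaml says apiKey: "${MY_RELAY_KEY}" (MY_RELAY_KEY being a name you invented), then .env needs a matching line (placeholder key shown):

MY_RELAY_KEY=sk-xxxxxxxx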

You can stack as many endpoint configurations as you need.

If you don't see your configured models after starting up, something is wrong with your librechat.yaml, usually the formatting (indentation); check it carefully.
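If models are missing, the api container's logs usually show the YAML parse error (assuming the compose service is named api, as in the stock docker-compose.yml):

docker compose logs -f api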

4. Start it up

Before starting, you need to edit the docker-compose.yml file:

  1. If you configured a librechat.yaml file, add a line to the volumes: section of the api container (see the fragment after this list):
    - ./librechat.yaml:/app/librechat.yaml
  2. If your server's CPU does not support AVX, change the mongo image to a version below 5.0, since 5.0 and later require AVX; the author recommends image: mongo:4.4.18.
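Put together, the changed parts of docker-compose.yml might look roughly like this (only the modified keys are shown; api and mongodb are the service names used by the stock compose file, adjust if yours differ):

services:
  api:
    volumes:
      # mount the custom config into the container
      - ./librechat.yaml:/app/librechat.yaml
  mongodb:
    # only needed on CPUs without AVX; mongo 5.0+ requires AVX
    image: mongo:4.4.18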

After that you can start it normally:
docker compose up -d

5. Other notes

LibreChat does not yet support vision models other than Google's and GPT-4V; wider support is expected in the next release.
I recommend pairing it with ONE-API to manage all your API channels, so you don't have to keep fiddling with LibreChat itself.
A Bing API can be obtained with the following project, which works very well:

Tutorials worth referencing:

Update (2024-03-25)

A few days ago the author quietly shipped some more changes: the file-reading feature now requires an external service for file upload and reading to work properly. The service is here:

Add the following configuration anywhere in .env:

RAG_API_URL=http://127.0.0.1:8000
# RAG service address (ip + port); just make it match what's in rag.yml
EMBEDDINGS_PROVIDER=azure
# (optional) choose "openai" or "azure"; defaults to "openai"
# (optional) EMBEDDINGS_MODEL=
# defaults:
#openai: "text-embedding-3-small"
#azure: "text-embedding-3-small"
#OPENAI_API_KEY=
#AZURE_OPENAI_API_KEY=
#AZURE_OPENAI_ENDPOINT=

librechat.yaml example

version: 1.0.5
cache: true

endpoints:
  azureOpenAI:
    titleModel: "gpt-3.5-turbo"
    assistants: true
    plugins: true
    groups:
    - group: ""
      apiKey: ""
      instanceName: ""
   # Mark this group as assistants compatible
      assistants: true
    # version must be "2024-02-15-preview" or later
      version: "2024-03-01-preview"
    #  baseURL: "https://passeerby-dalle.openai.azure.com"
      models:
        gpt-3.5-turbo:
          deploymentName: "gpt-35-turbo-0125"
        gpt-3.5-turbo-16k: 
          deploymentName: "gpt-35-turbo-16k"
        gpt-4-1106-preview:
          deploymentName: "gpt-4-1106-preview"
        gpt-4-vision-preview:
          deploymentName: "gpt-4-vision-preview"
          version: "2024-03-01-preview"
        
  custom:
    - name: "Claude"
      apiKey: "${LibreChat_API_KEY}"
      iconURL: "https://raw.gitmirror.com/Passerby1011/icon/6cb2a7a0474ec18ff74d4e8d358e08d449403b54/Claude-2.svg"
      baseURL: "http://192.168.31.25:30000/v1"
      models: 
        default: ["claude-instant-v1-100k","claude-2","claude-2.1","claude-3-haiku","claude-3-sonnet","claude-3-opus"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo" 
      summarize: false
      summaryModel: "gpt-3.5-turbo" 
      forcePrompt: false 
      modelDisplayLabel: "Claude"

    - name: "Bing"
      apiKey: "${LibreChat_API_KEY}"
      iconURL: "https://raw.gitmirror.com/Passerby1011/icon/6cb2a7a0474ec18ff74d4e8d358e08d449403b54/newbing.svg"
      baseURL: "http://192.168.31.25:30000/v1"
      models:
        default: ["Precise","Balanced","Creative","Precise-g4t","Balanced-g4t","Creative-g4t","Precise-g4t-offline","Balanced-g4t-offline","Creative-g4t-offline","Precise-18k","Balanced-18k","Creative-18k","Precise-g4t-18k","Balanced-g4t-18k","Creative-g4t-18k","Precise-18k-offline","Balanced-18k-offline","Creative-18k-offline","Precise-g4t-vision","Balanced-g4t-vision","Creative-g4t-vision"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      forcePrompt: false
      modelDisplayLabel: "Bing"
      dropParams:
        - "stop"
        - "user"
        - "presence_penalty"
        - "frequency_penalty"
        - "temperature"
        - "top_p"

    - name: "COZE"
      apiKey: "${COZE_API_KEY}"
      iconURL: "https://raw.gitmirror.com/Passerby1011/icon/6cb2a7a0474ec18ff74d4e8d358e08d449403b54/COZE.svg"
      baseURL: "http://192.168.31.25:30000/v1"
      models:
        default: ["COZE","gpt-4","gpt-3.5-turbo-16k"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-4"
      summarize: false
      summaryModel: "gpt-4"
      forcePrompt: false
      modelDisplayLabel: "COZE"        

    - name: "Metaso"
      apiKey: "${LibreChat_API_KEY}"
      iconURL: "https://raw.gitmirror.com/Passerby1011/icon/45e0140aaa56c5dcc00c7298be79687c953adf3c/metaso.svg"
      baseURL: "http://192.168.31.25:30000/v1"
      models:
        default: ["concise","detail","research"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      forcePrompt: false
      modelDisplayLabel: "Metaso"
      dropParams:
        - "temperature"
        - "user"
        - "stop"
        - "presence_penalty"
        - "frequency_penalty"

    - name: "ZhipuAI"
      apiKey: "${LibreChat_API_KEY}"
      iconURL: "https://raw.gitmirror.com/Passerby1011/icon/main/glm.png"
      baseURL: "http://192.168.31.25:30000/v1"
      models:
        default: ["glm4","glm-search","glm-drawing","glm-vision"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      forcePrompt: false
      modelDisplayLabel: "ZhipuAI"
      dropParams:
        - "temperature"
        - "user"
        - "stop"
        - "presence_penalty"
        - "frequency_penalty"

    - name: "Moonshot Al"
      apiKey: "${LibreChat_API_KEY}"
      iconURL: "https://raw.gitmirror.com/Passerby1011/icon/c841d91b4f9337c984a1f44205d8482d9fb9acc2/Moonshot-Al.svg"
      baseURL: "http://192.168.31.25:30000/v1"
      models:
        default: ["kimi","moonshot-v1-8k","moonshot-v1-32k","moonshot-v1-128k"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      forcePrompt: false
      modelDisplayLabel: "Moonshot Al"
      dropParams:
        - "temperature"
        - "user"
        - "stop"
        - "presence_penalty"
        - "frequency_penalty"

    - name: "通义千问"
      apiKey: "${ali_API_KEY}"
      iconURL: "https://raw.gitmirror.com/Passerby1011/icon/6cb2a7a0474ec18ff74d4e8d358e08d449403b54/tongyi.svg"
      baseURL: "http://192.168.31.25:30000/v1"
#      headers: 
#        Content-Type: "application/json"
#       Accept: "text/event-stream"
      models:
        default: ["qwen","qwen-turbo","qwen-plus","qwen-max","qwen-max-1201","qwen-max-longcontext"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      forcePrompt: false
      modelDisplayLabel: "通义千问"
      dropParams:
        - "stop"
        - "user"
        - "presence_penalty"
        - "frequency_penalty"

    - name: "知识库"
      apiKey: "${LibreChat_API_KEY}"
      iconURL: "https://raw.gitmirror.com/labring/FastGPT/main/.github/imgs/logo.svg"
      baseURL: "http://192.168.31.25:30000/v1"
      models:
        default: ["LibreChat知识库","FastGPT知识库","RSS hub知识库"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      forcePrompt: false
      modelDisplayLabel: "FastGPT"
      dropParams:
        - "stop"
        - "user"
        - "presence_penalty"
        - "frequency_penalty"
        - "temperature"
        - "top_p"

#    - name: "suno"
#      apiKey: "${LibreChat_API_KEY}"
#      baseURL: "http://192.168.31.25:30000/v1"
#      iconURL: "https://raw.gitmirror.com/Passerby1011/icon/main/suno.png"
#      models:
#        default: ["suno-v3"]
#        fetch: false
#      titleConvo: true
#      titleModel: "gpt-3.5-turbo"
#      summarize: false
#      summaryModel: "gpt-3.5-turbo"
#      forcePrompt: false
#      modelDisplayLabel: "Suno"
#      stream: false
#      dropParams:
#        - "stop"
#        - "user"
#        - "presence_penalty"
#        - "frequency_penalty"

    - name: "Mistral"
      apiKey: "${LibreChat_API_KEY}"
      baseURL: "http://192.168.31.25:30000/v1"
      models:
        default: ["mistral-tiny","mistral-small","mistral-medium","mistral-large","mixtral-8x7b-instruct"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      forcePrompt: false
      modelDisplayLabel: "Mistral"
      dropParams:
        - "stop"
        - "user"
        - "presence_penalty"
        - "frequency_penalty"
    
    - name: "OpenRouter"
      apiKey: "${OpenRouter_API_KEY}"
      baseURL: "https://openrouter.ai/api/v1"
      models:
        default: ["mistralai/mixtral-8x7b-instruct"]
        fetch: true
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      forcePrompt: false
      modelDisplayLabel: "OpenRouter"
      dropParams:
        - "stop"
        - "frequency_penalty"

    - name: "Meta"
      apiKey: "${LibreChat_API_KEY}"
      iconURL: "https://raw.gitmirror.com/Passerby1011/icon/6cb2a7a0474ec18ff74d4e8d358e08d449403b54/Llama2.svg"
      baseURL: "http://192.168.31.25:30000/v1"
      models:
        default: ["llama2-7b","llama2-13b","llama2-70b"]
        fetch: false
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
      summarize: false
      summaryModel: "gpt-3.5-turbo"
      forcePrompt: false
      modelDisplayLabel: "Meta"
      dropParams:
        - "stop"
        - "user"
        - "presence_penalty"
        - "frequency_penalty"
LibreChat's official documentation really is quite a mess.

The more models it supports, the more there is to configure.

Well written, thanks for the effort.

Thanks, this is indeed much clearer; I'm keeping it for reference.

If we're talking about messy documentation, jumpserver still takes the crown.

I couldn't get it working by following the docs, so I asked on their forum. First they said I'd set it up wrong and to post my config.

After I posted the config, they said: oh, right, it doesn't work, how about moving the service to another machine? (It was fighting over port 443.)

Thanks for sharing, very well written.

Every time a project goes through a big update, the technical docs turn into one mess stacked on top of another.

Nice.

Great work, much appreciated!

There's no Windows client; I'd like a desktop app, and NextChat supports too few models.

It can be deployed on Windows, but people usually deploy it on a server so that other devices can use it too.

It's genuinely complicated; I've wanted to set it up several times and backed off every time.

It's really just filling in a bunch of API settings.

Yeah, there's so much to fill in, especially with Azure. Headache, haha.

The official docs gave me a headache, which is how I found this post :smile:

Well written, thanks for the effort.

A very thoughtful, very detailed deployment guide. Many thanks.