Turning the shared station's web endpoints into an API with Node.js + Express + TypeScript

First of all, this project only works with the accounts on https://chat-shared.zhile.io/shared.html?v=2;
it simply makes local calls more convenient. Once 始皇's service shuts down, this little local project ends with it.
It is only meant as a starting point, a very simple local project. After all, OpenAI's official endpoints sit behind Cloudflare, so they are not that easy to wrap.
A large part of this project came from asking ChatGPT, but it is all knowledge I already understand; I just didn't feel like working out those code snippets myself.

Create the project node-chatgpt-api

Initialize the project

npm init -y

Add the Express framework

If the download fails, consider using the Chinese registry mirror https://registry.npm.taobao.org

npm install express

Add TypeScript

npm install typescript ts-node

Initialize TypeScript

npx tsc --init

Edit tsconfig.json and make sure the following options are set:

{
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": "./src"
  }
}

These options keep the TypeScript sources in the src directory and write the compiled JavaScript to the dist directory.
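
Besides outDir and rootDir, two other compiler options matter for the code in this post (a sketch, assuming a recent TypeScript): esModuleInterop must stay enabled for default imports such as import express from 'express', and target/lib should be ES2020 or newer so that String.prototype.matchAll, used later in the scraper, type-checks. A minimal tsconfig.json could look like:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "esModuleInterop": true,
    "strict": true,
    "outDir": "./dist",
    "rootDir": "./src"
  }
}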

Install the Node.js type definitions

npm install @types/node --save-dev

Install the Express type definitions

npm install @types/express --save-dev

Create the entry file app.ts under src

import express from 'express';
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Hello World!');
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});

That completes the basic environment setup.

Framework groundwork

Cross-origin requests (CORS)

In an Express app you can handle cross-origin requests with middleware. Here is a simple example using the cors module:

npm install cors
npm i --save-dev @types/cors

Then use the following code in your Express app:

import express from 'express';
import cors from 'cors';

const app = express();
const port = 3000;

// use the cors middleware to handle cross-origin requests
app.use(cors());

// define routes
app.get('/', (req, res) => {
  res.send('Hello World!');
});

// start the server
app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});

Routing

First, create a folder named router, then create a file named testRoutes.ts inside it:

// router/testRoutes.ts

import express, { Request, Response } from 'express';

const router = express.Router();

router.get('/', (req: Request, res: Response) => {
  res.send('Test route is working!');
});

router.get('/info', (req: Request, res: Response) => {
  res.json({ message: 'Test route information' });
});

export default router;

Then, in the main application file (app.ts), import and use this route module:

import express from 'express';
import cors from 'cors';
import testRoutes from './router/testRoutes';

const app = express();
const port = 3000;

// use the cors middleware to handle cross-origin requests
app.use(cors());

app.use('/test', testRoutes)

app.get('/', (req, res) => {
    res.send('Hello World!');
});

app.listen(port, () => {
    console.log(`Server listening at http://localhost:${port}`);
});

The test routes now live in their own module inside the router folder and are mounted in the main app with app.use('/test', testRoutes);. Make sure your file structure looks like this:

src/
	- app.ts
	- router/
  		- testRoutes.ts

Add a start script to package.json

{
  "name": "chatgpt-api",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "npx ts-node ./src/app.ts"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "cors": "^2.8.5",
    "express": "^4.18.2",
    "ts-node": "^10.9.2",
    "typescript": "^5.3.3"
  },
  "devDependencies": {
    "@types/cors": "^2.8.17",
    "@types/express": "^4.17.21",
    "@types/node": "^20.11.6"
  }
}

Start it

E:\service_env\nodejs_env\nodejs\npm.cmd run start

> chatgpt-api@1.0.0 start
> npx ts-node ./src/app.ts

Server listening at http://localhost:3000

Then open http://localhost:3000/test and you will see the response from our test route.

Next, we notice that we haven't installed a hot-reload tool yet

npm install nodemon --save-dev

Update the start script in package.json

{
  "name": "chatgpt-api",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "nodemon --exec ts-node src/app.ts"
  }
}

Getting the accounts

Next, let's grab the accounts from the https://chat-shared.zhile.io/shared.html?v=2 page.

First, open 始皇's shared station and inspect the page's HTML source.

view-source:https://chat-shared.zhile.io/shared.html?v=2

The markup shows that each account's token is in the data-token attribute of an li tag, so we can write a simple scraper for it.

 <li><a href="javascript:" data-token="0d4311dd60d71c640ea9292c134a6e44" class="disabled">361</a></li>

To scrape it, first install the HTTP client library axios

npm install axios

From the markup we derive the regex /<li><a[^>]*data-token="([^"]*)"[^>]*>.*?<\/a><\/li>/
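
A quick sanity check of that regex against the sample li above (a minimal sketch):

const sample = '<li><a href="javascript:" data-token="0d4311dd60d71c640ea9292c134a6e44" class="disabled">361</a></li>';
const regex = /<li><a[^>]*data-token="([^"]*)"[^>]*>.*?<\/a><\/li>/g;

for (const match of sample.matchAll(regex)) {
    console.log(match[1]); // 0d4311dd60d71c640ea9292c134a6e44
}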

Next, a first attempt at a simple scraper:

import axios from 'axios';

const response = await axios.get('https://chat-shared.zhile.io/shared.html', {
  params: {
    'v': '2'
  },
  headers: {
    'authority': 'chat-shared.zhile.io',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
    'accept-language': 'zh-CN,zh;q=0.9',
    'cache-control': 'max-age=0',
    'if-modified-since': 'Mon, 22 Jan 2024 06:37:47 GMT',
    'sec-ch-ua': '"Not_A Brand";v="8", "Chromium";v="120", "Google Chrome";v="120"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'sec-fetch-dest': 'empty',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'same-origin',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
  }
});

After a lot of fiddling, the requests kept timing out. Even with my local proxy turned off they still timed out, which means the shared station is blocked in my region. The only option is to give Node.js a SOCKS proxy.

npm install socks-proxy-agent uuid dotenv

  • socks-proxy-agent: SOCKS proxy support
  • uuid: generates UUIDs
  • dotenv: loads environment variables from .env

Create a .env configuration file

PROXY_OPENAI_URL=https://chat-shared.zhile.io

OPENAI_API_KEY=

PROXY_URL=socks://127.0.0.1:10808

Create utils/api.ts to wrap axios

A minimal wrapper to start with:

import axios from "axios";
import {SocksProxyAgent} from "socks-proxy-agent";

const api = axios.create({
    baseURL: process.env.PROXY_OPENAI_URL,
    headers: {
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
    },
    httpsAgent: new SocksProxyAgent(process.env.PROXY_URL!)
})


export default api
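
As a quick sanity check (a sketch; it assumes the SOCKS proxy at PROXY_URL is running and that .env has been loaded, for example via import 'dotenv/config'), we can fetch the shared page through the wrapper:

import 'dotenv/config';          // load .env before api.ts reads process.env
import api from './utils/api';   // the axios wrapper above

// fetch the shared page through the SOCKS proxy and print the status code
api.get('/shared.html', { params: { v: '2' } })
    .then(res => console.log('status:', res.status))
    .catch(err => console.error('proxy request failed:', err.message));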

Back in our src/router/apiRouters.ts file, let's complete the simple scraper:

// router/apiRouters.ts
import api from "./../utils/api"

import express, {Request, Response} from 'express';

const router = express.Router();

router.get('/', (req: Request, res: Response) => {
    res.send('Test route is working!');
});

router.get('/tokens', async (req: Request, res: Response) => {
    const response = await api.get('/shared.html', {
        params: {
            'v': '2'
        },
    });

    const html: string = response.data.toString();
    const regex = /<li><a[^>]*data-token="([^"]*)"[^>]*>.*?<\/a><\/li>/g;
    
    const tokens: string[] = []
    for (const match of html.matchAll(regex)) {
        const dataTokenValue = match[1];
        tokens.push(dataTokenValue)
    }
    res.json({
        code: 200,
        message: 'request success!',
        data: {
            count: tokens.length,
            data: tokens
        },
    })
});

export default router;

Now we can fetch the tokens. Next, let's work out the steps for logging an account in.

A quick capture shows that the shared station's login needs two parameters, token_key and session_password,

and that the session is carried by the credential value in the cookies.

So next we'll build a login endpoint that takes token_key and session_password.

First, let our Express project parse POST request bodies,

and wire everything up in the app.ts entry file:

import express from 'express';
import 'dotenv/config'
import cors from 'cors';
import testRoutes from './router/testRoutes';
import chatRouters from './router/chatRouters';
import apiRouters from './router/apiRouters';

const app = express();
const port = 3000;


// use the cors middleware to handle cross-origin requests
app.use(cors());

// Express's built-in form (urlencoded) body parser
app.use(express.urlencoded({ extended: true }));

// Express's built-in JSON body parser
app.use(express.json());

app.use('/test', testRoutes)

app.use('/v1', chatRouters)

app.use("/api", apiRouters)



app.get('/', (req, res) => {
    res.send('Hello World!');
});

app.listen(port, () => {
    console.log(`Server listening at http://localhost:${port}`);
});

Install a basic parameter-validation library

npm install joi

and write a simple login endpoint:

// router/apiRouters.ts
import api from "./../utils/api"

import express, {Request, Response} from 'express';
import Joi from "joi";

const router = express.Router();

router.get('/', (req: Request, res: Response) => {
    res.send('Test route is working!');
});


router.post('/login', async (req: Request, res: Response) => {
    const schema = Joi.object({
        token_key: Joi.string().required(),
        session_password: Joi.string().required(),
    });

    const {error, value} = schema.validate(req.body);

    if (error) {
        return res.status(400).json({error: error.details[0].message});
    }

    const response = await api.post('/auth/login',
        new URLSearchParams({
            'token_key': value.token_key,
            'session_password': value.session_password
        }), {
            headers: {
                'sec-ch-ua': '"Not_A Brand";v="8", "Chromium";v="120", "Google Chrome";v="120"',
                'sec-ch-ua-platform': '"Windows"',
                'Referer': 'https://chat-shared3.zhile.io/shared.html?v=2',
                'sec-ch-ua-mobile': '?0',
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
                'Content-Type': 'application/x-www-form-urlencoded',
            },
            withCredentials: true, // send cookies along
        }
    )


    const cookies = response.headers['set-cookie'];

    if (cookies == undefined) {
        return res.status(400).json({
            code: 0,
            message: "Found Request",
            data: ""
        });
    }
    if (cookies.length > 0) {
        const regex = /credential=([^;]+)/;
        // match against the regex
        const match = cookies[0].match(regex);

        // extract the matched value
        if (match && match[1]) {
            const credentialValue = match[1];
            return res.status(200).json({
                code: 200,
                message: "request success!",
                data: {access_token: credentialValue}
            });
        } else {
            return res.status(400).json({
                code: 0,
                message: "Found Request",
                data: ""
            });
        }
    }
    return res.status(400).json({
        code: 0,
        message: "Found Request",
        data: ""
    });
})

router.get('/tokens', async (req: Request, res: Response) => {


    const response = await api.get('/shared.html', {
        params: {
            'v': '2'
        },
    });

    const html: string = response.data.toString();
    const regex = /<li><a[^>]*data-token="([^"]*)"[^>]*>.*?<\/a><\/li>/g;

    const tokens: string[] = []
    for (const match of html.matchAll(regex)) {
        const dataTokenValue = match[1];
        tokens.push(dataTokenValue)
    }
    res.json({
        code: 200,
        message: 'request success!',
        data: {
            count: tokens.length,
            data: tokens
        },
    })
});

export default router;

Good, that's a simple login endpoint done: we take the credential from 始皇's service and use it as our access_token.
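
For a quick local test (a sketch; it assumes the server is running on port 3000 and apiRouters is mounted at /api as shown in app.ts above), the login route can be called like this:

import axios from 'axios';

// token_key comes from /api/tokens; session_password is the shared station's
// session password (both values below are placeholders)
axios.post('http://localhost:3000/api/login', {
    token_key: '0d4311dd60d71c640ea9292c134a6e44',
    session_password: 'your-session-password'
}).then(res => console.log('access_token:', res.data.data.access_token));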

Next up is the chat endpoint:

{
    "action": "next",
    "messages": [
        {
            "id": "aaa2b461-eb34-4f69-84ce-6493a55d2f15",
            "author": {
                "role": "user"
            },
            "content": {
                "content_type": "text",
                "parts": [
                    "你好"
                ]
            },
            "metadata": {}
        }
    ],
    "parent_message_id": "aaa179e0-c3cc-405f-b123-78b369fee10b",
    "model": "text-davinci-002-render-sha",
    "timezone_offset_min": -480,
    "suggestions": [
        "Tell me a random fun fact about the Golden State Warriors",
        "Show me a code snippet of a website's sticky header in CSS and JavaScript.",
        "Make a content strategy for a newsletter featuring free local weekend events.",
        "Explain superconductors like I'm five years old."
    ],
    "history_and_training_disabled": false,
    "arkose_token": null,
    "conversation_mode": {
        "kind": "primary_assistant"
    },
    "force_paragen": false,
    "force_rate_limit": false
}

From the captured parameters we can work out what we need to build.

Paste the captured request body into a JSON-to-TypeScript converter such as https://tooltt.com/json2typescript/ to get all of the interfaces, then adjust the fields:

src/types/chatgpt/request.ts

// @ts-ignore
import {v4} from "uuid";



export interface Author {
    role: string;
}

export interface Content {
    content_type: string;
    parts: string[];
}

export interface Metadata {
}

export interface Messages {
    id: string;
    author: Author;
    content: Content;
    metadata?: Metadata;
}


export interface IChatGPTRequest {
    action: string;
    messages?: Messages[];
    parent_message_id: string;
    model: string;
    history_and_training_disabled?: boolean;
    arkose_token: string;
}


export class ChatGPTRequest implements IChatGPTRequest {
    action: string;
    arkose_token: string;
    parent_message_id: string;
    messages: Messages[]
    model: string
    history_and_training_disabled: boolean

    constructor({action, arkose_token, parent_message_id, model}: IChatGPTRequest) {
        this.action = action
        this.arkose_token = arkose_token
        this.parent_message_id = parent_message_id
        this.model = model
        this.messages = []
        // Boolean("false") would be true, so compare against the string explicitly
        this.history_and_training_disabled = process.env.history_and_training_disabled === 'true'
    }

    AddMessage(role: string, content: string): void {
        this.messages.push({
            id: v4(),
            author: {role},
            content: {content_type: 'text', parts: [content]}
        })
    }
}

export const NewChatGPTRequest = (): ChatGPTRequest => {
    return new ChatGPTRequest({
        action: "next",
        arkose_token: "",
        parent_message_id: v4(),
        model: "",
    })
}

Good, our request payload type is done.

Next, let's model our front-end request parameters on the OpenAI API's input:

{
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "user",
            "content": "你好"
        }
    ],
    "stream": true
}

Define our input structure in src/types/official/request.ts

export interface APIRequest {
    messages: api_message[]
    stream: boolean
    model: string
}


export interface api_message {
    role: string
    content: string
}

Next, let's continue with our chat endpoint:

src/router/chatRouters.ts

router.post("/conversation", async (req: Request, res: Response) => {
    // 设置好校验规则
    const schema = Joi.object<APIRequest>({
        messages: Joi.array<api_message>().required(),
        stream: Joi.boolean().default(false),
        model: Joi.string().required(),
    });

    const {error, value} = schema.validate(req.body);

    if (error) {
        return res.status(400).json({error: error.details[0].message});
    }
})

Good, that takes care of the incoming front-end parameters.

Now let's assemble the payload we send to the shared station's chat endpoint.

In src/chatgpt/convert.ts, create a function that assembles the payload:

import {APIRequest} from "../types/official/request";
import {NewChatGPTRequest} from "../types/chatgpt/request";


export const ConvertAPIRequest = (api_request: APIRequest) => {
    const chatgpt_request = NewChatGPTRequest()
    if (api_request.model.includes("gpt-3.5")) {
        chatgpt_request.model = "text-davinci-002-render-sha"
    }
    if (api_request.model.includes("gpt-4")) {
        chatgpt_request.model = api_request.model
        // gpt-4 cannot be used without an arkose_token
        chatgpt_request.arkose_token = "";
    }

    api_request.messages.forEach(api_message => {
        if (api_message.role == "system") {
            api_message.role = "critic"
        }
        chatgpt_request.AddMessage(api_message.role, api_message.content)
    });
    return chatgpt_request
}
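
A quick check of the conversion (a sketch; it assumes the file sits under src/ and the sample messages are made up):

import {ConvertAPIRequest} from './chatgpt/convert';

// a gpt-3.5 request maps to the shared station's text-davinci-002-render-sha model
const chatgpt_request = ConvertAPIRequest({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: [{role: 'user', content: '你好'}]
});
console.log(chatgpt_request.model);            // text-davinci-002-render-sha
console.log(chatgpt_request.messages.length);  // 1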

Good. Next we actually send the request with axios; let's wrap the request method:

import {Stream} from "stream";
import requests from "../utils/api"
import {IChatGPTRequest} from "../types/chatgpt/request";

export const POSTConversation = async (chatgpt_request: IChatGPTRequest,token:string) => {

    const headers =  {
        'authority': 'chat-shared.zhile.io',
        'accept': 'text/event-stream',
        'accept-language': 'en-US',
        'authorization': `Bearer ${token}`,
        'content-type': 'application/json',
        'cookie': `credential=${token};`,
        'origin': 'https://chat-shared.zhile.io',
        'sec-ch-ua': '"Not_A Brand";v="8", "Chromium";v="120", "Google Chrome";v="120"',
        'sec-ch-ua-mobile': '?0',
        'sec-ch-ua-platform': '"Windows"',
        'sec-fetch-dest': 'empty',
        'sec-fetch-mode': 'cors',
        'sec-fetch-site': 'same-origin',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
    }
    return requests.post<Stream>("api/conversation", chatgpt_request, {headers, responseType: 'stream'})
}

Good. Now let's go and flesh out our chat endpoint:

src/router/chatRouters.ts

router.post("/chat/completions", async (req: Request, res: Response) => {
    const schema = Joi.object<APIRequest>({
        messages: Joi.array<api_message>().required(),
        stream: Joi.boolean().default(false),
        model: Joi.string().required(),
    });

    const {error, value} = schema.validate(req.body);

    if (error) {
        return res.status(400).json({error: error.details[0].message});
    }

    const convert = ConvertAPIRequest(value)

    // the Bearer-token handling is added in the final version further below
    const {data} = await POSTConversation(convert, process.env.OPENAI_API_KEY!)
})

In theory we can now get hold of a response stream for the request. Next comes processing the data in that stream.

First, recall that when we use the API, the stream flag determines which response data structure comes back.

The non-streaming response structure from the official docs:

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "The 2020 World Series was played in Texas at Globe Life Field in Arlington.",
        "role": "assistant"
      },
      "logprobs": null
    }
  ],
  "created": 1677664795,
  "id": "chatcmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW",
  "model": "gpt-3.5-turbo-0613",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 17,
    "prompt_tokens": 57,
    "total_tokens": 74
  }
}

And the streaming response:

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}

Now let's model these two data structures.

First, the non-streaming response structure:

Paste the non-streaming response into https://tooltt.com/json2typescript/ and you get a reasonably complete set of TypeScript interfaces

src/types/official/response.ts

export interface Message {
	role: string;
	content: string;
}

export interface Usage {
	prompt_tokens: number;
	completion_tokens: number;
	total_tokens: number;
}

export interface Choice {
	index: number;
	message: Message;
	logprobs?: any;
	finish_reason?: any;
}

export interface ChatCompletion {
	id: string;
	object: string;
	created: number;
	model: string;
	system_fingerprint?: string;
	usage: Usage;
	choices: Choice[];
}

export const NewChatCompletion = (full_test: string, model: string, finish_reason: string): ChatCompletion => {
    return {
        id: "chatcmpl-wXhoi2FBbmROaXhpZUFyZUF3ZXNvbWUK",
        object: "chat.completion",
        created: Math.floor(new Date().getTime() / 1000),
        model: model,
        usage: {
            prompt_tokens: 0,
            completion_tokens: 0,
            total_tokens: 0
        },
        choices: [{
            message: {
                role: "assistant",
                content: full_test
            },
            index: 0,
            finish_reason: finish_reason
        }]
    }
}

Then we model the streaming response structure:

src/types/official/response.ts

export interface Delta {
    role?: string;
    content?: string;
}



export interface ChunkChoice {
    index: number;
    delta: Delta;
    logprobs?: any;
    finish_reason?: any;
}


export interface ChatCompletionChunk {
    id: string;
    object: string;
    created: number;
    model: string;
    system_fingerprint?: string;
    choices: ChunkChoice[];
}

export const NewChatCompletionChunk = (text: string, model: string) => {
    return <ChatCompletionChunk>{
        id: "chatcmpl-QXlha2FBbmROaXhpZUFyZUF3ZXNvbWUK",
        object: "chat.completion.chunk",
        created: Math.floor(new Date().getTime() / 1000),
        model: model,
        choices: [{
            index: 0,
            delta: {
                content: text
            },
            finish_reason: null
        }]
    }
}

export const StopChunk = (text: string, model: string) => {
    return <ChatCompletionChunk>{
        id: "chatcmpl-QXlha2FBbmROaXhpZUFyZUF3ZXNvbWUK",
        object: "chat.completion.chunk",
        created: Math.floor(new Date().getTime() / 1000),
        model: model,
        choices: [{
            index: 0,
            delta: {},
            finish_reason: text
        }]
    }
}
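
Each chunk ultimately goes over the wire as one SSE event line, which is exactly the format the transformer below will emit (a small sketch; it assumes the file sits under src/):

import {NewChatCompletionChunk} from './types/official/response';

// one SSE event: "data: {json}\n\n"
const line = `data: ${JSON.stringify(NewChatCompletionChunk('你好', 'gpt-3.5-turbo'))}\n\n`;
console.log(line);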

Good. At this point the OpenAI API response structures are done.

Next, let's go back to the chat endpoint:

src/router/chatRouters.ts

router.post("/chat/conversation", async (req: Request, res: Response) => {
    const schema = Joi.object<APIRequest>({
        messages: Joi.array<api_message>().required(),
        stream: Joi.boolean().default(false),
        model: Joi.string().required(),
    });

    const {error, value} = schema.validate(req.body);

    if (error) {
        return res.status(400).json({error: error.details[0].message});
    }

    const convert = ConvertAPIRequest(value)

    const {data} = await POSTConversation(convert, process.env.OPENAI_API_KEY!)

    data.on("data", chunk => console.log(chunk.toString()))
})

Output

data: {"message": {"id": "c2f8c3ef-4291-445f-81f0-928ee4c35b61", "author": {"role": "system", "name": null, "metadata": {}}, "create_time": null, "update_time": null, "content": {"content_type": "text", "parts": [""]}, "status": "finished_successfully", "end_turn": true, "weight": 0.0, "metadata": {}, "recipient": "all"}, "conversation_id": "763f5970-e591-445c-a060-f0e428b923c6", "error": null}

data: {"message": {"id": "aaa29fba-4ba2-4bed-87e5-3fa397a759f9", "author": {"role": "user", "name": null, "metadata": {}}, "create_time": 1706004712.040238, "update_time": null, "content": {"content_type": "text", "parts": ["\u4f60\u597d\uff0c\u6211\u662f\u5c0f\u660e\uff0c\u4f60\u53eb\u4ec0\u4e48\u540d\u5b57\uff1f"]}, "status": "finished_successfully", "end_turn": null, "weight": 1.0, "metadata": {"timestamp_": "absolute", "message_type": null}, "recipient": "all"}, "conversation_id": "763f5970-e591-445c-a060-f0e428b923c6", "error": null}


data: {"message": {"id": "f43e03ee-c98d-450b-a436-ca92f75bee23", "author": {"role": "assistant", "name": null, "metadata": {}}, "create_time": 1706004712.085695, "update_time": null, "content": {"content_type": "text", "parts": ["\u4f60\u597d\uff0c\u5c0f\u660e\uff01\u6211\u662fChatGPT\uff0c\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u52a9\u4f60\u7684\u5417\uff1f"]}, "status": "finished_successfully", "end_turn": true, "weight": 1.0, "metadata": {"finish_details": {"type": "stop", "stop_tokens": [100260]}, "gizmo_id": null, "is_complete": true, "message_type": "next", "model_slug": "text-davinci-002-render-sha", "parent_id": "aaa29fba-4ba2-4bed-87e5-3fa397a759f9"}, "recipient": "all"}, "conversation_id": "763f5970-e591-445c-a060-f0e428b923c6", "error": null}

data: {"conversation_id": "763f5970-e591-445c-a060-f0e428b923c6", "message_id": "f43e03ee-c98d-450b-a436-ca92f75bee23", "is_completion": true, "moderation_response": {"flagged": false, "blocked": false, "moderation_id": "modr-8k82zrX1JvVdoe4HaPM4fuDkI89pa"}}

data: [DONE]

Now we need to process ChatGPT's response data.

First, let's model the data structures.

From the output above we can see two different structures:

data: {"message": {"id": "c2f8c3ef-4291-445f-81f0-928ee4c35b61", "author": {"role": "system", "name": null, "metadata": {}}, "create_time": null, "update_time": null, "content": {"content_type": "text", "parts": [""]}, "status": "finished_successfully", "end_turn": true, "weight": 0.0, "metadata": {}, "recipient": "all"}, "conversation_id": "763f5970-e591-445c-a060-f0e428b923c6", "error": null}


data: {"conversation_id": "763f5970-e591-445c-a060-f0e428b923c6", "message_id": "f43e03ee-c98d-450b-a436-ca92f75bee23", "is_completion": true, "moderation_response": {"flagged": false, "blocked": false, "moderation_id": "modr-8k82zrX1JvVdoe4HaPM4fuDkI89pa"}}

Now let's turn these structures into TypeScript interfaces and adjust them as needed:

export interface ChatGPTResponse {
    message: Message
    conversation_id: string
    error: string | null
}

export interface ChatGPTResponseConversation {
    conversation_id: string
    message_id: string
    is_completion: boolean
    moderation_response: ModerationResponse
}


interface ModerationResponse {
    flagged: boolean
    blocked: boolean
    moderation_id: string
}

type chat_time = number | null
type end_status = boolean | null
// in the data above, finished_successfully marks the end of the generated text
type chat_status = "in_progress" | "finished_successfully"

interface Message {
    id: string
    author: Author
    create_time: chat_time
    update_time: chat_time
    content: Content
    status: chat_status
    end_turn: end_status
    metadata: MetaDate
    recipient: string
}

interface MetaDate {
    inline_gizmo_id: null
    message_type: "next" | "stop"
    finish_details?: FinishDetails
    model_slug: string
    parent_id: string
}

interface FinishDetails {
    type: string
    stop_tokens: number[]
}

interface Content {
    content_type: string
    parts: string[]
}

interface Author {
    role: string
    name: any
    metadata: {}
}

What do we need to solve at this point? Reading the streamed data.

We'll use Node.js's Stream.Transform class to clean up and reshape the response stream.

Since we modeled two response structures, we have to decide, based on the stream flag in the request, which transformer to use, so let's create a dispatching Handler method:

src/chatgpt/handler.ts

import {Stream} from "stream";

export const Handler = (response: Stream, stream: boolean, model: string) => {
    return stream ? StreamHandler(response, model) : ChatCompletionHandler(response, model)
}

Next, let's fill in these two handlers

src/chatgpt/handler.ts

export const StreamHandler = (data: Stream, model: string) => {
    const dataTransformer = new SSETransformer(model);
    data.on("data", chunk => dataTransformer.write(chunk))
    // write a final [DONE] as a safety net, then close the transform so the piped response ends
    data.on("end", () => dataTransformer.end("data: [DONE]"))
    return dataTransformer
}

SSETransformer is a class that extends Stream.Transform, written to make processing the data stream easier:

src/chatgpt/handler.ts

export class SSETransformer extends Transform {
    private _len: number = 0

    constructor(private model: string) {
        super({objectMode: true}); // run the stream in object mode
    }

    _ConvertToString(str: string): string {
        const chatChunk = NewChatCompletionChunk(str, this.model)
        return `data: ${JSON.stringify(chatChunk)}\n\n`;
    }

    _StopConvertTOString(): string {
        const stopChunk = StopChunk("stop", this.model)
        return `data: ${JSON.stringify(stopChunk)}\n\n`;
    }

    _CheckError(line: string) {
        // guard against a chunk that is incomplete JSON but still contains an error
        const match = line.match(/"error":\s*"([^"]+)"/);
        if (match && match[1]) {
            const errorText = match[1];
            this.push(this._ConvertToString(errorText));
        }
    }

    _transform(chunk: string, encoding: any, callback: () => void) {
        // repackage the data here, then push it downstream
        const decoded_line: any = chunk.toString().slice(6)
        // console.log(decoded_line)
        if (decoded_line == '[DONE]') {
            this.push(this._StopConvertTOString())
            this.push("data: [DONE]\n\n");
            callback();
            return;
        }
        let resp: ChatGPTResponseConversation | ChatGPTResponse
        try {
            resp = JSON.parse(decoded_line)
        } catch (e) {
            this._CheckError(decoded_line)
            callback();
            return
        }
        if (typeof resp === 'object' && "message_id" in resp) {
            callback();
            return
        }
        if (resp.error != null) {
            this.push(this._ConvertToString(resp.error));
            callback();
            return;
        }

        if (
            resp.message.author.role !== "assistant" ||
            resp.message.content.content_type !== "text" ||
            resp.message.status !== "in_progress"
        ) {
            callback();
            return;
        }

        this.push(this._ConvertToString(resp.message.content.parts[0].slice(this._len)));
        this._len = resp.message.content.parts[0].length
        callback();
    }
}

src/chatgpt/handler.ts

export const ChatCompletionHandler = (data: Stream, model: string) => {
    const dataTransformer = new ChatCompletionTransformer(model);
    data.on("data", chunk => dataTransformer.write(chunk))
    // close the transform once the upstream stream ends
    data.on("end", () => dataTransformer.end())
    return dataTransformer
}

ChatCompletionTransformer is likewise a class extending Stream.Transform, written to make processing the data stream easier:

src/chatgpt/handler.ts

export class ChatCompletionTransformer extends Transform {
    private _count: number = 0

    constructor(private model: string) {
        super({objectMode: true}); // run the stream in object mode
    }

    _CheckError(line: string) {
        // guard against a chunk that is incomplete JSON but still contains an error
        const match = line.match(/"error":\s*"([^"]+)"/);
        if (match && match[1]) {
            const errorText = match[1];
            this.push(NewChatCompletion(errorText, this.model, "stop") as ChatCompletion);
        }
    }

    _transform(chunk: string, encoding: any, callback: () => void) {
        // repackage the data here, then push it downstream
        const decoded_line: any = chunk.toString().slice(6)
        // console.log(decoded_line)
        if (decoded_line == '[DONE]') {
            callback();
            return;
        }
        let resp: ChatGPTResponseConversation | ChatGPTResponse
        try {
            resp = JSON.parse(decoded_line)
        } catch (e) {
            this._CheckError(decoded_line)
            callback();
            return
        }
        if (typeof resp === 'object' && "message_id" in resp) {
            callback();
            return
        }
        if (resp.error != null) {
            this.push(NewChatCompletion(resp.error, this.model, 'stop') as ChatCompletion);
            callback();
            return;
        }

        if (
            resp.message.author.role !== "assistant" ||
            resp.message.content.content_type !== "text" ||
            resp.message.status === "in_progress"
        ) {
            callback();
            return;
        }
        if (this._count == 0) {
            this.push(NewChatCompletion(resp.message.content.parts[0], this.model, "stop"));
        }
        this._count++
        callback();
    }
}

That wraps up the data-structure handling.

Now let's take the final step and finish the chat handler:

router.post("/chat/completions", async (req: RequestWithToken, res: Response) => {
    const authorizationHeader = req.headers.authorization;
    let token: string = process.env.OPENAI_API_KEY!
    if (authorizationHeader && authorizationHeader.startsWith('Bearer ')) {
        // extract the Bearer token
        const result_token = authorizationHeader.split(' ')[1];
        if (result_token != '') {
            token = result_token
        }
    }
    if (!token) {
        // no usable token from either the Authorization header or OPENAI_API_KEY
        return res.status(401).json({message: 'No token provided'});
    }

    const schema = Joi.object<APIRequest>({
        messages: Joi.array<api_message>().required(),
        stream: Joi.boolean().default(false),
        model: Joi.string().required(),
    });

    const {error, value} = schema.validate(req.body);

    if (error) {
        return res.status(400).json({error: error.details[0].message});
    }

    const convert = ConvertAPIRequest(value)

    const {data} = await POSTConversation(convert, token)

    const chat_msg = Handler(data, value.stream, value.model)
    if (value.stream) {
        res.setHeader("Content-Type", "text/event-stream")
        chat_msg.pipe(res)
        return
    }
    chat_msg.on("data", (chunk: ChatCompletion) => {
        res.json(chunk)
    })
})

And with that, we can run this wrapped-up service locally.
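
For example, once the server is running we can call it like any OpenAI-compatible endpoint (a sketch; it assumes port 3000 and uses the credential returned by /api/login as the Bearer token):

import axios from 'axios';

// non-streaming call against the local OpenAI-compatible endpoint
axios.post('http://localhost:3000/v1/chat/completions', {
    model: 'gpt-3.5-turbo',
    stream: false,
    messages: [{role: 'user', content: '你好'}]
}, {
    headers: {Authorization: 'Bearer <credential-from-/api/login>'}
}).then(res => console.log(res.data.choices[0].message.content));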


Awesome, big shot, here's a bump for you

Whoa... a real expert... front row seat to learn!

Impressive

We'll be counting on you for the ten-thousand-word essays

Just answering 始皇's call... :joy: ... a filler post

You're amazing, big shot

So the token here can only be used to access the shared station, right? Have you tried using it to bring up a session on the official site? :face_with_peeking_eye:

Or does that only work with an official API key?

This post just uses the shared station's chat endpoint as an example, a starting point showing how to wrap an API and how to process and package the data. To wrap the official site you would first have to get past its Cloudflare check.

What even is this... never mind

Bowing to the big shot; this forum is full of experts

Big shot, could you write a tutorial for using the official site, including solving the captcha?

Hardcore. Bookmarking this first

For the captcha part, have a look at this blog post: 如何生成 GPT-4 arkose_token | 林伟源的技术博客

The latest captcha-solving js file at the moment


Capture the requests in your browser and fill in the environment. Following the blog post above,

get hold of all of those parameters.

There is a TS version of the bda encryption function; dig through that library's source code, combine it with the parameters you captured, and you can make the request.


Found another expert

I'm just a rookie; there are plenty of real experts in the group :joy: :joy:

Respect