GitHub now lets you apply for access to all the major models (usage tutorial attached for those who got approved)

Thanks for sharing

Applied

Thanks, applied

Thanks

Joining the queue first

Get started

  1. Create a personal access token
    You do not need to give any permissions to the token. Note that the token will be sent to a Microsoft service.

To use the code snippets below, create an environment variable to set your token as the key for the client code.

If you’re using bash:

export GITHUB_TOKEN="<your-github-token-goes-here>"

If you’re in PowerShell:

$Env:GITHUB_TOKEN="<your-github-token-goes-here>"

If you’re using Windows command prompt:

set GITHUB_TOKEN=<your-github-token-goes-here>
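
Before running the samples, you can optionally confirm from Python that the variable is actually visible to your process. This check is a convenience addition, not part of the original guide:

import os

# Fail fast with a readable message if the token was not exported.
if not os.environ.get("GITHUB_TOKEN"):
    raise SystemExit("GITHUB_TOKEN is not set; export it as shown above.")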
2. Install dependencies
Install the OpenAI SDK using pip (requires Python >= 3.8):

pip install openai
3. Run a basic code sample
This sample demonstrates a basic call to the chat completions API, using the GitHub AI model inference endpoint and your GitHub token. The call is synchronous.

import os
from openai import OpenAI

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
    model=model_name,
    temperature=1.0,
    max_tokens=1000,
    top_p=1.0,
)

print(response.choices[0].message.content)
4. Explore more samples
Run a multi-turn conversation
This sample demonstrates a multi-turn conversation with the chat completion API. When using the model for a chat application, you’ll need to manage the history of that conversation and send the latest messages to the model.

import os
from openai import OpenAI

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "What is the capital of France?",
        },
        {
            "role": "assistant",
            "content": "The capital of France is Paris.",
        },
        {
            "role": "user",
            "content": "What about Spain?",
        }
    ],
    model=model_name,
)

print(response.choices[0].message.content)
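
The sample above hard-codes the conversation history. As a minimal sketch of what managing that history looks like in practice (the loop structure and the variable names are assumptions, not part of the original sample), you can append each user message and each model reply to a running list and resend the whole list every turn:

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",
    api_key=os.environ["GITHUB_TOKEN"],
)

# Running conversation history; starts with the system prompt.
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if not user_input:
        break
    # Append the user's turn, send the whole history, then append the reply.
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(messages=history, model="gpt-4o")
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Assistant:", reply)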
Stream the output
For a better user experience, you will want to stream the response of the model so that the first token shows up early and you avoid waiting for long responses.

import os
from openai import OpenAI

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "Give me 5 good reasons why I should exercise every day.",
        }
    ],
    model=model_name,
    stream=True
)

for update in response:
    if update.choices[0].delta.content:
        print(update.choices[0].delta.content, end="")
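
If you also need the complete reply afterwards (for example, to append it to a conversation history as in the previous sample), a small variation of the loop above collects the chunks while printing them. The stream can only be iterated once, so this replaces the original loop; the accumulation step is an addition, not part of the original sample:

# Collect streamed chunks into a full string while printing them.
chunks = []
for update in response:
    delta = update.choices[0].delta.content
    if delta:
        print(delta, end="")
        chunks.append(delta)
full_reply = "".join(chunks)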
Chat with an image input
This model supports using images as inputs. To run a chat completion using a local image file, use the following sample.

import os
import base64
from openai import OpenAI

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

def get_image_data_url(image_file: str, image_format: str) -> str:
    """
    Helper function to convert an image file to a data URL string.

    Args:
        image_file (str): The path to the image file.
        image_format (str): The format of the image file.

    Returns:
        str: The data URL of the image.
    """
    try:
        with open(image_file, "rb") as f:
            image_data = base64.b64encode(f.read()).decode("utf-8")
    except FileNotFoundError:
        print(f"Could not read '{image_file}'.")
        exit()
    return f"data:image/{image_format};base64,{image_data}"

client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant that describes images in details.",
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What's in this image?",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": get_image_data_url("sample.jpg", "jpg"),
                        "detail": "low"
                    },
                },
            ],
        },
    ],
    model=model_name,
)

print(response.choices[0].message.content)
Identify and invoke tools
A language model can be given a set of tools it can invoke to run specific actions depending on the context of the conversation. This sample demonstrates how to define a function tool and how to act on the model's request to invoke it.

import os
import json
from openai import OpenAI

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

# Define a function that returns flight information between two cities
# (mock implementation)
def get_flight_info(origin_city: str, destination_city: str):
    if origin_city == "Seattle" and destination_city == "Miami":
        return json.dumps({
            "airline": "Delta",
            "flight_number": "DL123",
            "flight_date": "May 7th, 2024",
            "flight_time": "10:00AM"})
    return json.dumps({"error": "No flights found between the cities"})

# Define a function tool that the model can ask to invoke in order to
# retrieve flight information
tool = {
    "type": "function",
    "function": {
        "name": "get_flight_info",
        "description": """Returns information about the next flight between two cities.
            This includes the name of the airline, flight number and the date and time
            of the next flight""",
        "parameters": {
            "type": "object",
            "properties": {
                "origin_city": {
                    "type": "string",
                    "description": "The name of the city where the flight originates",
                },
                "destination_city": {
                    "type": "string",
                    "description": "The flight destination city",
                },
            },
            "required": [
                "origin_city",
                "destination_city"
            ],
        },
    },
}

client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

messages = [
    {"role": "system", "content": "You are an assistant that helps users find flight information."},
    {"role": "user", "content": "I'm interested in going to Miami. What is the next flight there from Seattle?"},
]

response = client.chat.completions.create(
    messages=messages,
    tools=[tool],
    model=model_name,
)

# We expect the model to ask for a tool call
if response.choices[0].finish_reason == "tool_calls":

    # Append the model response to the chat history
    messages.append(response.choices[0].message)

    # We expect a single tool call
    if response.choices[0].message.tool_calls and len(response.choices[0].message.tool_calls) == 1:

        tool_call = response.choices[0].message.tool_calls[0]

        # We expect the tool to be a function call
        if tool_call.type == "function":

            # Parse the function call arguments (already valid JSON) and call the function
            function_args = json.loads(tool_call.function.arguments)
            print(f"Calling function `{tool_call.function.name}` with arguments {function_args}")
            callable_func = locals()[tool_call.function.name]
            function_return = callable_func(**function_args)
            print(f"Function returned = {function_return}")

            # Append the function call result to the chat history
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": tool_call.function.name,
                    "content": function_return,
                }
            )

            # Get another response from the model
            response = client.chat.completions.create(
                messages=messages,
                tools=[tool],
                model=model_name,
            )

            print(f"Model response = {response.choices[0].message.content}")
  5. Going beyond rate limits
    The rate limits for the playground and free API usage are intended to help you experiment with models and prototype your AI application. For use beyond those limits, and to bring your application to scale, you must provision resources from an Azure account and authenticate with those credentials instead of your GitHub personal access token. You don't need to change anything else in your code. Use this link to discover how to go beyond the free tier limits in Azure AI.
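
Concretely, the switch amounts to a different endpoint and key on the same client. The sketch below shows the idea under that assumption; AZURE_ENDPOINT and AZURE_API_KEY are hypothetical environment variable names for illustration, not names from the guide:

import os
from openai import OpenAI

# Hypothetical variable names; use whatever your Azure resource provides.
# The rest of the sample code stays the same.
client = OpenAI(
    base_url=os.environ["AZURE_ENDPOINT"],
    api_key=os.environ["AZURE_API_KEY"],
)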

Azure hosted. AI powered, can make mistakes. Not intended for production/sensitive data.


Thanks, boss

You’re already on the waitlist! We’ll send you an email once your access is granted

Applied, thanks for sharing

Thanks for sharing, applied

Let's go!

Waiting for approval

Those whose applications were approved can now hook this up to one api / new api.
Apply for the key token here: Sign in to GitHub · GitHub
The integration tutorial is here:
github models: applied last week, approved today, approvals have sped up! With screenshots of the one api and newapi settings

So the actual call rate limit is dynamic?

1 个赞