Update README

This commit is contained in:
LzSkyline 2025-04-10 19:28:41 +08:00
parent d374400d80
commit f3f3efb862
4 changed files with 180 additions and 160 deletions

README.md

@@ -17,66 +17,56 @@ OpenDify is a proxy server that converts the Dify API into the OpenAI API format.
- Compatible with standard OpenAI API clients
- Automatic fetching of Dify application information
## Screenshots
![Dify Agent application screenshot](images/1.png)
*The screenshot shows a Dify Agent application served through the OpenDify proxy. The Agent successfully handles the user's question about Python multithreading and returns relevant code examples.*
![Dify conversation memory screenshot](images/2.png)
*The image above demonstrates OpenDify's conversation memory. When the user asks "What's the weather today?", the AI recalls from the earlier conversation that "today is sunny" and responds accordingly.*
## Features
### Conversation Memory
The proxy automatically remembers conversation context, with no extra handling required from the client. Three conversation memory modes are available:
1. **No conversation memory**: every conversation is independent, with no shared context
2. **history_message mode**: historical messages are appended directly to the current message, so clients can edit the history
3. **Zero-width character mode**: an invisible session ID is embedded in the first reply of each new conversation, and subsequent messages inherit the context automatically
The mode is controlled via an environment variable:
```shell
# Set the conversation memory mode in the .env file
# 0: no conversation memory
# 1: history_message mode (history appended to the current message)
# 2: zero-width character mode (default)
CONVERSATION_MEMORY_MODE=2
```
Zero-width character mode is the default. For scenarios where clients need to edit historical messages, history_message mode is recommended.
> Note: history_message mode appends all historical messages to the current message, which may consume more tokens.
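The zero-width character mode described above can be sketched as follows. This is a hypothetical illustration: the actual characters and framing OpenDify embeds are not documented here, so the bit-level encoding below is an assumption.

```python
from typing import Optional

# Two invisible characters stand in for the bits 0 and 1; the real
# proxy may use a different alphabet or framing.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_session_id(reply: str, session_id: str) -> str:
    """Append the session ID to a reply as an invisible bit string."""
    bits = "".join(f"{byte:08b}" for byte in session_id.encode("utf-8"))
    return reply + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract_session_id(message: str) -> Optional[str]:
    """Recover a previously embedded session ID, if any."""
    zw = [c for c in message if c in (ZW0, ZW1)]
    if not zw:
        return None
    bits = "".join("1" if c == ZW1 else "0" for c in zw)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")
```

In practice the proxy would embed the ID in the first reply of a new conversation and look for it in subsequent client messages to resume the matching Dify conversation.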
### Streaming Output Optimization
- Intelligent buffer management
- Dynamic delay calculation
- Smooth output experience
### Configuration Flexibility
- Automatic retrieval of application information
- Simplified configuration
- Dynamic model name mapping
## Supported Models
Any Dify application is supported; the system automatically fetches the application's name and information from the Dify API. Just add the application's API Key to the configuration file.
## Quick Start
### Requirements
- Python 3.9+
- pip
### Installing Dependencies
```bash
pip install -r requirements.txt
```
### Configuration
1. Copy the `.env.example` file and rename it to `.env`:
```bash
cp .env.example .env
```
2. Configure your application on the Dify platform:
- Log in to the Dify platform and open the workspace
- Click "Create Application" and configure the required models (such as Claude, Gemini, etc.)
- Configure the application's prompts and other parameters
- Publish the application
- Open the "Access API" page and generate an API key
> **Important**: Dify does not support passing prompts, switching models, or changing other parameters dynamically at request time. All of these must be configured when the application is created; Dify selects the application and its configuration based on the API key. The system automatically fetches the application's name and description from the Dify API.
3. Configure your Dify API Keys in the `.env` file:
```env
# Dify API Keys Configuration
# Format: Comma-separated list of API keys
DIFY_API_KEYS=app-xxxxxxxx,app-yyyyyyyy,app-zzzzzzzz
# Dify API Base URL
DIFY_API_BASE="https://your-dify-api-base-url/v1"
# Server Configuration
SERVER_HOST="127.0.0.1"
SERVER_PORT=5000
```
Configuration notes:
- `DIFY_API_KEYS`: a comma-separated list of API Keys, each corresponding to one Dify application
- The system automatically fetches each application's name and information from the Dify API
- No manual configuration of model names or mappings is required
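As a rough sketch of how the comma-separated `DIFY_API_KEYS` value might be consumed (the real handling in `openai_to_dify.py` may differ):

```python
import os

# Simulate the .env value from the example above; in the real service
# this would come from the environment or dotenv loading.
os.environ["DIFY_API_KEYS"] = "app-xxxxxxxx,app-yyyyyyyy,app-zzzzzzzz"

def load_api_keys():
    """Split DIFY_API_KEYS on commas, dropping empty entries."""
    raw = os.environ.get("DIFY_API_KEYS", "")
    return [key.strip() for key in raw.split(",") if key.strip()]
```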
### Running the Service
```bash
python openai_to_dify.py
```
The service starts at `http://127.0.0.1:5000`
## API Usage
### List Models
@@ -135,41 +125,61 @@ for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```
## Contribution Guidelines


@@ -17,66 +17,56 @@ OpenDify is a proxy server that transforms the Dify API into OpenAI API format.
- Compatible with standard OpenAI API clients
- Automatic fetching of Dify application information
## Screenshot
![Dify Agent Application Screenshot](images/1.png)
*The screenshot shows the Dify Agent application interface supported by the OpenDify proxy service. It demonstrates how the Agent successfully processes a user's request about Python multithreading usage and returns relevant code examples.*
![Dify Conversation Memory Feature](images/2.png)
*The above image demonstrates the conversation memory feature of OpenDify. When the user asks "What's the weather today?", the AI remembers the context from previous conversations that "today is sunny" and provides an appropriate response.*
## Detailed Features
### Conversation Memory
The proxy automatically remembers conversation context, with no additional processing required from the client. Three conversation memory modes are available:
1. **No conversation memory**: Each conversation is independent, with no context association
2. **history_message mode**: Directly appends historical messages to the current message, supporting client-side editing of historical messages
3. **Zero-width character mode**: Automatically embeds an invisible session ID in the first reply of each new conversation, and subsequent messages automatically inherit the context
This feature can be controlled via an environment variable:
```shell
# Set conversation memory mode in the .env file
# 0: No conversation memory
# 1: Construct history_message attached to the message
# 2: Zero-width character mode (default)
CONVERSATION_MEMORY_MODE=2
```
Zero-width character mode is used by default. For scenarios that need to support client-side editing of historical messages, the history_message mode is recommended.
> Note: history_message mode will append all historical messages to the current message, which may consume more tokens.
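The history_message mode described above can be pictured roughly like this; the `<history>` delimiter format is invented for illustration and is not necessarily what OpenDify emits:

```python
# Flatten OpenAI-style chat messages into a single prompt string, so
# the full history travels inside the current message. The <history>
# framing here is hypothetical.
def build_history_message(messages):
    current = messages[-1]["content"]
    history = [f'{m["role"]}: {m["content"]}' for m in messages[:-1]]
    if not history:
        return current
    return "<history>\n" + "\n".join(history) + "\n</history>\n" + current
```

This also explains the token caveat above: every prior turn is re-sent verbatim inside each new message.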
### Streaming Output Optimization
- Intelligent buffer management
- Dynamic delay calculation
- Smooth output experience
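One plausible reading of "dynamic delay calculation" is sketched below; the constants and the inverse-to-backlog rule are assumptions for illustration, not OpenDify's actual algorithm:

```python
# Pick a per-character output delay from the buffer backlog: a large
# backlog drains quickly (short delay), while a trickle is slowed to
# max_delay so the stream still feels smooth.
def dynamic_delay(buffered_chars, min_delay=0.005, max_delay=0.02):
    if buffered_chars <= 0:
        return max_delay
    return max(min_delay, min(max_delay, max_delay / buffered_chars))
```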
### Configuration Flexibility
- Automatic application information retrieval
- Simplified configuration method
- Dynamic model name mapping
## Supported Models
Supports any Dify application. The system automatically retrieves application names and information from the Dify API. Simply add the API Key for the application in the configuration file.
## Quick Start
### Requirements
- Python 3.9+
- pip
### Installing Dependencies
```bash
pip install -r requirements.txt
```
### Configuration
1. Copy the `.env.example` file and rename it to `.env`:
```bash
cp .env.example .env
```
2. Configure your application on the Dify platform:
- Log in to the Dify platform and enter the workspace
- Click "Create Application" and configure the required models (such as Claude, Gemini, etc.)
- Configure the application prompts and other parameters
- Publish the application
- Go to the "Access API" page and generate an API key
> **Important Note**: Dify does not support dynamically passing prompts, switching models, or other parameters in requests. All these configurations need to be set when creating the application. Dify determines which application and its corresponding configuration to use based on the API key. The system will automatically retrieve the application's name and description information from the Dify API.
3. Configure your Dify API Keys in the `.env` file:
```env
# Dify API Keys Configuration
# Format: Comma-separated list of API keys
DIFY_API_KEYS=app-xxxxxxxx,app-yyyyyyyy,app-zzzzzzzz
# Dify API Base URL
DIFY_API_BASE="https://your-dify-api-base-url/v1"
# Server Configuration
SERVER_HOST="127.0.0.1"
SERVER_PORT=5000
```
Configuration notes:
- `DIFY_API_KEYS`: A comma-separated list of API Keys, each corresponding to a Dify application
- The system automatically retrieves the name and information of each application from the Dify API
- No need to manually configure model names and mapping relationships
### Running the Service
```bash
python openai_to_dify.py
```
The service will start at `http://127.0.0.1:5000`
## API Usage
### List Models
@@ -135,41 +125,61 @@ for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```
## Contribution Guidelines

images/1.png (new binary file, 300 KiB)
images/2.png (new binary file, 329 KiB)