Installing Dify with Docker on Linux to Build a Local Knowledge Base and Agent

Dify is an open-source large language model (LLM) application development platform. It combines the ideas of Backend as a Service and LLMOps, letting developers quickly build production-grade generative AI applications. Even non-technical users can take part in defining AI applications and operating their data.

Installing it, however, was anything but smooth. Following the official documentation and installing with Docker, my first attempts all ended in failure with errors, and none of the many tutorials I found online solved the problem.

Following the official guide at https://docs.dify.ai:

git clone https://github.com/langgenius/dify.git
cd dify/docker
docker compose up -d

The solution that finally worked was: 1. switch Docker to a domestic (China) registry mirror, and 2. change the Docker image paths in docker/docker-compose.yaml inside the installation directory. After that, the installation succeeded.

Step 1: switch to a domestic Docker registry mirror.

echo '{"registry-mirrors": ["https://docker.1ms.run"]}' | sudo tee /etc/docker/daemon.json > /dev/null
sudo systemctl daemon-reload
sudo systemctl restart docker
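After restarting Docker you can quickly confirm that the mirror is active (a simple check; the output wording varies a little between Docker versions):

docker info | grep -A 1 "Registry Mirrors"
# the line after "Registry Mirrors:" should list https://docker.1ms.run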

Step 2: edit docker/docker-compose.yaml (vi docker/docker-compose.yaml). The complete modified file, which you can use directly, is at the end of this article.
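The change itself is mechanical: every image: entry that pulls from Docker Hub gets the mirror host as a prefix, e.g. langgenius/dify-api becomes docker.1ms.run/langgenius/dify-api, and official images such as postgres:15-alpine become docker.1ms.run/library/postgres:15-alpine. If you would rather patch your own copy than paste the full file, a sed sketch along these lines covers the main images (illustrative only; review the result before starting compose):

cd dify/docker
cp docker-compose.yaml docker-compose.yaml.bak
# prefix the Dify images with the mirror host
sed -i 's#image: langgenius/#image: docker.1ms.run/langgenius/#g' docker-compose.yaml
# official library images need the library/ namespace on the mirror
sed -i 's#image: postgres:#image: docker.1ms.run/library/postgres:#g' docker-compose.yaml
sed -i 's#image: redis:#image: docker.1ms.run/library/redis:#g' docker-compose.yaml
sed -i 's#image: nginx:#image: docker.1ms.run/library/nginx:#g' docker-compose.yaml
# other Docker Hub images (ubuntu/squid, semitechnologies/weaviate, ...) follow the same pattern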

Finally, pull the images and wait for the stack to start successfully:

docker compose up -d
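While the images download you can watch the stack come up; the service names below come from the compose file at the end of this article:

docker compose ps
# every service should eventually be running (some also report healthy)
docker compose logs -f --tail=50 api
# follow the API logs if a container keeps restarting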

Open the server's address http://localhost in a browser and complete the initial configuration (creating the admin account).
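If the page does not load, a quick check from the server itself helps separate a firewall or network issue from a container issue (this assumes the default nginx port 80, i.e. EXPOSE_NGINX_PORT was not changed):

curl -I http://localhost
# any HTTP response here (even a redirect) means nginx and the web container are answering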

After logging in successfully, Dify is ready to use.

The modified docker/docker-compose.yaml file:

# ==================================================================# WARNING: This file is auto-generated by generate_docker_compose# Do not modify this file directly. Instead, update the .env.example# or docker-compose-template.yaml and regenerate this file.# ==================================================================x-shared-env: &shared-api-worker-env  CONSOLE_API_URL: ${CONSOLE_API_URL:-}  CONSOLE_WEB_URL: ${CONSOLE_WEB_URL:-}  SERVICE_API_URL: ${SERVICE_API_URL:-}  APP_API_URL: ${APP_API_URL:-}  APP_WEB_URL: ${APP_WEB_URL:-}  FILES_URL: ${FILES_URL:-}  LOG_LEVEL: ${LOG_LEVEL:-INFO}  LOG_FILE: ${LOG_FILE:-/app/logs/server.log}  LOG_FILE_MAX_SIZE: ${LOG_FILE_MAX_SIZE:-20}  LOG_FILE_BACKUP_COUNT: ${LOG_FILE_BACKUP_COUNT:-5}  LOG_DATEFORMAT: ${LOG_DATEFORMAT:-%Y-%m-%d %H:%M:%S}  LOG_TZ: ${LOG_TZ:-UTC}  DEBUG: ${DEBUG:-false}  FLASK_DEBUG: ${FLASK_DEBUG:-false}  SECRET_KEY: ${SECRET_KEY:-sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U}  INIT_PASSWORD: ${INIT_PASSWORD:-}  DEPLOY_ENV: ${DEPLOY_ENV:-PRODUCTION}  CHECK_UPDATE_URL: ${CHECK_UPDATE_URL:-https://updates.dify.ai}  OPENAI_API_BASE: ${OPENAI_API_BASE:-https://api.openai.com/v1}  MIGRATION_ENABLED: ${MIGRATION_ENABLED:-true}  FILES_ACCESS_TIMEOUT: ${FILES_ACCESS_TIMEOUT:-300}  ACCESS_TOKEN_EXPIRE_MINUTES: ${ACCESS_TOKEN_EXPIRE_MINUTES:-60}  REFRESH_TOKEN_EXPIRE_DAYS: ${REFRESH_TOKEN_EXPIRE_DAYS:-30}  APP_MAX_ACTIVE_REQUESTS: ${APP_MAX_ACTIVE_REQUESTS:-0}  APP_MAX_EXECUTION_TIME: ${APP_MAX_EXECUTION_TIME:-1200}  DIFY_BIND_ADDRESS: ${DIFY_BIND_ADDRESS:-0.0.0.0}  DIFY_PORT: ${DIFY_PORT:-5001}  SERVER_WORKER_AMOUNT: ${SERVER_WORKER_AMOUNT:-1}  SERVER_WORKER_CLASS: ${SERVER_WORKER_CLASS:-gevent}  SERVER_WORKER_CONNECTIONS: ${SERVER_WORKER_CONNECTIONS:-10}  CELERY_WORKER_CLASS: ${CELERY_WORKER_CLASS:-}  GUNICORN_TIMEOUT: ${GUNICORN_TIMEOUT:-360}  CELERY_WORKER_AMOUNT: ${CELERY_WORKER_AMOUNT:-}  CELERY_AUTO_SCALE: ${CELERY_AUTO_SCALE:-false}  CELERY_MAX_WORKERS: ${CELERY_MAX_WORKERS:-}  CELERY_MIN_WORKERS: ${CELERY_MIN_WORKERS:-}  API_TOOL_DEFAULT_CONNECT_TIMEOUT: ${API_TOOL_DEFAULT_CONNECT_TIMEOUT:-10}  API_TOOL_DEFAULT_READ_TIMEOUT: ${API_TOOL_DEFAULT_READ_TIMEOUT:-60}  DB_USERNAME: ${DB_USERNAME:-postgres}  DB_PASSWORD: ${DB_PASSWORD:-difyai123456}  DB_HOST: ${DB_HOST:-db}  DB_PORT: ${DB_PORT:-5432}  DB_DATABASE: ${DB_DATABASE:-dify}  SQLALCHEMY_POOL_SIZE: ${SQLALCHEMY_POOL_SIZE:-30}  SQLALCHEMY_POOL_RECYCLE: ${SQLALCHEMY_POOL_RECYCLE:-3600}  SQLALCHEMY_ECHO: ${SQLALCHEMY_ECHO:-false}  POSTGRES_MAX_CONNECTIONS: ${POSTGRES_MAX_CONNECTIONS:-100}  POSTGRES_SHARED_BUFFERS: ${POSTGRES_SHARED_BUFFERS:-128MB}  POSTGRES_WORK_MEM: ${POSTGRES_WORK_MEM:-4MB}  POSTGRES_MAINTENANCE_WORK_MEM: ${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}  POSTGRES_EFFECTIVE_CACHE_SIZE: ${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}  REDIS_HOST: ${REDIS_HOST:-redis}  REDIS_PORT: ${REDIS_PORT:-6379}  REDIS_USERNAME: ${REDIS_USERNAME:-}  REDIS_PASSWORD: ${REDIS_PASSWORD:-difyai123456}  REDIS_USE_SSL: ${REDIS_USE_SSL:-false}  REDIS_DB: ${REDIS_DB:-0}  REDIS_USE_SENTINEL: ${REDIS_USE_SENTINEL:-false}  REDIS_SENTINELS: ${REDIS_SENTINELS:-}  REDIS_SENTINEL_SERVICE_NAME: ${REDIS_SENTINEL_SERVICE_NAME:-}  REDIS_SENTINEL_USERNAME: ${REDIS_SENTINEL_USERNAME:-}  REDIS_SENTINEL_PASSWORD: ${REDIS_SENTINEL_PASSWORD:-}  REDIS_SENTINEL_SOCKET_TIMEOUT: ${REDIS_SENTINEL_SOCKET_TIMEOUT:-0.1}  REDIS_USE_CLUSTERS: ${REDIS_USE_CLUSTERS:-false}  REDIS_CLUSTERS: ${REDIS_CLUSTERS:-}  REDIS_CLUSTERS_PASSWORD: ${REDIS_CLUSTERS_PASSWORD:-}  CELERY_BROKER_URL: 
${CELERY_BROKER_URL:-redis://:difyai123456@redis:6379/1}  BROKER_USE_SSL: ${BROKER_USE_SSL:-false}  CELERY_USE_SENTINEL: ${CELERY_USE_SENTINEL:-false}  CELERY_SENTINEL_MASTER_NAME: ${CELERY_SENTINEL_MASTER_NAME:-}  CELERY_SENTINEL_SOCKET_TIMEOUT: ${CELERY_SENTINEL_SOCKET_TIMEOUT:-0.1}  WEB_API_CORS_ALLOW_ORIGINS: ${WEB_API_CORS_ALLOW_ORIGINS:-*}  CONSOLE_CORS_ALLOW_ORIGINS: ${CONSOLE_CORS_ALLOW_ORIGINS:-*}  STORAGE_TYPE: ${STORAGE_TYPE:-opendal}  OPENDAL_SCHEME: ${OPENDAL_SCHEME:-fs}  OPENDAL_FS_ROOT: ${OPENDAL_FS_ROOT:-storage}  S3_ENDPOINT: ${S3_ENDPOINT:-}  S3_REGION: ${S3_REGION:-us-east-1}  S3_BUCKET_NAME: ${S3_BUCKET_NAME:-difyai}  S3_ACCESS_KEY: ${S3_ACCESS_KEY:-}  S3_SECRET_KEY: ${S3_SECRET_KEY:-}  S3_USE_AWS_MANAGED_IAM: ${S3_USE_AWS_MANAGED_IAM:-false}  AZURE_BLOB_ACCOUNT_NAME: ${AZURE_BLOB_ACCOUNT_NAME:-difyai}  AZURE_BLOB_ACCOUNT_KEY: ${AZURE_BLOB_ACCOUNT_KEY:-difyai}  AZURE_BLOB_CONTAINER_NAME: ${AZURE_BLOB_CONTAINER_NAME:-difyai-container}  AZURE_BLOB_ACCOUNT_URL: ${AZURE_BLOB_ACCOUNT_URL:-https://<your_account_name>.blob.core.windows.net}  GOOGLE_STORAGE_BUCKET_NAME: ${GOOGLE_STORAGE_BUCKET_NAME:-your-bucket-name}  GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64: ${GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64:-}  ALIYUN_OSS_BUCKET_NAME: ${ALIYUN_OSS_BUCKET_NAME:-your-bucket-name}  ALIYUN_OSS_ACCESS_KEY: ${ALIYUN_OSS_ACCESS_KEY:-your-access-key}  ALIYUN_OSS_SECRET_KEY: ${ALIYUN_OSS_SECRET_KEY:-your-secret-key}  ALIYUN_OSS_ENDPOINT: ${ALIYUN_OSS_ENDPOINT:-https://oss-ap-southeast-1-internal.aliyuncs.com}  ALIYUN_OSS_REGION: ${ALIYUN_OSS_REGION:-ap-southeast-1}  ALIYUN_OSS_AUTH_VERSION: ${ALIYUN_OSS_AUTH_VERSION:-v4}  ALIYUN_OSS_PATH: ${ALIYUN_OSS_PATH:-your-path}  TENCENT_COS_BUCKET_NAME: ${TENCENT_COS_BUCKET_NAME:-your-bucket-name}  TENCENT_COS_SECRET_KEY: ${TENCENT_COS_SECRET_KEY:-your-secret-key}  TENCENT_COS_SECRET_ID: ${TENCENT_COS_SECRET_ID:-your-secret-id}  TENCENT_COS_REGION: ${TENCENT_COS_REGION:-your-region}  TENCENT_COS_SCHEME: ${TENCENT_COS_SCHEME:-your-scheme}  OCI_ENDPOINT: ${OCI_ENDPOINT:-https://objectstorage.us-ashburn-1.oraclecloud.com}  OCI_BUCKET_NAME: ${OCI_BUCKET_NAME:-your-bucket-name}  OCI_ACCESS_KEY: ${OCI_ACCESS_KEY:-your-access-key}  OCI_SECRET_KEY: ${OCI_SECRET_KEY:-your-secret-key}  OCI_REGION: ${OCI_REGION:-us-ashburn-1}  HUAWEI_OBS_BUCKET_NAME: ${HUAWEI_OBS_BUCKET_NAME:-your-bucket-name}  HUAWEI_OBS_SECRET_KEY: ${HUAWEI_OBS_SECRET_KEY:-your-secret-key}  HUAWEI_OBS_ACCESS_KEY: ${HUAWEI_OBS_ACCESS_KEY:-your-access-key}  HUAWEI_OBS_SERVER: ${HUAWEI_OBS_SERVER:-your-server-url}  VOLCENGINE_TOS_BUCKET_NAME: ${VOLCENGINE_TOS_BUCKET_NAME:-your-bucket-name}  VOLCENGINE_TOS_SECRET_KEY: ${VOLCENGINE_TOS_SECRET_KEY:-your-secret-key}  VOLCENGINE_TOS_ACCESS_KEY: ${VOLCENGINE_TOS_ACCESS_KEY:-your-access-key}  VOLCENGINE_TOS_ENDPOINT: ${VOLCENGINE_TOS_ENDPOINT:-your-server-url}  VOLCENGINE_TOS_REGION: ${VOLCENGINE_TOS_REGION:-your-region}  BAIDU_OBS_BUCKET_NAME: ${BAIDU_OBS_BUCKET_NAME:-your-bucket-name}  BAIDU_OBS_SECRET_KEY: ${BAIDU_OBS_SECRET_KEY:-your-secret-key}  BAIDU_OBS_ACCESS_KEY: ${BAIDU_OBS_ACCESS_KEY:-your-access-key}  BAIDU_OBS_ENDPOINT: ${BAIDU_OBS_ENDPOINT:-your-server-url}  SUPABASE_BUCKET_NAME: ${SUPABASE_BUCKET_NAME:-your-bucket-name}  SUPABASE_API_KEY: ${SUPABASE_API_KEY:-your-access-key}  SUPABASE_URL: ${SUPABASE_URL:-your-server-url}  VECTOR_STORE: ${VECTOR_STORE:-weaviate}  WEAVIATE_ENDPOINT: ${WEAVIATE_ENDPOINT:-http://weaviate:8080}  WEAVIATE_API_KEY: ${WEAVIATE_API_KEY:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}  QDRANT_URL: 
${QDRANT_URL:-http://qdrant:6333}  QDRANT_API_KEY: ${QDRANT_API_KEY:-difyai123456}  QDRANT_CLIENT_TIMEOUT: ${QDRANT_CLIENT_TIMEOUT:-20}  QDRANT_GRPC_ENABLED: ${QDRANT_GRPC_ENABLED:-false}  QDRANT_GRPC_PORT: ${QDRANT_GRPC_PORT:-6334}  MILVUS_URI: ${MILVUS_URI:-http://127.0.0.1:19530}  MILVUS_TOKEN: ${MILVUS_TOKEN:-}  MILVUS_USER: ${MILVUS_USER:-root}  MILVUS_PASSWORD: ${MILVUS_PASSWORD:-Milvus}  MILVUS_ENABLE_HYBRID_SEARCH: ${MILVUS_ENABLE_HYBRID_SEARCH:-False}  MYSCALE_HOST: ${MYSCALE_HOST:-myscale}  MYSCALE_PORT: ${MYSCALE_PORT:-8123}  MYSCALE_USER: ${MYSCALE_USER:-default}  MYSCALE_PASSWORD: ${MYSCALE_PASSWORD:-}  MYSCALE_DATABASE: ${MYSCALE_DATABASE:-dify}  MYSCALE_FTS_PARAMS: ${MYSCALE_FTS_PARAMS:-}  COUCHBASE_CONNECTION_STRING: ${COUCHBASE_CONNECTION_STRING:-couchbase://couchbase-server}  COUCHBASE_USER: ${COUCHBASE_USER:-Administrator}  COUCHBASE_PASSWORD: ${COUCHBASE_PASSWORD:-password}  COUCHBASE_BUCKET_NAME: ${COUCHBASE_BUCKET_NAME:-Embeddings}  COUCHBASE_SCOPE_NAME: ${COUCHBASE_SCOPE_NAME:-_default}  PGVECTOR_HOST: ${PGVECTOR_HOST:-pgvector}  PGVECTOR_PORT: ${PGVECTOR_PORT:-5432}  PGVECTOR_USER: ${PGVECTOR_USER:-postgres}  PGVECTOR_PASSWORD: ${PGVECTOR_PASSWORD:-difyai123456}  PGVECTOR_DATABASE: ${PGVECTOR_DATABASE:-dify}  PGVECTOR_MIN_CONNECTION: ${PGVECTOR_MIN_CONNECTION:-1}  PGVECTOR_MAX_CONNECTION: ${PGVECTOR_MAX_CONNECTION:-5}  PGVECTO_RS_HOST: ${PGVECTO_RS_HOST:-pgvecto-rs}  PGVECTO_RS_PORT: ${PGVECTO_RS_PORT:-5432}  PGVECTO_RS_USER: ${PGVECTO_RS_USER:-postgres}  PGVECTO_RS_PASSWORD: ${PGVECTO_RS_PASSWORD:-difyai123456}  PGVECTO_RS_DATABASE: ${PGVECTO_RS_DATABASE:-dify}  ANALYTICDB_KEY_ID: ${ANALYTICDB_KEY_ID:-your-ak}  ANALYTICDB_KEY_SECRET: ${ANALYTICDB_KEY_SECRET:-your-sk}  ANALYTICDB_REGION_ID: ${ANALYTICDB_REGION_ID:-cn-hangzhou}  ANALYTICDB_INSTANCE_ID: ${ANALYTICDB_INSTANCE_ID:-gp-ab123456}  ANALYTICDB_ACCOUNT: ${ANALYTICDB_ACCOUNT:-testaccount}  ANALYTICDB_PASSWORD: ${ANALYTICDB_PASSWORD:-testpassword}  ANALYTICDB_NAMESPACE: ${ANALYTICDB_NAMESPACE:-dify}  ANALYTICDB_NAMESPACE_PASSWORD: ${ANALYTICDB_NAMESPACE_PASSWORD:-difypassword}  ANALYTICDB_HOST: ${ANALYTICDB_HOST:-gp-test.aliyuncs.com}  ANALYTICDB_PORT: ${ANALYTICDB_PORT:-5432}  ANALYTICDB_MIN_CONNECTION: ${ANALYTICDB_MIN_CONNECTION:-1}  ANALYTICDB_MAX_CONNECTION: ${ANALYTICDB_MAX_CONNECTION:-5}  TIDB_VECTOR_HOST: ${TIDB_VECTOR_HOST:-tidb}  TIDB_VECTOR_PORT: ${TIDB_VECTOR_PORT:-4000}  TIDB_VECTOR_USER: ${TIDB_VECTOR_USER:-}  TIDB_VECTOR_PASSWORD: ${TIDB_VECTOR_PASSWORD:-}  TIDB_VECTOR_DATABASE: ${TIDB_VECTOR_DATABASE:-dify}  TIDB_ON_QDRANT_URL: ${TIDB_ON_QDRANT_URL:-http://127.0.0.1}  TIDB_ON_QDRANT_API_KEY: ${TIDB_ON_QDRANT_API_KEY:-dify}  TIDB_ON_QDRANT_CLIENT_TIMEOUT: ${TIDB_ON_QDRANT_CLIENT_TIMEOUT:-20}  TIDB_ON_QDRANT_GRPC_ENABLED: ${TIDB_ON_QDRANT_GRPC_ENABLED:-false}  TIDB_ON_QDRANT_GRPC_PORT: ${TIDB_ON_QDRANT_GRPC_PORT:-6334}  TIDB_PUBLIC_KEY: ${TIDB_PUBLIC_KEY:-dify}  TIDB_PRIVATE_KEY: ${TIDB_PRIVATE_KEY:-dify}  TIDB_API_URL: ${TIDB_API_URL:-http://127.0.0.1}  TIDB_IAM_API_URL: ${TIDB_IAM_API_URL:-http://127.0.0.1}  TIDB_REGION: ${TIDB_REGION:-regions/aws-us-east-1}  TIDB_PROJECT_ID: ${TIDB_PROJECT_ID:-dify}  TIDB_SPEND_LIMIT: ${TIDB_SPEND_LIMIT:-100}  CHROMA_HOST: ${CHROMA_HOST:-127.0.0.1}  CHROMA_PORT: ${CHROMA_PORT:-8000}  CHROMA_TENANT: ${CHROMA_TENANT:-default_tenant}  CHROMA_DATABASE: ${CHROMA_DATABASE:-default_database}  CHROMA_AUTH_PROVIDER: ${CHROMA_AUTH_PROVIDER:-chromadb.auth.token_authn.TokenAuthClientProvider}  CHROMA_AUTH_CREDENTIALS: ${CHROMA_AUTH_CREDENTIALS:-}  ORACLE_HOST: 
${ORACLE_HOST:-oracle}  ORACLE_PORT: ${ORACLE_PORT:-1521}  ORACLE_USER: ${ORACLE_USER:-dify}  ORACLE_PASSWORD: ${ORACLE_PASSWORD:-dify}  ORACLE_DATABASE: ${ORACLE_DATABASE:-FREEPDB1}  RELYT_HOST: ${RELYT_HOST:-db}  RELYT_PORT: ${RELYT_PORT:-5432}  RELYT_USER: ${RELYT_USER:-postgres}  RELYT_PASSWORD: ${RELYT_PASSWORD:-difyai123456}  RELYT_DATABASE: ${RELYT_DATABASE:-postgres}  OPENSEARCH_HOST: ${OPENSEARCH_HOST:-opensearch}  OPENSEARCH_PORT: ${OPENSEARCH_PORT:-9200}  OPENSEARCH_USER: ${OPENSEARCH_USER:-admin}  OPENSEARCH_PASSWORD: ${OPENSEARCH_PASSWORD:-admin}  OPENSEARCH_SECURE: ${OPENSEARCH_SECURE:-true}  TENCENT_VECTOR_DB_URL: ${TENCENT_VECTOR_DB_URL:-http://127.0.0.1}  TENCENT_VECTOR_DB_API_KEY: ${TENCENT_VECTOR_DB_API_KEY:-dify}  TENCENT_VECTOR_DB_TIMEOUT: ${TENCENT_VECTOR_DB_TIMEOUT:-30}  TENCENT_VECTOR_DB_USERNAME: ${TENCENT_VECTOR_DB_USERNAME:-dify}  TENCENT_VECTOR_DB_DATABASE: ${TENCENT_VECTOR_DB_DATABASE:-dify}  TENCENT_VECTOR_DB_SHARD: ${TENCENT_VECTOR_DB_SHARD:-1}  TENCENT_VECTOR_DB_REPLICAS: ${TENCENT_VECTOR_DB_REPLICAS:-2}  ELASTICSEARCH_HOST: ${ELASTICSEARCH_HOST:-0.0.0.0}  ELASTICSEARCH_PORT: ${ELASTICSEARCH_PORT:-9200}  ELASTICSEARCH_USERNAME: ${ELASTICSEARCH_USERNAME:-elastic}  ELASTICSEARCH_PASSWORD: ${ELASTICSEARCH_PASSWORD:-elastic}  KIBANA_PORT: ${KIBANA_PORT:-5601}  BAIDU_VECTOR_DB_ENDPOINT: ${BAIDU_VECTOR_DB_ENDPOINT:-http://127.0.0.1:5287}  BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS: ${BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS:-30000}  BAIDU_VECTOR_DB_ACCOUNT: ${BAIDU_VECTOR_DB_ACCOUNT:-root}  BAIDU_VECTOR_DB_API_KEY: ${BAIDU_VECTOR_DB_API_KEY:-dify}  BAIDU_VECTOR_DB_DATABASE: ${BAIDU_VECTOR_DB_DATABASE:-dify}  BAIDU_VECTOR_DB_SHARD: ${BAIDU_VECTOR_DB_SHARD:-1}  BAIDU_VECTOR_DB_REPLICAS: ${BAIDU_VECTOR_DB_REPLICAS:-3}  VIKINGDB_ACCESS_KEY: ${VIKINGDB_ACCESS_KEY:-your-ak}  VIKINGDB_SECRET_KEY: ${VIKINGDB_SECRET_KEY:-your-sk}  VIKINGDB_REGION: ${VIKINGDB_REGION:-cn-shanghai}  VIKINGDB_HOST: ${VIKINGDB_HOST:-api-vikingdb.xxx.volces.com}  VIKINGDB_SCHEMA: ${VIKINGDB_SCHEMA:-http}  VIKINGDB_CONNECTION_TIMEOUT: ${VIKINGDB_CONNECTION_TIMEOUT:-30}  VIKINGDB_SOCKET_TIMEOUT: ${VIKINGDB_SOCKET_TIMEOUT:-30}  LINDORM_URL: ${LINDORM_URL:-http://lindorm:30070}  LINDORM_USERNAME: ${LINDORM_USERNAME:-lindorm}  LINDORM_PASSWORD: ${LINDORM_PASSWORD:-lindorm}  OCEANBASE_VECTOR_HOST: ${OCEANBASE_VECTOR_HOST:-oceanbase}  OCEANBASE_VECTOR_PORT: ${OCEANBASE_VECTOR_PORT:-2881}  OCEANBASE_VECTOR_USER: ${OCEANBASE_VECTOR_USER:-root@test}  OCEANBASE_VECTOR_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456}  OCEANBASE_VECTOR_DATABASE: ${OCEANBASE_VECTOR_DATABASE:-test}  OCEANBASE_CLUSTER_NAME: ${OCEANBASE_CLUSTER_NAME:-difyai}  OCEANBASE_MEMORY_LIMIT: ${OCEANBASE_MEMORY_LIMIT:-6G}  UPSTASH_VECTOR_URL: ${UPSTASH_VECTOR_URL:-https://xxx-vector.upstash.io}  UPSTASH_VECTOR_TOKEN: ${UPSTASH_VECTOR_TOKEN:-dify}  UPLOAD_FILE_SIZE_LIMIT: ${UPLOAD_FILE_SIZE_LIMIT:-15}  UPLOAD_FILE_BATCH_LIMIT: ${UPLOAD_FILE_BATCH_LIMIT:-5}  ETL_TYPE: ${ETL_TYPE:-dify}  UNSTRUCTURED_API_URL: ${UNSTRUCTURED_API_URL:-}  UNSTRUCTURED_API_KEY: ${UNSTRUCTURED_API_KEY:-}  SCARF_NO_ANALYTICS: ${SCARF_NO_ANALYTICS:-true}  PROMPT_GENERATION_MAX_TOKENS: ${PROMPT_GENERATION_MAX_TOKENS:-512}  CODE_GENERATION_MAX_TOKENS: ${CODE_GENERATION_MAX_TOKENS:-1024}  MULTIMODAL_SEND_FORMAT: ${MULTIMODAL_SEND_FORMAT:-base64}  UPLOAD_IMAGE_FILE_SIZE_LIMIT: ${UPLOAD_IMAGE_FILE_SIZE_LIMIT:-10}  UPLOAD_VIDEO_FILE_SIZE_LIMIT: ${UPLOAD_VIDEO_FILE_SIZE_LIMIT:-100}  UPLOAD_AUDIO_FILE_SIZE_LIMIT: ${UPLOAD_AUDIO_FILE_SIZE_LIMIT:-50}  SENTRY_DSN: ${SENTRY_DSN:-} 
 API_SENTRY_DSN: ${API_SENTRY_DSN:-}  API_SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0}  API_SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0}  WEB_SENTRY_DSN: ${WEB_SENTRY_DSN:-}  NOTION_INTEGRATION_TYPE: ${NOTION_INTEGRATION_TYPE:-public}  NOTION_CLIENT_SECRET: ${NOTION_CLIENT_SECRET:-}  NOTION_CLIENT_ID: ${NOTION_CLIENT_ID:-}  NOTION_INTERNAL_SECRET: ${NOTION_INTERNAL_SECRET:-}  MAIL_TYPE: ${MAIL_TYPE:-resend}  MAIL_DEFAULT_SEND_FROM: ${MAIL_DEFAULT_SEND_FROM:-}  RESEND_API_URL: ${RESEND_API_URL:-https://api.resend.com}  RESEND_API_KEY: ${RESEND_API_KEY:-your-resend-api-key}  SMTP_SERVER: ${SMTP_SERVER:-}  SMTP_PORT: ${SMTP_PORT:-465}  SMTP_USERNAME: ${SMTP_USERNAME:-}  SMTP_PASSWORD: ${SMTP_PASSWORD:-}  SMTP_USE_TLS: ${SMTP_USE_TLS:-true}  SMTP_OPPORTUNISTIC_TLS: ${SMTP_OPPORTUNISTIC_TLS:-false}  INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: ${INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH:-4000}  INVITE_EXPIRY_HOURS: ${INVITE_EXPIRY_HOURS:-72}  RESET_PASSWORD_TOKEN_EXPIRY_MINUTES: ${RESET_PASSWORD_TOKEN_EXPIRY_MINUTES:-5}  CODE_EXECUTION_ENDPOINT: ${CODE_EXECUTION_ENDPOINT:-http://sandbox:8194}  CODE_EXECUTION_API_KEY: ${CODE_EXECUTION_API_KEY:-dify-sandbox}  CODE_MAX_NUMBER: ${CODE_MAX_NUMBER:-9223372036854775807}  CODE_MIN_NUMBER: ${CODE_MIN_NUMBER:--9223372036854775808}  CODE_MAX_DEPTH: ${CODE_MAX_DEPTH:-5}  CODE_MAX_PRECISION: ${CODE_MAX_PRECISION:-20}  CODE_MAX_STRING_LENGTH: ${CODE_MAX_STRING_LENGTH:-80000}  CODE_MAX_STRING_ARRAY_LENGTH: ${CODE_MAX_STRING_ARRAY_LENGTH:-30}  CODE_MAX_OBJECT_ARRAY_LENGTH: ${CODE_MAX_OBJECT_ARRAY_LENGTH:-30}  CODE_MAX_NUMBER_ARRAY_LENGTH: ${CODE_MAX_NUMBER_ARRAY_LENGTH:-1000}  CODE_EXECUTION_CONNECT_TIMEOUT: ${CODE_EXECUTION_CONNECT_TIMEOUT:-10}  CODE_EXECUTION_READ_TIMEOUT: ${CODE_EXECUTION_READ_TIMEOUT:-60}  CODE_EXECUTION_WRITE_TIMEOUT: ${CODE_EXECUTION_WRITE_TIMEOUT:-10}  TEMPLATE_TRANSFORM_MAX_LENGTH: ${TEMPLATE_TRANSFORM_MAX_LENGTH:-80000}  WORKFLOW_MAX_EXECUTION_STEPS: ${WORKFLOW_MAX_EXECUTION_STEPS:-500}  WORKFLOW_MAX_EXECUTION_TIME: ${WORKFLOW_MAX_EXECUTION_TIME:-1200}  WORKFLOW_CALL_MAX_DEPTH: ${WORKFLOW_CALL_MAX_DEPTH:-5}  MAX_VARIABLE_SIZE: ${MAX_VARIABLE_SIZE:-204800}  WORKFLOW_PARALLEL_DEPTH_LIMIT: ${WORKFLOW_PARALLEL_DEPTH_LIMIT:-3}  WORKFLOW_FILE_UPLOAD_LIMIT: ${WORKFLOW_FILE_UPLOAD_LIMIT:-10}  HTTP_REQUEST_NODE_MAX_BINARY_SIZE: ${HTTP_REQUEST_NODE_MAX_BINARY_SIZE:-10485760}  HTTP_REQUEST_NODE_MAX_TEXT_SIZE: ${HTTP_REQUEST_NODE_MAX_TEXT_SIZE:-1048576}  SSRF_PROXY_HTTP_URL: ${SSRF_PROXY_HTTP_URL:-http://ssrf_proxy:3128}  SSRF_PROXY_HTTPS_URL: ${SSRF_PROXY_HTTPS_URL:-http://ssrf_proxy:3128}  TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000}  PGUSER: ${PGUSER:-${DB_USERNAME}}  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-${DB_PASSWORD}}  POSTGRES_DB: ${POSTGRES_DB:-${DB_DATABASE}}  PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata}  SANDBOX_API_KEY: ${SANDBOX_API_KEY:-dify-sandbox}  SANDBOX_GIN_MODE: ${SANDBOX_GIN_MODE:-release}  SANDBOX_WORKER_TIMEOUT: ${SANDBOX_WORKER_TIMEOUT:-15}  SANDBOX_ENABLE_NETWORK: ${SANDBOX_ENABLE_NETWORK:-true}  SANDBOX_HTTP_PROXY: ${SANDBOX_HTTP_PROXY:-http://ssrf_proxy:3128}  SANDBOX_HTTPS_PROXY: ${SANDBOX_HTTPS_PROXY:-http://ssrf_proxy:3128}  SANDBOX_PORT: ${SANDBOX_PORT:-8194}  WEAVIATE_PERSISTENCE_DATA_PATH: ${WEAVIATE_PERSISTENCE_DATA_PATH:-/var/lib/weaviate}  WEAVIATE_QUERY_DEFAULTS_LIMIT: ${WEAVIATE_QUERY_DEFAULTS_LIMIT:-25}  WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: ${WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED:-true}  
WEAVIATE_DEFAULT_VECTORIZER_MODULE: ${WEAVIATE_DEFAULT_VECTORIZER_MODULE:-none}  WEAVIATE_CLUSTER_HOSTNAME: ${WEAVIATE_CLUSTER_HOSTNAME:-node1}  WEAVIATE_AUTHENTICATION_APIKEY_ENABLED: ${WEAVIATE_AUTHENTICATION_APIKEY_ENABLED:-true}  WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS: ${WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}  WEAVIATE_AUTHENTICATION_APIKEY_USERS: ${WEAVIATE_AUTHENTICATION_APIKEY_USERS:-hello@dify.ai}  WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED: ${WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED:-true}  WEAVIATE_AUTHORIZATION_ADMINLIST_USERS: ${WEAVIATE_AUTHORIZATION_ADMINLIST_USERS:-hello@dify.ai}  CHROMA_SERVER_AUTHN_CREDENTIALS: ${CHROMA_SERVER_AUTHN_CREDENTIALS:-difyai123456}  CHROMA_SERVER_AUTHN_PROVIDER: ${CHROMA_SERVER_AUTHN_PROVIDER:-chromadb.auth.token_authn.TokenAuthenticationServerProvider}  CHROMA_IS_PERSISTENT: ${CHROMA_IS_PERSISTENT:-TRUE}  ORACLE_PWD: ${ORACLE_PWD:-Dify123456}  ORACLE_CHARACTERSET: ${ORACLE_CHARACTERSET:-AL32UTF8}  ETCD_AUTO_COMPACTION_MODE: ${ETCD_AUTO_COMPACTION_MODE:-revision}  ETCD_AUTO_COMPACTION_RETENTION: ${ETCD_AUTO_COMPACTION_RETENTION:-1000}  ETCD_QUOTA_BACKEND_BYTES: ${ETCD_QUOTA_BACKEND_BYTES:-4294967296}  ETCD_SNAPSHOT_COUNT: ${ETCD_SNAPSHOT_COUNT:-50000}  MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY:-minioadmin}  MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-minioadmin}  ETCD_ENDPOINTS: ${ETCD_ENDPOINTS:-etcd:2379}  MINIO_ADDRESS: ${MINIO_ADDRESS:-minio:9000}  MILVUS_AUTHORIZATION_ENABLED: ${MILVUS_AUTHORIZATION_ENABLED:-true}  PGVECTOR_PGUSER: ${PGVECTOR_PGUSER:-postgres}  PGVECTOR_POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456}  PGVECTOR_POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify}  PGVECTOR_PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata}  OPENSEARCH_DISCOVERY_TYPE: ${OPENSEARCH_DISCOVERY_TYPE:-single-node}  OPENSEARCH_BOOTSTRAP_MEMORY_LOCK: ${OPENSEARCH_BOOTSTRAP_MEMORY_LOCK:-true}  OPENSEARCH_JAVA_OPTS_MIN: ${OPENSEARCH_JAVA_OPTS_MIN:-512m}  OPENSEARCH_JAVA_OPTS_MAX: ${OPENSEARCH_JAVA_OPTS_MAX:-1024m}  OPENSEARCH_INITIAL_ADMIN_PASSWORD: ${OPENSEARCH_INITIAL_ADMIN_PASSWORD:-Qazwsxedc!@#123}  OPENSEARCH_MEMLOCK_SOFT: ${OPENSEARCH_MEMLOCK_SOFT:--1}  OPENSEARCH_MEMLOCK_HARD: ${OPENSEARCH_MEMLOCK_HARD:--1}  OPENSEARCH_NOFILE_SOFT: ${OPENSEARCH_NOFILE_SOFT:-65536}  OPENSEARCH_NOFILE_HARD: ${OPENSEARCH_NOFILE_HARD:-65536}  NGINX_SERVER_NAME: ${NGINX_SERVER_NAME:-_}  NGINX_HTTPS_ENABLED: ${NGINX_HTTPS_ENABLED:-false}  NGINX_PORT: ${NGINX_PORT:-80}  NGINX_SSL_PORT: ${NGINX_SSL_PORT:-443}  NGINX_SSL_CERT_FILENAME: ${NGINX_SSL_CERT_FILENAME:-dify.crt}  NGINX_SSL_CERT_KEY_FILENAME: ${NGINX_SSL_CERT_KEY_FILENAME:-dify.key}  NGINX_SSL_PROTOCOLS: ${NGINX_SSL_PROTOCOLS:-TLSv1.1 TLSv1.2 TLSv1.3}  NGINX_WORKER_PROCESSES: ${NGINX_WORKER_PROCESSES:-auto}  NGINX_CLIENT_MAX_BODY_SIZE: ${NGINX_CLIENT_MAX_BODY_SIZE:-15M}  NGINX_KEEPALIVE_TIMEOUT: ${NGINX_KEEPALIVE_TIMEOUT:-65}  NGINX_PROXY_READ_TIMEOUT: ${NGINX_PROXY_READ_TIMEOUT:-3600s}  NGINX_PROXY_SEND_TIMEOUT: ${NGINX_PROXY_SEND_TIMEOUT:-3600s}  NGINX_ENABLE_CERTBOT_CHALLENGE: ${NGINX_ENABLE_CERTBOT_CHALLENGE:-false}  CERTBOT_EMAIL: ${CERTBOT_EMAIL:-your_email@example.com}  CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-your_domain.com}  CERTBOT_OPTIONS: ${CERTBOT_OPTIONS:-}  SSRF_HTTP_PORT: ${SSRF_HTTP_PORT:-3128}  SSRF_COREDUMP_DIR: ${SSRF_COREDUMP_DIR:-/var/spool/squid}  SSRF_REVERSE_PROXY_PORT: ${SSRF_REVERSE_PROXY_PORT:-8194}  SSRF_SANDBOX_HOST: ${SSRF_SANDBOX_HOST:-sandbox}  SSRF_DEFAULT_TIME_OUT: ${SSRF_DEFAULT_TIME_OUT:-5}  SSRF_DEFAULT_CONNECT_TIME_OUT: 
${SSRF_DEFAULT_CONNECT_TIME_OUT:-5}  SSRF_DEFAULT_READ_TIME_OUT: ${SSRF_DEFAULT_READ_TIME_OUT:-5}  SSRF_DEFAULT_WRITE_TIME_OUT: ${SSRF_DEFAULT_WRITE_TIME_OUT:-5}  EXPOSE_NGINX_PORT: ${EXPOSE_NGINX_PORT:-80}  EXPOSE_NGINX_SSL_PORT: ${EXPOSE_NGINX_SSL_PORT:-443}  POSITION_TOOL_PINS: ${POSITION_TOOL_PINS:-}  POSITION_TOOL_INCLUDES: ${POSITION_TOOL_INCLUDES:-}  POSITION_TOOL_EXCLUDES: ${POSITION_TOOL_EXCLUDES:-}  POSITION_PROVIDER_PINS: ${POSITION_PROVIDER_PINS:-}  POSITION_PROVIDER_INCLUDES: ${POSITION_PROVIDER_INCLUDES:-}  POSITION_PROVIDER_EXCLUDES: ${POSITION_PROVIDER_EXCLUDES:-}  CSP_WHITELIST: ${CSP_WHITELIST:-}  CREATE_TIDB_SERVICE_JOB_ENABLED: ${CREATE_TIDB_SERVICE_JOB_ENABLED:-false}  MAX_SUBMIT_COUNT: ${MAX_SUBMIT_COUNT:-100}  TOP_K_MAX_VALUE: ${TOP_K_MAX_VALUE:-10}  DB_PLUGIN_DATABASE: ${DB_PLUGIN_DATABASE:-dify_plugin}  EXPOSE_PLUGIN_DAEMON_PORT: ${EXPOSE_PLUGIN_DAEMON_PORT:-5002}  PLUGIN_DAEMON_PORT: ${PLUGIN_DAEMON_PORT:-5002}  PLUGIN_DAEMON_KEY: ${PLUGIN_DAEMON_KEY:-lYkiYYT6owG+71oLerGzA7GXCgOT++6ovaezWAjpCjf+Sjc3ZtU+qUEi}  PLUGIN_DAEMON_URL: ${PLUGIN_DAEMON_URL:-http://plugin_daemon:5002}  PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}  PLUGIN_PPROF_ENABLED: ${PLUGIN_PPROF_ENABLED:-false}  PLUGIN_DEBUGGING_HOST: ${PLUGIN_DEBUGGING_HOST:-0.0.0.0}  PLUGIN_DEBUGGING_PORT: ${PLUGIN_DEBUGGING_PORT:-5003}  EXPOSE_PLUGIN_DEBUGGING_HOST: ${EXPOSE_PLUGIN_DEBUGGING_HOST:-localhost}  EXPOSE_PLUGIN_DEBUGGING_PORT: ${EXPOSE_PLUGIN_DEBUGGING_PORT:-5003}  PLUGIN_DIFY_INNER_API_KEY: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}  PLUGIN_DIFY_INNER_API_URL: ${PLUGIN_DIFY_INNER_API_URL:-http://api:5001}  ENDPOINT_URL_TEMPLATE: ${ENDPOINT_URL_TEMPLATE:-http://localhost/e/{hook_id}}  MARKETPLACE_ENABLED: ${MARKETPLACE_ENABLED:-true}  MARKETPLACE_API_URL: ${MARKETPLACE_API_URL:-https://marketplace.dify.ai}  FORCE_VERIFYING_SIGNATURE: ${FORCE_VERIFYING_SIGNATURE:-true}services:  # API service  api:    image: docker.1ms.run/langgenius/dify-api:latest    restart: always    environment:      # Use the shared environment variables.      <<: *shared-api-worker-env      # Startup mode, 'api' starts the API server.      MODE: api      SENTRY_DSN: ${API_SENTRY_DSN:-}      SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0}      SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0}      PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}      INNER_API_KEY_FOR_PLUGIN: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}    depends_on:      - db      - redis    volumes:      # Mount the storage directory to the container, for storing user files.      - ./volumes/app/storage:/app/api/storage    networks:      - ssrf_proxy_network      - default  # worker service  # The Celery worker for processing the queue.  worker:    image: docker.1ms.run/langgenius/dify-api:latest    restart: always    environment:      # Use the shared environment variables.      <<: *shared-api-worker-env      # Startup mode, 'worker' starts the Celery worker for processing the queue.      
MODE: worker      SENTRY_DSN: ${API_SENTRY_DSN:-}      SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0}      SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0}      PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}      INNER_API_KEY_FOR_PLUGIN: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}    depends_on:      - db      - redis    volumes:      # Mount the storage directory to the container, for storing user files.      - ./volumes/app/storage:/app/api/storage    networks:      - ssrf_proxy_network      - default  # Frontend web application.  web:    image: docker.1ms.run/langgenius/dify-web:latest    restart: always    environment:      CONSOLE_API_URL: ${CONSOLE_API_URL:-}      APP_API_URL: ${APP_API_URL:-}      SENTRY_DSN: ${WEB_SENTRY_DSN:-}      NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0}      TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000}      CSP_WHITELIST: ${CSP_WHITELIST:-}      MARKETPLACE_API_URL: ${MARKETPLACE_API_URL:-https://marketplace.dify.ai}      MARKETPLACE_URL: ${MARKETPLACE_URL:-https://marketplace.dify.ai}      TOP_K_MAX_VALUE: ${TOP_K_MAX_VALUE:-}      INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: ${INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH:-}  # The postgres database.  db:    image: docker.1ms.run/library/postgres:15-alpine    restart: always    environment:      PGUSER: ${PGUSER:-postgres}      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-difyai123456}      POSTGRES_DB: ${POSTGRES_DB:-dify}      PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata}    command: >      postgres -c 'max_connections=${POSTGRES_MAX_CONNECTIONS:-100}'               -c 'shared_buffers=${POSTGRES_SHARED_BUFFERS:-128MB}'               -c 'work_mem=${POSTGRES_WORK_MEM:-4MB}'               -c 'maintenance_work_mem=${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}'               -c 'effective_cache_size=${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}'    volumes:      - ./volumes/db/data:/var/lib/postgresql/data    healthcheck:      test: [ 'CMD', 'pg_isready' ]      interval: 1s      timeout: 3s      retries: 30    ports:      - '${EXPOSE_DB_PORT:-5432}:5432'  # The redis cache.  redis:    image: docker.1ms.run/library/redis:6-alpine    restart: always    environment:      REDISCLI_AUTH: ${REDIS_PASSWORD:-difyai123456}    volumes:      # Mount the redis data directory to the container.      - ./volumes/redis/data:/data    # Set the redis password when startup redis server.    command: redis-server --requirepass ${REDIS_PASSWORD:-difyai123456}    healthcheck:      test: [ 'CMD', 'redis-cli', 'ping' ]  # The DifySandbox  sandbox:    image: docker.1ms.run/langgenius/dify-sandbox:0.2.10    restart: always    environment:      # The DifySandbox configurations      # Make sure you are changing this key for your deployment with a strong key.      # You can generate a strong key using `openssl rand -base64 42`.      
API_KEY: ${SANDBOX_API_KEY:-dify-sandbox}      GIN_MODE: ${SANDBOX_GIN_MODE:-release}      WORKER_TIMEOUT: ${SANDBOX_WORKER_TIMEOUT:-15}      ENABLE_NETWORK: ${SANDBOX_ENABLE_NETWORK:-true}      HTTP_PROXY: ${SANDBOX_HTTP_PROXY:-http://ssrf_proxy:3128}      HTTPS_PROXY: ${SANDBOX_HTTPS_PROXY:-http://ssrf_proxy:3128}      SANDBOX_PORT: ${SANDBOX_PORT:-8194}    volumes:      - ./volumes/sandbox/dependencies:/dependencies    healthcheck:      test: [ 'CMD', 'curl', '-f', 'http://localhost:8194/health' ]    networks:      - ssrf_proxy_network  # plugin daemon  plugin_daemon:    image: docker.1ms.run/langgenius/dify-plugin-daemon:0.0.1-local    restart: always    environment:      # Use the shared environment variables.      <<: *shared-api-worker-env      DB_DATABASE: ${DB_PLUGIN_DATABASE:-dify_plugin}      SERVER_PORT: ${PLUGIN_DAEMON_PORT:-5002}      SERVER_KEY: ${PLUGIN_DAEMON_KEY:-lYkiYYT6owG+71oLerGzA7GXCgOT++6ovaezWAjpCjf+Sjc3ZtU+qUEi}      MAX_PLUGIN_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}      PPROF_ENABLED: ${PLUGIN_PPROF_ENABLED:-false}      DIFY_INNER_API_URL: ${PLUGIN_DIFY_INNER_API_URL:-http://api:5001}      DIFY_INNER_API_KEY: ${INNER_API_KEY_FOR_PLUGIN:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}      PLUGIN_REMOTE_INSTALLING_HOST: ${PLUGIN_REMOTE_INSTALL_HOST:-0.0.0.0}      PLUGIN_REMOTE_INSTALLING_PORT: ${PLUGIN_REMOTE_INSTALL_PORT:-5003}      PLUGIN_WORKING_PATH: ${PLUGIN_WORKING_PATH:-/app/storage/cwd}      FORCE_VERIFYING_SIGNATURE: ${FORCE_VERIFYING_SIGNATURE:-true}    ports:      - "${EXPOSE_PLUGIN_DEBUGGING_PORT:-5003}:${PLUGIN_DEBUGGING_PORT:-5003}"    volumes:      - ./volumes/plugin_daemon:/app/storage  # ssrf_proxy server  # for more information, please refer to  # https://docs.dify.ai/learn-more/faq/install-faq#id-18.-why-is-ssrf_proxy-needed  ssrf_proxy:    image: docker.1ms.run/ubuntu/squid:latest    restart: always    volumes:      - ./ssrf_proxy/squid.conf.template:/etc/squid/squid.conf.template      - ./ssrf_proxy/docker-entrypoint.sh:/docker-entrypoint-mount.sh    entrypoint: [ 'sh', '-c', "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ]    environment:      # pls clearly modify the squid env vars to fit your network environment.      HTTP_PORT: ${SSRF_HTTP_PORT:-3128}      COREDUMP_DIR: ${SSRF_COREDUMP_DIR:-/var/spool/squid}      REVERSE_PROXY_PORT: ${SSRF_REVERSE_PROXY_PORT:-8194}      SANDBOX_HOST: ${SSRF_SANDBOX_HOST:-sandbox}      SANDBOX_PORT: ${SANDBOX_PORT:-8194}    networks:      - ssrf_proxy_network      - default  # Certbot service  # use `docker-compose --profile certbot up` to start the certbot service.  certbot:    image: docker.1ms.run/certbot/certbot    profiles:      - certbot    volumes:      - ./volumes/certbot/conf:/etc/letsencrypt      - ./volumes/certbot/www:/var/www/html      - ./volumes/certbot/logs:/var/log/letsencrypt      - ./volumes/certbot/conf/live:/etc/letsencrypt/live      - ./certbot/update-cert.template.txt:/update-cert.template.txt      - ./certbot/docker-entrypoint.sh:/docker-entrypoint.sh    environment:      - CERTBOT_EMAIL=${CERTBOT_EMAIL}      - CERTBOT_DOMAIN=${CERTBOT_DOMAIN}      - CERTBOT_OPTIONS=${CERTBOT_OPTIONS:-}    entrypoint: [ '/docker-entrypoint.sh' ]    command: [ 'tail', '-f', '/dev/null' ]  # The nginx reverse proxy.  # used for reverse proxying the API service and Web service.  
nginx:    image: docker.1ms.run/library/nginx:latest    restart: always    volumes:      - ./nginx/nginx.conf.template:/etc/nginx/nginx.conf.template      - ./nginx/proxy.conf.template:/etc/nginx/proxy.conf.template      - ./nginx/https.conf.template:/etc/nginx/https.conf.template      - ./nginx/conf.d:/etc/nginx/conf.d      - ./nginx/docker-entrypoint.sh:/docker-entrypoint-mount.sh      - ./nginx/ssl:/etc/ssl # cert dir (legacy)      - ./volumes/certbot/conf/live:/etc/letsencrypt/live # cert dir (with certbot container)      - ./volumes/certbot/conf:/etc/letsencrypt      - ./volumes/certbot/www:/var/www/html    entrypoint: [ 'sh', '-c', "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ]    environment:      NGINX_SERVER_NAME: ${NGINX_SERVER_NAME:-_}      NGINX_HTTPS_ENABLED: ${NGINX_HTTPS_ENABLED:-false}      NGINX_SSL_PORT: ${NGINX_SSL_PORT:-443}      NGINX_PORT: ${NGINX_PORT:-80}      # You're required to add your own SSL certificates/keys to the `./nginx/ssl` directory      # and modify the env vars below in .env if HTTPS_ENABLED is true.      NGINX_SSL_CERT_FILENAME: ${NGINX_SSL_CERT_FILENAME:-dify.crt}      NGINX_SSL_CERT_KEY_FILENAME: ${NGINX_SSL_CERT_KEY_FILENAME:-dify.key}      NGINX_SSL_PROTOCOLS: ${NGINX_SSL_PROTOCOLS:-TLSv1.1 TLSv1.2 TLSv1.3}      NGINX_WORKER_PROCESSES: ${NGINX_WORKER_PROCESSES:-auto}      NGINX_CLIENT_MAX_BODY_SIZE: ${NGINX_CLIENT_MAX_BODY_SIZE:-15M}      NGINX_KEEPALIVE_TIMEOUT: ${NGINX_KEEPALIVE_TIMEOUT:-65}      NGINX_PROXY_READ_TIMEOUT: ${NGINX_PROXY_READ_TIMEOUT:-3600s}      NGINX_PROXY_SEND_TIMEOUT: ${NGINX_PROXY_SEND_TIMEOUT:-3600s}      NGINX_ENABLE_CERTBOT_CHALLENGE: ${NGINX_ENABLE_CERTBOT_CHALLENGE:-false}      CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-}    depends_on:      - api      - web    ports:      - '${EXPOSE_NGINX_PORT:-80}:${NGINX_PORT:-80}'      - '${EXPOSE_NGINX_SSL_PORT:-443}:${NGINX_SSL_PORT:-443}'  # The Weaviate vector store.  weaviate:    image: docker.1ms.run/semitechnologies/weaviate:1.19.0    profiles:      - ''      - weaviate    restart: always    volumes:      # Mount the Weaviate data directory to the con tainer.      - ./volumes/weaviate:/var/lib/weaviate    environment:      # The Weaviate configurations      # You can refer to the [Weaviate](https://weaviate.io/developers/weaviate/config-refs/env-vars) documentation for more information.      PERSISTENCE_DATA_PATH: ${WEAVIATE_PERSISTENCE_DATA_PATH:-/var/lib/weaviate}      QUERY_DEFAULTS_LIMIT: ${WEAVIATE_QUERY_DEFAULTS_LIMIT:-25}      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: ${WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED:-false}      DEFAULT_VECTORIZER_MODULE: ${WEAVIATE_DEFAULT_VECTORIZER_MODULE:-none}      CLUSTER_HOSTNAME: ${WEAVIATE_CLUSTER_HOSTNAME:-node1}      AUTHENTICATION_APIKEY_ENABLED: ${WEAVIATE_AUTHENTICATION_APIKEY_ENABLED:-true}      AUTHENTICATION_APIKEY_ALLOWED_KEYS: ${WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}      AUTHENTICATION_APIKEY_USERS: ${WEAVIATE_AUTHENTICATION_APIKEY_USERS:-hello@dify.ai}      AUTHORIZATION_ADMINLIST_ENABLED: ${WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED:-true}      AUTHORIZATION_ADMINLIST_USERS: ${WEAVIATE_AUTHORIZATION_ADMINLIST_USERS:-hello@dify.ai}  # Qdrant vector store.  # (if used, you need to set VECTOR_STORE to qdrant in the api & worker service.)  
qdrant:    image: docker.1ms.run/langgenius/qdrant:v1.7.3    profiles:      - qdrant    restart: always    volumes:      - ./volumes/qdrant:/qdrant/storage    environment:      QDRANT_API_KEY: ${QDRANT_API_KEY:-difyai123456}  # The Couchbase vector store.  couchbase-server:    build: ./couchbase-server    profiles:      - couchbase    restart: always    environment:      - CLUSTER_NAME=dify_search      - COUCHBASE_ADMINISTRATOR_USERNAME=${COUCHBASE_USER:-Administrator}      - COUCHBASE_ADMINISTRATOR_PASSWORD=${COUCHBASE_PASSWORD:-password}      - COUCHBASE_BUCKET=${COUCHBASE_BUCKET_NAME:-Embeddings}      - COUCHBASE_BUCKET_RAMSIZE=512      - COUCHBASE_RAM_SIZE=2048      - COUCHBASE_EVENTING_RAM_SIZE=512      - COUCHBASE_INDEX_RAM_SIZE=512      - COUCHBASE_FTS_RAM_SIZE=1024    hostname: couchbase-server    container_name: couchbase-server    working_dir: /opt/couchbase    stdin_open: true    tty: true    entrypoint: [ "" ]    command: sh -c "/opt/couchbase/init/init-cbserver.sh"    volumes:      - ./volumes/couchbase/data:/opt/couchbase/var/lib/couchbase/data    healthcheck:      # ensure bucket was created before proceeding      test: [ "CMD-SHELL", "curl -s -f -u Administrator:password http://localhost:8091/pools/default/buckets | grep -q '\\[{' || exit 1" ]      interval: 10s      retries: 10      start_period: 30s      timeout: 10s  # The pgvector vector database.  pgvector:    image: docker.1ms.run/pgvector/pgvector:pg16    profiles:      - pgvector    restart: always    environment:      PGUSER: ${PGVECTOR_PGUSER:-postgres}      # The password for the default postgres user.      POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456}      # The name of the default postgres database.      POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify}      # postgres data directory      PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata}    volumes:      - ./volumes/pgvector/data:/var/lib/postgresql/data    healthcheck:      test: [ 'CMD', 'pg_isready' ]      interval: 1s      timeout: 3s      retries: 30  # pgvecto-rs vector store  pgvecto-rs:    image: docker.1ms.run/tensorchord/pgvecto-rs:pg16-v0.3.0    profiles:      - pgvecto-rs    restart: always    environment:      PGUSER: ${PGVECTOR_PGUSER:-postgres}      # The password for the default postgres user.      POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456}      # The name of the default postgres database.      
POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify}      # postgres data directory      PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata}    volumes:      - ./volumes/pgvecto_rs/data:/var/lib/postgresql/data    healthcheck:      test: [ 'CMD', 'pg_isready' ]      interval: 1s      timeout: 3s      retries: 30  # Chroma vector database  chroma:    image: ghcr.io/chroma-core/chroma:0.5.20    profiles:      - chroma    restart: always    volumes:      - ./volumes/chroma:/chroma/chroma    environment:      CHROMA_SERVER_AUTHN_CREDENTIALS: ${CHROMA_SERVER_AUTHN_CREDENTIALS:-difyai123456}      CHROMA_SERVER_AUTHN_PROVIDER: ${CHROMA_SERVER_AUTHN_PROVIDER:-chromadb.auth.token_authn.TokenAuthenticationServerProvider}      IS_PERSISTENT: ${CHROMA_IS_PERSISTENT:-TRUE}  # OceanBase vector database  oceanbase:    image: quay.io/oceanbase/oceanbase-ce:4.3.3.0-100000142024101215    profiles:      - oceanbase    restart: always    volumes:      - ./volumes/oceanbase/data:/root/ob      - ./volumes/oceanbase/conf:/root/.obd/cluster      - ./volumes/oceanbase/init.d:/root/boot/init.d    environment:      OB_MEMORY_LIMIT: ${OCEANBASE_MEMORY_LIMIT:-6G}      OB_SYS_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456}      OB_TENANT_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456}      OB_CLUSTER_NAME: ${OCEANBASE_CLUSTER_NAME:-difyai}      OB_SERVER_IP: '127.0.0.1'  # Oracle vector database  oracle:    image: container-registry.oracle.com/database/free:latest    profiles:      - oracle    restart: always    volumes:      - source: oradata        type: volume        target: /opt/oracle/oradata      - ./startupscripts:/opt/oracle/scripts/startup    environment:      ORACLE_PWD: ${ORACLE_PWD:-Dify123456}      ORACLE_CHARACTERSET: ${ORACLE_CHARACTERSET:-AL32UTF8}  # Milvus vector database services  etcd:    container_name: milvus-etcd    image: quay.io/coreos/etcd:v3.5.5    profiles:      - milvus    environment:      ETCD_AUTO_COMPACTION_MODE: ${ETCD_AUTO_COMPACTION_MODE:-revision}      ETCD_AUTO_COMPACTION_RETENTION: ${ETCD_AUTO_COMPACTION_RETENTION:-1000}      ETCD_QUOTA_BACKEND_BYTES: ${ETCD_QUOTA_BACKEND_BYTES:-4294967296}      ETCD_SNAPSHOT_COUNT: ${ETCD_SNAPSHOT_COUNT:-50000}    volumes:      - ./volumes/milvus/etcd:/etcd    command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd    healthcheck:      test: [ 'CMD', 'etcdctl', 'endpoint', 'health' ]      interval: 30s      timeout: 20s      retries: 3    networks:      - milvus  minio:    container_name: milvus-minio    image: docker.1ms.run/minio/minio:RELEASE.2023-03-20T20-16-18Z    profiles:      - milvus    environment:      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY:-minioadmin}      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-minioadmin}    volumes:      - ./volumes/milvus/minio:/minio_data    command: minio server /minio_data --console-address ":9001"    healthcheck:      test: [ 'CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live' ]      interval: 30s      timeout: 20s      retries: 3    networks:      - milvus  milvus-standalone:    container_name: milvus-standalone    image: docker.1ms.run/milvusdb/milvus:v2.5.0-beta    profiles:      - milvus    command: [ 'milvus', 'run', 'standalone' ]    environment:      ETCD_ENDPOINTS: ${ETCD_ENDPOINTS:-etcd:2379}      MINIO_ADDRESS: ${MINIO_ADDRESS:-minio:9000}      common.security.authorizationEnabled: ${MILVUS_AUTHORIZATION_ENABLED:-true}    volumes:      - ./volumes/milvus/milvus:/var/lib/milvus    healthcheck:      test: [ 'CMD', 'curl', '-f', 
'http://localhost:9091/healthz' ]      interval: 30s      start_period: 90s      timeout: 20s      retries: 3    depends_on:      - etcd      - minio    ports:      - 19530:19530      - 9091:9091    networks:      - milvus  # Opensearch vector database  opensearch:    container_name: opensearch    image: docker.1ms.run/opensearchproject/opensearch:latest    profiles:      - opensearch    environment:      discovery.type: ${OPENSEARCH_DISCOVERY_TYPE:-single-node}      bootstrap.memory_lock: ${OPENSEARCH_BOOTSTRAP_MEMORY_LOCK:-true}      OPENSEARCH_JAVA_OPTS: -Xms${OPENSEARCH_JAVA_OPTS_MIN:-512m} -Xmx${OPENSEARCH_JAVA_OPTS_MAX:-1024m}      OPENSEARCH_INITIAL_ADMIN_PASSWORD: ${OPENSEARCH_INITIAL_ADMIN_PASSWORD:-Qazwsxedc!@#123}    ulimits:      memlock:        soft: ${OPENSEARCH_MEMLOCK_SOFT:--1}        hard: ${OPENSEARCH_MEMLOCK_HARD:--1}      nofile:        soft: ${OPENSEARCH_NOFILE_SOFT:-65536}        hard: ${OPENSEARCH_NOFILE_HARD:-65536}    volumes:      - ./volumes/opensearch/data:/usr/share/opensearch/data    networks:      - opensearch-net  opensearch-dashboards:    container_name: opensearch-dashboards    image: opensearchproject/opensearch-dashboards:latest    profiles:      - opensearch    environment:      OPENSEARCH_HOSTS: '["https://opensearch:9200"]'    volumes:      - ./volumes/opensearch/opensearch_dashboards.yml:/usr/share/opensearch-dashboards/config/opensearch_dashboards.yml    networks:      - opensearch-net    depends_on:      - opensearch  # MyScale vector database  myscale:    container_name: myscale    image: myscale/myscaledb:1.6.4    profiles:      - myscale    restart: always    tty: true    volumes:      - ./volumes/myscale/data:/var/lib/clickhouse      - ./volumes/myscale/log:/var/log/clickhouse-server      - ./volumes/myscale/config/users.d/custom_users_config.xml:/etc/clickhouse-server/users.d/custom_users_config.xml    ports:      - ${MYSCALE_PORT:-8123}:${MYSCALE_PORT:-8123}  # https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html  # https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-prod-prerequisites  elasticsearch:    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.3    container_name: elasticsearch    profiles:      - elasticsearch      - elasticsearch-ja    restart: always    volumes:      - ./elasticsearch/docker-entrypoint.sh:/docker-entrypoint-mount.sh      - dify_es01_data:/usr/share/elasticsearch/data    environment:      ELASTIC_PASSWORD: ${ELASTICSEARCH_PASSWORD:-elastic}      VECTOR_STORE: ${VECTOR_STORE:-}      cluster.name: dify-es-cluster      node.name: dify-es0      discovery.type: single-node      xpack.license.self_generated.type: basic      xpack.security.enabled: 'true'      xpack.security.enrollment.enabled: 'false'      xpack.security.http.ssl.enabled: 'false'    ports:      - ${ELASTICSEARCH_PORT:-9200}:9200    deploy:      resources:        limits:          memory: 2g    entrypoint: [ 'sh', '-c', "sh /docker-entrypoint-mount.sh" ]    healthcheck:      test: [ 'CMD', 'curl', '-s', 'http://localhost:9200/_cluster/health?pretty' ]      interval: 30s      timeout: 10s      retries: 50  # https://www.elastic.co/guide/en/kibana/current/docker.html  # https://www.elastic.co/guide/en/kibana/current/settings.html  kibana:    image: docker.elastic.co/kibana/kibana:8.14.3    container_name: kibana    profiles:      - elasticsearch    depends_on:      - elasticsearch    restart: always    environment:      XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: 
d1a66dfd-c4d3-4a0a-8290-2abcb83ab3aa      NO_PROXY: localhost,127.0.0.1,elasticsearch,kibana      XPACK_SECURITY_ENABLED: 'true'      XPACK_SECURITY_ENROLLMENT_ENABLED: 'false'      XPACK_SECURITY_HTTP_SSL_ENABLED: 'false'      XPACK_FLEET_ISAIRGAPPED: 'true'      I18N_LOCALE: zh-CN      SERVER_PORT: '5601'      ELASTICSEARCH_HOSTS: http://elasticsearch:9200    ports:      - ${KIBANA_PORT:-5601}:5601    healthcheck:      test: [ 'CMD-SHELL', 'curl -s http://localhost:5601 >/dev/null || exit 1' ]      interval: 30s      timeout: 10s      retries: 3  # unstructured .  # (if used, you need to set ETL_TYPE to Unstructured in the api & worker service.)  unstructured:    image: downloads.unstructured.io/unstructured-io/unstructured-api:latest    profiles:      - unstructured    restart: always    volumes:      - ./volumes/unstructured:/app/datanetworks:  # create a network between sandbox, api and ssrf_proxy, and can not access outside.  ssrf_proxy_network:    driver: bridge    internal: true  milvus:    driver: bridge  opensearch-net:    driver: bridge    internal: truevolumes:  oradata:  dify_es01_data:
