
Quick Local Database Deployment: A Collection of Docker Compose Configurations

From the given file information we can extract knowledge points about Docker, Docker Compose, and several database technologies. The details follow.

### Overview of Docker and Docker Compose

Docker is an open-source application container engine that lets developers package an application together with its dependencies into a portable container and run it on any popular Linux machine; it can also serve as a lightweight form of virtualization. Containers are fully sandboxed and expose no interfaces to one another (similar to apps on an iPhone).

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, the application's services are described in a YAML file; a single command then creates and starts all of the services, which greatly simplifies deploying and managing containerized applications.

### Docker Compose files

A Docker Compose file is normally named `docker-compose.yml` or `docker-compose.yaml` and defines the services, networks, volumes, and other configuration of the application. The resource described here is a collection of such Compose files, so users can pick the configuration they need to start a particular type of local database service.

### Supported database technologies

The tags list several database technologies, including MySQL, PostgreSQL, MongoDB, MariaDB, and FaunaDB, which indicates that the collection contains a service configuration for each of them.

#### MySQL

MySQL is a popular relational database management system widely used in web applications. With Docker and Docker Compose, a MySQL container can be created and the database service started in moments.

#### PostgreSQL

PostgreSQL is a powerful open-source object-relational database system with a broad feature set, including complex queries, foreign keys, triggers, views, and transactional integrity. A Compose configuration can include a PostgreSQL service for development and testing.

#### MongoDB

MongoDB is a document-oriented database that offers high performance, high availability, and easy scalability. A Compose file lets developers spin up a MongoDB environment quickly.

#### MariaDB

MariaDB is a fork of MySQL created to keep the database open source. It is maintained by the original MySQL developers and remains highly compatible with MySQL. With Docker Compose, MariaDB instances are easy to deploy and manage.

#### FaunaDB

FaunaDB is a modern, horizontally scalable database designed to provide globally distributed, reliable data storage. Although it is less common than the other databases listed, the presence of a Compose file for it shows that users also want to deploy and test FaunaDB locally.

### About the "local-docker-db-master" file listing

The name "local-docker-db-master" suggests that the archive contains multiple Docker Compose files organized under a top-level "master" directory, most likely with one subdirectory per database. For example, a subdirectory named "mysql" would hold a Compose file tailored to MySQL, and so on. After unpacking the archive, users can change into the subdirectory they need and start the corresponding database service; a sketch of what such a file might look like is given at the end of this article.

### Summary

The file information points to a collection of Docker Compose configurations intended to help developers start local database services quickly. With this collection, users can test and develop their applications without worrying about complicated database installation and configuration. Relational databases such as MySQL and PostgreSQL, the document database MongoDB, MariaDB, and the more modern FaunaDB are all covered by Compose files, which makes setting up development and test environments noticeably faster.
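
### Example: a minimal per-database Compose file

The contents of the archive are not reproduced here, so the following is only a minimal sketch of what a per-database Compose file in such a collection typically looks like; the `mysql:8.0` image tag, the credentials, the container name, and the volume name are illustrative assumptions rather than values taken from local-docker-db-master. Placed in a subdirectory such as `mysql/docker-compose.yml`, it can be started with `docker compose up -d` from that directory and stopped with `docker compose down` (add `-v` to also remove the data volume).

```yaml
# Hypothetical mysql/docker-compose.yml — illustrative values, not from the archive
services:
  mysql:
    image: mysql:8.0              # assumed image tag
    container_name: local-mysql
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: change-me   # placeholder credential, change before use
      MYSQL_DATABASE: app_db
    ports:
      - "3306:3306"               # expose MySQL on the host for local development
    volumes:
      - mysql_data:/var/lib/mysql # persist data across container restarts

volumes:
  mysql_data:
```

Equivalent files for PostgreSQL, MongoDB, MariaDB, or FaunaDB would follow the same pattern, differing only in the image, the environment variables, and the data directory mounted as a volume.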
