Background:
These are notes on problems I ran into while taking over the test platform about four and a half years ago. Since the platform is now being handed over to the Chengdu team, the notes are being reused as-is; this page keeps a record of them.
I. US-East test server
1. Main server host
This machine is the US-East server.
Instance spec: t5.xlarge, 4 CPU cores, 8 GB RAM.
- Main program deployment directory: /home/odin-ws
- Directory where kenzo is checked out on each server-side API automation run: /data/odin-ws_archive/kenzo_space/
- The odin-ws/deploy.sh deploy script contains the matching odin_ws_archive configuration
2. Tech stack
odin-ws backend: python + flask + celery + redis + mysql + docker
API automation: python + pytest + allure
Frontend: vue + js
II. Deploying to the US-East server
1. Update the odin-ws code
2. Start the firewall
The firewall must be running while the install steps execute; otherwise the dependency packages cannot be downloaded and the deployment fails (on this host, firewalld appears to supply the rules the docker build containers need for outbound access; see tip 5 in section IV).
# Check the firewall status. It should be inactive at this point
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# sudo systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Mon 2023-08-14 17:18:23 CST; 1h 12min ago
     Docs: man:firewalld(1)
  Process: 25525 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 25525 (code=exited, status=0/SUCCESS)

Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 systemd[1]: Starting firewalld - dynamic firewall daemon...
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 systemd[1]: Started firewalld - dynamic firewall daemon.
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 firewalld[25525]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future releas...ling it now.
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 firewalld[25525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a ma...hat chain?).
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 firewalld[25525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a ma...hat chain?).
Aug 14 17:18:23 liveme-qa-odin-ws-use1a-1 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Aug 14 17:18:24 liveme-qa-odin-ws-use1a-1 systemd[1]: Stopped firewalld - dynamic firewall daemon.
Hint: Some lines were ellipsized, use -l to show in full.

# Start the firewall
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# sudo systemctl start firewalld
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# sudo systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2023-08-14 17:14:43 CST; 4s ago
     Docs: man:firewalld(1)
 Main PID: 25525 (firewalld)
    Tasks: 2
   Memory: 21.9M
   CGroup: /system.slice/firewalld.service
           └─25525 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid

Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 systemd[1]: Starting firewalld - dynamic firewall daemon...
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 systemd[1]: Started firewalld - dynamic firewall daemon.
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 firewalld[25525]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future releas...ling it now.
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 firewalld[25525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a ma...hat chain?).
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 firewalld[25525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a ma...hat chain?).
Hint: Some lines were ellipsized, use -l to show in full.
3. Deploy the odin-ws service
# Check the version tag of the currently running odin-ws container
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ed4b2ed3780 odin-ws:35 "gunicorn --config..." About an hour ago Up About an hour odin-ws
14ecb697e472 ae28c0361d82 "/bin/sh -c 'pip3 ..." About an hour ago Exited (1) About an hour ago pensive_lamarr
b0d9e02a2d0b odin-celery "celery -A manage:..." 11 days ago Exited (0) 11 days ago odin-celery
b845942370e6 redis "docker-entrypoint..." 12 days ago Exited (1) 12 days ago naughty_yalow
2b65046f05bd 43a198b3e303 "/bin/sh -c 'apt-k..." 2 months ago Exited (0) 11 days ago romantic_kilby
# Deploy the odin-ws program via the deploy.sh script, bumping the running version by 1; e.g. if the running tag is 35, deploy the new version as 36
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# sh deploy.sh 36
......
Successfully installed Deprecated-1.2.13 Flask-2.0.2 Flask-Cors-3.0.10 Flask-SQLAlchemy-2.4.1 Flask-SocketIO-5.1.1 GitPython-3.1.26 Jinja2-3.0.3 MarkupSafe-2.0.1 Naked-0.1.31 PyMySQL-1.0.2 PyYAML-6.0 SQLAlchemy-1.3.24 Werkzeug-2.0.2 addict-2.4.0 amqp-5.0.9 attrs-21.4.0 bidict-0.21.4 billiard-3.6.4.0 cached-property-1.5.2 celery-5.2.3 certifi-2021.10.8 charset-normalizer-2.0.11 click-8.0.3 click-didyoumean-0.3.0 click-plugins-1.1.1 click-repl-0.2.0 dnspython-1.16.0 eventlet-0.30.2 execnet-1.9.0 gevent-21.12.0 gevent-websocket-0.10.1 gitdb-4.0.9 greenlet-1.1.2 gunicorn-20.1.0 idna-3.3 importlib-metadata-4.10.1 iniconfig-1.1.1 itsdangerous-2.0.1 kombu-5.2.3 packaging-21.3 passlib-1.7.4 pluggy-1.0.0 prompt-toolkit-3.0.26 protobuf-3.19.4 py-1.11.0 pycryptodome-3.9.8 pyparsing-3.0.7 pytest-7.0.0 pytest-forked-1.4.0 pytest-xdist-2.5.0 python-engineio-4.3.1 python-socketio-5.5.1 pytz-2021.3 redis-3.5.3 requests-2.27.1 setuptools-59.6.0 shellescape-3.8.1 six-1.16.0 smmap-5.0.0 tomli-2.0.0 typing_extensions-4.0.1 urllib3-1.26.8 vine-5.0.0 wcwidth-0.2.5 wheel-0.37.1 wrapt-1.13.3 yacs-0.1.8 zipp-3.7.0 zope.event-4.5.0 zope.interface-5.4.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
 ---> e2d7c548e6de
Removing intermediate container 72ddfea1b27e
Step 11/17 : EXPOSE 8899
 ---> Running in 7996170d26f3
 ---> 01bdefd78cc0
Removing intermediate container 7996170d26f3
Step 12/17 : ENV FLASK_CONFIG prod
 ---> Running in d5b5cc06359f
 ---> b053c0d89cde
Removing intermediate container d5b5cc06359f
Step 13/17 : ENV KENZO_ENV server
 ---> Running in 65b7450f2b24
 ---> 104ec0180db1
Removing intermediate container 65b7450f2b24
Step 14/17 : ENV JAVA_HOME /home/odin-ws/jdk1.8
 ---> Running in 79cb6370ff08
 ---> a85afc456161
Removing intermediate container 79cb6370ff08
Step 15/17 : ENV PATH $JAVA_HOME/bin:$PATH
 ---> Running in 90a41cdecf74
 ---> eac782b72a05
Removing intermediate container 90a41cdecf74
Step 16/17 : ENV CLASSPATH .:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
 ---> Running in ead9877fac19
 ---> 0884bf118cf1
Removing intermediate container ead9877fac19
Step 17/17 : CMD gunicorn --config /home/odin-ws/gunicorn_config.py manage:app
 ---> Running in 94733a698982
 ---> 514a34388744
Removing intermediate container 94733a698982
Successfully built 514a34388744
build odin-ws image finished!
5.run odin-ws container···
3b865956d033791f4468efdf6a17b90b0fa6a1bc27920c13187b66b34b2a9b48
6.check the running odin-ws container···
3b865956d033 odin-ws:36 "gunicorn --config..." Less than a second ago Up Less than a second odin-ws
run container done!
success!!!
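The "current tag + 1" rule in the deploy step can be scripted so nobody has to eyeball `docker ps`. A minimal sketch (the helper name and the use of the `docker ps` IMAGE column as input are my own, not part of deploy.sh):

```python
import re

def next_odin_ws_tag(ps_images, image="odin-ws"):
    """Given the IMAGE column of `docker ps` (e.g. ["odin-ws:35", "redis"]),
    return the running tag + 1 for the next deploy."""
    for entry in ps_images:
        m = re.fullmatch(re.escape(image) + r":(\d+)", entry.strip())
        if m:
            return int(m.group(1)) + 1
    raise RuntimeError(f"no running {image} container found")

# With the `docker ps -a` output above this would return 36, matching
# the manual invocation `sh deploy.sh 36`.
```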
4. Stop the firewall
The odin frontend can only reach the odin-ws API once the firewall is stopped.
# Stop the firewall
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# sudo systemctl stop firewalld
# Check the firewall status: it must show inactive
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# sudo systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Mon 2023-08-14 17:08:35 CST; 3min 23s ago
     Docs: man:firewalld(1)
  Process: 24140 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 24140 (code=exited, status=0/SUCCESS)
5. Enter the docker container and start celery
# Find the container id of the redeployed odin-ws service
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ed4b2ed3780 odin-ws:35 "gunicorn --config..." 2 minutes ago Up 2 minutes odin-ws
14ecb697e472 ae28c0361d82 "/bin/sh -c 'pip3 ..." 18 minutes ago Exited (1) 15 minutes ago pensive_lamarr
b0d9e02a2d0b odin-celery "celery -A manage:..." 11 days ago Exited (0) 11 days ago odin-celery
b845942370e6 redis "docker-entrypoint..." 11 days ago Exited (1) 11 days ago naughty_yalow
2b65046f05bd 43a198b3e303 "/bin/sh -c 'apt-k..." 2 months ago Exited (0) 11 days ago romantic_kilby
# Enter the odin-ws container and start celery
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# docker exec -it 4ed4b2ed3780 /bin/bash
root@liveme-qa-odin-ws-use1a-1:/home/odin-ws# celery -A manage.celery worker -l INFO
# Open a new shell window and run the commands below
[liuxiaomei@liveme-qa-odin-ws-use1a-1 ~]$ sudo su
[root@liveme-qa-odin-ws-use1a-1 liuxiaomei]# docker exec -it 4ed4b2ed3780 /bin/bash
root@liveme-qa-odin-ws-use1a-1:/home/odin-ws# celery -A manage.celery beat
Server initialized for eventlet.
2023-08-14 17:25:33 - redis_util.py:14 - INFO: cmd=MyRedis.__init__:Redis<ConnectionPool<Connection<host=10.66.100.133,port=6379,db=0>>>
2023-08-14 17:25:33 - __init__.py:29 - INFO: script_list:['scripts.auto_gmail', 'scripts.celery_task', 'scripts.firebase', 'scripts.kenzo_auto_task']
2023-08-14 17:25:33 - redis_util.py:14 - INFO: cmd=MyRedis.__init__:Redis<ConnectionPool<Connection<host=10.66.100.133,port=6379,db=0>>>
6. Verify the kenzo features in odin
- Create a master / online p0 case task on the odin frontend page and check that it executes normally
- Check that the automation project's scheduled tasks execute normally
III. Test platform features
Server-side API automation
Used to schedule and execute the server-side API automation tests.
Frontend code paths:
LiveMeHydra/src/views/kenzo/
LiveMeHydra/src/api/kenzo.js
Backend code paths:
Backend API: odin-ws/application/kenzo/
Async tasks: odin-ws/scripts/kenzo_auto_task.py
Database CRUD: odin-ws/models/dao/kenzo.py
Table definitions: odin-ws/models/kenzo.py
Shared utilities: odin-ws/utils/kenzo_util.py
IV. Useful tips
1. Running cases on the server
The test service is deployed with docker, so to run kenzo cases on the server you must first enter the corresponding docker container:
sudo su
docker ps -a
docker exec -it 2e6a8cea9ead /bin/bash
cd kenzo_space/1644573245/Kenzo
python -m venv venv && source venv/bin/activate && python -m pytest tests/featurelist_all/test_featurelist.py -x -v --durations=0
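The manual steps above can be wrapped in a small helper that builds the `docker exec` invocation (the helper name and argument layout are hypothetical; the flags mirror the pytest command shown above):

```python
def kenzo_exec_argv(container_id, kenzo_dir, test_path):
    """Build the argv that runs one kenzo pytest file inside the container.

    Mirrors the manual steps: cd into the checked-out Kenzo directory,
    create and activate a venv, then run pytest with the same flags.
    """
    inner = (
        f"cd {kenzo_dir} && "
        "python -m venv venv && . venv/bin/activate && "
        f"python -m pytest {test_path} -x -v --durations=0"
    )
    return ["docker", "exec", container_id, "/bin/bash", "-c", inner]

# subprocess.run(kenzo_exec_argv("2e6a8cea9ead",
#                                "kenzo_space/1644573245/Kenzo",
#                                "tests/featurelist_all/test_featurelist.py"))
```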
2. Viewing run logs on the server
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3b865956d033 odin-ws:36 "gunicorn --config..." 4 minutes ago Up 4 minutes odin-ws
14ecb697e472 ae28c0361d82 "/bin/sh -c 'pip3 ..." About an hour ago Exited (1) About an hour ago pensive_lamarr
b0d9e02a2d0b odin-celery "celery -A manage:..." 11 days ago Exited (0) 11 days ago odin-celery
b845942370e6 redis "docker-entrypoint..." 12 days ago Exited (1) 12 days ago naughty_yalow
2b65046f05bd 43a198b3e303 "/bin/sh -c 'apt-k..." 2 months ago Exited (0) 11 days ago romantic_kilby
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# docker logs -f 3b865956d033
Server initialized for eventlet.
2023-08-14 18:32:44 - redis_util.py:14 - INFO: cmd=MyRedis.__init__:Redis<ConnectionPool<Connection<host=10.66.100.133,port=6379,db=0>>>
2023-08-14 18:32:44 - __init__.py:29 - INFO: script_list:['scripts.auto_gmail', 'scripts.celery_task', 'scripts.firebase', 'scripts.kenzo_auto_task']
2023-08-14 18:32:44 - redis_util.py:14 - INFO: cmd=MyRedis.__init__:Redis<ConnectionPool<Connection<host=10.66.100.133,port=6379,db=0>>>
2023-08-14 18:38:26 - views.py:289 - INFO: kenzo/list POSTdata={'page_index': 1, 'page_size': 10, 'start_time': '', 'end_time': '', 'task_type': 'all', 'operator': '', 'case_type': 'all'}
2023-08-14 18:38:42 - views.py:86 - INFO: /kenzo/task POSTdata={'branch': 'master', 'environment': 'online1', 'scope': 'p0', 'cases': '', 'operator': 'liuxiaomei@joyme.sg', 'task_type': 'manual'}
2023-08-14 18:38:42 - views.py:117 - INFO: cmd=KenzoTask.server的值:no server
2023-08-14 18:38:42 - views.py:155 - INFO: cmd=KenzoTask 调用异步任务结果->4413a710-a68a-45a5-8f59-f3964f95cc91
2023-08-14 18:38:42 - views.py:156 - INFO: cmd=------><celery.backends.redis.RedisBackend object at 0x7f3c96c0ef90>
2023-08-14 18:38:42 - views.py:163 - INFO: cmd=KenzoTask.scope入参:p0
2023-08-14 18:38:42 - app.py:1458 - ERROR: Exception on /kenzo/task [POST]
Traceback (most recent call last):
  File "/home/odin-ws/application/kenzo/views.py", line 186, in post
    kenzo.add_kenzo_task(**kwargs)
  File "/home/odin-ws/models/dao/__init__.py", line 26, in inner
    res = func(*args, **kwargs)
  File "/home/odin-ws/models/dao/kenzo.py", line 125, in add_kenzo_task
    task_info = KenzoTask(**kwargs)
TypeError: __init__() missing 5 required positional arguments: 'html_path', 'collected_case_num', 'failed_case_num', 'case_status', and 'failed_case'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2073, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1518, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/site-packages/flask_cors/extension.py", line 165, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1516, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1502, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/usr/local/lib/python3.7/site-packages/flask/views.py", line 84, in view
    return current_app.ensure_sync(self.dispatch_request)(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/flask/views.py", line 158, in dispatch_request
    return current_app.ensure_sync(meth)(*args, **kwargs)
  File "/home/odin-ws/application/kenzo/views.py", line 188, in post
    kenzo.update_kenzo_status(task_id, "FAILED")
  File "/home/odin-ws/models/dao/__init__.py", line 26, in inner
    res = func(*args, **kwargs)
  File "/home/odin-ws/models/dao/kenzo.py", line 147, in update_kenzo_status
    res = KenzoTask.query.filter_by(task_id=task_id).one()
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3500, in one
    raise orm_exc.NoResultFound("No row was found for one()")
sqlalchemy.orm.exc.NoResultFound: No row was found for one()
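The TypeError in the log above is reproducible in isolation: add_kenzo_task builds KenzoTask(**kwargs) from only the fields known at task-creation time, while the constructor also demands the result fields. A minimal stand-in (the classes below are simplifications, not the real SQLAlchemy model):

```python
class KenzoTask:
    """Simplified stand-in: every column is a required constructor argument."""
    def __init__(self, task_id, html_path, collected_case_num,
                 failed_case_num, case_status, failed_case):
        self.task_id = task_id

try:
    KenzoTask(task_id="4413a710")  # only the creation-time field is known
except TypeError as e:
    print(e)  # missing 5 required positional arguments, as in the log above

# One possible fix: give the result fields defaults, so the row can be
# inserted at creation time and filled in when the run finishes.
class KenzoTaskWithDefaults:
    def __init__(self, task_id, html_path="", collected_case_num=0,
                 failed_case_num=0, case_status="PENDING", failed_case=""):
        self.task_id = task_id
```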
3. Write newly installed third-party libraries into requirements.txt
If you install a new third-party library locally, update requirements.txt as well; otherwise the server deployment will fail:
pip freeze > requirements.txt
4. Linux cron job to clean up reports automatically
The API automation runs every day, so the 500G disk fills up after a while. A cron job runs the /data/odin-ws/scripts/deletefile.sh script daily at 17:00 Beijing time.
# Switch to the root account
[liuxiaomei@liveme-qa-odin-ws-use1a-1 scripts]$ sudo su
# Edit the system cron jobs; changes take effect as soon as you save (the job runs daily at 17:00 Beijing time)
[root@liveme-qa-odin-ws-use1a-1 scripts]# crontab -e
# Check the cron daemon status
[root@liveme-qa-odin-ws-use1a-1 scripts]# service crond status
5. odin-ws US-East server startup fails with /simple/addict/ errors
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd779bba310>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/addict/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd779ba4c50>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/addict/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd779bba790>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/addict/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd779bbab50>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/addict/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fd779bbaf10>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/addict/
ERROR: Could not find a version that satisfies the requirement addict==2.4.0 (from versions: none)
ERROR: No matching distribution found for addict==2.4.0
Check the firewall status; it needs to be running. If the firewall is stopped, start it:
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# sudo systemctl start firewalld
[root@liveme-qa-odin-ws-use1a-1 odin-ws]# sudo systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2023-08-14 17:14:43 CST; 4s ago
     Docs: man:firewalld(1)
 Main PID: 25525 (firewalld)
    Tasks: 2
   Memory: 21.9M
   CGroup: /system.slice/firewalld.service
           └─25525 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid

Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 systemd[1]: Starting firewalld - dynamic firewall daemon...
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 systemd[1]: Started firewalld - dynamic firewall daemon.
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 firewalld[25525]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future releas...ling it now.
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 firewalld[25525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a ma...hat chain?).
Aug 14 17:14:43 liveme-qa-odin-ws-use1a-1 firewalld[25525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a ma...hat chain?).
Hint: Some lines were ellipsized, use -l to show in full.
6. odin-ws US-East requests succeed, but kenzo tasks pile up in the queue, all stuck in "running"
Cause:
The disk is full, so tasks cannot write their files.
Fix:
Delete some of the logs and old reports under /data/odin-ws_archive/kenzo_space.
Troubleshooting steps
Step 1: check disk usage with df -h
[root@qa-lm-awsuse1a-01 odin-ws]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.5G 0 7.5G 0% /dev
tmpfs 7.5G 0 7.5G 0% /dev/shm
tmpfs 7.5G 807M 6.7G 11% /run
tmpfs 7.5G 0 7.5G 0% /sys/fs/cgroup
/dev/nvme0n1p1 30G 14G 17G 46% /
/dev/nvme1n1 500G 500G 664M 100% /data
tmpfs 1.5G 0 1.5G 0% /run/user/0
tmpfs 1.5G 0 1.5G 0% /run/user/1003
tmpfs 1.5G 0 1.5G 0% /run/user/1014
overlay 500G 500G 664M 100% /data/docker/overlay2/0026d2fadfcd5bbf2710879845ee7f709e38d574f7c5e25465f94a89a60bd99b/merged
shm 64M 0 64M 0% /data/docker/containers/2f407899b6ccb028f19ae4e45fb4b32f7ce131fb0405be44b2ea5123a9588335/shm
tmpfs 1.5G 0 1.5G 0% /run/user/1017
Step 2: check per-directory sizes with du -h --max-depth=1
[root@qa-lm-awsuse1a-01 data]# du -h --max-depth=1
0 ./scripts
137M ./mydan
0 ./logs
13M ./Software
4.8G ./app
134M ./mysql
478G ./odin-ws_archive
5.6G ./gcc-7.3.0
4.0K ./__MACOSX
1.9G ./protobuf-3.1.0
0 ./grafana
3.7G ./docker
343M ./odin-ws
494G .
[root@qa-lm-awsuse1a-01 odin-ws_archive]# du -h --max-depth=1
9.7M ./log
1.6G ./jmeter_report
388M ./jdk1.8
120M ./jmeter1
120M ./jmeter
1.1M ./jmeter_script
19M ./allure
476G ./kenzo_space
478G .
Step 3: delete the old logs and reports
Manual deletion:
cd /data/odin-ws_archive/kenzo_space
find ./ -ctime +7 -type d | xargs rm -rf
find ./ -ctime +7 -type f | xargs rm -rf
(If any paths could contain spaces, prefer find ... -print0 | xargs -0 rm -rf.)
Step 4: schedule the deletion with a shell script
Script: odin-ws/scripts/deletefile.sh
Cron job:
crontab -e
0 0 * * * /data/odin-ws/scripts/deletefile.sh > /dev/null 2>&1
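The cleanup script itself is not reproduced here; a sketch of the same age-based cleanup in Python (the 7-day cutoff matches the find commands above; checking mtime instead of ctime is my simplification):

```python
import shutil
import time
from pathlib import Path

def clean_old_entries(root, days=7):
    """Delete direct children of `root` not modified in the last `days` days.

    Returns the names of the removed entries, mirroring what a script like
    deletefile.sh does for /data/odin-ws_archive/kenzo_space.
    """
    cutoff = time.time() - days * 86400
    removed = []
    for entry in Path(root).iterdir():
        try:
            if entry.stat().st_mtime < cutoff:
                if entry.is_dir():
                    shutil.rmtree(entry)
                else:
                    entry.unlink()
                removed.append(entry.name)
        except OSError:
            pass  # a report directory can disappear while a run is finishing
    return removed

# clean_old_entries("/data/odin-ws_archive/kenzo_space")
```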
7. US-East server fails to start
Cause: someone modified the deploy.sh file, so the built celery image could not be found.
Fix: revert the changes and restart.
Troubleshooting steps
Step 1: check the running containers: docker ps -a
Step 2: check the docker images: docker images
Then delete the odin-celery image
Step 3: check local code changes: git status && git diff
Step 4: revert the changes: git reset --hard && git clean -df
Step 5: start the program: sh deploy.sh v139 true
Step 6: check that the logs are updating
The logs under /data/odin-ws_archive/kenzo_space/log should be rolling forward.
Check the celery queue logs: docker logs -f xxx