Common Ceph commands

Common bucket commands

View realms

radosgw-admin realm list

Output:

{"default_info": "43c462f5-5634-496e-ad4e-978d28c2x9090","realms": ["myrgw"]
}
radosgw-admin realm get
{"id": "2cfc7b36-43b6-4a9b-a89e-2a2264f54733","name": "mys3","current_period": "4999b859-83e2-42f9-8d3c-c7ae4b9685ff","epoch": 2
}
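
To operate on a realm other than the default, pass its name explicitly. A minimal sketch, assuming the realm name "myrgw" from the list output above:

# get a named realm instead of the default
radosgw-admin realm get --rgw-realm=myrgw
# make a realm the default for subsequent commands
radosgw-admin realm default --rgw-realm=myrgw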

View zonegroups

radosgw-admin zonegroups list

or

radosgw-admin zonegroup list

Output:

{"default_info": "f3a96381-12e2-4e7e-8221-c1d79708bc59","zonegroups": ["myrgw"]
}
radosgw-admin zonegroup get
{"id": "ad97bbae-61f1-41cb-a585-d10dd54e86e4","name": "mys3","api_name": "mys3","is_master": "true","endpoints": ["http://rook-ceph-rgw-mys3.rook-ceph.svc:80"],"hostnames": [],"hostnames_s3website": [],"master_zone": "9b5c0c9f-541d-4176-8527-89b4dae02ac2","zones": [{"id": "9b5c0c9f-541d-4176-8527-89b4dae02ac2","name": "mys3","endpoints": ["http://rook-ceph-rgw-mys3.rook-ceph.svc:80"],"log_meta": "false","log_data": "false","bucket_index_max_shards": 11,"read_only": "false","tier_type": "","sync_from_all": "true","sync_from": [],"redirect_zone": ""}],"placement_targets": [{"name": "default-placement","tags": [],"storage_classes": ["STANDARD"]}],"default_placement": "default-placement","realm_id": "2cfc7b36-43b6-4a9b-a89e-2a2264f54733","sync_policy": {"groups": []}
}
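
As with realms, a zonegroup can be addressed by name when it is not the default. A sketch, assuming the zonegroup name "mys3" shown above:

radosgw-admin zonegroup get --rgw-zonegroup=mys3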

View zones

radosgw-admin zone list
{
    "default_info": "9b5c0c9f-541d-4176-8527-89b4dae02ac2",
    "zones": [
        "mys3",
        "default"
    ]
}

radosgw-admin zone get
{
    "id": "9b5c0c9f-541d-4176-8527-89b4dae02ac2",
    "name": "mys3",
    "domain_root": "mys3.rgw.meta:root",
    "control_pool": "mys3.rgw.control",
    "gc_pool": "mys3.rgw.log:gc",
    "lc_pool": "mys3.rgw.log:lc",
    "log_pool": "mys3.rgw.log",
    "intent_log_pool": "mys3.rgw.log:intent",
    "usage_log_pool": "mys3.rgw.log:usage",
    "roles_pool": "mys3.rgw.meta:roles",
    "reshard_pool": "mys3.rgw.log:reshard",
    "user_keys_pool": "mys3.rgw.meta:users.keys",
    "user_email_pool": "mys3.rgw.meta:users.email",
    "user_swift_pool": "mys3.rgw.meta:users.swift",
    "user_uid_pool": "mys3.rgw.meta:users.uid",
    "otp_pool": "mys3.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "mys3.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "mys3.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "mys3.rgw.buckets.non-ec",
                "index_type": 0,
                "inline_data": "true"
            }
        }
    ],
    "realm_id": "",
    "notif_pool": "mys3.rgw.log:notif"
}
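
Likewise, a specific zone can be selected by name. A sketch, assuming the zone name "mys3" from the listing above:

radosgw-admin zone get --rgw-zone=mys3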

List bucket names

radosgw-admin bucket list

View detailed information about a specific bucket

Note: this shows the bucket ID, the number of objects, storage quota limits, and other details.

radosgw-admin bucket stats --bucket=ceph-bkt-9
{
    "bucket": "ceph-bkt-9",
    "num_shards": 9973,
    "tenant": "",
    "zonegroup": "f3a96381-12e2-4e7e-8221-c1d79708bc59",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "af3f8be9-99ee-44b7-9d17-5b616dca80ff.45143.53",
    "marker": "af3f8be9-99ee-44b7-9d17-5b616dca80ff.45143.53",
    "index_type": "Normal",
    "owner": "mys3-juicefs",
    "ver": "0#536,1#475,(omitted)",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,(omitted)",
    "mtime": "0.000000",
    "creation_time": "2023-11-03T16:58:09.692764Z",
    "max_marker": "0#,1#,2#,3#,(omitted)",
    "usage": {
        "rgw.main": {
            "size": 88057775893,
            "size_actual": 99102711808,
            "size_utilized": 88057775893,
            "size_kb": 85993922,
            "size_kb_actual": 96779992,
            "size_kb_utilized": 85993922,
            "num_objects": 4209803
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
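
To pull a single field out of this JSON in scripts, piping through jq is convenient. A sketch, assuming jq is installed and using the bucket from the example above:

# number of objects in the bucket
radosgw-admin bucket stats --bucket=ceph-bkt-9 | jq '.usage."rgw.main".num_objects'
# logical size in KiB
radosgw-admin bucket stats --bucket=ceph-bkt-9 | jq '.usage."rgw.main".size_kb'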

View bucket configuration, such as index shard values

radosgw-admin bucket limit check

Note: the output is long, so only the first 50 lines are shown.

radosgw-admin bucket limit check | head -50

[{"user_id": "dashboard-admin","buckets": []},{"user_id": "obc-default-ceph-bkt-openbayes-juicefs-6a2b2c57-d393-4529-8620-c0af6c9c30f8","buckets": [{"bucket": "ceph-bkt-20d5f58a-7501-4084-baca-98d9e68a7e57","tenant": "","num_objects": 355,"num_shards": 11,"objects_per_shard": 32,"fill_status": "OK"}]},{"user_id": "rgw-admin-ops-user","buckets": []},{"user_id": "mys3-user","buckets": [{"bucket": "ceph-bkt-caa8a9d1-c278-4015-ba2d-354e142c0","tenant": "","num_objects": 80,"num_shards": 11,"objects_per_shard": 7,"fill_status": "OK"},{"bucket": "ceph-bkt-caa8a9d1-c278-4015-ba2d-354e142c1","tenant": "","num_objects": 65,"num_shards": 11,"objects_per_shard": 5,"fill_status": "OK"},{"bucket": "ceph-bkt-caa8a9d1-c278-4015-ba2d-354e142c10","tenant": "","num_objects": 83,"num_shards": 11,"objects_per_shard": 7,"fill_status": "OK"},{

Commands for checking storage usage

ceph df

Output:

--- RAW STORAGE ---
CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
hdd    900 GiB  834 GiB  66 GiB    66 GiB       7.29
TOTAL  900 GiB  834 GiB  66 GiB    66 GiB       7.29

--- POOLS ---
POOL                     ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr                      1    1  449 KiB        2  1.3 MiB      0    255 GiB
replicapool               2   32   19 GiB    5.87k   56 GiB   6.79    255 GiB
myfs-metadata             3   16   34 MiB       33  103 MiB   0.01    255 GiB
myfs-replicated           4   32  1.9 MiB        9  5.8 MiB      0    255 GiB
.rgw.root                26    8  5.6 KiB       20  152 KiB      0    383 GiB
default.rgw.log          27   32    182 B        2   24 KiB      0    255 GiB
default.rgw.control      28   32      0 B        8      0 B      0    255 GiB
default.rgw.meta         29   32      0 B        0      0 B      0    255 GiB
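
For per-pool quotas and additional columns, ceph df also has a verbose form:

ceph df detail
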
ceph osd df

Output:

ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE  VAR   PGS  STATUS
 0    hdd  0.09769   1.00000  100 GiB  6.4 GiB  4.8 GiB  3.2 MiB  1.5 GiB   94 GiB  6.35  0.87   89      up
 3    hdd  0.19530   1.00000  200 GiB   15 GiB   14 GiB   42 MiB  1.1 GiB  185 GiB  7.61  1.04  152      up
 1    hdd  0.09769   1.00000  100 GiB  7.3 GiB  5.3 GiB  1.5 MiB  1.9 GiB   93 GiB  7.27  1.00   78      up
 4    hdd  0.19530   1.00000  200 GiB   15 GiB   14 GiB  4.2 MiB  1.1 GiB  185 GiB  7.32  1.00  157      up
 2    hdd  0.09769   1.00000  100 GiB  9.9 GiB  7.6 GiB  1.2 MiB  2.3 GiB   90 GiB  9.94  1.36   73      up
 5    hdd  0.19530   1.00000  200 GiB   12 GiB   11 GiB   43 MiB  1.1 GiB  188 GiB  6.18  0.85  158      up
                       TOTAL  900 GiB   66 GiB   57 GiB   95 MiB  9.1 GiB  834 GiB  7.31
MIN/MAX VAR: 0.85/1.36  STDDEV: 1.24
rados df
POOL_NAME                   USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED    RD_OPS       RD     WR_OPS       WR  USED COMPR  UNDER COMPR
.mgr                     1.3 MiB        2       0       6                   0        0         0   2696928  5.5 GiB     563117   29 MiB         0 B          0 B
.rgw.root                152 KiB       20       0      40                   0        0         0       428  443 KiB         10    7 KiB         0 B          0 B
default.rgw.control          0 B        8       0      24                   0        0         0         0      0 B          0      0 B         0 B          0 B
default.rgw.log           24 KiB        2       0       6                   0        0         0         0      0 B          0      0 B         0 B          0 B
default.rgw.meta             0 B        0       0       0                   0        0         0         0      0 B          0      0 B         0 B          0 B
myfs-metadata            103 MiB       33       0      99                   0        0         0  18442579   10 GiB     272672  194 MiB         0 B          0 B
myfs-replicated          5.8 MiB        9       0      27                   0        0         0        24   24 KiB         33  1.9 MiB         0 B          0 B
mys3.rgw.buckets.data    307 MiB    18493       0   36986                   0        0         0    767457  942 MiB    2713288  1.2 GiB         0 B          0 B
mys3.rgw.buckets.index    20 MiB     2827       0    5654                   0        0         0   7299856  6.2 GiB    1208180  598 MiB         0 B          0 B
mys3.rgw.buckets.non-ec      0 B        0       0       0                   0        0         0         0      0 B          0      0 B         0 B          0 B
mys3.rgw.control             0 B        8       0      16                   0        0         0         0      0 B          0      0 B         0 B          0 B
mys3.rgw.log              76 MiB      342       0     684                   0        0         0   4944901  4.5 GiB    3764847  1.1 GiB         0 B          0 B
mys3.rgw.meta            4.3 MiB      526       0    1052                   0        0         0   4617928  3.8 GiB     658074  321 MiB         0 B          0 B
mys3.rgw.otp                 0 B        0       0       0                   0        0         0         0      0 B          0      0 B         0 B          0 B
replicapool               56 GiB     5873       0   17619                   0        0         0   4482521   65 GiB  132312964  1.3 TiB         0 B          0 B

total_objects    28143
total_used       65 GiB
total_avail      835 GiB
total_space      900 GiB
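
rados df can also be restricted to a single pool with -p. A sketch, using the replicapool pool from the output above:

rados -p replicapool df
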
ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.87895  root default                              
-5         0.29298      host node01
 0    hdd  0.09769          osd.0        up   1.00000  1.00000
 3    hdd  0.19530          osd.3        up   1.00000  1.00000
-3         0.29298      host node02
 1    hdd  0.09769          osd.1        up   1.00000  1.00000
 4    hdd  0.19530          osd.4        up   1.00000  1.00000
-7         0.29298      host node03
 2    hdd  0.09769          osd.2        up   1.00000  1.00000
 5    hdd  0.19530          osd.5        up   1.00000  1.00000
ceph osd find 1

Note: 1 is the OSD ID.

{"osd": 1,"addrs": {"addrvec": [{"type": "v2","addr": "10.96.12.109:6800","nonce": 701714258},{"type": "v1","addr": "10.96.12.109:6801","nonce": 701714258}]},"osd_fsid": "9b165ff1-1116-4dd8-ab04-59abb6e5e3b5","host": "node02","pod_name": "rook-ceph-osd-1-5cd7b7fd9b-pq76v","pod_namespace": "rook-ceph","crush_location": {"host": "node02","root": "default"}
}
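
To see host, device, and version details for the same OSD, ceph osd metadata takes the same ID:

ceph osd metadata 1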

Common PG commands

ceph pg ls-by-osd 0 | head -20
PG     OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES      OMAP_BYTES*  OMAP_KEYS*  LOG   STATE         SINCE  VERSION        REPORTED       UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                               
2.5        185         0          0        0  682401810           75           8  4202  active+clean     7h  89470'3543578  89474:3900145  [2,4,0]p2  [2,4,0]p2  2023-11-15T01:09:50.285147+0000  2023-11-15T01:09:50.285147+0000                    4  periodic scrub scheduled @ 2023-11-16T12:33:31.356087+0000     
2.9        178         0          0        0  633696256          423          13  2003  active+clean   117m  89475'2273592  89475:2445503  [4,0,5]p4  [4,0,5]p4  2023-11-15T06:22:49.007240+0000  2023-11-12T21:00:41.277161+0000                    1  periodic scrub scheduled @ 2023-11-16T13:29:45.298222+0000     
2.c        171         0          0        0  607363106          178          12  4151  active+clean    14h  89475'4759653  89475:4985220  [2,4,0]p2  [2,4,0]p2  2023-11-14T17:41:46.959311+0000  2023-11-13T07:10:45.084379+0000                    1  periodic scrub scheduled @ 2023-11-15T23:58:48.840924+0000     
2.f        174         0          0        0  641630226          218           8  4115  active+clean    12h  89475'4064519  89475:4177515  [2,0,4]p2  [2,0,4]p2  2023-11-14T20:11:34.002882+0000  2023-11-13T13:19:50.306895+0000                    1  periodic scrub scheduled @ 2023-11-16T02:52:50.646390+0000     
2.11       172         0          0        0  637251602            0           0  3381  active+clean     7h  89475'4535730  89475:4667861  [0,4,5]p0  [0,4,5]p0  2023-11-15T00:41:28.325584+0000  2023-11-08T22:50:59.120985+0000                    1  periodic scrub scheduled @ 2023-11-16T05:10:15.810837+0000     
2.13       198         0          0        0  762552338          347          19  1905  active+clean     5h  89475'6632536  89475:6895777  [5,0,4]p5  [5,0,4]p5  2023-11-15T03:06:33.483129+0000  2023-11-15T03:06:33.483129+0000                    5  periodic scrub scheduled @ 2023-11-16T10:29:19.975736+0000     
2.16       181         0          0        0  689790976           75           8  3427  active+clean    18h  89475'5897648  89475:6498260  [0,2,1]p0  [0,2,1]p0  2023-11-14T14:07:00.475337+0000  2023-11-13T08:59:03.104478+0000                    1  periodic scrub scheduled @ 2023-11-16T01:55:30.581835+0000     
2.1b       181         0          0        0  686268416          437          16  1956  active+clean     5h  89475'4001434  89475:4376306  [5,0,4]p5  [5,0,4]p5  2023-11-15T02:36:36.002761+0000  2023-11-15T02:36:36.002761+0000                    4  periodic scrub scheduled @ 2023-11-16T09:15:09.271395+0000     
3.2          0         0          0        0          0            0           0    68  active+clean     4h       67167'68    89474:84680  [4,5,0]p4  [4,5,0]p4  2023-11-15T04:01:14.378817+0000  2023-11-15T04:01:14.378817+0000                    1  periodic scrub scheduled @ 2023-11-16T09:26:55.350003+0000     
3.3          2         0          0        0         34         4880          10    71  active+clean     6h       71545'71    89474:97438  [0,4,5]p0  [0,4,5]p0  2023-11-15T01:55:57.633258+0000  2023-11-12T07:28:22.391454+0000                    1  periodic scrub scheduled @ 2023-11-16T02:46:05.613867+0000     
3.6          1         0          0        0          0            0           0  1987  active+clean    91m    89475'54154   89475:145435  [4,0,5]p4  [4,0,5]p4  2023-11-15T06:48:38.818739+0000  2023-11-08T20:05:08.257800+0000                    1  periodic scrub scheduled @ 2023-11-16T15:08:59.546203+0000     
3.8          0         0          0        0          0            0           0    44  active+clean    16h       83074'44    89474:84245  [5,1,0]p5  [5,1,0]p5  2023-11-14T15:26:04.057142+0000  2023-11-13T03:51:42.271364+0000                    1  periodic scrub scheduled @ 2023-11-15T19:49:15.168863+0000     
3.b          3         0          0        0    8388608            0           0  2369  active+clean    24h    29905'26774  89474:3471652  [4,0,5]p4  [4,0,5]p4  2023-11-14T07:50:38.682896+0000  2023-11-10T20:06:19.530705+0000                    1  periodic scrub scheduled @ 2023-11-15T12:35:50.298157+0000     
3.f          4         0          0        0    4194880            0           0  4498  active+clean    15h    42287'15098   89474:905369  [0,5,4]p0  [0,5,4]p0  2023-11-14T17:15:38.681549+0000  2023-11-10T14:00:49.535978+0000                    1  periodic scrub scheduled @ 2023-11-15T22:26:56.705010+0000     
4.6          0         0          0        0          0            0           0   380  active+clean     2h      20555'380    89474:84961  [5,1,0]p5  [5,1,0]p5  2023-11-15T05:29:28.833076+0000  2023-11-09T09:41:36.198863+0000                    1  periodic scrub scheduled @ 2023-11-16T11:28:34.901957+0000     
4.a          0         0          0        0          0            0           0   274  active+clean    16h      20555'274    89474:91274  [0,1,2]p0  [0,1,2]p0  2023-11-14T16:09:50.743410+0000  2023-11-14T16:09:50.743410+0000                    1  periodic scrub scheduled @ 2023-11-15T18:12:35.709178+0000     
4.b          0         0          0        0          0            0           0   352  active+clean     6h      20555'352    89474:85072  [4,0,5]p4  [4,0,5]p4  2023-11-15T01:49:06.361454+0000  2023-11-12T12:44:50.143887+0000                    7  periodic scrub scheduled @ 2023-11-16T03:42:06.193542+0000     
4.10         1         0          0        0       2474            0           0   283  active+clean    17h      20555'283    89474:89904  [4,2,0]p4  [4,2,0]p4  2023-11-14T14:57:49.174637+0000  2023-11-09T18:56:58.241925+0000                    1  periodic scrub scheduled @ 2023-11-16T02:56:36.556523+0000     
4.14         0         0          0        0          0            0           0   304  active+clean    33h      20555'304    89474:85037  [5,1,0]p5  [5,1,0]p5  2023-11-13T22:55:24.034723+0000  2023-11-11T09:51:00.248512+0000                    1  periodic scrub scheduled @ 2023-11-15T09:18:55.094605+0000
ceph pg ls-by-pool myfs-replicated | head -10
PG    OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES   OMAP_BYTES*  OMAP_KEYS*  LOG  STATE         SINCE  VERSION    REPORTED     UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                               
4.0         0         0          0        0       0            0           0  294  active+clean    14m  20555'294  89474:89655  [4,3,5]p4  [4,3,5]p4  2023-11-15T08:06:30.504646+0000  2023-11-11T14:10:37.423797+0000                    1  periodic scrub scheduled @ 2023-11-16T20:00:48.189584+0000     
4.1         0         0          0        0       0            0           0  282  active+clean    19h  20555'282  89474:91316  [2,3,4]p2  [2,3,4]p2  2023-11-14T13:11:39.095045+0000  2023-11-08T02:29:45.827302+0000                    1  periodic deep scrub scheduled @ 2023-11-15T23:05:45.143337+0000
4.2         0         0          0        0       0            0           0  228  active+clean    30h  20555'228  89474:84866  [5,3,4]p5  [5,3,4]p5  2023-11-14T01:51:16.091750+0000  2023-11-14T01:51:16.091750+0000                    1  periodic scrub scheduled @ 2023-11-15T13:37:08.420266+0000     
4.3         0         0          0        0       0            0           0  228  active+clean    12h  20555'228  89474:91622  [2,3,1]p2  [2,3,1]p2  2023-11-14T19:23:46.585302+0000  2023-11-07T22:06:51.216573+0000                    1  periodic deep scrub scheduled @ 2023-11-16T02:02:54.588932+0000
4.4         1         0          0        0    2474            0           0  236  active+clean    18h  20555'236  89474:35560  [1,5,3]p1  [1,5,3]p1  2023-11-14T13:42:45.498057+0000  2023-11-10T13:03:03.664431+0000                    1  periodic scrub scheduled @ 2023-11-15T22:08:15.399060+0000     
4.5         0         0          0        0       0            0           0  171  active+clean    23h  20555'171  89474:88153  [3,5,1]p3  [3,5,1]p3  2023-11-14T09:01:04.687468+0000  2023-11-09T23:45:29.913888+0000                    6  periodic scrub scheduled @ 2023-11-15T13:08:21.849161+0000     
4.6         0         0          0        0       0            0           0  380  active+clean     2h  20555'380  89474:84961  [5,1,0]p5  [5,1,0]p5  2023-11-15T05:29:28.833076+0000  2023-11-09T09:41:36.198863+0000                    1  periodic scrub scheduled @ 2023-11-16T11:28:34.901957+0000     
4.7         0         0          0        0       0            0           0  172  active+clean    18h  20555'172  89474:77144  [1,5,3]p1  [1,5,3]p1  2023-11-14T13:52:17.458837+0000  2023-11-09T16:56:57.755836+0000                   17  periodic scrub scheduled @ 2023-11-16T01:10:07.099940+0000     
4.8         0         0          0        0       0            0           0  272  active+clean    15h  20555'272  89474:84994  [5,3,4]p5  [5,3,4]p5  2023-11-14T17:14:47.534009+0000  2023-11-14T17:14:47.534009+0000                    1  periodic scrub scheduled @ 2023-11-15T19:30:59.254042+0000 
ceph pg ls-by-primary 0 | head -10
PG     OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES      OMAP_BYTES*  OMAP_KEYS*  LOG   STATE         SINCE  VERSION        REPORTED       UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                               
2.11       172         0          0        0  637251602            0           0  3375  active+clean     7h  89475'4536024  89475:4668155  [0,4,5]p0  [0,4,5]p0  2023-11-15T00:41:28.325584+0000  2023-11-08T22:50:59.120985+0000                    1  periodic scrub scheduled @ 2023-11-16T05:10:15.810837+0000     
2.16       181         0          0        0  689790976           75           8  3380  active+clean    18h  89475'5898101  89475:6498713  [0,2,1]p0  [0,2,1]p0  2023-11-14T14:07:00.475337+0000  2023-11-13T08:59:03.104478+0000                    1  periodic scrub scheduled @ 2023-11-16T01:55:30.581835+0000     
3.3          2         0          0        0         34         4880          10    71  active+clean     6h       71545'71    89474:97438  [0,4,5]p0  [0,4,5]p0  2023-11-15T01:55:57.633258+0000  2023-11-12T07:28:22.391454+0000                    1  periodic scrub scheduled @ 2023-11-16T02:46:05.613867+0000     
3.f          4         0          0        0    4194880            0           0  4498  active+clean    15h    42287'15098   89474:905369  [0,5,4]p0  [0,5,4]p0  2023-11-14T17:15:38.681549+0000  2023-11-10T14:00:49.535978+0000                    1  periodic scrub scheduled @ 2023-11-15T22:26:56.705010+0000     
4.a          0         0          0        0          0            0           0   274  active+clean    16h      20555'274    89474:91274  [0,1,2]p0  [0,1,2]p0  2023-11-14T16:09:50.743410+0000  2023-11-14T16:09:50.743410+0000                    1  periodic scrub scheduled @ 2023-11-15T18:12:35.709178+0000     
4.1b         0         0          0        0          0            0           0   188  active+clean     9h      20572'188    89474:60345  [0,4,5]p0  [0,4,5]p0  2023-11-14T22:45:32.243017+0000  2023-11-09T15:22:58.954604+0000                   15  periodic scrub scheduled @ 2023-11-16T05:26:22.970008+0000     
26.0         4         0          0        0       2055            0           0     4  active+clean    16h       74696'14    89474:22375    [0,5]p0    [0,5]p0  2023-11-14T16:07:57.126669+0000  2023-11-09T12:57:29.272721+0000                    1  periodic scrub scheduled @ 2023-11-15T17:12:43.441862+0000     
26.3         1         0          0        0        104            0           0     1  active+clean    10h        74632'8    89474:22487    [0,4]p0    [0,4]p0  2023-11-14T21:43:19.284917+0000  2023-11-11T13:26:08.679346+0000                    1  periodic scrub scheduled @ 2023-11-16T01:39:45.617371+0000     
27.5         1         0          0        0        154            0           0     2  active+clean    23h        69518'2    89474:22216  [0,4,2]p0  [0,4,2]p0  2023-11-14T08:56:33.324158+0000  2023-11-10T23:46:33.688281+0000                    1  periodic scrub scheduled @ 2023-11-15T20:32:30.759743+0000   
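
To drill into a single placement group from any of these listings, query it by PG ID. A sketch, using PG 2.5 from the ls-by-osd output above:

ceph pg 2.5 query | head -20
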
ceph osd perf
osd  commit_latency(ms)  apply_latency(ms)
  5                   2                  2
  4                   2                  2
  3                   2                  2
  2                   0                  0
  0                   0                  0
  1                   1                  1
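
Latency figures change from second to second, so it can help to refresh them continuously. A sketch, assuming the standard watch utility is available:

watch -n 5 'ceph osd perf'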
