PostgreSQL | Database | Deploying and Using the pg_repack Extension

I. Table and Index Bloat

PostgreSQL's MVCC implementation differs from the undo-tablespace mechanism used by Oracle and MySQL InnoDB: updates and deletes on a table do not physically remove or overwrite rows; the old row versions are merely marked as dead tuples (dead rows). As a result, long-running transactions that insert, update, and delete on a large table make its on-disk footprint grow steadily, and the table's read and write performance degrades as the bloat deepens.
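Dead-tuple accumulation can be watched directly from PostgreSQL's own statistics views; a minimal check against the standard pg_stat_user_tables view (no extension required) looks like this:

-- Dead tuples per table, worst offenders first
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(n_dead_tup * 100.0 / nullif(n_live_tup + n_dead_tup, 0), 2) AS dead_pct,
       last_vacuum,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;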

In plain terms, table and index bloat has two consequences. First, disk usage: deleting rows from a table of several hundred GB with DELETE does not return the space to the operating system, and the delete itself also generates a large amount of WAL, while the database server's disk is not unlimited. Second, the table's read and write performance drops: queries get slower and inserts and updates slow down noticeably.

So in day-to-day operation we should keep table bloat under control, or at least within a reasonable, acceptable range; avoiding it completely is simply impossible.

PostgreSQL offers several ways to deal with table bloat:

First, configure autovacuum in postgresql.conf and let the database decide on its own when to clean up bloat.

Second, run VACUUM manually.

Third, use the CLUSTER command.

Fourth, use an external extension such as pg_repack.

Fifth, recreate the table or reindex, i.e. rebuild the table and its indexes:

  • Rebuilding the table means something like create table tab_new as select * from tab_old, then recreating the relevant indexes, and finally swapping the table names with RENAME (see the sketch after this list). Table privileges also need attention: they must be re-granted.
  • This approach also requires a maintenance window for the application.
  • Rebuilding an index means something like REINDEX INDEX CONCURRENTLY index_name; note that this online rebuild temporarily needs about twice the index's storage space.
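A minimal sketch of the rebuild-and-rename approach from the fifth option, assuming a placeholder table tab_old with one index and one application role; sequences, foreign keys, triggers and comments would also have to be carried over by hand:

-- Offline rebuild: copy, re-index, swap names, re-grant (placeholder names)
BEGIN;
CREATE TABLE tab_new AS SELECT * FROM tab_old;               -- copies data only, not indexes/constraints
CREATE INDEX tab_new_some_col_idx ON tab_new (some_col);     -- recreate the needed indexes
ALTER TABLE tab_old RENAME TO tab_backup;
ALTER TABLE tab_new RENAME TO tab_old;
GRANT SELECT, INSERT, UPDATE, DELETE ON tab_old TO app_user; -- privileges must be re-granted
COMMIT;

-- Online index rebuild (PostgreSQL 12+); needs roughly 2x the index's disk space while it runs
REINDEX INDEX CONCURRENTLY index_name;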

The code behind CLUSTER is the same as VACUUM (FULL) plus an added sort, so CLUSTER suffers from the same problems as VACUUM (FULL):

  • CLUSTER locks the table in ACCESS EXCLUSIVE mode and blocks all access for the duration, exactly like VACUUM (FULL)
  • It needs roughly twice the table's size in free disk space to run
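For reference, both are single commands (placeholder names); the table is inaccessible while they run:

VACUUM (FULL, VERBOSE, ANALYZE) big_table;   -- rewrites the whole table under an ACCESS EXCLUSIVE lock
CLUSTER big_table USING big_table_pkey;      -- same rewrite, but rows are physically re-ordered by the index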

For large tables, depending on the data volume, a VACUUM (FULL) or CLUSTER often runs for several hours or even more than ten hours. Having the table locked for that long, with neither reads nor writes possible, is unacceptable: it means a business outage.

pg_repack holds table locks for much less time than VACUUM (FULL) or CLUSTER, roughly 20% of the lock time of a VACUUM, which makes it a good choice. Even so, although the lock time is greatly reduced, it is still recommended to run pg_repack during off-peak hours.

pg_repack works by creating a work table, installing triggers on the original table to capture ongoing data changes and replay them into the work table, rebuilding the indexes on the work table, and finally swapping the work table with the original.
Its working principle is very similar to MySQL's pt-online-schema-change tool.
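As a rough, purely illustrative sketch of that pattern (this is not pg_repack's actual SQL; the table, function, and trigger names are made up), the copy-plus-change-log idea looks something like this:

-- Illustrative only: copy table + change log + trigger, then a short exclusive window for the swap
CREATE TABLE orders_copy (LIKE orders INCLUDING ALL);
CREATE TABLE orders_log (op text, row_data orders);
CREATE FUNCTION orders_capture() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'DELETE' THEN
    INSERT INTO orders_log VALUES ('DELETE', OLD);
  ELSE
    INSERT INTO orders_log VALUES (TG_OP, NEW);
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER orders_capture_trg
  AFTER INSERT OR UPDATE OR DELETE ON orders
  FOR EACH ROW EXECUTE FUNCTION orders_capture();

INSERT INTO orders_copy SELECT * FROM orders;   -- long-running bulk copy; the original stays readable/writable
-- ... replay orders_log into orders_copy until it is almost caught up ...
BEGIN;
LOCK TABLE orders IN ACCESS EXCLUSIVE MODE;     -- short exclusive window
-- replay the remaining log entries, then swap:
ALTER TABLE orders RENAME TO orders_old;
ALTER TABLE orders_copy RENAME TO orders;
COMMIT;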

The rest of this post walks through deploying pg_repack and its basic usage.

II. Deciding When to Treat Table Bloat

Before treating bloat we need to know which tables actually need it, and after treating it we need to know exactly how much bloat was reclaimed.

Monitoring bloat at the database level: the "Show database bloat" query from the PostgreSQL wiki (https://wiki.postgresql.org/wiki/Show_database_bloat).

The SQL is as follows. It is only a rough estimate; the main column to watch is wastedbytes: the larger the value, the more severe the table bloat.

SELECT
  current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/
  ROUND((CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages::float/otta END)::numeric,1) AS tbloat,
  CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::BIGINT END AS wastedbytes,
  iname, /*ituples::bigint, ipages::bigint, iotta,*/
  ROUND((CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages::float/iotta END)::numeric,1) AS ibloat,
  CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes
FROM (
  SELECT
    schemaname, tablename, cc.reltuples, cc.relpages, bs,
    CEIL((cc.reltuples*((datahdr+ma-
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta,
    COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols
  FROM (
    SELECT
      ma, bs, schemaname, tablename,
      (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,
      (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
    FROM (
      SELECT
        schemaname, tablename, hdr, ma, bs,
        SUM((1-null_frac)*avg_width) AS datawidth,
        MAX(null_frac) AS maxfracsum,
        hdr+(
          SELECT 1+count(*)/8
          FROM pg_stats s2
          WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename
        ) AS nullhdr
      FROM pg_stats s, (
        SELECT
          (SELECT current_setting('block_size')::numeric) AS bs,
          CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,
          CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma
        FROM (SELECT version() AS v) AS foo
      ) AS constants
      GROUP BY 1,2,3,4,5
    ) AS foo
  ) AS rs
  JOIN pg_class cc ON cc.relname = rs.tablename
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'
  LEFT JOIN pg_index i ON indrelid = cc.oid
  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
) AS sml
ORDER BY wastedbytes DESC

Monitoring bloat at the table level:

https://github.com/ioguix/pgsql-bloat-estimation/blob/master/table/table_bloat.sql

Note: slightly modified from the original to exclude the system schemas (and the repack schema).

After running this query, the key column is bloat_pct: the larger it is, the more bloated the table.

/* WARNING: executed with a non-superuser role, the query inspect only tables and materialized view (9.3+) you are granted to read.
* This query is compatible with PostgreSQL 9.0 and more
*/
SELECT current_database(), schemaname, tblname, bs*tblpages AS real_size,
  (tblpages-est_tblpages)*bs AS extra_size,
  CASE WHEN tblpages > 0 AND tblpages - est_tblpages > 0
    THEN 100 * (tblpages - est_tblpages)/tblpages::float
    ELSE 0
  END AS extra_pct, fillfactor,
  CASE WHEN tblpages - est_tblpages_ff > 0
    THEN (tblpages-est_tblpages_ff)*bs
    ELSE 0
  END AS bloat_size,
  CASE WHEN tblpages > 0 AND tblpages - est_tblpages_ff > 0
    THEN 100 * (tblpages - est_tblpages_ff)/tblpages::float
    ELSE 0
  END AS bloat_pct, is_na
  -- , tpl_hdr_size, tpl_data_size, (pst).free_percent + (pst).dead_tuple_percent AS real_frag -- (DEBUG INFO)
FROM (
  SELECT ceil( reltuples / ( (bs-page_hdr)/tpl_size ) ) + ceil( toasttuples / 4 ) AS est_tblpages,
    ceil( reltuples / ( (bs-page_hdr)*fillfactor/(tpl_size*100) ) ) + ceil( toasttuples / 4 ) AS est_tblpages_ff,
    tblpages, fillfactor, bs, tblid, schemaname, tblname, heappages, toastpages, is_na
    -- , tpl_hdr_size, tpl_data_size, pgstattuple(tblid) AS pst -- (DEBUG INFO)
  FROM (
    SELECT
      ( 4 + tpl_hdr_size + tpl_data_size + (2*ma)
        - CASE WHEN tpl_hdr_size%ma = 0 THEN ma ELSE tpl_hdr_size%ma END
        - CASE WHEN ceil(tpl_data_size)::int%ma = 0 THEN ma ELSE ceil(tpl_data_size)::int%ma END
      ) AS tpl_size, bs - page_hdr AS size_per_block, (heappages + toastpages) AS tblpages, heappages,
      toastpages, reltuples, toasttuples, bs, page_hdr, tblid, schemaname, tblname, fillfactor, is_na
      -- , tpl_hdr_size, tpl_data_size
    FROM (
      SELECT
        tbl.oid AS tblid, ns.nspname AS schemaname, tbl.relname AS tblname, tbl.reltuples,
        tbl.relpages AS heappages, coalesce(toast.relpages, 0) AS toastpages,
        coalesce(toast.reltuples, 0) AS toasttuples,
        coalesce(substring(
          array_to_string(tbl.reloptions, ' ')
          FROM 'fillfactor=([0-9]+)')::smallint, 100) AS fillfactor,
        current_setting('block_size')::numeric AS bs,
        CASE WHEN version()~'mingw32' OR version()~'64-bit|x86_64|ppc64|ia64|amd64' THEN 8 ELSE 4 END AS ma,
        24 AS page_hdr,
        23 + CASE WHEN MAX(coalesce(s.null_frac,0)) > 0 THEN ( 7 + count(s.attname) ) / 8 ELSE 0::int END
           + CASE WHEN bool_or(att.attname = 'oid' and att.attnum < 0) THEN 4 ELSE 0 END AS tpl_hdr_size,
        sum( (1-coalesce(s.null_frac, 0)) * coalesce(s.avg_width, 0) ) AS tpl_data_size,
        bool_or(att.atttypid = 'pg_catalog.name'::regtype)
          OR sum(CASE WHEN att.attnum > 0 THEN 1 ELSE 0 END) <> count(s.attname) AS is_na
      FROM pg_attribute AS att
        JOIN pg_class AS tbl ON att.attrelid = tbl.oid
        JOIN pg_namespace AS ns ON ns.oid = tbl.relnamespace
        LEFT JOIN pg_stats AS s ON s.schemaname=ns.nspname
          AND s.tablename = tbl.relname AND s.inherited=false AND s.attname=att.attname
        LEFT JOIN pg_class AS toast ON tbl.reltoastrelid = toast.oid
      WHERE NOT att.attisdropped
        AND tbl.relkind in ('r','m')
        AND schemaname not IN ('pg_catalog','information_schema','repack')
      GROUP BY 1,2,3,4,5,6,7,8,9,10
      ORDER BY 2,3
    ) AS s
  ) AS s2
) AS s3
-- WHERE NOT is_na
--   AND tblpages*((pst).free_percent + (pst).dead_tuple_percent)::float4/100 >= 1
ORDER BY schemaname, tblname;

Monitoring bloat at the index level:

https://github.com/ioguix/pgsql-bloat-estimation/blob/master/btree/btree_bloat.sql

Note: slightly modified from the original to exclude the system schemas (and the repack schema).

-- WARNING: executed with a non-superuser role, the query inspect only index on tables you are granted to read.
-- WARNING: rows with is_na = 't' are known to have bad statistics ("name" type is not supported).
-- This query is compatible with PostgreSQL 8.2 and after
SELECT current_database(), nspname AS schemaname, tblname, idxname, bs*(relpages)::bigint AS real_size,
  bs*(relpages-est_pages)::bigint AS extra_size,
  100 * (relpages-est_pages)::float / relpages AS extra_pct,
  fillfactor,
  CASE WHEN relpages > est_pages_ff
    THEN bs*(relpages-est_pages_ff)
    ELSE 0
  END AS bloat_size,
  100 * (relpages-est_pages_ff)::float / relpages AS bloat_pct,
  is_na
  -- , 100-(pst).avg_leaf_density AS pst_avg_bloat, est_pages, index_tuple_hdr_bm, maxalign, pagehdr, nulldatawidth, nulldatahdrwidth, reltuples, relpages -- (DEBUG INFO)
FROM (
  SELECT coalesce(1 +
        ceil(reltuples/floor((bs-pageopqdata-pagehdr)/(4+nulldatahdrwidth)::float)), 0 -- ItemIdData size + computed avg size of a tuple (nulldatahdrwidth)
      ) AS est_pages,
      coalesce(1 +
        ceil(reltuples/floor((bs-pageopqdata-pagehdr)*fillfactor/(100*(4+nulldatahdrwidth)::float))), 0
      ) AS est_pages_ff,
      bs, nspname, tblname, idxname, relpages, fillfactor, is_na
      -- , pgstatindex(idxoid) AS pst, index_tuple_hdr_bm, maxalign, pagehdr, nulldatawidth, nulldatahdrwidth, reltuples -- (DEBUG INFO)
  FROM (
      SELECT maxalign, bs, nspname, tblname, idxname, reltuples, relpages, idxoid, fillfactor,
            ( index_tuple_hdr_bm +
                maxalign - CASE -- Add padding to the index tuple header to align on MAXALIGN
                  WHEN index_tuple_hdr_bm%maxalign = 0 THEN maxalign
                  ELSE index_tuple_hdr_bm%maxalign
                END
              + nulldatawidth + maxalign - CASE -- Add padding to the data to align on MAXALIGN
                  WHEN nulldatawidth = 0 THEN 0
                  WHEN nulldatawidth::integer%maxalign = 0 THEN maxalign
                  ELSE nulldatawidth::integer%maxalign
                END
            )::numeric AS nulldatahdrwidth, pagehdr, pageopqdata, is_na
            -- , index_tuple_hdr_bm, nulldatawidth -- (DEBUG INFO)
      FROM (
          SELECT n.nspname, i.tblname, i.idxname, i.reltuples, i.relpages,
              i.idxoid, i.fillfactor, current_setting('block_size')::numeric AS bs,
              CASE -- MAXALIGN: 4 on 32bits, 8 on 64bits (and mingw32 ?)
                WHEN version() ~ 'mingw32' OR version() ~ '64-bit|x86_64|ppc64|ia64|amd64' THEN 8
                ELSE 4
              END AS maxalign,
              /* per page header, fixed size: 20 for 7.X, 24 for others */
              24 AS pagehdr,
              /* per page btree opaque data */
              16 AS pageopqdata,
              /* per tuple header: add IndexAttributeBitMapData if some cols are null-able */
              CASE WHEN max(coalesce(s.null_frac,0)) = 0
                THEN 8 -- IndexTupleData size
                ELSE 8 + (( 32 + 8 - 1 ) / 8) -- IndexTupleData size + IndexAttributeBitMapData size ( max num filed per index + 8 - 1 /8)
              END AS index_tuple_hdr_bm,
              /* data len: we remove null values save space using it fractionnal part from stats */
              sum( (1-coalesce(s.null_frac, 0)) * coalesce(s.avg_width, 1024)) AS nulldatawidth,
              max( CASE WHEN i.atttypid = 'pg_catalog.name'::regtype THEN 1 ELSE 0 END ) > 0 AS is_na
          FROM (
              SELECT ct.relname AS tblname, ct.relnamespace, ic.idxname, ic.attpos, ic.indkey, ic.indkey[ic.attpos], ic.reltuples, ic.relpages, ic.tbloid, ic.idxoid, ic.fillfactor,
                  coalesce(a1.attnum, a2.attnum) AS attnum, coalesce(a1.attname, a2.attname) AS attname, coalesce(a1.atttypid, a2.atttypid) AS atttypid,
                  CASE WHEN a1.attnum IS NULL
                    THEN ic.idxname
                    ELSE ct.relname
                  END AS attrelname
              FROM (
                  SELECT idxname, reltuples, relpages, tbloid, idxoid, fillfactor, indkey,
                      pg_catalog.generate_series(1,indnatts) AS attpos
                  FROM (
                      SELECT ci.relname AS idxname, ci.reltuples, ci.relpages, i.indrelid AS tbloid,
                          i.indexrelid AS idxoid,
                          coalesce(substring(
                              array_to_string(ci.reloptions, ' ')
                              from 'fillfactor=([0-9]+)')::smallint, 90) AS fillfactor,
                          i.indnatts,
                          pg_catalog.string_to_array(pg_catalog.textin(
                              pg_catalog.int2vectorout(i.indkey)),' ')::int[] AS indkey
                      FROM pg_catalog.pg_index i
                      JOIN pg_catalog.pg_class ci ON ci.oid = i.indexrelid
                      WHERE ci.relam=(SELECT oid FROM pg_am WHERE amname = 'btree')
                        AND ci.relpages > 0
                  ) AS idx_data
              ) AS ic
              JOIN pg_catalog.pg_class ct ON ct.oid = ic.tbloid
              LEFT JOIN pg_catalog.pg_attribute a1 ON
                  ic.indkey[ic.attpos] <> 0
                  AND a1.attrelid = ic.tbloid
                  AND a1.attnum = ic.indkey[ic.attpos]
              LEFT JOIN pg_catalog.pg_attribute a2 ON
                  ic.indkey[ic.attpos] = 0
                  AND a2.attrelid = ic.idxoid
                  AND a2.attnum = ic.attpos
            ) i
            JOIN pg_catalog.pg_namespace n ON n.oid = i.relnamespace
            JOIN pg_catalog.pg_stats s ON s.schemaname = n.nspname
                AND s.tablename = i.attrelname
                AND s.attname = i.attname
                AND schemaname not IN ('pg_catalog','information_schema','repack')
          GROUP BY 1,2,3,4,5,6,7,8,9,10,11
      ) AS rows_data_stats
  ) AS rows_hdr_pdg_stats
) AS relation_stats
ORDER BY nspname, tblname, idxname;

III. Deploying pg_repack

pg_repack currently supports PostgreSQL up to version 16. Download: https://github.com/reorg/pg_repack/releases/tag/ver_1.5.0

Upload the downloaded tarball to the server and change into the unpacked directory.
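Assuming the 1.5.0 source tarball from the release page above, fetching and unpacking it would look roughly like this (the directory name matches the build log below):

# download, unpack, and enter the source directory
wget https://github.com/reorg/pg_repack/archive/refs/tags/ver_1.5.0.tar.gz
tar xzf ver_1.5.0.tar.gz
cd pg_repack-ver_1.5.0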

First install the build dependencies:

yum install openssl openssl-devel readline readline-devel -y
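pg_repack is built through PGXS, so the pg_config of the target PostgreSQL installation must be found first; assuming PostgreSQL is installed under /data/pgsql as in the build log below:

export PATH=/data/pgsql/bin:$PATH   # let the build pick up the right pg_config
which pg_config                     # should print /data/pgsql/bin/pg_config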

Then simply build and install:

make && make install

The build log looks like this:

[root@centos10 pg_repack-ver_1.5.0]# make
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/bin'
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 pg_repack.o pgut/pgut.o pgut/pgut-fe.o  -L/data/pgsql/lib   -Wl,--as-needed -Wl,-rpath,'/data/pgsql/lib',--enable-new-dtags  -L/data/pgsql/lib -lpq -L/data/pgsql/lib -lpgcommon -lpgport -lpthread -lz -lreadline -lrt -lcrypt -ldl -lm -o pg_repack
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/bin'
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/lib'
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -DREPACK_VERSION=1.5.0 -I. -I./ -I/data/pgsql/include/server -I/data/pgsql/include/internal  -D_GNU_SOURCE   -c -o repack.o repack.c
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -DREPACK_VERSION=1.5.0 -I. -I./ -I/data/pgsql/include/server -I/data/pgsql/include/internal  -D_GNU_SOURCE   -c -o pgut/pgut-spi.o pgut/pgut-spi.c
( echo '{ global:'; gawk '/^[^#]/ {printf "%s;\n",$1}' exports.txt; echo ' local: *; };' ) >exports.list
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -shared -Wl,--version-script=exports.list -o pg_repack.so repack.o pgut/pgut-spi.o -L/data/pgsql/lib    -Wl,--as-needed -Wl,-rpath,'/data/pgsql/lib',--enable-new-dtags  
sed 's,REPACK_VERSION,1.5.0,g' pg_repack.sql.in \
| sed 's,relhasoids,false,g'> pg_repack--1.5.0.sql;
sed 's,REPACK_VERSION,1.5.0,g' pg_repack.control.in > pg_repack.control
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/lib'
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/regress'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/regress'
[root@centos10 pg_repack-ver_1.5.0]# echo $?
0
[root@centos10 pg_repack-ver_1.5.0]# make install
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/bin'
/usr/bin/mkdir -p '/data/pgsql/bin'
/usr/bin/install -c  pg_repack '/data/pgsql/bin'
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/bin'
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/lib'
/usr/bin/mkdir -p '/data/pgsql/lib'
/usr/bin/mkdir -p '/data/pgsql/share/extension'
/usr/bin/mkdir -p '/data/pgsql/share/extension'
/usr/bin/install -c -m 755  pg_repack.so '/data/pgsql/lib/pg_repack.so'
/usr/bin/install -c -m 644 .//pg_repack.control '/data/pgsql/share/extension/'
/usr/bin/install -c -m 644  pg_repack--1.5.0.sql pg_repack.control '/data/pgsql/share/extension/'
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/lib'
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/regress'
make[1]: Nothing to be done for `install'.
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/regress'
[root@centos10 pg_repack-ver_1.5.0]# 
[root@centos10 pg_repack-ver_1.5.0]# cd
[root@centos10 ~]# whereis pg_repack
pg_repack: /data/pgsql/bin/pg_repack

Log in to the PostgreSQL command line and enable the extension; switch to whichever database needs it:

For example, to use the extension in a database named test:

\c test
create extension pg_repack;

Once the extension is created, a schema named repack appears, containing a set of related functions and two views.
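A quick way to confirm what was installed is to list the new objects from psql (meta-commands, run while connected to the test database):

-- in psql, after CREATE EXTENSION pg_repack:
\dx pg_repack
\dn repack
\df repack.*
\dv repack.*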

IV. Testing pg_repack's Bloat Treatment

1. Create a new database and a large table

The new database is named test; the table is created with the statements below.

The table is named testpg, holds 20 million rows, and has only two columns.
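Creating the database itself is not shown in the post; with default settings it would simply be:

CREATE DATABASE test;
\c test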

create or replace function gen_id(a date, b date)
returns text as $$
  select lpad((random()*99)::int::text, 3, '0') ||
         lpad((random()*99)::int::text, 3, '0') ||
         lpad((random()*99)::int::text, 3, '0') ||
         to_char(a + (random()*(b-a))::int, 'yyyymmdd') ||
         lpad((random()*99)::int::text, 3, '0') ||
         random()::int ||
         (case when random()*10 > 9 then 'xy' else (random()*9)::int::text end);
$$ language sql strict;

CREATE SEQUENCE test START 1;

create table if not exists testpg (
  "id" int8 not null DEFAULT nextval('test'::regclass),
  "suijishuzi" VARCHAR(255) COLLATE "pg_catalog"."default",
  CONSTRAINT "user_vendorcode_pkey" PRIMARY KEY ("id")
);

insert into testpg
SELECT generate_series(1, 20000000) AS xm,
       gen_id('1949-01-01', '2023-10-16') AS num;

2. Check the table's size and bloat level; the size query is as follows:

SELECT
    table_schema,
    table_name,
    pg_size_pretty(table_size) AS table_size,
    pg_size_pretty(indexes_size) AS indexes_size,
    pg_size_pretty(total_size) AS total_size
FROM (
    SELECT
        table_schema,
        table_name,
        pg_table_size(table_name) AS table_size,
        pg_indexes_size(table_name) AS indexes_size,
        pg_total_relation_size(table_name) AS total_size
    FROM (
        SELECT table_schema, ('"' || table_schema || '"."' || table_name || '"') AS table_name
        FROM information_schema.tables
    ) AS all_tables
    ORDER BY total_size DESC
) AS pretty_sizes
WHERE table_schema NOT IN ('pg_catalog', 'information_schema', 'repack');

The output lists each user table together with its table, index, and total size.

Then run the second bloat-estimation query from above (the table-level one) to get the table's baseline bloat figures.

3. Artificially create bloat by heavily updating testpg:

UPDATE testpg SET suijishuzi = '123456789' WHERE suijishuzi LIKE '%1%';
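The bloat-estimation queries above read pg_stats and pg_class, so refreshing the statistics after such a large update makes the next estimate more trustworthy:

ANALYZE testpg;   -- refresh statistics so the bloat estimate reflects the mass update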

Check the table size and bloat ratio again:

The bloat ratio now reaches about 45%.

4. Treat the bloat with pg_repack:

[root@centos10 ~]# su - postgres -c "/data/pgsql/bin/pg_repack -d test  -t public.testpg"
INFO: repacking table "public.testpg"

While the repack is running, the table can still be read and written normally. When it finishes, check the table size and bloat ratio again:

The bloat has been completely cleaned up.

The disk usage also reflects the result of this round of bloat treatment:

[root@centos10 ~]# du -sh /data/pgsql/data/
4.0G	/data/pgsql/data/
[root@centos10 ~]# du -sh /data/pgsql/data/
2.7G	/data/pgsql/data/
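Beyond the single-table invocation used above, pg_repack has a few options worth knowing; the flags below exist in the 1.5.x command line, but double-check with pg_repack --help for your build:

pg_repack -d test -t public.testpg -j 4      # rebuild indexes with 4 parallel jobs
pg_repack -d test -t public.testpg -n        # --no-order: plain VACUUM FULL-style rewrite, no re-sort
pg_repack -d test -t public.testpg -x        # --only-indexes: repack only this table's indexes
pg_repack -d test -N                         # --dry-run: list what would be repacked in this database
pg_repack -d test -t public.testpg -T 3600   # --wait-timeout: seconds to wait before cancelling blockers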

To be continued!
