
Greenplum: how to query all the indexes of a table; creating indexes in Greenplum

Tags

PostgreSQL, CONCURRENTLY index, snapshot, two-phase, wait


Background

PostgreSQL supports building indexes online (CREATE INDEX CONCURRENTLY) without blocking DML (INSERT, UPDATE, DELETE) from other sessions on the table being indexed, which makes it a good fit for live production systems.

Note that the traditional way of creating an index blocks DML from other sessions.
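The difference comes down to the lock level: a plain CREATE INDEX takes a SHARE lock on the table, which conflicts with INSERT/UPDATE/DELETE, while CREATE INDEX CONCURRENTLY only takes a SHARE UPDATE EXCLUSIVE lock, so DML can keep running. A minimal illustration (the table and index names here are just examples):

-- plain build: SHARE lock on t, DML on t is blocked until the index is built
create index idx_t_id on t (id);

-- concurrent build: SHARE UPDATE EXCLUSIVE lock on t, DML on t keeps running
create index concurrently idx_t_id2 on t (id);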

So how is CREATE INDEX CONCURRENTLY implemented internally, and what should you watch out for?

For example, consider this case: a transaction (a COPY) was opened before CREATE INDEX CONCURRENTLY was run. Although it did not touch the table being indexed at all, it still kept CREATE INDEX CONCURRENTLY from finishing for a long time.

Why? To answer that we need to study how CREATE INDEX CONCURRENTLY works.

Reproduction

1. Create the test tables

postgres=# create table a(id int);    
CREATE TABLE    
postgres=# create table b(id int);    
CREATE TABLE

2. Session 1

postgres=# begin;    
BEGIN    
postgres=# copy a from stdin;    
Enter data to be copied followed by a newline.    
End with a backslash and a period on a line by itself, or an EOF signal.    
>>

3. Session 2

postgres=# create index idx_b_1 on b (id);    
CREATE INDEX    
postgres=# create index CONCURRENTLY idx_b_2 on b (id);    
    
Hangs here (the build work itself has actually finished; the command is only waiting).
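On PostgreSQL 12 and later (newer than the version used in this reproduction), the progress view pg_stat_progress_create_index also shows which phase a concurrent build is stuck in; a sketch, assuming you only want the phase and the blocking lockers:

-- PostgreSQL 12+: which phase is the concurrent build in?
-- phases include e.g. 'waiting for writers before build' and 'waiting for old snapshots'
select pid, phase, lockers_total, lockers_done, current_locker_pid
from pg_stat_progress_create_index;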

4. Session 3: check the lock wait information. This query is recommended:

《PostgreSQL 锁等待监控 珍藏级SQL - 谁堵塞了谁》 (PostgreSQL lock wait monitoring: who is blocking whom)
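If that query is not at hand, a minimal alternative on PostgreSQL 9.6+ is to ask pg_blocking_pids() which sessions are blocked and by whom (here 18690 is the CREATE INDEX CONCURRENTLY backend):

-- minimal sketch: blocked sessions and the pids blocking them
select pid,
       pg_blocking_pids(pid) as blocked_by,
       wait_event_type,
       wait_event,
       query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;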

postgres=# select * from pg_locks where granted is not true;    
  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |     mode      | granted | fastpath     
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+---------------+---------+----------    
 virtualxid |          |          |      |       | 33/500     |               |         |       |          | 61/48              | 18690 | ShareLock     | f       | f    
(17 rows)    
    
    
postgres=# select * from pg_locks where virtualxid='33/500';    
  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |     mode      | granted | fastpath     
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+---------------+---------+----------    
 virtualxid |          |          |      |       | 33/500     |               |         |       |          | 33/500             | 17371 | ExclusiveLock | t       | f    
 virtualxid |          |          |      |       | 33/500     |               |         |       |          | 61/48              | 18690 | ShareLock     | f       | f    
(2 rows)    
    
    
postgres=# select * from pg_stat_activity where pid=17371;    
-[ RECORD 1 ]----+------------------------------------------------------------------------------------------------------    
datid            | 13220    
datname          | postgres    
pid              | 17371    
usesysid         | 10    
usename          | postgres    
application_name | psql    
client_addr      |     
client_hostname  |     
client_port      | -1    
backend_start    | 2018-04-24 19:54:54.838402+08    
xact_start       | 2018-04-24 19:59:10.884774+08    
query_start      | 2018-04-24 19:59:10.884792+08    
state_change     | 2018-04-24 19:59:10.884792+08    
wait_event_type  |     
wait_event       |     
state            | active    
backend_xid      | 27958    
backend_xmin     | 4405    
query            | copy a from stdin;    
backend_type     | client backend

5. Check the pstack of the waiting session (pid 18690)

pstack 18690    
#0  0x00007f44f4c4b903 in __epoll_wait_nocancel () from /lib64/libc.so.6    
#1  0x000000000070125e in WaitEventSetWait ()    
#2  0x0000000000701697 in WaitLatchOrSocket ()    
#3  0x000000000070fe76 in ProcSleep ()    
#4  0x000000000070af6f in WaitOnLock ()    
#5  0x000000000070c4f5 in LockAcquireExtended ()    
#6  0x000000000070ec0e in VirtualXactLock ()    
#7  0x00000000005a269e in DefineIndex ()    
#8  0x0000000000725eec in ProcessUtilitySlow.isra.2 ()    
#9  0x0000000000724ac6 in standard_ProcessUtility ()    
#10 0x0000000000722416 in PortalRunUtility ()    
#11 0x0000000000722e57 in PortalRunMulti ()    
#12 0x00000000007239bc in PortalRun ()    
#13 0x000000000071fb57 in exec_simple_query ()    
#14 0x0000000000720e02 in PostgresMain ()    
#15 0x000000000047a96b in ServerLoop ()    
#16 0x00000000006b9029 in PostmasterMain ()    
#17 0x000000000047b321 in main ()

6. Cancel the blocking session (the COPY in session 1)

postgres=# select pg_cancel_backend(17371);    
-[ RECORD 1 ]-----+--    
pg_cancel_backend | t

7. The index creation completes

postgres=# create index CONCURRENTLY idx_b_2 on b (id);    
CREATE INDEX

How CREATE INDEX CONCURRENTLY works

To understand the cause of the wait above, we need to go through the CREATE INDEX CONCURRENTLY workflow.

https://www.postgresql.org/docs/devel/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY

In a concurrent index build, the index is actually entered into the system catalogs in one transaction, then two table scans occur in two more transactions. Before each table scan, the index build must wait for existing transactions that have modified the table to terminate. After the second scan, the index build must wait for any transactions that have a snapshot ( see Chapter 13 ) predating the second scan to terminate. Then finally the index can be marked ready for use, and the CREATE INDEX command terminates. Even then, however, the index may not be immediately usable for queries: in the worst case, it cannot be used as long as transactions exist that predate the start of the index build.

A CREATE INDEX CONCURRENTLY build is split into three phases (three transactions) and scans the table twice.

create index CONCURRENTLY idx_b_2 on b (id);

The phases are as follows:

1. Start transaction 1 and take snapshot1.

2. Before scanning table b, wait for all transactions that have modified b (insert, delete, update) to finish.

3. Scan table b and build the index.

4. End transaction 1.

5. Start transaction 2 and take snapshot2.

6. Before scanning b again, wait for all transactions that have modified b (insert, delete, update) to finish.

7. Transactions started after snapshot2 maintain idx_b_2 when they run DML against table b.

8. Scan table b again and update the index (the tuple version information tells which rows changed between snapshot1 and snapshot2, and those changes are merged into the index).

9. After the index update in the previous step, wait for transactions that were started before transaction 2 and still hold a snapshot to finish.

10. Index creation completes and the index becomes visible (valid); the intermediate flags can be watched with the pg_index query sketched below.
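The intermediate states can be observed from another session via pg_index: after the initial build the index is marked ready (later DML must maintain it) but not yet valid; only at the very end does indisvalid become true. A sketch, using the idx_b_2 index from the example:

-- indisready = true : subsequent DML already has to maintain the index
-- indisvalid = true : the index can be used by queries
select c.relname, i.indisready, i.indisvalid
from pg_index i
join pg_class c on c.oid = i.indexrelid
where c.relname = 'idx_b_2';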

So where exactly did the reproduction above get stuck?

In fact, CREATE INDEX CONCURRENTLY needs two scans and three waits: one before each of the two table scans, and one before the index creation can finish. In the example above, the command was stuck in that final wait, waiting for transactions holding a snapshot taken before the second scan to end (the open COPY transaction, even though it touched a different table).

Summary

1. To keep the waits short: (1) while the index is being built, avoid long transactions that modify the table being indexed and are still open before either of the two scans; (2) avoid opening long transactions (on any table) before the second scan, because the final wait covers every transaction holding an older snapshot. A query for spotting such transactions is sketched below.
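One way to spot such long transactions before (or while) running the concurrent build is to check pg_stat_activity for old transaction start times; a minimal sketch (the 5-minute threshold is an arbitrary example):

-- long-running transactions that can delay CREATE INDEX CONCURRENTLY
select pid, usename, xact_start, state, backend_xmin, query
from pg_stat_activity
where xact_start is not null
  and now() - xact_start > interval '5 min'
order by xact_start;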

2. The final wait should still have room for improvement. It exists to protect transactions that are still open and might query table b: if the command finished right away, the index would reflect the state after snapshot2, which could be stale from the point of view of transactions whose snapshots predate snapshot2.

3. Note that once the first scan has built the intermediate (INVALID) index, subsequent DML already has to maintain it. So if the build fails during the second-scan phase, the leftover invalid index keeps affecting later DML (maintenance overhead, and for unique indexes the constraint) until it is removed.

Another caveat when building a unique index concurrently is that the uniqueness constraint is already being enforced against other transactions when the second table scan begins. This means that constraint violations could be reported in other queries prior to the index becoming available for use, or even in cases where the index build eventually fails. Also, if a failure does occur in the second scan, the “invalid” index continues to enforce its uniqueness constraint afterwards.
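An index left INVALID by a failed concurrent build can be found via pg_index and then dropped before retrying (DROP INDEX CONCURRENTLY avoids blocking DML during the cleanup); a sketch:

-- find indexes left invalid by a failed concurrent build
select c.relname as index_name, t.relname as table_name
from pg_index i
join pg_class c on c.oid = i.indexrelid
join pg_class t on t.oid = i.indrelid
where not i.indisvalid;

-- then drop it and retry, e.g.
drop index concurrently idx_b_2;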

References

https://www.postgresql.org/docs/devel/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY


https://www.xamrdz.com/database/6b71935697.html
