
Working With Snapshots in PostgreSQL


One of the reasons I’ve never strayed far from PostgreSQL is that I am always discovering features that overcome all sorts of interesting problems. Although, to be honest, from time to time I’ve had the feeling that some features are a solution in search of a problem. Take, for example, exporting transaction snapshots.

Originally implemented in version 9.2, snapshot exports allow multiple transactions to share identical views of the same database state. Back then I wasn’t certain how I could use it, until version 9.3 came out and presented a true use case: multi-threaded dumps were introduced to pg_dump.

Here’s a simple example; let’s start by creating two tables with lots of data:

PgSQL
begin;
    create table t1(
        c1 serial primary key,
        c2 text default lpad('',500,md5(random()::text)),
        c3 timestamp default clock_timestamp()
    );
    create table t2(like t1 including all);
    insert into t1 select generate_series(1,5e6);
    insert into t2 select generate_series(1,5e6);
    analyze;
commit;
                List of relations
 Schema | Name | Type  |  Owner   |  Size   | Description
--------+------+-------+----------+---------+-------------
 public | t1   | table | postgres | 2791 MB |
 public | t2   | table | postgres | 2791 MB |

TIP: Adjusting the arguments to the generate_series function lets you populate the tables with a dataset that best fits your hardware’s capabilities.

The traditional invocation of a logical dump uses a single CPU/thread, archiving the tables one at a time. Prefixing the command with the shell’s time utility reports how long the dump takes; here it uses the “custom” (compressed) format:

Shell
# execution time: 60 seconds
time pg_dump -Fc db01 > db01.db
real    1m0.322s
user    0m50.203s
sys     0m1.309s

An alternate invocation generates the dump using the “directory” format:

Shell
# execution time: 52 seconds
time pg_dump -Fd db01 -f db01_dirA
real    0m52.304s
user    0m50.908s
sys     0m1.152s

Thus far the execution has utilized a single CPU. Now look at the execution time when the -j (or --jobs) switch is used and multiple connections generate the dump:

Shell
# execution time: 31 seconds
time pg_dump -Fd db01 -f db01_dirB -j 2
real    0m31.115s
user    1m0.242s
sys     0m1.377s

Notice the number of processes running in the background. The third connection is a leader process which coordinates the two worker processes:

Shell
$: ps aux | grep -v grep | grep pg_dump
rbernier 1872809  0.1  0.0  14444  5968 pts/1    S+   08:52   0:00 pg_dump -Fd db01 -f db01_dirB -j 2
rbernier 1872811  100  0.0  14804  4480 pts/1    R+   08:52   0:12 pg_dump -Fd db01 -f db01_dirB -j 2
rbernier 1872812  100  0.0  14804  4488 pts/1    R+   08:52   0:12 pg_dump -Fd db01 -f db01_dirB -j 2
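
The same connections can also be observed from inside the database via pg_stat_activity. As far as I know, pg_dump sets a fallback application name of “pg_dump”, so filtering on it is an easy way to spot the leader and its workers. A quick sketch, run from a separate psql session while the parallel dump executes:

PgSQL
-- Expect three rows for -j 2: one leader plus two workers
select pid, application_name, state, backend_start
from pg_stat_activity
where application_name = 'pg_dump';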

Without the synchronized snapshot feature, the different worker jobs wouldn’t be guaranteed to see the same data in each connection, which could lead to an inconsistent backup.
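
Conceptually, the synchronization boils down to two calls: the leader exports its snapshot, and every worker adopts it before reading any data. Here is a simplified sketch of the idea, not pg_dump’s exact protocol (the snapshot ID is illustrative):

PgSQL
-- Leader connection: open a transaction and publish its snapshot
begin transaction isolation level repeatable read;
select pg_export_snapshot();   -- e.g. 00000003-000021CE-1

-- Each worker connection: adopt the leader's snapshot before dumping,
-- so every worker reads the same frozen view of the data
begin transaction isolation level repeatable read;
set transaction snapshot '00000003-000021CE-1';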

One important caveat to remember is that pg_dump’s jobs switch only works with the “directory” format. Attempting to use multiple jobs with a single-file dump fails:

Shell
pg_dump -Fc db01 -j 2 > db01.db              
pg_dump: error: parallel backup only supported by the directory format

The PostgreSQL documentation contains a full description of the other caveats of using the jobs switch.

So we’ve shown how it works with pg_dump, but can we go further? … YES!

We can replicate the behavior using the snapshot synchronization function pg_export_snapshot().

Continuing with the two previously created tables, let’s set up another scenario, summarized step by step:

  • STEP 1: Three psql sessions log in and interact with tables t1 and t2 in tandem.
  • STEP 2: Session 1 inserts new rows into both tables every five seconds. Sessions 2 and 3 query the tables, but the three-second delay in session 3’s query means the two sessions see different datasets, making it problematic to view the same data at the same time.
  • STEP 3: Session 1 continues updating tables t1 and t2, but sessions 2 and 3 now share the same exported snapshot.
  • STEP 4: While session 1 continues updating tables t1 and t2, sessions 2 and 3 see the same datasets, i.e., they are synchronized.
  • STEP 5: Sessions 2 and 3 actively copy data at the same time without fear of inconsistency.
  • STEP 6: The COPY is complete, so we finish up by committing the transactions.

STEP 1: All three sessions log in.

Shell
psql db01

STEP 2, Session 1: Insert ten new rows into each table every five seconds.

PgSQL
with a(x) as (select max(c1) from t1),
     b as (insert into t1
           select generate_series(x+1,x+10)
           from a returning *),
     c as (insert into t2
           select generate_series(x+1,x+10)
           from a returning *)
select x as increment from a;
\watch 5

STEP 2, Session 2: Query the newest row in each table.

PgSQL
with a(x) as (select max(c1) from t1)
select c1,md5(c2),c3 from t1,a where c1=x
union all
select c1,md5(c2),c3 from t2,a where c1=x;

STEP 2, Session 3: The same query, but delayed by three seconds, so it sees different data than session 2.

PgSQL
with a(x) as (select max(c1) from t1)
select c1,md5(c2),c3 from t1,a where c1=x
union all
select c1,md5(c2),c3 from t2,a,pg_sleep(3) where c1=x;

STEP 3, Session 2: While session 1’s inserts remain active, export the snapshot.

PgSQL
begin transaction isolation level repeatable read;
select pg_export_snapshot();
-- returns: 00000003-000021CE-1

STEP 3, Session 3: Import the snapshot.

PgSQL
begin transaction isolation level repeatable read;
set transaction snapshot '00000003-000021CE-1';

STEP 4, Session 2: Session 1’s inserts remain active, yet the query now returns the same dataset as session 3.

PgSQL
with a(x) as (select max(c1) from t1)
select c1,md5(c2),c3 from t1,a where c1=x
union all
select c1,md5(c2),c3 from t2,a where c1=x;

STEP 4, Session 3: Even with the three-second delay, the result matches session 2.

PgSQL
with a(x) as (select max(c1) from t1)
select c1,md5(c2),c3 from t1,a where c1=x
union all
select c1,md5(c2),c3 from t2,a,pg_sleep(3) where c1=x;

STEP 5, Session 2: Copy table t1 while session 1’s inserts remain active.

PgSQL
\copy t1 to program 'gzip > t1.gz'

STEP 5, Session 3: Copy table t2 at the same time.

PgSQL
\copy t2 to program 'gzip > t2.gz'

STEP 6, Sessions 2 and 3: Session 1’s \watch is terminated; close each transaction.

PgSQL
-- commit or rollback, it's your choice
commit;
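
A quick way to convince yourself that the two sessions really are synchronized: after importing the snapshot (steps 3 and 4) and before committing, run the same aggregate in both sessions. Inside the shared snapshot the results are identical no matter when each session runs it, even though session 1 keeps inserting. A minimal sketch (the actual numbers depend on when the snapshot was exported):

PgSQL
-- Run in both session 2 and session 3 while their transactions are open;
-- the counts and maximums match exactly, despite session 1's ongoing inserts.
select count(*) as rows_seen, max(c1) as newest_id from t1;
select count(*) as rows_seen, max(c1) as newest_id from t2;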

And there you have it, a neat little trick for exporting snapshots! Of course, there’s more you can do than just copying tables in a multi-threaded fashion, but hopefully this little exposé will lead to more opportunities to simplify your life while working with Postgres.

