To learn more about database and OS internals, check out my courses:
Fundamentals of database engineering https://databases.win
Fundamentals of operating systems https://oscourse.win
This new PostgreSQL 17 feature is a game changer.
You see, Postgres, like most databases, works with fixed-size pages. Pretty much everything is stored in this format: indexes, table data, etc. Those pages are 8K in size, and each page holds the rows or index tuples plus a fixed header. The pages are just bytes in files, and they are read from disk and cached in the buffer pool.
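For illustration, here is a simplified sketch of that fixed-size page layout in C, loosely modeled on Postgres's PageHeaderData (the real struct in src/include/storage/bufpage.h has a couple more fields; treat this as an approximation, not the actual definition):

```c
/* Simplified sketch of an 8K page, loosely modeled on Postgres's
 * PageHeaderData; fields abridged, not the actual definition. */
#include <stdint.h>

#define PAGE_SIZE 8192

typedef struct {
    uint64_t pd_lsn;       /* WAL position of the last change to this page */
    uint16_t pd_checksum;  /* page checksum */
    uint16_t pd_flags;     /* flag bits */
    uint16_t pd_lower;     /* end of the line-pointer (item id) array */
    uint16_t pd_upper;     /* start of tuple data, which grows downward */
    uint16_t pd_special;   /* start of special space (used by indexes) */
} page_header;             /* the real header has a couple more fields */

/* a page is just PAGE_SIZE raw bytes: header, item pointers, free
 * space, then the row/index tuples packed toward the end */
typedef union {
    page_header header;
    char bytes[PAGE_SIZE];
} page;
```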
To read page 0, for example, you would call read on offset 0 for 8192 bytes. To read page 1, that is another read system call, from offset 8192 for 8192 bytes; page 7 is offset 57,344 for 8192 bytes, and so on.
If a table is 100 pages stored in a file, a full table scan means making 100 system calls, and each system call has an overhead (I talk about all of that in my OS course).
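Here is a minimal sketch (not Postgres source) of that classic pattern: a 100-page scan done with one pread(2) system call per 8K page, against a hypothetical file name.

```c
/* One system call per 8K page: page n lives at byte offset n * 8192. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define PAGE_SIZE 8192
#define NPAGES    100

int main(void) {
    int fd = open("table_file", O_RDONLY);   /* hypothetical file name */
    if (fd < 0) { perror("open"); return 1; }

    char page[PAGE_SIZE];
    for (off_t n = 0; n < NPAGES; n++) {
        /* one system call per page: 8192 bytes at offset n * 8192 */
        ssize_t got = pread(fd, page, PAGE_SIZE, n * PAGE_SIZE);
        if (got != PAGE_SIZE) { perror("pread"); break; }
        /* ... parse the page header and tuples here ... */
    }
    close(fd);
    return 0;
}
```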
The enhancement in Postgres 17 is to combine I/Os, and you can specify how much I/O to combine. So while it is technically possible to scan that entire table in one system call, that doesn't mean it's always a good idea, of course, and I'll talk about that.
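As a contrast to the loop above, here is a minimal sketch of the combined version: the same 100 pages fetched with a single pread(2) into one large buffer. (In Postgres 17 itself the amount of combining is capped by a setting, io_combine_limit if I'm remembering the name right, rather than being unbounded like this sketch.)

```c
/* Combined I/O sketch: 100 pages (819,200 bytes) in one system call. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define PAGE_SIZE 8192
#define NPAGES    100

int main(void) {
    int fd = open("table_file", O_RDONLY);  /* hypothetical file name */
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc((size_t)NPAGES * PAGE_SIZE);
    if (!buf) { close(fd); return 1; }

    /* one system call for the whole table instead of 100 */
    ssize_t got = pread(fd, buf, (size_t)NPAGES * PAGE_SIZE, 0);
    if (got < 0) perror("pread");
    else printf("read %zd bytes (%zd pages) in one call\n",
                got, got / PAGE_SIZE);

    free(buf);
    close(fd);
    return 0;
}
```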
This also seems to include vectored I/O via the preadv system call. One detail worth noting: preadv takes a single starting offset plus an array of buffers and lengths, so one call scatters a contiguous range of the file into multiple buffers; it is not an array of independent random offsets.
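A minimal sketch of what that looks like at the syscall level, assuming Linux's preadv(2) and the same hypothetical file: ten page-sized buffers filled by one call.

```c
/* Vectored I/O sketch: one preadv(2) call fills ten separate 8K page
 * buffers from one contiguous file range starting at a single offset. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>
#include <unistd.h>

#define PAGE_SIZE 8192
#define NPAGES    10

int main(void) {
    int fd = open("table_file", O_RDONLY);  /* hypothetical file name */
    if (fd < 0) { perror("open"); return 1; }

    static char pages[NPAGES][PAGE_SIZE];
    struct iovec iov[NPAGES];
    for (int i = 0; i < NPAGES; i++) {
        iov[i].iov_base = pages[i];   /* where page i's bytes land */
        iov[i].iov_len  = PAGE_SIZE;
    }

    /* read pages 0..9 (80K) in a single system call, starting at offset 0 */
    ssize_t got = preadv(fd, iov, NPAGES, 0);
    if (got < 0) perror("preadv");
    else printf("read %zd bytes in one call\n", got);

    close(fd);
    return 0;
}
```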
The challenge becomes how not to read too much. Say I'm doing a seq scan to find something: I read page 0, find it, and quit; I don't need to read any more pages. With this feature I might read 10 pages in one I/O, pull all of their contents into shared buffers, only to find my result in the first page (essentially wasting disk bandwidth, memory, etc.).
It is going to be interesting to balance this out.