Cutting Edge or at the Sharp End – How to Stay Ahead Around Your Infrastructure

By Alastair Turner of Percona

Every year in September, Postgres releases an updated version containing the latest work by the community, and shortly afterwards comes the final release of the retiring version. In 2025, this was Postgres 18.0 on September 25 and the final patch for the oldest supported version, 13.23, on November 13. For those who want to be at the cutting edge, the latest version of Postgres is hugely exciting. But for many organisations, this is not currently actionable information.

While the IT sector has a constant thirst for innovation, the reality for most businesses is that working infrastructure beats new potential. Too many IT teams have been bitten by unforeseen issues in version x.0 releases that promised huge improvements but had not been thoroughly tested in real-world environments. Typically, policies require the use of software one version behind the latest release. After the launch of Postgres 18, these ‘N-1 policies’ would only now allow the deployment of Postgres 17, which has had a year to mature. Similarly, companies should have completed all their upgrades of Postgres 13 to a newer version before it reached end of life.

In practice, this means that 2026 will be the year of upgrading Postgres 14 systems and rolling out Postgres 17. So, how can teams get ready for this work in 2026, and what are the potential gaps that could affect your success?

Pay attention to the details

In the spirit of eating the Brussels sprouts first at Christmas dinner, start with the few things which need to be changed in applications before their databases can be upgraded past Postgres 14. One example is that Python 2 was removed from the list of supported procedural languages in Postgres 15. Python 2 support has been discontinued in most operating system distributions, but it may still be present in systems which have been left alone since installation. Any Python procedural code in these systems will have to be updated to Python 3 before the upgrade takes place. Auditing your code – and the tools that your developers use – is therefore a worthwhile preparatory exercise for any migration, as this can easily be overlooked.
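A query against the system catalogues can surface this ahead of time. The sketch below, run in each database, lists any stored functions still written in the Python 2 variants of PL/Python:

```sql
-- List functions written in the Python 2 procedural languages
-- (plpythonu and plpython2u), which Postgres 15 removed.
SELECT n.nspname AS schema,
       p.proname AS function_name,
       l.lanname AS language
FROM pg_proc p
JOIN pg_language  l ON l.oid = p.prolang
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE l.lanname IN ('plpythonu', 'plpython2u');
```

An empty result means no Python 2 routines stand between that database and the upgrade.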

Alongside the database update, you may also have to change the operating system version that your instances run on. If the operating systems on the database hosts are upgraded along with Postgres, changes in the version of GNU libc may change the collation order for character fields. Changes in sort order can corrupt indexes on text fields, leading to incorrect query results or duplicate values in unique indexes. Affected indexes are easy to identify and fix, but rebuilding indexes at upgrade time will extend the upgrade window.
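One way to find candidates, sketched below, is to look for indexes that record a dependency on a libc-provided collation. The caveat is that indexes relying on the database's default collation do not record such a dependency, so they need to be reviewed separately:

```sql
-- Indexes with a recorded dependency on a libc ('c' provider) collation.
-- Indexes using only the database default collation will not appear here.
SELECT DISTINCT d.objid::regclass AS index_name,
       c.collname
FROM pg_depend d
JOIN pg_collation c ON c.oid = d.refobjid
WHERE d.classid    = 'pg_class'::regclass
  AND d.refclassid = 'pg_collation'::regclass
  AND c.collprovider = 'c';
```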

Using collations from the International Components for Unicode (ICU) libraries, indexes can be rebuilt with the target collation ahead of time, providing stable sorting across the upgrade. While building the index will have a performance impact, it can be done without disrupting access to the affected tables through Postgres’s CREATE INDEX CONCURRENTLY command. This non-blocking command builds the new index alongside normal operations; once it is valid, the old index can be dropped, trading longer preparation time for a shorter upgrade window.
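As a sketch, assuming a hypothetical customers table with an existing libc-collated index on its surname column, the swap might look like this:

```sql
-- Define an ICU collation that sorts identically before and after
-- the operating system upgrade (all names here are illustrative).
CREATE COLLATION IF NOT EXISTS en_icu (provider = icu, locale = 'en-US');

-- Build the replacement index without blocking reads or writes.
CREATE INDEX CONCURRENTLY customers_surname_icu_idx
    ON customers (surname COLLATE en_icu);

-- Once the new index is valid, retire the libc-collated one.
DROP INDEX CONCURRENTLY customers_surname_idx;
```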

Alongside the tooling and platform work, any upgrade project should include an audit of potential security changes. Many new versions of Postgres have tightened the default security posture of databases created on that version. However, for existing databases that are upgraded in place, the older, more permissive defaults will persist. This reduces the chance of applications failing on newly upgraded databases due to missing permissions, at the cost of making it harder to reason about risk, because not all systems on the same Postgres version will have the same posture.

As an example, in Postgres 15, access to the default public schema changed in two ways – the permission for all users to create objects in the schema was revoked, and ownership of the schema was assigned to the pg_database_owner built-in role. It is worth updating the permissions on existing databases – and the applications that depend on them – so that the permissive values don’t remain in use across upgrades long after the reason they existed is forgotten. By understanding and applying those changes over time, you also make life easier for your developers and your security team by reducing the number of configurations they have to support.
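Bringing an in-place-upgraded database in line with the Postgres 15 defaults takes two statements, run in each database:

```sql
-- Match the Postgres 15 defaults for the public schema:
-- ordinary users may no longer create objects in it...
REVOKE CREATE ON SCHEMA public FROM PUBLIC;

-- ...and it is owned by the pg_database_owner built-in role.
ALTER SCHEMA public OWNER TO pg_database_owner;
```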

The long term benefit

Upgrades are not all pain, no gain. There are new features and changes to improve life for administrators and application developers, as well as a performance gain in going from one version to the next. Postgres 17 offers a few in each category. For administrators there’s a significant reduction in the memory usage for VACUUM, allowing a single pass to clean up far more dead tuples from indexes and release space for reuse faster. Incremental backups with pg_basebackup also allow for shorter backup windows.

For those wanting more control over user sessions, there are event triggers on user login (and a configuration parameter to disable them for a superuser connection if something has gone wrong). For developers who love their JSON, there is a function (JSON_TABLE()) to present JSON data as a virtual table accessible with regular SQL queries.
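As a brief illustration of the latter, JSON_TABLE() maps a JSON document onto rows and columns that the rest of the query can treat as an ordinary table (the document and column names here are invented for the example):

```sql
-- Turn an array of JSON objects into a two-column result set.
SELECT jt.*
FROM JSON_TABLE(
    '[{"name": "Ada", "age": 36}, {"name": "Brian", "age": 41}]',
    '$[*]'
    COLUMNS (
        name TEXT PATH '$.name',
        age  INT  PATH '$.age'
    )
) AS jt;
```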

Dealing with these changes will be a significant project for many teams as they manage migrations alongside keeping their databases running consistently and performing to service levels. However, if you have any time available in 2026, there is one area of preparation for Postgres 18 which may be worth looking into early. Asynchronous IO in Postgres 18 provides significant performance gains for some read-intensive workloads. The feature also changes the impact of the effective_io_concurrency configuration parameter significantly. Up to Postgres 17, effective_io_concurrency controlled the prefetch distance on systems with prefetch advice support. High values showed diminishing returns, but the cost of setting the value too high was just a small increase in CPU utilisation.

From Postgres 18, effective_io_concurrency affects how database processes request data from IO workers or the Linux kernel’s asynchronous IO buffers. Values which work well on Postgres 17 for devices supporting many concurrent IO requests (like NVMe drives) may be in the hundreds. This is likely to be too high on a Postgres 18 system, driving up IO latency for all queries. Systems with non-default values for effective_io_concurrency are likely to already be pushing the limits of their IO in some way, so getting the new settings right will be important.
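A cautious approach, sketched below, is to reset the parameter towards the Postgres 18 default and benchmark upwards from there, rather than carrying a Postgres 17 value across:

```sql
-- Start from a modest value (16 is the Postgres 18 default) and
-- raise it only if benchmarks on your own workload show a benefit.
ALTER SYSTEM SET effective_io_concurrency = 16;
SELECT pg_reload_conf();
```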

Every year, new releases come out that promise more performance, more functionality, and more efficiency. However, the work needed to realise those promised gains is significant and should not be underestimated. To make the most of the work that the community has put in, plan ahead and understand the changes that are involved. By looking at these issues in advance, you can keep ahead of the update curve, but not too far ahead.