The role of Postgres in a heavily digitised world

Postgres: the database that was purpose-built for a data-dense world

More data is generated today than ever before, and the pace is accelerating. I recently participated in a virtual roundtable where senior technology leaders discussed the evolving role of database management systems (DBMS).

As part of the online event, members of the PostgreSQL community discussed the growing relevance of Postgres and how the popular open-source DBMS fits into today’s heavily digitised, cloud-centric landscape.

An extensible core for data 

The shift to digital customer interactions has pushed databases to the forefront of the IT discussion. Where we once had people manually keying in data at office terminals, we now have data streaming in from a myriad of sources: web browsers, mobile apps, IoT devices, documents, GPS coordinates, and social media platforms. The need to store and analyse data has never been greater.

Fortunately, the groundwork laid decades ago with the initial release of Postgres means it is up to the task of today’s data-dense environment. The core database engine, designed in 1986, was created to be extensible. Even the integer data type is defined in the system tables, not hard-coded into the executable.
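You can see this in any Postgres session: the built-in types live in the pg_type system catalog alongside anything a user defines.

```sql
-- Even the four-byte integer is an ordinary catalog entry,
-- not a value hard-coded into the server binary.
SELECT typname, typlen, typtype
FROM   pg_type
WHERE  typname = 'int4';
```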

Let’s say we want to add support for JSON (JavaScript Object Notation). We don’t have to modify 20 different settings or touch hundreds of places in the code to implement it. If we want to add a new indexing method for a data warehouse, a single SQL command can define the new index method.
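As a minimal sketch (the table and index names here are illustrative), both JSON storage and index methods are ordinary SQL:

```sql
-- jsonb is a built-in type; no engine changes are needed to use it.
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    payload jsonb
);

-- Index the JSON documents with the GIN access method.
CREATE INDEX events_payload_idx ON events USING gin (payload);

-- New index methods are registered the same way. The bloom
-- extension shipped in contrib, for example, installs its index
-- method with a statement like this in its setup script:
CREATE EXTENSION bloom;
--   CREATE ACCESS METHOD bloom TYPE INDEX HANDLER blhandler;
```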

If we want to add a new aggregate, like sum() or count() for a new data type, or want to create a new data type altogether, we can simply run the relevant SQL command. Postgres has had this extensible design from the beginning, and it has carried through to today, where it has become extremely useful.
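For instance, here is a minimal sketch of both, using hypothetical names (product, numeric_mul_accum, mood):

```sql
-- State-transition function for a product aggregate, in plain SQL.
CREATE FUNCTION numeric_mul_accum(numeric, numeric)
RETURNS numeric
LANGUAGE sql IMMUTABLE
AS 'SELECT $1 * $2';

-- Register the aggregate itself.
CREATE AGGREGATE product (numeric) (
    SFUNC    = numeric_mul_accum,  -- how each row updates the state
    STYPE    = numeric,            -- type of the running state
    INITCOND = '1'                 -- multiplicative identity
);

-- A brand-new data type is also a single command.
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

-- Usage: integers are implicitly cast to numeric.
SELECT product(n) FROM generate_series(1, 5) AS t(n);  -- returns 120
```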

Postgres everywhere 

Sustained development over so many years has given Postgres a level of reliability and performance that often exceeds users’ expectations. For example, a large credit card firm asked for new scalability capabilities to accommodate their workload; they felt Postgres would be inadequate for their high write volumes.

After thinking it through, I went back to them and said, ‘Okay, we will work on this feature.’ And they didn’t reply. I asked, ‘Why are you so quiet?’ They responded, ‘Well, we thought we needed it, but we ended up just putting it on one machine, and Postgres ran exceedingly well. We won’t need the capabilities we asked for.’

The maturity and quality of the Postgres code have not gone unnoticed: dozens of PostgreSQL-derived DBMSs have appeared over the years, ranging from other open-source projects to proprietary implementations addressing niche requirements. Indeed, even cloud-based database services such as Amazon Aurora and Amazon Redshift were built on the Postgres core.

The sheer breadth of features in Postgres means that an existing deployment can easily be configured to support new use cases such as data warehousing. For instance, an organisation looking to build a repository for analytics can create a replica to serve as the data warehouse while keeping the primary for transactional workloads. This can be done easily in Postgres.
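One way to sketch this is with the built-in logical replication available since Postgres 10; the host, database, and table names below are placeholders:

```sql
-- On the transactional primary (requires wal_level = logical):
CREATE PUBLICATION warehouse_pub FOR TABLE orders, customers;

-- On the replica that will serve as the data warehouse
-- (matching table definitions must already exist there):
CREATE SUBSCRIPTION warehouse_sub
    CONNECTION 'host=primary.example.com dbname=sales user=repl'
    PUBLICATION warehouse_pub;

-- Heavy analytical queries can now run against the warehouse copy
-- without competing with transactional traffic on the primary.
```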

A springboard to innovation

One common reason for deploying open-source solutions is that they help organisations reduce the cost of IT. However, the most compelling advantage of open-source software is that it serves as a springboard for innovation.

Many people turn to open source because they want to save money. But ask them two years later what they have gained from open source, and money won’t be the first thing they mention. They will tell you it is the innovation, the freedom, the quality, and the ability to customise that make up the true value of going open source. Cost is just one small piece of it.

And while some might have preconceived notions that open-source projects are slow to release new features due to a lack of financial incentives, the reality is that Postgres releases over a hundred new features every year. 

One gentleman from a bank once told me they needed three months for internal testing and could not keep up with the release cadence of Postgres. So we talked about the fact that you may not need to test as much with Postgres, because the quality of the code is significantly higher. With Postgres, you can deploy faster and do not have to work through as many barriers, such as licensing and reliability issues.

Ultimately, the rapid cadence of releases becomes like a “siren song” to leverage new capabilities and innovate. You must upgrade because new features are coming all the time. Proprietary databases can’t do that, but we can, and we are not going to slow down.

Making the move to Postgres

Organisations need to tweak some of their processes to adapt to Postgres, such as moving to a one-month deployment cadence or shortening their testing cycle. To get the maximum benefit, don’t approach Postgres with the same mindset you would apply to other software.

With Postgres, the proof is in the pudding. If Postgres were simply a less expensive DBMS, we would not have events like Postgres Vision 2022 and user groups around the world, and we would not see this growth. Nobody gets excited about saving money more than once. But Postgres has become this amazing platform that people gravitate to.