
Rishil Suresh Tyayil
Sep 03, 2025
PostgreSQL supports three main isolation levels: `READ COMMITTED`, `REPEATABLE READ`, and `SERIALIZABLE`. Each level builds upon the previous one by offering stronger guarantees around data visibility and consistency, but at a potential cost to concurrency and performance.
`READ COMMITTED` is PostgreSQL’s default and fastest isolation level. It ensures that a transaction sees only committed data, but since each statement gets a fresh snapshot, anomalies like non-repeatable reads and phantom reads are possible. It’s generally not recommended for production systems unless you have a strong reason to allow those anomalies.
`REPEATABLE READ` offers a stronger guarantee and is the isolation level most commonly used in production systems. It prevents non-repeatable reads and effectively eliminates phantom reads in PostgreSQL. While it does not guard against every possible anomaly (such as write skew), these issues typically arise only in niche or highly specialized workloads. For the vast majority of applications, `REPEATABLE READ` provides both safety and performance.
`SERIALIZABLE` is the strictest isolation level. It simulates true serial execution of transactions using Serializable Snapshot Isolation (SSI), which tracks read and write dependencies between transactions to detect and prevent dangerous conflicts. While this level provides the highest degree of correctness, it may result in transaction rollbacks under high contention and thus requires careful application design to manage retries gracefully.
If terms like non-repeatable read, phantom read, write skew, or MVCC sound unfamiliar, don’t worry: this post breaks them down step by step with simple explanations and PostgreSQL-specific examples.
The choice of isolation level in PostgreSQL is ultimately a trade-off between consistency and concurrency. Understanding how MVCC, visibility rules, and snapshot timing affect isolation guarantees is key to making informed architectural decisions for reliable and performant PostgreSQL-backed applications.
- Introduction to Transaction Isolation Levels
- What Are Transactions in Databases?
- Understanding ACID: The Foundation of Transactions
- So, Where Do Isolation Levels Come Into Play?
- Behind the Scenes of Transaction Isolation: The Anomalies That Define It
- The Story Behind Isolation Levels
- What Are These Anomalies? (And how do they manifest in PostgreSQL?)
- Dirty Reads (Not possible in PostgreSQL)
- Non-Repeatable Reads
- Phantom Reads
- Wait, Is Repeatable Read Really Repeatable?
- Why the mismatch?
- PostgreSQL’s Serialization Anomaly
- Repeatable Read
- Performance Impact
- Trade-Offs
- Serializable
- Performance Impact
- Trade-Offs
- Key Takeaways: Isolation Levels in PostgreSQL, Demystified
Introduction to Transaction Isolation Levels
What Are Transactions in Databases?
Imagine you’re withdrawing money from an ATM. You insert your card, enter the amount, and wait for the cash to dispense. Behind the scenes, several operations happen:
- Verifying your balance
- Debiting your account
- Recording the transaction in logs
From your perspective, this is one action, but technically it’s a group of operations bundled together. This bundle is called a transaction.
Definition: A transaction in a database is a sequence of one or more operations (like INSERT, UPDATE, DELETE, SELECT) that are executed as a single logical unit of work. The essential point of a transaction is that it bundles multiple steps into a single, all-or-nothing operation. The intermediate states between the steps are not visible to other concurrent transactions, and if some failure occurs that prevents the transaction from completing, then none of the steps affect the database at all.
Understanding ACID: The Foundation of Transactions
To ensure transactions behave safely and predictably, databases adhere to a principle called ACID. It’s like a checklist that guarantees your data doesn’t go rogue.
Let’s unpack each letter with a fun analogy:
- Atomicity – “All or Nothing”
Like sending a rocket into space: if any component fails, the whole mission is aborted.
In databases, either every operation in a transaction succeeds, or everything is rolled back. There’s no halfway update that leaves your data in a broken state.
- Consistency – “The Rules Must Always Be Followed”
Think of it like playing chess: you can’t suddenly decide knights can move diagonally.
A consistent database means it always adheres to defined rules: constraints, data types, triggers, etc. A transaction must take the database from one valid state to another, not violate rules in between.
- Isolation – “Transactions in Their Own Bubble”
Like runners in their own lanes - they might start at the same time, but don’t bump into each other.
Multiple transactions can run at the same time, but isolation ensures they don’t interfere in a way that corrupts results. We’ll explore this in depth shortly; it’s the heart of this blog.
- Durability – “What’s Written Stays Written”
Like carving into stone - once it’s done, it won’t fade.
Once a transaction is committed, its effects survive crashes, power loss, or system restarts. PostgreSQL ensures this with write-ahead logging (WAL).
If you would like to dive deeper into the theory behind ACID transactions, there are some timeless resources worth checking out. A classic starting point is Jim Gray’s research paper, The Transaction Concept: Virtues and Limitations, which laid down the very principles that still define database reliability today.
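To make atomicity and consistency concrete, here is a minimal SQL sketch against the `account` table used later in this post (the constraint name is illustrative, not from the original schema):

```sql
-- Consistency: encode the "no negative balance" rule as a CHECK constraint,
-- so no transaction can take the table into an invalid state.
ALTER TABLE account
  ADD CONSTRAINT balance_non_negative CHECK (balance >= 0);

-- Atomicity: every statement between BEGIN and ROLLBACK is undone together.
BEGIN;
UPDATE account SET balance = balance - 100 WHERE account_number = 'ACC_1';
-- Suppose a later step fails, so we abort; the debit above never becomes
-- visible to any other transaction.
ROLLBACK;
```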
So, Where Do Isolation Levels Come Into Play?
Let’s zoom in on the “I” - Isolation.
In an ideal world, every transaction would run one after the other. But in reality, we want speed and performance. That means running many transactions concurrently.
Here’s where the challenge starts. Without isolation, these parallel transactions can:
- See each other’s uncommitted changes (aka dirty reads)
- Overwrite each other’s data
- See different data on repeated reads
So isolation levels are a way to define how “separate” each transaction should be from the others.
Think of isolation levels as traffic rules for transactions:
- Some allow free merging, like roundabouts.
- Some require strict stops, like traffic lights.
By offering multiple isolation levels, PostgreSQL gives you flexibility to balance performance vs consistency, depending on what your application needs.
In the upcoming sections, we’ll break down each isolation level, see what anomalies they allow or prevent, and even demonstrate them in a live terminal setup.
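In PostgreSQL, choosing a level is a one-liner. Here is a quick sketch of the syntax used throughout the demos below:

```sql
-- Set the level when opening a transaction:
BEGIN ISOLATION LEVEL REPEATABLE READ;
-- ... your statements ...
COMMIT;

-- Or inside an already-open transaction, before its first query:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
COMMIT;

-- Inspect the current and default levels:
SHOW transaction_isolation;
SHOW default_transaction_isolation;
```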
Behind the Scenes of Transaction Isolation: The Anomalies That Define It
The Story Behind Isolation Levels
Imagine you’re building a banking app. You define a simple invariant: “No account balance should fall below zero.” Now, you wrap every operation like withdrawals, transfers, and deposits inside a transaction. Great! But what happens when multiple users perform operations at the same time? That’s where transaction isolation enters the stage.
The ANSI SQL committee formalized transaction isolation in SQL 92 with the goal of ensuring that even if transactions are run concurrently, the final outcome should be the same as if they had been run one at a time, in some order.
This ideal is known as:
Serializable Isolation - Every concurrent execution appears as if the transactions were executed serially, one after another.
Sounds great, right?
Back then, serializability was thought to be too expensive to implement efficiently. So instead of enforcing the ideal, the ANSI committee settled on this compromise: Let’s define a few levels of isolation, each allowing some degree of “weirdness” and clearly define the types of anomalies each one permits.
This gave birth to isolation levels:
- Read Uncommitted
- Read Committed
- Repeatable Read
- Serializable
These levels are defined by the anomalies they allow or prevent:
| Isolation Level | Dirty Reads | Non-repeatable Reads | Phantom Reads |
|---|---|---|---|
| Read Uncommitted | Yes | Yes | Yes |
| Read Committed | No | Yes | Yes |
| Repeatable Read | No | No | Yes |
| Serializable | No | No | No |
What Are These Anomalies? (And how do they manifest in PostgreSQL?)
Each anomaly represents an undesirable effect that can happen when transactions interleave incorrectly.
We’ll go through each anomaly below with real-world SQL examples you can try in PostgreSQL using two terminals (to simulate concurrent sessions).
The following section will use a simplified database schema as the reference for illustrating all SQL queries on the specified table:
CREATE TABLE account(
id SERIAL PRIMARY KEY,
user_id TEXT NOT NULL,
account_number TEXT NOT NULL,
balance INT NOT NULL
);
Additionally, the records currently present in this table are:
bank_database> select * from account;
+----+--------------+----------------+---------+
| id | user_id | account_number | balance |
|----+--------------+----------------+---------|
| 1 | CUST_1 | ACC_1 | 500 |
+----+--------------+----------------+---------+
Dirty Reads (Not possible in PostgreSQL)
Definition: A dirty read occurs when a transaction reads data that has been modified by another transaction but not yet committed. If the first transaction is rolled back, the data read by the second transaction will be invalid.
In other DBs (like SQL Server with `READ UNCOMMITTED`), it might look like this:
| Session A (Terminal 1) | Session B (Terminal 2) |
|---|---|
| BEGIN; | - |
| - | BEGIN; |
| UPDATE account SET balance = balance - 100 WHERE account_number = 'ACC_1'; | - |
| -- Don’t commit yet | - |
| - | SELECT balance FROM account WHERE account_number = 'ACC_1'; |
| - | -- You see a balance of 400 even though the transaction hasn’t been committed |
But in PostgreSQL, Session B will always see the committed version, not the uncommitted one (in this case, the balance will still be 500).
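You can verify this yourself. PostgreSQL accepts `READ UNCOMMITTED` for SQL-standard compatibility, but per the documentation it behaves identically to `READ COMMITTED`:

```sql
-- PostgreSQL parses this, but treats it as READ COMMITTED internally:
BEGIN ISOLATION LEVEL READ UNCOMMITTED;
SELECT balance FROM account WHERE account_number = 'ACC_1';
-- Still returns only committed data (500 here); dirty reads never occur.
COMMIT;
```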
Non-Repeatable Reads
Definition: A transaction re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read).
| Session A (Terminal 1) | Session B (Terminal 2) |
|---|---|
| BEGIN; | - |
| - | BEGIN; |
| SELECT balance FROM account WHERE account_number = 'ACC_1'; | - |
| -- You see a balance of 500 and pause here | - |
| - | UPDATE account SET balance = 300 WHERE account_number = 'ACC_1'; |
| - | COMMIT; |
| - | -- Pause here |
| SELECT balance FROM account WHERE account_number = 'ACC_1'; | - |
| -- You see a balance of 300 this time! | - |
In Session A, we read the balance of account `ACC_1` expecting it to stay the same throughout our transaction. But by the time we re-read, the value has changed because of a concurrent transaction (Session B). This violates consistency assumptions. Imagine you’re checking whether a user has enough balance (e.g., 500) to process a withdrawal. You see 500 initially, but when you proceed with the operation based on that data, the balance has already changed behind your back: you might overdraw or make decisions on stale data.
- This happens at the `READ COMMITTED` isolation level in PostgreSQL. PostgreSQL’s MVCC (Multi-Version Concurrency Control) gives each query a fresh snapshot of the latest committed data, so the first SELECT sees the old version (before Session B’s commit) and the second sees the new version. (If you would like a deeper dive into how MVCC works under the hood, PostgreSQL’s official documentation provides an excellent introduction to Multi-Version Concurrency Control.)
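If you must act on a value that cannot change underneath you while staying at `READ COMMITTED`, one common mitigation (an addition beyond the scenario above, not the only fix) is to lock the row with `SELECT ... FOR UPDATE`:

```sql
BEGIN;
-- Lock the row: concurrent writers now block until we commit or roll back.
SELECT balance FROM account WHERE account_number = 'ACC_1' FOR UPDATE;
-- The value we just read cannot be changed by another session before our
-- own write below, so the withdrawal decision is based on current data.
UPDATE account SET balance = balance - 100 WHERE account_number = 'ACC_1';
COMMIT;
```

The trade-off is reduced concurrency: writers to that row queue behind the lock.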
Performance Impact
- Minimal overhead because each query takes a fresh snapshot, avoiding complex locks.
- Faster query execution since no predicate locks or range locks are required.
- Allows higher transaction throughput as readers are never blocked by writers.
- Reduced risk of deadlocks as transactions don’t hold locks across multiple statements.
- Optimized for OLTP workloads where quick, consistent visibility of committed data is preferable over strict repeatability.
Trade-Offs
- Data instability within a single transaction - reading the same row twice can yield different results.
- Applications need to be designed to tolerate or detect data changes mid-transaction.
- Potentially inconsistent business decisions if application logic depends on stable row values between statements.
- Harder to debug intermittent anomalies caused by changes in data between reads.
- Not suitable for scenarios needing strict consistency across multi-row or repeat reads within a long transaction.
Phantom Reads
Definition: A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.
| Session A (Terminal 1) | Session B (Terminal 2) |
|---|---|
| BEGIN ISOLATION LEVEL READ COMMITTED; | - |
| - | BEGIN ISOLATION LEVEL READ COMMITTED; |
| - | SELECT * FROM account WHERE balance > 300; |
| - | -- Only one record is returned |
| - | -- Pause here |
| INSERT INTO account (id, user_id, account_number, balance) VALUES (2,'CUST_2','ACC_2',400); | - |
| COMMIT; | - |
| -- Pause here | - |
| - | SELECT * FROM account WHERE balance > 300; |
| - | -- Now two records are returned! |
Initially, it returns just one row. But after Session A inserts a new record with a balance of 400 and commits, running the same query again in Session B returns two rows. Even though Session B is inside a single transaction, the same query gives different results. That breaks the assumption that reads within a transaction are stable. This inconsistency is known as a phantom read: a new “phantom” row appears between two reads of the same condition within the same transaction.
Why This Happens:
- Each SQL statement gets a fresh snapshot of the database at the start of the statement, reflecting new commits since the previous statement.
- Row-level locks or range locks are not used for simple reads, so concurrent modifications are possible and visible within the same transaction.
- This design allows phantom reads in `READ COMMITTED` mode, as SQL queries (specifically SELECT queries) executed later in the same transaction can see a different set of rows than queries executed earlier.
- In PostgreSQL, phantom reads are only possible at the `READ COMMITTED` isolation level. They are not possible at the `REPEATABLE READ` and `SERIALIZABLE` isolation levels.
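To see the contrast, here is a sketch of Session B’s side re-run under `REPEATABLE READ`, with comments describing what Session A does concurrently:

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM account WHERE balance > 300;
-- One row returned. The snapshot is frozen at this first query.

-- (Meanwhile, Session A inserts ACC_2 with balance 400 and commits.)

SELECT * FROM account WHERE balance > 300;
-- Still one row: ACC_2 does not exist in this transaction's snapshot,
-- so the phantom never appears.
COMMIT;
```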
Wait, Is Repeatable Read Really Repeatable?
Before we dive into PostgreSQL’s serialization anomaly, here’s something that often confuses developers:
In PostgreSQL, REPEATABLE READ is not equivalent to SQL92’s REPEATABLE READ.
Yep, that’s not a typo.
In the SQL92 standard, `REPEATABLE READ` prevents non-repeatable reads but not phantom reads. In PostgreSQL, however, `REPEATABLE READ` prevents both non-repeatable reads and phantom reads. In other words, PostgreSQL’s `REPEATABLE READ` is closer to SQL92’s `SERIALIZABLE` (except for one very important detail: it doesn’t guarantee true serial execution). If you would like to dive deeper into the classic isolation levels, the influential research paper A Critique of ANSI SQL Isolation Levels is a highly recommended read, as it has shaped much of the modern understanding of transaction isolation.
Why the mismatch?
PostgreSQL uses MVCC (Multi-Version Concurrency Control) to implement isolation. Because of that, it doesn’t rely on locking to block phantoms; instead, it shows each transaction a frozen snapshot of the database at the moment it begins. Any new rows inserted by other transactions? They won’t exist in your snapshot. Problem solved. But that snapshot trick also means PostgreSQL’s `REPEATABLE READ` has its limits. It does not prevent serialization anomalies: the kind where two valid transactions interleave and produce an invalid result.
PostgreSQL’s Serialization Anomaly
Definition: The result of successfully committing a group of transactions is inconsistent with all possible orderings of running those transactions one at a time.
A serialization anomaly occurs when the concurrent execution of transactions results in a state that cannot be produced by any serial (one-at-a-time) execution of those same transactions. In other words, interleaving operations from multiple transactions lead to inconsistencies or violations of application constraints that would never happen if the transactions were run one after the other, sequentially.
Let’s look at an example to understand this more clearly. We’ll use the same table we have been using so far and compare how the `REPEATABLE READ` and `SERIALIZABLE` isolation levels behave differently in the same scenario.
Assume a business rule:
At least one customer should have the prefix `CUST` in the `user_id` column, while all remaining customers should have the prefix `PM_CUST`.
The current records in the `account` table are:
bank_database> SELECT * FROM account;
+----+--------------+----------------+---------+
| id | user_id | account_number | balance |
|----+--------------+----------------+---------|
| 1 | CUST_1 | ACC_1 | 500 |
| 2 | CUST_2 | ACC_2 | 400 |
+----+--------------+----------------+---------+
SELECT 2
Time: 0.011s
Now suppose two admins, each in separate sessions, inspect two different customers with the `CUST` prefix and independently decide to update the remaining customers’ `user_id`s to use the `PM_CUST` prefix.
Repeatable Read
| Session A (Terminal 1) | Session B (Terminal 2) |
|---|---|
| BEGIN ISOLATION LEVEL REPEATABLE READ; | - |
| - | BEGIN ISOLATION LEVEL REPEATABLE READ; |
| SELECT user_id FROM account WHERE id = 1; | - |
| -- Verify whether the CUST prefix exists - it exists | - |
| - | SELECT user_id FROM account WHERE id = 2; |
| - | -- Verify whether the CUST prefix exists - it exists |
| UPDATE account SET user_id = 'PM_' \|\| user_id WHERE id = 2; | - |
| -- Prefix the other record with PM_ | - |
| - | UPDATE account SET user_id = 'PM_' \|\| user_id WHERE id = 1; |
| - | -- Prefix the other record with PM_ |
| COMMIT; | - |
| -- Successful | - |
| - | COMMIT; |
| - | -- Successful |
So what happened here?
- Both transactions take a snapshot of the database at their start.
- Each sees the other’s `user_id` as still having the `CUST` prefix.
- Both decide to set the other user’s `user_id` to the `PM_` prefix.
- Both transactions commit successfully.
Result: Both records now have the `PM_CUST` prefix, violating the rule that “at least one customer should have the prefix `CUST` in the `user_id` column.”
This anomaly, called write skew, is possible because each transaction’s snapshot does not see the uncommitted changes made by the other, nor does PostgreSQL prevent such concurrent updates which, combined, violate a constraint. In other words, write skew arises when two database transactions read the same data, make decisions based on that data, and then update different records, resulting in an inconsistent state that violates business rules.
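One way to rule out this particular write skew without moving to `SERIALIZABLE` (an addition beyond the scenario above) is to lock every row the decision depends on, not just the row being written:

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;
-- Lock all rows whose prefix the decision is based on. The second admin's
-- identical SELECT ... FOR UPDATE now blocks until this transaction ends.
SELECT user_id FROM account WHERE user_id LIKE 'CUST%' FOR UPDATE;
UPDATE account SET user_id = 'PM_' || user_id WHERE id = 2;
COMMIT;
-- Under REPEATABLE READ, the blocked session then fails with
-- "could not serialize access due to concurrent update" and must retry;
-- on retry it sees the PM_ prefix and leaves the invariant intact.
```

The cost is that the two admins’ transactions are serialized against each other on those rows, and one of them must handle the error and retry.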
Performance Impact
- Uses a consistent snapshot taken at the start of the transaction for all queries, allowing repeatable reads without extra locking overhead for predicate/range locks.
- Generally performs better than `SERIALIZABLE` since it does not monitor complex inter-transaction dependencies.
- Fewer transaction aborts compared to `SERIALIZABLE`, and concurrency is higher.
- Suitable for workloads requiring consistent reads but with moderate isolation guarantees.
- Allows higher throughput and lower latency than `SERIALIZABLE` in many use cases.
Trade-Offs
- Does not fully prevent serialization anomalies like write skew; subtle race conditions may violate application-level constraints.
- Transactions may commit successfully even if their combined effects violate invariants that `SERIALIZABLE` would catch.
- Requires developers to handle potential anomalies at the application level or tolerate some inconsistencies.
- Does not guarantee strict serializability, which might be insufficient for critical systems needing complete correctness.
- Less complexity and overhead, but weaker isolation guarantees.
Serializable
| Session A (Terminal 1) | Session B (Terminal 2) |
|---|---|
| BEGIN ISOLATION LEVEL SERIALIZABLE; | - |
| - | BEGIN ISOLATION LEVEL SERIALIZABLE; |
| SELECT user_id FROM account WHERE id = 1; | - |
| -- Verify whether the CUST prefix exists - it exists | - |
| - | SELECT user_id FROM account WHERE id = 2; |
| - | -- Verify whether the CUST prefix exists - it exists |
| UPDATE account SET user_id = 'PM_' \|\| user_id WHERE id = 2; | - |
| -- Prefix the other record with PM_ | - |
| - | UPDATE account SET user_id = 'PM_' \|\| user_id WHERE id = 1; |
| - | -- Prefix the other record with PM_ |
| COMMIT; | - |
| -- Successful | - |
| - | COMMIT; |
| - | -- Failed with serialization error |
So what happened here?
- PostgreSQL detects the conflicting dependency between the two transactions when they attempt to commit.
- It recognizes that allowing both commits would violate the serializability guarantee.
- PostgreSQL aborts one of the transactions with a serialization error.
- The aborted transaction must retry. On retry, it will see the updated data and prevent the violation.
Result: Violation is prevented.
NOTE: As mentioned in the PostgreSQL documentation:
Applications using this level must be prepared to retry transactions due to serialization failures.
So if you are using the `SERIALIZABLE` isolation level, you need to be prepared to retry transactions that fail due to serialization errors.
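As a sketch of what that failure looks like in practice (the error text is abbreviated from PostgreSQL’s actual message), the losing session’s commit fails with SQLSTATE 40001, and the client is expected to re-run the whole transaction:

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE account SET user_id = 'PM_' || user_id WHERE id = 1;
COMMIT;
-- ERROR:  could not serialize access due to read/write dependencies
--         among transactions
-- (SQLSTATE 40001, serialization_failure)

-- Retry: re-run the transaction from the top. This attempt starts with a
-- fresh snapshot, sees the other transaction's committed change, and the
-- application logic can re-check the business rule before writing.
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT user_id FROM account WHERE id = 1;
COMMIT;
```

In application code, this usually means catching SQLSTATE 40001 in a loop with a bounded number of retries.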
Performance Impact
- Provides the strongest isolation guarantees by ensuring transaction outcomes are equivalent to some serial execution order.
- Uses Serializable Snapshot Isolation (SSI), tracking read/write dependencies to detect conflicts.
- Increased overhead compared to Repeatable Read due to monitoring dependencies and potential for more blocking.
- More frequent transaction aborts and retries, increasing latency especially under high concurrency or write-intensive workloads.
- Suitable for applications requiring strict correctness and data integrity (e.g., financial, auditing systems).
Trade-Offs
- The constant need to monitor conflicts and abort transactions makes it CPU- and memory-intensive, expensive, and unpredictable under load.
- Lower throughput compared to `REPEATABLE READ`, especially in high-concurrency scenarios with conflicting transactions.
- Increased complexity for application logic due to the need to handle transaction serialization failures and implement retry mechanisms.
- May lead to a degraded user experience or reduced system throughput if many retries are needed.
- Strong guarantees come at the cost of potentially greater contention and reduced scalability. While `SERIALIZABLE` offers the strongest guarantees, in practice it is rarely the right choice for most applications.
- For most workloads, `REPEATABLE READ` strikes a better balance between safety and performance, making `SERIALIZABLE` feel like overkill outside of niche domains (e.g., financial ledgers, auditing, or compliance-critical systems).
Key Takeaways: Isolation Levels in PostgreSQL, Demystified
Here’s a quick recap of what we’ve covered:
| Isolation Level | Dirty Read | Non-repeatable Read | Phantom Read | Serialization Anomaly |
|---|---|---|---|---|
| Read uncommitted | Allowed, but not in PostgreSQL | Possible | Possible | Possible |
| Read committed | Not possible | Possible | Possible | Possible |
| Repeatable read | Not possible | Not possible | Allowed, but not in PostgreSQL | Possible |
| Serializable | Not possible | Not possible | Not possible | Not possible |
- `Read Committed` is PostgreSQL’s default and relatively fast compared to other isolation levels, but it still allows anomalies like non-repeatable reads and phantoms. It’s generally not recommended for production systems unless you have a strong reason to allow those anomalies.
- `Repeatable Read` is the go-to choice for most real-world projects. In PostgreSQL, it prevents non-repeatable and phantom reads (unlike SQL92) but can still allow serialization anomalies such as write skew.
- `Serializable` provides the strongest correctness guarantees by enforcing serializable execution, but it comes with high CPU/memory overhead, more transaction aborts, and the need to implement retry logic. It’s usually reserved for niche cases (e.g., strict financial/auditing apps) where correctness trumps performance.
Isolation levels are not just database jargon: they shape how your app behaves under pressure, how consistent your data stays, and how reliable your business logic is. Whether you’re building fintech software, multi-user collaboration tools, or simply aiming to avoid a nasty race condition, choosing the right isolation level matters. Transactions are like conversations: what’s seen, said, or overheard depends entirely on the room you’re in. Choose the room wisely.
Until Next Time - Keep Your Transactions Clean!
Happy debugging, clean committing, and as always - keep learning.