Why CTEs Are Not Optional, but a Requirement in Financial Systems

You might think a CTE (Common Table Expression) is just SQL syntactic sugar, a nice-to-have feature for readability or for organizing subqueries. In most domains, you might be right.

However, in high-integrity, high-concurrency financial systems such as trading platforms, CTEs become non-negotiable.

To show you why, let's build a minimal demo of an order book matching system. We'll explore the pitfalls of a naive, sequential transaction-based approach, and then see how CTEs maintain atomic correctness even under concurrency.

In the examples below, we'll use PostgreSQL as the database and Drizzle ORM as the ORM layer for demonstration. These concepts apply to any SQL database that supports CTEs and any ORM that supports CTEs and transactions.

The Concurrency Problem

Let's say we have two simple tables:

  • order table to store trade (buy/sell) orders from users
  • trade table to store executed trades against orders

The SQL schema might look like this:

example.sql
CREATE TABLE "order" (
    "id" bigserial PRIMARY KEY NOT NULL,
    "symbol" text NOT NULL,
    "side" text NOT NULL,
    "quantity" numeric NOT NULL,
    "filled_quantity" numeric DEFAULT 0 NOT NULL
);

CREATE TABLE "trade" (
    "id" bigserial PRIMARY KEY NOT NULL,
    "order_id" bigint NOT NULL,
    "trade_quantity" numeric NOT NULL,
    "executed_at" timestamp DEFAULT now() NOT NULL
);

ALTER TABLE "trade"
    ADD CONSTRAINT "trade_order_id_order_id_fk" FOREIGN KEY ("order_id")
    REFERENCES "public"."order"("id") ON DELETE cascade ON UPDATE no action;

In Drizzle ORM, the equivalent definitions look like this:

schema.ts
export const order = pgTable("order", (t) => ({
    id: t.bigserial({ mode: "number" }).primaryKey(),
    symbol: t.text().notNull(),
    side: t.text().$type<"BUY" | "SELL">().notNull(),
    quantity: t.numeric({ mode: "number" }).notNull(),
    filledQuantity: t.numeric({ mode: "number" }).notNull().default(0),
}));

export const trade = pgTable("trade", (t) => ({
    id: t.bigserial({ mode: "number" }).primaryKey(),
    orderId: t
        .bigint({ mode: "number" })
        .notNull()
        .references(() => order.id, { onDelete: "cascade" }),
    tradeQuantity: t.numeric({ mode: "number" }).notNull(),
    executedAt: t.timestamp({ mode: "date" }).notNull().defaultNow(),
}));

Now, suppose we have a service class to handle all trade operations:

trade-service.ts
export class TradeService {
    constructor(private db: DBClient) {}

    async getOrder(id: number) {
        // ...
    }

    async createOrder(data: typeof schema.order.$inferInsert) {
        // ...
    }

    async createTrade(orderId: number, tradeQuantity: number) {
        // ...
    }
}

Let's inspect the createTrade method:

trade-service.ts
async createTrade(orderId: number, tradeQuantity: number) {
    const record =
        // start a transaction
        await this.db.transaction(async (tx) => {
            const [order] = await tx.select().from(schema.order).where(
                eq(schema.order.id, orderId)
            );

            // step 1: check if trade quantity exceeds order unfilled quantity
            if (order.quantity < order.filledQuantity + tradeQuantity) {
                throw new Error("Trade quantity exceeded order quantity");
            }

            // step 2: create trade record and update existing order's filled quantity
            const [trade, _] = await Promise.all([
                tx
                    .insert(schema.trade)
                    .values({ orderId: order.id, tradeQuantity: tradeQuantity })
                    .returning()
                    .then((r) => r[0]),
                tx.update(schema.order).set({
                    filledQuantity: order.filledQuantity + tradeQuantity,
                }).where(eq(schema.order.id, order.id)),
            ]);

            return trade;
        });

    if (!record) throw new Error("Failed to create trade");
    return record;
}

At first glance, this looks correct.

Wrapping the query operations in a transaction ensures both the trade insert and the order update happen atomically.

However, there's a hidden flaw.

Let’s write a quick test to simulate what happens under concurrency.

trade-service.concurrency.spec.ts
describe("concurrent inserts", async () => {
    const { $db } = await getTestDB();
    const service = new TradeService($db);

    it("should insert a single trade", async () => {
        const order = await service.createOrder({
            symbol: "XAU/USD",
            side: "BUY",
            quantity: 1,
        });
        await service.createTrade(order.id, 1);
        const result = await service.getOrder(order.id);
        expect(result.filledQuantity).toBe(1);
    })

    it("should insert 10 trades concurrently", async () => {
        const order = await service.createOrder({
            symbol: "XAU/USD",
            side: "BUY",
            quantity: 10,
        });

        // simulate 10 concurrent trade inserts, each filling 1 quantity
        await Promise.all(
            Array.from({ length: 10 }).map(() => service.createTrade(order.id, 1))
        );

        const result = await service.getOrder(order.id);
        expect(result.filledQuantity).toBe(10);
    })
});

The first test reflects how developers would typically test their code, while the second simulates what actually happens under real-world conditions.

When we run it, we get:

Test Output
❯ src/tests/concurrent-inserts.spec.ts (2 tests | 1 failed) 322ms
  ✅ concurrent inserts > should insert a single trade 14ms
  ❌ concurrent inserts > should insert 10 trades concurrently 24ms
    → expected 1 to be 10 // Object.is equality

- Expected
+ Received

- 10
+ 1

 ❯ src/tests/concurrent-inserts.spec.ts:33:33
     31|
     32|   const result = await service.getOrder(order.id);
     33|   expect(result.filledQuantity).toBe(10);
       |                                 ^
     34|  });
     35| });

We can see that the first test case passed, but the second one failed. The expected filled quantity was 10, but we got 1. What exactly happened here?

This is a classic "lost update" concurrency problem: multiple transactions read the same order record concurrently, so they all see the same initial filled quantity (0 in this case). Each transaction then calculates the new filled quantity independently and writes it back. Because every transaction computed its value from the same stale read, they all write the same result (1), and all but one of the updates are lost, leaving an incorrect final state.

So how do we fix this?

Perhaps we can fix this by referencing the existing column in the update statement:

trade-service.ts
async createTrade(orderId: number, tradeQuantity: number) {
    const record =
        await this.db.transaction(async (tx) => {
        // ...

            const [trade, _] = await Promise.all([
                tx
                    .insert(schema.trade)
                    .values({ orderId: order.id, tradeQuantity: tradeQuantity })
                    .returning()
                    .then((r) => r[0]),
                tx.update(schema.order).set({
-                   filledQuantity: order.filledQuantity + tradeQuantity,
+                   filledQuantity: sql`${schema.order.filledQuantity} + ${tradeQuantity}`,
                }).where(eq(schema.order.id, order.id)),
            ]);

        // ...
        }
    // ...
}

The SQL equivalent of the UPDATE statement:

UPDATE "order"
SET "filled_quantity" = "filled_quantity" + $1
WHERE "id" = $2

This way, each transaction will increment the filled quantity based on the current value in that row, rather than relying on a potentially stale value read earlier.

Now, let's run the tests again.

Test Output
✅ src/tests/concurrent-inserts.spec.ts (2 tests) 318ms
  ✅ concurrent inserts > should insert a single trade 10ms
  ✅ concurrent inserts > should insert 10 trades concurrently 16ms

All tests passed! Problem solved, right? Not quite.

The Pitfalls

While this approach resolves the lost update issue, it introduces a new problem.

Remember the validation step in the createTrade method that checks whether the trade quantity exceeds the order's unfilled quantity?

Let's add a new test case to test whether the validation step still works.

trade-service.concurrency.spec.ts
it("should not exceed trade order's quantity", async () => {
    const order = await service.createOrder({
        symbol: "XAU/USD",
        side: "BUY",
        quantity: 5,
    });

    // simulate 10 concurrent trade creations, each filling 1 quantity, which exceeds the order quantity (max limit)
    await Promise.all(
        Array.from({ length: 10 }).map(() => service.createTrade(order.id, 1)),
    ).catch(() => { /* ignores error */});

    const result = await service.getOrder(order.id);
    expect(result.filledQuantity).toBe(5);
})

When we run the tests again, we get:

Test Output
❯ src/tests/concurrent-inserts.spec.ts (3 tests | 1 failed) 320ms
  ✅ concurrent inserts > should insert a single trade 9ms
  ✅ concurrent inserts > should insert 10 trades concurrently 15ms
  ❌ concurrent inserts > should not exceed trade order's quantity 14ms
    → expected 10 to be 5 // Object.is equality

- Expected
+ Received

- 5
+ 10

 ❯ src/tests/concurrent-inserts.spec.ts:51:33
     49|
     50|   const result = await service.getOrder(order.id);
     51|   expect(result.filledQuantity).toBe(5);
       |                                 ^
     52|  });
     53| });

We can see that the new test failed. The expected filled quantity was 5, but we got 10 instead, exceeding the order quantity.

We can conclude that the validation step is no longer reliable. So what happened here?

Although we fixed the UPDATE statement to prevent lost updates, the root cause remains unaddressed. Multiple transactions still read the same initial filled quantity (0) before any updates are committed. Each transaction independently passes validation and updates the order, allowing the total filled quantity to exceed its limit.

So how do we fix this?

Perhaps we can add a row-level lock to the order record when we read it, hoping that no other transaction can read or modify it until the current transaction completes.

trade-service.ts
async createTrade(orderId: number, tradeQuantity: number) {
    const record =
        await this.db.transaction(async (tx) => {
        const [order] = await tx.select().from(schema.order).where(
            eq(schema.order.id, orderId)
        )
+       .for("update") // this adds a row-level lock `FOR UPDATE`

        // ...
        }
    // ...
}

Let's try running the test again.

Test Output
❯ src/tests/concurrent-inserts.spec.ts (3 tests | 1 failed) 320ms
  ✅ concurrent inserts > should insert a single trade 9ms
  ✅ concurrent inserts > should insert 10 trades concurrently 15ms
  ❌ concurrent inserts > should not exceed trade order's quantity 14ms
    → expected 10 to be 5 // Object.is equality

- Expected
+ Received

- 5
+ 10

 ❯ src/tests/concurrent-inserts.spec.ts:51:33
     49|
     50|   const result = await service.getOrder(order.id);
     51|   expect(result.filledQuantity).toBe(5);
       |                                 ^
     52|  });
     53| });

The test still failed. The filled quantity is still 10. Nothing changed. So what happened here?

This is a common developer mistake.

A row-level lock only prevents other transactions from modifying the same record while the current one is active. It does not prevent other transactions from reading that record before the lock is acquired. In this case, multiple transactions still read the same initial filled quantity (0) before any commit occurs.

At this point, you might be thinking:

Let's just implement an application-level mutex or distributed lock to serialize access to the createTrade method.

That's a fair suggestion. You could serialize access to the createTrade method with an in-memory mutex or a distributed lock using Redis or a message queue.

But those solutions shift the responsibility for concurrency control away from the database, the one layer designed to enforce it reliably.

What if there were a way to validate the order, conditionally update it, and insert the trade record, all within a single atomic operation, without relying on any external locking mechanism?

Enter CTEs (Common Table Expressions)

Let's rewrite the createTrade method using CTEs.

trade-service.ts
async createTrade(orderId: number, tradeQuantity: number) {
        // define a SQL expression for the fillable condition
        const isFillable = sql<boolean>`
            ${schema.order.quantity} >= ${schema.order.filledQuantity} + ${tradeQuantity}
        `;

        // CTE 1: select valid (fillable) order
        const fillableOrder = this.db.$with("fillable_order").as(
            this.db
                .select()
                .from(schema.order)
                .where(and(eq(schema.order.id, orderId), isFillable)),
        );

        // CTE 2: update order's filled quantity if it's fillable
        const updateOrder = this.db.$with("update_order").as(
            this.db
                .update(schema.order)
                .set({
                    filledQuantity: sql<number>`
                        ${schema.order.filledQuantity} + ${tradeQuantity}
                    `,
                })
                .where(
                    sql`
                        EXISTS (
                            SELECT 1 FROM ${fillableOrder}
                            WHERE ${fillableOrder.id} = ${schema.order.id}
                        )
                    `,
                )
                .returning({ id: schema.order.id }),
        );

        // execute CTE
        const [trade] = await this.db
            .with(fillableOrder, updateOrder)
            .insert(schema.trade)
            .select(
                this.db
                    .select({
                        id: sql`nextval('trade_id_seq'::regclass)`.as("id"),
                        orderId: sql`${updateOrder.id}`.as("order_id"),
                        tradeQuantity: sql`${tradeQuantity}`.as("trade_quantity"),
                        executedAt: sql`NOW()`.as("executed_at"),
                    })
                    .from(updateOrder),
            )
            .returning();

        if (!trade) throw new Error("Failed to create trade");
        return trade;
    }

Let's break down the above operation step by step:

  1. We define a SQL expression isFillable that checks if the order can be filled with the given trade quantity.
  2. We create the first CTE fillable_order that selects the order record only if it satisfies the fillable condition.
  3. We create the second CTE update_order that updates the order's filled quantity only if the order is fillable (i.e., it exists in the fillable_order CTE).
  4. Finally, we execute the CTEs and insert a new trade record only if the order was successfully updated.

The SQL equivalent looks like this:

example.sql
WITH fillable_order AS (
  -- step 1: validate the order
  SELECT *
  FROM "order"
  WHERE "id" = $1
    AND "quantity" >= "filled_quantity" + $2
),

updated_order AS (
  -- step 2: update the order only if validation passed
  UPDATE "order"
  SET "filled_quantity" = "filled_quantity" + $2
  WHERE EXISTS (
    SELECT 1 FROM fillable_order WHERE fillable_order.id = "order".id
  )
  RETURNING "id"
)

-- step 3: insert the trade only if the update succeeded
INSERT INTO "trade" ("id", "order_id", "trade_quantity", "executed_at")
SELECT
  nextval('trade_id_seq'::regclass) AS "id",
  updated_order."id" AS "order_id",
  $2 AS "trade_quantity",
  NOW() AS "executed_at"
FROM updated_order
RETURNING *;

When we run the tests again:

Test Output
✅ src/tests/concurrent-inserts.spec.ts (3 tests) 389ms
  ✅ concurrent inserts > should insert a single trade 12ms
  ✅ concurrent inserts > should insert 10 trades concurrently 32ms
  ✅ concurrent inserts > should not exceed trade order's quantity 17ms

Perfect! All tests passed!

So what changed?

By using CTEs, we've fused the validation, update, and insert operations into a single atomic statement. The database engine handles the concurrency control internally, ensuring that every operation maintains data integrity even under high concurrency. No race conditions. No external locking mechanism needed.

Why not just use a CHECK constraint?

Some readers might be thinking:

Why not just add a check constraint on the order table so that it never exceeds the total quantity?

And you're absolutely right. For a simple case like this, a CHECK constraint works.

example.sql
ALTER TABLE "order"
ADD CONSTRAINT "filled_quantity_check"
CHECK ("filled_quantity" <= "quantity");

This would indeed prevent an UPDATE from setting filled_quantity to a value greater than quantity.

However, that kind of constraint only works when your validation logic lives within a single table and depends solely on the columns of that row.

The moment your validation depends on other tables or dynamic conditions, things start to break down. For example:

  • ensuring a user's available balance is sufficient before a trade is filled
  • checking the trade's execution status or matching engine results
  • verifying account limits, margin levels, or risk exposure
  • enforcing cross-table dependencies (e.g. a trade cannot exceed the remaining position size)

In such cases, simple CHECK constraints are too limited: they can't express these relationships because they can't reference other tables or execute subqueries. And if you try to enforce those conditions in application code, you're back to the same concurrency risks we started with.
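To sketch what that looks like, the fillable_order CTE from earlier can be extended with a cross-table condition that no CHECK constraint could express. (The account table, its balance column, and the order.account_id reference below are hypothetical — they are not part of the schema defined above.)

```sql
WITH fillable_order AS (
  SELECT o.*
  FROM "order" o
  JOIN "account" a ON a."id" = o."account_id"  -- hypothetical table and column
  WHERE o."id" = $1
    AND o."quantity" >= o."filled_quantity" + $2
    AND a."balance" >= $3                      -- cross-table check: sufficient funds
)
-- the update and insert steps then continue exactly as before
SELECT * FROM fillable_order;
```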

At that point, you need something more expressive: something that allows multi-step, conditional validation within the database itself and can combine multiple operations into a single atomic statement, without relying on external or application-level locks.

That's where CTEs truly shine, not as syntax sugar, but as a guarantee of correctness.

14 Oct 2025 technique, database