I had a question that wouldn't leave me alone: what if I could run Angular's SSR engine and a NestJS backend on the exact same Express instance? Not two separate processes, not a reverse proxy stitching things together — one server, one port, one deployment artifact.

This is the story of building home, a full-stack playground where I tested that idea to its limits. Here are the lessons I learned along the way.

Lesson 1: NestJS owns the Express instance, Angular borrows it

The key insight is that NestFactory.create can give you a raw Express instance. Angular's AngularNodeAppEngine doesn't care where its Express server comes from — it just needs one.

The trick is to let NestJS create the application, extract its underlying Express server, and hand that to Angular's SSR middleware:

import { NestFactory } from '@nestjs/core';
import { NestExpressApplication } from '@nestjs/platform-express';
import { AngularNodeAppEngine, writeResponseToNodeResponse, createNodeRequestHandler } from '@angular/ssr/node';
import { ApiModule } from './api.module';

// NestJS creates the Express app
const app = await NestFactory.create<NestExpressApplication>(ApiModule);
const server = app.getHttpAdapter().getInstance();

At this point, server is a plain Express instance with all of NestJS's controllers, guards, pipes, and interceptors already registered on it.

Lesson 2: Angular SSR is just Express middleware

Once you have the Express instance, Angular's SSR engine slots in as a catch-all middleware. It tries to render the request as an Angular route. If it can't, it calls next() and Express continues down the middleware chain:

const angularApp = new AngularNodeAppEngine();

server.use('*splat', (req, res, next) => {
  angularApp
    .handle(req, {
      server: 'express',
      request: req,
      response: res,
      cookies: req.headers.cookie,
    })
    .then((response) =>
      response
        ? writeResponseToNodeResponse(response, res)
        : next()
    )
    .catch(next);
});

This is elegant because it means your NestJS API routes (/api/*) get matched first by NestJS's own router, and everything else falls through to Angular. No path conflicts, no special configuration — just middleware ordering.

Lesson 3: Order matters — a lot

This was probably the biggest gotcha. The order in which middleware is registered on the Express instance determines everything:

  1. Static assets — serve your dist/browser folder first
  2. Reverse proxies — if you proxy third-party APIs, register those paths early
  3. NestJS controllers — these are automatically registered when you call app.init()
  4. Angular SSR — the catch-all *splat middleware goes last

Get this wrong and Angular will try to server-side render your /api/users endpoint, returning an HTML page instead of JSON. I learned this one the hard way.
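Putting the four steps together, the bootstrap looks roughly like this. This is a sketch, not the actual server file: the static path, the proxy route, and the met.no target are assumptions based on a typical Angular dist layout, and the catch-all is shown in its simplest form.

```typescript
import { join } from 'node:path';
import express from 'express';
import { NestFactory } from '@nestjs/core';
import { NestExpressApplication } from '@nestjs/platform-express';
import { createProxyMiddleware } from 'http-proxy-middleware';
import { AngularNodeAppEngine, writeResponseToNodeResponse } from '@angular/ssr/node';
import { ApiModule } from './api.module';

const app = await NestFactory.create<NestExpressApplication>(ApiModule);
const server = app.getHttpAdapter().getInstance();

// 1. Static assets: serve dist/browser before anything else
server.use(express.static(join(import.meta.dirname, '../browser'), { index: false }));

// 2. Reverse proxies: register third-party API paths early
server.use('/proxy/weather', createProxyMiddleware({ target: 'https://api.met.no', changeOrigin: true }));

// 3. NestJS controllers: mounted on the Express instance by app.init()
await app.init();

// 4. Angular SSR: the catch-all goes last
const angularApp = new AngularNodeAppEngine();
server.use((req, res, next) => {
  angularApp
    .handle(req)
    .then((response) => (response ? writeResponseToNodeResponse(response, res) : next()))
    .catch(next);
});
```

Because app.init() runs after the static and proxy middleware are registered, the API routes sit exactly where step 3 says they should in the chain.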

Lesson 4: The reverse proxy pattern keeps things clean

My app integrates with third-party APIs (weather data from met.no, financial data from Nordnet). Instead of calling these directly from the browser — which would expose API keys and hit CORS walls — I route them through the Express server using http-proxy-middleware:

import { createProxyMiddleware } from 'http-proxy-middleware';
import { proxyRoutes } from './proxy.routes';

Object.entries(proxyRoutes).forEach(([path, config]) => {
  server.use(path, createProxyMiddleware(config));
});

Because NestJS gives us the raw Express instance, this works exactly like it would in any Express app. No NestJS abstraction needed — just plain middleware.
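For reference, proxy.routes can be as simple as a map from local path prefixes to http-proxy-middleware options. The paths and targets below are illustrative, not the actual config:

```typescript
import type { Options } from 'http-proxy-middleware';

// Hypothetical route map: each key is a local path prefix,
// each value is a standard http-proxy-middleware Options object.
export const proxyRoutes: Record<string, Options> = {
  '/proxy/weather': {
    target: 'https://api.met.no',
    changeOrigin: true,                      // rewrite the Host header to the target
    pathRewrite: { '^/proxy/weather': '' },  // strip the local prefix before forwarding
  },
  '/proxy/nordnet': {
    target: 'https://public.nordnet.no',
    changeOrigin: true,
    pathRewrite: { '^/proxy/nordnet': '' },
  },
};
```

API keys can then be injected server-side in an onProxyReq-style hook, so they never reach the browser.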

Lesson 5: The database just works

I was worried that running SQLite inside an SSR process would cause issues — file locks, concurrent access during pre-rendering, that sort of thing. Turns out it's fine. TypeORM with SQLite slots into a NestJS module cleanly:

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'sqlite',
      database: resolve(process.cwd(), 'db', 'home.db'),
      autoLoadEntities: true,
      synchronize: true,
    }),
  ],
})
export class ApiModule {}

Because autoLoadEntities is true, any entity you register with TypeOrmModule.forFeature() in a sub-module is added to the connection automatically. During SSR, Angular components can call NestJS API routes internally — same process, no network round-trip.
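A sub-module then only has to register its own entities. A minimal sketch, with a hypothetical entity and module name:

```typescript
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { WeatherReading } from './weather-reading.entity'; // hypothetical entity

@Module({
  // forFeature registers the repository; autoLoadEntities in the root
  // config then adds WeatherReading to the connection automatically.
  imports: [TypeOrmModule.forFeature([WeatherReading])],
})
export class WeatherModule {}
```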

Lesson 6: Export the request handler

This is easy to forget but critical. Angular's build system expects a reqHandler export so it knows how to wire up the server:

await app.init();
export const reqHandler = createNodeRequestHandler(server);

Without this export, Angular's dev server and production builds won't know how to start your application. The app.init() call finalizes all NestJS module initialization, and createNodeRequestHandler wraps the Express instance for Angular's consumption.

Lesson 7: Service workers and SSR are uneasy allies

This one took me a while. Angular's built-in ngsw generates a generic service worker at build time, but I wanted more control — specifically Workbox with a custom pre-cache list.

The problem: during development (nx serve), files are built and served in memory. There's no dist folder to scan for pre-cacheable assets. My solution was two custom build plugins:

  1. A custom esbuild plugin that hooks into onEnd while the dev server runs to generate a partial pre-cache list
  2. A custom webpack plugin that runs during production builds to generate the full list from files on disk

The lesson: if you need a service worker that works in both dev and prod, you'll need to bridge the gap between in-memory and on-disk builds yourself.
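The interesting part is the same in both plugins: turning a list of emitted files into a Workbox-style precache manifest. A minimal sketch of that step (the function name and filtering rules are mine, not the actual plugin's):

```typescript
// A Workbox-style precache entry: a URL plus a revision used for cache busting.
interface PrecacheEntry {
  url: string;
  revision: string;
}

// Build a precache manifest from emitted files, given as path -> content hash.
// The esbuild plugin would feed this from the in-memory build result in onEnd;
// the webpack plugin would feed it from files on disk after the build.
export function buildPrecacheList(files: Record<string, string>): PrecacheEntry[] {
  return Object.entries(files)
    .filter(([path]) => /\.(js|css|html|ico|woff2)$/.test(path)) // only cacheable assets
    .map(([path, hash]) => ({ url: '/' + path.replace(/^\/+/, ''), revision: hash }));
}
```

Keeping this step pure makes it trivial to share between the two plugins; only the file-enumeration side differs between dev and prod.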

Lesson 8: One process = one deployment artifact

The biggest payoff of this architecture is deployment simplicity. The entire application — frontend, backend, database, SSR — compiles down to a single Docker image:

  • One Dockerfile

  • One dist folder

  • One node process

  • One port

No orchestration, no service mesh, no container-to-container networking. For a side project or a small team, this is incredibly freeing. And the Lighthouse scores from the Docker environment are solid — SSR gives you a fast First Contentful Paint, and the service worker handles everything after that.

Lesson 9: Widget lazy-loading needs extra thought

The app uses a dashboard of mini-applications (widgets) that are each a self-contained library. Route-based lazy loading with loadChildren is straightforward for a single widget. Loading several widgets in one dashboard view, however, required a custom widget-loader component that conditionally instantiates widgets based on a backend configuration.

The key was a widget.service that maps widget names to their lazy routes, and a dashboard view that creates one loader per widget. Each widget only loads when the dashboard tells it to, keeping the initial bundle small.
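The registry idea behind widget.service can be sketched as a plain name-to-loader map. The widget names and the loader bodies below are hypothetical; in the real service each loader would be a dynamic import of a widget library, e.g. () => import('@home/widget-weather'):

```typescript
// A loader is only invoked when the dashboard actually renders the widget,
// which is what keeps the initial bundle small.
type WidgetLoader = () => Promise<unknown>;

const widgetRegistry = new Map<string, WidgetLoader>([
  // Stand-ins for real dynamic imports of widget libraries:
  ['weather', async () => ({ name: 'weather-widget' })],
  ['finance', async () => ({ name: 'finance-widget' })],
]);

// Resolve a widget name from the backend configuration to its lazy loader.
export function resolveWidget(name: string): WidgetLoader | undefined {
  return widgetRegistry.get(name);
}
```

The dashboard view then creates one loader component per configured widget and calls resolveWidget for each, skipping names the registry does not know.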

Lesson 10: Know when this pattern breaks down

This architecture works great for:

  • Side projects and proof of concepts

  • Small teams that want deployment simplicity

  • Applications where the backend and frontend are tightly coupled

It starts to strain when:

  • You need to scale the API independently of the SSR layer

  • Multiple frontends share the same backend

  • The backend has long-running tasks that could block SSR responses

For my playground, it's perfect. For a production system with different scaling needs, you'd want to split these back apart. But the experiment proved the architecture is viable — and surprisingly pleasant to work with.

Try it yourself

The full source is at github.com/OysteinAmundsen/home. Clone it, run bun install && bun start, and you'll have Angular SSR + NestJS + SQLite running on a single port in seconds.