This repository is a proof of concept playground for building a full-stack application with Angular and NestJS. It demonstrates setting up Angular SSR with a NestJS-powered backend, integrating a database, configuring a reverse proxy, and generating Swagger API documentation. The frontend is a fully installable PWA with a custom service worker that supports continuous builds during development. Perfect for exploring modern web development techniques and best practices.
- Full-stack Angular + NestJS + SQLite database, served through Angular SSR
- Workbox service-worker integrated into the development process
- Browser API experimentation and permission handling best practices
- Push notifications
- Offline detection
- Dark/Light mode styling with modern CSS functions
- View transitions
- Widget dashboard system where each widget is a self-contained library with both dashboard and fullscreen views
- Third-party integrations like met.no (weather data) and nordnet.no (financial data)
- Canvas and WebGPU experiments
- Local AI model experimentation
This is a personal feature playground. It is not intended for production use.
This project uses `bun` for dependency management and development.
```sh
bun install
bun start
```
This installs dependencies and starts the development server. You can also start the backend and frontend separately:
```sh
# in one shell:
bun run dash-api

# and in another shell:
bun run dash
```
If you encounter issues with `bun` or prefer `npm`, you can try the following:
```sh
rm -rf bun.lock
npm install --force
sed -i.bak -e 's/bun x/npx/g' -e 's/bun /npm /g' package.json
```
This removes the `bun` lockfile, installs dependencies with `npm`, and replaces `bun` commands in `package.json` with their `npm` equivalents.
- Launching the VSCode Chrome debugger prints errors in the console: VSCode launches Chrome with the Default profile instead of your regular profile. That profile has restricted access to the browser's cache storage, which prevents the service worker from pre-caching our resources. Open the URL in a normal browser profile to get full service worker functionality.
Angular supports using an Express HTTP server for server-side rendering (SSR). This project extends that capability by integrating NestJS as a backend framework.
In `server.ts`, a `NestExpressApplication` is created:

```ts
const app = await NestFactory.create<NestExpressApplication>(ApiModule);
```
The Express instance is retrieved:
```ts
const server = app.getHttpAdapter().getInstance();
```
This instance is required by `AngularNodeAppEngine` for SSR:

```ts
const angularNodeAppEngine = new AngularNodeAppEngine();

server.use('*splat', (req, res, next) => {
  angularNodeAppEngine
    .handle(req, {
      server: 'express',
      request: req,
      response: res,
      cookies: req.headers.cookie,
    })
    .then((response) => {
      return response
        ? // If the Angular app returned a response, write it to the Express response
          writeResponseToNodeResponse(response, res)
        : // If not, this is not an Angular route, so continue to the next middleware
          next();
    })
    .catch(next);
});
```
Finally, the NestJS application is initialized:
```ts
await app.init();
```
You also need to expose the request handler so that the Angular app engine can work properly:

```ts
export const reqHandler = createNodeRequestHandler(server);
```
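Put together, the relevant part of `server.ts` looks roughly like this. This is a condensed sketch of the snippets above, assuming Angular's `@angular/ssr/node` helpers; the `ApiModule` import path is illustrative:

```ts
import {
  AngularNodeAppEngine,
  createNodeRequestHandler,
  writeResponseToNodeResponse,
} from '@angular/ssr/node';
import { NestFactory } from '@nestjs/core';
import { NestExpressApplication } from '@nestjs/platform-express';
import { ApiModule } from './api/api.module'; // illustrative path

const app = await NestFactory.create<NestExpressApplication>(ApiModule);
const server = app.getHttpAdapter().getInstance();
const angularNodeAppEngine = new AngularNodeAppEngine();

// Angular gets the first shot at every request; anything it does not handle
// falls through to the NestJS routes registered by app.init().
server.use('*splat', (req, res, next) => {
  angularNodeAppEngine
    .handle(req, { server: 'express', request: req, response: res, cookies: req.headers.cookie })
    .then((response) => (response ? writeResponseToNodeResponse(response, res) : next()))
    .catch(next);
});

await app.init();

export const reqHandler = createNodeRequestHandler(server);
```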
This produces an environment with an API, a database, and a frontend fully built and served by Angular SSR. When served through the production-ready Docker image, it also achieves a good Lighthouse score.
> [!NOTE]
> The Angular SSR process is used as Express middleware. This could potentially be moved into a `NestMiddleware` for further experimentation.
Entities are auto-detected using the `.forFeature()` function in `TypeOrmModule`:
```ts
import { resolve } from 'node:path';
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { MyModule } from './my.module'; // path illustrative

@Module({
  imports: [
    MyModule,
    TypeOrmModule.forRoot({
      type: 'sqlite',
      database: resolve(process.cwd(), 'db', 'home.db'),
      autoLoadEntities: true,
      synchronize: true,
      logging: true,
    }),
  ],
})
export class ApiModule {}
```
Then include entities in your submodules:
```ts
@Module({
  imports: [TypeOrmModule.forFeature([MyEntity])],
  ...
})
export class MyModule {}
```
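Here `MyEntity` stands in for any TypeORM entity, e.g. (illustrative):

```ts
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

// Illustrative entity: any class decorated with @Entity and registered
// through .forFeature() is picked up thanks to autoLoadEntities.
@Entity()
export class MyEntity {
  @PrimaryGeneratedColumn()
  id!: number;

  @Column()
  name!: string;
}
```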
This app is a PWA, requiring a web manifest and a service worker script registered at startup.
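Registration itself is a one-liner, typically run at startup from `main.ts` or an app initializer (the worker path below is illustrative):

```ts
// Register the service worker at startup. The script path depends on where
// the build emits the worker, so treat '/service-worker.js' as a placeholder.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js');
}
```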
One of the things a service worker does is preloading and caching static resources like JavaScript, CSS, and `index.html` in the client. Angular's built-in ngsw generates a generic service worker at build time. However, for more control, this project uses Workbox. But Workbox is not quite compatible with Angular's build process yet:
- Production Builds (`nx build`): Static files are available in the `dist` folder, making it straightforward to generate a pre-cache list.
- Development Builds (`ng serve`): Files are built and served in memory, so the `dist` folder is unavailable.
This project uses two custom plugins:
- Custom esbuild plugin: Runs during `nx serve` to hook into esbuild's `onEnd` event. It generates a partial pre-cache list but cannot include CSS files.
- Custom webpack plugin: Runs during `nx build` to generate a complete pre-cache list from files written to disk.
The esbuild plugin is configured in `project.json`:

```json
"targets": {
  "build": {
    "executor": "@nx/angular:application",
    "options": {
      "plugins": ["apps/dash/builders/custom-esbuild.ts"]
    }
  }
}
```
For production builds, the pre-cache list is overwritten using:

```sh
nx build
bun x webpack --config ./apps/dash/builders/webpack.config.js
```
This ensures an active service worker during both development and production, enabling testing of service worker-specific code without requiring production builds.
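For context, a Workbox service worker consumes such a pre-cache list roughly like this (an illustrative worker, not the project's actual one):

```ts
import { clientsClaim } from 'workbox-core';
import { precacheAndRoute } from 'workbox-precaching';

declare let self: ServiceWorkerGlobalScope;

// Activate the new worker immediately and take control of open clients.
self.skipWaiting();
clientsClaim();

// self.__WB_MANIFEST is replaced at build time with the generated pre-cache list.
precacheAndRoute(self.__WB_MANIFEST);
```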
This repository includes reusable utilities. Feel free to use anything you find helpful.
- Connectivity: Monitors offline/online status
- GeoLocation: Tracks device latitude/longitude (requires permission)
- Notification: Enables push notifications (requires permission)
- ResizeObserver Directive: Observes DOM element size changes
- Service Worker Initializer
- LocalStorage Abstraction: Stores complex JSON structures in `localStorage`
- Theme Service: Manages dark/light mode using CSS variables
- Active Tab Listener: Detects if the browser tab is active, helping optimize background tasks
- Cache operator: Caches observable results for reuse across subscribers (see the sketch after this list)
- Color Manipulation
- Cookie Management
- Debounce: Includes a function and decorator
- Number Utilities
- Object Utilities
- String Manipulation: Includes a pipe for templates
- View Transition Helper: Simplifies view transition animations
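As a flavor of what these utilities look like, a cache operator along these lines can be built on RxJS `shareReplay` (an illustrative sketch, not the exact implementation):

```ts
import { MonoTypeOperatorFunction, shareReplay } from 'rxjs';

// Replay the latest emission to every subscriber and expire the cached
// value after a time-to-live, so late subscribers trigger a refresh.
export function cache<T>(ttlMs = 60_000): MonoTypeOperatorFunction<T> {
  return (source) =>
    source.pipe(shareReplay({ bufferSize: 1, refCount: false, windowTime: ttlMs }));
}
```

With this, `http.get('/api/config').pipe(cache())` shares a single request across all subscribers instead of re-fetching per subscription.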
This app features a dashboard of mini-applications (widgets), each lazily loaded. While individual routes can use Angular's `loadChildren`, displaying multiple widgets in a single view (dashboard) requires additional logic.
The dashboard view uses a widget-loader to load widgets dynamically. Widgets are only loaded when instructed, minimizing client-side resource usage.
The widget service references widget routes to determine available widgets and their loading mechanisms. The dashboard view fetches a configuration from the backend (an array of widget names) and creates one widget-loader per widget. This approach also supports fullscreen widget routes.
This system allows for multiple dashboard configurations tailored to different needs.
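In sketch form, the widget-loader boils down to resolving a lazy import by name and rendering the result into its own view container. Registry entries and import paths below are placeholders, not the project's actual ones:

```ts
import { Component, Input, OnInit, Type, ViewContainerRef, inject } from '@angular/core';

// Illustrative widget-loader: maps a widget name from the dashboard
// configuration to a lazy import and renders the component dynamically.
@Component({ selector: 'app-widget-loader', standalone: true, template: '' })
export class WidgetLoaderComponent implements OnInit {
  @Input({ required: true }) name!: string;
  private readonly viewContainer = inject(ViewContainerRef);

  async ngOnInit(): Promise<void> {
    // Placeholder registry; the real mapping lives in the widget service.
    const registry: Record<string, () => Promise<Type<unknown>>> = {
      weather: () => import('./widgets/weather').then((m) => m.WeatherWidgetComponent),
      fund: () => import('./widgets/fund').then((m) => m.FundWidgetComponent),
    };
    const widget = await registry[this.name]?.();
    if (widget) {
      this.viewContainer.createComponent(widget);
    }
  }
}
```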
Third-party integrations use a reverse proxy configured in the SSR Express server:
```ts
import { createProxyMiddleware } from 'http-proxy-middleware';
import { proxyRoutes } from './proxy.routes';

Object.entries(proxyRoutes).forEach(([path, config]) => {
  server.use(path, createProxyMiddleware(config));
});
```
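The shape of `proxy.routes.ts` would then be something like this (a hypothetical entry; the real paths and rewrites may differ):

```ts
import type { Options } from 'http-proxy-middleware';

// Hypothetical entry: forward /api/weather to met.no and strip the local prefix.
export const proxyRoutes: Record<string, Options> = {
  '/api/weather': {
    target: 'https://api.met.no/weatherapi',
    changeOrigin: true,
    pathRewrite: { '^/api/weather': '' },
  },
};
```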
Two widgets demonstrate integrations with third-party APIs:
- Weather - integrates with https://api.met.no/weatherapi
- Fund - integrates with https://public.nordnet.no
Two widgets demonstrate canvas and WebGPU effects:
- Starfield Animation - uses the 2D canvas
- Rotating Pyramid - uses WebGPU
These are simple experiments to explore some new (for me) technologies and techniques.
A transcription widget uses AI to convert audio to text. To set it up:
```sh
winget install --id Python.Python.3.11
python -m pip install --upgrade pip
pip install faster-whisper
python -c "from faster_whisper import WhisperModel; WhisperModel('NbAiLab/nb-whisper-small', device='cpu', compute_type='int8')"
```
This installs Python and the Whisper AI model. After setup, `bun start` enables audio transcription (currently Norwegian only).
The transcription process involves:
- A Python script for transcription
- A backend controller to handle file uploads (sketched below)
- A widget for audio input - via microphone (requires permission) or file upload
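A NestJS controller for the upload step could look roughly like this (hypothetical names; the service wrapping the Python script is assumed):

```ts
import { Controller, Post, UploadedFile, UseInterceptors } from '@nestjs/common';
import { FileInterceptor } from '@nestjs/platform-express';
import { TranscribeService } from './transcribe.service'; // hypothetical wrapper around the Python script

// Hypothetical upload controller: receives the audio file from the widget
// and forwards it to a service that shells out to faster-whisper.
@Controller('transcribe')
export class TranscribeController {
  constructor(private readonly transcriber: TranscribeService) {}

  @Post()
  @UseInterceptors(FileInterceptor('audio'))
  async transcribe(@UploadedFile() file: Express.Multer.File): Promise<string> {
    return this.transcriber.run(file.buffer);
  }
}
```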
The WebLLM chat experiment loads and runs a language model entirely in the browser, using WebGPU for performance. There are small models that are cheap to run on low-performance graphics systems, but here I'm using a medium model, which performs best on high-performance GPUs like NVIDIA or AMD. It can be a bit sluggish on low-end GPUs like Intel's integrated graphics.
Most systems have more than one GPU: one integrated on the motherboard and a second, more powerful GPU mounted as an expansion card. On most systems the integrated one will struggle a bit to run the model selected here, so for best performance, open your OS graphics settings, select advanced settings, choose your browser, and enable the "High performance" graphics processor for that program.
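A page can also hint at which adapter it wants through WebGPU itself, although the OS-level setting usually takes precedence (illustrative snippet; WebGPU types via `@webgpu/types`):

```ts
// Ask the browser for the high-performance adapter (usually the discrete GPU).
// powerPreference is only a hint; OS graphics settings can still override it.
const adapter = await navigator.gpu.requestAdapter({
  powerPreference: 'high-performance',
});
if (!adapter) {
  throw new Error('WebGPU is not available on this system');
}
```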