Sentry Setup in Practice: Know About Errors Before Your Users Do
I was tired of users reporting errors before I knew about them. How I set up Sentry in a Next.js project for real-time error tracking and alerting.
It was 11pm. A KakaoTalk message appeared.
"Hey, it's been broken for a while now?"
What's broken? Where? How? I had nothing. I checked the server logs. Everything looked fine. I tried to reproduce the issue and got nothing. I asked the user to clarify.
"Which screen? When you click a button, or when the page loads?"
"Just... in general."
Just. In general.
After a few of these conversations, I made a decision. I need to know about errors before my users do. Debugging by intuition had to end. I set up Sentry properly.
I initially thought of Sentry as "a prettier error logger." It isn't.
Sentry is CCTV for your application. When you install security cameras in a store, you can rewind the footage when something goes wrong. You see who came in, when, what path they took, what they did. Sentry does exactly this. When an error occurs, you get the user's entire journey up to that moment — browser info, the exact function that threw, and even the network requests made right before the crash.
Planes have black boxes. When a crash happens, investigators pull the black box and analyze the final flight record. Sentry's Breadcrumbs feature is this black box. It records, in order, what the user clicked, which API calls were made, and what was logged to the console — all leading up to the error.
Here's a quick overview of the core features:
| Feature | Description |
|---|---|
| Error Capture | Automatically collects exceptions with full stack traces |
| Breadcrumbs | Records user actions leading up to the error |
| Source Maps | Maps minified production code back to original source |
| Performance Monitoring | Detects slow transactions, N+1 queries |
| Alerting | Real-time notifications to Slack, Discord, or email |
| Replays | Session video replay at the moment of the error |
Sentry has a Next.js-specific SDK: @sentry/nextjs. There's an official wizard (npx @sentry/wizard) that generates files automatically, but auto-generated files you don't understand become a liability later. I went through each file manually.
```bash
npm install @sentry/nextjs
```
Three config files are needed.
sentry.client.config.ts — runs in the browser.
```typescript
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  // Only report in production
  enabled: process.env.NODE_ENV === "production",
  // Sample 10% of traces (increase if you need more coverage)
  tracesSampleRate: 0.1,
  // Replay: always capture on error, 10% of normal sessions
  replaysOnErrorSampleRate: 1.0,
  replaysSessionSampleRate: 0.1,
  integrations: [
    Sentry.replayIntegration({
      maskAllText: true, // Mask all text for privacy
      blockAllMedia: false,
    }),
  ],
  // Ignore noise
  ignoreErrors: [
    "ResizeObserver loop limit exceeded",
    "Non-Error promise rejection captured",
    /^Network request failed/,
    /^ChunkLoadError/,
  ],
  beforeSend(event) {
    // Drop localhost errors
    if (event.request?.url?.includes("localhost")) {
      return null;
    }
    return event;
  },
});
```
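The beforeSend filter is worth keeping unit-testable by pulling the decision into a pure function. A minimal sketch of the same localhost check — the `shouldDropEvent` helper and the trimmed-down `MinimalEvent` type are mine, not part of the SDK:

```typescript
// Hypothetical trimmed-down event shape covering only the fields the filter reads.
type MinimalEvent = { request?: { url?: string } };

// Mirrors the beforeSend hook: returns true when the event should be dropped.
function shouldDropEvent(event: MinimalEvent): boolean {
  return event.request?.url?.includes("localhost") ?? false;
}

console.log(shouldDropEvent({ request: { url: "http://localhost:3000/blog" } })); // true
console.log(shouldDropEvent({ request: { url: "https://example.com/blog" } })); // false
console.log(shouldDropEvent({})); // false
```

The hook itself then reduces to `return shouldDropEvent(event) ? null : event;`, which keeps the filtering rules testable without mocking Sentry.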
sentry.server.config.ts — runs in the Node.js server.
```typescript
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  enabled: process.env.NODE_ENV === "production",
  tracesSampleRate: 0.1,
  beforeSend(event, hint) {
    const error = hint.originalException;
    // 404s are not errors
    if (error instanceof Error && error.message.includes("NEXT_NOT_FOUND")) {
      return null;
    }
    // Strip cookies from server events
    if (event.request?.cookies) {
      event.request.cookies = "[Filtered]";
    }
    return event;
  },
});
```
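The cookie-stripping step can be factored out the same way. A sketch with hypothetical names (`scrubCookies`, `ServerEvent` are mine):

```typescript
// Hypothetical trimmed-down event shape covering only the field we scrub.
type ServerEvent = { request?: { cookies?: unknown } };

// Mirrors the server beforeSend: replaces any cookies payload with a marker.
function scrubCookies(event: ServerEvent): ServerEvent {
  if (event.request?.cookies) {
    event.request.cookies = "[Filtered]";
  }
  return event;
}

const scrubbed = scrubCookies({ request: { cookies: { session: "abc123" } } });
console.log(scrubbed.request?.cookies); // "[Filtered]"
```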
sentry.edge.config.ts — runs in Next.js Edge Runtime (middleware).
```typescript
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  enabled: process.env.NODE_ENV === "production",
  tracesSampleRate: 0.05, // Edge handles high volume, keep this low
});
```
The config files alone aren't enough. Wrap next.config.ts with withSentryConfig to enable source map uploads and build-time instrumentation.
```typescript
import { withSentryConfig } from "@sentry/nextjs";
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // your existing config
};

export default withSentryConfig(nextConfig, {
  org: process.env.SENTRY_ORG,
  project: process.env.SENTRY_PROJECT,
  authToken: process.env.SENTRY_AUTH_TOKEN,
  sourcemaps: {
    deleteSourcemapsAfterUpload: true, // Upload to Sentry, then delete from bundle
  },
  silent: !process.env.CI,
  autoInstrumentServerFunctions: true,
  autoInstrumentMiddleware: true,
  autoInstrumentAppDirectory: true,
});
```
deleteSourcemapsAfterUpload: true is non-negotiable. Source maps expose your original code if left in the production bundle. Upload them to Sentry, then delete them from what ships to the client.
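A quick way to verify the deletion actually happened is to scan the build output for leftover `.map` files after `next build`. A sketch (the `findSourceMaps` helper and the directory path are mine; the `recursive` option requires Node 18.17+):

```typescript
import { existsSync, readdirSync } from "node:fs";
import { join } from "node:path";

// Recursively collect all source map files under a directory.
// An empty result after `next build` means nothing leaks to the client.
function findSourceMaps(dir: string): string[] {
  if (!existsSync(dir)) return [];
  return readdirSync(dir, { recursive: true })
    .map(String)
    .filter((p) => p.endsWith(".map"))
    .map((p) => join(dir, p));
}

// Should print [] when deleteSourcemapsAfterUpload did its job:
console.log(findSourceMaps(".next/static"));
```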
Automatic capture handles uncaught exceptions. But sometimes you want to report a bad state that didn't throw — or add context to a caught error before it goes to Sentry.
```typescript
// lib/sentry.ts
import * as Sentry from "@sentry/nextjs";

export function captureError(
  error: unknown,
  context?: Record<string, unknown>
) {
  Sentry.withScope((scope) => {
    if (context) {
      scope.setContext("additional", context);
    }
    Sentry.captureException(error);
  });
}

export function captureWarning(message: string, data?: Record<string, unknown>) {
  Sentry.captureMessage(message, {
    level: "warning",
    extra: data,
  });
}
```
For React component trees, I built a custom Error Boundary:
```tsx
// components/error-boundary.tsx
"use client";

import * as Sentry from "@sentry/nextjs";
import { Component, ErrorInfo, ReactNode } from "react";

interface Props {
  children: ReactNode;
  fallback?: ReactNode;
}

interface State {
  hasError: boolean;
  eventId?: string;
}

export class SentryErrorBoundary extends Component<Props, State> {
  constructor(props: Props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  componentDidCatch(error: Error, errorInfo: ErrorInfo) {
    const eventId = Sentry.captureException(error, {
      extra: { componentStack: errorInfo.componentStack },
    });
    this.setState({ eventId });
  }

  render() {
    if (this.state.hasError) {
      return (
        this.props.fallback ?? (
          <div className="error-container">
            <p>Something went wrong. The team has been notified.</p>
            {this.state.eventId && (
              <p className="text-sm text-muted-foreground">
                Event ID: {this.state.eventId}
              </p>
            )}
          </div>
        )
      );
    }
    return this.props.children;
  }
}
```
The eventId returned by captureException is worth surfacing to the user. When someone contacts support saying "I got an error," you ask for their event ID and pull up the exact incident in Sentry. No back-and-forth trying to reproduce it.
The first time you turn on Sentry alerts, Slack explodes. Trivial errors, real outages, browser quirks — it all floods in. You end up muting the channel. Now you have CCTV with the monitor turned off.
I run three alert rules.
Rule 1: New issue created (immediate). First time a new error type appears, I want to know right away. Known errors firing again don't alert me — only genuinely new issues do.

Rule 2: Frequency spike. Even a known, low-priority error becomes urgent if it suddenly fires a hundred times in an hour. That's a symptom of something larger breaking.

Rule 3: Daily digest. Not every error needs immediate action. A daily summary keeps me aware of slow-growing problems without interrupting my day.

| Rule | Condition | Destination |
|---|---|---|
| New issue | A new issue is created | #errors-critical in Slack |
| Spike | The issue occurs more than 100 times in 1 hour | #errors-critical + send email |
| Daily digest | Scheduled daily summary | #errors-daily |

Connecting Slack takes two minutes: Settings → Integrations → Slack in the Sentry dashboard. Authorize the workspace, pick a channel, done. Discord works the same way.
Without source maps, a Sentry stack trace looks like this:
```text
Error: Cannot read properties of undefined (reading 'map')
  at t.default (main-abc123.js:1:28491)
  at e (chunk-xyz789.js:1:14832)
```
Minified symbols and byte offsets. No source filenames, no real function names, no context. Useless. With source maps properly configured:
```text
Error: Cannot read properties of undefined (reading 'map')
  at PostList (src/components/blog/PostList.tsx:42:18)
  at BlogPage (src/app/[locale]/blog/page.tsx:28:12)
```
That difference collapses debugging time from hours to minutes.
Finding out later that user emails were being sent to Sentry is not a good position to be in. Set the rules before the data flows in. Sentry has a built-in Data Scrubbing feature in project settings — list keywords like password, token, email, and credit_card and those fields get automatically masked. Run that alongside your beforeSend hook for double coverage.
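The same keyword-based masking can be replicated client-side in beforeSend as the second layer. A minimal sketch — the `scrubPII` helper and the keyword list are mine, not Sentry's actual scrubbing implementation:

```typescript
// Keys to mask wherever they appear in an event payload (hypothetical list).
const SENSITIVE_KEYS = ["password", "token", "email", "credit_card"];

// Recursively mask sensitive fields in a plain object or array.
function scrubPII(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(scrubPII);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SENSITIVE_KEYS.some((s) => k.toLowerCase().includes(s))
          ? [k, "[Filtered]"]
          : [k, scrubPII(v)]
      )
    );
  }
  return value;
}

console.log(scrubPII({ user: { email: "a@b.com", id: 7 }, token: "xyz" }));
// { user: { email: "[Filtered]", id: 7 }, token: "[Filtered]" }
```

Running something like this over `event.extra` and `event.contexts` in beforeSend means a leak has to slip past both your code and Sentry's server-side scrubbing.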
tracesSampleRate: 1.0 sends 100% of transactions to Sentry. Fine when traffic is low. When users grow, so does the bill. Start at 0.1 and raise it deliberately if you need more coverage. Two more tags worth setting in every init call:
```typescript
Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: process.env.NEXT_PUBLIC_VERCEL_GIT_COMMIT_SHA,
});
```
The environment tag lets you filter the Sentry dashboard to production-only errors. The release tag tied to a commit SHA tells you which deployment introduced a bug. When a regression appears, you immediately know which deploy broke it.
Now I know about errors before users send me a message. When an alert fires in Slack, I already have the reproduction path, the stack trace, the breadcrumbs, and the number of affected users. "Just... in general" doesn't paralyze me anymore.
The setup that made this real:
- Source maps are non-negotiable. deleteSourcemapsAfterUpload in withSentryConfig is the key.
- Cut noise first with beforeSend. 404s, localhost errors, and network failures should never reach Sentry.
- Three alert tiers, three channels. New issues, spikes, daily digest. Mixing them into one channel means eventually muting everything.
- PII filtering from the start. beforeSend plus Data Scrubbing in project settings. Two layers.
- Surface the eventId to users. One ID means one precise lookup in Sentry when support requests come in.
Sentry is a radar system. Flying without radar means navigating by instinct and hoping you don't hit a mountain. With radar, you see the obstacle before you reach it. The goal was to know about errors before users do. That's solved now.