
Product Analytics: I Had No Idea Who Was Using My Service or How
I was running a service with zero insight into user behavior. Setting up PostHog changed how I make product decisions—from guessing to knowing.

I shipped features constantly. Filters, social login, dashboard customization, notification settings. Every sprint, something new went out. Every time, I told myself the users would love it.
Then one day I asked myself a blunt question: do I actually know what users are doing in my product? I couldn't answer it.
I knew the number of signups. I knew yesterday's visitor count. That was it. Which features were being used, where people got stuck, what someone did right after signing up, whether they came back a week later — I had no idea about any of it.
What made it worse was what I found out later. A filter feature I'd spent three months building had less than 3% actual usage. Meanwhile, a CSV export I'd thrown together as a quick workaround was being used weekly by 40% of my active users. Because I didn't know it mattered, I hadn't maintained it. There was no plan to improve the UI.
It was like cooking with a blindfold on. No smell, no taste, just a blind guess that "this is probably good enough." Whether the dish comes out edible is mostly luck.
That's when I got serious about product analytics.
All the numbers I had were vanity metrics. Total signups, pageviews, DAU. They look impressive in a pitch deck. They're useless for making product decisions.
The problem with vanity metrics is they don't tell you what to do next. Great, you have a thousand signups. So what do you fix? Where do you spend your time? No answer.
What I actually needed were behavioral metrics.
| Vanity Metric | Behavioral Metric |
|---|---|
| Total signups | 7-day retention rate |
| Pageviews | Core feature usage rate |
| Session count | Funnel completion rate |
| DAU | Activation rate |
| Avg. session length | Feature reach per user |
Behavioral metrics start from the question "what did the user do?" Did a new signup actually try the core feature (activation)? Did they come back a week later (retention)? At which step did they leave (funnel drop-off)? These numbers show you what to improve.
Vanity metrics are like a bathroom scale. They tell you your current state but nothing about which part of your body is the problem or what you should eat. Behavioral metrics are more like an MRI. You see exactly where the weakness is and where to focus.
I compared several tools: Mixpanel, Amplitude, Heap, Google Analytics. I landed on PostHog for straightforward reasons.
Open source, self-hostable. Your data doesn't have to leave your infrastructure. In B2B, sending customer data to a third party can straight-up violate contract terms. With PostHog, you can use their cloud or host it yourself.
Everything in one product. Event tracking, funnel analysis, session replay, feature flags, A/B testing — all PostHog. The alternative is Mixpanel for events, LaunchDarkly for feature flags, FullStory for session replay. That fragmentation has a cost beyond pricing: you lose context between tools.
The free tier is generous. PostHog Cloud gives you 1 million events per month for free. For an early-stage product, that's more than enough to start making data-informed decisions.
Google Analytics is still useful, but it's designed for web traffic and marketing attribution, not for understanding how users behave inside your product. For in-product behavioral analysis, you want an event-based tool.
Installation is straightforward.
```bash
npm install posthog-js
```
In a Next.js App Router setup, create a provider component and wrap your root layout with it.
```tsx
// src/components/PostHogProvider.tsx
"use client";

import posthog from "posthog-js";
import { PostHogProvider as PHProvider } from "posthog-js/react";
import { useEffect } from "react";

export function PostHogProvider({ children }: { children: React.ReactNode }) {
  useEffect(() => {
    posthog.init(process.env.NEXT_PUBLIC_POSTHOG_KEY!, {
      api_host: process.env.NEXT_PUBLIC_POSTHOG_HOST ?? "https://app.posthog.com",
      capture_pageview: false, // handle pageviews manually
      capture_pageleave: true,
      session_recording: {
        maskAllInputs: true, // mask all user inputs for privacy
      },
    });
  }, []);

  return <PHProvider client={posthog}>{children}</PHProvider>;
}
```
App Router does client-side route transitions, so automatic pageview capture gets it wrong. Set capture_pageview: false and fire pageviews manually on route changes.
```tsx
// src/components/PostHogPageView.tsx
"use client";

import { usePathname, useSearchParams } from "next/navigation";
import { usePostHog } from "posthog-js/react";
import { useEffect } from "react";

// Note: useSearchParams requires a <Suspense> boundary around this
// component when it renders inside a server-rendered layout.
export function PostHogPageView() {
  const pathname = usePathname();
  const searchParams = useSearchParams();
  const posthog = usePostHog();

  useEffect(() => {
    if (pathname && posthog) {
      let url = window.origin + pathname;
      if (searchParams.toString()) {
        url += `?${searchParams.toString()}`;
      }
      posthog.capture("$pageview", { $current_url: url });
    }
  }, [pathname, searchParams, posthog]);

  return null;
}
```
Pageviews alone don't tell you much. You need to instrument the specific actions that matter — button clicks, feature completions, errors.
```tsx
import { usePostHog } from "posthog-js/react";

function ExportButton({ reportId }: { reportId: string }) {
  const posthog = usePostHog();

  const handleExport = async () => {
    posthog.capture("report_exported", {
      report_id: reportId,
      format: "csv",
      source: "dashboard",
    });
    await exportReport(reportId); // the app's own export logic
  };

  return <button onClick={handleExport}>Export as CSV</button>;
}
```
Event naming matters more than it seems. When you have hundreds of events, bad names make the dashboard unreadable. I use a <noun>_<verb> convention — not button_clicked but report_exported, user_invited, filter_applied. Concrete and specific.
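A tiny guard makes the convention enforceable rather than aspirational. This is a sketch of my own (the helper names and regex are not part of PostHog); a check like this can run in a dev-only wrapper around capture:

```typescript
// Enforce snake_case "<noun>_<verb>" event names, e.g. "report_exported".
// Hypothetical helper -- the regex only checks shape, not that the
// name is actually a noun followed by a verb.
const EVENT_NAME_PATTERN = /^[a-z]+(_[a-z]+)+$/;

function isValidEventName(name: string): boolean {
  return EVENT_NAME_PATTERN.test(name);
}

// Dev-only wrapper: bad names fail loudly instead of polluting the dashboard.
function captureChecked(name: string, props?: Record<string, unknown>): void {
  if (!isValidEventName(name)) {
    throw new Error(`Event name "${name}" violates the <noun>_<verb> convention`);
  }
  // posthog.capture(name, props); // forward to the real client in app code
}
```

Single-word names like `click` and camelCase like `pageView` both fail the pattern, which is exactly the point.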
User identification is critical for accurate funnel analysis. Anonymous events before login need to connect to identified events after login.
```tsx
import posthog from "posthog-js";

async function onLoginSuccess(user: User) {
  // Links the anonymous session to this user
  posthog.identify(user.id, {
    email: user.email,
    name: user.name,
    plan: user.plan,
    created_at: user.createdAt,
    company_name: user.company,
  });

  // Group analytics for B2B: track at org level, not just individual
  if (user.organizationId) {
    posthog.group("organization", user.organizationId, {
      name: user.organizationName,
      plan: user.organizationPlan,
    });
  }
}

// Always reset on logout
function onLogout() {
  posthog.reset();
}
```
Skipping posthog.reset() on logout means the next person who logs in on the same device inherits the previous user's session. On shared machines or shared accounts, your user data becomes fiction.
After setting up PostHog, my first instinct was to look at everything. That was a mistake. Too many metrics is the same as none. Here are the ones I actually use to make decisions.
Of new signups, how many reached their "aha moment" — the point where the product clicked for them? I defined mine as: created their first report within 48 hours of signup.
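For concreteness, here's roughly how that activation definition reduces to code. The types and helper are my own illustration — in practice PostHog's insights compute this from your events — but the logic is the same:

```typescript
interface Signup {
  userId: string;
  signedUpAt: number;           // epoch ms
  firstReportAt: number | null; // epoch ms, null if they never created a report
}

const ACTIVATION_WINDOW_MS = 48 * 60 * 60 * 1000; // 48 hours

// Share of new signups who created their first report within 48h of signup.
function activationRate(signups: Signup[]): number {
  if (signups.length === 0) return 0;
  const activated = signups.filter(
    (s) =>
      s.firstReportAt !== null &&
      s.firstReportAt - s.signedUpAt <= ACTIVATION_WINDOW_MS
  ).length;
  return activated / signups.length;
}
```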
Low activation means onboarding is broken. Not missing features — users are leaving before they experience the core value.
What percentage of users came back on day 7, day 30? This is the health of the product. Pouring money into acquisition with low retention is filling a bucket with a hole in the bottom.
PostHog's retention chart works on cohorts — groups defined by signup date. You can compare whether June signups are retained better than May signups. That's how you measure whether a product improvement actually worked.
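The underlying computation is simple enough to sketch. Assuming you can export per-user activity (the types here are my own illustration, not a PostHog export format), a signup-month cohort key plus a day-N check reproduces the idea:

```typescript
interface UserActivity {
  userId: string;
  signedUpAt: number;   // epoch ms
  activeDays: number[]; // days since signup on which the user was active
}

// Cohort key by signup month, e.g. "2024-05" -- so May and June
// signups can be compared against each other.
function cohortKey(signedUpAt: number): string {
  const d = new Date(signedUpAt);
  return `${d.getUTCFullYear()}-${String(d.getUTCMonth() + 1).padStart(2, "0")}`;
}

// Share of a cohort active on day N or later ("unbounded" retention;
// strict "on day N exactly" is the other common definition).
function dayNRetention(cohort: UserActivity[], day: number): number {
  if (cohort.length === 0) return 0;
  const retained = cohort.filter((u) => u.activeDays.some((d) => d >= day)).length;
  return retained / cohort.length;
}
```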
Map out a flow: signup → email verification → first project → invite teammate → upgrade. Then look at what percentage makes it through each step.
In my product, 60% of users were dropping off between email verification and creating their first project. The reason: after verifying their email, they landed on a blank screen with no guidance. Adding a template gallery to that screen dropped the abandonment rate to 35%.
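The funnel math itself is just successive filtering. A rough sketch, assuming each user's fired event names are available as a list (the step names mirror my funnel above; this is not a PostHog API):

```typescript
// Ordered funnel steps. A user "reaches" step i only if they
// completed every step up to and including i.
const FUNNEL = [
  "signup_completed",
  "email_verified",
  "project_created",
  "teammate_invited",
  "plan_upgraded",
];

// For each step, count users who reached it; adjacent ratios give
// the per-step conversion, and big gaps are your drop-off points.
function funnelCounts(usersEvents: string[][], steps: string[]): number[] {
  return steps.map((_, i) =>
    usersEvents.filter((events) =>
      steps.slice(0, i + 1).every((step) => events.includes(step))
    ).length
  );
}
```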
Of active users, what percentage has ever used a specific feature? If that number is 3%, the feature is effectively invisible to 97% of your users. That was my filter feature. Three months of work, invisible to nearly everyone.
Low reach means one of two things: the feature isn't needed, or users don't know it exists. Session replay helps you tell the difference.
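Feature reach is the simplest of these metrics to compute. A sketch with illustrative types:

```typescript
// Share of active users who have ever fired the feature's event.
// "activeUsersEvents" is each active user's list of fired event names.
function featureReach(activeUsersEvents: string[][], featureEvent: string): number {
  if (activeUsersEvents.length === 0) return 0;
  const reached = activeUsersEvents.filter((events) =>
    events.includes(featureEvent)
  ).length;
  return reached / activeUsersEvents.length;
}
```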
Numbers tell you what happened. Session replay shows you why.
You can see from analytics that 60% of users drop off at a specific step. But you can't see from a number that users are rage-clicking around the screen looking for a button they can't find, or that a form throws an error on submit that nobody ever reported. Watching actual sessions makes the fix obvious.
PostHog lets you filter sessions by event — "show me only sessions where the user hit this specific drop-off point." You don't have to watch everything.
Privacy note: session replay captures what users type unless you mask it. PostHog masks inputs by default, but verify the config.
```typescript
posthog.init(POSTHOG_KEY, {
  session_recording: {
    maskAllInputs: true,
    maskInputOptions: {
      password: true,
      email: true,
    },
    blockSelector: ".sensitive", // block entire elements containing PII
  },
});
```
More events is not better. Instrument what you'll actually use to make decisions.
Track: Core feature usage, funnel step completions and exits, errors, conversion-related actions.
Don't track: Personally identifiable information, raw input content, browser behaviors unrelated to your product.
Event naming conventions that hold up at scale:
Good:
- `project_created`
- `report_exported`
- `team_member_invited`
- `subscription_upgraded`
- `onboarding_step_completed`

Bad:
- `click` (what was clicked?)
- `button_clicked` (which button?)
- `user_action` (what action?)
- `pageView` (mixing camelCase and snake_case)
Keep property names consistent across events too. Mixing report_id and reportId in different events makes aggregation a pain later.
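One way to make that consistency structural instead of a matter of discipline is a typed event map. This pattern is my own, not a PostHog API — posthog-js accepts any string and any properties, so the type layer is where the guarantee lives:

```typescript
// Central registry of event names and their property shapes.
// The events and fields here are illustrative examples.
interface EventMap {
  report_exported: { report_id: string; format: "csv" | "pdf"; source: string };
  project_created: { project_id: string; template: string | null };
}

// Typos in event names or property keys become compile errors.
function captureTyped<K extends keyof EventMap>(
  event: K,
  props: EventMap[K]
): [K, EventMap[K]] {
  // posthog.capture(event, props); // forward to the real client in app code
  return [event, props];
}
```

With this in place, `captureTyped("report_exported", { reportId: "r1", ... })` fails to compile, which is exactly the `report_id` vs `reportId` drift the paragraph above warns about.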
A fully instrumented product is like a dashcam in your car. When something goes wrong, you can pull the footage and understand exactly what happened. But having a dashcam doesn't make you a better driver. Analytics helps you make better decisions — it doesn't make them for you.
Vanity metrics don't tell you what to do. Signups and pageviews are fine for reference. Activation rate, retention, and funnel drop-off are what you need for actual decisions.
PostHog is an open-source, self-hostable all-in-one tool. Events, funnels, session replay, and feature flags in one product. Free up to 1M events/month on PostHog Cloud.
Design events around decisions, not actions. Tracking every click creates noise. Define events based on what questions you need to answer.
identify and reset always go together. Link anonymous sessions to users on login, clear the session on logout. Missing reset silently corrupts your user data.
Session replay fills in the context that numbers miss. High drop-off at a specific step? Watch the sessions from that step. The fix becomes obvious faster than any analysis.
I built features nobody used for three months not because I lacked data, but because I never thought to look. Now the first question I ask before building a feature is: what event will tell me if this is working? If I can't define that, I don't build it.