InternTrack – Job Application Tracker

A modern React + TypeScript application for tracking job/internship applications with timelines, analytics, folders, Supabase persistence, and Groq-powered AI parsing for structured job data extraction.

Project Overview

InternTrack streamlines how I capture and monitor internship / early‑career applications. It combines:

  • Structured application records
  • Status & timeline progression
  • Folder-based organization
  • Derived analytics (distribution, velocity, monthly trend, skill frequency, remote vs onsite)
  • Dynamic UI variants (classic / new)
  • Supabase persistence (auth + Postgres CRUD)
  • Groq-powered AI parsing to accelerate data entry from raw job descriptions

Problem & Motivation

Pain Point | Effect
Manual transcription of job descriptions | Slows application throughput
No temporal context for progress | Hard to spot stalled applications
Inconsistent field formatting | Hinders aggregation & analytics
Repetitive tagging & skill extraction | Cognitive load / human error

InternTrack reduces friction by pairing a typed domain model with enrichment & analytics. The AI parsing feature turns unstructured JD text into pre-filled, editable application drafts.

Feature Snapshot

Category | Implemented Details
Core Data | Role, company, location, experience, skills, remote, status, notes, timeline, folder
Status Flow | Applied → Online Assessment → Interview → Offer → Closed (non-linear updates allowed)
Timeline | Append-only status events with date + note
Analytics | Status distribution, time-to-interview / offer, monthly counts, top companies/skills/locations, remote vs onsite
Visualization | Pie charts, stat tiles, trend groupings
Persistence | Supabase (auth + job_applications + folders)
UI System | Variant loader (dynamic CSS & font), memoized components
AI Parsing | Groq model invocation converts raw description → structured fields (role, skills, inferred tags, remote signal, experience)
Type Safety | Comprehensive TypeScript interfaces & derived stat typing

Domain Modeling

Getting the data structure right was crucial since everything else builds on top of it. I wanted the types to be explicit about the application lifecycle while staying flexible for future features. After going through several iterations, I settled on this structure because it balances specificity with extensibility—the status enum prevents invalid states while the timeline array accommodates the messy reality of how job applications actually progress. The optional fields like folderId and jobPostingUrl were added later when I realized users needed better organization and wanted to reference the original postings.

export type JobStatus =
  'Applied' | 'Online Assessment' | 'Interview' | 'Offer' | 'Closed';

export interface TimelineEvent {
  status: JobStatus;
  date: string;
  note: string;
}

export interface Job {
  id: number;
  role: string;
  company: string;
  location: string;
  experienceRequired: string;
  skills: string[];
  remote: boolean;
  notes: string;
  status: JobStatus;
  dateApplied: string;
  timeline: TimelineEvent[];
  folderId?: string;
  jobPostingUrl?: string;
}

export interface JobStats {
  total: number;
  applied: number;
  onlineAssessment: number;
  interview: number;
  offer: number;
  closed: number;
}

The timeline array turned out to be one of my better decisions—it captures the non-linear nature of job applications (sometimes you skip stages or circle back) while making it trivial to calculate time spent at each stage. Initially, I considered just storing the current status and dates, but that approach falls apart when companies have different processes or when you need to track multiple interview rounds. The append-only timeline design means I never lose historical context, and it makes features like “time since last update” incredibly straightforward to implement. Plus, it gives me rich data for future analytics—like identifying which companies tend to have longer assessment phases or spotting patterns in my own application velocity.

Timeline Rationale

Persisting historical stage events enables:

  • Time-to-stage calculations (sketched below)
  • Progression visualization
  • Future predictive heuristics (e.g., “Assessment > X days” flags)
  • Auditable change history
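
A minimal sketch of the first two derivations, assuming timeline dates and dateApplied are ISO-8601 strings:

const MS_PER_DAY = 1000 * 60 * 60 * 24;

// Days from application to the first event with the given status,
// or null if that stage was never reached
function daysToStage(job: Job, stage: JobStatus): number | null {
  const event = job.timeline.find(e => e.status === stage);
  if (!event) return null;
  return (Date.parse(event.date) - Date.parse(job.dateApplied)) / MS_PER_DAY;
}

// Days since the most recent timeline event, falling back to dateApplied
function daysSinceLastUpdate(job: Job): number {
  const last = job.timeline[job.timeline.length - 1]?.date ?? job.dateApplied;
  return (Date.now() - Date.parse(last)) / MS_PER_DAY;
}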

Supabase Persistence (Excerpt)

This is where the rubber meets the road—transforming my clean TypeScript interfaces into database records. The mapping between camelCase and snake_case feels tedious but keeps the database conventions clean. I spent way too much time debating whether to use snake_case everywhere or camelCase everywhere, but ultimately decided to follow each platform’s conventions—JavaScript uses camelCase, PostgreSQL prefers snake_case, and the transformation layer bridges them cleanly. The user authentication check at the top is crucial since Supabase’s Row Level Security policies depend on having a valid user context for every database operation.

static async createJobApplication(jobData: Omit<Job, 'id'>): Promise<Job> {
  // RLS policies require an authenticated user context for every operation
  const { data: { user } } = await supabase.auth.getUser();
  if (!user) throw new Error('User not authenticated');

  // Bridge camelCase interface fields to snake_case columns
  const databaseJob = {
    user_id: user.id,
    role: jobData.role,
    company: jobData.company,
    location: jobData.location,
    experience_required: jobData.experienceRequired,
    skills: jobData.skills,
    remote: jobData.remote,
    notes: jobData.notes,
    status: jobData.status,
    date_applied: jobData.dateApplied,
    timeline: jobData.timeline || [],
    folder_id: jobData.folderId || null,
    job_posting_url: jobData.jobPostingUrl || null
  };

  const { data, error } = await supabase
    .from('job_applications')
    .insert([databaseJob])
    .select()
    .single();
  if (error) throw error;

  return this.mapDatabaseJobToJob(data);
}

I appreciate how Supabase handles user authentication transparently—no JWT decoding or session management headaches. The mapDatabaseJobToJob helper does the reverse transformation back to my preferred interface shape.
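
The excerpt doesn't show mapDatabaseJobToJob itself; a plausible minimal version, with column names mirrored from the insert payload above:

private static mapDatabaseJobToJob(row: any): Job {
  return {
    id: row.id,
    role: row.role,
    company: row.company,
    location: row.location,
    experienceRequired: row.experience_required,
    skills: row.skills ?? [],
    remote: row.remote,
    notes: row.notes,
    status: row.status as JobStatus,
    dateApplied: row.date_applied,
    timeline: row.timeline ?? [],
    folderId: row.folder_id ?? undefined,
    jobPostingUrl: row.job_posting_url ?? undefined
  };
}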

State & Derivations

React’s useMemo saves me from recalculating stats on every render. With potentially hundreds of job records, these filter operations could get expensive without memoization:

const stats = useMemo(() => ({
  total: jobs.length,
  applied: jobs.filter(j => j.status === 'Applied').length,
  onlineAssessment: jobs.filter(j => j.status === 'Online Assessment').length,
  interview: jobs.filter(j => j.status === 'Interview').length,
  offer: jobs.filter(j => j.status === 'Offer').length,
  closed: jobs.filter(j => j.status === 'Closed').length
}), [jobs]);

Analytics Construction (Distribution Sample)

For the pie charts and progress visualization, I needed to transform the raw counts into percentages. This pattern of mapping over status enums keeps things consistent and makes adding new statuses straightforward:

const statuses: JobStatus[] = ['Applied','Online Assessment','Interview','Offer','Closed'];
const total = jobs.length; // guards the divide-by-zero case below
const statusDistribution = statuses.map(status => {
  const count = jobs.filter(j => j.status === status).length;
  return {
    status,
    count,
    percentage: total > 0 ? (count / total) * 100 : 0
  };
});
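
The monthly counts mentioned in the feature snapshot follow the same shape-the-data-first pattern; a sketch, assuming dateApplied is an ISO string so its first seven characters form the YYYY-MM bucket:

// Group applications by month of dateApplied for the trend chart
const monthlyCounts = useMemo(() => {
  const counts = new Map<string, number>();
  for (const job of jobs) {
    const month = job.dateApplied.slice(0, 7); // 'YYYY-MM'
    counts.set(month, (counts.get(month) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([month, count]) => ({ month, count }));
}, [jobs]);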

Dynamic UI Variant Loader

This was a fun experiment in code splitting and theme switching. Instead of bundling both UI variants upfront, I load only what’s needed. The dynamic imports keep the initial bundle smaller and allow for completely different design systems:

const variant: UIVariant = useMemo(() => getStoredVariant(), []);

useEffect(() => {
  (async () => {
    if (variant === 'new') {
      await import('../new-ui/index.css');
    } else {
      await import('./index.css');
    }
    setCssReady(true);
  })();
}, [variant]);

const AppComponent = useMemo(
  () => React.lazy(() => variant === 'new'
    ? import('../new-ui/App')
    : import('./App')),
  [variant]
);

The React.lazy approach means users switching between UI modes get a slight loading delay, but it was worth it for the bundle size savings. Plus, it makes A/B testing design iterations much cleaner.
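
getStoredVariant isn't shown above; a plausible minimal version, assuming the selection persists in localStorage under a key such as 'ui-variant':

export type UIVariant = 'classic' | 'new';

export function getStoredVariant(): UIVariant {
  // Hypothetical storage key; anything other than 'new' falls back to classic
  return localStorage.getItem('ui-variant') === 'new' ? 'new' : 'classic';
}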

AI Parsing & Enrichment (Groq-Powered)

The AI parsing pipeline reduces friction when adding a new application:

  1. User pastes a raw job description (or optionally a job posting URL after manual retrieval).
  2. A Groq model call produces structured JSON capturing:
    • Inferred role title normalization
    • Extracted skills (token-/phrase-level, deduplicated)
    • Location signals (remote / hybrid detection)
    • Experience phrasing → normalized experienceRequired
    • Potential tags (e.g., “frontend”, “data”, “cloud”)
  3. Draft form fields are pre-populated; user can edit before saving.
  4. Accepted values become the persisted Job plus an initial timeline event (“Applied” or “Created via AI Parse”).

Representative Invocation (Simplified)

The magic happens in this surprisingly straightforward function. I keep the system prompt minimal and focused—LLMs work better with clear, single-purpose instructions than verbose prompts:

interface ParsedJobDraft {
  role: string;
  company?: string;
  location?: string;
  experienceRequired?: string;
  skills: string[];
  remote: boolean;
  notes: string;
}

async function parseJobDescription(raw: string): Promise<ParsedJobDraft> {
  const resp = await groq.chat.completions.create({
    model: 'llama-4-scout-17b-16e-instruct',
    temperature: 0.1, // low temperature keeps extraction output stable
    messages: [
      {
        role: 'system',
        content: 'Extract structured job posting data. Return ONLY JSON.'
      },
      {
        role: 'user',
        content: raw
      }
    ]
  });

  // LLM output is untrusted: if the JSON is malformed, keep the empty
  // object so the field-level fallbacks below produce a blank draft
  let json: any = {};
  try {
    json = JSON.parse(resp.choices[0].message?.content ?? '{}');
  } catch {
    /* json stays {} */
  }

  return {
    role: json.role ?? '',
    company: json.company ?? '',
    location: json.location ?? '',
    experienceRequired: json.experience ?? '',
    skills: Array.isArray(json.skills) ? json.skills : [],
    remote: !!json.remote,
    notes: json.summary ?? ''
  };
}

The ?? '' fallbacks and array checks are essential—LLMs occasionally return malformed JSON or skip fields entirely. Better to have empty strings than runtime crashes when the user is trying to quickly log an application.

Design Considerations

Aspect | Decision
Determinism | Low temperature to keep output stable
Field Validation | Post-parse normalization (title casing, skill dedupe; sketched below)
User Trust | Parsed draft never auto-saves; explicit confirmation required
Error Handling | Graceful fallback to blank form if JSON invalid
Extendability | Future: classify seniority, predict stage success probability
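
For concreteness, a minimal sketch of the skill-dedupe half of that normalization step (the helper name is hypothetical):

// Hypothetical post-parse helper: trims whitespace and dedupes skills
// case-insensitively while preserving the first-seen spelling
function normalizeSkills(skills: string[]): string[] {
  const seen = new Set<string>();
  const result: string[] = [];
  for (const raw of skills) {
    const skill = raw.trim();
    if (!skill || seen.has(skill.toLowerCase())) continue;
    seen.add(skill.toLowerCase());
    result.push(skill);
  }
  return result;
}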

Why Groq?

Criterion | Benefit
Latency | Sub-second parse → preserves input flow
Cost | Economical for frequent, small prompts
Model Variety | Access to performant general-purpose LLMs
Simplicity | Straightforward JSON-style extraction prompts

Job Card Interaction (Excerpt)

I went with a simple dropdown for status changes rather than a fancy multi-step wizard. Users can update job statuses quickly without overthinking the workflow:

<select
  value={job.status}
  onChange={handleStatusChange}
  className="bg-slate-700 border border-slate-600 rounded-lg px-3 py-1.5 text-sm"
>
  <option>Applied</option>
  <option>Online Assessment</option>
  <option>Interview</option>
  <option>Offer</option>
  <option>Closed</option>
</select>
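
handleStatusChange isn't shown; a sketch of the append-only update it would perform, assuming the card receives an onUpdate callback (prop names are illustrative):

// Illustrative handler: sets the new status and appends a timeline event
const handleStatusChange = (e: React.ChangeEvent<HTMLSelectElement>) => {
  const nextStatus = e.target.value as JobStatus;
  onUpdate({
    ...job,
    status: nextStatus,
    timeline: [
      ...job.timeline,
      {
        status: nextStatus,
        date: new Date().toISOString(),
        note: `Status changed to ${nextStatus}`
      }
    ]
  });
};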

Timeline Modal (Conceptual Behavior)

Each TimelineEvent renders:

  • Color-coded panel
  • Icon mapped to status
  • Note / contextual message
  • Date (localized)

This narrative presentation aids recall and follow-up cadence.
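
One way to keep that mapping declarative is a per-status lookup; a sketch with assumed Tailwind classes and emoji stand-ins for the icon set:

// Assumed presentation map: panel classes and icon per status
const STATUS_STYLES: Record<JobStatus, { panel: string; icon: string }> = {
  'Applied':           { panel: 'bg-blue-500/10 border-blue-500/40',       icon: '📨' },
  'Online Assessment': { panel: 'bg-amber-500/10 border-amber-500/40',     icon: '📝' },
  'Interview':         { panel: 'bg-purple-500/10 border-purple-500/40',   icon: '🎤' },
  'Offer':             { panel: 'bg-emerald-500/10 border-emerald-500/40', icon: '🎉' },
  'Closed':            { panel: 'bg-slate-500/10 border-slate-500/40',     icon: '📁' }
};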

Styling & Visual System

  • Tailwind utility foundation
  • Glass layering via translucent slate backgrounds + subtle borders
  • Gradients & spotlight radial overlays for depth
  • Motion limited to transform/opacity for performance

The glass-panel effect took some tweaking to get right—too much blur and it feels heavy, too little and it looks flat:

.glass-panel {
  @apply bg-slate-900/60 backdrop-blur-md border border-slate-700/60 rounded-xl;
}

This Tailwind pattern keeps the styling consistent across components while making the glass effect easy to reuse. The opacity and blur values were chosen through trial and error—what looks good in design tools doesn’t always translate to the browser.

Performance Strategies

Target | Technique
Avoid redundant renders | React.memo (sketched below), narrow dependency arrays
Analytics cost | Pre-compute & memoize derived arrays
Variant assets | Conditional dynamic imports
Chart overhead | Data shaped before passing to <Recharts /> components
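
The React.memo row translates to wrapping leaf components so a list re-render skips unchanged cards; a minimal sketch with illustrative props:

interface JobCardProps {
  job: Job;
  onUpdate: (job: Job) => void;
}

// Re-renders only when job or onUpdate actually change
const JobCard = React.memo(function JobCard({ job, onUpdate }: JobCardProps) {
  return (
    <div className="glass-panel p-4">
      <h3 className="text-sm font-semibold">{job.role} · {job.company}</h3>
      {/* status select, timeline trigger, etc. */}
    </div>
  );
});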

Architectural Advantages

  • Composable Domain: Timeline events & status model decoupled from UI
  • Deterministic Enrichment: AI used as an accelerator, not a hidden mutation layer
  • Seam for Expansion: Future analytics (e.g., skill gap analysis) can hook into parsed fields
  • Strict Typing: Minimizes drift between parsed JSON & internal models

Strategic Roadmap (High-Level)

  • Enhanced AI: classification (seniority, category), multi-pass validation
  • Bulk ingestion: parse multiple postings sequentially
  • Export / import with enrichment retention
  • Follow-up reminders triggered by stage dwell thresholds
  • Server-side aggregation for heavy analytics (scaling beyond client loops)

Reflection

InternTrack pairs a pragmatic product surface (clean, fast tracker) with useful AI—not flashy automation for its own sake. The Groq parsing workflow collapses the “blank form” hurdle while preserving user oversight. The codebase demonstrates intentional layering: domain modeling first, enrichment second, analytics third—yielding clarity and maintainability.
