© 2026 Lucky For Sum

Updates

This portfolio is treated as a product — built in public, shipped iteratively. These are the release notes.

v1.6.0 · Phase 2 · 3 March 2026

BlurImage component, hero headshot & project photography

A visual quality pass across the site. New BlurImage component adds a blur-to-sharp reveal on every image. Home hero gains a headshot. Project headers now show real photography instead of SVG illustrations. ProjectSlider preloads adjacent slides.

What shipped

  • BlurImage component: wraps Next.js Image with a JS-driven blur placeholder — tiny base64 PNG fades out as the full image loads; supports both fill and fixed-size modes
  • getBlurDataURL utility: reads images from /public at build time and returns a base64 PNG to use as the blur placeholder
  • Home hero headshot: photo of Sumner added to the hero section with blur placeholder; stats strip repositioned below the photo
  • Project headers: new headerImage field on ProjectEntry — Robin AI, Total Platform, and Portfolio For All now display real photography; falls back to SVG illustration when no image is set
  • Project subsection images: migrated to BlurImage for consistent blur-to-sharp behaviour across all in-content imagery
  • ProjectSlider preloading: adjacent slides are preloaded once the section enters the viewport; per-slide cache avoids hiding already-loaded images on revisit
  • New image assets: headshot-hero.png, Robin-2.jpg, portfolio-header-cropped.png, total-platform-header.jpg, Draftwise-Assistant.jpg
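
The blur-placeholder plumbing can be sketched as a small pair of helpers, one pure and one build-time (the names `toBlurDataURL` and `getBlurDataURL` follow the entry above, but the implementation shown is an assumption, not the site's actual code):

```typescript
// Sketch of a build-time blur-placeholder helper. A tiny thumbnail is read
// from /public and wrapped as a base64 data URL for BlurImage to fade out.
import { readFileSync } from "node:fs";
import { join } from "node:path";

// Pure helper: wrap raw image bytes as a data URL.
export function toBlurDataURL(bytes: Buffer, mime = "image/png"): string {
  return `data:${mime};base64,${bytes.toString("base64")}`;
}

// Build-time wrapper: resolve a path under /public and encode the file.
export function getBlurDataURL(publicPath: string): string {
  const abs = join(process.cwd(), "public", publicPath);
  return toBlurDataURL(readFileSync(abs));
}
```

Keeping the encoding step pure makes it trivial to unit-test without touching the filesystem.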

Why we built it this way

  • BlurImage over native Next.js placeholder='blur': native approach requires blurDataURL to be co-located with src as a static prop — awkward with paths computed at render time; the component separates concerns cleanly
  • JS-driven onLoad fade (useState) rather than CSS-only: Next.js onLoadingComplete is deprecated; onLoad on the underlying img element is the current reliable hook
  • Wrapper span gets position: absolute; inset: 0 when fill is set — matches Next.js's own fill-mode expectations without an extra CSS class
  • headerImage is optional and falls back to illustration — existing projects without photography continue to work with no migration required
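
The two style decisions above (fill-mode wrapper, JS-driven fade) reduce to a couple of pure functions; this is a minimal sketch with assumed names, not the component itself:

```typescript
// Style logic behind BlurImage, sketched as pure functions.
type CSS = Record<string, string | number>;

// In fill mode the wrapper span mirrors Next.js's own fill expectations;
// otherwise it behaves as a normal inline container.
export function wrapperStyle(fill: boolean): CSS {
  return fill
    ? { position: "absolute", inset: 0 }
    : { position: "relative", display: "inline-block" };
}

// The blur layer's opacity flips when the underlying <img> fires onLoad;
// a CSS transition handles the fade itself.
export function blurLayerStyle(loaded: boolean): CSS {
  return { opacity: loaded ? 0 : 1, transition: "opacity 300ms ease" };
}
```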

v1.5.0 · Phase 2 · 2 March 2026

Scroll-reveal animations, career history & chat polish

New ScrollingTextReveal primitive deployed across every major heading on the site. New career history section on the homepage. Chat UI refinements: rotating thinking phrases and per-message citation chips.

What shipped

  • ScrollingTextReveal component: IntersectionObserver-triggered word-by-word slide-up animation — each word clips into view with a staggered delay, fires once on first scroll into viewport
  • Deployed across all major headings: homepage hero, CraftStatement, ChatSection, updates page, side panel content cards
  • Career history section on the homepage: 4 roles (Draftwise, Robin AI, Next Ltd, Impero Software) with title, company, description, and year range in an editorial grid layout
  • Chat thinking indicator now cycles through 4 phrases ('Thinking…', 'Searching my work…', 'Pulling from memory…', 'Crafting a response…') every 2.2s with a fade transition between each
  • Citation chips on assistant messages: inline ✦ title chips per message — clicking jumps to the referenced content in the side panel
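
The word-by-word reveal starts with splitting the heading and assigning each word a staggered delay. A minimal sketch (the function name and the 60ms step are assumptions):

```typescript
// Split a heading into words with staggered animation delays.
export interface RevealWord {
  text: string;
  delayMs: number;
}

export function splitForReveal(heading: string, stepMs = 60): RevealWord[] {
  return heading
    .split(/\s+/)
    .filter(Boolean) // drop empty strings from leading/trailing whitespace
    .map((text, i) => ({ text, delayMs: i * stepMs }));
}
```

Each word is then wrapped in an `overflow: hidden` span and translated up with its delay once an IntersectionObserver flips the trigger class.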

Why we built it this way

  • Word-level splitting over line-level — independent of font size, viewport width, and text wrapping; no layout dependency
  • IntersectionObserver threshold 0.1 with -10% rootMargin — animation lands as the user arrives at the element, not after they've already passed it
  • Words clipped via overflow: hidden on a wrapper span rather than an off-canvas translate — prevents horizontal scrollbar artifacts on narrow viewports
  • ThinkingBubble uses opacity transition (250ms) rather than remounting the component — avoids layout jank during phrase cycling
  • Citation chips reference the same ResolvedContentRef already held in messageRefs state — no additional data fetch or side panel state duplication
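
The phrase rotation itself is just index cycling on a 2.2s interval; a sketch with assumed names (the real component keeps one mounted bubble and only the index changes per tick):

```typescript
// The four thinking phrases cycled by the chat indicator.
export const THINKING_PHRASES = [
  "Thinking…",
  "Searching my work…",
  "Pulling from memory…",
  "Crafting a response…",
] as const;

// Advance to the next phrase, wrapping back to the start.
export function nextPhraseIndex(current: number, total = THINKING_PHRASES.length): number {
  return (current + 1) % total;
}
```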

v1.4.0 · Phase 2 · 27 February 2026

Voice-to-chat input

Visitors can now speak their questions directly into the chat. A mic button sits in the input row, shows a live transcript as you speak, and commits the final text for review before sending.

What shipped

  • Mic button in the chat input row — circular glass style, sits between the textarea and send button
  • Live interim transcript: spoken words appear in the textarea in real time as you speak (dim + italic), giving instant visual feedback
  • Final transcript appends to any existing typed text — typed prefix + spoken suffix combine naturally
  • Mustard pulse ring animation on the mic button while listening; input row border glows mustard
  • Ctrl+M / Cmd+M keyboard shortcut toggles the mic
  • Graceful degradation: mic button hidden on browsers without Speech API support (Firefox, Brave)
  • Actionable error messages for permission denied, no microphone found, and network failures
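
The typed-prefix-plus-spoken-suffix behaviour comes down to two small string functions; this is a hedged sketch with assumed names, not the actual implementation:

```typescript
// Two-tier input model: committed text is what the visitor typed or has
// already finalised; the interim transcript is display-only.
export function displayValue(committed: string, interim: string): string {
  if (!interim) return committed;
  const sep = committed && !committed.endsWith(" ") ? " " : "";
  return committed + sep + interim;
}

// On a final recognition result, the spoken suffix joins the committed text.
export function commitTranscript(committed: string, finalText: string): string {
  return displayValue(committed, finalText.trim());
}
```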

Why we built it this way

  • Native Web Speech API over a Whisper endpoint — zero latency, no audio upload cost, no backend changes; trade-off is Chrome/Edge/Safari only
  • Brave detected via navigator.brave and treated as unsupported — Brave ships without Google API keys so SpeechRecognition always fails with a network error; hiding the button is cleaner than showing a button that always errors
  • No auto-submit after final transcript — speech recognition is imperfect; visitors review and edit before sending to avoid embarrassing mis-transcriptions
  • Two-tier input model: committed input state (sent to Claude) vs. interimTranscript (display-only) — keeps the controlled textarea contract intact with no synthetic event hacks
  • Transient errors (no-speech, aborted) reset silently; persistent errors (permission denied, network) show a message
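
The transient-vs-persistent split can be sketched as a classifier over the Web Speech API's `SpeechRecognitionErrorEvent.error` codes (the codes are real API values; the messages and classification shown are assumptions):

```typescript
// Errors that reset silently: the visitor simply didn't speak, or the
// recogniser was cancelled.
const TRANSIENT = new Set(["no-speech", "aborted"]);

// Returns null for transient errors, otherwise an actionable message.
export function speechErrorMessage(code: string): string | null {
  if (TRANSIENT.has(code)) return null;
  switch (code) {
    case "not-allowed":
      return "Microphone access was denied. Enable it in your browser settings.";
    case "audio-capture":
      return "No microphone found. Check your input device.";
    case "network":
      return "Speech recognition needs a network connection. Please try again.";
    default:
      return "Something went wrong with voice input. Please type instead.";
  }
}
```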

v1.3.0 · Phase 2 · 27 February 2026

Homepage refresh, public analytics dashboard & chat UX upgrades

A significant day — new homepage with a featured work slider and craft stats section, a fully public analytics dashboard backed by live PostHog data, and a round of chat UX upgrades including a pulsing button, auto-open on load, and styled tooltips throughout.

What shipped

  • Homepage: new ProjectSlider with animated directional transitions — 3 featured case studies with editorial text and dual image cards
  • Homepage: CraftStatement section with 4 stats (10+ Years, 7 Case studies, 0→1 AI products, 9 Articles) and craft manifesto copy
  • Analytics dashboard at /analytics — live PostHog event widgets (ISR, refreshes every 5 minutes) driven by data-collection.md YAML frontmatter
  • Analytics: bar chart, table, stats, and count widget types; safeguarding badge on sensitive events (flagged messages); data policy accordions below the fold
  • Analytics: demo mode banner when POSTHOG_PERSONAL_API_KEY is not set; graceful empty states on all widgets
  • Floating chat button enlarged (icon circle 1.75rem → 2.5rem) with ambient mustard pulse ring — pauses on hover, stops when chat is open
  • Chat auto-opens on hard page reload; client-side route navigation preserves the user's last state
  • Styled tooltip utility added globally via [data-tooltip] attribute — expand, collapse, close, and send buttons
  • Analytics moved before Updates in the main navigation
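
The dashboard's config plumbing is small; a sketch under assumed names (`isDemoMode` is not necessarily the real helper), showing the ISR window and the key-absent fallback:

```typescript
// Next.js ISR: re-render the /analytics page at most every five minutes.
const REFRESH_MINUTES = 5;
export const revalidate = REFRESH_MINUTES * 60; // seconds

// Demo mode: show the banner and placeholder data when no key is set.
export function isDemoMode(env: Record<string, string | undefined>): boolean {
  return !env.POSTHOG_PERSONAL_API_KEY;
}
```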

Why we built it this way

  • ProjectSlider uses a phase-based animation state machine (idle → exiting → entering) rather than CSS-only transitions — gives precise control over enter/exit direction without layout thrash
  • Analytics dashboard is ISR rather than client-side fetch — data is fresh every 5 minutes with no loading spinner, and the page is fully server-rendered for SEO
  • Event widget config lives in data-collection.md YAML frontmatter rather than a separate config file — keeps the data policy and dashboard config co-located and in sync
  • Pulse animation uses box-shadow expansion rather than a separate DOM element — no layout impact and works with the existing pill shape
  • Auto-open implemented via useEffect in ChatProvider (root layout) rather than useState initialiser — avoids SSR/hydration mismatch while still only firing once per hard load
  • Tooltip implemented as a pure CSS [data-tooltip] attribute utility in globals.css — zero JS, zero component overhead, works on any element across the codebase
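
The slider's phase machine can be sketched as a single transition function (state shape and names are assumptions; in the real component each transition is driven by animation-end timing):

```typescript
// Phase-based slider: idle -> exiting -> entering -> idle.
export type SliderPhase = "idle" | "exiting" | "entering";

export interface SliderState {
  index: number;
  phase: SliderPhase;
  direction: 1 | -1;
}

export function advance(state: SliderState, total: number): SliderState {
  switch (state.phase) {
    case "idle": // a navigation request kicks off the exit animation
      return { ...state, phase: "exiting" };
    case "exiting": // exit finished: swap the slide, start the entrance
      return {
        index: (state.index + state.direction + total) % total,
        phase: "entering",
        direction: state.direction,
      };
    case "entering": // entrance finished: back to rest
      return { ...state, phase: "idle" };
  }
}
```

Because every enter/exit is an explicit state, direction never has to be inferred from CSS mid-transition.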

v1.2.0 · Phase 2 · 26 February 2026

AI layer: voice calibration, session cache, persistence & Virtual Brain

Depth pass on the intelligence layer — voice training data, response caching, message persistence, and a Virtual Brain that now surfaces real content snapshots from the knowledge base.

What shipped

  • Voice calibration: 14 curated Q&As in Sumner's voice injected into every system prompt, training Claude to match style, depth, and first-person tone
  • 9 answer-quality priority rules (facts-only, correct metrics, practitioner tone, concise by default)
  • Session cache: 50-entry sessionStorage cache with key normalisation — identical questions skip the API entirely, zero latency on repeats
  • Message persistence: conversation history and per-message content refs survive full page navigation via localStorage
  • Virtual Brain outer ring: 7 content-snapshot nodes pulled live from the knowledge base — Beta metrics, AI design philosophy, code-first design, latency thinking, positioning, and design aesthetic
  • Brain page subtitle updated to orient visitors to the two-ring layout
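
The cache behaviour above can be sketched in a few lines; this version uses an in-memory Map where the real implementation sits on sessionStorage, and the names are assumptions:

```typescript
const MAX_ENTRIES = 50;

// Normalise keys so trivially different phrasings hit the same entry.
export function normaliseKey(question: string): string {
  return question.trim().toLowerCase().replace(/\s+/g, " ");
}

export class ResponseCache {
  private entries = new Map<string, string>();

  get(question: string): string | undefined {
    return this.entries.get(normaliseKey(question));
  }

  set(question: string, answer: string): void {
    const key = normaliseKey(question);
    this.entries.delete(key); // refresh insertion order on overwrite
    if (this.entries.size >= MAX_ENTRIES) {
      // Evict the oldest entry (Map preserves insertion order).
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, answer);
  }
}
```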

Why we built it this way

  • Voice calibration over system-prompt instructions alone: real Q&As show the model what 'right' looks like rather than just describing it — style transfer is more reliable than rules
  • sessionStorage over localStorage for the response cache: scoped to the tab session so visitors always get fresh answers on a new visit, but repeat questions within a session are instant
  • Content-snapshot nodes on the Virtual Brain pull directly from source content files — if identity.ts or chat-voice.ts changes, the brain updates automatically with no extra maintenance
  • Dashed purple-ish edges on the outer ring visually distinguish the content layer from the inner identity/experience ring without needing a new node type

v1.1.0 · Phase 2 · 23 February 2026

AI Layer: Conversational portfolio with Claude (claude-sonnet-4-6)

The portfolio can now answer questions about Sumner's work, process, and thinking. Claude speaks in first-person as Sumner, surfaces relevant case studies mid-stream in a side panel, and logs all interactions to PostHog.

What shipped

  • Streaming chat endpoint via Vercel AI SDK + Claude (claude-sonnet-4-6), running on the edge runtime
  • Full knowledge base injection: all projects, blog articles, design snippets, and about content in the system prompt (~15-25k tokens)
  • reference_content tool: Claude surfaces case studies and articles mid-stream in the side panel
  • ChatInterface: two-panel layout (conversation + side panel) with responsive grid
  • 6 prompt chips to help visitors start a conversation
  • PostHog events: chat_message_sent (with message_text), prompt_chip_clicked (chip_text), content_referenced, side_panel_card_clicked, resource_viewed (project/blog page views)
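
The no-RAG approach amounts to concatenating everything into the system prompt and sanity-checking the size. A rough sketch (helper names and the chars/4 heuristic are assumptions, not the site's code):

```typescript
// Concatenate all knowledge-base sections into one system prompt.
export function buildSystemPrompt(sections: { title: string; body: string }[]): string {
  return sections.map((s) => `## ${s.title}\n\n${s.body}`).join("\n\n");
}

// Crude token estimate: ~4 characters per token is a common heuristic
// for English prose, enough to confirm the prompt fits in a 200k window.
export function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}
```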

Why we built it this way

  • No RAG/vector DB: all content fits in Claude's 200k context window at ~15-25k tokens — adding embeddings would add infrastructure complexity with no accuracy benefit at this scale
  • Edge runtime for the API route: zero cold starts on Netlify
  • Tool use over prompt-only: mid-stream tool calls let the UI update the side panel before the text response finishes, creating a more responsive feel

v1.0.0 · Phase 1 · 22 February 2026

Foundation: Next.js 14 + full content migration

Complete architectural rewrite from React 16 CRA to Next.js 14 App Router. All content migrated to TypeScript files. All pages and routes built.

What shipped

  • Next.js 14 App Router with SSR metadata via generateMetadata()
  • Dark/light mode with next-themes (no flash on load)
  • All case studies migrated: robin-ai, total-platform, portfolio-for-all, online-safety
  • All 9 blog articles migrated
  • 9 design snippets gallery
  • Contact form via EmailJS
  • SEO-friendly slug-based URLs (e.g. /project/robin-ai)
  • Permanent redirects from old pid-based URLs
  • SVG illustration system split from 237KB monolith into 5 components
  • PostHog analytics (replacing GTM)
  • Netlify deployment via @netlify/plugin-nextjs
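
The pid-to-slug redirects reduce to a lookup; a sketch in which the pid values and the helper name are illustrative assumptions (only the slugs are from the site):

```typescript
// Map legacy pid query values to the new slug-based URLs.
const PID_TO_SLUG: Record<string, string> = {
  "1": "robin-ai", // hypothetical pid values for illustration
  "2": "total-platform",
};

// Returns the permanent-redirect target, or null for no redirect.
export function legacyRedirect(pathname: string, pid: string | null): string | null {
  if (pathname !== "/project" || !pid) return null;
  const slug = PID_TO_SLUG[pid];
  return slug ? `/project/${slug}` : null;
}
```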

Why we built it this way

  • Chose TypeScript files over MDX or headless CMS — the nested section/subsection content structure doesn't map to flat Markdown
  • Chose PostHog over GTM — free tier captures 1M events/month with a real product analytics dashboard
  • Chose Netlify over Vercel — preserves the existing GitHub → auto-deploy workflow without DNS changes
  • Knowledge base approach chosen over RAG/vector DB — all content fits in Claude's 200k context window at ~15-25k tokens