© 2026 Lucky For Sum

Updates

This portfolio is treated as a product — built in public, shipped iteratively. These are the release notes.

v1.13.0 · Phase 2 · 31 March 2026

Chat streaming fade, slim Summarise & Virtual Brain copy

Assistant replies from the live API are smoothed: text is revealed on an 18ms cadence with adaptive chunk steps, and each new character fades in over 0.32s with a short stagger so fast token bursts feel readable. The citation side panel waits 520ms after the stream ends before opening. Summarise now sends only the current project or blog to the API — slimmer prompts and a dedicated system path instead of the full corpus. The Virtual Brain hero gets tighter copy and subtitle spacing.

What shipped

  • StreamingFadeChars: live streams throttle how fast raw text appears (18ms tick, 1–7 characters per tick depending on backlog), then each newly revealed character gets a 0.32s opacity ease with staggered animationDelay (capped for the first 40 new chars per frame)
  • While streaming, assistant text renders as plain pre-wrap; after finish, the same message switches to full ReactMarkdown + list styling — chip/cached replies skip the throttle and use markdown immediately
  • Citation refs tray: after onFinish, the side panel opens only after a 520ms delay so the answer can be read before the layout reflows
  • Summarise button passes content slug + kind (project/blog) with the pending prompt; chat API can inject buildSlimKnowledgeBase for that piece only — no tools, lower maxOutputTokens — reducing load versus the full knowledge base
  • Chat input: send button shows a spinner and disables while loading; mic button disabled during the same window; contentEditable locked while streaming or listening
  • Virtual Brain subtitle rewritten: map of the intelligence layer behind the AI assistant; drag, zoom, pan — subtitle line-height 1.25 and bottom margin for spacing above the canvas
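The StreamingFadeChars throttle can be sketched as two pure helpers — one picks the per-tick chunk size from the backlog, the other computes the capped stagger delay. The 18ms tick, 1–7 character range, and 40-char cap come from the notes above; the scaling divisor (24) and per-char delay (8ms) are assumed values, not the component's real constants.

```typescript
// Tick cadence and chunk range are from the release notes; the divisor
// (24) and per-char stagger (8ms) are illustrative assumptions.
const TICK_MS = 18;

function chunkStep(backlog: number): number {
  // Larger backlog → bigger steps, clamped to 1..7 so fast token
  // bursts stay readable instead of dumping everything at once.
  return Math.min(7, Math.max(1, Math.ceil(backlog / 24)));
}

function staggerDelayMs(indexInFrame: number, perCharMs = 8): number {
  // Stagger is capped at the first 40 newly revealed chars per frame.
  return Math.min(indexInFrame, 40) * perCharMs;
}
```

On each 18ms tick the component would reveal `chunkStep(backlog)` characters and assign each one a `staggerDelayMs` animationDelay for its 0.32s opacity ease.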

Why we built it this way

  • Throttle + per-char fade only for real API streams — cached chip answers stay instant and keep markdown formatting
  • 520ms refs delay — avoids the panel stealing attention mid-read; matches a natural pause after the stream completes
  • Slim KB for Summarize — one article or case study serialised server-side keeps token use predictable for org limits; general chat still uses the full corpus
  • Shorter Brain intro — ring categories remain visible on the canvas; the headline already names the experience

v1.12.0 · Phase 2 · 14 March 2026

Inline prompt chips & page-specific conversation starters

Prompt chips move from a focus-triggered popover in the input to inline buttons that appear directly beneath the welcome message. Every page now has its own 3 chips with hardcoded responses — so the conversation starts from where you actually are, not a generic blank slate.

What shipped

  • Chips rendered inline after the IntroMessage in the chat panel — visible immediately on open, no input focus required
  • Homepage retains all 6 general prompts ('What's your most challenging project?', 'Tell me about your AI product experience', etc.)
  • Every project page, blog post, and static page now has 3 context-specific chips that surface relevant questions about that page's content
  • All page-specific chip responses are hardcoded with citations — clicking a chip on the Draftwise page asks about the blank-box problem, trust through provenance, or Beta results; clicking on the Robin AI page asks about the first-designer role, the practice build, or the hardest challenge
  • Static pages (Brain, Work, Updates, Articles, Contact) each have 3 tailored chips — e.g. 'How do you treat your portfolio as a product?' on Updates, 'Which project should I look at first?' on Work
  • ChatInterface reads contextTag.id to look up page chips from the new PAGE_CHIPS map — falls back to HOMEPAGE_CHIPS when no context is set
  • Removed popover chip mechanism from ChatInput entirely — no more focus-to-reveal, no showPopover state, no chipsVisible prop
  • Chip row fades in with the same msgIn animation as the intro message, staggered 150ms after

Why we built it this way

  • Chips inline rather than popover — the popover required users to focus the input to discover suggestions existed; inline chips are visible the moment the chat opens
  • 3 chips per page rather than 6 — 6 chips on a specific page would feel padded; 3 forces better curation and keeps the panel from looking cluttered
  • Hardcoded responses for page chips — same pattern as the existing homepage chips, zero API latency, consistent quality, and no risk of a hallucinated answer about a specific case study
  • PAGE_CHIPS map keyed by contextTag.id — the id is already set by PageContextSetter on every page, so no additional page-level wiring was needed
  • Chip row uses flex-wrap so 3 chips reflow naturally on narrow panel widths without truncation
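The chip lookup described above reduces to a keyed map with a fallback. The chip labels here are taken from the notes; the map shape, `chipsFor` name, and placeholder responses are illustrative assumptions, not the real module.

```typescript
type Chip = { label: string; response: string };

// Homepage keeps the 6 general prompts (two shown here for brevity).
const HOMEPAGE_CHIPS: Chip[] = [
  { label: "What's your most challenging project?", response: "(hardcoded answer)" },
  { label: "Tell me about your AI product experience", response: "(hardcoded answer)" },
];

// Page-specific chips keyed by contextTag.id.
const PAGE_CHIPS: Record<string, Chip[]> = {
  updates: [{ label: "How do you treat your portfolio as a product?", response: "(hardcoded answer)" }],
  work: [{ label: "Which project should I look at first?", response: "(hardcoded answer)" }],
};

function chipsFor(contextTagId?: string): Chip[] {
  // Fall back to the homepage set when no context is set
  // or the page has no tailored chips.
  return (contextTagId && PAGE_CHIPS[contextTagId]) || HOMEPAGE_CHIPS;
}
```

Because PageContextSetter already sets `contextTag.id` on every page, this lookup needs no extra page-level wiring.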

v1.11.0 · Phase 2 · 11 March 2026

UnicornStudio hero, particle canvas & work carousel

The homepage hero gets a full motion overhaul. A UnicornStudio scene runs as the hero background, a hand-coded particle flow field adds a second interactive canvas layer, an SVG ripple filter distorts the scene on mouse movement, and a new carousel beneath the hero cycles through project photography and video at pace.

What shipped

  • UnicornStudio scene integrated as the hero background via unicornstudio-react — renders a GPU-composited animation tied to the project ID, replacing the static dark gradient
  • HeroRipple: SVG feTurbulence + feDisplacementMap filter applied as a backdrop-filter overlay — mouse speed drives displacement scale (up to 48px) and frequency, decays exponentially on idle
  • HeroCarousel: 8-slide auto-advancing reel of project images and video clips below the hero fold — 1s per image, up to 3s per video clip, videos reset and replay on each revisit
  • Carousel timing driven by readyState checks — video slides wait for canplay before starting the clock, preventing blank frames on slow connections
  • HeroCanvas: 1,800-particle flow field rendered on a full-bleed canvas using layered trigonometric noise — gold-spectrum particles trail behind a 30-step history, mouse creates a whirlpool swirl within a 160px radius
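The dependency-free trig noise behind the flow field can be sketched as four layered sine/cosine terms mapped to a steering angle. Only the "4 layered trig terms, no Perlin library" approach is from the notes — the frequencies, weights, and function name below are assumptions.

```typescript
// Four layered sine/cosine terms give a smooth pseudo-noise field
// without a Perlin dependency. Coefficients are illustrative.
function flowAngle(x: number, y: number, t: number): number {
  const n =
    Math.sin(x * 0.003 + t * 0.2) +
    Math.cos(y * 0.004 - t * 0.15) +
    Math.sin((x + y) * 0.002 + t * 0.1) * 0.5 +
    Math.cos((x - y) * 0.005) * 0.5;
  // Combined term ranges roughly -3..3; map it to an angle in radians.
  return (n / 3) * Math.PI;
}
```

Each particle would step along `flowAngle(p.x, p.y, time)` per frame, with the mouse whirlpool blended in within its 160px radius.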

Why we built it this way

  • UnicornStudio over a self-hosted WebGL shader — the hosted scene is art-directed and editable without code changes; a WebGL background was prototyped and removed the same day (no user-visible regression)
  • SVG filter applied as a backdrop-filter rather than directly to the canvas — distorts all layers behind it (UnicornStudio, type) without needing per-element filter props
  • Particle flow field uses trig noise (4 layered sine/cosine terms) rather than a Perlin library — no dependency, runs fast, and the contour-map aesthetic matches the portfolio's dark gold palette
  • Carousel advances on setTimeout rather than requestAnimationFrame — frame-accurate timing isn't needed here; timeouts are simpler and don't block the main thread when the tab is backgrounded
  • Image slides use 1s duration (fast enough to feel kinetic, slow enough to read); video slides cap at 3s regardless of clip length to keep the reel tight
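The slide-timing rule above is simple enough to state as one function — images hold for 1s, videos play for their clip length capped at 3s. The function name is an assumption; in the real component the clock only starts once a video reaches canplay.

```typescript
// Images: fixed 1s. Videos: clip length, capped at 3s to keep the reel tight.
function slideDurationMs(kind: "image" | "video", clipMs = 0): number {
  if (kind === "image") return 1000;
  return Math.min(clipMs, 3000);
}
```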

v1.10.0 · Phase 2 · 10 March 2026

ProjectSlider refactor & Robin AI video assets

ProjectSlider rebuilt with improved layout and media handling. Robin AI case study gains two video clips. Minor HomeHero and Nav polish.

What shipped

  • ProjectSlider layout refactored — media card and text card positions tightened, responsive grid breakpoints adjusted
  • Robin AI project: two video assets added (RAI.mp4, RAI 2.mp4) replacing static imagery in the featured slide
  • ProjectSubsection image utility class added for consistent in-content image sizing
  • SummariseButton module adjustment for layout alignment
  • Nav: minor link or spacing tweak
  • HomeHero: layout refinement pass

Why we built it this way

  • Video assets committed directly to /public rather than a CDN — file sizes are small enough for the Netlify bundle and keep the deployment self-contained
  • ProjectSlider refactor kept to layout/CSS changes only — animation state machine from v1.3.0 left intact

v1.9.0 · Phase 2 · 6 March 2026

Custom cursor & hero ripple overlay

A custom cursor replaces the browser default across the site — a sharp dot with a lagging ring that springs behind it. A transparent ripple overlay sits above the hero and distorts the background in response to mouse movement.

What shipped

  • CustomCursor: two-layer cursor — a 6px solid dot that snaps to the pointer and a 28px ring that lags behind with a spring (lerp factor 0.12) on each animation frame
  • Cursor fades in on mouseenter and out on mouseleave; hidden entirely on touch/coarse-pointer devices via pointer: fine media query
  • Respects prefers-reduced-motion — ring snaps to pointer position rather than interpolating when the system preference is set
  • HeroRippleOverlay: transparent div with a CSS backdrop-filter ripple driven by mouse position, sits above hero content without blocking interaction
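The per-frame ring update can be sketched as a single lerp step — the 0.12 factor and the reduced-motion snap are from the notes; the function shape is an assumption.

```typescript
// One rAF step: the ring moves 12% of the way toward the pointer each
// frame; with prefers-reduced-motion it snaps instead of interpolating.
function ringStep(
  ring: { x: number; y: number },
  pointer: { x: number; y: number },
  reducedMotion = false,
  factor = 0.12
): { x: number; y: number } {
  if (reducedMotion) return { ...pointer };
  return {
    x: ring.x + (pointer.x - ring.x) * factor,
    y: ring.y + (pointer.y - ring.y) * factor,
  };
}
```

Calling this inside a requestAnimationFrame loop gives the exponential-ease trail; a CSS transition would fight itself when the cursor reverses direction sharply.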

Why we built it this way

  • Ring lag implemented with rAF lerp rather than CSS transition — gives per-frame control and avoids transition conflicts when the cursor accelerates sharply
  • cursor: none applied at the root via globals.css rather than per-component — simpler and ensures no fallback system cursor flashes during hydration
  • Overlay uses pointer-events: none — the ripple distorts visually but never intercepts clicks or hover states on content below

v1.8.0 · Phase 2 · 5 March 2026

Summarise button & citation chip polish

Each blog post and project page now has a Summarise button embedded in the intro header. One click opens the chat and fires a two-sentence summary prompt with full page context attached. Citation chips on assistant messages lost their decorative icon for a cleaner look.

What shipped

  • Summarise button on every blog and project page — sits inside the article header beneath the title/tagline, styled as a mustard pill matching the citation chip aesthetic
  • Pulsing ring animation on the Summarise button mirrors the floating chat trigger — pauses on hover, signals interactivity without competing with page content
  • One click opens the chat panel and automatically sends a prompt asking Claude to summarise the current page in two sentences, with page context injected
  • pendingPrompt mechanism in ChatContext — external triggers set a prompt that ChatInterface consumes and fires once mounted, covering both first-open and already-open states
  • Citation chips on assistant messages: removed the ✦ decorative icon for a cleaner, text-only label
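The pendingPrompt contract boils down to set-then-consume-once. It's modelled here as a tiny store rather than the real React context — the names are assumptions, the one-shot semantics are from the notes.

```typescript
// External triggers (the Summarise button) set a prompt; ChatInterface
// consumes it exactly once when it mounts or when it's already open.
function createPendingPrompt() {
  let pending: string | null = null;
  return {
    set(prompt: string) {
      pending = prompt;
    },
    // Consuming clears the slot so a remount can't re-fire the prompt.
    consume(): string | null {
      const p = pending;
      pending = null;
      return p;
    },
  };
}
```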

Why we built it this way

  • pendingPrompt lives in ChatContext rather than a ref or prop — avoids threading callbacks through FloatingChat and works regardless of whether ChatInterface is already mounted or lazy-loading for the first time
  • Summarise button reuses handleChipSelect rather than a direct sendMessage call — gets cache lookup, PostHog tracking, and the thinking indicator for free
  • Pulse animation uses the same box-shadow keyframe pattern as the floating chat button but with a tighter 8px spread — proportional to the smaller pill size
  • Removed ✦ from citation chips: the pill shape and mustard colour already signal interactivity; the icon was visual noise rather than signal

v1.7.0 · Phase 2 · 5 March 2026

Site rendering fixes & mobile chat improvements

A stability pass targeting rendering regressions across the site. Mobile chat now opens reliably, several layout and z-index issues are resolved, and the blank-page bug introduced by a stale CRA index.html has been eliminated. One known issue remains: chat content has a visual offset on certain viewports — tracked and in progress.

What shipped

  • Fixed blank site caused by a CRA index.html in /public overriding Next.js's root route
  • Fixed chat not opening on mobile — tap on the floating button now correctly toggles the panel
  • Fixed BlurImage z-index leaking above project header content on scroll
  • Fixed chat content visual offset caused by missing border-radius on the inner container
  • Misc CSS refinements across ChatInput, ChatInterface, ChatMessages, FloatingChat, ChatSection, and ProjectSlider

Why we built it this way

  • Removed the legacy CRA index.html entirely rather than patching around it — the file had no purpose in the Next.js project and was silently shadowing the app's root route
  • border-radius fix applied to the inner chat container rather than the outer wrapper — preserves the existing shadow and backdrop-filter behaviour without side effects

v1.6.0 · Phase 2 · 3 March 2026

BlurImage component, hero headshot & project photography

A visual quality pass across the site. New BlurImage component adds a blur-to-sharp reveal on every image. Home hero gains a headshot. Project headers now show real photography instead of SVG illustrations. ProjectSlider preloads adjacent slides.

What shipped

  • BlurImage component: wraps Next.js Image with a JS-driven blur placeholder — tiny base64 PNG fades out as the full image loads; supports both fill and fixed-size modes
  • getBlurDataURL utility: reads images from /public at build time and returns a base64 PNG to use as the blur placeholder
  • Home hero headshot: photo of Sumner added to the hero section with blur placeholder; stats strip repositioned below the photo
  • Project headers: new headerImage field on ProjectEntry — Robin AI, Total Platform, and Portfolio For All now display real photography; falls back to SVG illustration when no image is set
  • Project subsection images: migrated to BlurImage for consistent blur-to-sharp behaviour across all in-content imagery
  • ProjectSlider preloading: adjacent slides are preloaded once the section enters the viewport; per-slide cache avoids hiding already-loaded images on revisit
  • New image assets: headshot-hero.png, Robin-2.jpg, portfolio-header-cropped.png, total-platform-header.jpg, Draftwise-Assistant.jpg

Why we built it this way

  • BlurImage over native Next.js placeholder='blur': native approach requires blurDataURL to be co-located with src as a static prop — awkward with paths computed at render time; the component separates concerns cleanly
  • JS-driven onLoad fade (useState) rather than CSS-only: Next.js onLoadingComplete is deprecated; onLoad on the underlying img element is the current reliable hook
  • Wrapper span gets position: absolute; inset: 0 when fill is set — matches Next.js's own fill-mode expectations without an extra CSS class
  • headerImage is optional and falls back to illustration — existing projects without photography continue to work with no migration required

v1.5.0 · Phase 2 · 2 March 2026

Scroll-reveal animations, career history & chat polish

New ScrollingTextReveal primitive deployed across every major heading on the site. New career history section on the homepage. Chat UI refinements: rotating thinking phrases and per-message citation chips.

What shipped

  • ScrollingTextReveal component: IntersectionObserver-triggered word-by-word slide-up animation — each word clips into view with a staggered delay, fires once on first scroll into viewport
  • Deployed across all major headings: homepage hero, CraftStatement, ChatSection, updates page, side panel content cards
  • Career history section on the homepage: 4 roles (Draftwise, Robin AI, Next Ltd, Impero Software) with title, company, description, and year range in an editorial grid layout
  • Chat thinking indicator now cycles through 4 phrases ('Thinking…', 'Searching my work…', 'Pulling from memory…', 'Crafting a response…') every 2.2s with a fade transition between each
  • Citation chips on assistant messages: inline ✦ title chips per message — clicking jumps to the referenced content in the side panel
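The word-level split that drives ScrollingTextReveal can be sketched as a pure function: split on whitespace, give each word a staggered delay. Splitting on words rather than lines is what makes it independent of wrapping; the 60ms step and function name are assumed values.

```typescript
// Each word gets its own clip-wrapper and a staggered delay. The 60ms
// per-word step is illustrative — the real stagger isn't published.
function splitWords(text: string, stepMs = 60): { word: string; delayMs: number }[] {
  return text
    .split(/\s+/)
    .filter(Boolean)
    .map((word, i) => ({ word, delayMs: i * stepMs }));
}
```

The component would render each entry in an overflow-hidden span and apply `delayMs` as the animation-delay once the IntersectionObserver fires.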

Why we built it this way

  • Word-level splitting over line-level — independent of font size, viewport width, and text wrapping; no layout dependency
  • IntersectionObserver threshold 0.1 with -10% rootMargin — animation lands as the user arrives at the element, not after they've already passed it
  • Words clipped via overflow: hidden on a wrapper span rather than an off-canvas translate — prevents horizontal scrollbar artifacts on narrow viewports
  • ThinkingBubble uses opacity transition (250ms) rather than remounting the component — avoids layout jank during phrase cycling
  • Citation chips reference the same ResolvedContentRef already held in messageRefs state — no additional data fetch or side panel state duplication

v1.4.0 · Phase 2 · 27 February 2026

Voice-to-chat input

Visitors can now speak their questions directly into the chat. A mic button sits in the input row, shows a live transcript as you speak, and commits the final text for review before sending.

What shipped

  • Mic button in the chat input row — circular glass style, sits between the textarea and send button
  • Live interim transcript: spoken words appear in the textarea in real-time as you speak (dim + italic), giving instant visual feedback
  • Final transcript appends to any existing typed text — typed prefix + spoken suffix combine naturally
  • Mustard pulse ring animation on the mic button while listening; input row border glows mustard
  • Ctrl+M / Cmd+M keyboard shortcut toggles the mic
  • Graceful degradation: mic button hidden on browsers without Speech API support (Firefox, Brave)
  • Actionable error messages for permission denied, no microphone found, and network failures
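The "typed prefix + spoken suffix" merge from the two-tier model can be sketched as one pure function — committed text is what gets sent, the interim transcript is display-only, and only a final speech result is appended. The function name is an assumption.

```typescript
// On a final SpeechRecognition result, append the spoken text to whatever
// was already typed; interim transcripts never touch committed state.
function commitTranscript(typed: string, finalTranscript: string): string {
  const prefix = typed.trim();
  const suffix = finalTranscript.trim();
  if (!prefix) return suffix;
  if (!suffix) return prefix;
  return `${prefix} ${suffix}`;
}
```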

Why we built it this way

  • Native Web Speech API over a Whisper endpoint — zero latency, no audio upload cost, no backend changes; trade-off is Chrome/Edge/Safari only
  • Brave detected via navigator.brave and treated as unsupported — Brave ships without Google API keys so SpeechRecognition always fails with a network error; hiding the button is cleaner than showing a button that always errors
  • No auto-submit after final transcript — speech recognition is imperfect; visitors review and edit before sending to avoid embarrassing mis-transcriptions
  • Two-tier input model: committed input state (sent to Claude) vs. interimTranscript (display-only) — keeps the controlled textarea contract intact with no synthetic event hacks
  • Transient errors (no-speech, aborted) reset silently; persistent errors (permission denied, network) show a message

v1.3.0 · Phase 2 · 27 February 2026

Homepage refresh, public analytics dashboard & chat UX upgrades

A significant day — new homepage with a featured work slider and craft stats section, a fully public analytics dashboard backed by live PostHog data, and a round of chat UX upgrades including a pulsing button, auto-open on load, and styled tooltips throughout.

What shipped

  • Homepage: new ProjectSlider with animated directional transitions — 3 featured case studies with editorial text and dual image cards
  • Homepage: CraftStatement section with 4 stats (10+ Years, 7 Case studies, 0→1 AI products, 9 Articles) and craft manifesto copy
  • Analytics dashboard at /analytics — live PostHog event widgets (ISR, refreshes every 5 minutes) driven by data-collection.md YAML frontmatter
  • Analytics: bar chart, table, stats, and count widget types; safeguarding badge on sensitive events (flagged messages); data policy accordions below the fold
  • Analytics: demo mode banner when POSTHOG_PERSONAL_API_KEY is not set; graceful empty states on all widgets
  • Floating chat button enlarged (icon circle 1.75rem → 2.5rem) with ambient mustard pulse ring — pauses on hover, stops when chat is open
  • Chat auto-opens on hard page reload; client-side route navigation preserves the user's last state
  • Styled tooltip utility added globally via [data-tooltip] attribute — expand, collapse, close, and send buttons
  • Analytics moved before Updates in the main navigation

Why we built it this way

  • ProjectSlider uses a phase-based animation state machine (idle → exiting → entering) rather than CSS-only transitions — gives precise control over enter/exit direction without layout thrash
  • Analytics dashboard is ISR rather than client-side fetch — data is fresh every 5 minutes with no loading spinner, and the page is fully server-rendered for SEO
  • Event widget config lives in data-collection.md YAML frontmatter rather than a separate config file — keeps the data policy and dashboard config co-located and in sync
  • Pulse animation uses box-shadow expansion rather than a separate DOM element — no layout impact and works with the existing pill shape
  • Auto-open implemented via useEffect in ChatProvider (root layout) rather than useState initialiser — avoids SSR/hydration mismatch while still only firing once per hard load
  • Tooltip implemented as a pure CSS [data-tooltip] attribute utility in globals.css — zero JS, zero component overhead, works on any element across the codebase

v1.2.0 · Phase 2 · 26 February 2026

AI layer: voice calibration, session cache, persistence & Virtual Brain

Depth pass on the intelligence layer — voice training data, response caching, message persistence, and a Virtual Brain that now surfaces real content snapshots from the knowledge base.

What shipped

  • Voice calibration: 14 curated Q&As in Sumner's voice injected into every system prompt, training Claude to match style, depth, and first-person tone
  • 9 answer-quality priority rules (facts-only, correct metrics, practitioner tone, concise by default)
  • Session cache: 50-entry sessionStorage cache with key normalisation — identical questions skip the API entirely, zero latency on repeats
  • Message persistence: conversation history and per-message content refs survive full page navigation via localStorage
  • Virtual Brain outer ring: 7 content-snapshot nodes pulled live from the knowledge base — including Beta metrics, AI design philosophy, code-first design, latency thinking, positioning, and design aesthetic
  • Brain page subtitle updated to orient visitors to the two-ring layout
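The cache-key normalisation can be sketched as lowercase + whitespace collapse + trailing-punctuation strip, so trivially different phrasings hit the same entry. The 50-entry cap and sessionStorage scope are from the notes; the exact normalisation rules and the Map-based eviction below are assumptions.

```typescript
// Normalise a question so "What's your best project??" and
// "what's your best project" share one cache slot.
function cacheKey(question: string): string {
  return question
    .toLowerCase()
    .replace(/\s+/g, " ")
    .replace(/[?!.\s]+$/g, "")
    .trim();
}

// Capped insert: evict the oldest entry once the 50-entry cap is hit
// (Map preserves insertion order, so the first key is the oldest).
function putCapped(cache: Map<string, string>, key: string, value: string, cap = 50) {
  if (!cache.has(key) && cache.size >= cap) {
    const oldest = cache.keys().next().value;
    if (oldest !== undefined) cache.delete(oldest);
  }
  cache.set(key, value);
}
```

In the real component the Map would be serialised into sessionStorage so the cache dies with the tab, keeping new visits fresh.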

Why we built it this way

  • Voice calibration over system-prompt instructions alone: real Q&As show the model what 'right' looks like rather than just describing it — style transfer is more reliable than rules
  • sessionStorage over localStorage for the response cache: scoped to the tab session so visitors always get fresh answers on a new visit, but repeat questions within a session are instant
  • Content-snapshot nodes on the Virtual Brain pull directly from source content files — if identity.ts or chat-voice.ts changes, the brain updates automatically with no extra maintenance
  • Dashed purple-ish edges on the outer ring visually distinguish the content layer from the inner identity/experience ring without needing a new node type

v1.1.0 · Phase 2 · 23 February 2026

AI layer: conversational portfolio with Claude (claude-sonnet-4-6)

The portfolio can now answer questions about Sumner's work, process, and thinking. Claude speaks in first-person as Sumner, surfaces relevant case studies mid-stream in a side panel, and logs all interactions to PostHog.

What shipped

  • Streaming chat endpoint via the Vercel AI SDK and Claude (claude-sonnet-4-6) on the edge runtime
  • Full knowledge base injection: all projects, blog articles, design snippets, and about content in the system prompt (~15–25k tokens)
  • reference_content tool: Claude surfaces case studies and articles mid-stream in the side panel
  • ChatInterface: two-panel layout (conversation + side panel) with responsive grid
  • 6 prompt chips to help visitors start a conversation
  • PostHog events: chat_message_sent (with message_text), prompt_chip_clicked (chip_text), content_referenced, side_panel_card_clicked, resource_viewed (project/blog page views)

Why we built it this way

  • No RAG/vector DB: all content fits in Claude's 200k context window at ~15-25k tokens — adding embeddings would add infrastructure complexity with no accuracy benefit at this scale
  • Edge runtime for the API route: zero cold starts on Netlify
  • Tool use over prompt-only: mid-stream tool calls let the UI update the side panel before the text response finishes, creating a more responsive feel

v1.0.0 · Phase 1 · 22 February 2026

Foundation: Next.js 14 + full content migration

Complete architectural rewrite from React 16 CRA to Next.js 14 App Router. All content migrated to TypeScript files. All pages and routes built.

What shipped

  • Next.js 14 App Router with SSR metadata via generateMetadata()
  • Dark/light mode with next-themes (no flash on load)
  • All case studies migrated: robin-ai, total-platform, portfolio-for-all, online-safety
  • All 9 blog articles migrated
  • 9 design snippets gallery
  • Contact form via EmailJS
  • SEO-friendly slug-based URLs (e.g. /project/robin-ai)
  • Permanent redirects from old pid-based URLs
  • SVG illustration system split from 237KB monolith into 5 components
  • PostHog analytics (replacing GTM)
  • Netlify deployment via @netlify/plugin-nextjs
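The pid-to-slug redirects can be sketched as a small lookup in next.config shape. The pid value and config keys below are illustrative — the real project's pid list isn't published here; only the "permanent redirects from old pid-based URLs" behaviour is.

```typescript
// Illustrative redirect table in the shape Next.js's `redirects()` expects:
// old query-string URLs (/project?pid=…) 301 to slug routes.
const projectRedirects = [
  {
    source: "/project",
    has: [{ type: "query" as const, key: "pid", value: "1" }], // assumed pid
    destination: "/project/robin-ai",
    permanent: true,
  },
];

// Helper to resolve a pid against the table (null when no mapping exists).
function redirectFor(pid: string): string | null {
  const hit = projectRedirects.find((r) =>
    r.has.some((h) => h.key === "pid" && h.value === pid)
  );
  return hit ? hit.destination : null;
}
```

Using `permanent: true` emits a 308, letting search engines transfer ranking from the old CRA URLs to the slug routes.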

Why we built it this way

  • Chose TypeScript files over MDX or headless CMS — the nested section/subsection content structure doesn't map to flat Markdown
  • Chose PostHog over GTM — free tier captures 1M events/month with a real product analytics dashboard
  • Chose Netlify over Vercel — preserves the existing GitHub → auto-deploy workflow without DNS changes
  • Knowledge base approach chosen over RAG/vector DB — all content fits in Claude's 200k context window at ~15-25k tokens