Your customers are already saying incredible things about your product. BuddyHQ turns that voice (video, audio, text) into branded, publish-ready UGC content. No editing suite. No copywriter. Just feedback in, marketing assets out.
Designed and shipped 6 core modules end-to-end, supporting 7+ content output formats from a single unified interface.
Defined the three-tier content architecture (Chat → Project → Library), the structural backbone the team continues to build on.
Validated the conversational-first model with 12 marketing professionals across SME interviews and PoC testing.
Pre-release product approved internally; stakeholder validation greenlit the next phase of partner-facing rollout.
Imagine you're a marketing lead at a B2B SaaS company.
You just wrapped a customer interview series. 14 video calls, 200+ written survey responses, and three months of support transcripts sitting in a Google Drive folder.
Every one of those conversations has a quote, a moment, a story worth publishing. You know it. Your team knows it. But by Friday, you'll ship two LinkedIn posts and one case study draft. The rest will sit untouched until someone deletes the folder next quarter.
This is what most B2B marketing teams deal with every week.
The signal is there. The system to turn it into content isn't. Every piece of marketing content from customer feedback travels through 5+ handoffs across 4+ tools, manually, every single time:
Survey tools or video platforms drop raw responses into spreadsheets.
A marketer watches hours of footage, highlights usable quotes. Most signal gets lost here.
Summaries get handed to copywriters and designers. Context is already diluted.
Videos clipped in Premiere, posts in Canva, emails in another tool. Nothing shares context.
Final assets live across Drive, Notion, Dropbox. No source of truth.
At first, it looks like a content problem. It's actually a system problem. Teams aren't short on tools; they're operating across disconnected systems with no continuity between them.
BuddyHQ was built on a simple idea:
What if customer voice didn't sit in folders, but flowed directly into the content marketing teams actually ship?
From a product perspective: BuddyHQ ingests customer feedback in any modality (video, audio, text, surveys, transcripts), runs it through an AI engine that handles transcription, theme extraction, and brand voice calibration, and returns 7+ branded content formats ready to publish. One pipeline. One interface. Zero tool switching.
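To make that pipeline concrete, here is a minimal sketch of how the flow could be modeled, written in TypeScript with hypothetical names. FeedbackSource, ContentPipeline, and the format list are illustrations based on the description above, not BuddyHQ's actual schema.

```typescript
// Illustrative sketch only: names and shapes are assumptions, not BuddyHQ's actual schema.
type FeedbackSource =
  | { kind: "video"; url: string }
  | { kind: "audio"; url: string }
  | { kind: "text"; body: string }
  | { kind: "survey"; responses: string[] }
  | { kind: "transcript"; body: string };

type OutputFormat =
  | "video-snippet"
  | "social-post"
  | "email"
  | "blog"
  | "audio-clip"
  | "ugc-page"
  | "platform-resize";

interface GeneratedAsset {
  format: OutputFormat;
  content: string;       // text body, or a media reference for video/audio outputs
  sourceQuoteId: string;  // traceability back to the customer quote that drove the asset
}

// One pipeline: ingest -> transcribe, extract themes, calibrate brand voice -> branded assets.
interface ContentPipeline {
  ingest(source: FeedbackSource): Promise<string>; // returns a normalized transcript id
  generate(transcriptId: string, formats: OutputFormat[]): Promise<GeneratedAsset[]>;
}
```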
As the first and only designer on the project, I owned the end-to-end design of BuddyHQ's MVP from early product definition to scalable delivery.
The team had built a working AI pipeline that could ingest video, audio, and text and generate output in multiple formats. What they didn't have was a product around it. No interface. No information architecture. No defined user journey.
The AI was capable, but raw capability isn't a product. The challenge wasn't a lack of capability; it was a lack of structure. I needed to design systems that surfaced what mattered without exposing the machinery underneath.
Before touching Figma, I ran a focused 5-day sprint to understand who BuddyHQ was actually for and how they'd use it. The goal was to validate the product hypothesis with real marketing teams before committing to a direction.
Day 1: Stakeholder alignment, current workflow audit
Day 2: SME sessions with 12 marketing professionals
Day 3: Concept exploration and information architecture
Day 4: Synthesize findings, lock direction with the team
Day 5: Test prototype concepts with 5 PoC users
I ran 12 sessions with content marketers, growth leads, and content managers across SaaS companies. The conversations were more specific than I expected. Three breakdowns came up in every session:
“I have hours of customer interviews. I just want someone to hand me a finished LinkedIn post that sounds like the customer actually said it.”
“I'm a team of one shipping across five channels. I need leverage, not another tool to learn.”
The PoC validated the conversational-first model with real users and gave the team a concrete artifact to align on. I designed and prototyped the core interaction loop: upload feedback → describe intent → review generated content.
I tested the prototype with 5 marketing teams from the SME pool. The findings unblocked the next phase.
⚠ Disclaimer: Early validation signals from a controlled PoC (n=5). Metrics indicate directional usability and adoption potential, not production-scale outcomes.
The PoC validated the concept. The real challenge was figuring out: which modules should we build first, and how could they flex across content formats without fragmenting into five separate products?
From the SME interviews and PoC sessions, the priorities were clear.
That shaped the MVP into 6 core modules, each reusing the same generation engine, the same component patterns, and the same content storage system.
These modules aren't separate products. They're different views into the same engine. One source of truth, multiple surfaces. This was the architectural choice that made BuddyHQ scalable.
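A rough way to picture that architecture, with hypothetical interface names rather than the shipped code: every module talks to one shared generation engine and one shared content store, and contributes only its own surface.

```typescript
// Hypothetical sketch: every module is a thin view over one shared engine and one shared store.
interface GeneratedAsset { format: string; content: string; sourceQuoteId: string }

interface GenerationEngine {
  generate(projectId: string, format: string, prompt: string): Promise<GeneratedAsset>;
}

interface ContentStore {
  save(projectId: string, asset: GeneratedAsset): Promise<void>;
  list(projectId: string, format?: string): Promise<GeneratedAsset[]>;
}

// A module contributes its surface and a default format; orchestration and storage stay shared.
interface ModuleView {
  id: "chat" | "video-snippets" | "post-creator" | "audio-bites" | "long-form" | "library";
  defaultFormat?: string;
  render(engine: GenerationEngine, store: ContentStore): void; // UI layer, framework-agnostic here
}
```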
SME interviews showed marketers describe their work as delegation, not configuration: “I need someone to take these responses and make a post.” A chat-first model honored that mental model. New users got an instant path to value; returning users saw recent projects.
Designed a single chat surface with five quick-action cards (Video Snippets, Survey Snap, Post Creator, Audio Bites, Blog Creation). The AI handles orchestration: no menus, no settings to configure before generating. Zero onboarding required; first-time-to-value measured in seconds.
The chat surface is the scalable front door. New output formats can be added as quick-action cards without redesigning the interface. The interaction model holds regardless of how the AI engine evolves underneath.
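For illustration, the quick-action layer can be thought of as a small registry, so a new format is a data change rather than an interface redesign. The names and prompt templates below are assumptions, not the production configuration.

```typescript
// Hypothetical registry: adding an output format is a data change, not an interface redesign.
interface QuickActionCard {
  id: string;
  label: string;
  format: string;         // which output format the card kicks off
  promptTemplate: string; // seed instruction handed to the AI engine
}

const quickActions: QuickActionCard[] = [
  { id: "video-snippets", label: "Video Snippets", format: "video-snippet", promptTemplate: "Clip the strongest moments from {source}" },
  { id: "survey-snap",    label: "Survey Snap",    format: "social-post",   promptTemplate: "Turn survey themes from {source} into a post" },
  { id: "post-creator",   label: "Post Creator",   format: "social-post",   promptTemplate: "Draft a branded post from {source}" },
  { id: "audio-bites",    label: "Audio Bites",    format: "audio-clip",    promptTemplate: "Extract shareable audio from {source}" },
  { id: "blog-creation",  label: "Blog Creation",  format: "blog",          promptTemplate: "Write a long-form draft from {source}" },
];

// A future format (ads, reels, carousels) is one more entry rendered by the same card component.
```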
Video was the most labor-intensive content type in the existing workflow. Premiere, CapCut, hours of clipping. SME interviews flagged inline editing as the most-requested capability. Without it, marketers would still leave the app to finish the work.
Designed a thumbnail strip that shows all 10 generated snippets at once, inline captions for accessibility, and a built-in timeline editor for trimming and approving. Platform resize (YouTube, Instagram, LinkedIn) lives in the same panel, so users never leave the canvas.
Closed the loop on the highest-friction content type. The timeline editor became the pattern for inline editing across every other module: trimming audio, editing a blog, and refining a post all use the same component vocabulary.
Marketers ship the same post across LinkedIn, Instagram, and X: three different aspect ratios, three rounds of design work. The PoC sessions confirmed format-switching was the second-biggest time sink after video editing.
Designed brand-consistent generation with one-click aspect-ratio switching. Source attribution is baked in for traceability: marketers can see exactly which customer quote drove the asset. A slide strip shows all variants; new pages can be added without re-prompting.
The aspect-ratio system became the foundation for any future format expansion: ads, reels, and carousels all use the same canvas with different output specs. The brand kit lives at the project level, so consistency holds without per-asset configuration.
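One way to sketch that "same canvas, different output specs" idea, using illustrative dimensions rather than BuddyHQ's actual presets:

```typescript
// Illustrative output specs: one canvas, per-platform dimensions. Values are placeholder defaults.
interface OutputSpec {
  platform: "linkedin" | "instagram" | "x" | "youtube";
  width: number;        // px
  height: number;       // px
  safeMarginPx: number; // keep text and source attribution inside this margin when resizing
}

const outputSpecs: OutputSpec[] = [
  { platform: "linkedin",  width: 1200, height: 1200, safeMarginPx: 48 },
  { platform: "instagram", width: 1080, height: 1350, safeMarginPx: 48 },
  { platform: "x",         width: 1600, height: 900,  safeMarginPx: 40 },
  { platform: "youtube",   width: 1280, height: 720,  safeMarginPx: 40 },
];

// One-click switching re-renders the same canvas content against a different spec.
function specFor(platform: OutputSpec["platform"]): OutputSpec {
  const spec = outputSpecs.find(s => s.platform === platform);
  if (!spec) throw new Error(`No output spec defined for ${platform}`);
  return spec;
}
```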
Audio was the most overlooked content type in early concept testing. Marketers requested it for two reasons: shareability (podcast-style clips, testimonial highlights, social audio for Instagram and TikTok) and accessibility (a second surface for the same content). Without an audio module, BuddyHQ would have left an entire category of distribution on the table.
Designed a dual-input system: audio can be extracted from video (pulling spoken moments from testimonials and webinars) or generated from text (TTS with brand voice calibration). One waveform editor, one transcript, one export flow regardless of source. Speaker-labeled transcripts, trim points, multiple variant browsing, and brand audio styling (intro/outro) all live in the same panel.
Audio Bites became cross-module infrastructure, not a standalone format. The same generated bites feed back into the Video Snippets module as background scores, closing a loop other tools never close. One generation pass produces a podcast clip, a social audio post, a video soundtrack, and an accessible audio version of a blog. That's compounding value from a single input.
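A simplified sketch of that dual-input model, with hypothetical helper functions standing in for the AI pipeline: both inputs normalize into the same editor object, which is what keeps the waveform editor and export flow identical regardless of source.

```typescript
// Hypothetical dual-input model: audio extracted from video or synthesized from text,
// both normalized into the same editor object. Helper functions are stand-ins, not real APIs.
type AudioInput =
  | { kind: "from-video"; videoUrl: string; startSec: number; endSec: number }
  | { kind: "from-text"; text: string; voicePresetId: string }; // TTS with brand voice calibration

interface AudioBite {
  audioUrl: string;
  transcript: { speaker: string; text: string; startSec: number; endSec: number }[];
  trim: { startSec: number; endSec: number };
  branding: { introUrl?: string; outroUrl?: string }; // brand audio styling (intro/outro)
}

async function toAudioBite(input: AudioInput): Promise<AudioBite> {
  const audioUrl =
    input.kind === "from-video"
      ? await extractAudio(input.videoUrl, input.startSec, input.endSec)
      : await synthesizeSpeech(input.text, input.voicePresetId);
  const transcript = await transcribe(audioUrl); // speaker-labeled transcript
  return { audioUrl, transcript, trim: { startSec: 0, endSec: Number.MAX_SAFE_INTEGER }, branding: {} };
}

// Stubs so the sketch type-checks; real implementations would sit behind the AI pipeline.
declare function extractAudio(url: string, startSec: number, endSec: number): Promise<string>;
declare function synthesizeSpeech(text: string, voicePresetId: string): Promise<string>;
declare function transcribe(audioUrl: string): Promise<AudioBite["transcript"]>;
```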
Generation alone wasn't enough; every long-form piece needs editing. If users had to copy text into Google Docs to refine it, the value of in-app generation collapsed. The editing experience had to be production-ready, not a fallback.
Designed an inline rich-text editor with a contextual toolbar. Font, size, weight, alignment, and links are all available without leaving the panel. Toolbar placement was the hardest design call in the build: I tested fixed-top, sidebar, and contextual-on-selection. The final placement docks away from the canvas and surfaces only on selection.
Establishes BuddyHQ as a production tool, not a draft tool. The editor pattern extends to email composition and any future text-based content type. Users finish the work in BuddyHQ; they don't pass it through three other tools to publish.
SME research surfaced a recurring complaint: “I made a great asset last quarter. I have no idea where it is now.” Without a reuse layer, every piece of content becomes a one-time output. Compounding value never accrues.
Proposed the three-tier hierarchy mid-project, in a team design review. Chat → Project → Library gave content a permanent home. Filter by type (video, post, email, audio), approval status, project. The Library is a single searchable surface across the entire workspace.
The most underestimated feature in the product. The Library is what makes BuddyHQ a platform, not a tool. It scales from one user to an enterprise team without architectural redesign. It's also the foundation for future search, recommendations, and AI-powered reuse suggestions.
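To show how little structure the hierarchy actually requires, here is an illustrative data model and the Library filter it enables. Field names are assumptions for the sketch, not the shipped schema.

```typescript
// Illustrative data model for Chat -> Project -> Library; field names are assumptions.
type AssetType = "video" | "post" | "email" | "audio" | "blog";
type ApprovalStatus = "draft" | "in-review" | "approved";

interface Chat { id: string; projectId: string }                   // conversations roll up to a project
interface Project { id: string; name: string; brandKitId: string } // brand kit lives at project level
interface LibraryAsset {
  id: string;
  projectId: string; // every asset has a permanent home
  type: AssetType;
  status: ApprovalStatus;
  createdAt: string;
}

// The Library is one searchable surface across the workspace: filter, don't forage.
function filterLibrary(
  assets: LibraryAsset[],
  filters: { type?: AssetType; status?: ApprovalStatus; projectId?: string }
): LibraryAsset[] {
  return assets.filter(a =>
    (filters.type === undefined || a.type === filters.type) &&
    (filters.status === undefined || a.status === filters.status) &&
    (filters.projectId === undefined || a.projectId === filters.projectId)
  );
}
```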
I worked closely with engineering across every module. Our goal was to ensure every design decision was technically feasible, scalable, and future-proof, especially as we moved toward modular components and partner integrations.
Built reusable component patterns from day one. Every module shares the same canvas, the same generation panel, the same editing primitives. Reduced design debt before it could accumulate.
The hardest placement call in the product. Three rounds of testing with engineering (fixed-top vs sidebar vs contextual) landed on contextual-on-selection. The principle: the tool should never compete with the work.
The three-tier content hierarchy emerged in a team design review, not a brief. I proposed it mid-project; engineering scoped feasibility in the same week. The structure came from the problem, not the spec.
This wasn't handoff. We co-created infrastructure that's still being built on. Every component, every pattern, every decision is something the team can extend confidently, not just use once.
BuddyHQ is built and in pre-release. The honest story is about what was designed, validated, and shipped. Not paid-customer metrics that don't yet exist. Here's what's real:
6 core modules shipped end-to-end: Chat Centre, Video Snippets, Post Creator, Audio Bites, Long-Form Editor, Global Library. All built and integrated with the AI engine, ready for pre-release.
7+ content output formats: video snippets, social posts, emails, blogs, audio clips, UGC pages, and platform-specific resizes. All from a single unified interface.
12 marketing professionals interviewed across SME sessions and PoC validation. Conversational-first interaction model validated with 5/5 teams completing core tasks unaided.
Average time to first output in PoC testing versus 3–7 days in the existing manual workflow. Stakeholder validation greenlit the next phase of partner-facing rollout.
The three-tier hierarchy and modular component system weren't add-ons; they were the foundation. Strategic systems thinking helped us build not just a usable tool, but a platform that supports reusability, customization, and future integrations without redesign.
Mid-project, I had real doubts about the AI premise. Working through that tension made me design for augmentation, not replacement. The output is AI-generated, but every screen is built around a human making the final call.
The editor toolbar placement, a small and seemingly tactical decision, consumed more debate than any major flow. Friction lives in the details. A toolbar in the wrong position doesn't just annoy users; it makes the whole product feel unpolished.
The three-tier hierarchy wasn't in the brief. It emerged from a mid-project design review. That's the kind of decision that only happens if you're paying attention to what's breaking, not just what's been scoped.
From shaping the MVP vision with the founder to debugging interaction states with engineering, this project taught me how to align across product, dev, and user needs fast, and how the best designs come from co-creation, not handoff.