The views expressed here are solely my own. They do not represent the opinions, positions, or policies of any current or former employer, client, or affiliated organization.
There's a tidy version of this story, the one where I knew exactly what I was building from day one and simply executed the plan. That version would be a lie.
What actually happened is closer to this: I had an idea that felt important, started building without fully knowing what I was building, made dozens of decisions that turned out to be wrong, and ended up somewhere better than I planned, precisely because I was willing to course-correct in public: in the middle of a live codebase, with real users potentially watching.
This is the story of building The Signal Health, an AI-powered content platform for physician creators, from the first line of code to the day I was ready to charge real money for it. It's not a tutorial. It's a postmortem written while the patient is still alive.
The Problem That Started Everything
I've spent a decade in content strategy and content supply chain, helping Fortune 500 companies turn complex, specialized knowledge into content that reaches people, changes behavior, and drives decisions. I've worked on campaigns where the science was real, the product was legitimate, and the message still failed to land.
What struck me, somewhere along the way, was the asymmetry: medical misinformation is brilliantly engineered. It's persuasive, shareable, emotionally compelling, and perfectly calibrated to how humans actually process information. Evidence-based medical content, by contrast, is often accurate, thorough, and ignored.
The science of persuasion doesn't belong to the people spreading misinformation. It belongs to anyone who uses it responsibly. But nobody had built the infrastructure to give it to the physicians who needed it most.
That was the gap. And that was the origin of The Signal.
The Stack Decision: Choosing Based on What I Could Learn
The first decision was the most consequential, and I made it almost entirely wrong.
I knew I wanted something AI-powered. I knew it needed to handle user accounts, subscriptions, and some kind of content analysis. What I didn't know, because I'm a content strategist, not a developer, was what that actually meant in practice.
I chose Flask running on PythonAnywhere not because it was the best tool for the job, but because it was the tool I could learn while building. Python is readable enough that I could understand what I was doing even when I didn't understand why. PythonAnywhere's Hacker plan at $5/month meant I could run a real production server without betting on something working.
For the database, I chose Supabase: Postgres with an API layer, Row Level Security, and AES-256 encryption at rest by default. For AI, I started with Gemini 2.0 Flash and migrated to Gemini 3.1 Flash Lite when the former was deprecated. For payments, Paddle as Merchant of Record, meaning they handle tax compliance globally, which matters enormously when you're selling to physicians in multiple countries. For email, Resend.
The frontend was vanilla HTML, CSS, and JavaScript rendered via Jinja2 templates. No React. No Next.js. No build pipeline. Just files.
The learning: Tool selection for a solo builder shouldn't be optimized for what's best in the abstract. It should be optimized for what you can actually ship. The best stack is the one where you can read an error message and understand what it's telling you.
The First Version: Everything in the Wrong Place
The first working version of The Signal had the email field inside the tool form. Users would paste their email, click "Verify," receive a magic link, and then come back to use the auditor. The nav had no login state. Every page was self-contained. The profile page had its own email gate.
It worked. Technically. But it created a user experience that required too much from the physician, too many steps, too much friction, too many places where they could get confused and drop off.
More importantly, it reflected a fundamental architectural misunderstanding: I had built each page as if it were a separate product, rather than as part of a single authenticated experience.
The fix wasn't a feature. It was a rethink.
The learning: Your first architecture will be wrong. The question isn't whether you'll need to refactor, it's whether you built things loosely enough to make refactoring survivable. Flask routes and Jinja2 templates are forgiving. A tightly coupled JavaScript framework would have been a nightmare to untangle.
The Authentication Refactor: Sessions, localStorage, and Magic Links
The biggest engineering decision of the project was how to handle authentication.
I didn't want passwords. Password reset flows are complex, users forget them, and for a tool that physicians use occasionally rather than daily, the friction of remembering a password is real. Magic links (email-based, tokenized, expiring) solved the UX problem elegantly.
The implementation took several iterations to get right.
The first version stored the session token in Supabase and verified it on every request. It worked, but it made every page load depend on a database query. The second version moved the session data (email, expiry, credits, plan name, plan type) to localStorage and added a global loadSession() function in nav.html that every page could call.
This meant a single source of truth for session state, accessible across all pages without a round-trip to the server for every interaction.
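On the Flask side, that single source of truth starts as a payload the server hands to the frontend at login. A minimal sketch under stated assumptions: `session_payload` is a hypothetical helper, and the 30-day expiry and exact field names are illustrative, not the actual implementation.

```python
import json
from datetime import datetime, timedelta, timezone

def session_payload(profile):
    """Build the session object the frontend stores in localStorage.

    `profile` is a hypothetical dict standing in for the Supabase row.
    The fields mirror what the session holds: email, expiry, credits,
    plan name, plan type. The 30-day lifetime is an assumption.
    """
    return json.dumps({
        "email": profile["email"],
        "expires_at": (datetime.now(timezone.utc) + timedelta(days=30)).isoformat(),
        "credits": profile.get("credits", 0),
        "plan_name": profile.get("plan_name", ""),
        "plan_type": profile.get("plan_type", ""),
    })
```

The frontend then parses this once, stores it, and every page reads the same object instead of hitting the server.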
The nav became the orchestrator. Logged-out state: Auditor · Ideas · Get access · Log in. Logged-in state: Auditor · Ideas · My Profile · Log out. A global status bar below the nav shows the email, plan name, and credit count, or "Unlimited" for founding members.
The bug that taught me the most: For weeks, the magic link login worked fine for new users but silently failed for existing users. The issue was subtle: validar_sesion was querying Supabase with two conditions, email = X AND session_token = Y. If either condition didn't match exactly (whitespace in the token, URL encoding differences), the query returned nothing, and the route redirected to an error page with no explanation.
The fix was to query by email alone, then verify the token in Python. This separated the lookup from the verification and made the failure mode visible in the logs.
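A sketch of that fix, assuming a lot: `fetch_sessions` is a hypothetical stand-in for the Supabase query by email alone, and the row field names and timing-safe comparison are illustrative choices, not the real code. The point is the shape: one loose lookup, then explicit verification in Python with a distinct, loggable reason for each failure mode.

```python
import hmac
from datetime import datetime, timezone

def verify_magic_link(email, presented_token, fetch_sessions):
    """Look up sessions by email alone, then verify the token in Python.

    `fetch_sessions` is a hypothetical callable standing in for the
    Supabase query (email = X); it returns a list of session rows.
    """
    rows = fetch_sessions(email)
    if not rows:
        return False, "no_session_for_email"   # failure mode is now visible
    for row in rows:
        stored = row["session_token"].strip()  # tolerate stray whitespace
        if hmac.compare_digest(stored, presented_token.strip()):
            expires = datetime.fromisoformat(row["expires_at"])
            if expires > datetime.now(timezone.utc):
                return True, "ok"
            return False, "token_expired"
    return False, "token_mismatch"
```

Each branch returns a named reason, so the logs say "token_expired" instead of a silent redirect to an error page.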
The learning: Authentication is where the most invisible bugs live. Every step of a magic link flow (generation, storage, email delivery, URL construction, URL encoding, token matching, expiry checking, redirect logic) is an opportunity for silent failure. Log everything. Verify in Python, not in SQL. Never assume the token that was sent is identical to the token that arrives.
The Onboarding Decision: Tokens vs. Email
The original onboarding flow used activation tokens, a one-time URL sent via email that proved the user had paid and given permission to set up their profile. The token was stored in Supabase, expired in 48 hours, and the entire onboarding was gated behind it.
This created a problem: the frontend was sending JSON with email and ai_response, but the backend expected request.form.get("token") and request.form.get("perfil_texto"). A classic mismatch between what the frontend evolved into and what the backend still expected.
The refactor replaced token-based verification with email-based verification for the onboarding endpoint. If the account exists, onboarding_completed is false, and the email matches, that's sufficient proof. No token needed.
More importantly, when the onboarding completed successfully and redirected to the auditor, it needed to save the session to localStorage first. Otherwise the auditor would show the auth gate immediately, requiring the physician to log in again, defeating the entire purpose of a smooth activation flow.
The learning: The handoff between systems is always where things break. The onboarding-to-auditor transition, the webhook-to-profile transition, the magic-link-to-session transition, each one is a seam where two different pieces of state need to agree on reality. Make those seams explicit and test them independently.
The AI Layer: What Gemini Actually Does
The Signal uses Gemini for two things: generating brand assets during onboarding, and analyzing content in the auditor.
The brand asset generation prompt took the most iteration to get right. The first version generated technically correct assets but they read like marketing copy written by someone who had read too many brand guides. The fix was to add explicit constraints: extract patterns from the physician's own language, make the buyer persona specific and actionable, make the brand bible authentic rather than generic.
The most important addition was the data sanitization instruction, a critical section at the top of the prompt instructing Gemini to scan the input for patient names, clinical case details, diagnoses, and any PHI before generating assets. Physicians, especially in the AI path where they paste conversation history, sometimes include patient scenarios without thinking. The prompt handles this before anything gets stored.
The content auditor prompt scores on three dimensions (Clarity, Influence, and Ethics), each on an absolute 1-10 scale. "Absolute" was a deliberate choice: a 5 means genuinely average medical communication, not a bad result. The prompt includes scoring anchors, specific rules about analogy safety, voice preservation, and objective modes (motivate, educate, build trust, prevent dropout).
The response structure went through two versions. The first returned auditoria_data. The frontend was rewritten to expect resultado. For weeks, the auditor showed "Error: undefined" because the backend and frontend were speaking different keys. Credits were being deducted even on failed renders.
The fix was two-part: align the key names, and move credit deduction to after successful Gemini response parsing, not before.
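The order of operations can be sketched like this; `run_audit`, `call_gemini`, and `deduct_credit` are hypothetical stand-ins for the real endpoint, the Gemini call, and the Supabase credit update.

```python
import json

def run_audit(content, call_gemini, deduct_credit):
    """Deduct a credit only after the model response parses cleanly.

    `call_gemini` returns the raw model output; `deduct_credit` performs
    the (hypothetical) Supabase update. Both are injected so the ordering
    is explicit and testable.
    """
    raw = call_gemini(content)
    try:
        parsed = json.loads(raw)
        resultado = parsed["resultado"]  # the key the frontend expects
    except (json.JSONDecodeError, KeyError):
        # Parsing failed: return an error and leave credits untouched.
        return {"error": "audit_failed"}
    deduct_credit()  # only reached on success
    return {"resultado": resultado}
```

Moving the deduction below the parse is the whole fix: a malformed response now costs the physician nothing.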
The learning: AI prompt engineering is not a one-time task. The prompt is a living document that needs to evolve as you learn what the model does with edge cases. And the contract between your AI endpoint and your frontend is as important as any other API contract, treat it as such.
The Paddle Integration: Three Layers of Configuration
Payments via Paddle turned out to have more moving parts than expected.
Layer one: the client-side token, which initializes Paddle.js and enables the checkout overlay. This is different from the API key. It lives in the browser. It must match the environment: sandbox tokens won't work in production, and live tokens won't work in sandbox.
Layer two: the price IDs (pri_XXXXX), which identify specific products and prices in Paddle's catalog. These also differ between sandbox and production. Three of ours were wrong for weeks: the Founding, Monthly, and Pro plans had IDs from an earlier configuration that no longer existed.
Layer three: the webhook, which fires when a transaction completes and is responsible for creating the user profile in Supabase, sending the activation email, and setting the plan_type. The webhook is the most critical piece of the system: everything downstream depends on it working correctly.
The latency risk is real: in sandbox, webhooks are nearly instant. In production, they can take seconds or minutes. A physician who completes checkout and immediately clicks the activation link could arrive before their profile exists. The mitigation is to send the activation email from inside the webhook handler, after confirming the profile was written, not as a parallel operation.
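The mitigation is purely about sequencing inside the handler. A sketch, with `handle_transaction_completed`, `create_profile`, and `send_activation_email` as hypothetical stand-ins for the real webhook route, the Supabase insert, and the Resend call:

```python
def handle_transaction_completed(event, create_profile, send_activation_email):
    """Sequence matters: write the profile first, then send the email.

    `event` is a simplified stand-in for the Paddle webhook payload;
    the field names here are assumptions for illustration.
    """
    email = event["customer_email"]
    plan_type = event["plan_type"]
    profile = create_profile(email=email, plan_type=plan_type)
    if not profile:
        # If the write failed, do NOT email a link to a profile that
        # doesn't exist; fail the webhook so it can be retried.
        raise RuntimeError("profile write failed; webhook should be retried")
    send_activation_email(email)  # only after the profile is confirmed
    return "ok", 200
```

The activation email is sent from inside the handler, after the write is confirmed, never as a parallel operation.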
The learning: Never hardcode environment-specific values in templates. A single configuration layer, environment variables in PythonAnywhere, passed to templates via Flask's render context, means switching between sandbox and production is five variable changes, not a search-and-replace across six files.
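That single configuration layer can be as small as one function that reads the environment in one place; the variable names below are assumptions, not the actual ones.

```python
import os

def paddle_config():
    """Read environment-specific Paddle values in one place.

    Hypothetical variable names for illustration. The returned dict is
    passed to templates via Flask's render context, so switching between
    sandbox and production is five environment-variable changes.
    """
    return {
        "paddle_env": os.environ.get("PADDLE_ENV", "sandbox"),
        "paddle_client_token": os.environ.get("PADDLE_CLIENT_TOKEN", ""),
        "price_founding": os.environ.get("PADDLE_PRICE_FOUNDING", ""),
        "price_monthly": os.environ.get("PADDLE_PRICE_MONTHLY", ""),
        "price_pro": os.environ.get("PADDLE_PRICE_PRO", ""),
    }
```

Every template reads from this dict, so no pri_XXXXX value ever lives in a template file.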
The Compliance Layer: Building Trust with Physicians
This was the dimension I hadn't planned for and ended up spending significant time on.
Physicians are a different user than a typical SaaS customer. They operate in a regulated environment. They are professionally liable for what they publish. They have legitimate concerns about patient privacy, professional reputation, and the legal implications of using AI in a clinical-adjacent context.
The compliance layer that emerged from this is a set of friction points that are actually features:
The privacy popovers on every form field, a 🔒 icon that, when clicked, shows a popover explaining exactly what happens to that data, whether it's stored, and why we're asking. Not a privacy policy link. An inline explanation, specific to that field.
The disclaimer checkbox before running an audit or generating ideas, not a terms-of-service checkbox, but an active acknowledgment that outputs are communication support references, not publishable medical content.
The compliance checkbox before saving brand assets, requiring the physician to confirm that what they've written doesn't contain patient names, clinical case details, or PHI.
The data notice in onboarding, explaining before the physician starts that what they share will be stored, what it will be used for, and what they should not include.
None of this is legally sufficient on its own. But it establishes a posture: The Signal is a tool that takes the physician's professional responsibility seriously, and expects the physician to take it seriously too.
The learning: Compliance isn't a checkbox you add at the end. For a product serving licensed professionals in a regulated industry, it shapes the entire user experience. The earlier you build it in, the less it feels like friction and the more it feels like respect.
The Brand Asset System: Why This Is the Core Product
The brand assets, Buyer Persona, Idiolect, and Brand Bible, are the feature that makes everything else work better.
The insight was simple: a content auditor that doesn't know who you are can only give you generic feedback. One that knows your audience, your voice, your content guardrails, and your communication philosophy can give you feedback that sounds like a well-calibrated editor who has read everything you've ever written.
The three assets serve different functions:
The Buyer Persona calibrates who the physician is writing for, not a demographic sketch, but a behavioral portrait that captures the audience's dominant values, resistance triggers, preferred formats, and stage of behavior change.
The Idiolect captures voice, the specific phrases the physician uses, the phrases they'd never use, their register, their analogy style, and the content guardrails they operate within. This is what prevents the auditor from suggesting that a conservative academic cardiologist write like a wellness influencer.
The Brand Bible grounds everything in purpose, the physician's origin story, their unique differentiator, their core values, and where they lead their audience. It's the "why" that makes the "what" coherent.
These three assets are injected into every Gemini prompt at runtime. The auditor isn't just analyzing content against abstract principles, it's analyzing it against the physician's own stated communication philosophy.
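The injection itself is simple string assembly; a minimal sketch, where `build_audit_prompt` and the section labels are illustrative assumptions rather than the real prompt structure:

```python
def build_audit_prompt(content, assets):
    """Inject the physician's brand assets into the audit prompt at runtime.

    `assets` is a hypothetical dict holding the three documents. Missing
    or empty assets fall back to a generic instruction, so the auditor
    degrades to abstract feedback instead of breaking.
    """
    sections = []
    for key, label in [("buyer_persona", "BUYER PERSONA"),
                       ("idiolect", "IDIOLECT"),
                       ("brand_bible", "BRAND BIBLE")]:
        value = (assets or {}).get(key, "").strip()
        if value:
            sections.append(f"## {label}\n{value}")
    context = ("\n\n".join(sections) if sections
               else "No brand assets on file; give general feedback.")
    return f"{context}\n\n## CONTENT TO AUDIT\n{content}"
```

The graceful fallback matters: the difference between a blank `assets` dict and a rich one is exactly the difference between generic feedback and a well-calibrated editor.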
The learning: The onboarding experience is the product. The tool is only as good as the quality of the brand assets that power it. A physician who rushes through onboarding with vague answers will get worse output forever. Investing in making onboarding thoughtful, the AI path that extracts assets from existing conversation history, the manual path with structured questions, the compliance guardrails, pays compound returns on every subsequent use.
The Mobile Reckoning
Mobile was an afterthought. It shouldn't have been.
The source type selector, three buttons (Paste text, URL, PDF) in a grid-template-columns: repeat(3, 1fr) layout, looked fine on desktop and was completely unusable on a phone screen. The URL input field overflowed its container. The ideas result cards had a two-column grid (Description | Why it matters) that stacked content sideways on a 375px viewport.
The fix was clean once identified: a class-based approach in the responsive CSS rather than attribute selectors on inline styles. source-type-grid collapses to one column below 640px. idea-detail-grid does the same. The pricing modal shifts to a single column. Modals get reduced padding. The global status bar wraps.
The hamburger menu required a bit more thought, not just the CSS, but the state management. The mobile drawer needs to reflect the same logged-in/logged-out state as the desktop nav, which means updateNavState() has to control both simultaneously.
The learning: If your users are physicians checking your tool between patients on their phone, and your tool doesn't work on phones, you don't have a tool, you have a desktop application that happens to have a URL. Test on mobile from day one.
What I Would Do Differently
Start with the session architecture. The email-in-the-form approach was always going to need to be replaced. If I had designed for session-based authentication from the beginning, three separate refactors could have been one.
Treat the nav as infrastructure. The nav is the one piece of UI that appears on every page. Making it a proper component with global state management from the start, rather than bolting that on later, would have saved weeks.
Test the webhook before testing anything else. The webhook is the entry point for every paid user. I spent too long building features for users who couldn't yet exist because the creation mechanism wasn't verified end-to-end.
Write the compliance layer first. Retrofitting compliance onto an existing UX creates awkward interruptions. Building it in from the beginning lets it become part of the experience rather than an obstacle.
Charge sooner. The instinct to wait until everything is perfect before charging is understandable and wrong. The only feedback that tells you whether something is actually valuable is someone handing you money for it.
What Surprised Me
How much the brand assets matter. I thought the auditor was the product. It's not. The brand asset system is the product. The auditor is the delivery mechanism. This realization came from watching what happened when an auditor ran with no brand assets versus with well-crafted ones, the difference in output quality was stark enough to change how I thought about the entire platform.
How consequential small decisions are. The key name in a JSON response (auditoria_data vs resultado). The order of operations for credit deduction. The difference between querying Supabase with two conditions versus one. These feel like implementation details. They aren't. They're the difference between a product that works and one that silently fails in ways that erode trust.
How much the non-developer background helped. Not having a developer's intuition meant I asked "why" more often than I should have needed to. It also meant I never got attached to technical elegance for its own sake. Every decision was evaluated by whether it served the physician, not whether it was architecturally interesting.
How important the "why you" question is. The question of why a non-physician built a tool for physicians is one I'll be asked by every physician I ever pitch. The answer, that content strategy and communication psychology are the missing infrastructure in medical communication, and that the gap doesn't require clinical knowledge to close, is only compelling if it's delivered with conviction. Working through that argument, stress-testing it, making it honest rather than defensive, that was as important as any line of code.
Where We Are
The Signal is an AI-powered content platform for physician creators. It audits medical content across three dimensions (Clarity, Influence, and Ethics) and generates an optimized version calibrated to the physician's own voice and audience. It extracts content ideas from research papers, articles, and PDFs, anchored to source material to prevent hallucination.
The stack is Flask on PythonAnywhere, Supabase for storage, Gemini 3.1 Flash Lite for AI, Paddle for payments, and Resend for transactional email. The brand asset system, three structured documents representing audience, voice, and brand, personalizes every AI interaction at runtime.
The pricing is straightforward: Signal Check at $4.99 for 20 uses, Signal Founding at $4.99/month with unlimited uses for the first 100 members, and Signal Pro at $29/month with unlimited uses for everyone else.
The next step is a live transaction, one real physician, one real payment, one complete flow from checkout to first audit. That's the test that matters. Everything before it is rehearsal.
The gap between what medical professionals know and what reaches the people who need it is enormous. Misinformation fills that gap because it's engineered to. The Signal exists to give the other side the same infrastructure.
That's not a product vision. It's a problem worth solving.
