# Relaunch — Full Documentation

> Your trusted co-pilot for product growth

---

## Homepage

### Hero Section

**Your trusted co-pilot for product growth**

Stop running losing tests. Ship experiments with 80%+ win rates — in minutes.

Takes 2 mins.

### From scattered insights to confident decisions

See how product teams work today — and how Relaunch turns that into focus, speed, and impact.

**Current workflow:**

1. **Fragmented insights** — User journeys and data never live in one place.
2. **Hidden opportunities** — Endless dashboards with no clear focus.
3. **Long feedback loops** — Ship and hope — or wait weeks for costly A/B tests.

**With Relaunch:**

1. **Unified funnel view** — Design flow and conversion data in one place.
2. **Guided opportunity discovery** — Relaunch AI finds root causes & top opportunities.
3. **Instant validation** — Simulate A/B outcomes to decide faster.

### How Relaunch predicts real A/B test outcomes

Explore real experiments or try your own in minutes.

**Headspace — Referral experience**

30-Day Guest Pass increased new signups.

- Actual: +8%
- Relaunch: +10%

**Blinkist — Paywall redesign**

Transparent trial timeline boosted signups.

- Actual: +23%
- Relaunch: +14%

**Your Product — Try your experiment**

Test your next idea in minutes.

### Frequently Asked Questions

**Can I trust the results?**

Relaunch is designed to give you directional confidence, not false precision. Each simulated experiment is validated against over 100 real A/B tests, showing over 80% directional accuracy. You'll see how different segments react, understand why a variation works, and make faster, higher-confidence decisions — without waiting weeks for traffic to validate every hypothesis.

**Do I need to send sensitive data?**

Mostly no. Relaunch runs primarily on publicly available data — your live product, public user discussions, and other open sources. If you optionally provide your funnel conversion data, the simulation gets significantly more precise.
All uploaded data is securely stored, never shared, and used only to improve your own simulations. You'll never need to send user-level data.

**Is this a replacement for A/B testing?**

Not quite — it's a powerful companion to it. Think of Relaunch as your pre-validation layer: it helps you narrow dozens of ideas down to the few worth testing live. Teams that can't run A/B tests get high-confidence direction; teams that can, run fewer, smarter, faster tests. Either way, you learn faster and waste less engineering effort.

**How is this different from analytics tools like Mixpanel or Amplitude?**

Analytics tools tell you what users did. Relaunch helps you understand why — and what to do next. It blends real funnel analytics with design context and AI reasoning, so instead of dashboards and guesswork, you get a clear story of where users drop off, why it happens, and what changes are most likely to move the needle.

**What if my product doesn't have much traffic yet?**

That's exactly when Relaunch shines. Early-stage teams usually can't afford to run statistically significant A/B tests, so they rely on intuition. Relaunch simulates how your real user segments would behave, giving you data-informed direction before you have the scale to experiment. It's like borrowing the insight of thousands of real tests — even if you only have hundreds of users.

**How long does a simulation take?**

A typical simulated A/B experiment runs in just a few minutes. You'll see the estimated metric impact, segment-level differences, and qualitative reasoning — all in one clear report. It's designed to be fast enough to run multiple iterations in a day, so exploration feels like real product work, not waiting for results.

**Will this work for my type of product?**

Relaunch is built for consumer-facing digital products: signup and pricing pages, trials/paywalls, onboarding, checkout, and in-app conversion flows on web or mobile. It also works well for self-serve B2B funnels with clear goals.
You'll get the best results when a user journey can be represented as screens with an objective (click, submit, start trial) and, optionally, basic funnel conversion data to boost precision. It's not designed for sales-led, multi-touch enterprise cycles or back-office tools without observable user flows.

### Call to Action

**Decide with confidence, in minutes**

Make fewer guesses. Focus real experiments on ideas that earn it.

See it in action | Sign up

---

## Pricing

Start free. Upgrade when you're ready to scale your decisions.

### Free Plan

**$0 per month**

Start exploring AI-powered product insights.

- No credit card needed
- Free forever

**Free for everyone:**

- 5 daily credits
- 1 funnel
- Unlimited collaborators

Get Started

### Pro Plan

**$26 per month** (billed annually)

For solo PMs who need visibility and focus. Shared across unlimited users.

**Annual** — Save $48/year

**All features in Free, plus:**

- 300 monthly credits (100+ conversion insights, 20+ prototypes, up to 3 A/B simulations)
- Unlimited funnels
- Credit rollovers (credits roll over for 1 month on monthly plans, or until the end of your annual plan)

Get Started

### Scale Plan (Most Popular)

**$425 per month** (billed annually)

For teams optimizing decisions at scale. Shared across unlimited users.

**Annual** — Save $900/year

**All features in Pro, plus:**

- 6,000 monthly credits (1,000+ conversion insights, 200+ prototypes, up to 50 A/B simulations)
- Unlimited funnels
- Credit rollovers (credits roll over for 1 month on monthly plans, or until the end of your annual plan)

Get Started

---

## Methodology

### A/B Test Simulation Methodology

#### What Relaunch Does

Relaunch predicts the likely outcome of A/B tests before they are run. It simulates how different user segments respond to control and variant experiences, then outputs a directional outcome with a confidence score.
It's built for teams who need to decide **which ideas are worth testing** or **what to ship** when traffic is limited. Use it to prioritize experiments, validate changes without a full test, or filter out low-signal ideas early.

**Best used early** — when decisions are cheap and reversibility is high. Relaunch doesn't replace real experimentation; it reduces guesswork when real data is limited or slow.

#### User Behavior Is Predictable

Most conversion outcomes follow repeatable patterns. Users respond predictably to friction, clarity, trust, and motivation — which means directional outcomes can often be anticipated before running a test.

**Simulation is:**

- A way to rank bets under uncertainty
- A decision aid when traffic is limited
- A fast filter before real exposure

**Simulation is not:**

- A predictor of exact conversion rates
- A replacement for real users
- A guarantee of outcomes

#### How the Relaunch Simulation Model Works

**1. Core Inputs**

Relaunch evaluates experiments using four primary inputs:

- **User characteristics and intent** — Who the user is and what they are trying to accomplish
- **Acquisition context** — Channel, device, and traffic quality
- **Funnel position and prior exposure** — First-time vs. returning users, awareness level, prior steps
- **Control and variant artifacts** — Screens, copy, flows, and interaction patterns

These inputs define the decision context the simulation operates in.

**2. Behavioral Evaluation Framework**

Each experience is evaluated against established behavioral principles, including:

- Cognitive load
- Clarity of value proposition
- Visual hierarchy
- Trust and risk perception
- Motivation and commitment signals

The model focuses on identifying where friction is reduced or introduced. Multiple independent evaluations are used to reduce single-perspective bias.

**3. Simulation & Aggregation**

Relaunch runs repeated evaluations under controlled variation.
Inconsistent or unstable responses are treated as signal risk, not averaged away. High variance lowers confidence. Extreme variance invalidates the prediction. This prevents confident outputs from noisy inputs.

**4. Outputs & Confidence Scoring**

Each simulation produces:

- A directional outcome (positive, negative, or neutral)
- A confidence score
- A qualitative explanation of the main drivers

Results are designed to support judgment, not replace it.

#### How We Validate the Model

Relaunch is validated against real-world experiments with known outcomes.

**Data sources:**

- Publicly documented A/B tests
- Private experiments shared by partners
- Multiple industries and funnel stages

**Outcome coverage:**

- Positive, negative, and neutral outcomes
- B/A reversals to detect ordering bias
- A/A tests to measure consistency

Success is measured by directional alignment, not exact lift. This matches how teams actually make decisions.

#### 80%+ Directional Accuracy

Validated against 100+ real A/B tests with known outcomes, Relaunch correctly predicts:

- Which variant will win
- Which variant will lose
- When changes won't move the needle

Predicted lift tends to be lower than actual results — **this conservatism is intentional.**

#### Real Experiments vs. Relaunch Predictions

**Case Studies:**

**Blinkist — Clarifying Free Trial Terms to Boost Trial Starts**

- Industry: Ed-tech
- Context: Mobile free trial entry point. The goal was to increase trial starts.
- What Changed: The variant clarified free trial mechanics and reduced perceived commitment.
- Actual Result: Variant produced 23% higher relative conversion.
- Relaunch Prediction: Variant predicted to produce 14% higher relative conversion.
- Outcome: Correct direction. Conservative magnitude.

**Pinterest — Delaying Sign-Up Prompt to Increase Account Creation**

- Industry: Social media
- Context: New user sign-up flow. The goal was to increase account creation.
- What Changed: The variant delayed the sign-up pop-up, allowing users to engage with content first.
- Actual Result: Variant produced 19% higher relative conversion.
- Relaunch Prediction: Variant predicted to produce 12% higher relative conversion.
- Outcome: Correct direction. Conservative magnitude.

**T.M. Lewin — Reducing Sizing Uncertainty to Increase Add-to-Cart**

- Industry: eCommerce
- Context: Product detail page. The goal was to increase add-to-cart and purchase completion.
- What Changed: The variant introduced a fit explainer modal to reduce sizing uncertainty.
- Actual Result: Variant produced 13% higher relative conversion.
- Relaunch Prediction: Variant predicted to produce 13% higher relative conversion.
- Outcome: Correct direction. Exact magnitude.

**Instapage — Pricing Layout Changes to Drive Plan Selection**

- Industry: SaaS
- Context: Pricing page. The goal was to increase plan selection.
- What Changed: The variant adjusted pricing layout and plan presentation.
- Actual Result: No statistically significant difference detected.
- Relaunch Prediction: No statistically significant difference predicted.
- Outcome: Correct neutral prediction.

**Neos Kosmos — Simplifying Checkout to Increase Paid Subscriptions**

- Industry: Media
- Context: Subscription checkout page. The goal was to increase paid subscriptions.
- What Changed: The variant simplified the page structure and reduced friction in the flow.
- Actual Result: Variant produced 16% higher relative conversion.
- Relaunch Prediction: Variant predicted to produce 7% higher relative conversion.
- Outcome: Correct direction. Conservative magnitude.

**Shaw Academy — Risk Reassurance to Boost Course Registrations**

- Industry: Ed-tech
- Context: Course sign-up flow. The goal was to reduce perceived risk and increase registrations.
- What Changed: The variant added reassurance messaging around cancellation and commitment.
- Actual Result: Variant produced 17% higher relative conversion.
- Relaunch Prediction: Variant predicted to produce 4.2% higher relative conversion.
- Outcome: Correct direction. Highly conservative magnitude.

**Slopes — Clarifying Free Plan Terms to Increase Sign-Ups**

- Industry: Entertainment
- Context: Free plan selection page. The goal was to increase sign-ups.
- What Changed: The variant clarified free plan terms and reduced ambiguity.
- Actual Result: Variant produced 19% higher relative conversion.
- Relaunch Prediction: Variant predicted to produce 7.8% higher relative conversion.
- Outcome: Correct direction. Conservative magnitude.

#### Limitations, Proper Use, and What Comes Next

**When it's less reliable:**

- Interaction patterns are highly novel
- Outcomes are driven primarily by brand or emotion
- Input context is sparse or inaccurate

Some simulations are invalidated due to high variance. A small minority are confident but wrong. These cases are tracked and used for improvement.

**How the model improves:**

- Expanding the validation dataset
- Improving calibration across experiment types
- Increasing clarity of explanations

Relaunch improves as more real outcomes are added.
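#### Appendix: Illustrative Sketches

To make the "Simulation & Aggregation" and "Outputs & Confidence Scoring" steps concrete, here is a minimal Python sketch of variance-aware aggregation. The function name, thresholds, and confidence formula are illustrative assumptions for this document, not Relaunch's actual implementation or calibration:

```python
import statistics

def aggregate_evaluations(scores, high_var=0.15, extreme_var=0.35, neutral_band=0.05):
    """Aggregate repeated evaluation passes into a directional outcome.

    `scores` are hypothetical per-pass lift estimates (0.10 = +10%).
    All thresholds here are illustrative, not Relaunch's real calibration.
    """
    mean_lift = statistics.mean(scores)
    spread = statistics.pstdev(scores)  # instability across passes

    # Extreme variance invalidates the prediction outright:
    # no confident output from noisy inputs.
    if spread > extreme_var:
        return {"direction": "invalid", "confidence": 0.0}

    # High variance is treated as signal risk that lowers confidence,
    # rather than being averaged away.
    confidence = max(0.0, 1.0 - (spread / high_var) * 0.5)

    if abs(mean_lift) < neutral_band:
        direction = "neutral"
    elif mean_lift > 0:
        direction = "positive"
    else:
        direction = "negative"
    return {"direction": direction, "confidence": round(confidence, 2)}
```

For example, three consistent passes like `[0.10, 0.12, 0.08]` yield a confident positive direction, while wildly disagreeing passes like `[0.5, -0.4, 0.6]` are marked invalid. The key design choice, per the methodology, is that disagreement between passes is surfaced as lower confidence instead of disappearing into an average.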
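The directional-alignment metric used in validation can also be checked directly against the seven published case studies above. The lift numbers come straight from those cases; the one-point neutral band is an illustrative assumption, not Relaunch's threshold:

```python
# Actual vs. predicted relative lift (%) for the seven published cases.
CASES = [
    ("Blinkist",     23.0, 14.0),
    ("Pinterest",    19.0, 12.0),
    ("T.M. Lewin",   13.0, 13.0),
    ("Instapage",     0.0,  0.0),
    ("Neos Kosmos",  16.0,  7.0),
    ("Shaw Academy", 17.0,  4.2),
    ("Slopes",       19.0,  7.8),
]

def direction(lift, neutral_band=1.0):
    # Lifts within one point count as "no meaningful difference"
    # (an illustrative threshold, not Relaunch's).
    if abs(lift) < neutral_band:
        return "neutral"
    return "positive" if lift > 0 else "negative"

aligned = sum(direction(actual) == direction(predicted)
              for _, actual, predicted in CASES)
accuracy = aligned / len(CASES)
print(f"Directional alignment: {aligned}/{len(CASES)} ({accuracy:.0%})")
# prints: Directional alignment: 7/7 (100%)
```

Note also that in every case the predicted magnitude is at or below the actual result, which is the intentional conservatism described under "80%+ Directional Accuracy."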