Experiment: Three CTA PQL test
Experiment summary
The purpose of this test is to gain a baseline understanding of how three CTAs perform within a feature discovery moment and to measure the click-through rate for each CTA. Longer-term, our goal is to build experiences where we intelligently display a maximum of two CTAs based on what we believe are the best CTAs for that user/namespace. This experiment will help us establish a baseline for future experimentation.
Hypothesis
We can increase the overall click-through rate on the page by providing users with options to "upgrade now", "start a trial", and "talk to sales".
Business problem
Establishing PQLs is a goal for the business and is viewed as part of our future go-to-market strategy. This initial test will allow us to understand click-through rates and see how initial hand-raise PQLs perform.
Supporting data
Expected outcome
Experiment design & implementation
We will launch this experiment to 20% of total free namespaces and monitor it for a potential rollout to 50%. Users in the control will get the current experience with two CTAs: Upgrade Now and Start a Trial. Users in the experiment will get three CTAs: Upgrade Now, Start a Trial, and Talk to Sales.
| Control | Experiment |
|---|---|
Design Specs - TBD
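The 20% rollout described above can be implemented with deterministic bucketing, so a given namespace always sees the same variant across visits. A minimal Python sketch; the function name, hashing scheme, and even control/experiment split are illustrative assumptions, not the actual GLEX implementation:

```python
import hashlib

def assigned_bucket(experiment_name: str, namespace_id: int, rollout_pct: int) -> str:
    """Deterministically assign a namespace to a bucket for an experiment.

    Hypothetical sketch: namespaces outside the rollout percentage keep the
    default experience; rolled-out namespaces split evenly between control
    (two CTAs) and experiment (three CTAs).
    """
    digest = hashlib.sha256(f"{experiment_name}:{namespace_id}".encode()).hexdigest()
    position = int(digest, 16) % 100  # stable position in 0..99 per namespace
    if position >= rollout_pct:
        return "excluded"
    return "control" if position % 2 == 0 else "experiment"
```

Because the bucket is a pure function of the experiment name and namespace ID, raising the rollout from 20% to 50% only adds new namespaces; already-included namespaces keep their original variant.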
Experiment tracking
We should utilize a GLEX experiment and monitor page views and clicks on each CTA. We should also record namespace IDs in the `experiment_subjects` table, indicating whether each namespace is in the control or experiment variant.
Link Monitoring
| Variant | Link | Value |
|---|---|---|
| Control | Upgrade Now | `?source=discover-project-security` |
| Control | Start a Trial | `?glm_content=discover-project-security&glm_source=gitlab.com` |
| Experiment | Upgrade Now | `?source=discover-project-security-pqltest` |
| Experiment | Start a Trial | `?glm_content=discover-project-security-pqltest&glm_source=gitlab.com` |
| Experiment | Talk to Sales | (data documented here) |
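Once page views and clicks are recorded per variant, the click-through rate for each CTA is simply clicks on that CTA divided by its variant's page views. A hypothetical Python sketch; the event tuple shape and names are assumptions for illustration, not the actual analytics schema:

```python
def click_through_rates(events):
    """Compute CTR per (variant, cta) from a stream of tracked events.

    `events` is a list of tuples: ("page_view", variant, None) or
    ("click", variant, cta) -- a hypothetical shape for illustration.
    """
    views = {}
    clicks = {}
    for event_type, variant, cta in events:
        if event_type == "page_view":
            views[variant] = views.get(variant, 0) + 1
        elif event_type == "click":
            clicks[(variant, cta)] = clicks.get((variant, cta), 0) + 1
    # CTR for a CTA = clicks on that CTA / page views for its variant.
    return {
        key: count / views[key[0]]
        for key, count in clicks.items()
        if views.get(key[0])
    }
```

This per-CTA breakdown lets us compare the overall CTR between variants while also seeing whether Talk to Sales cannibalizes clicks from the other two CTAs.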
ICE score
| Impact | Confidence | Ease | Score |
|---|---|---|---|
| 9 | 7 | 3 | 6.3 |
Known assumptions
As this is our first in-app hand-raise PQL experiment, we don't know how many hand-raise PQLs we'll generate or how three CTAs will impact the overall click-through rate within the experiment. We'll actively monitor these factors throughout the experiment.
Results, lessons learned, next steps
Checklist
- Fill in the experiment summary and write more about the details of the experiment in the rest of the issue description. Some of these may be filled in over time (the "Results, lessons learned, next steps" section, for example), but at least the experiment summary should be filled in right from the start.
- Add the label of the group:: that will work on this experiment (if known).
- Mention the Product Manager, Engineering Manager, and at least one Product Designer from the group that owns the part of the product that the experiment will affect.
- Fill in the values in the ICE score table; ping other team members for the values you aren't confident about (i.e. engineering should almost always fill out the ease section). Add the ~"ICE Score Needed" label to indicate that the score is incomplete.
- Replace the ~"ICE Score Needed" label with an ICE low/medium/high score label once all values in the ICE table have been added.
- Mention the [at]gitlab-core-team team and ask for their feedback.