Run A/B tests on your store listing

To help optimize your store listing on Google Play, you can run experiments to find the most effective graphics and localized text for your app.

For published apps, you can test variants against your current version to see which one performs best based on install data.

Experiment types

For each app, you can run one global experiment or up to five localized experiments at the same time.

Global (for graphics only)

Using a global experiment, you can experiment with graphics in your app's default store listing language. You can include variants of your app's icon, feature graphic, screenshots, and promo video.

  • If your app's store listing is only available in one language: Global experiments will be shown to all users.
  • If you've added any localized graphic assets in a specific language: Users viewing your app in that language are excluded from your app's global experiments. For example, if your app's default language is English and it has a localized feature graphic in French, users viewing your app in French will be excluded from the experiment (even if you're testing your icon).

Localized (for text and graphics)

Using a localized experiment, you can experiment with your app's icon, feature graphic, screenshots, promo video, and/or your app's descriptions in up to five languages. Experiment variants will only be shown to users viewing your app's store listing in the languages you choose.

If your app's store listing is only available in one language, localized experiments will only be shown to users viewing your app in its default language.

Step 1: Create an experiment

You can create an experiment using your Play Console. When you're ready to review and apply results, you can use your Play Console or the Play Console app.

Global
  1. Sign in to your Play Console.
  2. Select an app.
  3. On the left menu, click Store presence > Store listing experiments.
  4. Under "Global," click Create.
  5. Proceed to the instructions under "Step 2: Set up your experiment."

Localized
  1. Sign in to your Play Console.
  2. Select an app.
  3. On the left menu, click Store presence > Store listing experiments.
  4. Under "Localized," choose a language.
  5. Click Create.
  6. Proceed to the instructions under "Step 2: Set up your experiment."

Once you've made a localized experiment live in one language, you can add up to four other languages. To add more experiments:

  1. Go back to your Store listing experiments page.
  2. Select New experiment.
    • If you can't select New experiment, make sure you don't have an experiment saved as a draft. Draft experiments must be live before you can add new languages.

Step 2: Set up your experiment

After you've created an experiment, you can set up variants and choose the attributes you want to test.

To set up your experiment:

  1. Follow the on-screen instructions to add your targeting information, attributes, and variants. For more information and tips, review the table below.
  2. To begin your experiment, go to the top of the page and click Run experiment. To finish setting up your experiment later, click Save.
Field descriptions, examples & tips

Name
  • Description: Your experiment name is only used on the Play Console to identify the experiment and isn't visible to users.
  • Examples: "Bright icon experiment," "Logo feature graphic," "Short description with new slogan"
Audience
  • Description: The percentage of users that sees a variant of your experiment. This percentage is divided equally between your experiment variants.
  • Examples: If you enter 30% as your audience, the remaining 70% of visitors to your store listing page will see your page's current version. If you have a 30% audience and two variants, each variant is shown to 15% of users (see the sketch after this table).
  • During the course of an experiment, each user only sees a single variant or your page's current version.
Attributes
  • Description: Select the item type you want to test against your current listing.
  • Tips: To run experiments most effectively, test one attribute at a time. You can only test your short description and full description in a localized experiment. If you're testing graphic assets, make sure to follow the size and file type requirements.
Variants
  • Description: To add a new version to your experiment, click Add another variant. You can add up to 3 variants per experiment, in addition to your current version.
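
To make the audience arithmetic concrete, here's a minimal Python sketch of the split described under "Audience" above. It isn't part of the Play Console or any of its APIs; the function name is illustrative only.

  def audience_split(audience_pct, num_variants):
      """Divide an experiment audience evenly across variants.

      Visitors outside the audience percentage see the current
      version of the store listing page.
      """
      per_variant_pct = audience_pct / num_variants
      current_version_pct = 100 - audience_pct
      return per_variant_pct, current_version_pct

  # Example from the table: a 30% audience with two variants shows
  # each variant to 15% of users, while 70% see the current version.
  per_variant, current = audience_split(30, 2)
  print(per_variant, current)  # 15.0 70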

Step 3: Review & apply results

For each experiment, you'll see the following details:

  • Status: Progress & description of your experiment
  • Start date: When you started running the experiment
  • Result: Shows whether a variant or your current version is winning, if there's a tie, or if the experiment needs more data

Using the Play Console website
  1. Sign in to your Play Console.
  2. Select an app.
  3. On the left menu, click Store presence > Store listing experiments.
  4. Select the experiment you want to review.
    • To apply a variant that outperformed your app's current version, select Apply winner. You can review the changes to your store listing before the changes go live.
    • To keep your current version, select Keep. This will update your store listing to use your current version and end the experiment.
    • If your experiment results in a tie, select Stop experiment.

Results

Once you select an experiment, you can view specific details about how each variant performed.

Metric definitions & examples

Audience
  • The percentage of users that sees a variant of your experiment.
Installs on active devices
  • The number of devices that have been online at least once in the past 30 days and have your app installed.
Scaled installs
  • The number of installs during your experiment divided by the variant's audience share (see the sketch after this table).
  • For example, if you ran an experiment with two variants that used 90% / 10% audience shares and the installs for each variant were A = 900 and B = 200, the scaled installs would be shown as A = 1,000 (900 / 0.9) and B = 2,000 (200 / 0.1).
Performance
  • The estimated change in install performance compared to the current version. There is a 90% chance that the variant would perform within the displayed range over time. Because the range is based on the variant's performance so far, these numbers will vary during the experiment.
  • The midpoint between a variant's high and low estimated change in install performance represents its most likely change in performance.
  • Example: If one of your variants had a performance range of +5% to +15%, the most likely change in performance would be the midpoint, about +10%.
  • Performance is only displayed once your experiment has enough data. In general, as an experiment runs longer and collects more data, a variant's performance range becomes narrower and more accurate.
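
To illustrate the arithmetic behind scaled installs and the performance estimate, here's a small Python sketch. The function names are illustrative only, not part of any Play Console API; the Play Console computes these values for you.

  def scaled_installs(installs, audience_share):
      """Normalize raw installs by a variant's audience share."""
      return installs / audience_share

  def estimated_change(low_pct, high_pct):
      """Midpoint of a variant's displayed performance range."""
      return (low_pct + high_pct) / 2

  # Example from the table: 90% / 10% audience shares.
  print(round(scaled_installs(900, 0.9)))  # 1000, variant A
  print(round(scaled_installs(200, 0.1)))  # 2000, variant B

  # A performance range of +5% to +15% centers on about +10%.
  print(estimated_change(5, 15))  # 10.0
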
Using the Play Console app
  1. Open the Play Console app.
  2. Select an app.
  3. Scroll down to the "Experiments" card. For more information on your app's experiments, tap View details. From the detailed view, tap a variant to see the audience, current installs, and scaled installs for that experiment.
    • To apply a variant that outperformed your app's current version, select Apply winner. You can review the changes to your store listing before the changes go live.
    • To keep your current version, select Keep current. This will update your store listing to use your current version and end the experiment.

Results

The metrics shown in the Play Console app (audience, installs on active devices, scaled installs, and performance) are defined the same way as in the "Results" section for the Play Console website above.
