
Creative A/B Testing Guide

A/B Testing for Ad Creatives

Mintegral's Ad Creative A/B Testing feature is designed to help advertisers move beyond guesswork. By providing a structured experimental environment, it allows you to validate creative assumptions and quickly identify high-converting assets for scaling.

Use Cases for Ad Creative A/B Testing

Whether you are optimizing an existing campaign or launching new assets, A/B testing primarily serves two core objectives:

  • Optimizing Creative Performance:

    • Direct Comparison: When you have two creatives and are unsure which performs better, use experiments to reach a data-backed conclusion.

    • Combination Optimization: Confirm the effectiveness of specific asset combinations. For example, pair one video with two different Playables to test which Playable drives higher conversions.

  • Accelerating Creative Scaling:

    • If a creative performs exceptionally well on other platforms but is not scaling significantly via Mintegral's automatic system, you can use A/B testing to manually allocate traffic to it. This helps the asset accumulate samples quickly and accelerates the system's ability to scale it.

How to Create an A/B Test

Navigation Path: Delivery Tool → Creative AB

Steps:

  1. Fill in Basic Information: Create the experiment and enter the fundamental project details.

  2. Configure Control Group: Select a control creative and allocate a corresponding traffic percentage.

  3. Configure Experimental Group(s): Add one or more experimental creatives and allocate traffic.
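
To make the resulting structure concrete, here is a minimal sketch of an experiment expressed as plain data. All field names (name, control, experimental, traffic_pct, dims) are hypothetical and purely illustrative; the actual configuration is done in the Mintegral dashboard, not through code.

```python
# Hypothetical representation of the three setup steps as plain data.
# Field names are illustrative only -- configuration happens in the UI.
experiment = {
    "name": "playable_pairing_test",  # Step 1: basic information
    "control": {                      # Step 2: exactly one control group
        "creative_id": "cr_001",
        "traffic_pct": 50,
        "dims": (1080, 1920),
    },
    "experimental": [                 # Step 3: up to four experimental groups
        {"creative_id": "cr_002", "traffic_pct": 25, "dims": (1080, 1920)},
        {"creative_id": "cr_003", "traffic_pct": 25, "dims": (1080, 1920)},
    ],
}
```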

 

💡 Tips & Important Notes

Experimental Rules

  • Uniqueness: A creative can serve as the control group of only one experiment at a time, and it cannot participate in multiple experiments simultaneously.

Creating Experiments

  • Comparing New Creatives: To compare two entirely new creatives, it is recommended to use a creative that already has existing traffic as the control group and set both new creatives as experimental groups.

  • Control vs. Experimental Group Setup: We suggest choosing a creative that has already scaled and has significant volume as the control group. An experiment can only have one control group, but can have up to four experimental groups.

  • Traffic Allocation: The traffic proportions of the control group and all experimental groups must sum to exactly 100% (see the validation sketch after this list).

  • Asset Requirements: Experimental creatives must be consistent with the control group in the following dimensions:

    • Image/Icon: Dimensions must be identical.

    • Video: Dimensions must be identical (bitrate and duration are currently not restricted).

    • Playable: WebGL properties must be identical.
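
The setup rules above are easy to check mechanically. Below is an illustrative pre-flight check reusing the hypothetical experiment structure from the earlier sketch, where each creative is assumed to carry a dims (width, height) field; this is a sketch of the rules, not Mintegral's own validation code.

```python
def validate_experiment(exp):
    """Illustrative pre-flight checks mirroring the rules above.

    `exp` follows the hypothetical structure from the setup sketch;
    each creative dict is assumed to carry a `dims` (width, height) tuple.
    """
    # One control group and one to four experimental groups.
    assert 1 <= len(exp["experimental"]) <= 4, "need 1-4 experimental groups"

    # Traffic proportions must sum to exactly 100%.
    groups = [exp["control"], *exp["experimental"]]
    total = sum(g["traffic_pct"] for g in groups)
    assert total == 100, f"traffic sums to {total}%, expected 100%"

    # Experimental assets must match the control group's dimensions.
    for g in exp["experimental"]:
        assert g["dims"] == exp["control"]["dims"], "dimensions must match"
```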

Running Experiments

  • Traffic Fluctuations: Once an experiment starts, the system will allocate traffic to the experimental groups. Consequently, it is normal to see a decrease in the volume of the control group creative.

  • Experiment Cycle: Experiments require time to accumulate enough data for a confident conclusion. We recommend a minimum cycle of 2 days and advise against frequent modifications while the experiment is running (a rough duration estimate is sketched below).
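
As a back-of-envelope way to size the cycle, you can estimate how long a group needs to reach a target sample. Every number below is an assumption chosen for illustration; substitute your own offer's figures.

```python
# Rough estimate of how long an experimental group needs to accumulate
# a target number of installs. All inputs are assumed figures.
daily_impressions = 20_000  # offer-level impressions per day (assumed)
traffic_share = 0.25        # 25% of traffic allocated to this group
ipm = 5.0                   # installs per thousand impressions (assumed)
target_installs = 100       # sample you want before judging results

daily_installs = daily_impressions * traffic_share * ipm / 1000
days_needed = target_installs / daily_installs
print(f"~{days_needed:.1f} days to reach {target_installs} installs")
# ~4.0 days here -- the 2-day minimum cycle is a floor, not a target.
```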

Analyzing Results and Adjusting

After the experiment begins, you can view real-time reports via the "Operation" option in the experiment list.

💡 Best Practices

  • Measuring Confidence: Conclusions are reliable only when the data volume is sufficient. We recommend a minimum of 1,000 impressions. For CPI-based offers, wait until a meaningful number of installs has accumulated before judging the results (a significance check is sketched after this list).

  • Defining Variables and Understanding Data: Clearly define how each experimental group differs from the control group and which specific dimension is being compared. For example, when verifying the synergy between the same video and different Playables, analyze the difference in conversion rates to identify the optimal pairing.

  • Executing Optimizations: Adjust your offer configuration based on the experimental conclusions. For instance, combine the winning Playable and video into a new custom creative and remove underperforming combinations.
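
One general way to judge whether an observed lift is more than noise is a two-proportion z-test on conversion rates. This is a standard statistical check, not Mintegral's internal confidence method, and the numbers in the example are assumed for illustration.

```python
from math import sqrt, erf

def conversion_lift_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates.

    Group A is the control, group B the experimental group.
    conv_*: conversions; n_*: impressions (or clicks) per group.
    Returns (absolute lift, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# Assumed numbers: control 1.0% CVR vs experimental 1.4% CVR.
lift, p = conversion_lift_significance(100, 10_000, 140, 10_000)
print(f"lift={lift:.2%}, p={p:.3f}")  # trust the lift only if p is small (e.g. < 0.05)
```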

FAQ

Q: Will the experiment affect the overall volume of the Offer? Will the control group be affected after the experiment ends? 
A: It will not affect the overall volume of the Offer. After the experiment ends, if an experimental creative has successfully scaled, it may take over some of the original control group's traffic, but the overall volume will not fluctuate significantly.

Q: Why can't I find the creative I want to include in the experiment? 
A: The system filters available creatives based on the Creative Group, Ad Type, and Ad Output you have set. We recommend using Custom Creatives to combine the specific assets you want.

Q: Is there a limit to the number of experiments I can create? 
A: Yes. The total number of experiments in "Running" and "Pending" status per advertiser account cannot exceed 50.

Q: Can I use a "Video + Playable" creative to help a "Playable-only" creative scale? 
A: No. Currently, A/B testing only supports comparisons between creatives of the same type.

Q: Why does the data in the experiment report not match the Performance Report? 
A: To ensure fairness, the A/B experiment report only records data when both the control and experimental groups are eligible to compete for the same impression. If a specific impression is only eligible for the control group, that data is recorded only in the Performance Report. Therefore, the volume of the control group in the Performance Report is typically higher than in the experiment report.

 
