
Meta Andromeda Update Results – Data from 30 Ad Tests Across 9 Meta Ad Accounts

Facebook Analytics
Siddharth Dwivedi · December 31, 2025 · 13 min read

Andromeda changed how Meta accounts feel day to day, and I don’t mean that in the usual “algo updated” way. It showed up in the parts you normally trust: pacing, stability, and the basic cause-effect of “we changed X, so Y should move.”

After Andromeda rolled out, Meta Ads delivery stopped behaving like a steady machine and became noticeably less predictable. Campaigns that used to spend cleanly began underspending or spending in bursts, even with healthy budgets. CAC moved more sharply on days when we didn’t touch the account.

When things went wrong with Meta Ads in the past, advertisers would reach for familiar fixes: tweaking targeting, splitting ad sets, or forcing bidding controls. Post-Andromeda, those same practices often made account performance more fragile. (And it became harder for us to explain the sudden drops to clients!)

That’s when I realized this is NOT the time for opinions or debates. It’s the time to run experiments and try what Meta is nudging us toward. This post-Andromeda field report is exactly that: the output of our experiment phase, where we kept a running sheet, logged every meaningful change, and wrote down the outcome even when it wasn’t pretty.

I hope you’ll be able to pull some useful insights for your own Meta Ads accounts from it. (And in case you want to read more about the Andromeda impact, here’s our detailed guide explaining all the changes. Access the guide here.)

This Andromeda impact report for Meta Ads compiles 30 Meta Ads tests we ran across 9 active ad accounts from Oct 15 to Dec 11, 2025, right after Andromeda started reshaping delivery patterns. Each test reflects a deliberate change in creative, targeting, structure, catalog, or bidding controls, tracked against outcomes like ROAS, CAC, purchases, and lead quality, with clear “worked, mixed, didn’t hold up” verdicts.

What’s inside our post-Andromeda impact report, and how did we judge results?

This report is built from:

  • 9 ad accounts
  • 30 logged tests
  • Tests were run across different brands and setups, so you’ll see both wins and contradictions.

A “test” here is not a tiny tweak. It’s a deliberate setup shift we’d be comfortable repeating, like switching to open targeting, changing structure (ABO vs CBO), turning on Advantage+ catalog, tightening bidding with tCPA or tROAS, or changing creative format and volume.

What did we track?
For ecommerce, we tracked spend, purchases, CAC, revenue or purchase value, ROAS, plus supporting signals like CPC, CTR, CPM, and AOV where available. For lead gen, we tracked CPL and also downstream signals (SQL count, SQL CAC) so we didn’t confuse cheap leads with real business outcomes.
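
If you want to recompute these numbers from your own exports, here is a minimal sketch of what we mean by ROAS, CAC, AOV, and CPL, assuming the standard definitions. The Account D figures it checks against appear under Learning 2 below.

```python
# Standard ecommerce/lead gen metric definitions assumed throughout this report.
def roas(revenue: float, spend: float) -> float:
    return revenue / spend          # return on ad spend

def cac(spend: float, purchases: int) -> float:
    return spend / purchases        # cost to acquire one customer

def aov(revenue: float, purchases: int) -> float:
    return revenue / purchases      # average order value

def cpl(spend: float, leads: int) -> float:
    return spend / leads            # cost per lead (lead gen accounts)

# Sanity check against one logged row (Account D, open targeting + video, see Learning 2).
spend, revenue, purchases = 12052.61, 44100.5, 61
print(round(roas(revenue, spend), 2))     # 3.66, matching the ~3.659 in the log
print(round(cac(spend, purchases), 2))    # ~197.58 per purchase
print(round(aov(revenue, purchases), 2))  # ~722.96 average order value
```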

How did we write verdicts?

  • Worked: it met or beat the account’s benchmark with enough signal that we’d keep it.
  • Mixed: it helped in some contexts, but failed in others, or improved one thing while hurting another (efficiency vs volume is the classic tradeoff).
  • Didn’t hold up: it consistently missed the benchmark, failed to convert, or made delivery unstable.

If a test was too early to call, we treated it as “not enough signal,” not as a win.
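
For what it’s worth, here’s a hypothetical helper (not from our actual tooling) that mirrors how these verdict rules behave when a setup runs in several contexts. The 20-conversion signal threshold is an illustrative assumption, not a report finding.

```python
# Hypothetical verdict helper mirroring the rules above; thresholds are illustrative only.
def verdict(results, benchmark, min_conversions=20):
    """results: list of (metric_value, conversions) pairs, one per context the setup ran in."""
    with_signal = [(value, conv) for value, conv in results if conv >= min_conversions]
    if not with_signal:
        return "not enough signal"   # too early to call is never treated as a win
    hits = [value >= benchmark for value, _ in with_signal]
    if all(hits):
        return "worked"
    if any(hits):
        return "mixed"               # helped in some contexts, failed in others
    return "didn't hold up"

# Example: a setup that beat a 2.5 ROAS benchmark in one account but missed it in another.
print(verdict([(3.1, 40), (1.8, 35)], benchmark=2.5))  # -> "mixed"
```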

The scoreboard: what held up, what kept breaking, and what I’d try first

If you manage Meta ad accounts for a living, this is the section you come back to. It’s intentionally blunt because “maybe” is expensive during volatility.

| Setup we tested | How it behaved in our log | When it helped | When it hurt | What I’d do now |
|---|---|---|---|---|
| Advantage+ catalog campaigns | Worked often | Clear product signal and solid creative inputs | Full catalog with weak signal | Start best sellers first, expand later |
| ABO setup with broad interests | Worked often | Controlled spend per ad set while still giving room to learn | Too many ad sets and fragmentation | Use ABO to test, consolidate when stable |
| Open targeting | Mixed | Strong video angles and enough creative variety | Thin creative, unclear offer | Only go broad when inputs are ready |
| tROAS ad sets | Mixed | Can lift efficiency when history and signal exist | Underspending and volume choking | Use later, not as the first reaction |
| tCPA ad sets | Mixed (sometimes strong) | Works when success is defined beyond “cheap” | Applied too early or without downstream clarity | Apply after baseline delivery exists |
| One campaign → one ad set → one ad | Mixed | Can work as a clean learning setup | Weak creative or low volume | Use for controlled learning, not scaling |
| One campaign → one ad set → multiple ads | Didn’t hold up consistently | Rarely, unless paired with strong creative scaling | “Multiple ads” without real variety | If you do this, ship real angles, not variants |
| Heavy segmentation (gender splits, over-restriction) | Didn’t hold up consistently | Rare edge cases | Fragmented learning and unstable delivery | Avoid unless you have a strong reason |

One theme shows up almost everywhere in the report. When Andromeda made delivery unpredictable, the accounts that leaned into simplicity and strong creatives stabilized faster than the accounts that tried to out-structure the algorithm. (And it aligns with what Meta has been saying!)

Learning 1: The biggest lever was not targeting. It was creative volume and creative format

We kept seeing the same story across accounts. When performance dipped, teams wanted to “fix the audience,” but the more consistent wins came from feeding Meta better creative inputs.

In our summary sheet, the strongest pattern is blunt: video-first setups outperformed static-heavy and carousel-heavy setups post-Andromeda. The test logs support that in multiple places, especially when we paired video with open targeting.

A practical rule of thumb that held up in our accounts:

  • aim for 4–6 ads per ad set, not 1–2
  • ship real angles, not five minor edits of the same hook
  • prefer video-first when you’re asking Meta to explore (broad, Advantage+, catalog)

If you’re under pressure, this is the boring move you’ll want to skip. It’s also the move that kept showing up as the fix.
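
If you want a quick way to audit this before your next account review, here’s an illustrative sketch. The ad set names and counts are made up; the 4–6 range and the video-first preference come from the guideline above.

```python
# Illustrative creative-coverage audit; ad set names and counts are hypothetical examples.
ad_sets = {
    "Prospecting - Broad": {"video": 1, "static": 3, "carousel": 0},
    "Advantage+ Catalog":  {"video": 0, "static": 0, "carousel": 2},
}

MIN_ADS, MAX_ADS = 4, 6  # the 4-6 ads-per-ad-set guideline above

for name, formats in ad_sets.items():
    total = sum(formats.values())
    notes = []
    if total < MIN_ADS:
        notes.append(f"only {total} ads live, aim for {MIN_ADS}-{MAX_ADS} real angles")
    if formats.get("video", 0) == 0:
        notes.append("no video angle, add one before asking Meta to explore")
    print(f"{name}: " + ("; ".join(notes) if notes else "coverage looks OK"))
```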

Learning 2: Broad worked best when we stopped treating it like a gamble

I’m not going to tell you “always go broad.” That’s how you lose trust fast.

What the log does show is that broad stopped being the villain when we did two things: shipped better video angles, and removed unnecessary restrictions that were reducing learning.

Here’s what broad looked like when it worked in one account:

  • Account D (D2C, Consumables/Snacks), open targeting + video: one campaign line hit ROAS 3.659 on ₹12,052.61 spend with ₹44,100.5 revenue and 61 purchases. The full logged total for the setup shows 125 purchases on ₹37,886.9 spend with ₹94,064 revenue, ROAS 2.48.

And here’s the same idea when it was only “fine,” not a home run:

  • Account A (D2C, Apparel), open targeting: ₹18,174.38 spend, 23 purchases, purchase value ₹41,375, ROAS 2.28. It delivered, but didn’t become the new default for that account.

So the takeaway is not “broad wins.” The takeaway is more useful: broad is a multiplier for strong inputs, and it gets punished when inputs are weak.

Learning 3: Catalog and Advantage+ were either a multiplier or a dead end

Catalog was the most binary behavior in our data. When signal and scope were clean, it worked. When catalog scope was too broad, it often did nothing.

You can see both ends clearly:

  • Account E (D2C footwear, catalog-friendly): Advantage+ catalog delivered ROAS 5.72 on ₹57,354.99 spend, with 116 purchases and ₹327,829.2 purchase value.
  • Account G (D2C fashion/lifestyle, seasonal): Advantage+ best sellers delivered ROAS 2.36 on ₹8,537.32 spend, with 17 purchases and ₹20,180.6 revenue.
  • Account G (same account): Advantage+ all products catalog spent ₹1,428.79 and got 0 purchases.
  • Account C (D2C home/comfort product): Advantage+ catalog spent ₹4,418.37 and got 0 purchases.

Same feature. Four very different outcomes. That’s why we stopped treating “turn on catalog” as a checkbox.

Our default now is simple. Start with best sellers or a tight collection, then expand only after the machine is already working.

Learning 4: Controls like tROAS and tCPA were not a cheat code. They were a timing decision

The temptation after an update is to add controls because controls feel like certainty. The log shows that controls can help, but mostly when the account already has enough volume and enough signal to support them.

A clean example of the classic tradeoff:

  • Account A (D2C, Apparel), tROAS 4.5: ₹3,670.12 spend, 6 purchases, purchase value ₹14,591, ROAS 3.98. The conclusion notes it performed well, but underspent versus budget.

tCPA is more interesting because it depends on what you count as success. In one funnel where downstream outcomes mattered, it showed real promise:

  • Account F (D2C, Ed-tech), tCPA test rows (a quick consistency check follows this list):
      • 25 purchases, CAC ₹641.62, 7 OTOs, OTO CAC ₹2,291.51
      • 41 purchases, CAC ₹886.33, 14 OTOs, OTO CAC ₹2,595.68
      • A third tCPA row exists too, and the sheet conclusion is honest: not every tCPA ad set worked.
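
To reconcile those rows: assuming CAC is spend divided by purchases and OTO CAC is the same spend divided by OTO conversions, both figures in each row trace back to one budget. A quick check on the first row:

```python
# Back-of-the-envelope check on the first Account F row (standard CAC = spend / conversions assumed).
purchases, cac = 25, 641.62
otos, oto_cac = 7, 2291.51

spend_via_purchases = purchases * cac   # ~16,040.50
spend_via_otos      = otos * oto_cac    # ~16,040.57

print(round(spend_via_purchases, 2), round(spend_via_otos, 2))
# Nearly identical spend figures: the cheap front-end CAC and the much higher OTO CAC
# are two views of the same budget, which is why success has to be defined downstream.
```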

So my “agency founder” takeaway is not “use controls.” It’s this: controls are powerful when you know what success is, and dangerous when you use them to escape uncertainty.

Learning 5: Geo restrictions and exclusions are a good idea, but not a guaranteed upgrade

Restrictions can make sense when you have clear reason to believe certain geos are dragging efficiency. The danger is assuming “more restriction equals better performance.”

The Account B (D2C, Beauty) tests show why:

  • Exclusions + small budget launches (Region 1 & Region 2 excluded): combined spend ₹14,051.54, purchases 25, purchase value ₹42,650.2, ROAS 3.04.
  • Only Region 1: spend ₹21,577.73, purchases 27, purchase value ₹36,547.4, ROAS 1.69.
  • Only Region 1 + low budget launches: combined spend ₹17,353.99, purchases 32, purchase value ₹41,274.77, ROAS 2.38.

Same brand, different restriction logic, very different outcomes. Restriction is a tool, not a guaranteed unlock.

Learning 6: Lead gen reminder – cheaper CPL can still be a bad outcome

This is where founders get burned because dashboards lie politely. A cheaper lead is not a cheaper customer.

The Account I (Lead gen) comparison makes that point in a way that’s hard to ignore. One setup produced a cheaper CPL and a lower SQL CAC in the sheet, but the conclusion explicitly flags that it’s not recommended if backend results don’t line up.

That is the right mindset. If you run lead gen, you do not celebrate until the backend confirms the story.

If I had to stabilize a client account next week, this is the sequence I’d follow

When volatility rises, the agency job becomes risk management. You want signal without accidentally destroying learning.

Here’s the order that kept us sane across accounts:

  1. Audit creative coverage first. If you only have one angle and it is tired, everything else is a distraction.
  2. Increase creative variety inside existing structure. Video angles, real hooks, enough volume to let the system learn.
  3. Remove unnecessary splits. Gender splits and over-segmentation showed up as consistent underperformers in the summary learnings.
  4. Test broad only when inputs are ready. Broad without strong creative is not brave, it’s expensive.
  5. Use controls later. tROAS and tCPA can help, but they behave better after baseline stability exists.

That sequence is not a one-time thing. It’s repeatable, and repeatable is what agencies need.

If your campaigns are underspending right now, check these three things before anything else

Underspending is the most common “post-update” symptom, and it’s also the one that triggers the most bad decisions. Start here.

  • Are you running constraints too early? tROAS and tCPA can quietly choke delivery when Meta’s confidence is shaken.
  • Did you fragment learning? Too many ad sets, too many splits, too many micro-structures can starve each ad set of signal.
  • Is creative volume too thin for exploration? Broad and Advantage+ need enough angles to learn. One tired creative makes everything look broken.

If you fix these first, you often get stability back without rebuilding the account.
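
If it helps, here’s a hypothetical triage sketch that encodes those three checks. Every threshold in it is an illustrative assumption, not Meta guidance or a number from our log.

```python
# Hypothetical underspend triage; all thresholds are illustrative assumptions.
def underspend_checklist(account: dict) -> list[str]:
    flags = []
    if account.get("cost_controls_active") and account.get("days_since_launch", 0) < 14:
        flags.append("tROAS/tCPA applied early: loosen or remove controls before anything else.")
    if account.get("ad_sets", 0) > 6:
        flags.append("Learning looks fragmented: consolidate ad sets so each gets real signal.")
    if account.get("ads_per_ad_set", 0) < 4:
        flags.append("Creative volume is thin: ship distinct angles before changing structure.")
    return flags or ["No obvious structural cause: look at creative fatigue and bids next."]

print(underspend_checklist({
    "cost_controls_active": True,
    "days_since_launch": 5,
    "ad_sets": 9,
    "ads_per_ad_set": 2,
}))
```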

Know what’s breaking before you “fix” anything

If you made it this far, you’re already doing the most important thing most advertisers skip after a platform shift. You’re not chasing empty theories. You’re looking for Meta ads patterns that hold up.

But here’s the part nobody tells you.

Reading a field report is useful. Applying it is the hard part, because your account will never match our tests perfectly. Your creatives are different. Your offer is different. Your history is different. Even your “ROAS drop” might be coming from a completely different place: creative fatigue, fragmented structure, or bidding constraints that are quietly choking spend.

That’s exactly where teams lose days.

So instead of guessing which learning to try first, we do the same thing internally across client accounts: we ask Vaizle AI to tell us what changed in this account, then we run the smallest high-signal test.

Vaizle AI connects to your Meta Ads data and answers questions like an analyst would, except it does it in minutes, not hours. You get a short diagnosis, the likely cause, and the next 3 actions to test.

If you want to use it the same way we do, start with one of these:

  • “My campaigns are underspending. What is causing it, and what should I loosen first?”
  • “Is my ROAS drop creative fatigue, structure fragmentation, or bidding constraints?”
  • “Based on my last 14 days, should I run broad, Advantage+ catalog, or stay controlled for now?”

No fluff. No generic advice. Just your data, interpreted fast, so you can make one confident move instead of ten nervous ones.

For reference, here is a log of the anonymized account tags used in this report:

Account I: Lead gen, SQL-quality tracked, CPL vs backend mismatch risk

Account A: D2C apparel/accessories, ecommerce, mixed broad + launches

Account B: D2C fragrance/beauty, ecommerce, geo-sensitive demand

Account C: D2C home/comfort product, ecommerce, volatility-prone post-update

Account D: D2C consumables/snacks, ecommerce, video-led acquisition

Account E: D2C footwear, ecommerce, catalog-friendly SKU winners

Account F: Education funnel, purchase event + OTO focus, tCPA testing

Account G: D2C fashion/lifestyle, ecommerce, seasonal launches and catalog tests

Account H: Performance funnel with CAC sensitivity, tCPA testing

About the Author

Siddharth Dwivedi

Siddharth built two bootstrapped companies from the ground up: Vaizle and XOR Labs. He’s personally managed over Rs 100cr in ad budget across eCommerce, D2C, ed-tech, and health-tech segments. Apart from being a full-time marketer, he loves taking on the challenges of finance and operations. When not staring at his laptop, you’ll find him reading books or playing football on weekends.
