Andromeda changed how Meta accounts feel day to day, and I don’t mean that in the usual “algo updated” way. It showed up in the parts you normally trust: pacing, stability, and the basic cause-effect of “we changed X, so Y should move.”
After Andromeda rolled out, Meta Ads delivery stopped behaving like a steady machine and became noticeably less predictable. Campaigns that used to spend cleanly began underspending or spending in bursts, even with healthy budgets. CAC moved more sharply on days when we didn’t touch the account.
When things went wrong with Meta Ads in the past, advertisers could rely on familiar fixes: tweak targeting, split ad sets, or force tighter bidding controls. Post-Andromeda, those same moves often made account performance more fragile. (And it became harder for us to explain these sudden drops to clients!)
That’s when I realized this is NOT the time for opinions or debates. We needed to run experiments and try what Meta wants us to do. This post-Andromeda field report is exactly that: the output of that experiment phase, where we kept a running sheet, logged every meaningful change, and wrote down the outcome even when it wasn’t pretty.
I hope you’ll take away some useful insights for your own Meta Ads accounts. (And in case you want to read more about the Andromeda impact, here’s our detailed guide explaining all the changes. Access the guide here.)
This Andromeda impact report compiles 30 Meta Ads tests we ran across 9 active ad accounts between Oct 15 and Dec 11, 2025, right after Andromeda started reshaping delivery patterns. Each test reflects a deliberate change in creative, targeting, structure, catalog, or bidding controls, tracked against outcomes like ROAS, CAC, purchases, and lead quality, with a clear “worked, mixed, or didn’t hold up” verdict.
This report is built from that running sheet: every test we ran, the numbers it produced, and the verdict we landed on.
A “test” here is not a tiny tweak. It’s a deliberate setup shift we’d be comfortable repeating, like switching to open targeting, changing structure (ABO vs CBO), turning on Advantage+ catalog, tightening bidding with tCPA or tROAS, or changing creative format and volume.
What we tracked
For ecommerce, we tracked spend, purchases, CAC, revenue or purchase value, ROAS, plus supporting signals like CPC, CTR, CPM, and AOV where available. For lead gen, we tracked CPL and also downstream signals (SQL count, SQL CAC) so we didn’t confuse cheap leads with real business outcomes.
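If you keep a similar sheet, the derived columns are simple arithmetic you can script once and reuse. Here’s a minimal sketch in Python; the function and field names (`spend`, `purchases`, `sqls`, and so on) are placeholders for whatever your export actually calls them, not anything Meta-specific.

```python
# Minimal sketch of the derived metrics we tracked. Field names are placeholders;
# adapt them to your own export.

def ecommerce_metrics(spend, purchases, revenue):
    """Core ecommerce outcomes: cost per acquisition, return on spend, order value."""
    return {
        "CAC": spend / purchases if purchases else None,    # cost per purchase
        "ROAS": revenue / spend if spend else None,          # revenue per unit of spend
        "AOV": revenue / purchases if purchases else None,   # average order value
    }

def leadgen_metrics(spend, leads, sqls):
    """Lead gen outcomes: surface-level CPL plus the downstream SQL CAC."""
    return {
        "CPL": spend / leads if leads else None,    # cost per lead (the dashboard number)
        "SQL_CAC": spend / sqls if sqls else None,  # cost per sales-qualified lead (the business number)
    }
```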
How we wrote verdicts
If a test was too early to call, we treated it as “not enough signal,” not as a win.
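For transparency, the verdict rule itself is mechanical enough to write down. The sketch below is only an illustration; the minimum-conversion floor and the improvement cutoffs are made-up numbers, not the thresholds we actually applied.

```python
# Illustrative verdict rule. The 30-conversion floor and the ±10% cutoffs are
# arbitrary examples, not our real thresholds.

def verdict(conversions, kpi_change_pct, min_conversions=30):
    if conversions < min_conversions:
        return "not enough signal"   # too early to call is never logged as a win
    if kpi_change_pct >= 10:
        return "worked"
    if kpi_change_pct <= -10:
        return "didn't hold up"
    return "mixed"
```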
If you manage Meta ad accounts for a living, this is the section you come back to. It’s intentionally blunt because “maybe” is expensive during volatility.
| Setup we tested | How it behaved in our log | When it helped | When it hurt | What I’d do now |
|---|---|---|---|---|
| Advantage+ catalog campaigns | Worked often | Clear product signal and solid creative inputs | Full catalog with weak signal | Start best sellers first, expand later |
| ABO setup with broad interests | Worked often | Controlled spend per ad set while still giving room to learn | Too many ad sets and fragmentation | Use ABO to test, consolidate when stable |
| Open targeting | Mixed | Strong video angles and enough creative variety | Thin creative, unclear offer | Only go broad when inputs are ready |
| tROAS ad sets | Mixed | Can lift efficiency when history and signal exist | Underspending and volume choking | Use later, not as the first reaction |
| tCPA ad sets | Mixed (sometimes strong) | Works when success is defined beyond “cheap” | Applied too early or without downstream clarity | Apply after baseline delivery exists |
| One campaign → one ad set → one ad | Mixed | Can work as a clean learning setup | Weak creative or low volume | Use for controlled learning, not scaling |
| One campaign → one ad set → multiple ads | Didn’t hold up consistently | Rarely, unless paired with strong creative scaling | “Multiple ads” without real variety | If you do this, ship real angles, not variants |
| Heavy segmentation (gender splits, over-restriction) | Didn’t hold up consistently | Rare edge cases | Fragmented learning and unstable delivery | Avoid unless you have a strong reason |
One theme shows up almost everywhere in the report. When Andromeda made delivery unpredictable, the accounts that leaned into simplicity and strong creatives stabilized faster than the accounts that tried to out-structure the algorithm. (And it aligns with what Meta has been saying!)
We kept seeing the same story across accounts. When performance dipped, teams wanted to “fix the audience,” but the more consistent wins came from feeding Meta better creative inputs.
In our summary sheet, the strongest pattern is blunt: video-first setups outperformed static-heavy and carousel-heavy setups post-Andromeda. The test logs support that in multiple places, especially when we paired video with open targeting.
A practical definition that held up in our accounts:
If you’re under pressure, this is the boring move you’ll want to skip. It’s also the move that kept showing up as the fix.
I’m not going to tell you “always go broad.” That’s how you lose trust fast.
What the log does show is that broad stopped being the villain when we did two things: shipped better video angles, and removed unnecessary restrictions that were reducing learning.
Here’s what broad looked like when it worked in one account:
And here’s the same idea when it was only “fine,” not a home run:
So the takeaway is not “broad wins.” The takeaway is more useful: broad is a multiplier for strong inputs, and it gets punished when inputs are weak.
Catalog showed the most binary behavior in our data. When signal and scope were clean, it worked. When catalog scope was too broad, it often did nothing.
You can see both ends clearly:
Same feature. Four very different outcomes. That’s why we stopped treating “turn on catalog” as a checkbox.
Our default now is simple. Start with best sellers or a tight collection, then expand only after the machine is already working.
The temptation after an update is to add controls because controls feel like certainty. The log shows that controls can help, but mostly when the account already has enough volume and enough signal to support them.
A clean example of the classic tradeoff:
tCPA is more interesting because it depends on what you count as success. In one funnel where downstream outcomes mattered, it showed real promise:
So my “agency founder” takeaway is not “use controls.” It’s this: controls are powerful when you know what success is, and dangerous when you use them to escape uncertainty.
Restrictions can make sense when you have clear reason to believe certain geos are dragging efficiency. The danger is assuming “more restriction equals better performance.”
The Account B (D2C, beauty) tests show why:
Same brand, different restriction logic, very different outcomes. Restriction is a tool, not a guaranteed unlock.
This is where founders get burned because dashboards lie politely. A cheaper lead is not a cheaper customer.
The Account I (lead gen) comparison makes that point in a way that’s hard to ignore. One setup produced a cheaper CPL and a lower SQL CAC in the sheet, but the conclusion explicitly flags that it’s not recommended if backend results don’t line up.
That is the right mindset. If you run lead gen, you do not celebrate until the backend confirms the story.
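A quick back-of-the-envelope example makes the gap concrete. The numbers below are hypothetical, not pulled from Account I or any other account in this report.

```python
# Hypothetical spend split showing why a cheaper lead is not a cheaper customer.
spend = 1000
setup_a = {"leads": 100, "sqls": 20}  # pricier leads, 20% qualify downstream
setup_b = {"leads": 200, "sqls": 10}  # cheaper leads, only 5% qualify

for name, s in [("A", setup_a), ("B", setup_b)]:
    cpl = spend / s["leads"]       # what the dashboard celebrates
    sql_cac = spend / s["sqls"]    # what the business actually pays per qualified lead
    print(f"Setup {name}: CPL ${cpl:.0f}, SQL CAC ${sql_cac:.0f}")

# Setup B wins on CPL ($5 vs $10) but loses on SQL CAC ($100 vs $50).
```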
When volatility rises, the agency job becomes risk management. You want signal without accidentally destroying learning.
Here’s the order that kept us sane across accounts:
That sequence is not a one-time thing. It’s repeatable, and repeatable is what agencies need.
Underspending is the most common “post-update” symptom, and it’s also the one that triggers the most bad decisions. Start here.
If you fix these first, you often get stability back without rebuilding the account.
If you made it this far, you’re already doing the most important thing most advertisers skip after a platform shift. You’re not chasing empty theories. You’re looking for Meta Ads patterns that hold up.
But here’s the part nobody tells you.
Reading a field report is useful. Applying it is the hard part, because your account will never match our tests perfectly. Your creatives are different. Your offer is different. Your history is different. Even your “ROAS drop” might be coming from a completely different place: creative fatigue, fragmented structure, or bidding constraints that are quietly choking spend.
That’s exactly where teams lose days.
So instead of guessing which learning to try first, we do the same thing internally across client accounts: we ask Vaizle AI to tell us what changed in this account, then we run the smallest high-signal test.
Vaizle AI connects to your Meta Ads data and answers questions like an analyst would, except it does it in minutes, not hours. You get a short diagnosis, the likely cause, and the next 3 actions to test.
If you want to use it the same way we do, start with one of these:
No fluff. No generic advice. Just your data, interpreted fast, so you can make one confident move instead of ten nervous ones.
For reference, here is the key of anonymized account tags used in this report:
Account A: D2C apparel/accessories, ecommerce, mixed broad + launches
Account B: D2C fragrance/beauty, ecommerce, geo-sensitive demand
Account C: D2C home/comfort product, ecommerce, volatility-prone post-update
Account D: D2C consumables/snacks, ecommerce, video-led acquisition
Account E: D2C footwear, ecommerce, catalog-friendly SKU winners
Account F: Education funnel, purchase event + OTO focus, tCPA testing
Account G: D2C fashion/lifestyle, ecommerce, seasonal launches and catalog tests
Account H: Performance funnel with CAC sensitivity, tCPA testing
Account I: Lead gen, SQL-quality tracked, CPL vs backend mismatch risk
Siddharth built two bootstrapped companies from the ground up: Vaizle and XOR Labs. He’s personally managed over Rs 100cr in ad budget across eCommerce, D2C, ed-tech, and health-tech segments. Apart from being a full-time marketer, he loves taking on the challenges of finance and operations. When not staring at his laptop, you’ll find him reading books or playing football on weekends.