5: Barbara Galiza: 5 Golden Rules for Conversion Events
S1 E5


Barbara: You need to get to a place where,
you know, you're trying to identify out

of these, you know, a hundred people that
started the trials, which 30 ones are

the most likely to go on to, to convert.

So that's kind of the predictive
value that you're looking at.

So when you decrease the time from click to sending the event, it makes it easier for them to identify it's the same person.

So there is, um, there is an attribution value in sending this event fast. So for mobile businesses, you can send it fast, so you should try to send it fast.

Jacob: Even within that 24 hour window,
like the, with the faster, the better.

Barbara: Yes.

The faster, the better, for sure.

Hey there.

Welcome to the Price Power Podcast.

My name is Jacob Brush Finn, CEO and founder of botsy.

Today we're talking with Barbara Galiza.

I first came across Barbara
from a written content and post

on LinkedIn a year or two ago.

I thought she has stellar insights that are usually not obvious.

Her thinking and writing
cut through the noise.

She has 15 years of marketing, growth, and analytics experience.

She was responsible for growing the dating app HER from a few thousand to over a million users as head of growth and an early employee.

She's worked with brands including Microsoft, WeTransfer, Molly, VIO, dbt Labs, Dentsu, and more.

Now she's the founder of Fix My Tracking, a service for marketers dealing with broken or untrustworthy conversions.

That feels like pretty
much every marketer.

Uh, with every conversion today.

Uh, but in this episode, we talk
through her five golden rules for

structuring conversions, which
I thought was super insightful,

uh, and, and many more insights.

Uh, all right, let's dive in.

Jacob: How should companies
be thinking about attribution

and scaling campaigns in 2025?

Barbara, where do we
even start with it all?

Barbara: Yeah, I think, I always think
like the most important place to start

is to look at your marketing and analyze,
you know, what does it actually need?

So, um, let's assume that, you know, you're doing a lot of, you know, paid media.

You know, that's the case
of most mobile companies.

They're running a lot of, like, Meta campaigns and, you know, Google campaigns.

In that case, a good place to start is making sure that your campaigns have conversion signals to optimize on. But ultimately, what kind of attribution, what kind of tracking you need is going to depend on what channels you're acquiring your users from.

Jacob: Yeah.

Yeah, that makes sense.

Uh, and so I was, I think maybe a good
place to dive in a little deeper is

I was reading your, uh, Five Golden Rules for Structuring Conversions.

And I think everything is so
algorithmic driven, right?

And it comes down to making sure, as you say, we're sending those right conversion events. Uh, can we talk through those more with respect to mobile and subscription apps? Those five golden rules.

And so.

The first one you have is, is don't
measure more than three events.

Barbara: Yes.

Um, you know, event tracking is free.

You can track as many events as you want.

You know, nowadays, especially if you do, like, web and you have events on the client side, things are very easy to set up with GTM.

If you have mobile, you know, it's quite easy to also set things up with, like, Segment or RudderStack or the tool that you're using.

You know, setting up events has become easier than ever, but, you know, tracking events comes with a quite big hidden cost.

There is the cost of maintenance.

You know, like events break, they always
break and then you know, when you're

tracking 10 events and one of them doesn't
look like it's accurate and it's broken,

that leads to investigation and also,
you know, doubts about the other events.

And there's also the cost of, yeah, confusion, because, you know, when you're tracking a lot of events, people often don't understand exactly, you know, what's happening for each event.

I've, I've seen very few companies
that actually had, you know, good

up-to-date documentation on what the
events that they were tracking were.

So, you know, with every event you add, you're adding complexity for your marketing team, you're adding complexity for your data team.

And I've seen that, you know, three events is a great number for tracking campaigns. And again, like, you might need to track more events in your product analytics, like Mixpanel or Amplitude.

You could be, you know, tracking, yeah, like the whole user journey there. There is a lot more that could be valuable to the product team.

When we're talking about sending the data to the ad platform, I don't think that companies need more than three events.

Jacob: What, uh, for subscription apps, what do you typically recommend for those three events?

Barbara: I think like it depends on
what kind of campaigns you're running.

So for example, a lot of, um, app businesses right now are experimenting with web-to-app.

And then sometimes, you know, they have a very high up-the-funnel event there that they're tracking.

Maybe, you know, um, people click on
an ad and they get taken to like a

survey that's quite popular for let's
say like, um, a weight loss app.

You know, like take the survey
to figure out like what's your

calorie intake, for example.

And in that case, you know, like a client side event that fires for survey completion can be one of those events.

In most cases, you know, what I recommend is some type of top-of-funnel event, if you're doing like awareness campaigns; then the first event where you're grabbing PII, which for apps is usually the signup or registration event; and then the third event would be, you know, start trial. For most app businesses, you see that event firing within the first 24 hours, so that is a good event to also be using for the campaigns.

Jacob: Got it.

And so we want to structure this
hierarchy of events from early on where

potentially if we have these longer
onboarding funnels or survey flows,

not everyone will complete those.

So.

Barbara: Mm-hmm.

Jacob: maybe an event sooner for, you know, some early, uh, checkpoint or some early display of value, entering weight or calories or something like that. And then moving on to maybe, uh, account sign up, where we can then also pass email.

Uh, so passing that PII, which gets us better match rates, and then getting to actually an event that is associated with revenue.

You know, a trial isn't revenue quite
yet, but it's much more predictable

in terms of we typically know our
trial conversion rates, and so that's

gonna be the highest signal event.

And I, I know with SKAN, I think a major change was we can now continue sending these events, and so we can continue sending higher value events over time.

Uh, do we wanna think about that?

Does that potentially expand into
sending more than three events?

Is that, do we, should I care about that?

Is that something more for
later stage optimization?

How should we think about that?

Barbara: Yeah, like the problem usually with, you know, with subscription apps is that, usually, the conversion, when it actually, you know, becomes revenue, tends to happen outside of lookback windows.
So, you know, for example, with Meta, I think the maximum lookback window that you can have, if I'm not mistaken, is seven days post-click. And it's rare, the situations where you have a revenue event for a subscription product that occurs within those seven days, because usually there is a period of a trial, and a trial can be, you know, yeah.

Seven days.

14 days.

It can be a longer period there too.

So it's important to have this value. And the most important reason, you know, to have it is so you are making a distinction to the ad platform on what's a valuable user and what's a not so valuable user.

So usually, you know, what the most advanced, you know, app companies are doing is that they're passing, um, an offline event, so like a server-to-server event that, you know, estimates how likely it is for the user to convert, or what they predict their value to be.

Jacob: And that is a kind of predictive revenue value, and they're passing that back to the ad network in the revenue parameter. And how is that done? Say we have 30% of our trials convert. Is that a simple estimation of that, or is it more complicated?

Barbara: Because, basically, you know, like 30% of your trials is not going to tell you which trials are the valuable ones.

So that's what you need to get to.

You need to get to a place where, you
know, you're trying to identify out of

these, you know, a hundred people that
started the trials, which 30 ones are

the most likely to go on to, to convert.

So that's kind of the predictive
value that you're looking at.

And that is done, you know, in different ways, but, you know, a common way that companies do it is looking at the features that they've used in, you know, um, their first 24 hours or the first 48 hours of app usage.

You know, certain features for
products tend to indicate that

the user is more likely to pay.

Perhaps there's other signals that are relevant to your business. They also take into account something like, you know, they've already looked into the plans: you can see that they visited the page that has the plans, they've shown an interest to subscribe.

Even for B2B companies, visits to the pricing page is a very common variable that is used to build predictive models.
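The kind of early-signal model described here can be sketched very simply. This is an illustrative stub, not anything from the episode: the signal names and weights are hypothetical placeholders that a real model would learn from historical trial-to-paid conversion data.

```python
# Illustrative early-signal scoring for trial users. Signal names and
# weights are hypothetical; fit a real model on historical data.

EARLY_SIGNAL_WEIGHTS = {
    "visited_pricing_page": 0.30,  # strong intent signal, as discussed
    "used_core_feature": 0.25,
    "returned_within_24h": 0.25,
    "completed_onboarding": 0.20,
}

def conversion_score(signals):
    """Sum the weights of the boolean early signals a user has triggered."""
    return sum(w for name, w in EARLY_SIGNAL_WEIGHTS.items() if signals.get(name))

def top_trials(users, k):
    """Return the k trial users most likely to convert, by score."""
    return sorted(users, key=lambda u: conversion_score(users[u]), reverse=True)[:k]
```

With a hundred trial starts, `top_trials(users, 30)` picks the thirty most likely converters, which is exactly the "which 30 of 100" question from earlier.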

Jacob: Got it.

Yeah.

Yeah.

And I, I think it's, it's not simple, right? It's not just a simple thing to build these predictive models. I know, I'm sure some teams do it internally, and there's a lot more companies springing up to offer this. Journey, I know, is one.

I think, uh, there, there's lots
of others out there that, uh,

uh, solve this problem, but it's,

Barbara: Yeah,

Jacob: not a simple one.

It's probably a later-stage strategy, right? It's probably not... If you're a newer app, is this something you should think about, or is this more for optimizing, you know, when scaling large?

Barbara: Well, I think, like, for most businesses, they can probably postpone having some type of predictive value, because maybe they have, you know, like, other signals that they can take into consideration to assess how much a user is worth.

But with mobile apps, with subscription apps in specific, because the revenue action is occurring almost always outside of the lookback window, you need to have something in place that tells the ad platform this is a good user or this is a bad user.

Like, for the first iteration, you're not going to have, uh, a predicted LTV that, you know, matches the reality two years from signup; obviously, that, we're talking about very, very highly developed algorithms. But at the very least, you need to be like, okay, this user, you know, came back to the app within two hours; like, you know, they've clicked through the features. You need to have

some kind of signal that you're using, and that can be as simple as, you know, like: we're gonna take all people that create an account, and then we're gonna say, okay, either they're worth zero, or they're worth 10, or they're worth 20. You know, it can be something even as simple as that, but you need to have some kind of variable that tells you how much this user is worth.
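That zero/10/20 bucketing can literally be a few lines of rules. A minimal sketch, with illustrative tier amounts and signals (calibrate both against your own trial-to-paid data):

```python
def user_value(opened_again, registered, started_trial):
    """Assign a coarse value tier to a new user (amounts are illustrative).

    0  - opened the app once, never registered
    5  - registered an account
    10 - registered and came back
    20 - started a trial
    """
    if started_trial:
        return 20
    if registered and opened_again:
        return 10
    if registered:
        return 5
    return 0
```

Even a crude tiering like this gives the ad platform a way to distinguish good users from cheap installs, which is the point being made here.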

Otherwise, what's going to happen is you're going to be stuck with running on CAC. So you're gonna be stuck with, you know, um, a bid-based model where you're paying, like, per install. And then what the platform is going to do is optimize for the cheapest installs.

Jacob: Yeah.

Barbara: And the cheapest installs, usually, you know, like if you've run paid campaigns for apps, it's like 17-year-olds. You know, the cheapest install is usually not the install that converts into a paying customer, so you cannot really do that for too long if you want to do paid media

Jacob: Yeah,

Barbara: at scale.

Jacob: that makes sense.

And so, uh, this kind of segues into your, your next rule. We've kind of touched upon it a little bit, but: fire within 24 hours of the ad click. Usually for most subscription apps, the majority of trial conversions happen on day zero, immediately in that first session. Is there value to waiting longer, to see what that, uh, trial converter actually does, to send back, um, more nuanced data?

Or is it just the fact that we send
back this trial conversion data and we

include some predicted revenue parameter?

Or is it that we send back maybe
not all trial converters, we know

that certain trial converters are
higher value within that 24 hours.

What other optimizations
should we think about?

Does that make sense?

Barbara: yeah, I think it makes sense.

So ultimately, you know, how much you have to wait to get to a predicted value that is valuable for your performance campaigns is going to depend on what the customer journey looks like for your business.

You know, for something like B2B (I keep coming back to B2B because B2B is very different from mobile apps in so many ways, so I think it helps, you know, make the picture a bit clearer), for like a B2B product, there is a big value in waiting to send this, um, predictive value, whatever it is. Because, yeah, you know, a lot of the times the user doesn't do enough using of the product for the algorithm you're using to predict value to actually be able to tell how good they are.

So what B2B companies mostly
do is they will rely on like

firmographic information.

So they're gonna rely on something
like, you know, like the, the domain

from the email, from the registration.

What company does it belong to?

Is it within our ICP?

So they use, like, registration data, and then they try to postpone predicting the value based on product usage.

But like you've said it yourself, you know, with most mobile businesses, trial conversion is happening within the first 24 hours. Like, you know, my experience with mobile apps is you can tell who the most valuable users are within the first hour.

So obviously, you know, analyze your data and take a look: what signals do I have that correlate to purchasing, to subscribing, and when do they happen?

Maybe for your business it can
be different, but I would say for

mobile apps, for most cases, you can
definitely like predict value on, um,

on a short period of time and use that.

And the faster you fire this event, the easier it is for the ad platforms to attribute.

Like, there isn't, um, a direct calculation for this, but you can kind of imagine that, you know, Meta can't right now exactly match the person that, you know, clicked on the ad and the person that downloaded the app. Before, pre-ATT, they had, you know, the IDFA and they could match easily, even if the person was to, um, create an account much later, because the IDFA would remain the same. But now they're trying to use all these other signals to make this connection.

So when you decrease the time from click to sending the event, it makes it easier for them to identify it's the same person. So there is, um, there is an attribution value in sending this event fast. So for mobile businesses, you can send it fast, so you should try to send it fast.

Jacob: Even within that 24 hour window,
like the, with the faster, the better.

Barbara: Yes.

The faster, the better, for sure.

Yeah, the faster.

Jacob: Interesting.

Barbara: The closer to the click, the better. But again, you want to, you know... because you want to have this value too.

What I see companies doing sometimes is they have a client event, you know, perhaps set up via an MMP or the SDK, that fires as soon as possible, and then they have a server side event that fires, you know, a few hours later, and then that server side event will have the value.
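This fast-client-event-plus-slower-server-event pattern relies on both events sharing an event ID so the platform can deduplicate them. A rough sketch of the two payloads, loosely following Meta's Conversions API field names; the event name, value, and timing here are illustrative assumptions:

```python
import time
import uuid

def build_event(event_id, value=None):
    """Build a Meta CAPI-style event payload (illustrative values).

    The fast client-side event omits the value; the later server-side
    event reuses the same event_id and attaches the predicted value,
    so the platform can count them as one conversion.
    """
    event = {
        "event_name": "StartTrial",   # illustrative event name
        "event_time": int(time.time()),
        "event_id": event_id,         # shared ID enables deduplication
        "action_source": "app",
    }
    if value is not None:
        event["custom_data"] = {"value": value, "currency": "USD"}
    return event

shared_id = str(uuid.uuid4())
client_event = build_event(shared_id)               # fired as soon as possible
server_event = build_event(shared_id, value=12.50)  # fired hours later, with value
```

The key design choice is that the value arrives late without creating a second conversion, because both payloads carry the same `event_id`.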

Jacob: Got it.

And so that, uh, that client side event is better for getting data to the ad network, the ad platform, faster. But to get a more predictive value, it can be valuable to send that secondary server event, whether that's a predictive revenue value, or whether that's, um, what people are calling signal engineering now, where we're sending our highest value trials: we know that people within, um, 35 to 45 are our highest value users, we want to push the ad networks to those users, so we send some, uh, more calculated or filtered server value.

Uh, got it.

Barbara: Yeah,

Jacob: That's, yeah.

Yeah.

Barbara: I hadn't heard this term signal engineering before. It's difficult to keep up, you know, like, there are always new terms, but I think

like, you know, what you're describing now
is a way of calculating predictive value.

What basically you're saying is, like, you know, okay, if you're using like a cost per trial and your cost per trial is 20, what you're saying is: only pass users that I believe are worth it, don't pass users that are not worth anything.

So there are multiple ways to do that.

And like I said, like it's not
necessarily about, you know,

building the most complex LTV model.

It's about starting from a place where you
are creating guard rails so that you're

not just acquiring the cheapest users
that don't have a path to conversion.

Jacob: Yep.

That makes sense.

And it's all working with the
algorithm, kind of feeding the right,

the right signals into the algorithm
to, to help it work for you versus

to just give Facebook all your money.

Barbara: Indeed.

Jacob: Uh, yeah.

Barbara: Yeah.

Jacob: Um, okay, cool.

And, and so we've kind of,
uh, you're right, right?

Get all the teenagers into your
app and no one does anything,

or no one ever pays you.

Uh, that's not what we want.


Jacob: And so.

The, the next one we've, we've kind
of talked about a lot is, uh, your

third rule is include value data.

And so we don't have to kind of repeat everything we just said, but essentially: for subscription apps, you don't have the revenue data fast enough, so you have to figure out some way to include value. Probably an at-scale strategy is building out a predictive algorithm to try to figure out LTV.

Most people aren't at the scale to do
that, and so we can kind of take measures

in that direction by sending, uh, filtered
trials or filtered trial conversion

data that has a higher signal to us.

Is that generally right?

Barbara: Yeah, that can definitely be
an approach for tackling the problem.

Yeah, for sure.

Jacob: Any, anything else you'd add
to that for, for subscription apps?

For, for the value data.

Barbara: I think, for the simplest ways to start, what you're saying, like filtering around the trial, works. Or just building some type of, um, how would I say... oof... like, levels that you use to bucket the users in, you know: okay, this user is worth zero.

They, they came, they opened the app.

They, they, they didn't open
it again, they didn't register.

This user is worth, you know,
five, this user is worth 10.

Just building something like that is already going to help you exclude the people that are least likely to convert.

But always, you know, a mistake that I often see is people over-engineering the first version of this.

I have a few articles that I've written, um, on, like, ROAS and predicted LTV. So I dunno, if there's a page for this podcast, it could be good to link them.

But a common mistake is, you know, people
are trying to think like, you know, I want

to build a model that when someone tries a
trial, I'm gonna be able to tell how much

they've been worth two years from now.

I'm like, you probably do not have the data for that. Please don't attempt it.

Jacob: that, yeah.

I, I think that makes sense.

So, yeah.

We'll, we can include those
links in the show notes for

people to, to dig into more.

Um, great.

And this is a, that's a good segue into the next one, of deduplicating across client and server, in terms of the events you're sending. And people can overcomplicate it.

Oftentimes the issue is just that
you're, uh, sending too many events

or, or, or duplicating events.

How do I, like, how do I even start
to understand if I'm sending duplicate

events, how do you approach this?

Barbara: Yeah, so this, I wouldn't say, is a simple thing, because I've just seen, like, a lot of situations where it can be quite difficult to troubleshoot duplicated events.

So, um, obviously you can have the case
where, you know, like the same event

is, is fired multiple times, but often
what you have is, you know, when you're

sending an event from multiple sources.

And for mobile apps, it's quite common: you have, like, the ad platform SDK, maybe you have an MMP, and you also have, like, a server-side integration, so you can pass PII to improve attribution and you can also have, like, a value. The ad platform needs to know that, you know,

If you're sending the same conversion
event from these three sources, then

they're all the same conversion event.

You know, usually that's deduplicated with an event ID.

But there's a bunch of little things that can break here,

so usually, you know, what I recommend is to, yeah, try to, um... maybe let's take a step back here so I can explain this, you know, a bit better.

What I usually recommend is to have an overview of both this unified event and also the separate events from each source. So what you can do is you can create, like, events that go like: signup MMP, signup SDK, signup CAPI. And then you also have a signup event that is to be deduplicated. And then when you're looking into reporting, you can take a look at what the numbers are, and obviously, you know, if one plus one plus one equals three, they're not being deduplicated.
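That reporting check is simple arithmetic over the per-source counter events and the unified event. A sketch, with illustrative event names and counts:

```python
def dedup_ratio(per_source_counts, unified_count):
    """Compare the unified (deduplicated) event count to the per-source sum.

    A ratio near 1.0 means nothing is being merged (1 + 1 + 1 = 3);
    a ratio near 1/len(per_source_counts) means dedup is working.
    """
    total = sum(per_source_counts.values())
    return unified_count / total if total else 0.0

# Illustrative counts: three sources reporting roughly the same signups.
per_source = {"signup_mmp": 100, "signup_sdk": 98, "signup_capi": 102}
healthy = dedup_ratio(per_source, 100)  # ~0.33: sources merged into one event
broken = dedup_ratio(per_source, 300)   # ~1.0: every source counted separately
```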

Jacob: And is that what you're saying
of uh, tagging, different signup

events coming from different sources?

Is that all done internally
in your infrastructure?

And then the final signup event is
what you send to the ad network?

Barbara: No.

So you would be sending to the ad network basically two different events for when the signup happens. This is mostly an issue... you can also see it on Google, but you mostly see it on Meta. So when you're sending the events to Meta, let's say from your SDK, you can have two events that fire at the same time, and they're called different things. And then one event can be, you know, Complete Registration, which is the standard Meta event for signup, and then you can also have another event that is just used for troubleshooting.

Jacob: Got it.

And so you can measure, or you can track, the events in Facebook to see the counts, but you're only actually using one for conversion, or one to actually optimize your campaigns on.

Barbara: So the idea is that you use the deduplicated event for optimization, and then you use the individual events for troubleshooting.

Jacob: Interesting.

That makes sense.

And to de-duplicate that event.

Uh, is that typically something I have to do by sending a server event? Where it's, it's harder... if I have the Facebook SDK, I have an MMP SDK, maybe I have a revenue infrastructure SDK, and then I also have the Conversions API, uh, how do I...

How do I, how do I deal with all that?

Should I not have all of those
SDKs implemented in my app?

Should I really like remove them and strip
them down or, or is it that I'm just,

I, I'd like, I'm not the expert here.

I don't even know where, what
I would start with all that.

Barbara: Yeah.

No, I, I totally understand, yeah, the confusion. So I don't know necessarily about the revenue example that you've mentioned; I think that would be using CAPI. So I would say that, you know, for most mobile apps, the sources you're looking at are the MMP, the SDK, and CAPI.

Uh, I wrote, I think that's how we got started to, to talk about this podcast, the article on whether you still need an MMP, and that is something that, um, can also be a good thing to link here.

I would say that, you know, usually what I would recommend, and again, it depends on the platform (in the article I cover a bit what that looks like per channel), but usually what I would say is you want to have a client event and you want to have a server event, or just a server event. But then again, that depends on what kind of data you're able to capture on your server event.

Because there are advantages of having the client event too, because, you know, then you have the touchpoint being owned by, like, Meta entirely.

You know, let's say you have the Meta SDK and then the clicks are happening on the Meta platform, there are signals that are shared there that can help attribution,

but let's say on your server event, like
everyone that signs up uses Facebook to

sign up and then you have like a Facebook
user ID that you're capturing there.

Like, if you have that, I don't know how valuable the Facebook SDK would be, but, you know, you can test it and analyze it. But I think ultimately, for all of these things, I would say it depends: it depends on what you're capturing, it depends on how long it takes for your trial to happen.

It depends on what kind of value
calculation do you need to do.

Some people even do the value calculations in the client and send that from there.

So there are multiple
ways of setting this up.

Jacob: Yep.

As always, uh, it depends.

Uh, and, and I think it depends even more with the world of web subscriptions.

And so you can have someone come
into your app and then purchase in

web, you can have someone purchase
on web and then come into your app.

And so it seems like getting to a point where you have, um, the Conversions API working for you is, uh, at least for Meta, generally the direction

where you want to go, where I hear
most people saying that this is

gonna give you the best quality data.

Barbara: I, uh, I would say so.

I think especially... I mean, sorry, it depends, um, especially on what type of PII you're capturing. Because, you know, I know I come back to B2B use cases.

You know, Meta's attribution is a lot more difficult for B2B because, you know, most people creating an account, or, you know, signing up, starting a trial on a B2B product, are doing it with a business email, and Meta doesn't know who a business email belongs to. And also, like, no one signs up to a business product with a Facebook login.
You know, like, there are all these other issues that occur in B2B that maybe you don't really have on a mobile app.

And on the other hand, you
know, I've seen like mobile

apps run campaigns on LinkedIn.

And then it's also kind of the same problem: the emails that your product has and the emails that LinkedIn has are very different, so it's more difficult to match the users.

Jacob: Got it.

Got it.

And that's another good segue into your final, uh, conversion golden rule: pass PII. For mobile subscription apps, is this just normally email?

Barbara: Yes, email is common, if you're capturing it. Apple SSO, for example, doesn't give you the email. So if you have the option of sign-in with Apple, most of the time you're not capturing the email they're using; you get this, um, uniquely generated, like, hashed email that Apple creates.

I think with the Facebook API right
now, if someone signs up with Facebook,

I think you also don't get the email.

I think you also only get
like the Facebook login.

So ultimately it will depend on what data you're able to capture.

If they sign up with Facebook and you
have this login id, login ID is good.

Otherwise, you know, if you have first names, last names, location, um, device information: this is all PII that can be passed and can be used to help attribute conversions.
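For Meta's Conversions API specifically, PII fields like email and names are normalized and SHA-256 hashed before they're sent. A minimal sketch of that common trim-lowercase-hash pattern (the exact per-field normalization rules are in Meta's docs; the field keys and values below are illustrative):

```python
import hashlib

def hash_pii(value):
    """Normalize (trim, lowercase) and SHA-256 hash a PII field."""
    normalized = value.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Illustrative advanced-matching payload (em/fn/ln follow Meta's
# user_data key convention; the values are made up).
user_data = {
    "em": hash_pii("  Jane.Doe@Example.com "),  # email
    "fn": hash_pii("Jane"),                     # first name
    "ln": hash_pii("Doe"),                      # last name
}
```

Normalizing before hashing matters: "Jane.Doe@Example.com " and "jane.doe@example.com" must produce the same hash, or the platform can't match them.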

Jacob: Got it.

Got it.

And generally, it seems like more is better: if you have it, send it, to match these conversions. Is it true we can still only do this for users that are opted into ATT on iOS?

Barbara: Yes. I mean, I'm not the legal privacy expert here.

You are technically capable like of
passing this data regardless of consent.

'cause this is your first party data.

Whether or not you know you are
legally able to do that will depend

on your industry and your market.

So, you know, talk to your privacy lawyer.

Jacob: Got it.

Got it.

And it's, there's legal and
then there's Apple guidelines

and Apple laws, which, yes.

And then, so, okay, so yeah, we won't,
we won't get into the, all of that.

Uh, but, but generally not just
sending the conversion event.

Also including, uh, some, some
identifiers for these users if we can.

Is usually helpful.

When we think about lift or impact from that, I don't know, is there any quantifiable lift you've seen from, uh, you know, sending PII for someone who didn't previously?

Barbara: yeah,

Jacob: Um,

Barbara: I've seen huge differences, like from water to wine, both in performance but also in attribution within, you know, like the ad platform. It's, um, mobile journeys that cross devices. And that is the case with a lot of mobile app installs; maybe it doesn't cross, like, desktop to mobile, that happens a lot less nowadays, but you have a cross-browser journey that's happening.

You know, they're going from clicking
on an in-app browser to going to the

app store, to then going to your app.

Like there's a lot of hurdles going
through where the tools are losing the,

the information on who this user is.

So when you're able to offer data points that enable these tools to connect the dots, you're actually able to see the conversions in the ad platform of people you weren't able to see before.

And alongside that meta or whatever
platform you're using is also being

able to see who these converters
are and they're therefore able

to target the right people.

And that can make a huge
difference in campaigns.

Jacob: Got it.

And this was maybe less critical in the IDFA days, when we had that as something to match.

But now we're, we're always just
searching for better attribution,

better matching of users.

And so that's kind of a piece of why
this is, this is even more helpful today.

Barbara: for sure.

I think, like, even in the IDFA days, you still had a lot of gaps.

A very common user journey was, you
know, to see an ad then close, you

know, the Facebook app, then open
the app store and then search for

the app whose ad you've just seen.

That was already a common user journey then, and I've read recently that, like, Gen Z or something are even more likely to search instead of clicking through and downloading.

Again, like, you know, this is a journey.

It comes back to how fast you are sending these events, because for an install event specifically, even before IDFA there were ways, with fingerprinting, to be able to tell, like, this is the same user if you're sending the event immediately after.

Jacob: Yep.

And Apple says fingerprinting isn't allowed anymore, but all these networks are still doing fingerprinting, just calling it probabilistic measurement.

And so there's this whole gray area of what is allowed, yes.
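To make the fingerprinting idea concrete: a probabilistic match is essentially a coarse device signature plus a tight time window, which is why sending the install event fast matters so much. This is a toy sketch, not any network's actual algorithm; the IP/user-agent signature and the 30-minute window are illustrative assumptions.

```python
import hashlib
from datetime import datetime, timedelta

def fingerprint(ip: str, user_agent: str) -> str:
    """Coarse device signature from IP + user agent (toy illustration)."""
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def probably_same_user(click: dict, install: dict, window_minutes: int = 30) -> bool:
    """Match a click to an install when the signatures agree and the install
    fired shortly after the click -- the shorter the gap, the more
    trustworthy the match, which is why event speed matters."""
    same_device = fingerprint(click["ip"], click["ua"]) == fingerprint(install["ip"], install["ua"])
    gap = install["ts"] - click["ts"]
    return same_device and timedelta(0) <= gap <= timedelta(minutes=window_minutes)

# Hypothetical click and install on the same device, four minutes apart.
click = {"ip": "203.0.113.7", "ua": "Mozilla/5.0 (iPhone)", "ts": datetime(2025, 1, 1, 12, 0)}
install = {"ip": "203.0.113.7", "ua": "Mozilla/5.0 (iPhone)", "ts": datetime(2025, 1, 1, 12, 4)}
print(probably_same_user(click, install))  # True: a fast install on the same IP/UA matches
```

The same signature two hours later would fall outside the window, which is the whole point of "send it fast": confidence in a probabilistic match decays with time.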

Um, okay, cool.

This is.

Barbara: I'm a big believer that, yeah, ATT will probably change. I think, like, yeah, the EU is looking into it. You know, I have a lot of thoughts on this.

This could be a whole other podcast, so.

Jacob: Okay, we'll, we'll write it
down and, and if it changes, we'll

say Barbara predicted it first.

Uh, and I'll be, I'll be, I'll be really
curious to, to see what actually happens.

But I, I, I do agree.

Changes are probably coming.

Who, who knows what they really
are, but we, we will see.

We will see.

Um, yeah.

But this was.

I, I think these, these five
kind of golden rules are still

helpful, uh, and super valuable.

Can, can we get specific for a minute?

I think for subscription apps, Meta is generally still king and where most apps' budgets go. Maybe there's, you know, some Apple Search Ads, but the majority is Meta. For the majority of new apps getting started, you have to figure out how to make Meta work if you really wanna scale.

So let's say you're a small or medium-sized app, and you're running mostly or only on Meta.

What, what should your, your tracking
and conversion setup look like?

Barbara: I, I think that we covered like some good steps for how it should be set up.

I think like, you know, firing an event fast, not firing too many events, making sure to, you know, have some type of value so they're able to quantify how much users are worth, and if you can, you know, have server-side tracking and then also pass some PII on that server side, so you're able to improve attribution.
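These steps can be sketched as a single server-side event. The shape below follows Meta's Conversions API (PII like email goes in `user_data` under keys such as `em`, hashed with SHA-256 after normalization, and value/currency go in `custom_data`); the pixel ID, token, and the example email are hypothetical, and this is a minimal illustration, not a production integration.

```python
import hashlib
import json
import time

def sha256_normalized(value: str) -> str:
    """Meta expects PII hashed with SHA-256 after normalization
    (trimmed, lowercased) -- never send raw emails or phone numbers."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

def build_capi_event(email: str, value: float, currency: str = "USD") -> dict:
    """A minimal server-side Purchase event in the shape Meta's
    Conversions API accepts; values here are illustrative."""
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),  # fire as soon as possible after the action
        "action_source": "app",
        "user_data": {"em": [sha256_normalized(email)]},  # hashed PII improves matching
        "custom_data": {"value": value, "currency": currency},
    }

event = build_capi_event("Jane.Doe@Example.com ", 9.99)
print(json.dumps(event, indent=2))
# This payload would then be POSTed to
# https://graph.facebook.com/{API_VERSION}/{PIXEL_ID}/events with an access token.
```

Sending the hashed email alongside the event is what lets the platform reconnect a cross-browser journey it would otherwise lose, and the explicit value is what lets it quantify how much each user is worth.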

I think sometimes, you know, especially
like small apps, I mean, not just apps,

all types of businesses, they maybe
try to add too many channels too fast.

I think if you're seeing, like, you know, Meta is working for you, definitely put in the work on getting tracking that works.

And then on the other hand, make sure that
you are really producing good creatives.

I think right now, like, those are the two biggest levers for Meta.

Jacob: Yeah.

And the foundation is the conversion data, the signal, and then that allows you to actually start testing out different creatives, trusting the results you're seeing back, and really start optimizing and figuring out what works for you.

Barbara: Well, I wouldn't, I don't know how I would approach trusting the numbers.

Jacob: Okay.

Barbara: You know, like, my take is the events that you're seeing on Meta, the conversions you're seeing on Meta, they're not for you, they're for Meta.

The point of getting these conversions is so Meta is able to target the right users.

And there's obviously, you know, like, what you're seeing on Meta is just the fact that someone viewed or clicked on an ad and then went on to perform the action.

It doesn't mean that Meta drove that.

And that is the case for, you know, every ad platform. What you're seeing is what is being technically measured.

You're not necessarily able to, you know, connect one to the other. If you want to really assess the impact of Meta, then you're better off, you know, running an incrementality test and analyzing that.
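The arithmetic behind an incrementality test is simple: compare the conversion rate of an exposed group against a holdout that saw no ads, and treat only the difference as what the ads drove. The numbers below are a hypothetical holdout test, not real results.

```python
def incremental_lift(test_users: int, test_conversions: int,
                     control_users: int, control_conversions: int) -> dict:
    """Compare conversion rates between an exposed (test) group and a
    holdout (control) group. The lift is what the ads actually drove,
    as opposed to what the platform claims credit for."""
    cr_test = test_conversions / test_users
    cr_control = control_conversions / control_users
    incremental_rate = cr_test - cr_control
    return {
        "cr_test": cr_test,
        "cr_control": cr_control,
        "relative_lift": incremental_rate / cr_control,     # e.g. 0.25 = 25% more conversions
        "incremental_conversions": incremental_rate * test_users,
    }

# Hypothetical test: 10,000 users saw ads and 500 converted;
# a 10,000-user holdout saw no ads and 400 converted anyway.
result = incremental_lift(10_000, 500, 10_000, 400)
print(result["relative_lift"])            # ~0.25: ads drove roughly 25% more conversions
print(result["incremental_conversions"])  # ~100 conversions the ads actually caused
```

Note that the platform would happily report all 500 conversions as attributed; the holdout shows 400 would have happened anyway, which is exactly the measurement-versus-optimization distinction discussed here.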

Jacob: Yeah.

Do you wanna say more on that?

I, I think people conflate measurement and optimization.

What, what's, how should
we be thinking differently?

And, and then maybe you can
also say, uh, should everyone

be doing incrementality tests?

Barbara: I try to, and I think that's a very important point, because I think, you know, a lot of the time people think reporting and optimization have to be the same dataset. But those have very different use cases, and because of that, the data points that you're using should also be different.

You know, with the data that you're sending to the ad platform, the goal there is to get the platform to optimize.

When you're looking into how do I assess where to allocate my budget, you shouldn't necessarily be looking at what's on the ad platform.

You should be looking at a more holistic form of measurement.

You know, if you're a small company, everything you do on paid should be an incrementality test.

Every test that you do, you
need to be able to clearly

visualize the line going up.

If the line is not going up and
it's not clear it has worked,

then you know, focus elsewhere.

Jacob: Yeah, and I think it's maybe commonly lost, where we say, oh, we're getting these great results in Facebook.

It's like, okay, well, why aren't you seeing more revenue then?

For smaller and medium-sized companies, incrementality is pretty obvious, right?

It's not like you have this huge brand presence, it's not like you have all this organic traffic coming in. No one knows about you, so for any ads you run, you better be able to see results.

Barbara: Yeah.

And then, you know, sometimes people
think it's a measurement problem when

it's actually a strategy problem.

You know, if you're running all of
these channels individually with

small budgets and you have no idea
what's working, it's not a measurement

issue, that's a strategy issue.

Like, you shouldn't really be doing that.

Jacob: Yep.

Simplify.

Simplify.

Barbara: Simplify.

Jacob: Yep.

Yep.

Um, that makes a lot of sense.

Uh, well this was, this
was awesome, Barbara.

I, I learned so much.

Like it's fun for me to kind of dig in
to, to fill the gaps in my knowledge

where I don't know everything.

So it was really fun to
go, go through all this.

I'm sure everybody else will, will
find this super valuable as well.

Uh, do you wanna, um, maybe, maybe
shout out a few places people can,

can find you and, and follow you
and read more and, and anything

you're, you're working on right now?

Barbara: Yeah, for sure.

So one of the main things that I'm working on right now is a productized service that I call Fix My Tracking, where I help, yeah, audit and troubleshoot ad platform conversion tracking for companies, for mobile apps.

That usually means, you know, looking into the Meta SDK, looking at how CAPI was set up, looking at how AppsFlyer, Adjust, or Singular was configured, and then giving advertisers, like, peace of mind that the events are working as they should, they're attributing as they should, and they can continue to use them for scaling their campaigns.

Apart from that, I, I write a
lot about marketing and data.

I have a newsletter called the Zero One Newsletter, where I publish roughly one article a month on all things measurement, analytics, attribution, conversion tracking, ROAS, you name it.

And I also, yeah, I post on LinkedIn, uh, you can find me there, actively, I would say.

Yeah, these are the main places.

Jacob: Amazing.

Yeah.

And, and I can vouch that Barbara has an amazing newsletter, super in-depth and very kind of tactical for figuring out how you scale.

And if anything we talked through today you didn't understand, or you want to dig in more, reach out to Barbara and she'll be, she'll be happy to help, uh, to work with you.

Jacob: Unless she's too busy, which, which she may be, but she'll, she'll get to you.

Yeah.

Uh, okay.

Jacob: Right.

Right.

Okay.

Well, well, this was awesome, Barbara.

Uh, really appreciate
you, you joining us today.

Barbara: Thanks, thanks.

Jacob: Thanks.

Barbara: Happy to be here.

Jacob: Bye.

Thanks for listening.

Hope you enjoyed.

Please go to pricepowerpodcast.com

to see all the episodes.

Go to Spotify and YouTube and
give us a subscribe and follow

so you don't miss any episodes.

Alright, talk to you next time.

