Michal: It just doesn't really matter to test, I don't know, let's say $50 versus $48 versus $52; those differences are super small. So your first prices should be something like $40 versus $60 versus $80, some big differences.
So we actually tested the length, and we found out that in our case there's almost no difference in trial conversion rate between, let's say, a three-day and a seven-day trial.
Jacob: Hey Michal. Super excited to have you here on the podcast. You've done so much research on price testing and written an amazing blog post for us. I'm really excited to dive into it.
Michal: Thanks for having me.
Jacob: So, something I was really curious about: you talked about looking at both short-term revenue and a 13-month revenue, or 13-month LTV, prediction when evaluating price tests. How do you blend these signals when deciding whether to actually ship a variant from an experiment?
Michal: Yeah. So for price tests, we figured out that we just can't focus only on new revenue, as we usually did for other tests like paywall design tests. What we've seen, particularly when we've increased prices, is lower renewal rates for monthly plans and some signals of lower renewal rates for the yearly plans. So we wanted to make sure that by shipping a new set of prices we were not harming the long-term revenue: year one, year two, et cetera.
So we were thinking about how to evaluate it early enough that we don't need to wait a few months or a year every time we do a price test; that wouldn't be very convenient.
So we figured out what we could do instead: a prediction for what we call month 13, which essentially gives you the full revenue for the whole first year plus a glimpse, a peek, at what the year-two revenue is going to be. So what we've done is take the new revenue we get in day one, in the first days, and then sum it with the 13-month new revenue projection. What we get is essentially a sort of long-term revenue indication. And that's how we then decide on the variant: whether we ship it, whether we roll out the testing variant or not.
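As a rough sketch, the blended signal Michal describes, immediate new revenue plus the 13-month projection, could be computed like this. All names and figures here are illustrative placeholders, not Mojo's actual model or numbers:

```python
# Sketch of the "long-term revenue indication": revenue observed in the
# first days plus a 13-month new-revenue projection for the same cohort.
# All figures are made up for illustration.

def long_term_indicator(day1_new_revenue: float,
                        projected_13mo_revenue: float) -> float:
    """Blend short-term revenue with the 13-month projection."""
    return day1_new_revenue + projected_13mo_revenue

baseline = long_term_indicator(10_000, 55_000)  # 65,000
variant = long_term_indicator(12_000, 51_000)   # 63,000

# The variant wins on short-term revenue (12k vs 10k) but loses on the
# blended signal, so it would not be shipped.
ship_variant = variant > baseline
```

The point of the blend is exactly the case shown: a price increase that looks great on day one can still lose once projected renewals are counted.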
Jacob: Do you have this calculation automated, set up in a data warehouse or a dashboard, or is it manual, something you're doing each time?
Michal: It was sort of a report template, which we then reused for the price tests. We pulled the data, usually manually, from Amplitude or RevenueCat for the renewal rates.
And for the calculation of the 13-month revenue projection, the first input, which was super key, was a proxy for the renewal rate. What we used as a proxy was the seven-day cancellation rate, because we found in the data that a lot, I think 20 to 30%, of subscription cancellations happen in the early days, let's say the first seven or ten days. So in the first seven or ten days you already kind of know what the renewal rate might be for a certain cohort.
We measure that seven-day cancellation rate for each variant, each price point, and we then use it to predict what the renewal rate will be for year one for the yearly plans, or for month one for the monthly plans. So if we know that, let's say, the seven-day cancellation rate for the testing variant is 20% higher than the baseline, we know that the month-one renewal rate will likely drop by roughly that same 20%. Using that insight, we can calculate all the other renewal rates, and with that we can predict the total revenue for the monthly subscription over, let's say, the whole next 12 months.
Very similarly for the yearly subscriptions. It's important, if you have enough data of course, to calculate it separately for monthly and yearly plans, so you have a seven-day cancellation rate for the monthly plans and a seven-day cancellation rate for the yearly plans. If you have that, and enough confidence in the data, then you're able to quite confidently predict what the renewal rate is going to be. And then you can again use that simple calculation to calculate the predicted renewal revenue for that particular variant.
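The proxy logic described above, scaling a baseline renewal rate by the relative change in seven-day cancellation, might look roughly like this. The function and all numbers are illustrative, not Mojo's actual figures:

```python
# Sketch: use the 7-day cancellation rate as a proxy for renewal-rate
# changes between a price-test variant and the baseline.
# All numbers below are made up for illustration.

def predicted_renewal_rate(baseline_renewal: float,
                           baseline_7d_cancel: float,
                           variant_7d_cancel: float) -> float:
    """If the variant's 7-day cancellation rate is X% higher than the
    baseline's, assume the renewal rate drops by roughly the same
    relative amount (the heuristic described in the interview)."""
    relative_change = (variant_7d_cancel - baseline_7d_cancel) / baseline_7d_cancel
    return baseline_renewal * (1 - relative_change)

# Baseline monthly plan: 12% of subscribers cancel in the first 7 days,
# and month-1 renewal is historically 70%. The test variant shows a
# 7-day cancellation rate 20% higher (14.4%).
rate = predicted_renewal_rate(0.70, 0.12, 0.144)
# 20% higher early cancellation -> predict roughly 20% lower renewal.
```

From that scaled renewal rate, the rest of the 12-month revenue projection follows by compounding it month over month.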
Jacob: Yeah, that makes a lot of sense. It clearly makes sense for annual, where there's just the one term. But you found that for monthly as well, even just from the first period, the monthly renewal patterns follow the same kind of curve, and even from those first seven days of monthly cancellations, based on your pre-existing data,
Michal: Yeah.
Jacob: you can model out what the curve will be in terms of renewals.
Michal: What we did for the monthly plans: we essentially used the seven-day cancellation rate to calculate the month-one renewal. And then for month two, month three, month four, month five, we kept the curve in terms of percentage, let's say month three as a share of month two. So the curve is the same, but the starting point is different, because early in
Jacob: Hmm.
Michal: the subscription there were fewer or more people canceling, and then the curve stayed the same.
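The curve-scaling idea, shifting the starting point while keeping the month-to-month shape, can be sketched like this. The baseline retention curve here is hypothetical:

```python
# Sketch: keep the shape of the month-over-month renewal curve but
# shift its starting point based on the predicted month-1 renewal.
# The baseline curve below is hypothetical example data.

baseline_curve = [0.70, 0.60, 0.55, 0.52, 0.50]  # month 1..5 renewal rates

def scaled_curve(predicted_month1: float, baseline: list[float]) -> list[float]:
    """Scale every point by the ratio of the predicted month-1 renewal
    to the baseline month-1 renewal, preserving the curve's shape."""
    ratio = predicted_month1 / baseline[0]
    return [round(r * ratio, 4) for r in baseline]

# If the price test predicts month-1 renewal of 0.56 instead of 0.70:
curve = scaled_curve(0.56, baseline_curve)
```

Every later month moves down by the same 20% factor as month one, which is the "same curve, different starting point" behavior described above.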
But since at Mojo we've got the majority of subscriptions on the yearly plans, what was super essential for us was the yearly plan. So we had it a bit easier there, because as you said, it's essentially just one number, one renewal rate. But
Jacob: Right, right.
And so.
Michal: the same approach is what we applied for monthly.
Jacob: Right.
And so potentially, if you have a large number of monthly users, you could get a little more sophisticated, but it's not the priority for you, since the majority of the accuracy comes from projecting annual plans. Correct?
Michal: Yeah, exactly.
Jacob: That's what you're more concerned about. Have you...
Michal: Sorry?
Jacob: Yeah, I was just going to ask whether you found that even with price increases or decreases on the annual plan, the cancellation pattern stays the same, and so those seven-day cancellation rates remain accurate. It's just, like you say, a multiplier, a little up or a little down, but the ratios remain the same.
Michal: Hmm. Yeah, we checked, let's say, six or nine or twelve months after we rolled out the new, higher or lower, price. We checked the actual renewal rate and compared it with what the model, the prediction, said. Luckily for us, we could see the pattern held. It maybe wasn't exactly the same, but when we'd predicted that the renewal rate was going to go down by, let's say, 20%, it really did go down by roughly that: something between 5 and 25%.
So we've seen that. And regarding whether the seven-day cancellation rate stayed reliable and relevant: it can drift a bit lower or higher, but I think there's still a pretty big chunk of cancellations happening in the first seven to ten days, actually. We still see that, which is at least a good signal that we can still use that proxy for predictions of renewal rates in the future.
Jacob: Yeah. So in terms of using cancellation rate as that predictor: for new apps launching that don't have a full year's worth of annual renewal data, do you have any easy multiplier for them? If you receive X cancellations in the first seven days, multiply that by four and you'll probably get a good renewal rate? Of course it's not exact,
Michal: Yeah.
Yeah.
Jacob: every app has slightly different renewal rates, and it's not perfect, but just for people to get a range.
Michal: Well, I think for new apps, what's good is to look for some benchmarks. There are a couple of benchmarks available, and it's quite easy to search for your app or your vertical and look at retention. Yearly retention is probably somewhere between 20 and 30%, or 15 and 25%; it really depends on the market, the vertical, various factors. But if you don't yet have any history in your renewal rates, I think the benchmarks are probably the best resource for that. And then you can use that as a prediction of what your app's will be.
Jacob: So you really do need a year's worth of data before you can start looking at cancellation rates to predict anything?
Michal: No, no. If you don't have the year of data, you still have the seven-day cancellation rate; that's something you will certainly have. And then for the yearly renewal rate, look for benchmarks for your vertical, for your market, and find what the benchmarks are saying. Then use that for the calculation. That should be good enough.
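For a new app without a year of history, the benchmark-based yearly projection Michal describes might be wired together like this. The benchmark range and the price are examples only, not recommendations:

```python
# Sketch: project yearly-plan revenue per subscriber for a new app
# using an industry benchmark renewal rate in place of historical data.
# The price and benchmark values are illustrative.

def yearly_revenue_per_user(price: float,
                            benchmark_yearly_renewal: float) -> float:
    """Year-1 payment plus the expected year-2 renewal payment
    (the '13-month style' view of long-term value)."""
    return price + price * benchmark_yearly_renewal

# Benchmarks for yearly retention often land somewhere around 15-30%
# depending on vertical; without history, bracket both ends.
low = yearly_revenue_per_user(59.99, 0.15)
high = yearly_revenue_per_user(59.99, 0.30)
```

Running the projection with both ends of the benchmark range gives a bracket rather than a single guess, which is about the best a pre-launch app can do.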
Jacob: Okay, cool. Well, I think this is super helpful. This is always a tricky thing apps struggle with: predicting that longer-term revenue, that 13-month revenue. Most people just focus on short-term revenue, but to really understand what's going to be valuable, and not shoot yourself in the foot, you need to be projecting that out. So I think people will find this useful.
Something else I wanted to ask about: you've emphasized not blindly trusting Apple's automatic price tiers and instead studying regional prices individually. How do you go about this, and what are some of the biggest mistakes you've seen teams make with localization?
Michal: Yeah, I think that one is really important for essentially any app which wants to grow its revenue outside the US, or outside its original country. Because, just for those who don't know, Apple essentially calculates the local prices just by using the exchange rate. And that's pretty much it. They don't really use any purchasing-power insights, anything like that. So it's super simple, and in many cases it just produces the wrong prices, which will not work for your app, or for any app.
So I suggest you look at your key markets. Of course, don't start by trying to localize every currency or every market; focus first on, let's say, your top five markets. And then do some research. What we've done at Mojo: first, we research what prices the competition has in each country. So that's one input.
The other advice is to look into conversion rates and your app data, in terms of the purchase funnel: from new users and existing users, from the paywall to starting a trial, to converting from the trial. Look at the data for the countries you're interested in, compare, and see if the data suggests something. For example, if one country has a much lower conversion rate than the others for, let's say, a similar price point, it might suggest that the price point isn't the best one for that country. So your data check can also surface good insights, and by using those you can figure out what the variants for the test could be, and then start executing price tests in the countries you're interested in.
Jacob: And so, if we see one country with really low conversion rates, maybe we should test lowering prices. Do you think the opposite holds true? If you see a country or group of countries with a really high conversion rate, maybe that's a signal we can raise prices? Or do you go, well, we're getting a good conversion rate, so raising them is a risk? Or do you just say, we'll test it and see?
Michal: Yeah, I think it depends also on what the competition has. Let's say our conversion is well above average, super high, but the competition actually has the same prices as ours. Then going, I don't know, 20%, 30%, or 50% up in prices might not be super good. So I would also include that comparison with the rest of the market.
One thing maybe we forgot to mention: also look at your internal costs. For example, if you use AI, what are the costs you need to take into account, which will translate into the price. You can also take into account the UA cost in that particular country. That also gives some indication of whether your price is actually relevant for that particular country.
But I think it's always worth trying some prices. I'd suggest re-testing prices every six months or so. There's no harm in trying even higher prices; there's pretty much no danger in just running a one- or two-week price test. So I'm always open to that. I would say, yeah, definitely do that.
Jacob: And with pricing, people aren't just making a decision on the price alone; there are so many factors that go in. I was curious how you think about the relationship between pricing, packaging (monthly versus yearly versus weekly versus trials), and the design of the paywall. How do all these things interact? How should teams prioritize what order to test in?
Michal: Yeah, it's complex. As I said, price tests are not just about the price point. As you mentioned, all those things are important; together they form the whole offer, and customers look not specifically at the price but at all those factors combined. From my experience, the prices, and by prices I don't mean only testing different price points, but also the packaging, different plans, different introductory offers, different trials: all of that is more impactful than the design, the copy, that kind of stuff. So I tend to prioritize those, because of the experience I have.
But it depends on the app. If they've already run a ton of different price tests, then switch to a different area: the content, the layout, the visuals, et cetera. But if you're just starting with paywall optimization and pricing optimization, I think it's better to prioritize the packaging and pricing, and not the paywall design itself.
And then in terms of the different components of pricing and packaging, I tend to believe it's first about looking, as we discussed before, at all your data and at the comparison with the competition. That gives some insights. If you see there's no obvious low-hanging fruit in terms of pricing, and you're roughly similar to the competition, then I would analyze which plan, yearly, monthly, or weekly, currently shows the biggest long-term value, and optimize for that particular plan to get the highest share of it possible. We can do that with a different design of the paywall, and we can also tweak the prices a little bit to get that result of increasing the share of the plan. So I would start with that one. You can still tweak the prices, and then also play with, let's say, the introductory offer or trial. That would be sort of my order.
So if you're confident the price is roughly right, then try to optimize for the right packaging in terms of yearly and monthly, then tweak prices slightly, then the trial offer, and then you can follow maybe with paywall designs and layouts and so on.
Jacob: So generally, the first step is figuring out: is this the right price? That influences everything. Then maybe: is this the right price for each region? Or do you think localization of price comes after packaging tests for your core region?
Michal: Localization essentially should still be included in step one. So in a price test, I wouldn't look at prices
Jacob: Yeah.
Okay.
Michal: globally. I would look at our key markets separately and analyze each one separately. So we might see that in the US the price looks right, but in Brazil we're off. Okay, let's fix the price in Brazil, and then follow up.
Jacob: So: look at the countries with the biggest volume. Usually for most apps it's a few, not tons. Get the price right there; for those countries, of course, the biggest country is the most important. And then once we understand the right price, we can start to figure out which package or which subscription length actually drives the most value for us, and then start to optimize more people towards that subscription.
Michal: Yeah, I completely agree: first the price, then the right packaging in terms of subscription length. And the subscription length might take you back to the price, in terms of what the ratio is between, let's say, your yearly and weekly or yearly and monthly plan. Because in my experience that ratio plays a big role in the share of your monthly subscribers and trials. So I think it's good to play with the relationship between your plans' prices. You can keep the yearly price the same and just test a different monthly price, to test a different ratio. In my experience we could see a big difference in behavior when we changed, let's say, the monthly plan price, even though the yearly plan price stayed the same.
Jacob: Yeah, that makes a lot of sense. And then after you feel confident in those, you can focus on some lower-impact changes like paywall design, where paywall design can still be impactful, but generally pricing and packaging are going to have a larger impact.
Michal: I agree with that. For me, among the paywall design and layout tests, the most impactful ones were still those which in some sense touched the plans, the packaging, the pricing. Like the one I described in your blog, where we essentially hid the monthly plan under a separate screen, so the yearly plan seemingly looked like the only plan, and we put more visual priority on the yearly plan. That actually had a big impact, shifting the share of yearly plans from, I believe, around 60% to 80%, without a big drop in conversion rate, which is great. So layouts can be super impactful.
Jacob: It's interesting. Yeah. With layout tests, at the end of the day, the layout is often influencing which plans and packages are in focus. And of course there are other types of design tests, but those are the impactful ones. I've seen similar results where you show the annual plan first, and then there's a "view all plans" button with a few other plans hidden, where the people with higher intent and higher energy will go search those other plans. So even that little bit of friction is still beneficial, because it drives a higher share of annual.
Michal: Yeah, we've seen exactly that. And those kinds of psychological tricks, like anchoring prices and then showing decreased prices, just work in that area. Another good example is using the monthly equivalent of the yearly plan as an anchor for what the plan costs. When users see that the monthly equivalent of the yearly plan is much, much lower than the monthly plan costs, it's another push towards the yearly plan. That was another successful test in driving more share for the yearly plans. Stuff like that usually works very well.
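The monthly-equivalent anchor is just the yearly price divided by twelve, shown next to the standalone monthly price. A tiny sketch with made-up prices:

```python
# Sketch: display the yearly plan's monthly-equivalent price as an
# anchor against the standalone monthly price. Prices are illustrative.

yearly_price = 59.99
monthly_price = 9.99

monthly_equivalent = yearly_price / 12            # about $5.00/mo
savings = 1 - monthly_equivalent / monthly_price  # about 50% cheaper

label = (f"${monthly_equivalent:.2f}/mo billed yearly "
         f"(save {savings:.0%} vs ${monthly_price}/mo)")
```

Framed this way, the monthly plan acts as the anchor and the yearly plan reads as the obvious deal, which is the effect described above.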
Jacob: Yeah, that's what I've seen too. So, on that note, can we talk about free trials for a bit? Free trials used to be just the standard; everyone had a free trial. Now more people are testing no free trials, or free trials only on certain products. You mentioned a lot of testing here in your blog post. What did you learn? Any takeaways, they don't have to be quick, for people using free trials?
Michal: Yeah, we've tested free trials a lot, actually, since I think it's a powerful tool, a powerful weapon you can use. But again, first I want to highlight that it's always good to test it for your own app and not take anything for granted; what actually worked for us at Mojo is no guarantee it will work for you.
For us: first, we tested offering a free trial versus not offering one. We also used it as another tool to drive more yearly subscriptions, so intentionally we had a free trial only for our yearly plan and not for the monthly plan. We tested it, and it really did push a little bit more towards the yearly plan. So that's one thing: use it for the plan you're optimizing for.
Then we also tested offering the free trial based on country, as we've seen there could be countries which value a free trial more than others. For example, in our case we found that in Latin America the free trial is, for some reason, just more popular, or more valuable, than in other countries. So we actually have a few regional exceptions in our free trial strategy, in that region in particular. And then maybe another factor is whether to offer a free trial at all.
There were a few successful tests in the community suggesting you shouldn't offer a free trial to your new users, because they have the highest intent and will convert even without a trial; they just downloaded your app and are looking forward to using it. So, the advice goes, don't offer a free trial for new users.
But for us, we haven't really seen that working. We found that the free trial is very important for the new users. When we actually removed it, we basically saw negative results, a decrease in the new revenue.
And then another factor is the length of the trial. The typical one is maybe a seven-day free trial, which on one side gives the users more time to get the value from your app, and potentially you see a higher conversion rate from the trial. But on the other hand, it's not so good for UA, for user acquisition, as you need to wait longer to see if the trial converts, and the payback period also gets longer. So that's another reason not to have a long free trial.
So we actually tested the length, and we found out that in our case there's almost no difference in trial conversion rate between, let's say, a three-day and a seven-day trial. So obviously we then rolled out the three-day trial, as we've seen, okay, it's enough for the users, and it's shorter, so it's better for the business. Let's do that.
Yeah, so that's sort of my experience with free trials. Maybe one recent thing, a trend I've seen now, is a paywall that allows users to design their own free trial, where you essentially offer a couple of different free trials to the user. You can offer, say, a free seven-day trial or a free three-day trial, or a sort of paid trial, like a 30-day trial for, I don't know, five or ten dollars, and then you see how that essentially works for your users.
So that could be another interesting test. I think it's worth trying, and I've heard it can actually work very well for many apps. We're actually testing it as I speak, so I don't know yet how it's going to look, but I'm quite excited to see how it will work out.
Jacob: Yeah, that's a super interesting test. I would imagine it complicates analysis a bit more in terms of understanding; you have to wait a little bit longer, and maybe you test it for a certain period of time and then look at those cohorts
Michal: Yeah.
Jacob: in the next month or two, to understand how it actually converts.
Michal: That's what we did, yeah. It's too
Jacob: Yeah.
Michal: soon. Yep.
Jacob: And so you probably have to have enough user volume in each variant. For smaller apps it probably doesn't make sense; for medium and larger apps you'll have enough data, and then the impact can outweigh the analysis complications. But then you're also complicating UA, right? The signals to UA are a little different.
Michal: So I would say the results need to be really strong, and the winner will need to be really meaningful, to be able to, as you say, justify the negative effects on the UA. Because there's the uncertainty of seeing only after 30 days how the users who opt for that variant will convert into the full plan subscription. But yeah, as you also mentioned, there's an important
Jacob: That makes sense.
Michal: factor, and that's that for all price tests, actually for all tests, you need to have enough data. That's actually one of the biggest mistakes I've seen in teams. They don't have enough volume of users, and in particular of trials and purchases, and a lot of the time they evaluated on just a low hundred of converting users. Then you get a lot of false positives in the results, and end up not making the right decisions.
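One quick way to sanity-check whether a test has enough conversions is a standard two-proportion z-test; with only a low hundred conversions per arm, even sizable-looking differences usually aren't significant. A sketch with illustrative numbers, using only the standard library:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Roughly 100-120 conversions per arm: 6.0% vs 5.0% conversion looks
# like a 20% lift, but |z| < 1.96, so it is not significant at the 95%
# level; at this volume, apparent winners are often false positives.
z = two_proportion_z(120, 2000, 100, 2000)
```

The same arms with ten times the volume would clear 1.96 easily, which is why volume, not just the observed lift, decides whether a result is trustworthy.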
Jacob: Something I've also found helpful is, after you run that test, go look at those cohorts three months later and see if the accuracy is still valid, if your predictions are still valid. Look at those new users joining: are those LTVs still what you expected? Sometimes monthly renewal patterns can change; sometimes annual cancellation rates can change. So don't just assume that because it looked like a winner on day 14 or day 30, it still will three or six months later. You should always be constantly measuring and testing those assumptions.
Michal: Totally. And what I would maybe add is to look at the source of the traffic for your downloads. For example, you can see quite big differences between the traffic we drive from Meta, from Instagram, and the traffic from TikTok. The users, the demographics, are probably different, and so is the value, the intent, they have. So if, over a couple of months, your traffic distribution changes a lot, that can also influence and translate into different performance of your paywall. That's also important to take into account.
Jacob: Yeah. If you run a price test once and you see weird results, maybe take a look at your traffic mix to understand: did the acquisition sources change versus normal? Is that influencing your results? So again, nothing is in a vacuum; you have to take multiple factors into account to truly assess the success and whether your price test is going to be repeatable, which is the ultimate long-term goal.
Michal: Totally. Yeah.
Jacob: One last thing here: you mentioned that Mojo raised yearly prices by like 50% in markets like the US and Germany with minimal impact on conversion. Did you see anything in the data initially that gave you the confidence to push that hard, to raise the prices by that much? Or was it just, hey, we have good conversion rates, let's see if we can drive these prices up further? What gave you the confidence to do that?
Michal: Yeah, well, what we did was look at our data and see that the conversion looked pretty good. But there were a few other things behind why we did it. Number one was that the original, baseline prices had been set two or three years before. The prices hadn't changed for a long time, so they might really be outdated.
The second thing was, when you looked at the main competitors, we saw that, okay, we belong among the cheaper apps, and there might be potential there. So those were two important inputs, next to the good conversion rates we had, which, when we compared against benchmarks, really looked above average.
Then we started with the US, which
was kind of the number one market for us.
Actually, we didn't test just 50%; we
even tested like a hundred percent up.
And that price point actually performed
even better than the 50% for new revenue.
But then we saw a lot more cancellations,
like in the seven-day cancellation rate.
So that price point was the classic
example of a price point where the
short-term impact would be really
good, but the long-term impact would
be really bad for the business.
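The trade-off Michal is describing can be made concrete with a back-of-the-envelope projection. Every number here is hypothetical (these are not Mojo's figures): a doubled price can win on first-month revenue yet lose over a 13-month horizon once higher cancellation and lower renewal rates are priced in.

```python
# Hypothetical comparison of two price variants: higher price wins short term,
# loses over 13 months once cancellations and renewals are accounted for.

def projected_revenue_per_install(price, trial_cv, cancel_7d, renewal, months=13):
    """Expected revenue per install over `months` monthly billing cycles."""
    payers = trial_cv * (1 - cancel_7d)  # trials that convert and survive week one
    total = 0.0
    retained = payers
    for _ in range(months):
        total += retained * price
        retained *= renewal  # fraction renewing each subsequent month
    return total

# Made-up rates: the doubled price converts almost as well but cancels/churns more.
baseline = projected_revenue_per_install(price=50, trial_cv=0.06, cancel_7d=0.10, renewal=0.90)
doubled = projected_revenue_per_install(price=100, trial_cv=0.055, cancel_7d=0.25, renewal=0.80)

print(f"13-month revenue per install: baseline {baseline:.2f} vs doubled {doubled:.2f}")
```

With these made-up inputs the doubled price produces more revenue in month one but less over the full horizon, which is exactly the "good short term, bad long term" pattern that made Mojo reject the 100% increase.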
But that learning and the results
from the US gave us more confidence
to essentially do similar tests in
other key markets like Germany.
We took the main European markets, like
Germany, France, and Spain, clustered
them together, and raised prices there.
Jacob: Really, you were testing prices
that were even higher, a hundred percent
increase, but you didn't see success
with those.
So test multiple price increases;
you'll never know unless you test them.
And that's really, I think, the whole
message of all of this: you have to try,
analyze the data, and see what happens.
Michal: And I would even add: for your
first prices, I would definitely go
with big differences.
It just doesn't really matter to test,
I don't know, let's say 50 versus 48
versus 52 dollars; those differences
are super small.
So your first prices should be
something like 40 versus 60 versus
80, some big differences.
That gives you a rough idea of
where the ideal pricing is.
And then your second, follow-up test
can be a little more detailed.
Then you can really test whether it
should be, I don't know, differences
of a couple of dollars.
And that's okay: you first should
have a really broad test, and if you
don't even know whether to go lower
or higher, you can test one price,
let's say, 20 bucks lower and one 20
bucks higher, and then you will see
roughly where the ideal price is.
Then you can run a follow-up test
to narrow it down to a specific one.
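The coarse-to-fine approach Michal outlines can be sketched in a few lines. The revenue-per-visitor readouts and the $5 follow-up step are hypothetical; in a real test the winner would have to clear a significance check, not just be the largest number.

```python
# Coarse-to-fine price testing: round 1 uses wide spacing ($40/$60/$80) to
# locate the rough optimum; round 2 narrows in around the winner.
# Readout values are made-up illustration numbers.

def pick_winner(results):
    """results: {price: revenue_per_visitor}; return the best-performing price."""
    return max(results, key=results.get)

# Round 1: big differences -- gaps like 48/50/52 would be indistinguishable.
round1 = {40: 1.9, 60: 2.4, 80: 2.1}  # hypothetical test readouts
coarse_best = pick_winner(round1)

# Round 2: follow-up test with finer steps around the coarse winner.
step = 5
round2_prices = [coarse_best - step, coarse_best, coarse_best + step]
print(round2_prices)  # [55, 60, 65]
```

The key design point matches Michal's advice: the first round only needs to tell you which region of the price axis to explore, so small deltas there are wasted traffic.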
Jacob: Well, this was awesome, Michal.
I really appreciate you digging into all
this and sharing your knowledge.
Any last parting wisdom or parting
thoughts you'd like to leave
people with for price testing?
Michal: Something that you mentioned:
you shouldn't be afraid of testing
different price points.
What I found out is that a lot of
founders and app owners don't want to
change the price because they think:
what if the users find out, how
will they react?
They'll leave negative reviews.
Usually those are just false worries;
usually nothing much happens.
You're essentially just showing a new
price to a different cohort, and the
other cohorts won't know.
And it's sort of a short-term decision.
It's usually just benefits
and almost no harm.
So I would just suggest: don't be
afraid, experiment with the prices, and
revisit that every couple of months,
because conditions change quickly.
You should always stay relevant, and
the pricing should stay up to date.
Jacob: Don't be afraid, and analyze
the data well. I think that's great advice.
We'll link to your blog post
in the show notes.
This was awesome; I really
appreciate you taking the time.
Anything else you'd like
to promote, Michal?
Michal: Hmm, not really.
So yeah, if you link my article, I
think it goes maybe a bit deeper into
certain topics and also shows some
different ideas, so listeners can
find out a few more things there.
But it was actually super good
to chat with you, Jacob.
Really enjoyed it.
Jacob: Alright.
Thanks, Michal, for joining.
Really appreciate it.
Michal: Thanks for having me, Jacob.
Jacob: Bye.
Speaker: Thanks for listening;
hope you enjoyed it.
Please go to pricepowerpodcast.com
to see all the episodes.
Go to Spotify and YouTube and
give us a subscribe and follow
so you don't miss any episodes.
Alright, talk to you next time.