Sunday, March 21, 2010

On river bet sizing with the nuts


Here is the graph produced by the simulations I mentioned in the last post. I'm using a very simplified model in these simulations, and I'll explain some of its problems at the end of the post, but for the time being we'll just go with it. To understand the graph and what it says, I should explain what the program does and what I'm trying to show:

When we have the nuts on the river, extracting value is the only thing that should be on our mind. How do we make the most money? Often, the answer is "bet big." In fact, that's usually what books suggest, and while the correct decision always hinges on board texture, our opponent's range and varying degrees of level thinking, what I want to get across is this: if your opponent is likely to have a medium-strength hand and you think he'll on average call a bet of X with that hand, then you should bet less than X.

Let's give it a number so we don't have that frightening variable name 'X' looming over us: say that our opponent will on average call a bet of $100, and then compare the outcome of betting somewhat less with the outcome of betting exactly $100. What we mean by "on average" here is important. It means that the largest bet our opponent is willing to call lies somewhere around $100. If we bet over whatever his breaking point is, he's just going to muck his hand because he's not willing to pay that much to see a showdown, and if we go below it, we're guaranteed a call. So far, so good, yes?

So when we say that he'll on average call $100, we mean that his maximum calling amount lies somewhere in a range of perhaps $80 to $120. Put differently, he'll always call $79 but never $121; in between those amounts, we're not sure exactly what he'll do. But then our bet size should be much closer to $80 than to $120. And it's relatively easy to explain even without a graph: if his breaking point is spread evenly around $100, our expected value for betting $100 is $50. Half the time he'll fold and half the time he'll call, so we'll make $50 on average. Right?

But our expected value for betting $79 is $79, because he'll always call. So we're doing much better betting smaller.
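To make that arithmetic concrete, here is a minimal Monte Carlo sketch (in Python; this is not the original simulation code) under the same assumptions: we always win when called, and the opponent's breaking point is uniform between $80 and $120. The function name, trial count, and endpoints are just illustrative choices.

```python
import random

def avg_profit(bet, trials=200_000, low=80, high=120):
    """Average profit from betting `bet` with the nuts against an
    opponent whose calling threshold is uniform on [low, high].
    He calls whenever the bet is at or below his threshold."""
    total = 0.0
    for _ in range(trials):
        threshold = random.uniform(low, high)
        if bet <= threshold:
            total += bet  # he calls and the bet wins us `bet` extra
        # otherwise he folds and the bet earns nothing extra
    return total / trials

print(avg_profit(100))  # roughly $50: he calls about half the time
print(avg_profit(79))   # exactly $79: he always calls
```

Running it gives roughly $50 for the $100 bet and exactly $79 for the $79 bet, matching the argument above.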


The horizontal axis is the bet size. The vertical axis is the average profit.

a and b denote the min and max of our opponent's calling-amount range, and x is the average amount he'll call. The straight line rising to the left of a is the money we make when we bet less than his minimum: he always calls, so our profit grows linearly with the bet size until we reach a. Making the maximum bet, b, is clearly the inferior option.

The easiest way to understand the conclusion is perhaps to consider a hypothetical opponent whose breaking point we happen to know exactly. Let's say that Bob has his breaking point at $80. That means that if we bet $81, we make absolutely nothing, but if we bet $79, we win $79. That's what this graph reflects.
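If you want to reproduce the shape of the curve without running a full simulation, the expected profit has a simple closed form under the uniform assumption. The sketch below sweeps over bet sizes using it; a = $80 and b = $120 are the same illustrative numbers as above.

```python
# Expected profit for a bet when the opponent's calling threshold is
# uniform on [a, b] and we always win when called:
#   bet                        if bet <= a   (always called)
#   bet * (b - bet) / (b - a)  if a < bet <= b
#   0                          if bet > b    (never called)
a, b = 80, 120

def expected_profit(bet):
    if bet <= a:
        return float(bet)
    if bet <= b:
        return bet * (b - bet) / (b - a)
    return 0.0

for bet in range(60, 131, 10):
    print(f"bet ${bet:3d} -> average profit ${expected_profit(bet):6.2f}")
```

The printed values rise to a peak of $80 at a and then fall away toward zero at b, which is exactly the shape the graph shows and why the best bet sits near the bottom of his calling range.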

---

But, like I said, the model I used for the simulations was heavily simplified, and so let me be clear on how:
  1. I used a uniform distribution for his calling amount. In other words, his breaking point was as likely to be $85 as $115, when in reality the distribution is probably different; perhaps a normal distribution around the average? This would affect the shape of the graph, but not the conclusion.
  2. I did away completely with psychology, obviously. It's not entirely out of the question for some situations that larger bets are more likely to get looked up than smaller bets because they think a bigger bet looks suspicious.
  3. I also didn't factor in the possibility of getting raised. While simplification #2 builds a case for a bigger bet than what my model suggests, the possibility of a smaller bet inducing a raise (bluff or otherwise) should counteract that at least to some extent.
  4. Most importantly, this model targets a specific hand that our opponent has. In reality, our opponent is going to have a range of hands, some stronger and some weaker, and it's not at all as clear-cut how we should bet when his distribution of hands includes some strong ones. For instance, 90% of the time he may have middle pair top kicker and the graph above applies, but 10% of the time he has an overpair and will in fact call a much bigger bet. Now the distribution is definitely different and this will have quite an impact; in fact, we might end up with a graph with several local maxima (see the sketch after this list).
  5. I'm also assuming we have the nuts (or rather, that we'll always win when he calls). This is not a big problem with the model, though, because firstly there are plenty of river decisions where we can feel confident that the risk of our opponent having a better hand is negligible, and secondly it doesn't actually affect the conclusion: if some of his calling range beats us, that (mostly) speaks in favor of betting smaller.
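Here is a hedged sketch of point #4. The 90/10 split and the overpair's calling range ($250 to $500) are numbers I've invented purely for illustration; the point is only that mixing hand types puts more than one bump in the profit curve.

```python
def profit_vs_hand(bet, low, high):
    """Expected profit from betting `bet` against one hand type whose
    calling threshold is uniform on [low, high] (we win when called)."""
    if bet <= low:
        return float(bet)
    if bet <= high:
        return bet * (high - bet) / (high - low)
    return 0.0

def mixed_range_profit(bet):
    # 90% middle pair (calls up to somewhere in $80-$120),
    # 10% overpair (calls up to somewhere in $250-$500).
    # These proportions and ranges are made up for illustration only.
    return 0.9 * profit_vs_hand(bet, 80, 120) + 0.1 * profit_vs_hand(bet, 250, 500)

for bet in (80, 120, 200, 250, 300, 400, 500):
    print(f"bet ${bet:3d} -> average profit ${mixed_range_profit(bet):6.2f}")
```

With these invented numbers the curve has two local maxima, one at $80 and one at $250, and the small bet still comes out well ahead; a different mix could change that, which is exactly why point #4 matters.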
Despite these shortcomings of a very simple model, I think the conclusion is an important one and is often valid: a bigger bet isn't necessarily more profitable. If our opponent's likelihood of calling it goes down, we're mostly better off just betting smaller.
