Where HD gets probability absolutely wrong Topic

Posted by dahsdebater on 10/2/2018 11:24:00 AM (view original):
Posted by shoe3 on 10/1/2018 10:58:00 PM (view original):
I didn’t play HD until 2013, so happily that was not something I experienced.

He should have chosen #1, no doubt. That’s the whole point here. If the system is producing more variability than desired, then you “tighten” up the process within each possession. Engineering the game to skew back towards a favorite when it’s already been weighted to appropriately favor the favorite is stacking the deck. Would you play blackjack at a casino that handed the dealer a deck specifically loaded against you after you win 2 hands in a row?
This isn't possible. Think it through. Most of the decisions in the game are binary. There is no control process for individual events. A .500 shooter has a standard deviation per shot of .5. Period.
I think you’re misunderstanding the reference here.

Of course it’s possible to tweak the parameters that determine the probability for each possession. Every possession is a coin flip, with the coin weighted according to parameters based on the factors and variables involved. If you have more variability than desired, the question is whether you tweak the parameters weighting each flip (better), or just lazily engineer the game to change the parameters based on how the last flip(s) went (what we have).
10/2/2018 11:52 AM
Now if you’re just talking about FT shooting, which is an event based on an isolated skill (the shooter’s FT ability), you’re right, there’s no further event manipulation you can use to “tighten” anything. The odds are what they are. But the issue remains: is each “flip” for a C FT shooter going to be weighted at 67% (better), or is the system going to come up with a new weight for each shot, to try to get the player closer to that target as the game goes on (what we have)?
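To make the two options concrete, here is a rough sketch (illustrative Python only; the correction strength is invented, and none of this is the actual engine code):

```python
import random

def fixed_weight_ft(rating_pct=0.67, attempts=10, rng=None):
    """Option 1: every attempt is an independent flip at the rated percentage."""
    rng = rng or random.Random()
    return [rng.random() < rating_pct for _ in range(attempts)]

def adaptive_ft(rating_pct=0.67, attempts=10, correction=0.5, rng=None):
    """Option 2: each attempt is re-weighted toward the target based on the
    results so far -- a caricature of the "massaging" approach; correction=0.5
    is a made-up strength, not anything from the engine."""
    rng = rng or random.Random()
    made = []
    for _ in range(attempts):
        p = rating_pct
        if made:
            # drift the next flip back toward the season percentage
            p += correction * (rating_pct - sum(made) / len(made))
        made.append(rng.random() < min(max(p, 0.0), 1.0))
    return made
```

Both versions converge to 67% over a long season; the difference is that in the second, each flip depends on the flips that came before it.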
10/2/2018 12:04 PM
Posted by dahsdebater on 10/2/2018 11:22:00 AM (view original):
Posted by Trentonjoe on 10/1/2018 10:49:00 AM (view original):
Posted by shoe3 on 9/30/2018 10:23:00 PM (view original):
Observationally, I don’t think it’s just a halftime adjustment, either. I suspect it can be triggered at any point where a team outperforms expectations by x amount.
I am almost positive it doesn't start until half time. It would be a good question to ask Seble in a ticket. Personally, I don't think it's that big a deal and the theory behind it is at least reasonable.

I don't think it's super powerful because I have had tons of guys shoot 3-14 from the FT line and as a team I have shot under 50% a number of times.
My distinct recollection is that you are wrong. It was handled based on number of events, not time of the game. So if a player's shooting percentage in the game is significantly off of his expected percentage, the engine starts to "massage" the expected percentage for subsequent shots after a set number of shots taken, not after halftime. And of course the extent of that massage has to do with how many shots have been taken and how far off expectation the percentage is. I think this was also true of rebounds and maybe turnovers? And again, based on number of total rebounds in the game and number of total possessions.

Granted, these effects will get more noticeable as you get deeper into a game. The size of the adjustment is driven by the probability of the current outcome. If you expect a .500 shooting percentage, a .333 after 6 shots is easily within one standard deviation (a deviation of a single shot is rarely going to exceed a standard deviation for any reasonable sample size). If a guy takes 30 shots and he's still at .333, that's close to two standard deviations off. So the adjustment is bigger even though, at first glance, it looks like the same result. But there's potentially an adjustment to the expected shooting percentage in both cases. It's just that the adjustment gets bigger when outcomes are more unlikely, which is always going to have the potential to be a larger effect later in games.
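Run the numbers (quick Python sketch, assuming independent shots against a .500 expectation):

```python
import math

def shooting_se(p, n):
    """Standard error of a shooting percentage over n independent attempts."""
    return math.sqrt(p * (1 - p) / n)

def deviation_in_se(made, attempts, expected=0.5):
    """How many standard errors the observed percentage sits from expectation."""
    return abs(made / attempts - expected) / shooting_se(expected, attempts)

# Same .333, very different surprise against a .500 expectation:
# 2-of-6 is only ~0.82 standard errors low; 10-of-30 is ~1.83 standard errors low.
```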
It's possible I am wrong; I don't remember the parameters for this to "turn on". I don't remember it being discussed for anything other than FG%. From observation, I certainly don't think it affects fouls or rebounds.
10/2/2018 1:08 PM
Posted by shoe3 on 10/2/2018 11:52:00 AM (view original):
I think you’re misunderstanding the reference here.

Of course it’s possible to tweak the parameters that determine the probability for each possession. Every possession is a coin flip, with the coin weighted according to parameters based on the factors and variables involved. If you have more variability than desired, the question is whether you tweak the parameters weighting each flip (better), or just lazily engineer the game to change the parameters based on how the last flip(s) went (what we have).
I want you to explain to me, for a .500 shooter, how you would tweak the parameters weighting that flip better. Actually think about the math, whether or not in coding language.

You can't do it. It's nonsense. A fair coin has a fixed distribution of outcomes on 60 flips. There's nothing to "tighten up." The ONLY way you can narrow the distribution is to flip more times or to start making the coin less fair if you happen to be out in the tails.
10/2/2018 1:14 PM
Posted by dahsdebater on 10/2/2018 1:14:00 PM (view original):
I want you to explain to me, for a .500 shooter, how you would tweak the parameters weighting that flip better. Actually think about the math, whether or not in coding language.

You can't do it. It's nonsense. A fair coin has a fixed distribution of outcomes on 60 flips. There's nothing to "tighten up." The ONLY way you can narrow the distribution is to flip more times or to start making the coin less fair if you happen to be out in the tails.
Dahsdebater, Shoe is not necessarily saying it should be tweaked from .500, but the game is already changing parameters lazily to make each player regress to their mean (yes, it's easily possible, and it does appear to be happening). He is saying that the game should consider all factors for every single shot (initial player ratings modified by whether they are open or double-teamed, their distance from the hoop, whether they received a pass from a good passer, etc.) instead of what it does now: "well, he's a .500 shooter and he made the last 3 shots, so he's due for a miss."
10/2/2018 3:30 PM (edited)
Posted by dahsdebater on 10/2/2018 1:14:00 PM (view original):
I want you to explain to me, for a .500 shooter, how you would tweak the parameters weighting that flip better. Actually think about the math, whether or not in coding language.

You can't do it. It's nonsense. A fair coin has a fixed distribution of outcomes on 60 flips. There's nothing to "tighten up." The ONLY way you can narrow the distribution is to flip more times or to start making the coin less fair if you happen to be out in the tails.
A “.500 shooter” doesn’t just shoot .500. There are multiple possible variables that can affect his percentage in a game. The defense/defender, opponents’ game plan (double team, etc), the tempo/fatigue. All those things go into how the “fair coin” is weighted in a specific game. All of that can be tweaked on an individual level, if the results seem like they are consistently “random”.

Again, the broad question is whether the coin should be weighted properly from the beginning, or if the coin should be re-weighted as we go along to get the “expected” result.
10/2/2018 3:53 PM
I'm in the camp that was more worried about the result than how we got there.

So if the end-of-game box score makes sense more often than not, then I think it's a decent game engine. Whether the PBP of each half is a little wonky or not is a secondary concern.

Except for how he has coded the impact of defensive positioning in this game, I think the bulk of the game engine is good and doesn't need much tweaking.
10/2/2018 5:06 PM
Posted by shoe3 on 10/2/2018 3:53:00 PM (view original):
A “.500 shooter” doesn’t just shoot .500. There are multiple possible variables that can affect his percentage in a game. The defense/defender, opponents’ game plan (double team, etc), the tempo/fatigue. All those things go into how the “fair coin” is weighted in a specific game. All of that can be tweaked on an individual level, if the results seem like they are consistently “random”.

Again, the broad question is whether the coin should be weighted properly from the beginning, or if the coin should be re-weighted as we go along to get the “expected” result.
I think you're missing the entire problem that led to the overhaul. Nobody was complaining about how well players were performing in aggregate. The problem was that the sample sizes of games were too small for statistical behavior to feel "right" to the players. And for what it's worth, none of the factors you reference here were altered in any way. The way it was explained was that the engine basically stores a running value of (expected FG made - FG made) {this may be stored separately for 2- and 3-point shots, layups off TOs may not count, etc.} and when that value is significantly different from 0 it adds an additional parameter into the expected FG percentage for subsequent shots to bring that running total back closer to 0. The further from 0 the value gets, the bigger the adjustment.
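If that description is right, the bookkeeping would look roughly like this (a Python sketch of the shape of the idea; the strength and threshold values are invented, not anything from the engine):

```python
def adjusted_fg_pct(base_pct, expected_made, actual_made,
                    strength=0.05, threshold=1.0):
    """Nudge the expected FG% for the next shot using the running
    (expected FG made - FG made) total. strength/threshold are guesses."""
    deficit = expected_made - actual_made
    if abs(deficit) <= threshold:
        return base_pct  # close enough to expectation: no adjustment
    # bigger running deficits produce bigger corrections, clamped to [0, 1]
    return min(max(base_pct + strength * deficit, 0.0), 1.0)
```

A shooter running 3 makes below expectation would see his next-shot percentage bumped up, and a shooter running hot would see it pulled down by the same rule.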

The reality is that without this the game could be pretty random. And this is to be expected. Human-coached teams typically shoot about .500. That's convenient because it makes the math easy. The standard deviation of a .500 shooter is .5 per shot. Typical teams shoot about 50-60 shots per game. Let's be generous and run it at 60. The standard error is .5/sqrt(60) = .0645, which works out to about 3.9 made shots per team. If two evenly matched teams each drift one standard error in opposite directions - hardly an unusual outcome - that's basically a 16-point swing even if nobody shoots any 3s. The reality is that players felt it's not good for the game for evenly matched teams to routinely lose by 16 points due to appropriate statistical treatment of the outcomes. It requires you to be a massive favorite to really feel confident in a game. That matters even more in win-or-go-home tournament play.
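Spelling that arithmetic out (Python, same assumptions: .500 shooting, 60 independent shots, 2-point shots only):

```python
import math

p, shots = 0.5, 60
se_pct = math.sqrt(p * (1 - p) / shots)   # ~0.0645 on the percentage
se_shots = se_pct * shots                 # ~3.87 made shots per team

# If two evenly matched teams each land one standard error apart in opposite
# directions, the gap is ~2 * 3.87 makes, worth 2 points each: ~15.5 points.
swing_points = 2 * se_shots * 2
```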

This correction may or may not be in any way realistic - it's probably not - but ultimately the consensus was that it makes it a better game. You could go one step further and just take the expectation value for points on each possession and add them all together. At that point the only thing you can do to change the outcome of a game is change your distribution of minutes or massage substitution patterns in some way. That's clearly boring. So WIS wanted to target something in between. The only good way to do that was what has been done - force results from moderate samples to regress to the expectation value at a quicker-than-statistical rate. I generally agree with the sentiment that regardless of how well it simulates real life, producing a game where the better team/better gameplan usually - but not always - wins is a reasonable goal. Too deterministic is not fun, but too statistical, when rewards dollars are ultimately on the line, drives players away.
10/2/2018 5:36 PM
“The only good way to do that was what has been done - force results from moderate samples to regress to the expectation value at a quicker-than-statistical rate.”

No. That isn’t “the only good way”. It may have been the most expedient, but framing it as the only good way is ridiculous.

Deviations work themselves out over time, given enough possessions. Sometimes you don’t have enough possessions in a game to make up for deviations. That’s why games, and their defined stopping points, are played out, rather than just handing the championship to the team with the best adjusted OVR of the top 9 players at the beginning of the year.

Someone who can’t tolerate needing to be an “overwhelming favorite” to “feel confident” in a game shouldn’t be playing a competitive multiplayer game in the first place. But that’s probably another thread.

So again, would you play in a casino where the house hands the dealer a loaded blackjack deck if you win 3 hands in a row?
10/2/2018 6:05 PM
Also, I guarantee it was not a “consensus”, unless you’re talking about a group of 4 or 5 specific coaches. Look at the responses on this, or the thread I linked to. Re-stacking the deck mid-game to even more heavily favor favorites, and engineer “expected” results is not a good or popular plan, except among a very select group of users.
10/2/2018 6:12 PM

No. That isn’t “the only good way”. It may have been the most expedient, but framing it as the only good way is ridiculous.

Deviations work themselves out over time, given enough possessions. Sometimes you don’t have enough possessions in a game to make up for deviations. That’s why games, and their defined stopping points, are played out, rather than just handing the championship to the team with the best adjusted OVR of the top 9 players at the beginning of the year.

You still haven't been able to name another good way to reduce variance. Except maybe making the games unrealistically long? I feel like that's a whole lot weirder than making outcomes more predictable.

Someone who can’t tolerate needing to be an “overwhelming favorite” to “feel confident” in a game shouldn’t be playing a competitive multiplayer game in the first place. But that’s probably another thread.

I would argue that someone who can't handle generally losing when they have an inferior team has no business playing in a competitive multiplayer game. Go find a single-player game you can put on easy mode and just beat up on people. But those of us who enjoy competition would like the players who do the best job building and setting up their teams to typically win.

Also, I guarantee it was not a “consensus”, unless you’re talking about a group of 4 or 5 specific coaches. Look at the responses on this, or the thread I linked to. Re-stacking the deck mid-game to even more heavily favor favorites, and engineer “expected” results is not a good or popular plan, except among a very select group of users.

It was actually extremely popular at the time. It was a change that was only ever implemented because of overwhelming forum outcry for something to reduce variance in game outcomes. I don't think it's unpopular now. Seems like the majority of commenters still approve, they just don't say as much. And many of the people complaining about it only do so after they perceive it to cost them a victory. Put simply - there is a hell of a lot less complaining on the forums now about the "lack of randomness" than there used to be on the amount of variance in game outcomes.
10/2/2018 6:39 PM
Posted by dahsdebater on 10/2/2018 6:39:00 PM (view original):

You still haven't been able to name another good way to reduce variance. Except maybe making the games unrealistically long? I feel like that's a whole lot weirder than making outcomes more predictable.

It was actually extremely popular at the time. It was a change that was only ever implemented because of overwhelming forum outcry for something to reduce variance in game outcomes. I don't think it's unpopular now. Seems like the majority of commenters still approve, they just don't say as much. And many of the people complaining about it only do so after they perceive it to cost them a victory. Put simply - there is a hell of a lot less complaining on the forums now about the "lack of randomness" than there used to be on the amount of variance in game outcomes.
*If* there is more variance than desired, there are as many ways to “reduce” it as there are places where probability is used to determine an outcome. If an outcome is weighted 53-47, and it results in too much variance, weight it 55-45. But keep it there from the beginning.

You are the one claiming that losing games one is “supposed” to win is a problem. I don’t have any problem losing a game that is fair. Loading a deck after a hot hand is not fair.

Tell us more about how popular it was, and who liked it, and among them, how many of those folks are still actively playing.
10/2/2018 11:45 PM (edited)
To use the analogy in recruiting, if you (the programmer) don’t like how often an extreme underdog wins a recruit when the straight odds are 60-40, stretch them to 78-22. That’s cool. The parameters are what they are, the underdog will win less often.

What you don’t do is reconfigure the recruit’s preference profile in the middle of a recruiting battle to account for the fact that a D2 is ahead of a D1, or that the team in the lead has already won more battles than expected this season. That’s a ridiculous way to design a game.
10/2/2018 11:53 PM
Posted by shoe3 on 10/2/2018 6:12:00 PM (view original):
Also, I guarantee it was not a “consensus”, unless you’re talking about a group of 4 or 5 specific coaches. Look at the responses on this, or the thread I linked to. Re-stacking the deck mid-game to even more heavily favor favorites, and engineer “expected” results is not a good or popular plan, except among a very select group of users.
Um yeah...

How do you know YOU'RE not among the select group of players who likes it your way? I enjoy your insights, but you always present things as if you are in the majority and most users must agree with you, when that isn't always the case.

As the previous poster said, it was common for people to post box scores with "WTF" titles every day. Now it is rare for people to get royally screwed by the SIM.

You've probably benefited from this more than not, so I'm not sure why it's a big issue. In fact, with so many SIM teams, ALL the humans probably benefit from it.
10/3/2018 12:35 PM (edited)
That's a completely ridiculous analogy. Massaging the odds for a single decision is easy. Massaging the outcomes of a sequence of events is NOT easy. Your best solution is to push all the shooters further from .500 so that probability will favor the better ones? That's dumb as hell. Almost nobody has complained in over a decade about the bulk shooting percentages in the game. Why break something that works to fix a different problem?

And by the way, moving a 53% shooter to a 55% shooter reduces variance by less than a tenth of a shot per game. So that really doesn't address the problem in a meaningful way. You have to make him like an 80% shooter to really make a difference. Again - dumb as hell. Your solutions all basically involve making it less of a basketball sim. The implemented solution keeps all the basketball sim that it ever had but narrows the distribution of outcomes. Which is exactly what the player base was asking for.
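That tenth-of-a-shot figure checks out, assuming 60 independent shots per game (quick Python check):

```python
import math

def sd_made_shots(p, n=60):
    """Standard deviation of made shots over n independent attempts."""
    return math.sqrt(n * p * (1 - p))

# Moving a shooter from 53% to 55% barely narrows the spread of makes...
narrowing = sd_made_shots(0.53) - sd_made_shots(0.55)   # ~0.01 shots
# ...while pushing him to 80% cuts it by about three-quarters of a shot.
big_change = sd_made_shots(0.53) - sd_made_shots(0.80)  # ~0.77 shots
```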
10/3/2018 12:26 PM