ESPN top 10 PGs (Lowry 6th)


  • #46
    DanH wrote: View Post
    Yeah, but player improvement projections are different for sophomores and for veterans. And the data used to generate these projections is based on entire seasons - how a player performs in their entire 3rd year versus their entire 2nd year, and how players tend to perform in their entire 6th year versus their entire 5th. So it is entirely relevant to state that it is the full season performance that is being projected, and entirely reasonable to assume that the entire season will tell a different story than the players' talent level on the first day of the season.
    True, but the biggest jumps are from season to season, not during the season. Plus, any in-season jump would be a fairly small sample size and shouldn't move the needle on the entire season too much.
    Heir, Prince of Cambridge

    If you see KeonClark in the wasteland, please share your food and water with him.



    • #47
      Axel wrote: View Post
      True, but the biggest jumps are from season to season, not during the season. Plus, any in-season jump would be a fairly small sample size and shouldn't move the needle on the entire season too much.
      Is that the case? Ross, until December 7th in his sophomore year, posted a 52.3 TS% and a 3.9 Game Score (one of basketball-ref's aggregation stats). In his rookie year, he posted a 49.1 TS% and a 3.7 Game Score. After that point in his sophomore year, he posted a 55.8 TS% and a 7.5 average Game Score.

      Which of those two sets is more similar? There are always extenuating circumstances (Gay trade in this case) but the reality is that young players tend to have bigger jumps in performance, usage, and playing time than older more established players, and those jumps can happen just as easily mid-season as between seasons.
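      For anyone who wants to check the arithmetic, these are the standard basketball-reference definitions of the two stats I'm citing. The box-score line plugged in below is made up, purely to show how the numbers are computed:

```python
# True Shooting % and Game Score, as defined on basketball-reference.
# The sample box-score line below is invented, just to illustrate the math.

def true_shooting_pct(pts, fga, fta):
    """TS% = PTS / (2 * (FGA + 0.44 * FTA))"""
    return pts / (2 * (fga + 0.44 * fta))

def game_score(pts, fgm, fga, ftm, fta, orb, drb, stl, ast, blk, pf, tov):
    """Hollinger's Game Score."""
    return (pts + 0.4 * fgm - 0.7 * fga - 0.4 * (fta - ftm)
            + 0.7 * orb + 0.3 * drb + stl + 0.7 * ast + 0.7 * blk
            - 0.4 * pf - tov)

# Hypothetical stat line: 18 pts on 7/14 shooting, 3/5 FT, 4 boards, 2 assists.
print(round(true_shooting_pct(pts=18, fga=14, fta=5), 3))  # 0.556
print(round(game_score(pts=18, fgm=7, fga=14, ftm=3, fta=5, orb=1, drb=3,
                       stl=1, ast=2, blk=0, pf=2, tov=2), 1))  # 11.4
```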
      twitter.com/dhackett1565



      • #48
        DanH wrote: View Post
        Is that the case? Ross, until December 7th in his sophomore year, posted a 52.3 TS% and a 3.9 Game Score (one of basketball-ref's aggregation stats). In his rookie year, he posted a 49.1 TS% and a 3.7 Game Score. After that point in his sophomore year, he posted a 55.8 TS% and a 7.5 average Game Score.

        Which of those two sets is more similar? There are always extenuating circumstances (Gay trade in this case) but the reality is that young players tend to have bigger jumps in performance, usage, and playing time than older more established players, and those jumps can happen just as easily mid-season as between seasons.
        And so a partial season should equate to a higher score? A player can go on a good stretch to finish the season, but with strength of schedule and so many other factors in play, until they can put it together for the majority of a season it is still a fairly small sample.

        And like you mentioned, other circumstances outside of the player's control can have as big an impact as anything. There is no direct correlation that can be pinpointed, so how can we say what caused the jump in these figures? How did his minutes and role change? Those can have a big impact on things like TS%. The point is, in this example you can't say why there was a difference in his performance. Many young players could have a similar jump at the end of the season as their team starts packing it in and lets its youngsters take on larger roles. A player could easily put up big scoring numbers on a bad team, but that doesn't mean he is a better player. Opportunity changes are often so nuanced that no stat can accurately track them to create a historical frame of reference.
        Heir, Prince of Cambridge

        If you see KeonClark in the wasteland, please share your food and water with him.



        • #49
          Apples, meet oranges.



          • #50
            Axel wrote: View Post
            And so a partial season should equate to a higher score? A player can go on a good stretch to finish the season, but with strength of schedule and so many other factors in play, until they can put it together for the majority of a season it is still a fairly small sample.
            Um, no. The partial season counts as a higher score in that part of the season, and is averaged in with the rest of the season to give the overall season. Strength of schedule and other factors average out over an entire season. Remember, I'm not talking about how good they will be in the final five games of the season. I'm talking about their overall impact for the entire season. 82 games sample size. No strength of schedule issues. Everyone on the same playing field. But you can't judge the entire season's impact on the first day. The projections are for a full season - not the games at the end, not the talent level in 2 months, but the entire season.

            And like you mentioned, other circumstances outside of the player's control can have as big an impact as anything. There is no direct correlation that can be pinpointed, so how can we say what caused the jump in these figures? How did his minutes and role change? Those can have a big impact on things like TS%. The point is, in this example you can't say why there was a difference in his performance. Many young players could have a similar jump at the end of the season as their team starts packing it in and lets its youngsters take on larger roles. A player could easily put up big scoring numbers on a bad team, but that doesn't mean he is a better player. Opportunity changes are often so nuanced that no stat can accurately track them to create a historical frame of reference.
            Yes, many of those factors would colour a player's performance. And all of them have existed at some point for the various players in the past used to generate the patterns for age, experience, minutes load, etc that feed into the projections. These projections are based on what a player in a certain situation does, on average. All those external factors will be present to some degree in the projections. Obviously we can't know exactly which factors will cause what changes in role, minutes played, etc, but we are hardly ignoring them. We are using them - as they exist in the data from past players. We're just using an average impact from them - the impact could be more, could be less, in the players being projected, but it's a projection - there is bound to be uncertainty in spades.
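            To put a number on the averaging point from the first part of this post: a partial-season split only moves the full-season line in proportion to the games it covers. Quick sketch using the Game Score splits I quoted for Ross; the game counts here are placeholders, not his actual totals:

```python
# Games-weighted average: a hot (or cold) stretch is folded into the season
# line in proportion to how many games it covers.

def season_average(splits):
    """splits: list of (games, per_game_value) -> games-weighted season mean."""
    total_games = sum(games for games, _ in splits)
    return sum(games * value for games, value in splits) / total_games

# Per-game Game Score before/after the jump (values from the post above);
# the game counts are made up for illustration.
early_stretch = (20, 3.9)
later_stretch = (60, 7.5)

print(round(season_average([early_stretch, later_stretch]), 1))  # 6.6
```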
            twitter.com/dhackett1565



            • #51
              Everything you say just supports my original comment: no single statistic can be used in isolation to rank players. The author mailed in an easy report because they were probably drinking by the pool and didn't want to spend the effort to do the full level of research necessary to produce a reasonably accurate projection of the season.
              Heir, Prince of Cambridge

              If you see KeonClark in the wasteland, please share your food and water with him.



              • #52
                Puffer wrote: View Post
                Apples, meet oranges.
                Sales projections based on average history, ignoring context, for evaluating salesmen, meet stats projections based on average history, ignoring context, for evaluating players.

                Some differences in context aren't so nuanced, in fact they're glaring, yet this number-crunching method pays them no attention. For example, DeMar has been the focus of defensive schemes for the last few years, not just this past season. This past season he was consistently facing the best defender, plus getting double-teamed consistently. All that while also facing the pressure, from within as well as from opponents, of battling for playoff seeding down to the wire. Waiters, on the other hand, didn't even face starters for the majority of his PT, never mind getting doubled or facing the opponent's best defender every night, and he put up his best numbers down the stretch in meaningless games.

                This is like saying a salesman who inherited an easy territory last year, and had a ton of gimme sales, is going to be a better salesman next year than a guy who faced a very tough territory, with some of his customers even going out of business, yet still bettered the sales of the new guy that inherited most of his sales. Why? Because history tells us that young guys improve their sales more than experienced guys, on average, and look at last year's numbers (no context) after all.

                Averages, which brings up another point. Just like DeMar has proven, through hard work and dedication, that he's an exception to "the rule", every all-star level player in the NBA is an exception. Either because of exceptional natural gifts (LBJ, KD, Dwight...), or because they work harder, for longer, than the norm, or both. Has Waiters, for example, demonstrated either yet? I say not even close. He could just be one of the many players littering NBA history who come into the league and put up good numbers in year 1, 2 or 3, yet for a variety of reasons don't have what it takes to sustain growth over time, and end up bench warmers or out of the league. For example, how many here would have traded DeMar straight up for OJ Mayo at the drop of a hat? How many would now?

                Evaluating players and projecting their future season(s), objectively, isn't an accounting exercise, no matter how elaborate the mathematics is.
                Last edited by chico; Mon Aug 25, 2014, 11:27 AM.



                • #53
                  Axel wrote: View Post
                  Everything you say just supports my original comment: no single statistic can be used in isolation to rank players. The author mailed in an easy report because they were probably drinking by the pool and didn't want to spend the effort to do the full level of research necessary to produce a reasonably accurate projection of the season.
                  Huh? There are lots of places to go to get player rankings based more on common sense - heck, ESPN does their player rankings every year based off the popular consensus of a bunch of sportswriters.

                  And none of what I said supports that suggestion. I was saying that all those variables are a) just as unpredictable to an analyst who is making judgment calls as they are to a formula and b) to the extent that they are somewhat predictable, the model makes an attempt to project based on the average impact that past players have seen in those situations.

                  Clearly projections won't be perfect, and assumptions are made, but the imperfections and assumptions are no worse than those made in more traditional attempts to predict how players will perform.
                  twitter.com/dhackett1565



                  • #54
                    chico wrote: View Post
                    This is like saying a salesman who inherited an easy territory last year, and had a ton of gimme sales, is going to be a better salesman next year than a guy who faced a very tough territory, with some of his customers even going out of business, yet still bettered the sales of the new guy that inherited most of his sales. Why? Because history tells us that young guys improve their sales more than experienced guys, on average, and look at last year's numbers (no context) after all.
                    Technically, it is like saying a salesman who has, for 4 years straight, done terribly, then suddenly had a very good year, is expected to drop back off closer to his struggling years. And on the other side, it is like looking back at historical records for salesmen, seeing that most first and second year salesmen statistically tend to increase their sales 2 or 3 fold, and projecting that each of your second and third year salesmen should see that same increase. Not all of them will - but that is the expectation based on history.
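                    If it helps, here is a toy version of that kind of experience-based projection. To be clear, this is not Pelton's actual model, and every number in it is made up; it just shows the mechanic of measuring the average year-over-year change at each experience level in historical data and applying it forward:

```python
# Toy experience-based projection: learn the average year-over-year change
# at each experience level from (invented) historical data, then apply that
# average change to a current player's numbers.

from collections import defaultdict

# (player, years of experience, per-game stat in year N, same stat in year N+1)
history = [
    ("A", 1, 8.0, 13.5),   # young players: big jumps
    ("B", 1, 6.0, 11.0),
    ("C", 5, 15.0, 15.5),  # veterans: roughly flat
    ("D", 5, 12.0, 11.5),
]

growth_by_exp = defaultdict(list)
for _, exp, before, after in history:
    growth_by_exp[exp].append(after / before)
avg_growth = {exp: sum(ratios) / len(ratios) for exp, ratios in growth_by_exp.items()}

def project_next_season(current_value, experience):
    """Scale this year's value by the historical average growth for that experience level."""
    return current_value * avg_growth.get(experience, 1.0)

print(round(project_next_season(10.0, experience=1), 1))  # ~17.6: second-year player projected to jump
print(round(project_next_season(10.0, experience=5), 1))  # ~10.0: veteran projected to hold steady
```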
                    twitter.com/dhackett1565



                    • #55
                      DanH wrote: View Post
                      Clearly projections won't be perfect, and assumptions are made, but the imperfections and assumptions are no worse than those made in more traditional attempts to predict how players will perform.
                      I think that's where most disagree with you, and looking at just how off-base a lot of their projections seem to be, I'm not sure you can say that definitively.



                      • #56
                        Fully wrote: View Post
                        I think that's where most disagree with you, and looking at just how off-base a lot of their projections seem to be, I'm not sure you can say that definitively.
                        How off-base their projections seem to be? Seems to me people have had problems with the SG list and none of the others, so I would suggest that a) people are upset based on a small sample of the overall work and b) that we really won't know whose projections are more accurate until the season is over.

                        Keep in mind that I do think that DD will be better than those two this year, and that he is an exception that causes issues for this projection system (and more traditional ones - up until a year ago he was seen fairly widely in the media, for example, as an overpaid player). His being underrated is not reason enough to throw away the entire system.
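                        And when the season is over, "whose projections were more accurate" is just an error calculation against the actual results. Minimal sketch; the numbers below are placeholders, not real projections:

```python
# Compare two sets of preseason projections against end-of-season results
# using mean absolute error (lower is better). All numbers are made up.

def mean_absolute_error(projected, actual):
    return sum(abs(p - a) for p, a in zip(projected, actual)) / len(actual)

actual_results    = [18.5, 14.0, 21.0]  # hypothetical end-of-season per-game numbers
formula_forecasts = [17.0, 15.5, 19.0]
pundit_forecasts  = [20.0, 12.0, 24.0]

print(round(mean_absolute_error(formula_forecasts, actual_results), 2))  # 1.67
print(round(mean_absolute_error(pundit_forecasts, actual_results), 2))   # 2.17
```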
                        twitter.com/dhackett1565



                        • #57
                          Plus I'm not sure that Pelton is able to throw his hands in the air and say "blame the formula for the rankings, not me" when a) he was the one who created it and b) no one, as far as I know, put a gun to his head and forced him to publish it.

                          Seriously, if your formula pumps out a list that looks that far off-base, then wouldn't a natural reaction be to either tinker with the formula itself or possibly incorporate other means of evaluating players before you released the article?



                          • #58
                            Fully wrote: View Post
                            Plus I'm not sure that Pelton is able to throw his hands in the air and say "blame the formula for the rankings, not me" when a) he was the one who created it and b) no one, as far as I know, put a gun to his head and forced him to publish it.

                            Seriously, if your formula pumps out a list that looks that far off-base, then wouldn't a natural reaction be to either tinker with the formula itself or possibly incorporate other means of evaluating players before you released the article?
                            The point is to develop the formula and let it predict things. He hardly threw up his hands - he pointed out a few situations where he expected criticism, and gave explanations why the system treated these players that way. The entire point of developing new formulas is to give you lists you wouldn't come up with yourself - and ideally to give you lists more accurate than the ones you would come up with yourself. Predicting things subjectively is pretty well covered in dozens of places (with wildly fluctuating results, and often pretty inaccurate come the end of the year). Finding an objective projection system that works (and I'm not claiming it's there yet, they tweak it every year to improve it) has a great deal of value.
                            twitter.com/dhackett1565



                            • #59
                              DanH wrote: View Post
                              How off-base their projections seem to be? Seems to me people have had problems with the SG list and none of the others, so I would suggest that a) people are upset based on a small sample of the overall work and b) that we really won't know whose projections are more accurate until the season is over.

                              Keep in mind that I do think that DD will be better than those two this year, and that he is an exception that causes issues for this projection system (and more traditional ones - up until a year ago he was seen fairly widely in the media, for example, as an overpaid player). His being underrated is not reason enough to throw away the entire system.
                              This is not about DD or Lowry but the methodology. I didn't even bother reading the other rankings once I saw names like Oladipo and Rubio as top 10, because I fundamentally find the method lazy and inaccurate.

                              The more Dan writes, the more I suspect that he is the writer from ESPN as he seems to have a personal attachment to the statistic.
                              Heir, Prince of Cambridge

                              If you see KeonClark in the wasteland, please share your food and water with him.



                              • #60
                                Axel wrote: View Post
                                This is not about DD or Lowry but the methodology. I didn't even bother reading the other rankings once I saw names like Oladipo and Rubio as top 10, because I fundamentally find the method lazy and inaccurate.

                                The more Dan writes, the more I suspect that he is the writer from ESPN as he seems to have a personal attachment to the statistic.
                                I think most people here, including myself, have only seen the SG and PG rankings posted here, and both show serious flaws. Rubio #9, and Tony Parker not even in the top 10? What??? There are "excuses", as in past injury, for Rose and Rondo not being there, but Dragic not as good as Rubio too???

