Warping Play Registration

Synergy: Breaking Down Field Goal Types

With the creation of Synergy, the basketball world gained valuable access to previously hard-to-obtain data on all field goal events in the league. One of the biggest introductions was the “primary defender” tag on field goal events. With play-by-play data, when a player drives to the basket or attempts a step-back three, the event is logged as Player A driving layup 2PA or Player B step back 3PA. The only way to extract the defender information is to go back and watch the actual film. For an NBA season, 1,230 games of approximately 2 hours each leads to 2,460 hours of film to review. And it’s rare that we can watch, log, and verify in real time. For reference, there are only 2,087 hours in a standard work year according to the federal government; and that’s taking no vacation days.

Another fantastic introduction in Synergy is the play-type field that identifies the action leading to a field goal attempt. For instance, a pick-and-roll may occur and free up a drive to the basket. Again, in play-by-play data the play is logged as Player A driving layup 2PA. But in Synergy, we get to know who the screener is, who the ball handler is, and who the primary defenders are. As an analyst, if we want to measure the quality of a shooter in, say, pick-and-roll events, all we have to do is open up Synergy and sort on field goal percentage for pick-and-roll events.

The key here is that Synergy leverages mechanical-turk-style logging of games. It uses loggers and verifying loggers (as opposed to machine learning) to help ensure the accuracy of its data. There’s also “one-touch” video in Synergy, which allows the analyst to view the play in question; this is undoubtedly the best feature of the system. If we are interested in every pick-and-roll that Damian Lillard plays in, we can filter on Lillard and pick-and-roll events and click on any attempt we are interested in. There’s a reason why Synergy is expensive for the casual viewer: there’s definitely a lot of blood, sweat, and tears that go into this platform.

Second Spectrum: Applying Machine Learning

Over the previous six years, Second Spectrum has attempted to leverage tracking data to perform similar tasks as Synergy, but also to improve the quantifiability of players in given situations. To this end, instead of mechanical-turk-ing field goal events, Second Spectrum can identify all pick-and-rolls, including those that never lead to a field goal attempt. This was a revolutionary step beyond Synergy’s sortable table of field goal attempts only. For starters, the analyst can now track how many pick-and-rolls a defender disrupts, deterring any field goal attempt at all. Therefore, instead of seeing a switching defender give up, say, 47-for-80, a rather terrible 58.75% defensive field goal percentage, we may find out that teams actually ran 137 pick-and-rolls at that switch defender. That 58.75% is really 47-for-137, or 34.31%. In case you were wondering, this was Rudy Gobert over a random subsample of games.

Instead of using humans in the loop (which is exhausting just from an hours standpoint), Second Spectrum employs a proprietary machine learning library that classifies trajectories as certain basketball actions. One such classification algorithm focuses on identifying pick-and-roll events. The beauty of Second Spectrum’s work is that not only do they have upwards of 200 actions classified, ranging from screens to fast breaks to field goal types and defender contest styles, but they also have the Eagle platform to perform similar tasks as Synergy’s platform: we can select plays on demand and watch the video as well.

The Challenges

Key challenges with both Synergy and Second Spectrum center on the nuances of their logging systems.

With Synergy, an analyst must grapple with the logger’s definition of coverage. Two key stories from Synergy have been shared around the league: J.J. Hickson and Earl Boykins. If you’re not familiar with these two stories, here’s the short gist.

J.J. Hickson

One season, a team was interested in finding a dominant scorer at the rim. One quick-and-dirty way was to use the field-goal location tag in Synergy called Rim and sort on all players. Immediately, J.J. Hickson popped up at the top of the list. This led the team to investigate Hickson as a potential rim scorer. What the team ultimately found was that Hickson was indeed a top scorer at the rim; but only right at the rim. He could convert dunks, and he tried to dunk a lot. As soon as he bumped out to 2-to-5 feet, his FG% dropped significantly and his attempts fell off a cliff, meaning he wouldn’t take those shots either.

The reason Hickson popped up is that Synergy’s definition of Rim is the region near the basket. And unless that team could guarantee spacing (a relatively foreign concept at the time) to ensure Hickson could get 6-8 dunks a game, he wasn’t going to be the guy they were looking for.

Earl Boykins

Another season, a team was looking for a strong perimeter guard. Of course, in a sort that would make most analysts cringe, the team sorted on defensive three-point percentage as primary defender. Out popped Earl Boykins at the top of the list. Furthermore, Boykins had been near the top of the list for multiple seasons.

It turned out that due to Boykins’ size, players would try to shoot over him, thinking it a psychological advantage; those players actually took lower-quality attempts than usual. For one season, attempts on Boykins per possession led the league while shot quality was near the bottom; and even though shooters converted better than that quality would suggest, it was the poor shot quality (decision-making) that led to the overall lower percentage, rather than Boykins’ defensive prowess beyond the arc. Adjusting for quality, Boykins actually turned out to be a solid perimeter defender, but nothing exceptional; and exceptional is what the team was looking for.

What should be clearly stated is that these examples do not show that Synergy is bad, but rather that there is nuance to the data being delivered. In fact, Synergy is a wonderful tool when used thoughtfully in executing player analysis.

Primary Defender and Contesting…

In the Second Spectrum case, identifying primary defenders and contests are two looming challenges for analysts. While the company provides labels, those labels too carry nuance. For instance, Second Spectrum uses a Munkres (linear assignment) style algorithm to identify primary defender match-ups. It’s a fantastic algorithm and is used in several advanced tracking systems today; but it’s also nuanced. In some cases, it’s slow to reassign players on switches. Specifically, when a BLUE action occurs, it may not correctly attribute primary defender status on the shooter.
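To make the matchup idea concrete, below is a minimal sketch of a linear-assignment style matchup model using scipy’s linear_sum_assignment (a Munkres-type solver). The cost matrix here, plain distance from each offensive player to each defender, is an illustrative assumption; a production system would also weigh ball location, basket location, and matchup history. The frame data is hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_defenders(offense_xy, defense_xy):
    """Match each offensive player to one defender by minimizing total
    pairwise distance. offense_xy and defense_xy are (5, 2) arrays of
    court coordinates for a single frame."""
    # Cost matrix: distance from every offensive player to every defender.
    # (Raw distance is an illustrative stand-in for a richer cost.)
    cost = np.linalg.norm(
        offense_xy[:, None, :] - defense_xy[None, :, :], axis=-1
    )
    off_idx, def_idx = linear_sum_assignment(cost)
    return dict(zip(off_idx, def_idx))

# Hypothetical frame: five offensive and five defensive (x, y) positions.
rng = np.random.default_rng(0)
offense = rng.uniform([0, 0], [47, 50], size=(5, 2))
defense = rng.uniform([0, 0], [47, 50], size=(5, 2))
print(assign_defenders(offense, defense))
```

Re-solving an assignment like this frame by frame, and smoothing it over time, is one way the slow reassignment on switches noted above can creep in: the optimal matchup can lag behind the screen action before flipping over.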

Similarly, defining a contest is a challenge, particularly around the rim. For many years, the definition of a contest at the rim was poorly applied by Second Spectrum, and the reason is that tracking data lacks directionality and player verticality. This is not a fault of Second Spectrum; the cameras can only capture what they can capture for now.

In the case of directionality, the biggest problem is a player who is back-pedaling on a pass and has no chance to contest the shot, but whose momentum carries him toward the shooter. That defender will almost always be labeled as a contester.

Similarly, we do not have any knowledge of a player’s z-axis in the tracking data. This means we have no idea whether a player jumped to contest a shot. So if a player attempts to take a charge, or attempts a strip but lets the shooter go, he can easily be listed as a contesting defender.

Nuances Aside

Given some of the nuances in both Synergy and Second Spectrum, one thing neither system can give is how a player runs a play. We’ve been primarily discussing pick-and-rolls. Both Synergy and Second Spectrum give us a marker and a result. What we don’t know is how a team runs the pick-and-roll. Do they run it slower or faster? Do they run it wide or tight? Is it delayed? These may seem like rather odd questions, but the answers give way to understanding how quickly a team attacks the switch, how much spacing they incorporate off the pick, and how much gravity the driver is expected to have, respectively. And these are the things that can’t be answered directly from Synergy data or Second Spectrum markings.

Instead, we would look back at tracking and perform a well-known task in the geolocation world: registration.

Registration

Registration is the process of finding a spatial transformation to align multiple point sets. In the case of geolocation, the most common problem is ensuring an aircraft follows its waypoints. Using the trajectory of the aircraft, we can compare it to the intended flight path and identify deviations that may have occurred. The “cool, new” version of the problem applies to automated vehicles such as driverless cars, ensuring the car is following its course of waypoints.

It’s also used in many other applications, such as monitoring the foot traffic of pedestrians in a park. Measuring the trajectories of a park’s patrons may help park officials identify optimal locations for newly proposed sidewalks. In this case, we look at thousands of trajectories and perform registration to find the largest class of paths.

Finally, in basketball, we apply registration to identify similar plays. Since we are using registration, we can also identify the amount of distortion associated with a play, and it is this distortion (or, technically, warping) that gives us insight into the nuances of the players involved in the action of interest.

Spatio-Temporal Registration

Spatio-temporal registration is the process of comparing two trajectories through an optimization that combines temporal registration (dynamic time warping) and rigid spatial registration. Combining the temporal and spatial aspects allows us to compare the trajectories as bodies move along these paths, not only as a function of distance but also of time. The registration process then identifies the difference between two trajectories, allowing us to determine whether they are effectively traveling the same path.

Temporal Registration

For two temporal processes of spatial locations, X and Y, of lengths N_x and N_y, respectively, we may have to prepare a warping function to align the series. A warping function attempts to find temporal “matches” between the two time series:

$$\phi_x : \{1, \dots, s\} \to \{1, \dots, N_x\}, \qquad \phi_y : \{1, \dots, s\} \to \{1, \dots, N_y\},$$

so that the two series are compared through the matched pairs $\big(X_{\phi_x(t)}, Y_{\phi_y(t)}\big)$ for $t = 1, \dots, s$.

In this case, if the sampling rates are off, the warping function will attempt some form of interpolation between the two time series. Suppose X is a “longer” time series; then the warping function will identify the appropriate slice of time over which to compare X to Y. The value s is then the length of the segment over which we compare the two trajectories. Hence the functions PHI_x and PHI_y are simply looking for the index of each respective series that matches within the segment.

Thankfully, we do not have to interpolate, as the sampling rate in Second Spectrum data is typically uniform, sampled every 0.04 seconds. This means the function PHI is simply looking for an offset between the start of the trajectory in question and the segment over which the play elapses.

To be clear, suppose a Pick-and-Roll (PnR) action for a team occurs with 11:38 remaining in the first quarter (seen as time 22 seconds) and it takes 3.7 seconds to complete the action. Then suppose a second PnR is completed by the same team with 2:17 remaining in the first quarter (seen as time 583 seconds) and it takes 4.1 seconds to complete the action. Then, for our dynamic time warping function of choice, we may select the larger window and synchronize the motion of interest.
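As a concrete illustration, here is a minimal dynamic time warping sketch in numpy. The local cost (squared Euclidean distance between samples) and the unconstrained step pattern are choices of convenience, not a claim about any production implementation; the two guard paths at the end are synthetic stand-ins for the 3.7-second and 4.1-second actions described above.

```python
import numpy as np

def dtw(X, Y):
    """Dynamic time warping between two trajectories X (Nx, 2) and Y (Ny, 2),
    each sampled every 0.04 seconds. Returns the alignment cost and the
    warping path [(phi_x(t), phi_y(t)), ...] of matched indices."""
    Nx, Ny = len(X), len(Y)
    # Local cost: squared Euclidean distance between every pair of samples.
    d = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    # Accumulated cost with the standard match / insert / delete steps.
    D = np.full((Nx + 1, Ny + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Nx + 1):
        for j in range(1, Ny + 1):
            D[i, j] = d[i - 1, j - 1] + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # Backtrack to recover the warping functions phi_x and phi_y.
    path, i, j = [], Nx, Ny
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[Nx, Ny], path[::-1]

# Synthetic guard paths: same route, the second covered faster
# (~4.1 s vs. ~3.7 s at 25 Hz, echoing the two PnR actions above).
t1 = np.linspace(0, 1, 103)
t2 = np.linspace(0, 1, 93)
X = np.c_[28 * t1, 10 * np.sin(np.pi * t1)]
Y = np.c_[28 * t2, 10 * np.sin(np.pi * t2)]
cost, path = dtw(X, Y)
print(round(cost, 2), len(path))
```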

[Figure: Two PnR sequences with the same starting locations. The play on the left develops slower due to the switch occurring on engagement of the screen.]

Considering the two PnR plays above, we perform a temporal registration on the point guard action only. The play on the left shows a Hedge-and-Under defensive scheme, which pushes the point guard away from the basket as the on-ball defender gets extra time to sneak underneath the screen to recover. The guard sees that the screen defender is not going to switch and attempts to accelerate to attack the recovering guard.

The play on the right shows a Show-And-Over defensive scheme that has gone woefully awry for the defense. The screener tangles up the on-ball defender and this forces the screen defender to ultimately switch on the show. The point guard, seeing that he’s drawn the (hopefully) slower defender, accelerates earlier than in the first play. This allows the screener to slip and keeps the on-ball defender straggling behind the attacked screen defender.

Here, we see that the point guard’s motion is not identical across the two plays; however, the action is nearly the same: the guard drives right, attacks the right elbow, and the screener slips toward the left elbow. Performing a temporal registration will align the motion across both plays.

[Figure: Temporal registration (lime green) of the two point guard paths. Tilting in the green lines indicates acceleration changes.]

We see that the lime green lines serving as the warping function do not necessarily connect the closest points in space. Where the lines tilt more sharply than the curves do, this suggests that the second action moves a little faster than the first!

Spatial Registration

Spatial registration is the task of identifying similar shapes. The most common example is looking at a selection of points and asking, “Are these the same shapes?” Spatial registration therefore considers rigid motions, which include rotation, reflection, and translation. Spatial registration may also use other tools such as stretching; however, that is for comparing shapes measured on different scales. Under the Second Spectrum convention of reporting every frame at the same scale, we may omit stretching as a factor.

Therefore, the key question is whether two spatial trends are equivalent up to rigid motion. The challenge with spatial registration is that actions may lose their right-left interpretation. For our PnR examples above, spatial registration will identify both a translation and a rotation to match the point guard action.

[Figure: Rigid motion (lime green) applied to identify the spatial registration of the point guard across the two PnR actions.]

We see that the action is strikingly similar in pattern, but only up to a reflection and a slight rotation. We do lose the angle-of-attack information; but we can test for defender effects on this later using the space of rotations, SO(2).

The methodology for identifying rigid motion (as we see above) is commonly solved using the Iterative Closest Point (ICP) algorithm. This algorithm treats the trajectory as a point cloud, disregarding time, and looks for an optimal matching through an iterative scheme. Unfortunately, this methodology fails to properly register player trajectories, as the temporal aspect is too important to ignore.
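Once a point-to-point correspondence is fixed, though, the rigid half of the problem has a closed-form answer: the classic orthogonal Procrustes (Kabsch) solution recovers the rotation, reflection, and translation directly from an SVD. The sketch below is a generic version of that solution, not a description of Second Spectrum’s code; allowing reflections by default is my reading of the “rigid motion includes reflection” convention above.

```python
import numpy as np

def rigid_align(X, Y, allow_reflection=True):
    """Find the orthogonal matrix R (rotation or reflection) and translation T
    minimizing sum_t || R @ X[t] + T - Y[t] ||^2 for point-matched
    trajectories X, Y of shape (n, 2). Orthogonal Procrustes / Kabsch."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt
    if not allow_reflection and np.linalg.det(R) < 0:
        # Force a proper rotation by flipping the weakest singular direction.
        U[:, -1] *= -1
        R = U @ Vt
    T = mu_y - R @ mu_x
    return R, T
```

A reflection like the one in the PnR example shows up here as det(R) = -1, which is the “right-left interpretation” caveat in numeric form.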

Spatio-Temporal Registration

This leads us to spatio-temporal registration. In this case, we combine both the spatial and temporal registration into a single cost function given by

$$C(R, T, \phi) \;=\; \sum_{t=1}^{s} \left\lVert R\,X_{\phi_x(t)} + T - Y_{\phi_y(t)} \right\rVert^2,$$

where R is the rotation operator, T is the translation operator, and PHI is the time warping function. We can then compute an inner- and outer-optimization scheme, where the outer loop solves the dynamic time warping problem, followed by an inner loop of spatial optimization. Iterating over this scheme, we identify a spatio-temporal distance for comparing two player trajectories.
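A sketch of that inner/outer loop is below, reusing the hypothetical dtw() and rigid_align() helpers and the synthetic guard paths from the earlier sketches. The alternating scheme and the stopping rule are my assumptions about one reasonable way to minimize the cost above, not a description of anyone’s production code.

```python
import numpy as np

def spatio_temporal_register(X, Y, n_iter=20, tol=1e-6):
    """Alternate the outer temporal step (DTW) with the inner spatial step
    (rigid Procrustes fit) until the combined cost stops improving.
    Returns R, T, the warping path phi, and the final cost."""
    R, T = np.eye(2), np.zeros(2)
    prev_cost = np.inf
    for _ in range(n_iter):
        # Outer step: time-warp the rigidly aligned copy of X onto Y.
        cost, phi = dtw(X @ R.T + T, Y)
        if prev_cost - cost < tol:
            break
        prev_cost = cost
        # Inner step: re-fit the rigid motion on the DTW-matched samples.
        ix, iy = map(list, zip(*phi))
        R, T = rigid_align(X[ix], Y[iy])
    return R, T, phi, cost

# Applied to the synthetic guard paths from the DTW sketch above.
R, T, phi, cost = spatio_temporal_register(X, Y)
print(np.round(R, 3), np.round(T, 3), round(cost, 2))
```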

Ok…So What?

Now that we are able to spatio-temporally register two player actions, we can start to develop distributions of player actions. These can be defined as clusters of low-cost comparisons between trajectories. From here, the opportunities are endless. Here are a couple of examples:

Impact of Defender Response to a Pick-And-Roll

Now we can start testing the impact of certain defender actions on PnR plays. In the example above, we saw the same PnR get attacked differently. Even though the guard responded differently each time, the spatio-temporal registration is actually quite similar. We can then look at the parameter sets of R, T, and PHI and condition on the defender response. Using this, we can quantify the changes in directionality and speed, and begin to answer the following question: How will a hedge compare to a show by my screen defender?

This allows us to separate ourselves from poorly construed results-based analysis such as “What’s my defensive rating when I perform this defender action?”
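One concrete (and deliberately simplified) way to frame that comparison: register every rep of the action for a ball handler against a reference rep, reduce each warp to a tempo summary, and compare the distributions across coverages with a rank-based test. The warp_speed_ratio summary and the hedge/show grouping below are hypothetical choices for illustration, building on the spatio_temporal_register sketch above.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def warp_speed_ratio(phi):
    """Overall slope of a DTW warping path: > 1 means the registered rep
    takes longer than the reference play (a slower action), < 1 means it
    unfolds faster."""
    ix, iy = np.array(phi).T
    return (ix[-1] - ix[0] + 1) / (iy[-1] - iy[0] + 1)

def compare_coverages(hedge_paths, show_paths):
    """Rank-based test on tempo between two coverage groups. Each argument
    is a list of warping paths returned by spatio_temporal_register for
    reps registered against the same reference play."""
    hedge = [warp_speed_ratio(p) for p in hedge_paths]
    show = [warp_speed_ratio(p) for p in show_paths]
    return mannwhitneyu(hedge, show)
```

Anything richer, such as local slopes of the warp to locate where the guard accelerates, or the rotation angle recovered from R, slots into the same template.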

Decision Making of Ball Handlers

Another problem we can begin to answer is: How well can my ball-handler read defenses? In this case, we can look at changes in the trajectories and again test on R, T, and PHI. Here, we are not testing good or bad decisions; that requires a target variable. Instead, we are looking at how the distribution changes given a new wrinkle in the defense.

For this situation, we may ask how defensive rotations impact how a ball-handler attacks the rim. In this case, we may see quite a change in R, T, and PHI depending on the scheme. We can scan the clusters of registered ball-handler motions and compute the probability of making each registered motion given the defensive scheme. From there, we may look at the players associated with the action and gain insight into how players respond to it. Note, this may be quite noisy at the player level; so be very careful in making player-based decisions.

 

Ultimately, the point here is that the game of basketball is played in a spatio-temporal manner; therefore it requires tools that analyze the spatio-temporal aspect accordingly. An attack at the rim by Damian Lillard may be considerably different from one performed by De’Aaron Fox, despite their spatial trajectories looking the same. Registration also allows for follow-on testing without having to rely on results-based analysis. Consider this when discussing perimeter defense, as shooters may not take an attempt despite doing all the right things leading up to one.

This way, we can leverage platforms such as Synergy to identify types of plays and Second Spectrum to extract markers for those plays, but then build our own custom analytics on top of the tracking data to perform the rigorous tests.
