Race data video analysis pipeline [Placeholder]

The code still needs to be cleaned up before it is ready to be published; the project is also still a work in progress.

In the video above you can see four videos playing simultaneously. The top two are produced by the Android app Harry's LapTimer and were shot during races my friend and I drove at Circuit Zandvoort in the Netherlands. The engine data comes from an OBD2 connector linked to the app, which produces a CSV file with all the relevant data at a one-second resolution. For animation purposes the data is smoothed and interpolated to a 60 Hz resolution (to produce smooth 60 FPS video), using pandas' spline interpolation method. The GPS coordinates are additionally filtered with a Kalman filter to smooth out the corners, though some noise understandably remains visible.

The video in the bottom left corner is produced by crawling the Google Maps API and stitching the tiles together into one large image; for the Zandvoort track this image is roughly 15k by 10k pixels. The GPS coordinates are translated to their corresponding pixels so the cars can be drawn frame by frame onto this image, which is then used as input to create the video. The dots are the cars, drawn at true scale (width and length taken from Wikipedia). The final montage was assembled with MoviePy.
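The 1 Hz → 60 Hz upsampling step can be sketched roughly as follows. This is a minimal illustration, not the pipeline's actual code: the column names and sample values are made up, and pandas' `method="spline"` delegates to SciPy under the hood.

```python
import numpy as np
import pandas as pd

FPS = 60

# Hypothetical 1 Hz engine samples (column names are made up):
# one row per second of speed and RPM from the OBD2 log.
raw = pd.DataFrame(
    {"speed_kmh": [80.0, 95.0, 110.0, 118.0, 121.0],
     "rpm": [3200.0, 4100.0, 4900.0, 5300.0, 5450.0]},
    index=[0, 1, 2, 3, 4],  # seconds since the start of the lap
)

# Re-index onto the 60 Hz frame grid (one row per video frame)...
frames = np.arange(0, raw.index[-1] * FPS + 1) / FPS
upsampled = raw.reindex(raw.index.union(frames))

# ...and fill the gaps with a cubic spline (pandas hands this off to SciPy),
# keeping only the frame-grid rows.
smooth = upsampled.interpolate(method="spline", order=3).loc[frames]
```

Four seconds of data at 60 FPS yields 241 rows (one per frame, fence-post included), each with interpolated speed and RPM values ready for animation.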
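The Kalman filtering of the coordinates could look something like the constant-velocity filter below. This is a generic NumPy sketch of the technique, not the project's implementation; the noise parameters `q` and `r` are made-up tuning values.

```python
import numpy as np

def kalman_smooth(xy, dt=1 / 60, q=1e-4, r=1e-2):
    """Constant-velocity Kalman filter over a track of noisy 2D positions.
    xy: (N, 2) array of coordinates; returns the filtered positions.
    q (process noise) and r (measurement noise) are illustrative values."""
    # State: [x, y, vx, vy]; measurements observe position only.
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # motion model
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # observation model
    Q = q * np.eye(4)                              # process noise
    R = r * np.eye(2)                              # measurement noise
    x = np.array([xy[0, 0], xy[0, 1], 0.0, 0.0])
    P = np.eye(4)
    out = np.empty_like(xy, dtype=float)
    for i, z in enumerate(xy):
        # Predict one time step ahead.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the noisy measurement.
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out[i] = x[:2]
    return out
```

In practice the filter smooths the jitter in the GPS fixes but lags slightly in tight corners, which matches the residual noise still visible in the video.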
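Both the tile crawling and the GPS-to-pixel translation come down to the Web Mercator projection used by Google's (and OSM's) map tiles. A sketch of that math, assuming a 256-pixel tile size and an image origin the pipeline would know from the tiles it crawled:

```python
import math

TILE_SIZE = 256  # size of one map tile in pixels

def latlon_to_pixel(lat, lon, zoom):
    """Project a WGS84 lat/lon onto global Web Mercator pixel
    coordinates at the given zoom level."""
    scale = TILE_SIZE * 2 ** zoom
    x = (lon + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return x, y

def latlon_to_image_pixel(lat, lon, zoom, origin):
    """Translate a GPS fix to a pixel in the stitched track image,
    given the global pixel coordinate of the image's top-left corner."""
    x, y = latlon_to_pixel(lat, lon, zoom)
    return x - origin[0], y - origin[1]

def tile_range(lat_min, lat_max, lon_min, lon_max, zoom):
    """Tile x/y index ranges covering a bounding box -- i.e. the grid
    of tiles to crawl and stitch into one large image."""
    x0, y0 = latlon_to_pixel(lat_max, lon_min, zoom)  # north-west corner
    x1, y1 = latlon_to_pixel(lat_min, lon_max, zoom)  # south-east corner
    return ((int(x0 // TILE_SIZE), int(x1 // TILE_SIZE)),
            (int(y0 // TILE_SIZE), int(y1 // TILE_SIZE)))
```

With the tile grid known, fetching each tile and pasting it at `(tile_x * 256, tile_y * 256)` relative to the grid origin produces the single ~15k × ~10k image of the track.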

The end product is an application pipeline of roughly 2k lines of Python that produces such a video when given a CSV with coordinates. The relevant track is identified using GeoPy and the OpenStreetMap server, which then kicks off the crawling and rendering process.

Rendering and smoothing take about 4 hours (!) on an 8-core, 32 GB RAM server (Hetzner). If necessary, this can be sped up almost linearly with Dask.
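Since every frame is rendered independently, the work parallelizes naturally. A minimal sketch of how Dask could spread the per-frame rendering across cores, with a stand-in render function (the real one would draw the car dots onto the stitched track image):

```python
import numpy as np
import dask
from dask import delayed

@delayed
def render_frame(i):
    """Stand-in for the expensive per-frame rendering step;
    returns a dummy 'image' instead of a real rendered frame."""
    return np.full((4, 4, 3), i, dtype=np.uint8)

# Build one lazy task per frame, then execute them in parallel.
tasks = [render_frame(i) for i in range(8)]
frames = dask.compute(*tasks, scheduler="threads")
```

Because the tasks share no state, throughput scales almost linearly with the number of workers, which is why the 4-hour render time drops nearly proportionally on more cores.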