Rotating LiDAR sensors are used to solve a growing number of computer vision challenges. They are robust to lighting conditions, offer a 360° field of view and a long range (hundreds of meters), and achieve a depth accuracy that 3D cameras have not yet matched. Moreover, their decreasing prices make them accessible in a wide variety of contexts. SLAM (Simultaneous Localization And Mapping) algorithms map a scene in 3D while localizing a device within it. SLAM is a powerful tool to place at the beginning of a pipeline, ahead of further tasks such as detection, segmentation or trajectory planning, because it aggregates points acquired from different viewpoints. Kitware presents its open-source LiDAR SLAM package (https://gitlab.kitware.com/keu-computervision/slam), which can be plugged in after any rotating LiDAR driver and which gathers powerful state-of-the-art tools into a competitive navigation system that adapts to any context through its modular parameterization. Sensor fusion has notably been implemented (IMU, GNSS, camera, wheel odometer, additional LiDAR, etc.) to leverage any external information that can increase the LiDAR SLAM's accuracy and robustness. In this talk, the core algorithm and its ROS wrapping will be presented first; then the most interesting related tools will be explained (pose graph optimization, confidence estimators, moving-object rejection, etc.). Finally, the speaker will discuss upcoming improvements and ideas to be developed so that, together, we can keep enriching the open-source robotics world!