Adjust documentation of tutorials to latest changes
Heiko Thiel committed Mar 13, 2019
1 parent 43e0bba commit 396c217
Showing 6 changed files with 45 additions and 45 deletions.
20 changes: 10 additions & 10 deletions doc/tutorials/content/ground_based_rgbd_people_detection.rst
@@ -20,7 +20,7 @@ Here it is the code:

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
:lines: 48-243
:lines: 48-247


The explanation
@@ -35,13 +35,13 @@ maximum (``max_h``) height of people can be set. If no parameter is set, the def

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
:lines: 67-78
:lines: 71-82

Here, the callback used for grabbing pointclouds with OpenNI is defined.

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
:lines: 80-87
:lines: 84-91
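
For orientation, here is a minimal sketch of such a grabber callback (the ``PointCloudT`` typedef, the mutex and the flag names are illustrative assumptions, not the exact identifiers of the tutorial source):

.. code-block:: cpp

   #include <mutex>
   #include <pcl/point_cloud.h>
   #include <pcl/point_types.h>

   typedef pcl::PointCloud<pcl::PointXYZRGBA> PointCloudT;

   // Called by the OpenNI grabber whenever a new frame arrives.
   void
   cloud_cb_ (const PointCloudT::ConstPtr& callback_cloud,
              PointCloudT::Ptr& cloud,
              std::mutex* cloud_mutex,
              bool* new_cloud_available_flag)
   {
     cloud_mutex->lock ();               // protect the shared cloud from the main thread
     *cloud = *callback_cloud;           // deep copy of the grabbed frame
     *new_cloud_available_flag = true;   // signal the main loop that fresh data is ready
     cloud_mutex->unlock ();
   }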

The people detection algorithm used makes the assumption that people stand/walk on a planar ground plane.
Thus, it requires the equation of the ground plane to be known in order to perform people detection.
@@ -52,7 +52,7 @@ the structure used to pass arguments to this callback.

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
:lines: 89-110
:lines: 93-114
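
For orientation, a hedged sketch of such an argument structure and of a point-picking callback registered with ``PCLVisualizer`` (the names ``callback_args``, ``clicked_points_3d`` and ``cb_args`` are illustrative assumptions):

.. code-block:: cpp

   struct callback_args
   {
     // structure used to pass arguments to the point-picking callback
     pcl::PointCloud<pcl::PointXYZRGBA>::Ptr clicked_points_3d;
     pcl::visualization::PCLVisualizer::Ptr viewerPtr;
   };

   void
   pp_callback (const pcl::visualization::PointPickingEvent& event, void* args)
   {
     callback_args* data = (callback_args*) args;
     if (event.getPointIndex () == -1)
       return;                            // the click did not hit any point
     pcl::PointXYZRGBA current_point;
     event.getPoint (current_point.x, current_point.y, current_point.z);
     data->clicked_points_3d->points.push_back (current_point);
     // show the selected floor points in red:
     pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZRGBA> red (data->clicked_points_3d, 255, 0, 0);
     data->viewerPtr->removePointCloud ("clicked_points");
     data->viewerPtr->addPointCloud (data->clicked_points_3d, red, "clicked_points");
   }

   // in main(): viewer.registerPointPickingCallback (pp_callback, (void*)&cb_args);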

Main:
*****
@@ -61,7 +61,7 @@ The main program starts by initializing the main parameters and reading the comm

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
:lines: 112-130
:lines: 116-134

Ground initialization:
**********************
@@ -74,7 +74,7 @@ After this, ``Q`` must be pressed in order to close the visualizer and let the p

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
:lines: 132-165
:lines: 136-160

.. image:: images/ground_based_rgbd_people_detection/Screen_floor.jpg
:align: center
@@ -89,7 +89,7 @@ written to the command window.

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
:lines: 167-175
:lines: 171-179

In the following lines, we can see the initialization of the SVM classifier by loading the pre-trained parameters
from file.
@@ -101,7 +101,7 @@ setSensorPortraitOrientation should be used to enable the vertical mode in :pcl:

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
:lines: 181-191
:lines: 185-195
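
A rough sketch of the classifier loading and detector configuration described here (the file name, voxel size and intrinsics are example values for a Kinect-like sensor, and setter names can vary slightly between PCL versions):

.. code-block:: cpp

   // Example parameters (assumptions, adapt to your setup):
   std::string svm_filename = "trainedLinearSVMForPeopleDetectionWithHOG.yaml";
   float voxel_size = 0.06f;
   Eigen::Matrix3f rgb_intrinsics_matrix;
   rgb_intrinsics_matrix << 525, 0.0, 319.5, 0.0, 525, 239.5, 0.0, 0.0, 1.0;

   // Load the pre-trained SVM used to classify person candidates:
   pcl::people::PersonClassifier<pcl::RGB> person_classifier;
   person_classifier.loadSVMFromFile (svm_filename);

   // Configure the ground-based people detector:
   pcl::people::GroundBasedPeopleDetectionApp<pcl::PointXYZRGBA> people_detector;
   people_detector.setVoxelSize (voxel_size);               // voxel size used for downsampling
   people_detector.setIntrinsics (rgb_intrinsics_matrix);   // RGB camera intrinsic parameters
   people_detector.setClassifier (person_classifier);       // plug in the person classifier
   // people_detector.setSensorPortraitOrientation (true);  // enable vertical mode if needed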

Main loop:
**********
@@ -113,7 +113,7 @@ This procedure allows to adapt to small changes which can occur to the ground pl

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
:lines: 197-210
:lines: 201-214

The last part of the code is devoted to visualization. In particular, a green 3D bounding box is drawn for every
person with HOG confidence above the ``min_confidence`` threshold. The width of the bounding box is fixed, while
@@ -123,7 +123,7 @@ Please note that this framerate includes the time necessary for grabbing the poi

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
:lines: 212-238
:lines: 216-242
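
Put together, the per-frame detection and drawing step looks roughly like this (a sketch under the assumption that ``people_detector``, ``ground``, ``viewer`` and ``min_confidence`` are set up as in the previous steps; the identifiers are illustrative):

.. code-block:: cpp

   // Perform people detection on the new cloud:
   std::vector<pcl::people::PersonCluster<pcl::PointXYZRGBA> > clusters;   // detection output
   people_detector.setInputCloud (cloud);
   people_detector.setGround (ground);      // ground plane estimated so far
   people_detector.compute (clusters);      // run people detection
   ground = people_detector.getGround ();   // keep the refined ground plane estimate

   // Draw a 3D bounding box for every detection above the confidence threshold:
   unsigned int k = 0;
   for (std::vector<pcl::people::PersonCluster<pcl::PointXYZRGBA> >::iterator it = clusters.begin (); it != clusters.end (); ++it)
   {
     if (it->getPersonConfidence () > min_confidence)
     {
       it->drawTBoundingBox (viewer, k);    // green bounding box in the visualizer
       k++;
     }
   }
   std::cout << k << " people found" << std::endl;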

Compiling and running the program
---------------------------------
14 changes: 7 additions & 7 deletions doc/tutorials/content/moment_of_inertia.rst
@@ -47,32 +47,32 @@ Now let's study out what is the purpose of this code. First few lines will be om

.. literalinclude:: sources/moment_of_inertia/moment_of_inertia.cpp
:language: cpp
:lines: 13-15
:lines: 16-18

These lines are simply loading the cloud from the .pcd file.

.. literalinclude:: sources/moment_of_inertia/moment_of_inertia.cpp
:language: cpp
:lines: 17-19
:lines: 20-22

Here is the line where the instantiation of the ``pcl::MomentOfInertiaEstimation`` class takes place.
Immediately after that, we set the input cloud and start the computational process; it's that easy.

.. literalinclude:: sources/moment_of_inertia/moment_of_inertia.cpp
:language: cpp
:lines: 21-31
:lines: 24-34

This is where we declare all the variables needed to store descriptors and bounding boxes.

.. literalinclude:: sources/moment_of_inertia/moment_of_inertia.cpp
:language: cpp
:lines: 33-39
:lines: 36-42

These lines show how to access computed descriptors and other features.
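
Condensed into one place, the computation and the accessor calls follow roughly this pattern (a sketch that repeats the relevant declarations so it can be read on its own; ``cloud`` is assumed to be the loaded point cloud):

.. code-block:: cpp

   pcl::MomentOfInertiaEstimation<pcl::PointXYZ> feature_extractor;
   feature_extractor.setInputCloud (cloud);
   feature_extractor.compute ();

   std::vector<float> moment_of_inertia;
   std::vector<float> eccentricity;
   pcl::PointXYZ min_point_AABB, max_point_AABB;
   pcl::PointXYZ min_point_OBB, max_point_OBB, position_OBB;
   Eigen::Matrix3f rotational_matrix_OBB;
   float major_value, middle_value, minor_value;
   Eigen::Vector3f major_vector, middle_vector, minor_vector;
   Eigen::Vector3f mass_center;

   feature_extractor.getMomentOfInertia (moment_of_inertia);     // moment of inertia descriptor
   feature_extractor.getEccentricity (eccentricity);             // eccentricity descriptor
   feature_extractor.getAABB (min_point_AABB, max_point_AABB);   // axis-aligned bounding box
   feature_extractor.getOBB (min_point_OBB, max_point_OBB, position_OBB, rotational_matrix_OBB);
   feature_extractor.getEigenValues (major_value, middle_value, minor_value);
   feature_extractor.getEigenVectors (major_vector, middle_vector, minor_vector);
   feature_extractor.getMassCenter (mass_center);                // center of mass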

.. literalinclude:: sources/moment_of_inertia/moment_of_inertia.cpp
:language: cpp
:lines: 41-47
:lines: 44-50

These lines simply create an instance of the ``PCLVisualizer`` class for result
visualization. Here we also add the cloud and the AABB for visualization. We
@@ -81,14 +81,14 @@ because the default is to use a solid cube.

.. literalinclude:: sources/moment_of_inertia/moment_of_inertia.cpp
:language: cpp
:lines: 49-52
:lines: 52-55

Visualization of the OBB is a little more complex. So here we create a quaternion from the rotational matrix, set the OBB's position
and pass them to the visualizer.
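
A sketch of that step (assuming the OBB variables obtained from the accessor calls shown earlier and a ``viewer`` of type ``pcl::visualization::PCLVisualizer::Ptr``):

.. code-block:: cpp

   // Convert the OBB rotation matrix into a quaternion and place a wireframe cube:
   Eigen::Vector3f position (position_OBB.x, position_OBB.y, position_OBB.z);
   Eigen::Quaternionf quat (rotational_matrix_OBB);
   viewer->addCube (position, quat,
                    max_point_OBB.x - min_point_OBB.x,
                    max_point_OBB.y - min_point_OBB.y,
                    max_point_OBB.z - min_point_OBB.z,
                    "OBB");
   viewer->setShapeRenderingProperties (pcl::visualization::PCL_VISUALIZER_REPRESENTATION,
                                        pcl::visualization::PCL_VISUALIZER_REPRESENTATION_WIREFRAME,
                                        "OBB");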

.. literalinclude:: sources/moment_of_inertia/moment_of_inertia.cpp
:language: cpp
:lines: 54-60
:lines: 57-63

These lines are responsible for visualizing the eigen vectors. The few lines that
are left simply launch the visualization process.
22 changes: 11 additions & 11 deletions doc/tutorials/content/normal_distributions_transform.rst
@@ -25,31 +25,31 @@ Now, let's breakdown this code piece by piece.

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 5-6
:lines: 10-11

These are the required header files to use the Normal Distributions Transform algorithm and a filter used to downsample the data. The filter can be exchanged for other filters, but I have found the approximate voxel filter to produce the best results.

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 14-30
:lines: 17-33

The above code loads the two pcd files into pcl::PointCloud<pcl::PointXYZ> boost shared pointers. The input cloud will be transformed into the reference frame of the target cloud.

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 32-39
:lines: 35-42

This section filters the input cloud to improve registration time. Any filter that downsamples the data uniformly can work for this section. The target cloud does not need to be filtered because the voxel grid data structure used by the NDT algorithm does not use individual points, but instead uses the statistical data of the points contained in each of its voxel cells.
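
As a sketch, the downsampling step described here boils down to the following pattern (the 0.2 m leaf size is only an example value; ``input_cloud`` is assumed to be the loaded scan):

.. code-block:: cpp

   #include <pcl/filters/approximate_voxel_grid.h>

   // Downsample the input cloud; the leaf size controls how aggressively points are merged.
   pcl::PointCloud<pcl::PointXYZ>::Ptr filtered_cloud (new pcl::PointCloud<pcl::PointXYZ>);
   pcl::ApproximateVoxelGrid<pcl::PointXYZ> approximate_voxel_filter;
   approximate_voxel_filter.setLeafSize (0.2, 0.2, 0.2);   // leaf size in meters
   approximate_voxel_filter.setInputCloud (input_cloud);
   approximate_voxel_filter.filter (*filtered_cloud);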

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 41-42
:lines: 44-45

Here we create the NDT algorithm with the default values. The internal data structures are not initialized until later.

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 44-50
:lines: 47-53


Next we need to modify some of the scale-dependent parameters. Because the NDT algorithm uses a voxelized data structure and More-Thuente line search, some parameters need to be scaled to fit the data set. The above parameters seem to work well on the scale we are working with (the size of a room), but they would need to be significantly decreased to handle smaller objects, such as scans of a coffee mug.
@@ -58,37 +58,37 @@ The Transformation Epsilon parameter defines minimum, allowable, incremental ch

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 52-53
:lines: 55-56

This parameter controls the maximum number of iterations the optimizer can run. For the most part, the optimizer will terminate on the Transformation Epsilon before hitting this limit but this helps prevent it from running for too long in the wrong direction.

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 55-58
:lines: 58-61

Here, we pass the point clouds to the NDT registration program. The input cloud is the cloud that will be transformed and the target cloud is the reference frame to which the input cloud will be aligned. When the target cloud is added, the NDT algorithm's internal data structure is initialized using the target cloud data.

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 60-63
:lines: 63-66

In this section of code, we create an initial guess about the transformation needed to align the point clouds. Though the algorithm can be run without such an initial transformation, you tend to get better results with one, particularly if there is a large discrepancy between reference frames. In robotic applications, such as the ones used to generate this data set, the initial transformation is usually generated using odometry data.

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 65-70
:lines: 68-73

Finally, we are ready to align the point clouds. The resulting transformed input cloud is stored in the output cloud. We then display the results of the alignment as well as the Euclidean fitness score, calculated as the sum of squared distances from the output cloud to the closest point in the target cloud.
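
Put end to end, the configuration and alignment calls look roughly like this (a sketch; the parameter values are the room-scale settings discussed above, and the initial-guess numbers are placeholders standing in for, e.g., odometry data):

.. code-block:: cpp

   pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
   ndt.setTransformationEpsilon (0.01);   // minimum transformation difference for termination
   ndt.setStepSize (0.1);                 // maximum step size for More-Thuente line search
   ndt.setResolution (1.0);               // side length of the NDT grid voxels
   ndt.setMaximumIterations (35);         // hard limit on optimizer iterations

   ndt.setInputSource (filtered_cloud);   // cloud that will be transformed
   ndt.setInputTarget (target_cloud);     // reference frame

   // Rough initial guess (placeholder values):
   Eigen::AngleAxisf init_rotation (0.69, Eigen::Vector3f::UnitZ ());
   Eigen::Translation3f init_translation (1.79, 0.72, 0);
   Eigen::Matrix4f init_guess = (init_translation * init_rotation).matrix ();

   pcl::PointCloud<pcl::PointXYZ>::Ptr output_cloud (new pcl::PointCloud<pcl::PointXYZ>);
   ndt.align (*output_cloud, init_guess);

   std::cout << "Converged: " << ndt.hasConverged ()
             << " score: " << ndt.getFitnessScore () << std::endl;

   // Transform the original (unfiltered) input cloud with the final transformation:
   pcl::transformPointCloud (*input_cloud, *output_cloud, ndt.getFinalTransformation ());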

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 72-76
:lines: 75-79

Immediately after the alignment process, the output cloud will contain a transformed version of the filtered input cloud because we passed the algorithm a filtered point cloud, as opposed to the original input cloud. To obtain the aligned version of the original cloud, we extract the final transformation from the NDT algorithm and transform our original input cloud. We can now save this cloud to file ``room_scan2_transformed.pcd`` for future use.

.. literalinclude:: sources/normal_distributions_transform/normal_distributions_transform.cpp
:language: cpp
:lines: 78-106
:lines: 81-109

This next part is unnecessary but I like to visually see the results of my labors. With PCL's visualizer classes, this can be easily accomplished. We first generate a visualizer with a black background. Then we colorize our target and output cloud, red and green respectively, and load them into the visualizer. Finally we start the visualizer and wait for the window to be closed.

6 changes: 3 additions & 3 deletions doc/tutorials/content/random_sample_consensus.rst
@@ -56,19 +56,19 @@ The following source code initializes two PointClouds and fills one of them with

.. literalinclude:: sources/random_sample_consensus/random_sample_consensus.cpp
:language: cpp
:lines: 30-63
:lines: 33-66

Next we create a vector of ints that can store the locations of our inlier points from our PointCloud. Now we can build our RandomSampleConsensus object using either a plane or a sphere model from our input cloud.

.. literalinclude:: sources/random_sample_consensus/random_sample_consensus.cpp
:language: cpp
:lines: 65-85
:lines: 68-88

This last bit of code copies all of the points that fit our model to another cloud and then displays either that cloud or our original cloud in the viewer.

.. literalinclude:: sources/random_sample_consensus/random_sample_consensus.cpp
:language: cpp
:lines: 87-96
:lines: 90-99
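
As a rough sketch of these two steps (assuming ``cloud`` is the filled input cloud from the beginning of the program):

.. code-block:: cpp

   #include <pcl/sample_consensus/ransac.h>
   #include <pcl/sample_consensus/sac_model_sphere.h>
   #include <pcl/common/io.h>

   std::vector<int> inliers;   // indices of points that agree with the fitted model

   // Fit a sphere model to the cloud with RANSAC:
   pcl::SampleConsensusModelSphere<pcl::PointXYZ>::Ptr
     model_s (new pcl::SampleConsensusModelSphere<pcl::PointXYZ> (cloud));
   pcl::RandomSampleConsensus<pcl::PointXYZ> ransac (model_s);
   ransac.setDistanceThreshold (0.01);   // points closer than 1 cm to the model count as inliers
   ransac.computeModel ();
   ransac.getInliers (inliers);

   // Copy the inlier points into a separate cloud for display:
   pcl::PointCloud<pcl::PointXYZ>::Ptr final (new pcl::PointCloud<pcl::PointXYZ>);
   pcl::copyPointCloud (*cloud, inliers, *final);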

There is some extra code that relates to the display of the PointClouds in the 3D Viewer, but I'm not going to explain that here.

16 changes: 8 additions & 8 deletions doc/tutorials/content/region_growing_rgb_segmentation.rst
@@ -33,13 +33,13 @@ Let's take a look at first lines that are of interest:

.. literalinclude:: sources/region_growing_rgb_segmentation/region_growing_rgb_segmentation.cpp
:language: cpp
:lines: 16-21
:lines: 20-25

They are simply loading the cloud from the .pcd file. Note that the points must have color information.

.. literalinclude:: sources/region_growing_rgb_segmentation/region_growing_rgb_segmentation.cpp
:language: cpp
:lines: 30-30
:lines: 34-34

This line is responsible for ``pcl::RegionGrowingRGB`` instantiation. This class has two parameters:

@@ -49,40 +49,40 @@

.. literalinclude:: sources/region_growing_rgb_segmentation/region_growing_rgb_segmentation.cpp
:language: cpp
:lines: 31-33
:lines: 35-37

These lines provide the instance with the input cloud, indices and search method.

.. literalinclude:: sources/region_growing_rgb_segmentation/region_growing_rgb_segmentation.cpp
:language: cpp
:lines: 34-34
:lines: 38-38

Here the distance threshold is set. It is used to determine whether a point is a neighbour or not: if a point is located at a distance less than
the given threshold, it is considered a neighbour. This threshold is used for the cluster neighbour search.

.. literalinclude:: sources/region_growing_rgb_segmentation/region_growing_rgb_segmentation.cpp
:language: cpp
:lines: 35-35
:lines: 39-39

This line sets the color threshold. Just as the angle threshold is used for testing point normals in ``pcl::RegionGrowing``
to determine whether a point belongs to a cluster, this value is used for testing point colors.

.. literalinclude:: sources/region_growing_rgb_segmentation/region_growing_rgb_segmentation.cpp
:language: cpp
:lines: 36-36
:lines: 40-40

Here the color threshold for clusters is set. This value is similar to the previous one, but is used when the merging process takes place.

.. literalinclude:: sources/region_growing_rgb_segmentation/region_growing_rgb_segmentation.cpp
:language: cpp
:lines: 37-37
:lines: 41-41

This value is similar to the one used in the :ref:`region_growing_segmentation` tutorial. In addition, it is used for the merging process mentioned at the beginning.
If a cluster has fewer points than the number set through the ``setMinClusterSize`` method, it will be merged with its nearest neighbour.

.. literalinclude:: sources/region_growing_rgb_segmentation/region_growing_rgb_segmentation.cpp
:language: cpp
:lines: 39-40
:lines: 43-44

Here is the place where the algorithm is launched. It returns the array of clusters once the segmentation process is over.
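
Taken together, the configuration and the final call follow this pattern (a condensed sketch of the steps discussed above; ``cloud`` and ``indices`` are assumed to be prepared as in the earlier snippets, and the threshold values are only example settings):

.. code-block:: cpp

   #include <pcl/search/kdtree.h>
   #include <pcl/segmentation/region_growing_rgb.h>

   pcl::search::Search<pcl::PointXYZRGB>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZRGB>);

   pcl::RegionGrowingRGB<pcl::PointXYZRGB> reg;
   reg.setInputCloud (cloud);
   reg.setIndices (indices);             // e.g. produced by a pass-through filter
   reg.setSearchMethod (tree);
   reg.setDistanceThreshold (10);        // neighbour search distance
   reg.setPointColorThreshold (6);       // color test between points
   reg.setRegionColorThreshold (5);      // color test between clusters during merging
   reg.setMinClusterSize (600);          // clusters smaller than this get merged

   std::vector<pcl::PointIndices> clusters;
   reg.extract (clusters);               // run the segmentation

   // Optional: a cloud where each segment has its own color, for visualization.
   pcl::PointCloud<pcl::PointXYZRGB>::Ptr colored_cloud = reg.getColoredCloud ();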

12 changes: 6 additions & 6 deletions doc/tutorials/content/tracking.rst
@@ -60,40 +60,40 @@ Now, let's break down the code piece by piece.

.. literalinclude:: sources/tracking/tracking_sample.cpp
:language: cpp
:lines: 224-239
:lines: 227-242


First, in the main function, these lines set the parameters for tracking.

.. literalinclude:: sources/tracking/tracking_sample.cpp
:language: cpp
:lines: 243-254
:lines: 246-257

Here, we set the likelihood functions which the tracker uses when calculating weights. You can add more likelihood functions as you like. By default, there are normal and color likelihood functions. When you want to add another likelihood function, all you have to do is initialize a new Coherence class and add the Coherence instance to the coherence variable with the addPointCoherence function.
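
A hedged sketch of what adding such a coherence (likelihood) term can look like, assuming tracker type definitions along the lines of ``RefPointType`` and a ``tracker_`` object as used in this tutorial:

.. code-block:: cpp

   // Overall coherence that combines the individual likelihood terms:
   pcl::tracking::ApproxNearestPairPointCloudCoherence<RefPointType>::Ptr coherence
     (new pcl::tracking::ApproxNearestPairPointCloudCoherence<RefPointType>);

   // Distance-based likelihood; a color-based coherence could be added the same way.
   pcl::tracking::DistanceCoherence<RefPointType>::Ptr distance_coherence
     (new pcl::tracking::DistanceCoherence<RefPointType>);
   coherence->addPointCoherence (distance_coherence);

   // Nearest-neighbour search used when evaluating the coherence:
   pcl::search::Octree<RefPointType>::Ptr search (new pcl::search::Octree<RefPointType> (0.01));
   coherence->setSearchMethod (search);
   coherence->setMaximumDistance (0.01);

   tracker_->setCloudCoherence (coherence);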

.. literalinclude:: sources/tracking/tracking_sample.cpp
:language: cpp
:lines: 256-269
:lines: 259-272

In this part, we set the point cloud loaded from the pcd file as the reference model for the tracker and also set the model's transform values.

.. literalinclude:: sources/tracking/tracking_sample.cpp
:language: cpp
:lines: 170-177
:lines: 173-180


Until the counter variable becomes equal to 10, we ignore the input point cloud, because the point clouds in the first few frames often contain noise. After the counter reaches 10 frames, at each loop we pass the downsampled input point cloud to the tracker, and the tracker computes the particles' movement.

.. literalinclude:: sources/tracking/tracking_sample.cpp
:language: cpp
:lines: 79-79
:lines: 82-82


In the drawParticles function, you can get the particles' positions by calling getParticles().

.. literalinclude:: sources/tracking/tracking_sample.cpp
:language: cpp
:lines: 113-114
:lines: 116-117
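
A sketch of how those positions can be turned into a small point cloud for display (assuming a ``ParticleFilter`` typedef for the tracker type and the ``tracker_`` object from this tutorial):

.. code-block:: cpp

   ParticleFilter::PointCloudStatePtr particles = tracker_->getParticles ();
   if (particles)
   {
     pcl::PointCloud<pcl::PointXYZ>::Ptr particle_cloud (new pcl::PointCloud<pcl::PointXYZ>);
     for (std::size_t i = 0; i < particles->points.size (); ++i)
     {
       pcl::PointXYZ point;
       point.x = particles->points[i].x;   // each particle state carries a pose estimate
       point.y = particles->points[i].y;
       point.z = particles->points[i].z;
       particle_cloud->push_back (point);
     }
     // particle_cloud can now be added to the visualizer, e.g. drawn in red.
   }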



