After reading the tutorial, one may have noticed that writing certain pipelines can be tedious. Normalizing a dataset, for example, always involves the same boilerplate code.
This vignette presents a list of pipeline factories, i.e., functions that generate reusable pieces of pipelines. It also defines some functions that execute a pipeline, which may be handier than writing the pipeline out each time. The lists are not comprehensive and will be updated as good tools come to mind. Some are already installed natively in the package.
This pipeline is already installed in the package.
normalize = function(extrabytes = FALSE)
{
  tri = triangulate(filter = keep_ground())
  pipeline = tri
  if (extrabytes)
  {
    extra = add_extrabytes("int", "HAG", "Height Above Ground")
    trans = transform_with(tri, store_in_attribute = "HAG")
    pipeline = pipeline + extra + trans
  }
  else
  {
    trans = transform_with(tri)
    pipeline = pipeline + trans
  }
  return(pipeline)
}
It can be used this way:
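A minimal sketch of how the factory composes with other stages; the file paths and the output template are placeholders:

```r
# Hypothetical collection of LAS/LAZ files
files = "folder/*.las"

# The factory returns a piece of pipeline that can be chained with other stages
pipeline = reader_las() + normalize() + write_las(paste0(tempdir(), "/*_normalized.las"))
exec(pipeline, on = files)
```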
ground_csf = function(smooth = FALSE, threshold = 0.5, resolution = 0.5, rigidness = 1L, iterations = 500L, step = 0.65)
{
  csf = function(data, smooth, threshold, resolution, rigidness, iterations, step)
  {
    id = RCSF::CSF(data, smooth, threshold, resolution, rigidness, iterations, step)
    class = integer(nrow(data))
    class[id] = 2L
    data$Classification = class
    return(data)
  }
  classify = callback(csf, expose = "xyz", smooth = smooth, threshold = threshold, resolution = resolution, rigidness = rigidness, iterations = iterations, step = step)
  return(classify)
}
ground_mcc = function(s = 1.5, t = 0.3)
{
  mcc = function(data, s, t)
  {
    id = RMCC::MCC(data, s, t)
    class = integer(nrow(data))
    class[id] = 2L
    data$Classification = class
    return(data)
  }
  classify = callback(mcc, expose = "xyz", s = s, t = t)
  return(classify)
}
These pipelines use callback(), which exposes the point cloud as a data.frame. One of the reasons why lasR is more memory-efficient and faster than lidR is that it does not expose the point cloud as a data.frame. Thus, these pipelines are not very different from the classify_ground() function in lidR. The advantage of using lasR here is the ability to pipe different stages.
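As a sketch of that piping ability, a classification stage produced by one of these factories can be chained directly with a write stage; the file paths are placeholders, and the RCSF package is assumed to be installed:

```r
files = "folder/*.las"

# Classify ground with the cloth simulation filter, then write the result
pipeline = ground_csf() + write_las(paste0(tempdir(), "/*_classified.las"))
exec(pipeline, on = files)
```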
These two pipelines are natively installed in the package under the name chm().
This one is also natively installed in the package. add_class can be used to include additional classes to be treated as ground, such as 9 for water.
Writes LAS files or returns data.frames. Supports sf objects as input.
clip_circle = function(files, geometry, radius, ofiles = paste0(tempdir(), "/*_clipped.las"))
{
  if (sf::st_geometry_type(geometry, by_geometry = FALSE) != "POINT")
    stop("Expected POINT geometry type")

  coordinates = sf::st_coordinates(geometry)
  xcenter = coordinates[,1]
  ycenter = coordinates[,2]
  read = reader_las(xc = xcenter, yc = ycenter, r = radius)
  if (length(ofiles) == 1L && ofiles == "")
    stage = callback(function(data) { return(data) }, expose = "*", no_las_update = TRUE)
  else
    stage = write_las(ofiles)
  ans = exec(read + stage, on = files)
  return(ans)
}
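A usage sketch with hypothetical plot centers; the coordinates, the CRS, and the plot radius below are placeholders to be replaced with real inventory data:

```r
files = "folder/*.las"

# Hypothetical plot centers as an sf POINT object
centers = sf::st_as_sf(
  data.frame(x = c(885100, 885200), y = c(629200, 629300)),
  coords = c("x", "y"), crs = 26917)

# Write one clipped file per plot, or pass ofiles = "" to get data.frames back
res = clip_circle(files, centers, radius = 11.28)
```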
A CRS as sf object. The cost of applying hulls() is virtually zero.
It uses an sf object to provide the plot centers and offers the option to normalize on-the-fly. It returns the same sf object with extra attributes.
inventory_metrics = function(files, geometry, radius, fun, normalize = FALSE)
{
  if (sf::st_geometry_type(geometry, by_geometry = FALSE) != "POINT")
    stop("Expected POINT geometry type")

  coordinates <- sf::st_coordinates(geometry)
  xcenter <- coordinates[,1]
  ycenter <- coordinates[,2]
  pipeline <- reader_las(xc = xcenter, yc = ycenter, r = radius)
  if (normalize)
  {
    tri <- triangulate(filter = keep_ground())
    trans <- transform_with(tri)
    pipeline <- pipeline + tri + trans
  }
  pipeline <- pipeline + callback(fun, expose = "*")
  ans <- exec(pipeline, on = files)
  ans <- lapply(ans, as.data.frame)
  ans <- do.call(rbind, ans)
  return(cbind(geometry, ans))
}
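A usage sketch; the metric function, file paths, and plot centers below are hypothetical, and the function computing the metrics simply receives the exposed point cloud as a data.frame:

```r
files = "folder/*.las"

# Hypothetical plot centers as an sf POINT object
centers = sf::st_as_sf(
  data.frame(x = c(885100, 885200), y = c(629200, 629300)),
  coords = c("x", "y"), crs = 26917)

# A metric function returning, e.g., mean and max height per plot
zmetrics = function(data) { list(zmean = mean(data$Z), zmax = max(data$Z)) }

metrics = inventory_metrics(files, centers, radius = 11.28, fun = zmetrics, normalize = TRUE)
```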