Title: Localization Microscopy Data Analysis
Description: Read, register and compare point sets from single molecule localization microscopy.
Authors: Jean-Karim Heriche [cre, aut]
Maintainer: Jean-Karim Heriche <[email protected]>
License: GPL-3
Version: 0.5.0
Built: 2025-02-24 04:03:40 UTC
Source: https://github.com/cran/LOMAR
Apply a rotation, translation and scaling to a point set
apply_transformation(X, R, t, s)
X: a point set as an N x D matrix
R: D x D rotation matrix
t: 1 x D translation vector
s: scaling factor
the transformed point set as an N x D matrix
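For illustration, a minimal sketch with made-up data; rotz is the package's z-axis rotation helper documented further below:
# Rotate a random 3D point set by 45 degrees about z,
# translate it by (1, 2, 3) and keep the scale unchanged
X <- matrix(rnorm(30), ncol = 3)
Xt <- apply_transformation(X, R = rotz(pi/4), t = c(1, 2, 3), s = 1)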
Convert a 4d array to a list of 3d point sets. The points are formed by extracting the coordinates of array values strictly above the given cut-off (default 0).
ary2ps(ary, bkg = 0)
ary: a 4d array with the last dimension indexing instances.
bkg: extract points for array values strictly above this cut-off (default: 0)
a list of point sets.
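A minimal sketch with a made-up array holding two 4x4x4 instances:
ary <- array(0, dim = c(4, 4, 4, 2))
ary[1, 2, 3, 1] <- 1   # a voxel in the first instance
ary[2, 2, 2, 1] <- 2
ary[4, 4, 1, 2] <- 1   # a voxel in the second instance
PS <- ary2ps(ary)      # list of two point sets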
Binning in 1D, 2D or 3D.
binning(x, y, nbins, xrange = NULL)
x: design matrix of dimension n x d with d in 1:3.
y: either a response vector of length n or NULL.
nbins: vector of length d containing the number of bins for each dimension; may be set to NULL.
xrange: range for the endpoints of the bins in each dimension, either a matrix of dimension 2 x d or NULL. xrange is increased if the cube it defines does not contain all design points.
Copied from the package aws, which is no longer on CRAN. Original author: Joerg Polzehl ([email protected]), who adapted code from the function binning in the package sm.
a list with elements:
x: matrix of coordinates of non-empty bin centers
x.freq: number of observations in non-empty bins
midpoints.x1: bin centers in dimension 1
midpoints.x2: bin centers in dimension 2
midpoints.x3: bin centers in dimension 3
breaks.x1: break points in dimension 1
breaks.x2: break points in dimension 2
breaks.x3: break points in dimension 3
table.freq: number of observations per bin
means: means of y in non-empty bins (if y isn't NULL)
devs: standard deviations of y in non-empty bins (if y isn't NULL)
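A minimal sketch of 2D binning without a response; the data are made up for the example:
x <- matrix(runif(200), ncol = 2)            # 100 points in the unit square
b <- binning(x, y = NULL, nbins = c(10, 10))
b$x.freq                                     # counts in the non-empty bins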
Extract coordinates of the centres of circles from a 2D image using the Hough transform
circle_hough_transform(pixels, rmin, rmax, threshold, resolution = 360, min.separation = rmin/4, ncpu = 1)
pixels: input data, either a matrix representing a 2D image or a data frame of signal coordinates with columns x, y. For images, the background is expected to be 0 and the signal to have positive values.
rmin: minimum search radius.
rmax: maximum search radius.
threshold: score threshold between 0 and 1.
resolution: number of steps in the circle transform (default: 360). This represents the maximum number of votes a point can get.
min.separation: distance between circle centres below which overlapping circles are considered the same and merged (default: rmin/4)
ncpu: number of threads to use to speed up computation (default: 1)
a data frame with columns x, y, r and score
point.set <- data.frame(x = c(-9.8,-5.2,12.5,2.5,4.5,1.3,-0.2,0.4,9.3,-1.4,0.5,-1.1,-7.7),
                        y = c(-4.2,1.5,-0.5,12,-3,-7.2,10.9,6.7,-1.3,10,6.7,-6.2,2.9))
circles <- circle_hough_transform(pixels = point.set, rmin = 3, rmax = 6,
                                  resolution = 100, threshold = 0.1, ncpu = 1)
Compute a co-localization index between two sets of points. Adapted from: Willems and MacGillavry, A coordinate-based co-localization index to quantify and visualize spatial associations in single-molecule localization microscopy. Sci Rep 12, 4676 (2022). https://doi.org/10.1038/s41598-022-08746-4
coloc_index(P1, locprec1 = NULL, locprecz1 = NULL, P2, locprec2 = NULL, locprecz2 = NULL)
P1: a point set as a matrix or data frame with columns x,y,z.
locprec1: (optional) localization precision in x,y for P1
locprecz1: (optional) localization precision along z for P1
P2: a point set as a matrix or data frame with columns x,y,z.
locprec2: (optional) localization precision in x,y for P2
locprecz2: (optional) localization precision along z for P2
This can be seen as measuring the similarity between two spatial distributions. Co-clustering in dense structures can give values above 1.
Localization precision is optional but if used then all locprec parameters must be specified.
a list with two elements:
vector of co-localization indices for points in P1 relative to P2
vector of co-localization indices for points in P2 relative to P1
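A minimal sketch with two made-up point sets and no localization precision:
P1 <- data.frame(x = rnorm(50, sd = 10), y = rnorm(50, sd = 10), z = rnorm(50, sd = 10))
P2 <- data.frame(x = rnorm(50, 2, 10), y = rnorm(50, 2, 10), z = rnorm(50, 2, 10))
ci <- coloc_index(P1 = P1, P2 = P2)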
Objective function to minimize when using GMMs
costWd(Tr, X, Y, CX, CY, w1 = NULL, w2 = NULL, S = NULL)
Tr: transformation vector as translation vector + rotation (angle in 2d, quaternion in 3d)
X: matrix of means of the first GMM (i.e. reference point set)
Y: matrix of means of the second GMM (i.e. moving point set)
CX: array of covariance matrices of the first GMM such that X[i,] has covariance matrix CX[,,i]
CY: array of covariance matrices of the second GMM such that Y[i,] has covariance matrix CY[,,i]
w1: (optional) vector of mixture weights of the first GMM.
w2: (optional) vector of mixture weights of the second GMM.
S: (optional) array of pre-computed sqrtm(sqrtm(CX[,,i]) %*% CY[,,j] %*% sqrtm(CX[,,i]))
cost value
Affine and rigid registration of two point sets using the coherent point drift algorithm. See: Myronenko A., Song X. (2010): "Point-Set Registration: Coherent Point Drift", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, issue 12, pp. 2262-2275.
cpd(X, Y, w = 0, weights = NULL, scale = FALSE, maxIter = 100, subsample = NULL, tol = 1e-04)
X: reference point set, an N x D matrix
Y: point set to transform, an M x D matrix
w: noise weight in the range [0, 1)
weights: an M x N matrix of point correspondence weights
scale: logical (default: FALSE), whether to use scaling
maxIter: maximum number of iterations to perform (default: 100)
subsample: if set, use this randomly selected fraction of the points
tol: tolerance for determining convergence
a list of
Y: transformed point set,
R: rotation matrix,
t: translation vector,
s: scaling factor,
P: matrix of correspondence probabilities between the two point sets,
sigma: final variance,
iter: number of iterations performed,
converged: boolean, whether the algorithm has converged.
data.file1 <- system.file("test_data", "parasaurolophusA.txt", package = "LOMAR", mustWork = TRUE)
PS1 <- read.csv(data.file1, sep = '\t', header = FALSE)
data.file2 <- system.file("test_data", "parasaurolophusB.txt", package = "LOMAR", mustWork = TRUE)
PS2 <- read.csv(data.file2, sep = '\t', header = FALSE)
transformation <- cpd(PS1, PS2, maxIter = 10, tol = 1e-3)
## Not run: 
# Visualize registration outcome
library(rgl)
plot3d(PS1, col = "blue")
points3d(PS2, col = "green")
points3d(transformation[['Y']], col = "magenta")
## End(Not run)
Retain points in the set that are within the given distance from the geometric median of the set. Using the geometric median is more robust than using the centre of mass (i.e. mean).
crop_point_set(point.set, size, center = NULL)
point.set: a point set as a matrix with columns x,y,z.
size: vector of distances from the target region centre along each axis. Points are discarded if they are outside the ellipsoid defined by size and centred on the given position.
center: (optional) coordinates of the centre of the target region. If not given, defaults to the geometric median of the point set.
point set as a matrix with columns x,y,z.
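A minimal sketch, keeping points within 5 units of the geometric median along each axis; the data are made up for the example:
ps <- matrix(rnorm(300, sd = 4), ncol = 3, dimnames = list(NULL, c("x", "y", "z")))
cropped <- crop_point_set(ps, size = c(5, 5, 5))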
Point density is estimated using a Gaussian mixture model and points in low density regions are considered as noise and removed.
denoise(points, k = 16, prob = 0.3)
points: a data frame with columns x,y,z.
k: integer, number of mixture components for the GMM (default: 16)
prob: probability level in the range [0,1] used to identify high density regions (default: 0.3)
a point set
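A minimal sketch: a dense cluster plus uniform background noise, both made up for the example (k is deliberately small for such a small data set):
cluster <- data.frame(x = rnorm(200), y = rnorm(200), z = rnorm(200))
noise <- data.frame(x = runif(50, -10, 10), y = runif(50, -10, 10), z = runif(50, -10, 10))
cleaned <- denoise(rbind(cluster, noise), k = 4, prob = 0.3)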
Given a point set and an alpha-shape, get the distance of each point to the closest boundary point of the alpha-shape. Points inside the shape get negative values.
dist_to_boundary(points, shape)
points: a data frame with x,y,z columns
shape: an object of class ashape3d with a single alpha value
vector of distances (negative values indicate points inside the shape)
Compute distance between a set of points and a line defined by two points
dist_to_line(pts, a = NULL, b = NULL)
pts: a data frame or matrix with 3 columns of coordinates
a: vector of coordinates of a point on the line
b: vector of coordinates of a second point on the line
vector of distances
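A minimal sketch, measuring distances to the x-axis; the points are made up for the example:
pts <- matrix(c(0, 1, 0,
                2, 0, 2,
                5, 3, 4), ncol = 3, byrow = TRUE)
d <- dist_to_line(pts, a = c(0, 0, 0), b = c(1, 0, 0))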
Weighted downsampling of a point set. If point weights are not provided, they are computed to be proportional to the local density around each point.
downsample(point.set, n = NULL, k = NULL, weights = NULL)
point.set: a point set
n: integer, sample size.
k: integer, number of nearest neighbours considered when estimating local density
weights: a vector of probability weights
a point set
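A minimal sketch, drawing 100 points with density-derived weights; the data are made up for the example:
ps <- matrix(rnorm(3000), ncol = 3)
smaller <- downsample(ps, n = 100, k = 10)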
Find the elbow in a 2D curve represented by a vector of ordered values
find_elbow(values)
values: vector of values in decreasing order
This function finds the point with maximum distance from the line between the first and last points. Adapted from StackOverflow: http://stackoverflow.com/questions/2018178/finding-the-best-trade-off-point-on-a-curve
index and value of the selected point
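A minimal sketch on a made-up decreasing curve with a clear bend:
values <- c(100, 60, 35, 20, 12, 10, 9, 8.5, 8, 7.8)
elbow <- find_elbow(values)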
Compute 2-Wasserstein distance between two Gaussian distributions
Gaussian_Wd(m1, m2, S1, S2, S = NULL)
m1: mean of the first distribution
m2: mean of the second distribution
S1: variance of the first distribution
S2: variance of the second distribution
S: (optional) matrix of pre-computed sqrtm(sqrtm(S1) %*% S2 %*% sqrtm(S1))
distance value
Compute kernel/distance matrix between persistence diagrams.
get_kernel_matrix(Diag = NULL, method = c("sWd", "pssk"), dimensions = NULL, return.dist = FALSE, M = NULL, sigma = NULL, ncpu = 1, cluster.type = "PSOCK")
Diag: list of persistence diagrams as n x 3 matrices
method: which kernel or distance to compute. One of sWd (for the sliced Wasserstein kernel) or pssk (for the persistence scale-space kernel)
dimensions: vector of the dimensions of the topological features to consider; if NULL (default), use all available dimensions
return.dist: logical (default: FALSE); for method sWd, whether to return the sliced Wasserstein distance matrix instead of the kernel.
M: number of slices for the sliced Wasserstein kernel
sigma: kernel bandwidth
ncpu: number of parallel threads to use for computation
cluster.type: type of multicore cluster to use, either PSOCK (default) or FORK
a matrix
PS <- list(data.frame(x = c(2.4,-6.9,4.6,-0.7,-3.3,-4.9,-3.5,-3.5,4.2,-7),
                      y = c(5.7,1.9,4.8,3.4,-3,-2.1,7.2,1.8,6.1,-1.6),
                      z = c(2.7,-0.1,-0.7,-0.6,0.4,-1.5,-0.6,-0.9,2.2,0.7)),
           data.frame(x = c(0,0,3.1,-5.6,-5,-7.4,-0.7,-7.7,-6.7,4,4.2,0.2,5.8,3.9,3.9),
                      y = c(6.3,-6.1,-3.5,4.6,-4.1,0.3,8.8,-2.3,2.9,3.7,-1.4,-3.9,5.5,-1.2,-6.7),
                      z = c(-1.5,1.7,-0.4,-1.4,1.8,1.7,-0.9,-1.8,-0.5,1.7,1.3,0.5,-1.4,1.6,-0.1)),
           data.frame(x = c(-9.8,-5.2,12.5,2.5,4.5,1.3,-0.2,0.4,9.3,-1.4,0.5,-1.1,-7.7),
                      y = c(-4.2,1.5,-0.5,12,-3,-7.2,10.9,6.7,-1.3,10,6.7,-6.2,2.9),
                      z = c(3.4,-3.8,-1.4,1.8,3.5,2.5,2.6,-4.8,-3.8,3.9,4.1,-3.6,-4)))
Dgs <- get_persistence_diagrams(point.sets = PS, maxdimension = 1, maxscale = 5, ncpu = 1)
K <- get_kernel_matrix(Diag = Dgs, method = 'sWd', dimensions = c(0,1), M = 10, sigma = 5)
Compute persistence diagrams for a list of point sets. By default, compute persistent homology from the Vietoris-Rips filtration. If use.dtm is TRUE, compute instead the persistent homology of the sublevel set of the distance to measure evaluated over a grid.
get_persistence_diagrams(point.sets = NULL, maxdimension = NULL, maxscale = NULL, use.dtm = FALSE, m0 = NULL, grid.by = NULL, ncpu = 1, cluster.type = "PSOCK")
point.sets: list of point sets, each as a data frame with columns x,y,z
maxdimension: maximum dimension of the homological features to be computed
maxscale: limit of the Vietoris-Rips filtration
use.dtm: logical (default: FALSE), whether to use the distance to measure function
m0: parameter for the dtm function
grid.by: vector of spacing between points of the grid for the dtm function along each dimension
ncpu: number of parallel threads to use for computation
cluster.type: type of multicore cluster to use, either PSOCK (default) or FORK
a list of persistence diagrams as n x 3 matrices. Each row is a topological feature and the columns are dimension, birth and death of the feature.
PS <- list(data.frame(x = c(2.4,-6.9,4.6,-0.7,-3.3,-4.9,-3.5,-3.5,4.2,-7),
                      y = c(5.7,1.9,4.8,3.4,-3,-2.1,7.2,1.8,6.1,-1.6),
                      z = c(2.7,-0.1,-0.7,-0.6,0.4,-1.5,-0.6,-0.9,2.2,0.7)),
           data.frame(x = c(0,0,3.1,-5.6,-5,-7.4,-0.7,-7.7,-6.7,4,4.2,0.2,5.8,3.9,3.9),
                      y = c(6.3,-6.1,-3.5,4.6,-4.1,0.3,8.8,-2.3,2.9,3.7,-1.4,-3.9,5.5,-1.2,-6.7),
                      z = c(-1.5,1.7,-0.4,-1.4,1.8,1.7,-0.9,-1.8,-0.5,1.7,1.3,0.5,-1.4,1.6,-0.1)))
Diags <- get_persistence_diagrams(point.sets = PS, maxdimension = 1, maxscale = 5, ncpu = 1)
Get the alpha-shape of a point set. If alpha is not given, the function automatically determines it using a downsampled point set. As a consequence, alpha, and therefore the computed shape, can vary slightly between runs.
get_shape(points, alpha = NULL)
points: a data frame with columns x, y, z.
alpha: (optional) positive number
an alpha-shape object of class ashape3d
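A minimal sketch pairing get_shape with dist_to_boundary (documented above); the point set is made up for the example:
ps <- data.frame(x = rnorm(100), y = rnorm(100), z = rnorm(100))
shape <- get_shape(ps)            # alpha determined automatically
d <- dist_to_boundary(ps, shape)  # negative values: points inside the shape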
Compute the surface area of an alpha-shape by summing the surfaces of the boundary triangles
get_surface_area(as)
as: an alpha-shape object of class ashape3d
a numeric value
Compute the 2-Wasserstein distance between two Gaussian mixture models. See: Delon J, Desolneux A. (2019) A Wasserstein-type distance in the space of Gaussian Mixture Models. hal-02178204v2
GMM_Wd(m1, m2, S1, S2, w1 = NULL, w2 = NULL, S = NULL)
m1: matrix of means of the first GMM
m2: matrix of means of the second GMM
S1: array of covariance matrices of the first GMM such that m1[i,] has covariance matrix S1[,,i]
S2: array of covariance matrices of the second GMM such that m2[i,] has covariance matrix S2[,,i]
w1: (optional) vector of mixture weights of the first GMM.
w2: (optional) vector of mixture weights of the second GMM.
S: (optional) array of pre-computed sqrtm(sqrtm(S1[,,i]) %*% S2[,,j] %*% sqrtm(S1[,,i]))
list of distance value d and optimal transport matrix ot
Gradient of the objective function with respect to rotation and translation parameters
gradientWd(Tr, X, Y, CX, CY, w1 = NULL, w2 = NULL, S = NULL)
Tr: transformation vector as translation vector + rotation (angle in 2d, quaternion in 3d)
X: matrix of means of the first GMM (i.e. reference point set)
Y: matrix of means of the second GMM (i.e. moving point set)
CX: array of covariance matrices of the first GMM such that X[i,] has covariance matrix CX[,,i]
CY: array of covariance matrices of the second GMM such that Y[i,] has covariance matrix CY[,,i]
w1: (optional) vector of mixture weights of the first GMM.
w2: (optional) vector of mixture weights of the second GMM.
S: (optional) array of pre-computed sqrtm(sqrtm(CX[,,i]) %*% CY[,,j] %*% sqrtm(CX[,,i]))
gradient vector
Localisation events are grouped by recursively clustering mutual nearest neighbours. Neighbours are determined using the Mahalanobis distance to account for anisotropy in the localisation precision. Since the Mahalanobis distance approximately follows a chi-squared distribution, a distance threshold can be chosen from a chi-squared table where the number of degrees of freedom is the dimension and the significance level p can be seen as the probability of missing a localization event generated from the same fluorophore as the event under consideration.
group_events(points, locprec = NULL, locprecz = NULL, p = 0.1)
points: a data frame with columns x,y,z.
locprec: localization precision in x,y
locprecz: localization precision along z, defaults to locprec
p: confidence level, see description. Defaults to 0.1.
a list with two elements:
points: a point set as data frame with columns x,y,z
membership: a vector of integers indicating the cluster to which each input point is allocated.
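A minimal sketch on made-up localizations with an isotropic 20 nm precision:
locs <- data.frame(x = rnorm(100, sd = 50), y = rnorm(100, sd = 50), z = rnorm(100, sd = 50))
grouped <- group_events(locs, locprec = 20, p = 0.1)
str(grouped$membership)   # cluster allocation of each input point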
Rigid registration of two point sets using the iterative closest point algorithm.
icp(X, Y, weights = NULL, iterations = 100, subsample = NULL, scale = FALSE, tol = 0.001)
X: reference point set, an N x D matrix
Y: point set to transform, an M x D matrix
weights: vector of length nrow(Y) containing weights for each point in Y. Not implemented.
iterations: number of iterations to perform (default: 100)
subsample: if set, use this randomly selected fraction of the points
scale: logical (default: FALSE), whether to use scaling.
tol: tolerance for determining convergence
a list of
Y: transformed point set, a M x D matrix,
R: rotation matrix,
t: translation vector,
s: scaling factor,
iter: number of iterations performed,
conv: boolean, whether the algorithm has converged.
data.file1 <- system.file("test_data", "parasaurolophusA.txt", package = "LOMAR", mustWork = TRUE)
PS1 <- read.csv(data.file1, sep = '\t', header = FALSE)
data.file2 <- system.file("test_data", "parasaurolophusB.txt", package = "LOMAR", mustWork = TRUE)
PS2 <- read.csv(data.file2, sep = '\t', header = FALSE)
transformation <- icp(PS1, PS2, iterations = 10, tol = 1e-3)
## Not run: 
# Visualize registration outcome
library(rgl)
plot3d(PS1, col = "blue")
points3d(PS2, col = "green")
points3d(transformation[['Y']], col = "magenta")
## End(Not run)
Convert indices into a dist object to row, column coordinates of the corresponding distance matrix
idx2rowcol(idx, n)
idx: vector of indices
n: size of the n x n distance matrix
a matrix with two columns nr and nc
Read an image into a point set. The points are formed by extracting the coordinates of voxel values strictly above the given cut-off (default 0).
img2ps(img = NULL, bkg = 0, crop.size = NULL)
img: either a 2d or 3d array, or a path to a file containing a 2d or 3d image.
bkg: extract points for values strictly above this cut-off (default: 0).
crop.size: vector (of length 2 or 3) containing the desired reduced size of the image along each dimension, e.g. c(30,30,30).
a point set as matrix with columns x,y[,z]
img.file <- system.file("test_data/img", "alien1_3d.tif", package = "LOMAR", mustWork = TRUE)
point_set <- img2ps(img = img.file, bkg = 0)
Joint registration of multiple point sets. See: G. D. Evangelidis, D. Kounades-Bastian, R. Horaud, and E. Z. Psarakis. A generative model for the joint registration of multiple point sets. In European Conference on Computer Vision, pages 109–122. Springer, 2014.
jrmpc(V, C = NULL, K = NULL, g = NULL, initialPriors = NULL, updatePriors = TRUE, maxIter = 100, fixedVarIter = 0, tol = 0.01, initializeBy = NULL, model.selection = FALSE, model.selection.threshold = NULL, rotation.only = FALSE)
V: list of point sets as N x D matrices
C: (optional) list of arrays of covariance matrices with C[[j]][,,i] the covariance matrix associated with point i of set j.
K: (optional) number of components of the GMM, defaults to the average number of points in a set.
g: (optional) proportion of noisy points, defaults to 1/K. If set, priors will be initialized uniformly.
initialPriors: (optional) vector of length K of prior probabilities. Defaults to a uniform distribution using g. If set, this determines g, so it is an error to specify both g and initialPriors.
updatePriors: logical, whether to update priors at each iteration (default: TRUE).
maxIter: maximum number of iterations to perform (default: 100).
fixedVarIter: number of iterations before starting variance updates
tol: tolerance for determining convergence (default: 1e-2).
initializeBy: (optional) how to initialize the GMM means. Defaults to distributing the means on the surface of the sphere enclosing all (centred) sets.
model.selection: whether to perform model selection (default: FALSE). If TRUE, GMM components with no support in the data are deleted.
model.selection.threshold: value below which a GMM component is considered to have no support; set to 1/K if not explicitly given.
rotation.only: if TRUE, no translation is performed (default: FALSE)
a list of
Y: list of transformed point sets as N x d matrices,
R: list of d x d rotation matrices, one for each point set in V,
t: list of translation vectors, one for each point set in V,
M: centres of the GMM,
S: variances of the GMM,
a: list of posterior probabilities as N x K matrices,
iter: number of iterations performed,
conv: error value used to evaluate convergence relative to tol,
z: support scores of the GMM components.
X <- read.csv(system.file("test_data", "parasaurolophusA.txt", package="LOMAR", mustWork = TRUE), sep = "\t")
Y <- read.csv(system.file("test_data", "parasaurolophusB.txt", package="LOMAR", mustWork = TRUE), sep = "\t")
Z <- read.csv(system.file("test_data", "parasaurolophusC.txt", package="LOMAR", mustWork = TRUE), sep = "\t")
PS <- list(X, Y, Z)
C <- list()
for(i in 1:3) {
  cv <- diag(0.1, ncol(PS[[i]])) + jitter(0.01, amount = 0.01)
  cv <- replicate(nrow(PS[[i]]), cv)
  C[[i]] <- cv
}
transformation <- jrmpc(PS, C = C, K = 100, maxIter = 20, tol = 0.01, model.selection = TRUE)
## Not run: 
# Visualize registration outcome
library(rgl)
colours <- c("blue", "green", "magenta")
Yt <- transformation[['Y']]
plot3d(Yt[[1]], col = colours[1])
for(i in 2:length(Yt)) {
  points3d(Yt[[i]], col = colours[i])
}
# Visualize GMM centres, highlighting those with high variance
GMM <- as.data.frame(cbind(transformation[['M']], transformation[['S']]))
colnames(GMM) <- c("x", "y", "z", "S")
colours <- rep("blue", nrow(GMM))
# Find high variance components
threshold <- quantile(transformation[['S']], 0.75)
high.var.idx <- which(transformation[['S']] > threshold)
colours[high.var.idx] <- "red"
plot3d(GMM[, c("x", "y", "z")], col = colours, type = 's', size = 2, box = FALSE,
       xlab = '', ylab = '', zlab = '',
       xlim = c(-0.15, 0.15), ylim = c(-0.15, 0.15), zlim = c(-0.15, 0.15))
## End(Not run)
Compute local point density at each point of a point set
local_densities(X, k = NULL)
X: point set, an N x D matrix
k: (optional) number of nearest neighbours used (defaults to all points).
Local density is computed as in Ning X, Li F, Tian G, Wang Y (2018) An efficient outlier removal method for scattered point cloud data. PLOS ONE 13(8):e0201280. https://doi.org/10.1371/journal.pone.0201280
vector of density value for each point
Converts localization precision columns to a list of arrays of covariance matrices
locprec2cov(point.sets, scale = FALSE)
point.sets: a list of n point sets with locprec columns (a locprecz column is required for 3D data)
scale: logical, whether to scale the localization precision by the variance of the coordinates
a list of 2x2xn or 3x3xn arrays.
Reads and filters single molecule localization events from a csv file as typically output by the SMAP software. The main columns of interest are the coordinates (x, y, z), point set membership (site) and localization precision (locprec and locprecz).
locs_from_csv(file = NULL, roi = NULL, channels = NULL, frame.filter = NULL, llrel.filter = NULL, locprec.filter = 0, locprecz.filter = 0)
file: a csv file with columns x[nm], y[nm], z[nm] and optionally site[numbers], channel, locprec[nm] and locprecz[nm]; other columns are ignored.
roi: region of interest, keep points within the specified volume. Must be a data frame with columns x,y,z and rows min and max defining a bounding box.
channels: vector of integers indicating which channel(s) of a multicolour experiment to get data from.
frame.filter: vector of min and max values; filter out points from frames outside the specified range.
llrel.filter: vector of min and max values; filter out points on log-likelihood (for fitted data).
locprec.filter: filter out points with locprec value greater than the specified number. Points with locprec == 0 are also removed.
locprecz.filter: filter out points with locprecz value greater than the specified number. Points with locprecz == 0 are also removed.
a data frame with columns x,y,z, optionally site, locprec and locprecz.
data.file <- system.file("test_data", "simulated_NUP107_data.csv", package = "LOMAR", mustWork = TRUE)
locs <- locs_from_csv(file = data.file, locprec.filter = 20)
Cluster localizations into point sets using DBSCAN
locs2ps(points, eps, minPts, keep.locprec = TRUE, keep.channel = TRUE, cluster.2d = FALSE)
points: a point set as a data frame of coordinates with columns x,y,z.
eps: DBSCAN parameter, size of the epsilon neighbourhood
minPts: DBSCAN parameter, minimum number of points in the eps region
keep.locprec: logical (default: TRUE), whether to preserve the localization precision columns
keep.channel: logical (default: TRUE), whether to preserve the channel information column
cluster.2d: logical (default: FALSE), whether to cluster using only x,y (ignoring z)
a list of matrices with columns x,y,z and, if requested, locprec and locprecz, with names set to the cluster indices.
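A minimal sketch, clustering the package's simulated test data; the eps and minPts values are arbitrary choices for the example:
data.file <- system.file("test_data", "simulated_NUP107_data.csv", package = "LOMAR", mustWork = TRUE)
locs <- locs_from_csv(file = data.file, locprec.filter = 20)
point.sets <- locs2ps(locs, eps = 50, minPts = 3)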
Registration of multiple point sets using tree-guided progressive registration followed by iterative refinement.
multiple_registration(PS, registration, refine.iter = 20, ...)
PS: list of point sets
registration: pairwise registration method to use
refine.iter: maximum number of refinement iterations (default: 20)
...: additional arguments passed to the registration method
a list of
Y: list of transformed point sets as N x d matrices
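A minimal sketch, assuming the pairwise method is passed as a function (here icp, with its iterations argument forwarded through ...); the point sets are made up for the example:
PS <- list(matrix(rnorm(300), ncol = 3),
           matrix(rnorm(300), ncol = 3),
           matrix(rnorm(300), ncol = 3))
reg <- multiple_registration(PS, registration = icp, refine.iter = 10, iterations = 20)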
Extracts list of point sets from a data frame of single molecule localization coordinates. By default, uses point set membership indicated in the site column.
point_sets_from_locs(locs = NULL, channels = NULL, min.cardinality = NULL, max.cardinality = NULL, crop.size = NULL, keep.locprec = TRUE, sample.size = NULL, ignore.site = FALSE, cluster.points = FALSE, eps = NULL, minPts = NULL)
locs: a data frame with columns x[nm], y[nm], z[nm] and optionally site[numbers], locprec[nm] and locprecz[nm]; other columns are ignored.
channels: vector of integers indicating which channel(s) of a multicolour experiment to extract point sets from.
min.cardinality: filter out point sets with fewer than the specified number of points.
max.cardinality: filter out point sets with more than the specified number of points.
crop.size: remove points from a set if they are further away than the specified distance from the centre of the set.
keep.locprec: logical (default: TRUE), whether to keep locprec information for each point.
sample.size: return this number of randomly selected point sets. Point sets are selected after any filtering.
ignore.site: logical (default: FALSE), set to TRUE if point set membership is not present or needed.
cluster.points: logical (default: FALSE), whether to cluster the points using DBSCAN (only if ignore.site is also TRUE).
eps: DBSCAN parameter, size of the epsilon neighbourhood
minPts: DBSCAN parameter, minimum number of points in the eps region
a list of matrices with columns x,y,z, optionally locprec and name set to the value of the site column (if applicable).
data.file <- system.file("test_data", "simulated_NUP107_data.csv", package = "LOMAR", mustWork = TRUE)
locs <- locs_from_csv(file = data.file, locprec.filter = 20)
point.sets <- point_sets_from_locs(locs, keep.locprec = TRUE, min.cardinality = 15)
Read in single molecule localization events from a series of 3D images in TIFF files where each image file represents a point set.
point_sets_from_tiffs(image_dir = NULL, pattern = NULL, image.size = NULL, sample.size = NULL, sample.first = FALSE, min.cardinality = NULL, max.cardinality = NULL, crop.size = NULL)
image_dir: path to a directory containing the TIFF files.
pattern: regular expression; select images whose file path matches the given pattern.
image.size: vector of length 3 containing the size of the images along each dimension, e.g. c(40,40,40).
sample.size: if set, selects this number of images at random. A sample size larger than the available number of samples produces a warning and is ignored.
sample.first: if TRUE, samples are selected before applying any filtering. This is more efficient as it avoids reading all data files.
min.cardinality: if set, filter out all point sets with fewer than the specified number of points.
max.cardinality: if set, filter out all point sets with more than the specified number of points.
crop.size: vector of length 3 containing the desired reduced size of the images along each dimension, e.g. c(30,30,30).
a list with two elements:
point.sets: a list of point sets as matrices with columns x,y,z and
file.names: a vector of paths to the TIFF files from which the point sets were extracted.
data.dir <- system.file("test_data/img", package = "LOMAR", mustWork = TRUE)
point_sets <- point_sets_from_tiffs(image_dir = data.dir, pattern = "\\.tiff?$",
                                    image.size = c(64, 64, 4), min.cardinality = 10)
Extract points within a given bounding box. Points are translated so that (0,0,0) corresponds to the bounding box corner defined by roi['min', c('x','y','z')].
points_from_roi(points, roi)
points: a point set as a data frame of coordinates with columns x,y,z.
roi: a data frame with columns x,y,z and rows min and max defining a bounding box
a data frame with the same columns as the input
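A minimal sketch with a made-up bounding box:
points <- data.frame(x = runif(100, 0, 100), y = runif(100, 0, 100), z = runif(100, 0, 100))
roi <- data.frame(x = c(20, 80), y = c(20, 80), z = c(20, 80), row.names = c("min", "max"))
inside <- points_from_roi(points, roi)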
Convert a data frame of point coordinates into an image. Expected photon count at each voxel is computed as in: F. Huang, S. L. Schwartz, J. M. Byars, and K. A. Lidke, “Simultaneous multiple-emitter fitting for single molecule super-resolution imaging,” Biomed. Opt. Express 2(5), 1377–1393 (2011).
points2img(points, voxel.size, method, channels = NULL, ncpu = 1)
points: a point set as a data frame of coordinates with columns x,y,z.
voxel.size: a numeric vector of length 3 indicating the size of the voxel along x,y and z in the same unit as the coordinates (e.g. nm)
method: how to calculate voxel values (the example below uses 'histogram').
channels: vector of channels to consider; must be values present in the channel column of the input data frame
ncpu: number of threads to use to speed up computation (default: 1)
an array of dimensions x,y,z and channels if applicable
point.set <- data.frame(x = c(-9.8,-5.2,12.5,2.5,4.5,1.3,-0.2,0.4,9.3,-1.4,0.5,-1.1,-7.7),
                        y = c(-4.2,1.5,-0.5,12,-3,-7.2,10.9,6.7,-1.3,10,6.7,-6.2,2.9),
                        z = c(3.4,-3.8,-1.4,1.8,3.5,2.5,2.6,-4.8,-3.8,3.9,4.1,-3.6,-4))
img <- points2img(point.set, voxel.size = c(2,2,2), method = 'histogram')
Convert a list of 3d point sets to a 4d array. Also works for converting 2d point sets to a 3d array.
ps2ary(point.sets, dims)
point.sets: a list of point sets.
dims: vector of dimensions of the axes (x,y in 2d, x,y,z in 3d).
a 3d or 4d array.
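A minimal sketch of a round trip with ary2ps (documented above), assuming integer voxel coordinates:
PS <- list(matrix(sample(1:10, 30, replace = TRUE), ncol = 3, dimnames = list(NULL, c("x", "y", "z"))),
           matrix(sample(1:10, 30, replace = TRUE), ncol = 3, dimnames = list(NULL, c("x", "y", "z"))))
ary <- ps2ary(PS, dims = c(10, 10, 10))
PS2 <- ary2ps(ary)   # back to a list of point sets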
Compute the persistence scale-space kernel on persistence diagrams. Reference: Jan Reininghaus, Stefan Huber, Ulrich Bauer, and Roland Kwitt. A stable multi-scale kernel for topological machine learning. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 4741–4748, 2015.
pssk(Dg1 = NULL, Dg2 = NULL, sigma = NULL, dimensions = NULL)
Dg1: a persistence diagram as an n1 x 3 matrix where each row is a topological feature and the columns are dimension, birth and death of the feature.
Dg2: another persistence diagram as an n2 x 3 matrix
sigma: kernel bandwidth
dimensions: vector of the dimensions of the topological features to consider; if NULL (default), use all available dimensions
kernel value
D1 <- matrix(c(0,0,0,1,1,0,0,0,1.5,3.5,2,2.5,3,4,6), ncol = 3, byrow = FALSE)
D2 <- matrix(c(0,0,1,1,0,0,1.2,2,1.4,3.2,4.6,6.5), ncol = 3, byrow = FALSE)
K <- pssk(Dg1 = D1, Dg2 = D2, sigma = 1)
Get derivative of 3D rotation matrix from quaternion
q2dr(q)
q: quaternion
derivative of rotation matrix
Convert a quaternion to a rotation matrix. See http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
q2r(q)
q: quaternion
rotation matrix
Restore coordinates from mean 0 and standard deviation 1 to their original distribution
restore_coordinates(X, mu, sigma)
X: standardized point set as an N x D matrix
mu: 1 x D vector of means
sigma: standard deviation
N x D matrix of unstandardized coordinates
Create a rotation matrix representing a rotation of theta radians about the x-axis
rotx(theta)
theta: angle in radians
a 3x3 rotation matrix
Create a rotation matrix representing a rotation of theta radians about the y-axis
roty(theta)
theta: angle in radians
a 3x3 rotation matrix
Create a rotation matrix representing a rotation of theta radians about the z-axis
rotz(theta)
theta: angle in radians
a 3x3 rotation matrix
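For illustration, rotations about different axes compose by matrix multiplication, and a rotation matrix has determinant 1:
R <- rotz(pi/4) %*% roty(pi/6) %*% rotx(pi/2)
det(R)   # 1, up to floating point error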
Uniformly scale an alpha-shape. Note that this computes the alpha-shape of the scaled point set associated with the input alpha-shape.
scale_alpha_shape(as, s)
as: an alpha-shape object of class ashape3d
s: scaling factor
an object of class ashape3d
Compute shape features of a 3D alpha-shape object
shape_features_3d(as)
as: an alpha-shape object of class ashape3d
Features are:
major.axis, minor.axis and least.axis: lengths of the axes of the fitted ellipsoid
elongation: from 0 (line) to 1 (globular)
flatness: from 0 (flat) to 1 (spherical)
max.feret.diameter: maximum Feret diameter
max.inscribed.radius: radius of the largest inscribed sphere
sphericity: from 0 (not spherical) to 1 (perfect sphere)
concavity: fraction of the convex hull volume not in the object
volume: volume of the alpha-shape
area: area of the surface of the alpha-shape
a named vector of numeric values, or NULL if the alpha-shape has no non-singular vertices
Compute sliced Wasserstein distance or kernel. Reference: Mathieu Carriere, Marco Cuturi, and Steve Oudot. Sliced Wasserstein kernel for persistence diagrams. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 664–673, 2017.
sliced_Wd(Dg1, Dg2, M = 10, sigma = 1, dimensions = NULL, return.dist = FALSE)
Dg1: a persistence diagram as an n1 x 3 matrix where each row is a topological feature and the columns are dimension, birth and death of the feature.
Dg2: another persistence diagram as an n2 x 3 matrix
M: number of slices (default: 10)
sigma: kernel bandwidth (default: 1)
dimensions: vector of the dimensions of the topological features to consider; if NULL (default), use all available dimensions
return.dist: logical (default: FALSE), whether to return the distance instead of the kernel value.
kernel or distance value
D1 <- matrix(c(0,0,0,1,1,0,0,0,1.5,3.5,2,2.5,3,4,6), ncol = 3, byrow = FALSE)
D2 <- matrix(c(0,0,1,1,0,0,1.2,2,1.4,3.2,4.6,6.5), ncol = 3, byrow = FALSE)
K <- sliced_Wd(Dg1 = D1, Dg2 = D2, M = 10, sigma = 1, return.dist = TRUE)
Transform coordinates to have mean 0 and standard deviation 1
standardize_coordinates(X)
X: point set as an N x D matrix
a list of
X: standardized matrix,
mu: vector of means,
sigma: standard deviation
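A minimal sketch of a round trip with restore_coordinates (documented above); the data are made up for the example:
X <- matrix(rnorm(30, mean = 5, sd = 2), ncol = 3)
std <- standardize_coordinates(X)
X2 <- restore_coordinates(std$X, std$mu, std$sigma)
all.equal(X, X2)   # TRUE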
Compute the trace of a matrix
tr(x)
x: a matrix
trace of the matrix
Rigid registration of two point sets by minimizing the Wasserstein distance between GMMs
wgmmreg(X, Y, CX, CY, wx = NULL, wy = NULL, maxIter = 200, subsample = NULL, tol = 1e-08)
X: reference point set, an N x D matrix
Y: point set to transform, an M x D matrix
CX: array of covariance matrices for each point in X
CY: array of covariance matrices for each point in Y
wx: (optional) vector of mixture weights for X.
wy: (optional) vector of mixture weights for Y.
maxIter: maximum number of iterations to perform (default: 200)
subsample: if set, use this randomly selected fraction of the points
tol: tolerance for determining convergence (default: 1e-8)
a list of
Y: transformed point set,
R: rotation matrix,
t: translation vector,
c: final value of the cost function,
converged: logical, whether the algorithm converged.
data.file1 <- system.file("test_data", "parasaurolophusA.txt", package = "LOMAR", mustWork = TRUE)
PS1 <- read.csv(data.file1, sep = '\t', header = FALSE)
data.file2 <- system.file("test_data", "parasaurolophusB.txt", package = "LOMAR", mustWork = TRUE)
C1 <- diag(0.1, ncol(PS1)) + jitter(0.01, amount = 0.01)
C1 <- replicate(nrow(PS1), C1)
PS2 <- read.csv(data.file2, sep = '\t', header = FALSE)
C2 <- diag(0.1, ncol(PS2)) + jitter(0.01, amount = 0.01)
C2 <- replicate(nrow(PS2), C2)
transformation <- wgmmreg(PS1, PS2, C1, C2, subsample = 0.1, maxIter = 30, tol = 1e-4)
## Not run: 
# Visualize registration outcome
library(rgl)
plot3d(PS1, col = "blue")
points3d(PS2, col = "green")
points3d(transformation[['Y']], col = "magenta")
## End(Not run)