
Lines Matching full:points

55 formed by projecting 3D points into the image plane using a perspective transformation.
120 - Project 3D points to the image plane given intrinsic and extrinsic parameters.
121 - Compute extrinsic parameters given intrinsic parameters, a few 3D points, and their
261 @param srcPoints Coordinates of the points in the original plane, a matrix of the type CV_32FC2
263 @param dstPoints Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or
266 - **0** - a regular method using all the points
438 /** @brief Projects 3D points to an image plane.
440 @param objectPoints Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel (or
441 vector\<Point3f\> ), where N is the number of points in the view.
448 @param imagePoints Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, or
451 points with respect to components of the rotation vector, translation vector, focal lengths,
458 The function computes projections of 3D points to the image plane given intrinsic and extrinsic
460 image points coordinates (as functions of all the input parameters) with respect to the particular
467 means that you can compute the distorted coordinates for a sparse set of points or apply a
479 @param objectPoints Array of object points in the object coordinate space, 3xN/Nx3 1-channel or
480 1xN/Nx1 3-channel, where N is the number of points. vector\<Point3f\> can be also passed here.
481 @param imagePoints Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel,
482 where N is the number of points. vector\<Point2f\> can be also passed here.
487 @param rvec Output rotation vector (see Rodrigues ) that, together with tvec , brings points from
500 function requires exactly four object and image points.
511 The function estimates the object pose given a set of object points, their corresponding image
521 - The P3P algorithm requires image points to be in an array of shape (N,1,2) due
535 @param objectPoints Array of object points in the object coordinate space, 3xN/Nx3 1-channel or
536 1xN/Nx1 3-channel, where N is the number of points. vector\<Point3f\> can be also passed here.
537 @param imagePoints Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel,
538 where N is the number of points. vector\<Point2f\> can be also passed here.
543 @param rvec Output rotation vector (see Rodrigues ) that, together with tvec , brings points from
557 The function estimates an object pose given a set of object points, their corresponding image
576 @param objectPoints Vector of vectors of the calibration pattern points in the calibration pattern
579 @param imagePoints Vector of vectors of the projections of the calibration pattern points. In the
614 a regular chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the black
700 @param objectPoints In the new interface it is a vector of vectors of calibration pattern points in
705 the vectors will be different. The points are 3D, but since they are in a pattern coordinate system,
708 In the old interface all the vectors of object points from different views are concatenated
711 pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() and
713 In the old interface all the vectors of object points from different views are concatenated
725 from the model coordinate space (in which object points are specified) to the world coordinate
761 points and their corresponding 2D projections in each view must be specified. That may be achieved
762 by using an object with a known geometry and easily detectable feature points. Such an object is
766 patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also
779 that is, the total sum of squared distances between the observed feature points imagePoints and
780 the projected (using the current estimates for camera parameters and the poses) object points
830 @param objectPoints Vector of vectors of the calibration pattern points.
831 @param imagePoints1 Vector of vectors of the projections of the calibration pattern points,
833 @param imagePoints2 Vector of vectors of the projections of the calibration pattern points,
856 points during the optimization.
910 points in all the available views from both cameras. The function returns the final value of the
939 the function makes the principal points of each camera have the same pixel coordinates in the
1012 @param points1 Array of feature points in the first image.
1013 @param points2 The corresponding points in the second image. The same formats as in
1021 than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points
1023 rejected prior to computing the homographies. Otherwise, all the points are considered inliers.
1087 /** @brief Converts points from Euclidean to homogeneous space.
1089 @param src Input vector of N-dimensional points.
1090 @param dst Output vector of N+1-dimensional points.
1092 The function converts points from Euclidean to homogeneous space by appending 1's to the tuple of
1097 /** @brief Converts points from homogeneous to Euclidean space.
1099 @param src Input vector of N-dimensional points.
1100 @param dst Output vector of N-1-dimensional points.
1102 The function converts points from homogeneous to Euclidean space using perspective projection. That is,
1108 /** @brief Converts points to/from homogeneous coordinates.
1110 @param src Input array or vector of 2D, 3D, or 4D points.
1111 @param dst Output vector of 2D, 3D, or 4D points.
1113 The function converts 2D or 3D points from/to homogeneous coordinates by calling either
1120 /** @brief Calculates a fundamental matrix from the corresponding points in two images.
1122 @param points1 Array of N points from the first image. The point coordinates should be
1124 @param points2 Array of the second image points of the same size and format as points1 .
1142 where \f$F\f$ is a fundamental matrix, \f$p_1\f$ and \f$p_2\f$ are corresponding points in the first and the
1151 epipolar lines corresponding to the specified points. It can also be passed to
1159 // initialize the points here ...
1180 /** @brief Calculates an essential matrix from the corresponding points in two images.
1182 @param points1 Array of N (N \>= 5) 2D points from the first image. The point coordinates should
1184 @param points2 Array of the second image points of the same size and format as points1 .
1186 are feature points from cameras with the same focal length and principal point.
1198 for the other points. The array is computed only in the RANSAC and LMedS methods.
1210 where \f$E\f$ is an essential matrix, \f$p_1\f$ and \f$p_2\f$ are corresponding points in the first and the
1233 corresponding points in two images, using a cheirality check. Returns the number of inliers which pass
1237 @param points1 Array of N 2D points from the first image. The point coordinates should be
1239 @param points2 Array of the second image points of the same size and format as points1 .
1243 are feature points from cameras with the same focal length and principal point.
1251 triangulated 3D points should have positive depth. Some details can be found in @cite Nister03 .
1261 // initialize the points here ...
1282 /** @brief For points in an image of a stereo pair, computes the corresponding epilines in the other image.
1284 @param points Input points. \f$N \times 1\f$ or \f$1 \times N\f$ matrix of type CV_32FC2 or
1286 @param whichImage Index of the image (1 or 2) that contains the points .
1288 @param lines Output vector of the epipolar lines corresponding to the points in the other image.
1305 CV_EXPORTS_W void computeCorrespondEpilines( InputArray points, int whichImage,
1308 /** @brief Reconstructs points by triangulation.
1312 @param projPoints1 2xN array of feature points in the first image. In case of c++ version it can
1313 be also a vector of feature points or two-channel matrix of size 1xN or Nx1.
1314 @param projPoints2 2xN array of corresponding points in the second image. In case of c++ version
1315 it can be also a vector of feature points or two-channel matrix of size 1xN or Nx1.
1316 @param points4D 4xN array of reconstructed points in homogeneous coordinates.
1318 The function reconstructs 3-dimensional points (in homogeneous coordinates) by using their
1331 /** @brief Refines coordinates of corresponding points.
1334 @param points1 1xN array containing the first set of points.
1335 @param points2 1xN array containing the second set of points.
1343 geometric distance between points \f$a\f$ and \f$b\f$ ) subject to the epipolar constraint
1384 points where the disparity was not computed). If handleMissingValues=true, then pixels with the
1386 to 3D points with a very large Z value (currently set to 10000).
1397 stereoRectify). To reproject a sparse set of points {(x,y,d),...} to 3D space, use
1410 @param inliers Output vector indicating which points are inliers.
1435 invalidated if point correspondences are available by applying a positive depth constraint (all points
1642 /** @brief Projects points using the fisheye model
1644 @param objectPoints Array of object points, 1xN/Nx1 3-channel (or vector\<Point3f\> ), where N is
1645 the number of points in the view.
1646 @param imagePoints Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, or
1652 @param jacobian Optional output 2Nx15 Jacobian matrix of derivatives of image points with respect
1657 The function computes projections of 3D points to the image plane given intrinsic and extrinsic
1659 image points coordinates (as functions of all the input parameters) with respect to the particular
1669 /** @brief Distorts 2D points using the fisheye model.
1671 @param undistorted Array of object points, 1xN/Nx1 2-channel (or vector\<Point2f\> ), where N is
1672 the number of points in the view.
1676 @param distorted Output array of image points, 1xN/Nx1 2-channel, or vector\<Point2f\> .
1680 /** @brief Undistorts 2D points using the fisheye model
1682 @param distorted Array of object points, 1xN/Nx1 2-channel (or vector\<Point2f\> ), where N is the
1683 number of points in the view.
1689 @param undistorted Output array of image points, 1xN/Nx1 2-channel, or vector\<Point2f\> .
1734 Pictures a) and b) are almost the same, but if we consider points of the image located far from the center
1735 of the image, we can notice that in picture a) these points are distorted.
1760 @param objectPoints vector of vectors of calibration pattern points in the calibration pattern
1762 @param imagePoints vector of vectors of the projections of calibration pattern points.
1774 space (in which object points are specified) to the world coordinate space, that is, a real
1811 the function makes the principal points of each camera have the same pixel coordinates in the
1829 @param objectPoints Vector of vectors of the calibration pattern points.
1830 @param imagePoints1 Vector of vectors of the projections of the calibration pattern points,
1832 @param imagePoints2 Vector of vectors of the projections of the calibration pattern points,