How to use "triangulateMultiview" to reconstruct the same world coordinate point under multiple different views?

How can "triangulateMultiview" be used to reconstruct the same world point seen from multiple different views? It seems the "triangulate" function can only work with a single pair of point sets, so corresponding points from more than two views cannot be used.
The premise is that the camera parameters (intrinsics and extrinsics) are not known; only the "cameraMatrix" (i.e., the camera projection matrix, a 4×3 matrix) for each view is known. How can the world point be solved for?
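For context, this is the standard linear-triangulation setup the problem reduces to. With a projection matrix P = (m_ij) and observed pixel (u, v), the projection equation u = (m11·X + m12·Y + m13·Z + m14) / (m31·X + m32·Y + m33·Z + m34), together with the analogous equation for v, cross-multiplies to two equations that are linear in the world point (X, Y, Z):

```latex
\begin{aligned}
(u\,m_{31}-m_{11})X + (u\,m_{32}-m_{12})Y + (u\,m_{33}-m_{13})Z &= m_{14} - u\,m_{34},\\
(v\,m_{31}-m_{21})X + (v\,m_{32}-m_{22})Y + (v\,m_{33}-m_{23})Z &= m_{24} - v\,m_{34}.
\end{aligned}
```

Stacking these two rows per view over N ≥ 2 views gives an over-determined 2N×3 system that a least-squares solve (e.g., MATLAB's backslash operator) handles directly.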
I wrote an algorithm that reconstructs multiple points using the least-squares method. The code below can serve as a reference; MATLAB does not provide a similar function.
function points3D = TriangluateLS(matchedPoints1,camera_matrix1,...
    matchedPoints2,camera_matrix2,varargin)
% Purpose: least-squares 3-D reconstruction from n matched point sets (2 <= n <= 4 views).
% Inputs: matchedPoints1, m*2 double, [x,y] image coordinates; the remaining
%         matchedPoints arguments are analogous. Rows must correspond across
%         views and all sets must have the same size.
%         camera_matrix1, 3*4 double, camera projection matrix P of the form
%         P = [m11,m12,m13,m14; m21,m22,m23,m24; m31,m32,m33,m34];
%         the remaining camera_matrix arguments are analogous.
% Output: points3D, m*3 double, [x,y,z] reconstructed 3-D point coordinates.
% author: cuixingxing
% 2018.7.31
minArgs = 4; % at least 2 matched point sets
maxArgs = 8; % at most 4 matched point sets
narginchk(minArgs,maxArgs);
if mod(nargin,2)~=0
    error('Inputs must come in (matchedPoints, camera_matrix) pairs.');
end
m = size(matchedPoints1,1); % number of points
points3D = zeros(m,3);
for i = 1:m
    [A1,b1] = GetCoff(matchedPoints1(i,:),camera_matrix1);
    [A2,b2] = GetCoff(matchedPoints2(i,:),camera_matrix2);
    A = [A1;A2];
    b = [b1;b2];
    if length(varargin) == 2 % 3 matched point sets
        [A3,b3] = GetCoff(varargin{1}(i,:),varargin{2});
        A(5:6,:) = A3;
        b(5:6,:) = b3;
    end
    if length(varargin) == 4 % 4 matched point sets
        [A3,b3] = GetCoff(varargin{1}(i,:),varargin{2});
        [A4,b4] = GetCoff(varargin{3}(i,:),varargin{4});
        A(5:8,:) = [A3;A4];
        b(5:8,:) = [b3;b4];
    end
    sol = A\b; % least-squares solution of the over-determined system
    points3D(i,1:3) = sol;
end
end
GetCoff.m :
function [A,b] = GetCoff(matchedPoints,camera_matrix)
% Purpose: build the linear coefficients contributed by one view.
% Inputs: matchedPoints, 1*2 double, [u,v] image point coordinates
%         camera_matrix, 3*4 double, camera projection matrix
% Outputs: A, 2*3 double
%          b, 2*1 double
% author: cuixingxing
% 2018.7.31
u1 = matchedPoints(:,1); v1 = matchedPoints(:,2);
m11 = camera_matrix(1,1); m12 = camera_matrix(1,2); m13 = camera_matrix(1,3); m14 = camera_matrix(1,4);
m21 = camera_matrix(2,1); m22 = camera_matrix(2,2); m23 = camera_matrix(2,3); m24 = camera_matrix(2,4);
m31 = camera_matrix(3,1); m32 = camera_matrix(3,2); m33 = camera_matrix(3,3); m34 = camera_matrix(3,4);
A = [u1.*m31-m11, u1.*m32-m12, u1.*m33-m13;
     v1.*m31-m21, v1.*m32-m22, v1.*m33-m23];
b = [m14-u1.*m34;
     m24-v1.*m34];
end
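Assuming the two functions above are saved on the MATLAB path (with the truncated lines completed), a quick synthetic sanity check might look like the following; the two projection matrices and the test point are made up for illustration:

```matlab
% Synthetic check: two hypothetical 3x4 projection matrices and one known
% world point; its projections are computed, then the point is recovered.
P1 = [eye(3), zeros(3,1)];     % camera at the origin
P2 = [eye(3), [-1; 0; 0]];     % camera translated along x
Xtrue = [1; 2; 5];             % arbitrary test point

x1 = P1*[Xtrue; 1]; x1 = (x1(1:2)/x1(3)).';  % pixel in view 1
x2 = P2*[Xtrue; 1]; x2 = (x2(1:2)/x2(3)).';  % pixel in view 2

points3D = TriangluateLS(x1, P1, x2, P2);
% points3D should match Xtrue' = [1 2 5] up to numerical precision
```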

Accepted Answer

Qu Cao
Qu Cao on 8 Apr 2021
Edited: Qu Cao on 8 Apr 2021
triangulateMultiview requires both camera poses and intrinsic parameters as inputs to compute the 3-D world positions corresponding to point tracks across multiple images. Internally, the camera projection matrices are computed from these two inputs. You can look at the implementation of triangulateMultiview and reuse some of its pieces to write a function for your workflow. Be aware, however, that this approach is not recommended, since the internal code is neither documented nor tested.
  1 Comment
cui on 9 Apr 2021
By viewing the source code, I found the internal call I was looking for. It would be better if this were implemented as a documented, open function.
worldPoints = vision.internal.triangulateMultiViewPoints(pointTracks, cameraMatrices);
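If you would rather not depend on undocumented internal code, the same computation can be written in a few lines of your own: solve the homogeneous form of the linear system with an SVD, which also behaves well for distant points. Below is a sketch of this standard multiview DLT approach; it is not the MathWorks implementation, and the function name is made up:

```matlab
% Sketch: homogeneous DLT triangulation of one point track seen in N views.
% Ps is a 1xN cell array of 3x4 projection matrices, uv an Nx2 array of
% pixel coordinates. (Illustrative code, not a toolbox function.)
function X = triangulateDLT(Ps, uv)
N = numel(Ps);
A = zeros(2*N, 4);
for k = 1:N
    P = Ps{k}; u = uv(k,1); v = uv(k,2);
    A(2*k-1,:) = u*P(3,:) - P(1,:);   % cross-multiplied u equation
    A(2*k,  :) = v*P(3,:) - P(2,:);   % cross-multiplied v equation
end
[~,~,V] = svd(A);
Xh = V(:,end);           % least-squares null-space solution of A*Xh = 0
X = (Xh(1:3)/Xh(4)).';   % dehomogenize to a 1x3 world point
end
```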

