I'm not 100% sure linear algebra will crunch this problem, but hopefully so. This may just be a case of matrices, which would be good because I like those.

Imagine we have a robot with a camera attached. The camera can move in x, y, z, and also rotate in a, b, c, where x is vertical, y is horizontal, and z is linear, moving in and out. So, if it were a 3D/stereo camera, it would give a distance reading on z.

a is rotation around x, b is rotation around y and c is rotation around z.

So, if I take an initial 4 (x, y, z) positions, one for each corner of an object, and use these as my template locations, then when I swap that object for another identical object placed in roughly, but not exactly, the same location, I can get the offsets from the next 4 (x, y, z) positions and use those offsets to adjust the robot so that it can do any work on the object correctly as expected.
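For concreteness, here's how I picture turning the four corner correspondences into an offset the robot could apply. This is just a sketch using the Kabsch algorithm (best-fit rotation plus translation between two point sets) with numpy; the function name and the 4x3 array layout are my own assumptions, not part of any existing setup.

```python
import numpy as np

def rigid_transform(template_pts, measured_pts):
    """Best-fit rotation R and translation t mapping the template
    corner points onto the measured corner points (Kabsch algorithm)."""
    P = np.asarray(template_pts, dtype=float)  # 4x3: template corners
    Q = np.asarray(measured_pts, dtype=float)  # 4x3: new-object corners
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t                                # measured ≈ R @ template + t
```

With four corners this gives a full rotation + translation rather than just four independent point offsets, which seems closer to what the robot actually needs.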

On top of this, I take a picture of a calibration dot grid at a fixed location and get coordinates for (x, y, z) and (a, b, c). I store these as my template positions in case the camera is moved or knocked, so that I can apply those offsets when I change the next object.
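Since the calibration reading is a full 6-DoF pose, I assume it can be packed into a single 4x4 homogeneous transform so the (x, y, z) and (a, b, c) parts can be handled together. A minimal sketch, assuming a, b, c are Euler angles in radians about x, y, z applied in that order (the real convention would depend on the camera/robot controller):

```python
import numpy as np

def pose_to_matrix(x, y, z, a, b, c):
    """6-DoF pose -> 4x4 homogeneous transform.
    Assumes a, b, c are Euler angles (radians) about x, y, z,
    composed as Rz @ Ry @ Rx -- an assumed convention."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # rotation part
    T[:3, 3] = [x, y, z]       # translation part
    return T
```

The point of the 4x4 form is that two poses (template calibration vs. current calibration) can be compared by matrix multiplication and inversion instead of subtracting the six numbers one by one.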

The problem:

The problem is that my object measurements only give me (x, y, z), while my calibration measurements give me both (x, y, z) and (a, b, c).

I need a way to translate the calibration offsets in (x, y, z) and (a, b, c) into corrections for the (x, y, z) object measurements, and I'm not sure what method I should be looking at.
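To frame what I'm after, here's the kind of operation I imagine, though I don't know if it's right: express the camera's drift since calibration as a 4x4 homogeneous transform and push each measured (x, y, z) point through it. The transform `T_calib` and the function name here are hypothetical placeholders, not a verified method.

```python
import numpy as np

def correct_points(points_xyz, T_calib):
    """Apply a 4x4 calibration transform to N measured (x, y, z) points.
    T_calib is assumed to be something like
    T_template_pose @ inv(T_current_pose), i.e. the camera's drift
    since the template calibration -- my guess, not a known recipe."""
    pts = np.asarray(points_xyz, dtype=float)           # Nx3
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])    # Nx4 homogeneous
    return (pts_h @ T_calib.T)[:, :3]                   # back to Nx3
```

Is something along these lines the standard approach, or is there a better-known method for folding a 6-DoF calibration offset into plain (x, y, z) measurements?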

Thank you.