New position based on spatial offset and rotation

Apr 2017
2
0
United Kingdom
I'm working on a project which I am hoping will teach me more about maths, but I am struggling to wrap my head around this one. I have got an electromagnetic motion tracker which tracks a sensor in global space. It gives me its position (xyz) and its rotation in Euler angles (yaw, pitch, roll).

I have attached the tracked sensor to a baseball cap worn by someone, which allows me to track their head movements. However, I then want to infer the position of facial features based on this position and rotation value. To do this, I have calculated an estimated spatial offset between the sensor and the facial feature I wish to track. I have done this by measuring the offset along the X, Y and Z axes.

I was beginning to think I was making progress but I am now stuck again. I have composed a rotation matrix from the Euler angles given to me by the motion tracker. I think that I will need this rotation matrix, but I'm just not sure on the rest of the process.
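For composing the rotation matrix, a minimal sketch is below. Note this assumes the common intrinsic yaw-pitch-roll (Z-Y-X) convention and radians; your tracker's documentation may specify a different axis order, rotation sense, or degrees, so check that before relying on it.

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Compose a rotation matrix from Euler angles (radians).

    Assumes the yaw-pitch-roll (Z-Y-X) convention; trackers differ,
    so verify the order and handedness against your device's manual.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)

    Rz = np.array([[cy, -sy, 0], [sy,  cy, 0], [0, 0, 1]])  # yaw about Z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about Y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about X
    return Rz @ Ry @ Rx
```

A quick sanity check: all-zero angles should give the identity matrix, and any output should be orthonormal with determinant 1.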

I'm not necessarily looking for someone to calculate it for me, but rather to help me understand the process I need to go through. I have done a lot of reading online and there are a few things which I think are going to help me, but then I struggle to link it directly back to what I want.

To recap:

I have the position of the motion tracker in global space (attached to the top of the person's head)
I have the rotation matrix representing the rotation of the motion tracker in global space
I have the spatial offset between the sensor and the facial feature (measured along the X, Y and Z axes in global space).

I am trying to use the above information to calculate the new position of the facial feature in global space should the person move their head.
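The recap above maps onto the standard rigid-body transform: convert the measured offset into the sensor's local frame once (using the calibration orientation), then for each new reading rotate that local offset by the current orientation and add the current sensor position. A minimal numerical sketch, with all poses hypothetical:

```python
import numpy as np

# Calibration frame: sensor pose (p1, R0) and feature position p2,
# all in global space. R0 maps sensor-local coordinates to global.
p1 = np.array([0.0, 0.0, 0.0])
R0 = np.eye(3)                       # sensor unrotated at calibration
p2 = np.array([0.05, -0.10, 0.08])   # hypothetical feature position

# Express the offset in the sensor's local frame; this stays constant
# as the head moves, unlike the global-space offset.
d_local = R0.T @ (p2 - p1)

# A later reading: new sensor pose (p, R) from the tracker.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])            # hypothetical 90-degree yaw
p = np.array([0.02, 0.01, 0.00])

# Feature position in global space after the head movement.
feature = p + R @ d_local
```

The key design point is that the offset is stored in the sensor's frame, not the global frame, so the same stored vector remains valid for every subsequent head pose.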
 
OK so I think I may have come up with how to do it. I would appreciate it if someone could go over my workings and confirm whether or not I have reached the correct approach.

I have the following recorded values for a specific point in time:

Motion tracker sensor in global space (p1)
Motion tracker sensor orientation as Euler angles (rX, rY, rZ)
Position of a facial feature in global space (p2)
I get the spatial offset by p2 - p1. I use this to create a translation matrix (T)

I take the following steps to allow me to calculate the position of the facial feature in global space for all subsequent recordings of the motion tracker sensor in global space.

1. I subtract the original motion tracker orientation from the current motion tracker orientation to give me the delta orientation in x, y, z.

2. I create a rotation matrix (r1) from the delta orientation

3. I record the new position of the motion tracker (p3, to avoid reusing p2 from above)

4. I multiply p3 * r1

5. I translate the resulting vector by T

6. The resulting vector (I hope) is the facial feature position in global space.
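A numerical sketch of the steps above, with two adjustments named plainly: the delta rotation is formed as a matrix product, since Euler angles do not in general subtract component-wise, and in step 4 the rotation is applied to the stored offset rather than to the sensor's absolute position, since rotating a global position about the world origin would also displace the head itself. All specific poses here are hypothetical.

```python
import numpy as np

# Calibration: sensor at p1 with orientation R0, feature at pf.
p1 = np.array([0.0, 1.6, 0.0])   # sensor on top of the head
R0 = np.eye(3)
pf = np.array([0.0, 1.5, 0.1])   # feature: lower and in front
offset = pf - p1                  # the translation T from the post

# New recording: sensor moved to p3 with orientation R_now.
t = np.pi / 4
R_now = np.array([[np.cos(t), 0, np.sin(t)],
                  [0, 1, 0],
                  [-np.sin(t), 0, np.cos(t)]])  # 45-degree turn about Y
p3 = np.array([0.1, 1.6, 0.0])

# Steps 1-2: delta rotation relative to calibration, as a matrix
# product rather than a component-wise subtraction of Euler angles.
r1 = R_now @ R0.T

# Steps 4-5: rotate the offset by r1, then translate by the new
# sensor position to land on the feature in global space (step 6).
feature_now = p3 + r1 @ offset
```

Testing against a second ground-truth measurement of the feature, as in the original calibration, is a good way to confirm whether this matches what the tracker reports.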