
About Cameras

Problem Description

When editing lane line nodes and anchor points on the annotation platform, the target position is offset from the mouse position. The root cause is that the perspective projection camera renders near objects larger and far objects smaller, which introduces parallax between the two positions; the short sketch below illustrates the effect.
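
A minimal sketch of the effect (camera parameters are illustrative, not the platform's actual configuration): unprojecting the same screen point at two different depths gives different world x/y under a perspective camera, but identical x/y under an orthographic one, which is why the edited node drifts away from the cursor.

```js
import * as THREE from 'three';

// Same NDC point, two depths: with a perspective camera the world x/y diverge
// (parallax), with an orthographic camera they stay aligned.
const ndc = new THREE.Vector2(0.5, 0.5);

const perspective = new THREE.PerspectiveCamera(45, 16 / 9, 0.1, 1000);
perspective.position.set(0, 0, 50);
perspective.updateMatrixWorld();
const pNear = new THREE.Vector3(ndc.x, ndc.y, 0.2).unproject(perspective);
const pFar = new THREE.Vector3(ndc.x, ndc.y, 0.8).unproject(perspective);
console.log(pNear.x, pFar.x); // different values: cursor and target diverge

const ortho = new THREE.OrthographicCamera(-20, 20, 20, -20, 0.1, 1000);
ortho.position.set(0, 0, 50);
ortho.updateMatrixWorld();
const oNear = new THREE.Vector3(ndc.x, ndc.y, 0.2).unproject(ortho);
const oFar = new THREE.Vector3(ndc.x, ndc.y, 0.8).unproject(ortho);
console.log(oNear.x, oFar.x); // identical values: no parallax
```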

Differences and Use Cases of the Four Built-in THREE.JS Cameras

1. Perspective Camera (PerspectiveCamera)

  1. Characteristics

    • Mimics how the human eye sees: object size changes with distance, so near objects appear larger and far objects smaller.
    • Produces a real sense of depth that matches how scenes look in the real world.
  2. Use cases

    • Games, virtual reality, 3D visualization, and other scenarios that need a realistic look.
    • Point cloud display and model inspection, especially when the near/far relationship between objects matters (see the sketch below).
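
A minimal setup sketch (all values are illustrative):

```js
import * as THREE from 'three';

// Perspective camera for free inspection of a point cloud:
// objects shrink with distance, giving a natural sense of depth.
const camera = new THREE.PerspectiveCamera(
  60,                                      // vertical field of view in degrees
  window.innerWidth / window.innerHeight,  // canvas aspect ratio
  0.1,                                     // near clipping plane
  2000,                                    // far clipping plane
);
camera.position.set(0, -30, 20); // behind and above the cloud (assumed layout)
camera.lookAt(0, 0, 0);
```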

2. Orthographic Camera (OrthographicCamera)

  1. Characteristics

    • Object size is independent of distance and always keeps a fixed scale.
    • No perspective distortion, which makes it suitable for measurement and geometric analysis.
  2. Use cases

    • CAD tools, 2D games, maps, architectural design.
    • Point cloud annotation, engineering measurement, and other scenarios that require precise alignment (see the sketch below).
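
A minimal sketch of a top-down orthographic view for annotation (the frustum size and positions are assumptions): because a pixel always maps to the same world distance, the dragged node stays under the cursor.

```js
import * as THREE from 'three';

// Orthographic camera looking straight down at the lane lines.
const aspect = window.innerWidth / window.innerHeight;
const halfHeight = 25; // half of the visible height in world units (assumed)
const camera = new THREE.OrthographicCamera(
  -halfHeight * aspect, // left
  halfHeight * aspect,  // right
  halfHeight,           // top
  -halfHeight,          // bottom
  0.1,                  // near
  2000,                 // far
);
camera.position.set(0, 0, 100);
camera.lookAt(0, 0, 0);

// Zoom by changing camera.zoom instead of moving the camera closer,
// so the pixel-to-world mapping stays uniform.
camera.zoom = 2;
camera.updateProjectionMatrix();
```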

3. Cube Camera (CubeCamera)

  1. Characteristics

    • Generates a 360° environment map of the scene.
    • Renders the scene in six directions onto the faces of a cube.
  2. Use cases

    • Reflection effects, such as mirror-like materials (see the sketch below).
    • Dynamic environment map rendering.
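
A minimal sketch of a dynamic reflection setup (the sphere, light, and sizes are illustrative):

```js
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 0, 10);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// The cube camera renders the scene in six directions into a cube render target,
// which then serves as the environment map of a mirror-like material.
const cubeRenderTarget = new THREE.WebGLCubeRenderTarget(256);
const cubeCamera = new THREE.CubeCamera(0.1, 1000, cubeRenderTarget);

const mirrorSphere = new THREE.Mesh(
  new THREE.SphereGeometry(2, 32, 32),
  new THREE.MeshStandardMaterial({ envMap: cubeRenderTarget.texture, metalness: 1, roughness: 0 }),
);
scene.add(mirrorSphere, cubeCamera, new THREE.AmbientLight(0xffffff, 1));

function animate() {
  // Re-render the six faces from the reflective object's position each frame.
  mirrorSphere.visible = false; // hide the sphere while capturing its surroundings
  cubeCamera.position.copy(mirrorSphere.position);
  cubeCamera.update(renderer, scene);
  mirrorSphere.visible = true;

  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}
animate();
```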

4. Stereo Camera (StereoCamera)

  1. Characteristics

    • StereoCamera is a special camera in Three.js composed of two sub-cameras (cameraL and cameraR).
    • It works by offsetting the two viewpoints (the eyeSep property) so the scene is observed from slightly different angles.
  2. Use cases

    • Virtual reality (VR): paired with a VR headset to create an immersive experience.
    • 3D film: generating the left and right images needed for stereoscopic vision.
    • Research: simulating binocular vision for depth perception or object localization in computer vision.
    • Point cloud rendering: strengthening the sense of space in point cloud data so users can perceive the distribution more intuitively (see the sketch below).
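
A minimal side-by-side stereo rendering sketch (scene contents and the eye separation value are illustrative):

```js
import * as THREE from 'three';

const scene = new THREE.Scene();
scene.add(new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial()));
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const stereo = new THREE.StereoCamera();
stereo.eyeSep = 0.064; // eye separation in world units (assumed scale)

function render() {
  camera.updateMatrixWorld();
  stereo.update(camera); // derives cameraL / cameraR from the main camera

  const size = new THREE.Vector2();
  renderer.getSize(size);
  renderer.setScissorTest(true);

  // left eye on the left half of the canvas
  renderer.setScissor(0, 0, size.x / 2, size.y);
  renderer.setViewport(0, 0, size.x / 2, size.y);
  renderer.render(scene, stereo.cameraL);

  // right eye on the right half
  renderer.setScissor(size.x / 2, 0, size.x / 2, size.y);
  renderer.setViewport(size.x / 2, 0, size.x / 2, size.y);
  renderer.render(scene, stereo.cameraR);

  renderer.setScissorTest(false);
  requestAnimationFrame(render);
}
render();
```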

Point Cloud Snapping

  1. Ground snapping

```js
// Compute the target position for the current mouse event
calPosition(event: MouseEvent) {
  const oldPos = this.selectObject?.position.clone();
  if (!oldPos) return;
  // Raycast against the point cloud under the cursor
  const intersects = this.getIntersectsObjects(
    event,
    this.renderView.pointCloud.groupPoints.children[0],
  );
  // Default position: the canvas point converted to world coordinates, keeping the old z
  const pos = new Vector2();
  pos.x = event.offsetX;
  pos.y = event.offsetY;
  const newPos = this.renderView.canvasToWorld(pos);
  newPos.setZ(oldPos.z);
  let position = newPos;
  // Unless Shift is held, snap to the first intersected point classified as ground
  if (!event?.shiftKey) {
    if (intersects.length === 0) return;
    position = intersects[0].point.clone();
    for (let i = 0; i < intersects.length; i++) {
      const point = intersects[i];
      const classification = point.object.geometry.getAttribute('classification');
      if (classification) {
        const isGround = classification.getX(point.index);
        if (isGround) {
          position = intersects[i].point.clone();
          break;
        }
      }
    }
  }
  return position;
}
```
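
For context, the ground check above assumes each point carries a per-vertex `classification` attribute. A sketch of how such an attribute could be attached to the point geometry (the encoding, 1 for ground, is an assumption):

```js
import * as THREE from 'three';

// Two sample points with a per-vertex classification value.
const positions = new Float32Array([
  0, 0, 0,    // ground point
  1, 2, 1.5,  // non-ground point
]);
const classification = new Uint8Array([1, 0]); // 1 = ground, 0 = other (assumed)

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('classification', new THREE.BufferAttribute(classification, 1));

const points = new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.1 }));

// The raycaster result's `index` then looks up the same attribute:
const attr = points.geometry.getAttribute('classification');
console.log(attr.getX(0)); // 1 -> treated as ground by calPosition
```
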
  2. Cuboid snapping

```js
export function getMiniBox1(
  transform: Required<ITransform>,
  positions: THREE.BufferAttribute,
  heightRange: [number, number],
) {
  let matrix = new THREE.Matrix4();
  const quaternion = new THREE.Quaternion().setFromEuler(transform.rotation);
  matrix.compose(transform.position, quaternion, transform.scale);
  let invertMatrix = new THREE.Matrix4().copy(matrix).invert();

  let pos = new THREE.Vector3();
  let box = new THREE.Box3(new THREE.Vector3(-0.5, -0.5, -0.5), new THREE.Vector3(0.5, 0.5, 0.5));
  // let newBox = new THREE.Box3(new THREE.Vector3(), new THREE.Vector3());
  // let pointN = 0;
  let offsetFloor = 0.15; // ground offset
  let preData: THREE.Vector3[] = [];

  setPreData();

  if (preData.length === 0) return;

  // filter z
  let info = statisticPositionVInfo(preData);
  let infoRange = getMaxMinInfo(info);
  // console.log('info', info, infoRange);
  // ground offset
  if (infoRange.infoMin + offsetFloor < infoRange.infoMax) {
    infoRange.infoMin += offsetFloor;
  }
  if (infoRange.infoMax <= infoRange.infoMin) return;

  preData = preData.filter((e) => e.z >= infoRange.infoMin && e.z <= infoRange.infoMax);
  if (preData.length === 0) return;
  transform.position.z = (infoRange.infoMin + infoRange.infoMax) / 2;
  transform.scale.z = Math.abs(infoRange.infoMax - infoRange.infoMin);

  // update matrix
  matrix.compose(transform.position, quaternion, transform.scale);
  invertMatrix = new THREE.Matrix4().copy(matrix).invert();
  // to local pos
  preData.forEach((pos) => {
    pos.applyMatrix4(invertMatrix);
  });

  // x
  info = statisticPositionVInfo(preData, 2, 'x');
  infoRange = getMaxMinInfo(info, { filter: 0 } as any);
  // console.log('info', info, infoRange);
  let positionX = (infoRange.infoMin + infoRange.infoMax) / 2;
  transform.scale.x *= Math.abs(infoRange.infoMax - infoRange.infoMin);

  // y
  info = statisticPositionVInfo(preData, 2, 'y');
  infoRange = getMaxMinInfo(info, { filter: 0 } as any);
  // console.log('info', info, infoRange);
  let positionY = (infoRange.infoMin + infoRange.infoMax) / 2;
  transform.scale.y *= Math.abs(infoRange.infoMax - infoRange.infoMin);

  let center = new THREE.Vector3(positionX, positionY, 0);
  center.applyMatrix4(matrix);
  transform.position.copy(center);

  function setPreData() {
    for (let i = 0; i < positions.count; i++) {
      let x = positions.getX(i);
      let y = positions.getY(i);
      let z = positions.getZ(i);
      pos.set(x, y, z).applyMatrix4(invertMatrix);
      if (
        box.min.x <= pos.x &&
        box.max.x >= pos.x &&
        box.min.y <= pos.y &&
        box.max.y >= pos.y &&
        z <= heightRange[1] &&
        z >= heightRange[0]
      ) {
        preData.push(new THREE.Vector3(x, y, z));
      }
    }
  }
}
```
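
A hypothetical usage sketch of getMiniBox1 (the point data, initial transform, and height range are made up; in the platform they come from the loaded cloud and the drawn box):

```js
import * as THREE from 'three';

// A few raw points near (10, 5), packed as a position BufferAttribute.
const raw = new Float32Array([
  9.2, 4.8, -0.4,
  10.6, 5.3, 1.1,
  10.1, 4.6, 0.3,
]);
const positions = new THREE.BufferAttribute(raw, 3);

// Rough initial placement and size of the cuboid annotation.
const transform = {
  position: new THREE.Vector3(10, 5, 0),
  rotation: new THREE.Euler(0, 0, 0),
  scale: new THREE.Vector3(4, 3, 4),
};

// Restrict the fit to points between z = -2 and z = 3 (illustrative range).
getMiniBox1(transform, positions, [-2, 3]);

// When enough points fall inside the box, `transform` is tightened in place;
// its position/scale can then be copied onto the annotation box mesh.
console.log(transform.position, transform.scale);
```
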
  3. Point and line snapping

```js
// Find the nearest annotated feature (node or line segment) within the snap radius
export const getNeighborhoodPos = function (
  renderView: MainRenderView,
  pos: THREE.Vector2,
  position: THREE.Vector3,
  neighborhood = 1,
): THREE.Vector3 | null {
  const { raycaster, camera } = renderView;
  if (raycaster.params.Points) raycaster.params.Points.threshold = neighborhood;
  const obj = renderView.pointCloud.getAnnotateLine();
  raycaster.setFromCamera(pos, camera);
  let closeVector: THREE.Vector3 | null = null;
  let dis = Infinity;
  obj.forEach((item) => {
    if ([LaneMarkingNode, LaneLineCenterNode, LineFacilityNode].some((val) => item instanceof val)) {
      // Node features: snap to the node itself if it is the closest so far
      const distance = position.distanceTo(item.position);
      if (distance < dis && distance < neighborhood) {
        dis = distance;
        closeVector = item.position.clone(); // clone so the source node is not mutated below
      }
    } else if ([LaneMarking, LaneLineCenter, LineFacility].some((val) => item instanceof val)) {
      // Line features: snap to the closest point on each polyline segment
      for (let i = 0; i < item.positions.length - 1; i++) {
        const target = new THREE.Vector3();
        const segment = new THREE.Line3(item.positions[i], item.positions[i + 1]);
        const po = segment.closestPointToPoint(position, true, target);
        const distance = position.distanceTo(po);
        if (distance < dis && distance < neighborhood) {
          dis = distance;
          closeVector = po;
        }
      }
    }
  });
  // Keep the z of the dragged point so snapping only adjusts x/y
  if (closeVector) {
    closeVector.z = position.z;
  }

  return closeVector;
};
```
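
The core of the segment case above is Line3.closestPointToPoint; a standalone sketch of the same idea with made-up data:

```js
import * as THREE from 'three';

// Project the dragged point onto each polyline segment and keep the nearest
// result that falls within the snap radius.
function snapToPolyline(point, polyline, radius = 1) {
  let best = null;
  let bestDist = Infinity;
  const candidate = new THREE.Vector3();

  for (let i = 0; i < polyline.length - 1; i++) {
    const segment = new THREE.Line3(polyline[i], polyline[i + 1]);
    segment.closestPointToPoint(point, true, candidate); // clamp to the segment
    const dist = candidate.distanceTo(point);
    if (dist < radius && dist < bestDist) {
      bestDist = dist;
      best = candidate.clone();
    }
  }
  return best; // null when nothing is within the snap radius
}

// Example: snap a node dragged near an existing lane line.
const lane = [
  new THREE.Vector3(0, 0, 0),
  new THREE.Vector3(10, 0, 0),
  new THREE.Vector3(20, 2, 0),
];
const dragged = new THREE.Vector3(5, 0.4, 0);
console.log(snapToPolyline(dragged, lane)); // ≈ (5, 0, 0)
```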