This paper describes a novel method for estimating the two-dimensional position and orientation of smart cameras in a network, using depth and angle measurements of a person moving through an area. The area of interest is the region covered by the viewing frustums of all the cameras. In this scheme, each camera runs a face detection algorithm to detect the person and uses stereo vision to estimate the person's depth. These relative measurements are sent to a central node, which builds the map of the entire sensor network. The low communication bandwidth required makes the system highly scalable. The method can be used to implement distributed surveillance systems as well as higher-level applications such as environmental modeling, 3D model construction, and object tracking.
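To illustrate the kind of computation the central node might perform, the sketch below shows one standard way to recover the 2-D relative pose (rotation and translation) between two cameras from corresponding person detections. This is a generic closed-form least-squares alignment, not the paper's specific algorithm; the function names and the assumption that detections are time-synchronized across cameras are ours.

```python
import math

def polar_to_xy(depth, bearing):
    # Convert a (depth, bearing-angle) detection into the
    # camera's own local 2-D Cartesian frame.
    return (depth * math.cos(bearing), depth * math.sin(bearing))

def relative_pose(pts_a, pts_b):
    """Estimate the rotation phi and translation (tx, ty) that map
    camera B's local frame into camera A's frame, given corresponding
    person positions seen by both cameras at the same instants.
    Closed-form 2-D rigid alignment (least squares)."""
    n = len(pts_a)
    # Centroids of both point sets.
    ma = (sum(p[0] for p in pts_a) / n, sum(p[1] for p in pts_a) / n)
    mb = (sum(p[0] for p in pts_b) / n, sum(p[1] for p in pts_b) / n)
    # Accumulate the cross-correlation terms of the centered points.
    C = S = 0.0
    for (ax, ay), (bx, by) in zip(pts_a, pts_b):
        ax, ay = ax - ma[0], ay - ma[1]
        bx, by = bx - mb[0], by - mb[1]
        C += ax * bx + ay * by
        S += ay * bx - ax * by
    phi = math.atan2(S, C)  # optimal rotation angle
    c, s = math.cos(phi), math.sin(phi)
    # Translation that aligns the centroids after rotation.
    tx = ma[0] - (c * mb[0] - s * mb[1])
    ty = ma[1] - (s * mb[0] + c * mb[1])
    return phi, (tx, ty)
```

With this relative pose known for each camera pair that has seen the person, the central node can chain the transforms to place every camera in a common map frame.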