The 3visionD lens distortion algorithm enables any camera to be used for object detection.
Deploying this service on existing cameras upgrades them with new features.
Camera calibration is required for Geo Tagging.
The time required to calibrate one camera is less than 15 minutes.
A geo map or a 3D model can be imported into Visual Sensor so that objects detected in camera video streams are mapped directly onto a Digital Twin platform.
Applications: Parking Management, Traffic Management, Smart Public Lighting, Harbor Supervision, Area Surveillance, City Planning, AI Prediction Models.
Geo tagging objects in 3D, such as a pedestrian on a sidewalk, a vehicle in a parking lot or a boat in the harbor, over geo maps or 3D models gives computers a real-time overview of the supervised area.
By receiving metadata from a large number of cameras, prediction models are built upon real-world data.
Prediction models combined with real-time data are used for applications such as traffic control, smart city lighting, harbor management and city planning services.
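As a rough illustration of the kind of aggregation such models start from (a generic sketch, not 3visionD's actual pipeline; function and parameter names are assumptions), detection timestamps from the metadata stream can be bucketed into an hourly profile that a prediction model is trained on:

```python
from collections import Counter
from datetime import datetime, timezone

def hourly_profile(detection_timestamps):
    """Bucket detection timestamps (Unix seconds) into counts per hour of day."""
    counts = Counter(datetime.fromtimestamp(t, tz=timezone.utc).hour
                     for t in detection_timestamps)
    return [counts.get(h, 0) for h in range(24)]
```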
Visual Sensor is a service that extracts the 3D coordinates of detected objects from a video stream, adding new features to existing or new video cameras.
Visual Sensor extracts the 3D location of objects detected in the video stream, creating data-rich maps: maps with a layer displaying people, layers displaying different vehicle types, layers displaying boats, and so on.
Data collected from a network of Visual Sensors is displayed on a single 3D map or a city 3D model.
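A per-detection metadata record that such layered maps could be built from might look like the sketch below; the field names are illustrative assumptions, not the actual Visual Sensor output format.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One geo-tagged object reported by a Visual Sensor (illustrative fields)."""
    camera_id: str        # which sensor produced the detection
    object_class: str     # e.g. "pedestrian", "car", "bicycle", "boat"
    latitude: float       # geo coordinates of the object (WGS84)
    longitude: float
    altitude_m: float     # height above the reference plane, for 3D models
    timestamp: float      # Unix time of the detection

def group_into_layers(detections):
    """Group detections by class, one group per map layer (people, cars, boats...)."""
    layers = {}
    for d in detections:
        layers.setdefault(d.object_class, []).append(d)
    return layers
```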
What is the benefit of Visual Sensor?
A simple example of one benefit is a system where ONE Visual Sensor is mounted on a street lamp. Over one year, the geo locations and detection times of pedestrians, vehicles, bicyclists and animals are gathered, and from this data the total time the lamp needs to run at 100% intensity is calculated. Since presence-based lighting requires the lamp to run at full intensity only when a pedestrian is detected, the energy savings can be calculated.
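A rough sketch of that calculation is shown below; the hold time, lamp wattage, yearly on-hours and dimmed level are placeholder assumptions, not measured values.

```python
def full_intensity_hours(detection_times, hold_s=120):
    """Hours the lamp must run at 100%, merging overlapping hold windows.

    detection_times: sorted Unix timestamps of pedestrian detections near the lamp.
    hold_s: assumed time the lamp stays at full intensity after each detection.
    """
    total, window_end = 0.0, None
    for t in detection_times:
        if window_end is None or t > window_end:
            total += hold_s                       # new, non-overlapping window
        else:
            total += (t + hold_s) - window_end    # extend the current window
        window_end = t + hold_s
    return total / 3600.0

def yearly_savings_kwh(full_hours, on_hours=365 * 12, lamp_w=60, dimmed_fraction=0.2):
    """Energy saved versus running at 100% for every on-hour (all figures assumed)."""
    always_full = lamp_w * on_hours / 1000.0
    presence_based = lamp_w * (full_hours + dimmed_fraction * (on_hours - full_hours)) / 1000.0
    return always_full - presence_based
```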
But any camera with AI analytics can be mounted on a street pole and detect objects, so where is the difference?
A video camera exports the detected object's location in pixels relative to the image origin; Visual Sensor exports the detected object's geo location. Installing a large number of cameras normally requires a tremendous amount of human work to convert coordinates from pixels to geo coordinates, while 3visionD camera calibration requires only two calibration points and is done in less than 20 minutes.
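The actual 3visionD calibration method is not described here, but the idea of mapping image pixels to ground coordinates from two reference points can be illustrated with a plain 2D similarity transform (scale, rotation, translation); a real calibration also has to correct lens distortion and perspective, so this is only a conceptual sketch with made-up numbers.

```python
def fit_two_point_transform(p1, p2, g1, g2):
    """Fit a 2D similarity transform from two pixel/ground reference pairs.

    p1, p2: pixel coordinates as complex numbers (x + 1j*y)
    g1, g2: matching ground coordinates as complex numbers (east + 1j*north, metres)
    Returns a function mapping any pixel to ground coordinates.
    """
    a = (g2 - g1) / (p2 - p1)   # combined scale and rotation
    b = g1 - a * p1             # translation
    return lambda p: a * p + b

# Two calibration markers measured on site (values are illustrative)
pixel_to_geo = fit_two_point_transform(
    p1=120 + 460j, p2=880 + 430j,   # marker positions in the image, pixels
    g1=0 + 0j,     g2=25 + 1j,      # surveyed marker positions, metres
)
car_position = pixel_to_geo(500 + 445j)   # geo-locate a detected car
```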
Geo locations of vehicles detected by a network of Visual Sensors are input data for traffic management.
Vehicle geo locations are also used for parking management, where a single application can manage an unlimited number of parking spaces.
A video camera with a restricted area drawn on the map will detect parking violations.
Kindergartens and schools, for example, can be set as restricted zones for pets. An alarm can be raised when a pet is detected in the school yard, and a PTZ camera can be directed to it for a close-up shot.
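One way such a restricted-zone check could be implemented on the receiving side is a simple point-in-polygon test; the zone coordinates, class names and the shapely dependency are illustrative choices, not part of the product.

```python
from shapely.geometry import Point, Polygon

# Restricted zone drawn on the map as a lon/lat polygon (coordinates are made up)
school_yard = Polygon([
    (16.4401, 43.5082), (16.4408, 43.5082),
    (16.4408, 43.5087), (16.4401, 43.5087),
])

def pet_violation(object_class, lon, lat, zone=school_yard, banned=("dog", "cat")):
    """True when a banned object class is geo-tagged inside the restricted zone."""
    return object_class in banned and zone.contains(Point(lon, lat))

if pet_violation("dog", 16.4405, 43.5084):
    print("ALARM: pet in school yard - direct PTZ camera to its geo location")
```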
Light intensity is increased when a person is detected. Using 360-degree cameras, the detection range is extended up to a 30-meter radius.
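A minimal sketch of that presence-based decision, assuming detections arrive as geo coordinates and using a placeholder dimmed level:

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two geo points in metres (haversine)."""
    r = 6_371_000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def lamp_intensity(person_locations, lamp_lat, lamp_lon, radius_m=30.0):
    """Full intensity if any detected person is within the lamp's radius, else dimmed."""
    near = any(distance_m(lat, lon, lamp_lat, lamp_lon) <= radius_m
               for lat, lon in person_locations)
    return 1.0 if near else 0.2   # dimmed level is an assumed value
```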
Boat detection for harbor supervision and dock management.
Visual Sensor can import a 3D map for each video camera, so detected object locations are exported in 3D coordinates.
Automated or manual PTZ control is available over the video stream or over the 3D map.