Researchers at Carnegie Mellon University have found that videos captured by smartphones can be used to determine the location of a shooter. Gizmodo reports: The Video Event Reconstruction and Analysis system, or VERA for short, was developed at CMU's Language Technologies Institute with the cooperation of SITU Research, which contributed expertise in ballistics and architecture. The tool was released last month as free, open-source code at the Association for Computing Machinery's International Conference on Multimedia in Nice, France. Using machine learning, VERA first synchronizes footage from multiple smartphone videos shot in and around the scene of a shooting. The more footage collected, the more accurate the results, but the researchers found the system performed well even with footage from just three devices. Once the clips are synchronized, VERA calculates where each video was filmed based on landmarks and other notable features visible in the footage.
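The article doesn't detail how VERA's machine-learning synchronization works, but the classical baseline for aligning two recordings of the same event is audio cross-correlation: slide one waveform against the other and find the lag with the strongest match. A minimal sketch of that idea (the function name and synthetic signals are illustrative, not VERA's code):

```python
import numpy as np

def estimate_offset(a: np.ndarray, b: np.ndarray, sr: int) -> float:
    """Estimate how many seconds recording `b` started after
    recording `a`, by cross-correlating their mono audio tracks."""
    # Normalize so loudness differences between phones don't dominate.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")
    # Peak index -> sample lag -> seconds.
    lag = np.argmax(corr) - (len(b) - 1)
    return lag / sr

# Two synthetic "recordings" of the same impulsive event (e.g. a gunshot),
# where phone B started recording 0.25 s after phone A. sr = 1000 Hz to
# keep the brute-force correlation cheap.
sr = 1000
event = np.random.default_rng(0).standard_normal(100)
a = np.zeros(4 * sr)
a[sr:sr + 100] += event            # event at t = 1.00 s in A's timeline
b = np.zeros(4 * sr)
b[750:750 + 100] += event          # event at t = 0.75 s in B's timeline
print(round(estimate_offset(a, b, sr), 3))  # -> 0.25
```

For real, longer clips an FFT-based correlation (e.g. `scipy.signal.fftconvolve`) would replace the quadratic `np.correlate`, and camera positions would then be recovered separately from visual landmarks as the article describes.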
The system then processes the audio from each clip, identifying two distinct sounds: the crack of the shock wave created by the supersonic bullet in flight, and the blast from the weapon's muzzle. The time delay between the two sounds provides a crucial clue, and the sounds also help reveal the type of gun used, which in turn indicates the bullet's speed. By combining all of that information, VERA can determine the shooter's location with surprising accuracy. During development, VERA was tested on video captured by three smartphones during the first minute of the 2017 mass shooting in Las Vegas, Nevada, which included multiple shots. The system correctly estimated that the shooter was in the north wing of the Mandalay Bay hotel; even the margin of error still placed the shooter within the hotel.
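To see why the crack-to-blast delay constrains distance, consider a deliberately crude 1-D geometry (not VERA's actual multi-camera solver): assume the bullet flies straight at the microphone. The shock wave then arrives at time D/v (bullet speed v) and the muzzle blast at D/c (speed of sound c), so the measured delay is D/c − D/v, which can be inverted for the distance D once v is known from the gun type:

```python
C_SOUND = 343.0  # speed of sound in m/s at roughly 20 degC

def shooter_distance(delay_s: float, bullet_speed: float,
                     c: float = C_SOUND) -> float:
    """Crude 1-D estimate: bullet assumed to fly straight at the mic.
    delay = D/c - D/v  =>  D = delay * c * v / (v - c)."""
    if bullet_speed <= c:
        raise ValueError("bullet must be supersonic to produce a shock-wave crack")
    return delay_s * c * bullet_speed / (bullet_speed - c)

# e.g. 0.5 s between crack and blast, rifle round at ~960 m/s
# (both numbers are illustrative, not from the article):
print(round(shooter_distance(0.5, 960.0), 1))  # -> 266.8 (meters)
```

The real geometry involves a Mach cone swept along the trajectory rather than a head-on path, which is why VERA fuses delay measurements from several camera positions to triangulate the shooter instead of relying on a single range estimate like this one.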
Read more of this story at Slashdot.