Video Summarization and Visual Analysis of Video Data
Mobile sensing has enabled efficient, cost-effective data collection procedures that have opened new research frontiers, particularly in urban sensing and transportation. In the past, costly and time-consuming data collection meant that only a limited number of urban indicators were measured and made available to researchers. As a result, our understanding of cities on many frontiers was bounded by the ability to collect, record, manage, and store data. Recent advances in low-cost sensing devices, together with new techniques in computer vision and machine learning, have led to massive data sets collected by fleets of sensor-equipped vehicles moving through city streets.
In this project, we propose to employ machine learning techniques to create adaptive sampling profiles and a data-driven, opportunistic approach to data acquisition from moving sensors. Our immediate goal is to drastically cut the cost of deploying video and image sensors, making them more practical. To this end, we plan to explore a novel research direction: detecting the salient frames in sensor-captured video using computer vision and video segmentation algorithms. A data-driven machine learning approach will then be used to identify the control features that improve sensor data acquisition and avoid wasting memory and storage resources. We plan to evaluate our proposed methods by demonstrating their effectiveness in a pedestrian mobility analysis. Specifically, we provide a method to count pedestrians from a moving car instead of relying on conventional methods such as fixed sensors or human counters, which, due to their high cost, suffer from very limited spatial coverage.
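As an illustration only, salient-frame detection can be approximated by a simple inter-frame difference test: a frame is kept when its mean absolute pixel change from the previous frame exceeds a threshold. This minimal sketch is not the proposed segmentation algorithm; the function name, threshold value, and grayscale-frame representation are assumptions for the example.

```python
import numpy as np

def salient_frames(frames, threshold=10.0):
    """Return indices of frames whose mean absolute pixel difference
    from the previous frame exceeds `threshold`.

    frames: array-like of shape (n_frames, height, width), grayscale.
    Frame 0 is always kept as the reference frame.
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Mean absolute pixel change between each pair of consecutive frames.
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    return [0] + [i + 1 for i, d in enumerate(diffs) if d > threshold]

# Synthetic example: a static scene with a sudden change at frame 3.
video = np.zeros((5, 4, 4))
video[3] = 100.0
print(salient_frames(video))  # frames 3 and 4 differ from their predecessors
```

In practice a real pipeline would operate on decoded camera frames and a learned saliency criterion, but even this thresholding view shows how most redundant frames could be discarded before they reach storage.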
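The adaptive sampling profile mentioned above can be sketched, under strong simplifying assumptions, as a feedback rule that samples more densely when recent frames were salient and backs off when the scene is quiet. The multiplicative update and the interval bounds below are illustrative choices, not the project's actual control policy.

```python
def adapt_interval(interval, was_salient, min_interval=1, max_interval=30):
    """Update the frame-sampling interval based on recent saliency.

    Halve the interval (sample more often) after a salient frame;
    double it (sample less often) after a quiet one. The result is
    clamped to [min_interval, max_interval].
    """
    if was_salient:
        return max(min_interval, interval // 2)
    return min(max_interval, interval * 2)

# Example: a quiet stretch followed by activity.
interval = 8
interval = adapt_interval(interval, was_salient=False)  # backs off to 16
interval = adapt_interval(interval, was_salient=True)   # drops back to 8
```

In the proposed system, a learned model over the identified control features would replace this fixed rule, but the structure (a sensing loop whose rate is driven by the data itself) is the same.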