Vitaly Ablavsky
Principal Research Scientist
Affiliate Assistant Professor, Electrical and Computer Engineering
vablavsky@apl.washington.edu
Phone: 206-616-0380
Research Interests
Ablavsky's research has focused on machine learning, computer vision, and autonomous systems. His broader interests include the application of artificial intelligence to problems in diverse domains and the role of AI in our society.
Department Affiliation
Environmental & Information Systems
Education
B.A. Mathematics, Brandeis University, 1992
M.S. Computer Science, University of Massachusetts at Amherst, 1996
Ph.D. Computer Science, Boston University, 2011
Publications
2000-present and while at APL-UW
ZeroWaste dataset: Towards deformable object segmentation in cluttered scenes
Bashkirova, D., and 9 others including V. Ablavsky, "ZeroWaste dataset: Towards deformable object segmentation in cluttered scenes," Proc., IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 18-24 June 2022, New Orleans, LA, doi:10.1109/CVPR52688.2022.02047 (IEEE, 2022).
Less than 35% of recyclable waste in the US is actually recycled, which leads to increased soil and sea pollution and is a major concern for environmental researchers and the general public. At the heart of the problem are the inefficiencies of the waste sorting process (separating paper, plastic, metal, glass, etc.) due to the extremely complex and cluttered nature of the waste stream. Recyclable waste detection poses a unique computer vision challenge, as it requires detecting highly deformable and often translucent objects in cluttered scenes without the kind of context information usually present in human-centric datasets. Suitable datasets and methods for this task are currently lacking in the literature. In this paper, we take a step towards computer-aided waste detection and present the first in-the-wild, industrial-grade waste detection and segmentation dataset, ZeroWaste. We believe that ZeroWaste will catalyze research in object detection and semantic segmentation in extreme clutter as well as applications in the recycling domain.
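As a rough illustration of the kind of benchmark a dataset like ZeroWaste enables, the sketch below computes mean intersection-over-union (mIoU) between a predicted and a ground-truth segmentation label map. The waste-category list and array shapes are illustrative assumptions, not the dataset's actual interface.

import numpy as np

# Hypothetical class list for a waste-sorting segmentation task.
CLASSES = ["background", "cardboard", "soft_plastic", "rigid_plastic", "metal"]

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """pred, gt: integer label maps of shape (H, W)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Dummy label maps standing in for a model prediction and an annotation.
rng = np.random.default_rng(0)
pred = rng.integers(0, len(CLASSES), size=(480, 640))
gt = rng.integers(0, len(CLASSES), size=(480, 640))
print(f"mIoU: {mean_iou(pred, gt, len(CLASSES)):.3f}")

Skipping classes absent from both maps keeps rarely occurring waste categories from distorting the per-image average.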
The 6th AI City Challenge
Naphade, M., and 16 others including V. Ablavsky, "The 6th AI City Challenge," Proc., IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 19-20 June 2022, New Orleans, LA, doi:10.1109/CVPRW56347.2022.00378 (IEEE, 2022).
The 6th edition of the AI City Challenge specifically focuses on problems in two domains where there is tremendous unlocked potential at the intersection of computer vision and artificial intelligence: Intelligent Traffic Systems (ITS) and brick-and-mortar retail businesses. The four challenge tracks of the 2022 AI City Challenge received participation requests from 254 teams across 27 countries. Track 1 addressed city-scale multi-target multi-camera (MTMC) vehicle tracking. Track 2 addressed natural-language-based vehicle track retrieval. Track 3 was a brand-new track for naturalistic driving analysis, where the data were captured by several cameras mounted inside the vehicle focusing on driver safety, and the task was to classify driver actions. Track 4 was another new track aiming to achieve automated retail checkout using only a single-view camera. We released two leaderboards for submissions based on different methods: a public leaderboard for the contest, where no use of external data is allowed, and a general leaderboard for all submitted results. The top performances of participating teams established strong baselines and even outperformed the state of the art in the proposed challenge tracks.
Leveraging affect transfer learning for behavior prediction in an intelligent tutoring system
Ruiz, N., and 12 others including V. Ablavsky, "Leveraging affect transfer learning for behavior prediction in an intelligent tutoring system," Proc., 16th IEEE International Conference on Automatic Face and Gesture Recognition, 15-18 December 2021, Jodhpur, India, doi:10.1109/FG52635.2021.9667001 (IEEE, 2022).
In this work, we propose a video-based transfer learning approach for predicting problem outcomes of students working with an intelligent tutoring system (ITS). By analyzing a student's face and gestures, our method predicts from a video feed the outcome of a student answering a problem in an ITS. Our work is motivated by the reasoning that the ability to predict such outcomes enables tutoring systems to adjust interventions, such as hints and encouragement, and ultimately to yield improved student learning. We collected a large labeled dataset of student interactions with an intelligent online math tutor consisting of 68 sessions, in which 54 individual students solved 2,749 problems. We will release this dataset publicly upon publication of this paper; it will be available at https://www.cs.bu.edu/faculty/betke/research/learning/. Working with this dataset, our transfer-learning challenge was to design a representation in the source domain of pictures obtained "in the wild" for the task of facial expression analysis, and to transfer this learned representation to the task of human behavior prediction in the domain of webcam videos of students in a classroom environment. We developed a novel facial affect representation and a user-personalized training scheme that unlocks the potential of this representation. We designed several variants of a recurrent neural network that model the temporal structure of video sequences of students solving math problems. Our final model, named ATL-BP for Affect Transfer Learning for Behavior Prediction, achieves a relative increase in mean F-score of 50% over the state-of-the-art method on this new dataset.
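The abstract describes the temporal model only at a high level, so the following is a minimal sketch, not the authors' released code: a recurrent network that consumes per-frame facial-affect features and emits outcome logits, in the spirit of the ATL-BP pipeline. The feature dimension, hidden size, and two-way outcome are illustrative assumptions.

import torch
import torch.nn as nn

class OutcomePredictor(nn.Module):
    """Hypothetical stand-in for an ATL-BP-style temporal model."""
    def __init__(self, affect_dim: int = 128, hidden_dim: int = 64, num_outcomes: int = 2):
        super().__init__()
        # A GRU models the temporal structure of the video sequence;
        # the abstract notes several recurrent variants were explored.
        self.rnn = nn.GRU(affect_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_outcomes)

    def forward(self, affect_feats: torch.Tensor) -> torch.Tensor:
        # affect_feats: (batch, time, affect_dim), one vector per frame,
        # e.g. from a facial-expression encoder pretrained "in the wild".
        _, last_hidden = self.rnn(affect_feats)
        return self.head(last_hidden[-1])  # (batch, num_outcomes) logits

# Dummy usage: 4 clips of 30 frames each.
model = OutcomePredictor()
print(model(torch.randn(4, 30, 128)).shape)  # torch.Size([4, 2])

Taking the final hidden state summarizes the whole clip in one vector; the user-personalized training scheme the abstract mentions would additionally adapt this representation to the individual student.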