As robots enter human-occupied environments, they must work effectively with groups of people. To achieve this goal, robots need the ability to detect groups. This ability requires ego-centric (robot-centric) perception, because placing external sensors in the environment is impractical. Robots also need learning algorithms that do not require extensive training, since a priori knowledge of an environment is difficult to acquire. We introduce a new algorithm that addresses these needs: it detects moving groups in real-world, ego-centric RGB-D data from a mobile robot, and it uses unsupervised learning that leverages the underlying structure of the data. This paper provides an overview of our approach and discusses future experimental plans. This work will enable robots to work with human teams in the general public using their own on-board sensors and minimal training.