We address a multi-robot active joint localization and target tracking (AJLATT) problem, in which a team of robots equipped with sensors of limited field of view cooperates with neighboring robots to actively plan individual motions, improving self-localization and target-tracking performance while avoiding collisions with teammates, the target, and the environment. To this end, we propose a distributed algorithm based on Deep Reinforcement Learning (DRL). Compared with other motion-planning strategies, the DRL-based method yields near-optimal, far-sighted solutions by learning from numerous trial-and-error interactions with the environment. Simulations in several scenarios demonstrate the capability of the proposed algorithm, and its performance is compared with that of other motion strategies.
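To give a rough sense of the decentralized observe-act loop such a DRL policy would run on each robot, the following is a minimal sketch only: the environment model, observation layout, `placeholder_policy`, and constants such as `SENSE_RANGE` are hypothetical placeholders and not the algorithm described in this paper.

```python
import numpy as np

# Hypothetical sketch of a decentralized observe-act loop for AJLATT.
# Each robot sees only what falls inside its limited sensing range, feeds
# that local observation to a (placeholder) policy, and applies the
# resulting velocity command. A trained DRL policy network would replace
# `placeholder_policy`; here it is a fixed random linear map.

N_ROBOTS = 4
SENSE_RANGE = 3.0            # limited field of view, modeled as a range cutoff
OBS_DIM = 2 * N_ROBOTS       # relative target + neighbor offsets (zero-padded)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, OBS_DIM))  # placeholder policy weights


def local_observation(i, robots, target):
    """Relative target and neighbor positions visible to robot i."""
    obs = []
    for p in [target] + [robots[j] for j in range(N_ROBOTS) if j != i]:
        rel = p - robots[i]
        obs.append(rel if np.linalg.norm(rel) <= SENSE_RANGE else np.zeros(2))
    return np.concatenate(obs)


def placeholder_policy(obs):
    """Stand-in for a trained DRL policy; outputs a bounded velocity command."""
    return np.clip(W @ obs, -1.0, 1.0)


robots = rng.uniform(-5, 5, size=(N_ROBOTS, 2))
target = np.array([0.0, 0.0])
dt = 0.1

for step in range(100):
    target = target + dt * np.array([0.5, 0.2])            # target moves
    actions = [placeholder_policy(local_observation(i, robots, target))
               for i in range(N_ROBOTS)]
    robots = robots + dt * np.array(actions)                # decentralized updates
    # A training reward would trade off localization/tracking accuracy
    # against collision penalties; it is omitted in this sketch.

print("final robot positions:\n", robots)
```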