We present a theoretical framework for online probabilistic plan recognition in cooperative multiagent systems. Our model extends the Abstract Hidden Markov Model (AHMM) (Bui, Venkatesh, & West 2002) and consists of a hierarchical dynamic Bayes network that allows reasoning about the interaction among multiple cooperating agents. We provide an in-depth analysis of the two policy termination schemes for concurrent action, T_all and T_any, introduced in (Rohanimanesh & Mahadevan 2003). Under the T_all scheme, a joint policy terminates only when all agents have terminated executing their individual policies; under the T_any scheme, a joint policy terminates as soon as any one agent terminates executing its individual policy. Since exact inference is intractable, we describe an approximate algorithm using Rao-Blackwellized particle filtering. Our approximate inference procedure reduces the complexity from time exponential in N (the number of agents) and K (the number of levels) to time linear in both N and K̂ ≤ K (the lowest level of plan coordination) under the T_all termination scheme, and to time O(N log N) and linear in K̂ under the T_any termination scheme.
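To make the Rao-Blackwellization idea concrete, the sketch below shows the generic scheme on a toy switching linear-Gaussian model (not the AHMM of the paper): each particle samples only the discrete chain, while the remaining conditionally Gaussian state is integrated out analytically with a one-dimensional Kalman filter. All model parameters here (the transition matrix `P`, regime gains `a`, and noise variances) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: discrete regime r_t in {0, 1} with transition
# matrix P; conditioned on r_t, x_t = a[r_t] * x_{t-1} + process noise,
# and we observe y_t = x_t + observation noise.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
a = np.array([1.0, 0.5])
q, r_var = 0.1, 0.5  # process / observation noise variances

def rbpf_step(particles, y):
    """One Rao-Blackwellized particle-filter step: sample the discrete
    regime per particle, then update the conditional Gaussian over x
    exactly with a 1-D Kalman filter, and resample."""
    updated, weights = [], []
    for reg, mu, var, w in particles:
        reg_new = rng.choice(2, p=P[reg])          # sample discrete chain
        mu_pred = a[reg_new] * mu                  # Kalman predict
        var_pred = a[reg_new] ** 2 * var + q
        s = var_pred + r_var                       # innovation variance
        k = var_pred / s                           # Kalman gain
        mu_post = mu_pred + k * (y - mu_pred)      # Kalman update
        var_post = (1.0 - k) * var_pred
        # Predictive likelihood of y weights the particle.
        lik = np.exp(-0.5 * (y - mu_pred) ** 2 / s) / np.sqrt(2 * np.pi * s)
        updated.append((reg_new, mu_post, var_post))
        weights.append(w * lik)
    weights = np.array(weights)
    weights /= weights.sum()                       # normalize
    idx = rng.choice(len(updated), size=len(updated), p=weights)
    n = len(updated)
    return [(updated[i][0], updated[i][1], updated[i][2], 1.0 / n)
            for i in idx]

# Filter a short synthetic observation sequence.
particles = [(0, 0.0, 1.0, 1.0 / 100) for _ in range(100)]
for y in [0.3, 0.5, 0.1]:
    particles = rbpf_step(particles, y)
```

The per-particle state is only the discrete regime plus sufficient statistics (mean, variance) of the analytically tracked part; this is what reduces the sampled dimension, mirroring how the paper's procedure samples only the coordination structure to obtain the stated complexity reduction.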