I used the cluster command to try to remove the first node of a cluster, like this:

cluster node mynode1 /force

That much worked okay: Cluster Administrator showed that the node had been taken offline, and its resources had automatically been assumed by node 2. I then tried:

cluster node mynode1 /evict

and got the error below:

>cluster node clu1 /evict
System error 1753 has occurred (0x000006d9).
There are no more endpoints available from the endpoint mapper.

Shortly after this the cluster went offline completely, and it doesn't look like it will recover by itself.

Is it incorrect to remove the first node of a cluster? I assume not, since that node could of course crash and another node would have to assume its role. So what happened here?