WebSocket is a communication protocol. RabbitMQ is a message broker. Two different things.
Now, whether the client connects to your WebSocket server or to the RabbitMQ server directly makes no difference in terms of the number of sockets used. You will hit the same limits either way.
But there are other, major differences. First of all, WebSockets are supported by browsers. Browsers are very limited in terms of networking; in particular, they won't let you open arbitrary TCP connections. So if you want to support browsers, then WebSockets is a good choice. I don't know if RabbitMQ works over WebSockets (another answer claims it does). But let's say it does. Then you would still use WebSockets, but connect to RabbitMQ directly instead of to your own server. If it doesn't support WebSockets, then you have no choice but to put a WebSocket server in front of it.
So we are in a situation where the transport protocol isn't really relevant (which is fine, there's nothing wrong with WebSockets), and the question becomes: should clients connect to my own server or to RabbitMQ directly? I believe that is the real question here.
First of all, however well RabbitMQ scales, you can of course implement your custom server to scale at least as well. And if RabbitMQ scales badly, you won't be able to improve it unless you put a custom server in front of it. This is of course a tradeoff: you need the time and the knowledge to do that correctly. Secondly, RabbitMQ won't be able to handle more sockets than your custom server could. Sockets, as in file descriptors, aren't the bottleneck anyway; you can raise your OS limits to handle millions of them if you want to. The true bottleneck is what those servers do with the data the sockets transmit, and how big the traffic is. The transport protocol is rarely the bottleneck.
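To give an idea of what raising those limits looks like at the process level, here's a minimal, Linux-oriented Go sketch; the system-wide and hard limits themselves are configured in the OS (ulimit, limits.conf, systemd settings), not in code:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rl syscall.Rlimit
	// Read the current per-process open-file limit (RLIMIT_NOFILE).
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	fmt.Printf("soft=%d hard=%d\n", rl.Cur, rl.Max)

	// Raise the soft limit up to the hard limit; going beyond the hard
	// limit requires root or changing the OS configuration mentioned above.
	rl.Cur = rl.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
}
```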
And thirdly, most importantly: connecting to RabbitMQ directly would completely break your application. What about authentication? What about validation? What about additional custom logic? What about security (e.g. DDoS protection)? Etc., etc. Don't ever do such a thing. Resources like message brokers or databases have to be hidden behind your own servers. No production-ready application can work like that.
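As an illustration of the "hide the broker behind your server" point, here's a rough Go sketch of a WebSocket gateway that authenticates and validates clients before anything reaches the broker. The gorilla/websocket library, the token check and the publishToBroker stub are just placeholders for whatever you actually use:

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket" // assumed WebSocket library
)

var upgrader = websocket.Upgrader{}

// checkToken is a stand-in for your real authentication (sessions, JWT, ...).
func checkToken(token string) bool { return token != "" }

// publishToBroker is a stand-in for the internal RabbitMQ publish; clients
// never talk to the broker themselves.
func publishToBroker(playerID string, msg []byte) {
	log.Printf("validated message from %s: %s", playerID, msg)
}

func wsHandler(w http.ResponseWriter, r *http.Request) {
	token := r.URL.Query().Get("token")
	if !checkToken(token) {
		http.Error(w, "unauthorized", http.StatusUnauthorized)
		return
	}
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	defer conn.Close()
	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			return
		}
		// Validate, rate-limit and apply game rules here, then forward.
		publishToBroker(token, msg)
	}
}

func main() {
	http.HandleFunc("/ws", wsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```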
So how do you scale such a game application? There are many variants. One widely accepted way is that when a new game starts, the application chooses a server for it to live on. So you need two kinds of servers: one that hosts games, and one that chooses the hosting server (technically these can be the same server, but that's not necessarily a good decision - a story for another time). The game and all its participants are then redirected to that particular server and communicate with that server only.
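A bare-bones sketch of the "choosing" server could look like this; the server list, addresses and round-robin policy are made-up examples, and in reality you'd pick the least-loaded host from some registry:

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// Known game-server addresses; in practice this would come from service
// discovery or a registry, not a hard-coded list.
var gameServers = []string{
	"wss://game-1.example.com/ws",
	"wss://game-2.example.com/ws",
}

var next uint64

// newGameHandler picks a host for the new game and tells the client where
// to connect; every participant of that game gets the same address.
func newGameHandler(w http.ResponseWriter, r *http.Request) {
	i := atomic.AddUint64(&next, 1) % uint64(len(gameServers))
	fmt.Fprintf(w, `{"server": %q}`, gameServers[i])
}

func main() {
	http.HandleFunc("/new-game", newGameHandler)
	http.ListenAndServe(":8081", nil)
}
```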
This works well for any scenario where we have isolated pieces, like online games. But even MMO games use a similar strategy: they divide the game world into regions, try to ensure that no more than a couple of thousand players are in any one region at the same time, and put a server per region.
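If it helps, a region-based split can be as simple as a deterministic mapping from world coordinates to a region key and from a region key to a host; the region size and naming below are invented for the example:

```go
package main

import "fmt"

const regionSize = 1000.0 // world units per region, an arbitrary example value

// regionFor maps a world position to a region key; each region is handled
// by exactly one server.
func regionFor(x, y float64) string {
	return fmt.Sprintf("region-%d-%d", int(x/regionSize), int(y/regionSize))
}

// serverFor would look the region up in a registry; here it is just a
// placeholder naming convention.
func serverFor(region string) string {
	return region + ".game.example.com"
}

func main() {
	r := regionFor(2500, 340)
	fmt.Println(r, "->", serverFor(r))
}
```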
Depending on your actual requirements and how "real-time" your game has to be (poker probably doesn't have to be correct down to, say, milliseconds), there is a different, global approach possible. You have, say, 10 web servers that accept (WebSocket) connections, assigned in a round-robin fashion or arbitrarily. Those web servers then communicate with each other, and here you can put a RabbitMQ instance behind them as an internal message broker. But other pub/sub servers (e.g. Redis) will probably work better.
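Here's a rough sketch of that internal pub/sub wiring using the go-redis client; the channel name and address are made up, and in the real server the relay would write to the locally connected WebSocket clients instead of printing:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9" // assumed Redis client library
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Each web server subscribes to the channels of the games it holds
	// connections for, and relays what it receives to its local sockets.
	sub := rdb.Subscribe(ctx, "game:42")
	defer sub.Close()

	// Wait until the subscription is confirmed before publishing.
	if _, err := sub.Receive(ctx); err != nil {
		panic(err)
	}

	go func() {
		for msg := range sub.Channel() {
			// In the real server this would be broadcast to the local
			// WebSocket connections for game 42.
			fmt.Println("relay to local clients:", msg.Payload)
		}
	}()

	// When a locally connected player acts, publish so that the other
	// web servers (and their clients) see it too.
	if err := rdb.Publish(ctx, "game:42", "player A raised 50").Err(); err != nil {
		panic(err)
	}

	select {} // keep the process alive for the demo
}
```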