How dumb is it to implement a bridge AI in the browser using wasm?

I've been playing with implementing an online bridge game (as in cards), and am thinking about creating an AI. I'm wondering whether I could save server resources by putting the AI into the client. This would make scaling up far easier, but I'm not sure what limitations browsers place on wasm code.

The biggest issue is that I'll probably have peak memory use in excess of a gigabyte. Does anyone know at what point the browser will decide that's too much? Are there other issues I should worry about?

Right now I have no AI code (well, a random AI), so I can't benchmark, but I can estimate how many possible plays I'll need to cache.

Since you are using Rust, you can have both!
Run it in wasm when possible and fall back to the server if not.
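
Roughly: the AI lives in one ordinary crate, with thin cfg-gated wrappers per target. A minimal sketch, where every name (`GameState`, `Play`, `choose_play`, ...) is hypothetical:

```rust
// One crate holds the AI; thin cfg-gated wrappers adapt it to each target.

pub struct GameState; // hands, tricks, contract, ...
pub struct Play;      // which card to play

/// The AI itself is plain Rust and doesn't care where it runs.
pub fn choose_play(_state: &GameState) -> Play {
    todo!("search / evaluation goes here")
}

// Compiled for wasm32, expose it to the browser through wasm-bindgen...
#[cfg(target_arch = "wasm32")]
mod wasm_api {
    use wasm_bindgen::prelude::*;

    #[wasm_bindgen]
    pub fn choose_play_json(state_json: &str) -> String {
        // deserialize state_json, call super::choose_play, serialize the Play
        let _ = state_json;
        unimplemented!()
    }
}

// ...and compiled natively, wrap the same choose_play in an HTTP handler
// (axum, actix, etc.) so constrained clients can fall back to the server.
#[cfg(not(target_arch = "wasm32"))]
mod server_api {
    // e.g. an async handler that deserializes a GameState, calls
    // super::choose_play, and returns the Play as JSON
}
```

The browser tries the wasm path first; if instantiation fails or the device looks too small, the JS side calls the server endpoint instead.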

I have no idea really. I suspect many users would be a bit distressed by a webpage consuming a gig of RAM. Meanwhile the same gig of RAM on the server can be used to serve AI to many browsers concurrently.

I think the threshold for alarming users with a 1GB page overhead has long ago been surpassed.

Do you have to transfer that 1GB to the client first? That would certainly put an end to it.

Popular chess sites like chess.com and lichess.org run AI in the browser using WASM, and it seems to work great. They do warn mobile device users that enabling client-side analysis can use a lot of battery.


Sadly, the gig of RAM would be unique to each robot, since it would be a cache of possible endings for the play of a given hand. You could try to share the cache, but it probably wouldn't be very helpful.

That's great to hear! I'm not sure I want to go to the effort of a WASM build + sending info back and forth (currently the max number of players is 4), but it's nice to know that it's possible to make client-side AI work. Although I suppose I don't know whether the chess AIs store a large cache like bridge AIs do.

No, the 1GB (estimated) would be a cache of the outcomes of different "final" states of the play of a bridge hand, so it would be computed on the client.
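
To make "cache" concrete, something along these lines, where the key would be a canonicalized encoding of the remaining cards so equivalent positions share an entry (the types and sizes here are placeholders, not my actual design):

```rust
use std::collections::HashMap;

type Position = u64; // e.g. a packed bitset of remaining cards + whose turn
type Score = i8;     // e.g. tricks for declarer from this position

#[derive(Default)]
struct EndgameCache {
    table: HashMap<Position, Score>,
}

impl EndgameCache {
    /// Look up a solved position, or solve it and remember the result.
    fn score(&mut self, pos: Position, solve: impl FnOnce(Position) -> Score) -> Score {
        if let Some(&s) = self.table.get(&pos) {
            return s;
        }
        let s = solve(pos);
        self.table.insert(pos, s);
        s
    }

    /// Very crude memory estimate: entries × (key + value + map overhead).
    fn approx_bytes(&self) -> usize {
        self.table.len() * (8 + 1 + 16)
    }
}
```

The memory question then comes down to how many distinct positions end up in the table, times the bytes per entry.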

I have an old Chromebook that doesn't have enough memory, so I use the task manager to see which tab to blame when things slow down. It's not unusual to see pages taking a GB or more even without AI in the browser.


From my experience the browser won't really try to cap memory usage. I've been to websites which use so much memory that they eventually force the OS to start paging, making my entire system grind to a halt.

Would limiting how far into the future you calculate possible endings be feasible without impacting the game experience too much? So instead of (for example) calculating all possible game states to a depth of 10 moves, you'd only go to 8 or 9?

Optimising is normally done by either doing something more efficiently/intelligently or doing less work. I'm assuming the game engine is already doing everything it can to reduce memory usage, so if you want to do work in the browser you'd need to find ways to do less.
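
To make the depth-cap idea concrete, a minimal negamax-style sketch; every type and method here is a placeholder:

```rust
struct State;

impl State {
    fn legal_plays(&self) -> Vec<State> { Vec::new() } // placeholder
    fn heuristic_score(&self) -> i32 { 0 }             // placeholder
    fn is_terminal(&self) -> bool { true }             // placeholder
    fn exact_score(&self) -> i32 { 0 }                 // placeholder
}

fn search(state: &State, depth: u32) -> i32 {
    if state.is_terminal() {
        return state.exact_score();
    }
    if depth == 0 {
        // This is where "8 or 9 instead of 10" saves work: estimate
        // the position instead of enumerating every continuation.
        return state.heuristic_score();
    }
    state
        .legal_plays()
        .iter()
        .map(|next| -search(next, depth - 1))
        .max()
        .unwrap_or_else(|| state.heuristic_score())
}
```

Each level you shave off cuts the tree by roughly a branching-factor multiple, so even one or two levels can be a big saving.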

It's probably implemented so that all the simulation and game logic lives inside an engine compiled to WebAssembly, with a JavaScript shell that handles the UI and synchronises moves between players.

I'm working on a CAD engine which puts a big emphasis on being used from the browser and that's roughly the design I use. It works pretty well.
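
In Rust terms the split might look roughly like this, assuming wasm-bindgen for the boundary (names are made up, and I don't actually know what the chess sites use):

```rust
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct Engine {
    // all simulation/game state lives here, inside the wasm heap
    moves: Vec<u32>,
}

#[wasm_bindgen]
impl Engine {
    #[wasm_bindgen(constructor)]
    pub fn new() -> Engine {
        Engine { moves: Vec::new() }
    }

    /// Called by the JS shell when the user (or the network) makes a move.
    pub fn apply_move(&mut self, encoded: u32) {
        self.moves.push(encoded);
    }

    /// Called by the JS shell to ask the AI for its move.
    pub fn best_move(&self) -> u32 {
        // the search would go here; return something trivial in this sketch
        self.moves.last().copied().unwrap_or(0)
    }
}
```

The JS shell just constructs an `Engine`, feeds it moves, and asks for `best_move()`; everything heavy stays on the wasm side.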

I couldn't decrease the distance looked ahead, because that just doesn't work in bridge, but I could use a heuristic to limit the number of plays considered, and could reduce the number of possible sets of hands considered (for hands that aren't seen). The algorithm doesn't exist yet. My gigabyte estimate comes from the memory requirements of a highly optimized double dummy solver, which is solving a much easier problem, albeit exactly.

Edit: this morning I actually implemented my first estimator of the score resulting from a given holding, which is at the extreme of sloppy: it only considers one random play for five random possible hand distributions...
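
In outline it's something like this (the names and the use of the `rand` crate are illustrative, not my exact code):

```rust
use rand::prelude::*;

struct Holding; // the cards we can actually see
struct Deal;    // one possible assignment of the unseen cards

fn random_deal(_seen: &Holding, _rng: &mut impl Rng) -> Deal {
    todo!("deal the unseen cards uniformly at random")
}

fn play_out_randomly(_deal: &Deal, _rng: &mut impl Rng) -> i32 {
    todo!("follow one random legal line to the end and score it")
}

/// Average the score over a few sampled deals, one random line each.
fn estimate_score(seen: &Holding, samples: usize) -> f64 {
    let mut rng = thread_rng();
    let total: i32 = (0..samples)
        .map(|_| {
            let deal = random_deal(seen, &mut rng);
            play_out_randomly(&deal, &mut rng)
        })
        .sum();
    total as f64 / samples as f64
}
```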