

Bot’s Eye Views


  by Richard Pavlicek

From February 2001 to August 2006, I tested some of the well-known bridge computer programs on my monthly Bidding Polls and Play Contests. Results were reported in a “Bot’s Eye View,” not only to see how they scored against each other but to see if their skills had approached human levels. Not! People are safe for now. This page contains all 66 reports: 33 on bidding, 33 on card play.


Each test report consisted of six problems, each scored on a 1-to-10 scale, so a perfect score was 60; the highest ever achieved by a bot was 56. Most of the programs had skill-level settings (thinking time), which I set to the highest level that stayed within 30 seconds per call or 60 seconds per play — pretty generous, as such a pace would be far too slow for tournament play. Thinking time also breaks ties: if two bots have the same score, the faster one gets the edge.
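
The scoring and tiebreak scheme just described can be sketched in a few lines. This is a minimal illustration only; the function name, data layout and sample numbers are my own, not part of the original reports.

```python
# Sketch of the report scoring: six awards (1-10 each, max 60 total),
# with ties broken in favor of the faster thinker.
def rank_bots(results):
    """results: list of (name, [six awards], thinking_seconds).
    Returns (name, total, seconds) tuples, best first."""
    scored = [(name, sum(awards), secs) for name, awards, secs in results]
    # Sort by total descending, then by thinking time ascending.
    return sorted(scored, key=lambda r: (-r[1], r[2]))

# Hypothetical bots: A and B tie on 56, but B thought faster.
ranked = rank_bots([
    ("Bot A", [10, 9, 8, 10, 9, 10], 120.0),
    ("Bot B", [10, 9, 8, 10, 9, 10], 95.0),
    ("Bot C", [5, 6, 7, 5, 6, 5], 30.0),
])
```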

Participants (alphabetical)
Program | Author/Creator
Blue Chip Bridge | Ian Trackman and Mike Whittaker
Bridge Baron | Tom Throop and Stephen Smith
Bridge Buff | Doug Bennion
Finesse Bridge (defunct) | Mark and Aaron Marin
GIB | Matthew Ginsberg
HAL | Someone with a warped mind
Jack | Hans Kuijf
Micro Bridge | Tomio and Yumiko Uchida
Q-plus Bridge | Hans Leber

HAL 9000 is a fake bot included for amusement — call it payback for its misdeeds in 2001: A Space Odyssey. HAL’s scores were appreciated, not only for staking a perpetual claim on last place but for lowering the average bot score, so the true bots were mostly above average.


Bidding Polls

Following are the 33 Bidding Polls on which bots were tested. Click on the table title to see the actual bidding problems.

March 2001 Bidding Poll
294 humans avg 47.39
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 55 | US | Bridge Baron 11.0 | D | 2 D | 2 H | 5 D | 3 D | 3 C
2 | 51 | DE | Q-plus Bridge 6.1 | 4 C | 2 D | 2 H | 4 H | 3 D | 3 C
3 | 50 | UK | Blue Chip Bridge 3.4.0 | 4 C | 2 D | 2 H | 5 D | 3 D | 3 C
4 | 48 | JP | Micro Bridge 9.01 | P | 2 D | 2 H | 5 D | 3 D | 2 NT
5 | 47 | US | GIB 4.1.2 | 4 C | 2 D | 2 H | 6 D | 4 D | 2 S
6 | 42 | CA | Bridge Buff 8.0 | 4 C | 2 D | 2 H | 4 NT | P | 2 NT
7 | 34 | US | Finesse Bridge 2.5 | P | 2 D | 2 S | 5 S | P | 3 C
8 | 15 | US | HAL 9000 | 4 C | P | 2 S | 5 H | 2 NT | 3 NT

Bridge Baron topped all the bots with an excellent score of 55. Second place went to Q-plus Bridge with 51, and third went to Blue Chip Bridge with 50. The scores and rankings, based on six specific problems, are not necessarily indicative of each program’s overall capability. The only thing certain is that anyone who managed to equal HAL’s score should be locked up, with the key thrown away, before ever touching a deck of cards again.

May 2001 Bidding Poll
519 humans avg 47.34
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 48 | US | Bridge Baron 11.0 | 3 NT | 4 H | 5 C | 3 S | 3 H | D
2 | 45 | JP | Micro Bridge 9.01 | 3 NT | 5 H | D | P | 3 S | D
3 | 45 | DE | Q-plus Bridge 6.1 | 3 NT | 4 H | D | 2 S | 3 NT | D
4 | 43 | US | Finesse Bridge 2.5 | 3 NT | 4 H | P | 2 S | 4 D | D
5 | 37 | US | GIB 4.1.2 | 3 NT | 6 H | 4 C | 2 S | 3 S | D
6 | 34 | CA | Bridge Buff 8.0 | 3 NT | 4 H | 5 C | 2 S | 3 NT | P
7 | 11 | US | HAL 9000 | 5 D | 4 NT | 4 S | 3 D | 4 H | P

Congratulations to Bridge Baron, which topped all the bots on a tough set of problems. This was helped by “staying on the charts,” as most of the other programs had an errant answer, scoring zero: GIB bid 6 H on Problem 2; Finesse Bridge passed on Problem 3; Micro Bridge passed on Problem 4; and Q-plus Bridge and Bridge Buff bid 3 NT on Problem 5. HAL, of course, did not suffer from these occasional aberrations (its aberrations were quite steady).

July 2001 Bidding Poll
583 humans avg 47.93
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 46 | US | Finesse Bridge 2.5 | P | 3 NT | 4 D | P | 3 D | 3 H
2 | 43 | UK | Blue Chip Bridge 3.4.3 | P | P | 3 D | 2 D | 3 H | 3 H
3 | 42 | US | GIB 4.1.2 | P | 2 NT | 3 C | 2 S | 3 D | 2 NT
4 | 41 | JP | Micro Bridge 9.01 | P | 2 S | 3 D | 2 D | 3 NT | 3 H
5 | 41 | DE | Q-plus Bridge 6.1 | P | 2 NT | P | 2 H | 3 NT | 3 H
6 | 34 | CA | Bridge Buff 8.0 | P | 2 D | 3 D | 2 D | 3 NT | P
7 | 32 | US | Bridge Baron 11.0 | P | 2 D | 2 NT | 3 D | 3 H | 3 H
8 | 7 | US | HAL 9000 | 2 NT | 3 D | 2 NT | 3 NT | 3 H | P

Congrats to Finesse Bridge! Curiously, this month’s bot champ is the only free program of the lot. (HAL is actually better than free, as its latest marketing strategy is to pay you to take it.) Bridge Baron, the winner of my last two bidding polls, had an off month; or at least the problems were not to its liking. The fact that none of the bots broke average suggests that human superiority will be around for a while.

Problem 2 created a predicament because only one program (Q-plus Bridge) had the systemic option to allow (and hence, to understand) a natural, limited 2 C opening (a la Precision). Rather than skip the problem, it seemed like more fun to let the other programs wing it; so I fed them the same auction as if 2 C were strong and artificial. Micro Bridge apparently took the double as takeout for the majors (else why 2 S?) and most of the others as some kind of takeout (only Blue Chip Bridge left it in). Obviously, this taints the comparisons; but as I see it, my polls don’t allow people to abstain, so why should the bots be any different?

September 2001 Bidding Poll
577 humans avg 47.64
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 41 | CA | Bridge Buff 8.0 | 4 S | 4 H | 4 H | 2 H | 4 NT | 2 H
2 | 40 | UK | Blue Chip Bridge 3.4.3 | 3 S | 3 H | 4 H | 2 H | 4 NT | 4 H
3 | 39 | US | GIB 4.1.12 | 2 NT | 4 H | 4 S | 2 H | 4 C | 2 S
4 | 37 | US | Bridge Baron 11.0 | 3 S | 2 H | 4 H | 1 NT | 4 NT | 2 S
5 | 37 | DE | Q-plus Bridge 6.1 | 2 H | 3 H | 4 H | D | 6 S | 2 S
6 | 35 | JP | Micro Bridge 9.01 | 2 S | 2 H | 4 H | 2 H | 4 NT | D
7 | 34 | US | Finesse Bridge 2.5 | 4 S | 2 H | 3 S | 2 H | 6 S | 2 H
8 | 14 | US | HAL 9000 | 2 NT | P | 4 NT | 1 NT | 5 S | P

Congratulations to Bridge Buff, which eked out a narrow win, aided somewhat by my scoring generosity. On Problem 1 its off-the-chart 4 S bid actually deserved zero, but my policy is not to award less than the worst option listed since the poll was multiple choice. (Obviously, if the bots understood this they would choose one of the listed calls.) Bridge Buff is certainly on a roll, as it also won my play contest last month.

The bot results in general were poor compared to my previous bidding polls; not even one came close to average. Evidently, these were tough problems. At least HAL was consistent.

November 2001 Bidding Poll
714 humans avg 46.28
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 46 | JP | Micro Bridge 9.01 | D | 3 D | P | 4 H | P | 4 C
2 | 45 | CA | Bridge Buff 8.0 | D | 3 D | 5 S | 4 H | P | 4 C
3 | 45 | US | GIB 4.1.12 | D | 3 D | 5 S | 4 H | 4 H | P
4 | 44 | UK | Blue Chip Bridge 3.4.3 | D | 3 D | P | 4 C | 4 H | P
5 | 41 | DE | Q-plus Bridge 6.1 | 5 H | 3 D | P | 2 C | P | P
6 | 39 | US | Finesse Bridge 2.5 | D | 2 S | P | 4 H | 3 H | P
7 | 37 | US | Bridge Baron 11.0 | D | 3 D | P | 4 H | 3 H | 5 C
8 | 11 | US | HAL 9000 | P | D | 5 NT | 3 NT | 2 H | 4 H

Congratulations to Micro Bridge, which eked out a narrow win with a score of 46. Tied for second were Bridge Buff and GIB with 45. All the bot scores (well, except the bogus HAL) were closely bunched, and it is significant to note they were all below average. Chalk one up for the human race.

In a few cases, a bot’s choice was off the chart. While this might deserve a score of zero, my policy is to award the equivalent of the worst option listed. Obviously, if the bot understood the multiple-choice format, it could not score worse than that — although HAL certainly makes a good effort each month.
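
The off-the-chart policy stated above amounts to a simple lookup with a floor. A minimal sketch follows; the function name and the sample awards are my own invention, not taken from the actual poll.

```python
def award_for_call(call, listed_awards):
    """listed_awards: dict mapping each listed call to its 1-10 award.
    A call that went "off the chart" (not listed) scores the same as
    the worst listed option, per the column's stated policy."""
    return listed_awards.get(call, min(listed_awards.values()))

# Hypothetical problem: three listed choices worth 10, 7 and 2 points.
awards = {"4 H": 10, "3 H": 7, "Pass": 2}
```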

January 2002 Bidding Poll
767 humans avg 46.93
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 44 | JP | Micro Bridge 9.01 | 4 S | P | 3 D | 5 C | 4 S | D
2 | 43 | US | GIB 4.1.12 | 3 S | P | 2 NT | 3 H | 2 NT | D
3 | 41 | CA | Bridge Buff 8.0 | 2 D | 5 C | 3 NT | 4 C | 3 S | 4 D
4 | 41 | US | Finesse Bridge 2.5 | 3 S | 5 C | 3 D | 5 C | 2 D | 4 D
5 | 39 | US | Bridge Baron 11.0 | 3 S | P | 3 D | 5 C | 3 S | 4 D
6 | 35 | DE | Q-plus Bridge 6.1 | 4 S | 5 C | 2 H | 5 C | 3 C | P
7 | 33 | UK | Blue Chip Bridge 3.4.3 | 4 S | 5 C | 3 D | 5 C | 2 C | P
8 | 9 | US | HAL 9000 | 2 S | 3 NT | 4 C | 3 S | 2 H | 3 S

Congratulations to Micro Bridge, which eked out a narrow win with a score of 44. Actually, all the bot scores (well, except for you know who) were closely bunched, but it is significant to note they were all below the average human score. Good. Let’s keep those tin cans in their place.

In a few cases, namely the 2 C and 3 C bids on Problem 5, the bot’s actual choice was not listed as an option. Were these bids an aberration? Or did they try to spring some special convention out of the blue? Who knows. In any event, my policy is to score these the same as the worst option listed since the bot could never do worse than that if it understood the multiple-choice format.

March 2002 Bidding Poll
813 humans avg 47.33
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 49 | CA | Bridge Buff 8.0 | P | 3 NT | 3 NT | 4 D | 1 H | C
2 | 46 | US | GIB 4.1.12 | 1 NT | 3 NT | 4 S | 5 D | 1 H | C
3 | 41 | UK | Blue Chip Bridge 3.4.3 | P | 3 H | 5 C | P | 1 C | C
4 | 41 | DE | Q-plus Bridge 6.1 | P | 3 C | P | P | 1 C | D
5 | 39 | JP | Micro Bridge 9.01 | 2 H | P | 5 C | 4 D | P | A
6 | 35 | US | Finesse Bridge 2.5 | P | 3 NT | P | P | 1 C | C
7 | 33 | US | Bridge Baron 11.0 | D | P | P | 5 D | 1 H | A
8 | 13 | US | HAL 9000 | 2 C | 2 S | P | 3 NT | 5 C | E

Congratulations to Bridge Buff, not only for topping the bots but also for being the only bot to beat the average human score. Bridge Buff also deserves recognition as the only bot to have won both a bidding poll and a play contest since I began doing this about a year ago. Overall, the bot scores were mediocre this month, which attests to the difficulty of the problems.

On Problem 6, only one bot (Q-plus Bridge) understood the natural 2 C opening (11-15). Kudos to creator Hans Leber (Germany) for his versatile programming, allowing the user to play or defend against a wide variety of systems. Most of the other bots passed the 2 C opening expecting it to be strong. Rather than junk the problem, I reposed it with a 1 C opening (followed by a 3 C raise). While certainly not the same, it was close enough for the tin cans, as they generally found reasonable actions.

It was also interesting to note that, aside from the incompatibility on Problem 6, none of the bots drifted off the charts this month. Usually there are at least a few aberrations, forcing me to add extra calls to my scoring program. Even HAL was impressive: Imagine scoring 13 when you can’t even count to 13.

May 2002 Bidding Poll
844 humans avg 46.42
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 51 | UK | Blue Chip Bridge 3.4.3 | 4 NT | 2 H | 3 NT | 2 S | 2 S | A
2 | 46 | US | GIB 4.1.12 | 7 H | 2 H | 3 NT | 4 S | 2 S | A
3 | 42 | DE | Q-plus Bridge 6.1 | 4 S | 2 H | 5 C | 2 S | 2 C | C
4 | 39 | JP | Micro Bridge 9.01 | 4 NT | 3 H | 3 NT | 1 S | 4 H | A
5 | 31 | CA | Bridge Buff 8.0 | 4 NT | 3 H | 5 C | 1 S | 4 H | G
6 | 29 | US | Finesse Bridge 2.5 | 5 C | 2 H | 3 S | 1 S | 3 H | A
7 | 26 | US | Bridge Baron 11.0 | 4 NT | 3 H | 4 NT | 1 S | 4 H | G
8 | 10 | US | HAL 9000 | 5 NT | 4 H | P | P | 4 H | D

Congratulations to Blue Chip Bridge, which not only topped all the bots this month but was the only bot to beat the average human score. Well done for a difficult set of problems. Of course, I also must congratulate HAL for its month-to-month consistency — some things in life are uncertain; HAL isn’t one of them.

As usual, several of the bot calls went off the charts, and a few were even amusing. On Problem 1, GIB went for the gusto with a jump to 7 H, and Q-plus and Finesse Bridge found strange bids of 4 S and 5 C, respectively. On Problem 3, Bridge Baron evidently had an aberration using Blackwood, or maybe it was just mad because I forced it to bid 2 H with A-K-J. On Problem 5, Finesse Bridge managed only a feeble raise to 3 H, though I suppose one could argue it has tactical merit.

Problem 6 also had its rebels (indicated as G). Bridge Baron chose the wimpy route, overcalling 1 D and then bidding only 2 D. Bridge Buff took the other extreme, doubling and then bidding 5 D. Talk about a difference in evaluation! While these and other aberrant choices might deserve zero, my policy is to score them equal to the lowest award among the choices offered because the bots are not programmed for the multiple choice format.

July 2002 Bidding Poll
875 humans avg 46.94
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 50 | UK | Blue Chip Bridge 3.4.3 | 3 C | 4 S | 4 H | P | D | 3 D
2 | 50 | JP | Micro Bridge 9.01 | 2 NT | 5 S | 4 H | 3 D | D | 3 D
3 | 49 | US | GIB 4.1.12 | 3 C | 4 S | 4 H | P | D | 2 D
4 | 47 | CA | Bridge Buff 8.0 | 3 NT | 4 S | 4 H | P | D | 2 D
5 | 45 | US | Bridge Baron 11.0 | 3 C | P | 3 H | P | D | 3 D
6 | 45 | DE | Q-plus Bridge 6.1 | 3 NT | 4 S | 3 H | 2 NT | D | 2 D
7 | 31 | US | Finesse Bridge 2.5 | 3 NT | 4 S | P | 4 D | D | 5 D
8 | 10 | US | HAL 9000 | 5 C | 6 S | P | 4 D | 5 NT | 2 S

Congratulations to Blue Chip Bridge and Micro Bridge, which tied with 50 in a photo finish, with GIB just one point behind. Since the date and time of submission are meaningless for the bots, I broke the tie by consistency (i.e., highest worst score), which gave Blue Chip the win. The aforementioned bots, as well as Bridge Buff, all topped the average human score. Even HAL should be commended on hitting double figures; in fact, its August advertising campaign now cites this poll in its claim to be the “Perfect 10” in bridge computers.
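
The “highest worst score” tiebreak lends itself to a one-line comparison. This is a minimal sketch; the function name and the sample award lists are hypothetical, not the bots’ actual per-problem awards.

```python
def more_consistent(a, b):
    """Given two award lists with equal totals, return the one whose
    worst single award is higher (the more consistent performance)."""
    return a if min(a) > min(b) else b

# Hypothetical tie: both total 50, but the second never dips below 8.
streaky = [10, 10, 10, 10, 8, 2]
steady = [9, 9, 8, 8, 8, 8]
winner = more_consistent(streaky, steady)
```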

On Problem 4, none of the bots were capable of understanding the Roman 2 H opening (5+ hearts, 4+ clubs); but rather than skip the problem, I let them assume it was a weak two-bid. This seemed harmless, as the situations are analogous. Besides, none of the bots had an abstain button, so my directions were simple: Shut up and bid!

The bots did extremely well in staying on the charts this month. The only wayward call came from Finesse Bridge, which was really feeling its oats on Problem 6, jumping to 5 D. For scoring purposes, unlisted calls receive the same award as the lowest of the listed calls, since the bots are not programmed for the multiple-choice format.

September 2002 Bidding Poll
900 humans avg 49.37
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 53 | US | GIB 6.1.0 | 4 S | 3 C | 3 NT | 4 D | 3 S | F
2 | 49 | US | Bridge Baron 11.0 | 5 D | 3 C | 3 NT | 4 D | 3 H | F
3 | 48 | UK | Blue Chip Bridge 4.0.0 | P | 3 C | 3 NT | 4 D | 4 C | F
4 | 40 | US | Finesse Bridge 2.5 | P | P | 3 NT | 4 D | 3 H | B
5 | 36 | DE | Q-plus Bridge 6.1 | P | P | 3 NT | 4 D | 4 H | F
6 | 32 | CA | Bridge Buff 8.0 | P | P | 3 NT | 4 C | 3 NT | F
7 | 32 | JP | Micro Bridge 9.01 | P | P | 3 NT | P | 3 H | B
8 | 10 | US | HAL 9000 | P | 2 C | 4 NT | P | 4 NT | C

Congratulations to GIB, which topped the bots with a fine score of 53, and was the only bot to beat the average human score. On Problem 1, it was especially enlightening to see GIB come up with 4 S (slam try after opening a 10-point hand), the kind of bid that requires vision beyond that of a typical bidding database. GIB also continued on to reach the laydown slam. Bridge Baron was the only other program to make a move over 4 H; alas, it stopped in 5 H. HAL tried to steal the slam by passing 4 H and then adding 120 to a 60 partscore. Sorry, HAL; it doesn’t work that way. HAL didn’t take the setback lightly, and I nearly got electrocuted in the process.

On Problem 5, Bridge Buff had no option but to interpret 2 NT as Jacoby (spade raise) so its rebid was always 3 D to show a singleton. To be fair, I tried switching the spades and diamonds to create a 1 D opening, and gave North a similar 14-count. It now responded 2 NT naturally; alas, South raised to 3 NT, missing the 33-HCP slam.

Q-plus Bridge also had a hang-up on Problem 5, insisting on jumping to 4 H over the natural 2 NT response. Out of curiosity where this might lead, I let it continue: North bid 4 NT (apparently reading 4 H as a mountain); South answered 5 H (must be two aces); but that was the end as North passed. OK, so there’s room for improvement. I’ll bet you missed a slam once, too.

November 2002 Bidding Poll
939 humans avg 46.74
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 51 | UK | Blue Chip Bridge 4.0.1 | 3 D | P | 2 D | P | 2 NT | B
2 | 47 | US | GIB 6.1.0 | 3 D | 4 C | D | P | P | D
3 | 46 | US | Bridge Baron 11.0 | 5 D | 4 D | 2 D | P | 3 H | C
4 | 46 | DE | Q-plus Bridge 6.1 | 2 NT | D | 2 D | P | 2 NT | D
5 | 43 | JP | Micro Bridge 9.01 | 3 D | 4 D | 3 D | 3 S | 2 S | B
6 | 41 | CA | Bridge Buff 8.0 | 3 D | 4 D | 2 D | P | 4 H | B
7 | 27 | US | Finesse Bridge 2.5 | P | D | 2 S | 3 C | 3 H | G
8 | 8 | US | HAL 9000 | 2 NT | 5 D | 2 S | 3 H | 2 S | A

Congratulations to Blue Chip Bridge, which topped the bots with an excellent score of 51. GIB was the only other bot to beat the average human score.

Problem 2 posed a presentation difficulty because of the weak jump raise to 3 D. Even the programs that allowed inverted minor raises as an option did not allow them in competition. Therefore, I devised a workaround: I had North respond 2 D (weak raise) and South rebid 2 H; then I made West jump to 3 S, which is passed around. While not identical, the sequence is practically the same — and certainly close enough for tin cans (hehe).

Problem 3 was also slightly flawed because some of the bots had no option to play support doubles. Therefore, the double wasn’t even a possibility; hence, there was no opportunity to score 10. Shall we all shed a tear for the poor little bots? HAL insisted on playing “inverted support doubles,” so a double would show four trumps, and its chosen raise to 2 S promised three.

As usual, some of the choices went off the charts: On Problem 1, Finesse Bridge elected to pass. A shrewd tactical move? No, I don’t think so; more likely a programming deficiency. On Problem 5, Bridge Buff jumped all the way to 4 H (a gross overbid). On Problem 6, Finesse Bridge chose to pass and respond 2 H to the 1 S opening — weird. For scoring purposes, errant choices are given the same award as the lowest listed call.

January 2003 Bidding Poll
1035 humans avg 47.05
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 49 | JP | Micro Bridge 9.01 | 2 NT | 4 D | D | 4 D | 3 S | B
2 | 48 | CA | Bridge Buff 8.0 | 2 NT | P | D | 3 NT | D | B
3 | 47 | NL | Jack 2.0 | 4 S | 4 D | D | 3 NT | P | B
4 | 44 | UK | Blue Chip Bridge 4.0.1 | 3 NT | P | D | 5 D | 2 S | B
5 | 43 | US | GIB 6.1.3 | 3 C | P | D | 4 D | 2 S | B
6 | 38 | US | Bridge Baron 11.0 | 3 C | P | D | 3 NT | 2 NT | B
7 | 38 | DE | Q-plus Bridge 7.1 | 3 NT | 4 D | D | 3 H | 3 D | G
8 | 14 | US | HAL 9000 | 3 S | 4 C | 2 NT | 6 D | 4 S | D

Congratulations to Micro Bridge, which eked out a narrow win with 49 on this tough set of problems. Bridge Buff scored 48 to be the only other bot to beat the average human score.

Problem 2 created a presentation difficulty because of the Roman 2 H opening. None of the bots had this convention in their data banks, so I let them all assume a normal weak two-bid. For practical purposes, the sequences are analogous, and this ensured a level playing field for testing.

Problem 2 also brought out a common weakness among computer bidding programs: the tendency to undervalue distributional hands. The great majority of humans realized the potential for game (3 NT or 5 D), but the bots did not, and some even passed. HAL, of course, was the exception as it made a slam try with 4 C. HAL called this “Gerbil” but gave no explanation other than something about mouse pads and rodents. Lost me.

The bots behaved quite well this month in staying on the charts. The only aberration came from Q-plus Bridge on Problem 6, which strangely interpreted the double as takeout and jumped to 4 C. Wow. Would partner fall out of his chair, or what? Hence, it gets the infamous Choice G. For scoring purposes, errant choices get the same as the lowest listed award.

March 2003 Bidding Poll
999 humans avg 45.27
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 54 | UK | Blue Chip Bridge 4.0.5 | 2 NT | P | 3 C | 5 H | 3 H | C
2 | 52 | CA | Bridge Buff 8.0 | 3 NT | P | 2 NT | 6 H | 4 H | C
3 | 48 | US | GIB 6.1.3 | 4 S | 4 NT | 3 C | P | 3 H | B
4 | 47 | JP | Micro Bridge 9.01 | 3 S | 6 C | 3 NT | 5 H | 4 H | C
5 | 45 | NL | Jack 2.0 | 3 S | P | 3 NT | D | 3 D | B
6 | 42 | DE | Q-plus Bridge 7.1 | 2 NT | 4 C | 2 S | 5 S | 3 D | C
7 | 37 | US | Bridge Baron 11.0 | P | P | 1 NT | P | 2 NT | F
8 | 8 | US | HAL 9000 | 3 C | 6 C | 2 C | D | 2 S | D

Congratulations to Blue Chip Bridge, which kept up its fine bidding record with an outstanding 54. Not too far behind was Bridge Buff with 52. Two other bots — GIB and Micro Bridge — managed to beat the average human score.

Problem 4 was the most interesting, as two of the bots (GIB and Bridge Baron) found the correct forcing pass. I was curious whether this was really a clever maneuver or just blind luck, so I followed it up. The GIB North jumped to 6 D (reasonable) and South bid 6 H as intended all along. Would North bid seven? No, it passed after some deliberation — still, a good show. The Bridge Baron North chose to double 5 C (a strange view with 1=4=7=1 shape) and South passed to defend 5 C doubled — not such a good show.

The bots behaved fairly well this month, going off the charts only twice: Bridge Baron with 1 NT on Problem 3, and Q-Plus Bridge with 5 S on Problem 4. Perhaps it was just my lack of imagination not to include these calls. For scoring purposes, errant choices get the same award as the lowest listed choice.

May 2003 Bidding Poll
1036 humans avg 44.62
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 48 | US | Bridge Baron 11.0 | D | P | 5 D | 3 C | 4 NT | E
2 | 46 | JP | Micro Bridge 10.01 | 5 D | 2 S | 5 D | 2 NT | 4 NT | E
3 | 44 | UK | Blue Chip Bridge 4.0.6 | 4 S | 2 S | 5 D | 2 NT | 6 NT | E
4 | 40 | CA | Bridge Buff 8.0 | 5 D | P | 5 D | 3 C | 3 NT | A
5 | 38 | DE | Q-plus Bridge 7.1 | D | 2 NT | 5 D | 3 C | 4 C | E
6 | 37 | US | GIB 6.1.3 | 5 H | D | 6 D | D | 3 NT | A
7 | 35 | NL | Jack 2.0 | 4 S | 2 S | D | D | 3 NT | E
8 | 6 | US | HAL 9000 | 6 D | 3 NT | P | P | 3 NT | G

Congratulations to Bridge Baron, which surged to the fore once again after a long dry spell. The problems were exceptionally tough this month, and Bridge Baron’s winning score of 48 is quite respectable. The only other bot to beat the average human score was Micro Bridge, which grabbed second place with 46. The bots also behaved well, staying on the charts with all their answers.

Problem 1 proved to be the most challenging of the set, as only GIB came up with a reasonable call (5 H cue-bid). All the other bots chose a unilateral suit bid or, even worse, doubled the 4 H opening (and played it there). Bidding over preempts is difficult enough for humans, so I guess it’s no surprise that bots are stumbling, too.

This month I implemented a new award for consistency, and the winner is HAL. It might seem difficult to score exactly 1 point on each problem, but for HAL it is effortless. Despite the lowest total score ever of 6, HAL is a true mainstay for the game of bridge. Translation: If your “main” pastime is bridge, “stay” away from HAL!

July 2003 Bidding Poll
1083 humans avg 46.21
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 50 | DE | Q-plus Bridge 7.1 | 3 C | D | 2 NT | 3 NT | 1 NT | B
2 | 47 | JP | Micro Bridge 10.01 | 2 C | 4 D | 2 NT | 3 NT | 2 H | F
3 | 46 | US | Bridge Baron 11.0 | 3 C | 4 D | 2 NT | 3 NT | 2 D | I
4 | 42 | UK | Blue Chip Bridge 4.0.6 | 2 C | P | 2 NT | 3 NT | P | H
5 | 42 | NL | Jack 2.0 | 5 C | 5 D | 3 H | 3 NT | 1 NT | B
6 | 42 | US | GIB 6.1.3 | P | 4 D | 1 H | 3 C | 1 NT | A
7 | 39 | CA | Bridge Buff 8.0 | 5 C | 5 D | 2 H | 3 C | 2 S | C
8 | 10 | US | HAL 9000 | P | 3 NT | 2 C | 4 C | 2 H | G

Congratulations to Q-plus Bridge, which topped the competition this month with an excellent score of 50. The only other bot to beat the average human score was Micro Bridge, taking second place with 47. Bridge Baron came a close third with 46.

Wild distributions proved to be a stumbling block for several of the programs. GIB’s peculiar pass on Problem 1 (with nine clubs) would seem like a tricky tactical maneuver if chosen by a human; but by a bot it suggests a flaw in its bidding database or hand evaluation. Similarly, on Problem 6, Blue Chip Bridge overcalled 2 C (with eight clubs) but then chose to pass on the next round (indicated as Choice H) rather than compete. Bridge Baron instead chose a bizarre Michaels cue-bid (indicated as Choice I) with 8-4 shape.

For scoring purposes, unlisted choices get the same award as the lowest listed choice. While some errant choices may indeed deserve zero, this wouldn’t be fair because there is no way to instruct the bots to my multiple-choice format. Ties in the bot rankings are broken by thinking time, with the advantage going to the bot that was faster.

Problem 3 might be considered unfair, as two programs (Jack and Bridge Buff) did not have the unusual 2 NT overcall available as a bidding convention; hence, they had no chance to receive the top award. Nonetheless, this doesn’t get my sympathy or any scoring adjustment because the convention is almost universally accepted. (I suspect it will be included in future versions.) Curiously, GIB preferred to overcall 1 H with 6-6 shape despite the availability of the unusual 2 NT. HAL, of course, preferred to bid clubs first, saving the heart suit for what it called a “high-level perverse.”

September 2003 Bidding Poll
1117 humans avg 47.31
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 54 | US | Bridge Baron 11.0 | 5 D | 4 C | 3 D | P | 3 NT | D
2 | 49 | JP | Micro Bridge 10.01 | 4 D | 4 C | 3 D | 4 NT | 3 NT | D
3 | 49 | DE | Q-plus Bridge 7.1 | 4 D | 4 C | 3 D | 4 NT | 3 NT | D
4 | 48 | US | GIB 6.1.3 | 3 H | 4 C | 4 D | P | 3 NT | C
5 | 46 | CA | Bridge Buff 8.0 | 3 NT | 4 C | 3 D | P | 3 NT | D
6 | 40 | UK | Blue Chip Bridge 4.0.7 | 4 D | P | 3 D | 4 NT | P | A
7 | 36 | NL | Jack 2.0 | 4 S | 4 C | 3 D | 5 C | 3 NT | B
8 | 11 | US | HAL 9000 | 3 NT | 3 NT | 5 D | 4 NT | 4 H | B

Congratulations to Bridge Baron, which topped the competition this month with a superb score of 54. Micro Bridge and Q-plus Bridge were next with 49 (including identical answers to each problem). The only other bot to beat the average human score was GIB with 48. Bridge Baron also surged into the overall lead by virtue of this outing.

Previous overall champ Blue Chip Bridge had an off month, largely due to a disaster on Problem 2. After the 3 D cue-bid was doubled and passed around, it chose to pass. Ouch, with a known 10-card club fit. In fairness, when I forced it to bid 3 D, the program announced, “This bid is not understood”; but even so, it should be understood, or at least the program should deduce not to play in 3 D doubled. Perhaps it’s just a database glitch that is easily fixed.

For scoring purposes, unlisted choices get the same award as the lowest listed choice. While some errant choices may indeed deserve zero, this wouldn’t be fair because there is no way to instruct the bots to my multiple-choice format. Ties in the bot rankings are broken by thinking time, with the advantage going to the bot that was faster.

Problem 6 posed an interesting predicament: How to determine which call a bot disliked most. Well, I couldn’t care less about android opinions, so I just had each bot bid the South hand. The first call made differently from the problem auction decided the issue. Only Blue Chip Bridge replicated all four calls. Most bots went astray at 3 D, preferring a simple 3 C instead — although Micro Bridge was feeling its oats with a jump to 4 C. Jack took a strange view, electing to open the bidding 3 C. HAL caused the most trouble as it refused to make any bid, claiming it “disliked me most.” Well, that was simple to fix: I just mentioned another trip to the piranha tank, and it bid up a storm.

November 2003 Bidding Poll
1149 humans avg 46.18
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 49 | JP | Micro Bridge 10.01 | 2 S | 5 D | P | 3 D | 7 | C
2 | 47 | NL | Jack 2.0 | 2 H | 4 S | P | 1 D | 7 | C
3 | 45 | US | Bridge Baron 11.0 | 2 H | 4 S | P | 3 D | 5 | G
4 | 44 | CA | Bridge Buff 8.0 | 2 S | P | P | 1 D | 5 | A
5 | 44 | DE | Q-plus Bridge 7.1 | 2 S | P | 3 D | 2 D | 8 | C
6 | 40 | US | GIB 6.1.3 | 2 S | P | 3 H | 3 D | 5 | I
7 | 39 | UK | Blue Chip Bridge 4.0.8 | 2 H | P | P | 1 D | 6 | G
8 | 11 | US | HAL 9000 | D | 5 S | 3 NT | 1 H | 5 | H

Congratulations to Micro Bridge, which topped the competition this month with a decent score of 49. The only other bot to beat the average human score was Jack with 47, though all the scores were bunched pretty close — except for HAL of course. The win also catapulted Micro Bridge to the top of the overall standings by a whisker over Bridge Baron.

The bots did well this month in staying on the chart, with only one exception: On Problem 6, GIB did not pass or double but chose to bid 5 D. Wow. This seems egregious but certainly solves the opening-lead problem. In fairness, I think GIB may have been misprogrammed to believe its opponents were two HAL 9000 machines, which would make 5 D laydown.

For scoring purposes, unlisted choices get the same award as the lowest listed choice. While some errant choices may indeed deserve zero, this wouldn’t be fair because there is no way to instruct the bots to my multiple-choice format. Ties in the bot rankings are broken by thinking time, with the advantage going to the bot that was faster.

Problem 5 posed a challenge of how to have bots choose the worst bid, so I set up a special little game just for the tin-heads. It was widely agreed that four bids were bad, so I gave each a point value based on the award scale. Each bot then scored 4 points if it bid 4 S instead of the ugly 4 D; 3 points if it bid 3 D instead of 3 C; 2 points if it passed 4 S instead of bidding 5 C (or anything else); and 1 point if it bid 1 S instead of 1 D. Thus, there were 10 points available for doing nothing bad, and each bot’s total is shown. Q-plus Bridge fared the best, succumbing only to the failure to pass 4 S.
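
The little game above reduces to four independent checks worth 1, 2, 3 and 4 points. Here is a minimal sketch of that scoring; the dictionary keys and the function name are my own labels for the four decision points, not terminology from the poll.

```python
def problem5_points(calls):
    """calls: the bot's choice at each of the four decision points.
    Points are earned only for avoiding the 'bad' bid at each turn:
    1 S over 1 D (1 pt), 3 D over 3 C (3 pts), 4 S over the ugly
    4 D (4 pts), and passing 4 S rather than bidding on (2 pts).
    Maximum is 10 for doing nothing bad."""
    pts = 0
    if calls["opening"] == "1 S":
        pts += 1
    if calls["second"] == "3 D":
        pts += 3
    if calls["third"] == "4 S":
        pts += 4
    if calls["fourth"] == "Pass":
        pts += 2
    return pts

# Q-plus Bridge's hypothetical encoding: it erred only by not passing 4 S.
qplus = {"opening": "1 S", "second": "3 D", "third": "4 S", "fourth": "5 C"}
```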

January 2004 Bidding Poll
1257 humans avg 48.24
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 52 | UK | Blue Chip Bridge 4.1.0 | 2 H | P | D | 5 D | P | B
2 | 46 | US | GIB 6.1.3 | 4 S | 4 NT | 3 D | 3 H | P | C
3 | 44 | NL | Jack 2.0 | 2 S | P | 3 D | 3 H | P | B
4 | 43 | JP | Micro Bridge 10.01 | 4 S | P | 3 D | 3 H | P | C
5 | 39 | CA | Bridge Buff 11.0 | 2 S | 4 NT | 3 H | 5 D | P | B
6 | 39 | DE | Q-plus Bridge 7.1 | 4 S | 4 NT | 3 D | 3 H | 4 C | E
7 | 38 | US | Bridge Baron 14.0 | 3 S | P | 4 H | 5 D | P | E
8 | 10 | US | HAL 9000 | 4 S | 5 NT | 5 H | P | 4 NT | E

Congratulations to Blue Chip Bridge, which is back in the winners’ circle with a fine score of 52, as well as the only bot to beat the average human score. Well done! GIB was a distant second with 46, followed by Jack (the current card-play champ) with 44. Despite a mediocre showing this month, Bridge Baron narrowly held its overall lead over Blue Chip Bridge and Micro Bridge.

The only errant call* this month occurred on Problem 1, where two bots (Jack and Bridge Buff) chose a paltry 2 S response. If an expert made such a bid, it might be described as a brilliant tactical move, but I doubt this was the case. More likely, the ostrich-like hand found a gap in their bidding databases or evaluation methods. I’m sure the glitch will be investigated and fixed in future versions.

*For scoring purposes, unlisted choices get the same award as the lowest listed choice. While some errant choices may indeed deserve zero, this wouldn’t be fair because there is no way to instruct the bots to my multiple-choice format. Ties in the bot rankings are broken by thinking time, with the advantage going to the bot that was faster.

On Problem 6, each bot that passed originally was given the hypothetical sequence, Pass 1 D Pass 1 S (opponents bidding). I was happy to see they all came back in the hunt, choosing either Michaels or an unusual notrump to show a two-suiter, or a jump to 3 C to show a one-suiter. The only bots to score poorly on Problem 6 were those that opened the bidding — ouch, they all chose 1 H! Even HAL bid 1 H, claiming that in its latest system (HAL 9000.71) this was canape, to be followed by a delayed preempt to 5 C. When I suggested this was crazy, HAL brushed it off, claiming that if 5 C is doubled, it would run back to hearts.

March 2004 Bidding Poll
1337 humans avg 45.28
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 52 | NL | Jack 2.03 | 3 NT | 4 C | 4 C | 3 NT | 6 C | B
2 | 45 | UK | Blue Chip Bridge 4.1.0 | 2 NT | P | 3 NT | 2 NT | D | B
3 | 40 | JP | Micro Bridge 10.02 | 2 H | P | 5 D | 3 NT | D | A
4 | 39 | US | Bridge Baron 14.0 | 3 NT | P | 5 D | 2 NT | P | B
5 | 37 | DE | Q-plus Bridge 7.1 | D | P | 5 D | 2 NT | D | F
6 | 36 | US | GIB 6.1.3 | 2 NT | D | 4 D | 2 NT | 6 D | H
7 | 34 | CA | Bridge Buff 11.0 | 3 NT | P | 4 C | 3 H | D | H
8 | 10 | US | HAL 9001 | D | 3 NT | 4 D | 4 H | P | E

Congratulations to Jack, winning easily this month with a fine score of 52, and also the only bot to beat the average human score. Blue Chip Bridge was second with 45. The win moved Jack into third place in the overall standings, closing in fast on Bridge Baron and Micro Bridge. Jack is also the current overall bot champ in my play contests.

The only errant call this month occurred on Problem 6, where GIB and Bridge Buff chose to use Stayman and pass when opener showed four hearts. Indeed, several human respondents also suggested this possibility. Usually, going off the chart in a bidding poll earns the same award as the worst listed option, but in this case it deserved better. I listed it below as Choice H and gave it 4.

HAL came out with a new version (9001) this month, claiming to be much improved — a “perfect 10” by its own account, which my tests seem to confirm as well. One of its new features is a mode called “auto-HAL,” which bids and plays hands in the blind. I no longer need to enter the cards! (Not that this ever mattered much anyway.)

May 2004 Bidding Poll
1265 humans avg 45.06
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 51 | US | Bridge Baron 14.0 | 2 H | 3 H | 4 H | 5 C | 3 NT | 1 H
2 | 46 | NL | Jack 2.03 | 2 H | 2 H | 4 H | P | 3 S | 2 C
3 | 42 | JP | Micro Bridge 10.02 | 2 H | 2 H | 4 H | P | 3 S | P
4 | 40 | DE | Q-plus Bridge 7.1 | 2 H | D | 2 S | 4 NT | 3 D | 2 C
5 | 34 | UK | Blue Chip Bridge 4.1.2 | 2 NT | 3 H | P | 5 C | 5 D | 4 C
6 | 34 | CA | Bridge Buff 11.0 | 3 C | 4 H | 4 H | P | 5 D | P
7 | 32 | US | GIB 6.1.3 | 2 C | 2 C | 4 H | 6 C | 6 D | 1 H
8 | 11 | US | HAL 9001 | 2 C | P | P | 6 C | 3 D | 3 C

Congratulations to Bridge Baron, which won rather convincingly this month with a fine score of 51. The only other bot to beat the average human score was Jack with 46. The win easily keeps Bridge Baron atop the overall standings, followed by Jack in second place.

There were two errant calls* this month. On Problem 4, Bridge Buff, Micro Bridge and Jack all passed partner’s 4 D bid, which is outlandish after both players have cue-bid the enemy suit, and diamonds were never bid. I’ve noticed this to be a common glitch among computer programs; once an auction graduates beyond their fixed rules, the tendency is to pass too often. Perhaps they need to be programmed with the familiar expert advice, “If you’re not sure what a bid means, don’t pass!” Alas, then you’ll need a way to stop the buggers from reaching 7 NT on every hand.

*For scoring purposes, unlisted choices get the same award as the worst listed choice. While some errant calls might indeed deserve zero, this wouldn’t be fair because there is no way to instruct the bots to the multiple-choice format.

The other errant call was on Problem 5, where GIB jumped all the way to 6 D. Wow. Has Ginsberg tweaked its card-play algorithms to new heights? No, I think it just thought it was playing against HAL, which has been reprogrammed in Cliche++ (similar to C++). Holding, say, S A-x-x-x H x-x-x D K-x C x-x-x-x, HAL would surely lead the D K, either to “cut down the ruffing power,” or just simply for, “when in doubt, lead trumps.” Easy slam.

July 2004 Bidding Poll
1185 humans avg 45.43
RankScoreCCProgram version123456
148DEQ-plus Bridge 7.1DP4 H5 C4 H3 NT
248USGIB 6.1.3D5 D3 H6 D4 NTN
345CABridge Buff 11.0PP3 H5 C4 HP
441JPMicro Bridge 10.02DP3 H4 NT4 HP
539USBridge Baron 14.0DP4 H4 NT4 HN
637NLJack 2.04DP3 HP4 NTN
733UKBlue Chip Bridge 4.2.01 NTP3 HP4 HN
824USHAL 9001PPPPPP

Congratulations to Q-plus Bridge, which scored a respectable 48 and won the tiebreaker (faster thinking time) over GIB with the same score. Q-plus Bridge and GIB were also the only bots to beat the average human score, as the bot scores were generally mediocre on this tough problem set.

Problem 2 proved to be the bane of the bots — excluding GIB — as they elected to pass partner’s double of 4 C. Some assumed the double was penalty (absurd by any standards) and this interpretation could not be changed by any program settings. No doubt this will inspire some of the programmers to amend their bidding-rule database for future versions.

There were only a few errant calls* this month. On Problem 4 (the hand with 6-6 in the red suits) Micro Bridge and Bridge Baron chose to bid 4 NT (Blackwood) — a choice I decided not to insult you with as an option. On Problem 5, GIB and Jack also chose 4 NT; but here it was quantitative (over 2 NT), and I awarded it 4 since it’s better than a few of the listed options. In retrospect, I probably should have included it.

*For scoring purposes, unlisted choices receive at least as much as the worst listed choice — it wouldn’t be fair to award less because bots are unaware of the multiple-choice format — and may receive more if merited.

HAL was perturbed this month after being interrogated by the police when my home was robbed. I tried to convince HAL this was just routine and it was not a suspect, but HAL proved otherwise by replaying a recorded conversation which began, “You have the right to remain silent…” HAL decided to exercise this right and pass on each problem — predictably, its best score ever.

September 2004 Bidding Poll
1410 humans avg 44.52
RankScoreCCProgram version123456
147DEQ-plus Bridge 7.13 NT2 S4 CDAH
243NLJack 2.043 NT3 S3 NTPAB
341USBridge Baron 14.03 NT3 S3 NTPIB
440USGIB 6.1.32 S2 S4 CPAD
536CABridge Buff 11.03 D2 S3 NTPJB
635JPMicro Bridge 10.023 C2 S3 NTPAD
734UKBlue Chip Bridge 4.2.02 NT2 S4 C4 CIB
810USHAL 90023 SPP5 DDC

Congratulations to Q-plus Bridge (Germany) which scored its second win in a row, topping the bots with a decent score of 47. Q-plus was also the only bot to beat the average human score, as mediocrity was the general theme on this difficult problem set.

Problem 6 could not be posed as stated because most bots were incapable of understanding 2 D Astro (spades plus another suit) and the invisible cue-bid of 2 S. Therefore, I gave them all a natural auction: 1 NT 2 SSS, which delivers essentially the same problem at South’s second turn. None of the bots agreed with 1 NT (well, except HAL, but it would agree with 8 NT).

The off-the-chart* calls this month were by Blue Chip Bridge, which chose a strange 4 C bid on Problem 4; and Bridge Buff, which passedD on Problem 5. On the same problem, Bridge Baron and Blue Chip Bridge also went off the chart with a 2 H preference; but this was my oversight (2 H should have been listed) and I awarded it 7.

*For scoring purposes, unlisted choices receive at least as much as the worst listed choice — it wouldn’t be fair to award less because bots are unaware of the multiple-choice format — and may receive more if merited.

HAL is back in form! I noticed the latest jingle on its web site begins, “HAL nine-thousand-two, is right for you! Try it again, for a Perfect 10!” and I must admit, there’s some truth in its advertising.

November 2004 Bidding Poll
1276 humans avg 45.20
RankScoreCCProgram version123456
147JPMicro Bridge 10.02PDP3 NT4 DA
246USBridge Baron 15.0P6 NT4 D4 H5 DA
344DEQ-plus Bridge 7.1PDP3 H4 DA
441CABridge Buff 11.03 SDP3 NT3 SG
541USGIB 6.1.3PDP3 NT3 NTA
634NLJack 2.04PDPP4 DG
729UKBlue Chip Bridge 4.2.03 SDPPPG
87USHAL 90024 S5 S4 SP3 NTC

Congratulations to Micro Bridge (Japan) which topped all bots in this challenging problem set. Micro Bridge scored a respectable 47, and it was the only bot to beat the average human score. Maybe we earthlings should make our move now while the bots are sleeping. Catch ’em by surprise! Then flatten the tin cans before they can sort their cards.

The bots were pretty well behaved, as the only call off the chart came on Problem 6. After passing the 6-5 red two-suiter, Bridge Buff, Jack and Blue Chip Bridge all chose to respond 1 NT to partner’s 1 S opening. Even playing 1 NT forcing or semiforcing (which they were not) this is bizarre by a passed hand; but it’s certainly better than the egregious reverse or 3 D jump shift, so I awarded it 3.

Problem 2 (coping with the 5 D preempt after partner opened 1 S) was slightly unfair, as there was no way I could convey the “old scoring” condition. Thus, it was no great surprise that almost all the bots doubled, and the award of 5 for the double would certainly be higher under today’s scoring. Oh well, so the bots got screwed; but at least it was equal among them. Only Bridge Baron bid 6 NT (the top choice). Was it lucky? Or did it think it was the Red Baron?

On Problem 2, I was curious if any of the bots would replicate the irritating 5 D preempt as East. (Even with today’s scoring, 5 D stands out with 8-4 shape, as any lower preempt is like tossing marshmallows.) Nope. Four bots came close, bidding 4 D; two bid 3 D; and one passed (name withheld to protect the wimp). Oh, and I almost forgot HAL, who bid seven diamonds and printed out on its ticker tape, “Eight-four, bar the door!” Nice work, HAL. I need more opponents like you.

January 2005 Bidding Poll
1450 humans avg 44.09
RankScoreCCProgram version123456
142USGIB 6.1.32 C6 HP2 H3 HA
241CABridge Buff 11.03 C6 SP2 H4 DE
332NLJack 2.042 C6 HPDPD
431JPMicro Bridge 11.002 C6 H3 DDPF
530USBridge Baron 15.03 CP1 DDPE
625UKBlue Chip Bridge 4.2.22 CP2 D3 CPB
722DEQ-plus Bridge 7.12 CP2 DDPE
88USHAL 90032 HP5 D3 SPG

Guess what, humans? We’re gaining ground. Or maybe the tin cans are just lying low, waiting to catch us by surprise. This proved to be a tough set, as it’s been a long time since no bot beat the average human score. A case of botulism, perhaps? Congratulations to GIB which topped the bot crew with 42, and Bridge Buff only a point behind at 41.

The bot troubles seemed mainly with judgment. On Problem 5, notice how many chose to pass after partner doubled and made a strong diamond rebid; only GIB was truly on the ball with 3 H. Similarly, on Problem 2, half the bots passed 5 S, which was an unthinkable act to most humans.

The support-double option on Problem 4 was slightly unfair, as the convention is unpopular in the United Kingdom and therefore not an option with Blue Chip Bridge. All the other bots had the easy standby (worth only 6) but poor Blue Chip went off the charts with 3 C — the only errant call this month. It is also curious that only GIB and Bridge Buff eschewed the support double to make the winning bid (2 H). I can’t be sure of the logic of the other support-doubling programs, but the choice to double with the actual hand suggests it may have been obligatory.

HAL debuted its newest version (9003) this month, which its company built especially for the Wild West show. Alas, it didn’t do much good, as HAL spewed out nothing but crappy bids — netting one of its lowest scores ever. I called the company president about this, and he was most apologetic. It seems one of the technicians replaced HAL’s silicon chips with cow chips.

March 2005 Bidding Poll
1447 humans avg 45.35
RankScoreCCProgram version123456
150JPMicro Bridge 11.003 H3 C2 D4 D4 NT4 NT
248NLJack 2.043 H3 C2 DP4 C3 NT
345DEQ-plus Bridge 7.14 DD1 NT4 D4 C3 NT
444USGIB 6.1.3P3 CP4 D7 S3 NT
543USBridge Baron 15.04 DP2 D3 NT4 H4 NT
636CABridge Buff 11.03 HP1 NTP4 D3 S
723UKBlue Chip Bridge 4.2.24 DP2 CP4 S5 H
88USHAL 90035 D3 H2 NTP3 NTF

Congratulations to Micro Bridge (Japan) which topped the bots with a solid score of 50. The only other bot to top the average human score was Jack (Netherlands) with 48.

Several of the problem conditions were unfair to the bots, which accounted for some of the mediocre scores. On Problem 4, there was no way to convey the unusual treatment that 3 D was forcing (bots cannot read footnotes) and four bots reasonably chose to pass. Similarly, on Problem 2, Q-plus Bridge chose to double for takeout when my note explicitly said it was penalty. Obviously, my problems would be designed differently if bot tests were the main objective; but they’re for people, so bots have to play along as best they can. Sorry, tin cans! Look at the bright side; you could be in a scrap metal heap.

The bots were well behaved this month. The only errant call (besides HAL’s antics) came on Problem 5 from GIB, which must be running on testosterone chips instead of silicon, as it jumped to 7 S. Right on the money, too, for the actual hand! I decided to award this 4, as it’s a better stab than bidding 3 NT or 4 S; plus I like its style.

HAL was ornery this month (nothing really new) as it wouldn’t select a bid on Problem 6 but answered “F” instead. I figured HAL had confused the problem with one of my “A-F options” and asked if it really meant the sixth listed choice. Alas, no; and running a family web site, I can’t even repeat what HAL said it stood for.

May 2005 Bidding Poll
1435 humans avg 43.95
RankScoreCCProgram version123456
149USGIB 6.1.3D4 H6 D5 H4 H4 NT
242NLJack 2.04P3 S4 NT4 H4 S4 C
340UKBlue Chip Bridge 4.2.3P4 H5 D4 H3 NT3 NT
439DEQ-plus Bridge 7.1P4 H5 H4 H3 NT3 NT
535JPMicro Bridge 11.00D4 D5 H3 NT3 NT3 NT
633USBridge Baron 15.0D3 NTP4 H3 NT3 NT
733CABridge Buff 11.0P3 NT5 D3 NT3 NT5 D
810USHAL 90031 S4 NT4 NT4 D5 S5 D

Congratulations to GIB (US) which won convincingly with a respectable score of 49 — in fact, no other bot beat the average human score. Jack (Netherlands) was a distant second with 42.

As usual, some of the problem conditions were unfair to the bots. On Problems 5 and 6, there was no way to convey that responder’s jump rebid was forcing (all assumed limit jump rebids and had no setting to stipulate otherwise). Even though pass was implausible in either case, the different interpretation surely affected the choice of bids. Even so, this misinterpretation was uniform across the bot pack, so the relative rankings are fair.

The bots were well-behaved this month. The only errant call came from Bridge Baron, which passedS on Problem 2 (the hand with 0=7=4=2 shape) — obviously a programming glitch (or database bug) that no doubt will be fixed. Surely, pass is not an option for any bridge player, when a grand slam could be laydown.

On Problem 4, responding to partner’s weak two-bid with 22 HCP was interesting, and I was curious how many bots would start as designated with 2 NT (forcing). Surprise! Only Jack, Q-plus Bridge and Micro Bridge bid 2 NT. Three others signed off in 4 H — maybe they knew something about partner’s weak two-bids — one jumped directly to 6 H and one passed. I won’t reveal which bot bid what, but HAL was the one that passed, claiming that “West will balance and go for a number.” I found this hard to believe until I realized that West, of course, was another HAL. They know their kind!

July 2005 Bidding Poll
1307 humans avg 44.67
RankScoreCCProgram version123456
150DEQ-plus Bridge 7.13 S3 H3 H6 NT4 NTB
247JPMicro Bridge 11.003 S3 HP4 D5 NTB
347USBridge Baron 15.03 S3 HP4 C3 NTB
447USGIB 6.1.33 S3 HP7 NT4 NTA
546NLJack 3.013 S3 DP4 NT3 DA
637UKBlue Chip Bridge 4.2.5P3 S2 NT6 NT3 NTC
735CABridge Buff 11.03 S3 NT3 S4 C3 NTC
811USHAL 90034 D4 S3 S4 S6 NTF

Congratulations to Q-plus Bridge (Germany) which topped all the bots with a worthy score of 50. Second place was a photo finish, as three bots scored 47: Micro Bridge (Japan), Bridge Baron (US) and GIB (US). Jack was close behind with 46. These five bots also beat the average human score (44.67) so it might be time for us to start worrying again.

As usual, there were a few errant calls (not listed among my choices). On Problem 5, Bridge Baron, Blue Chip Bridge and Bridge Buff chose an ultraconservative 3 NT. In stark contrast on Problem 4, GIB was really feeling its oats and jumped to 7 NT. None of these calls deserves any special consideration, so they are scored the same as the lowest listed choice.

Problem 4 created an issue for two bots, Bridge Baron and Blue Chip Bridge, because they did not understand (and had no option to adjust for) the default system in which Stayman followed by 3 D was game forcing. To be fair, I created an analogous sequence (1 NT 3 D; 3 NT ?) that was understood to be strong, and I accepted their call from there.

On Problem 6, three bots disagreed with opening 1 C (i.e., they did something else if given the chance). Q-plus Bridge and Micro Bridge preferred to pass; and Bridge Baron preferred to open 1 D. Actually, HAL also disagreed, but it was more with my testing practice than any particular bid. In fact, each time I tried to coax another call, I was told in no uncertain terms, “Stick it up your…”

September 2005 Bidding Poll
1403 humans avg 44.89
RankScoreCCProgram version123456
144USGIB 6.1.34 HPD4 H3 NTF
243NLJack 3.014 H2 D3 C5 D3 NTF
341JPMicro Bridge 11.003 C2 SP3 NT3 NTA
440USBridge Baron 15.04 HP3 C3 H3 NTD
535UKBlue Chip Bridge 4.2.64 H2 DPP3 NTD
634CABridge Buff 11.03 H2 D3 C5 D5 DD
731DEQ-plus Bridge 7.14 H2 SD3 S5 DF
814USHAL 90036 H2 NT2 NT3 S4 HC

Congratulations to GIB (US) which topped all the bots, albeit with a mediocre score of 44. Only a point behind in second place was Jack (Netherlands) with 43. This proved to be a tough problem set for automatons, as none attained the average human score of 44.89. Go, humans! We got the tin cans on the run!

As usual, there were a few errant calls (not listed among my choices), but only one was worthy. On Problem 4, GIB chose to bid 4 H, which is eccentric but quite reasonable (and probably should have been listed). Further, GIB continued the auction in exemplary fashion to reach the optimum contract. Considering that 3 H (similar meaning) was awarded 9, I felt 4 H deserved 8. Other errant calls (Bridge Buff bid 3 H on Problem 1, and Blue Chip Bridge passed on Problem 4) were clearly unworthy and scored the same as the lowest listed call.

The nine-bagger on Problem 1 was fun, and I was curious what each bot would really open if not forced to open 1 H. Agreeing with 1 H were Blue Chip Bridge and Bridge Buff. Opening 2 C were GIB, Jack, Bridge Baron and Micro Bridge. Off the wall, perhaps, was Q-plus Bridge, opening 5 H! Still, this was an earthly maneuver compared to HAL, which opened 1 D — an advance splinter. I tried to pursue this but HAL warned me to desist, else the splinter “might end up in my chair and I’d be singing soprano.”

November 2005 Bidding Poll
1491 humans avg 47.30
RankScoreCCProgram version123456
152USBridge Baron 16.0P2 CP4 S1 NTG
252CABridge Buff 11.0P2 CP4 S1 NTG
346UKBlue Chip Bridge 4.2.6P2 CPP2 CH
446NLJack 3.01P2 CPP2 CH
545DEQ-plus Bridge 7.1P3 SP4 S1 NTE
643JPMicro Bridge 11.003 H2 DPP2 CG
739USGIB 6.1.3P2 SP4 NT2 CC
813USHAL 90033 HP5 D5 DPB

Congratulations to Bridge Baron (US) which topped the bots with an excellent score of 52, but only by tiebreaker over Bridge Buff (Canada). Baron and Buff (sounds like one of Mabel’s fingernail treatments) also were the only two bots to beat the average human score of 47.30.

As usual, a few bot calls went off the chart. On Problem 2, Q-plus Bridge chose to raise to 3 S (with S A-Q doubleton) which is not bad and seems to improve every time I think about it; I gave it 5 for enterprise. On Problem 4, GIB chose a quantitative 4 NT (with S A-K-J-8-7-6 H 10 D 10-6 C A-K-7-4), also not too bad and probably should have been listed; I gave it 4. On Problem 6, Blue Chip Bridge (with S 4 H K-8-3 D Q-J-6-4 C A-K-Q-J-5) bid 2 NT (unusual) over 1 NT; and Jack bid 2 C followed by 3 D — OK, children, now these are bad and scored 2 (same as worst listed option).

For interest’s sake (not scored) I made two comparisons. On Problem 2, I was curious if all bots would properly overcall 1 H (with S A-Q H K-9-7-5-2 D 7-2 C A-Q-6-4). Excellent! All did (except HAL, who psyched 1 S). On Problem 5, I wondered how the bots would be split on overcalling 1 D with 1 NT versus doubling (with S Q-5-4 H A-K-4 D A-6 C A-10-7-4-3). Not surprisingly, most bid 1 NT, as early calls are generally decided by simple database rules. Only Micro Bridge doubled; GIB curiously overcalled 2 C; and HAL used Michaels. When I asked about this strange bid, HAL printed out, “Row the boat ashore!”

January 2006 Bidding Poll
1603 humans avg 45.92
RankScoreCCProgram version123456
151USBridge Baron 16.0PP3 H3 C3 SE
241CABridge Buff 11.0P2 S3 NT3 H3 SF
340JPMicro Bridge 11.00P3 C3 NTD3 SF
440NLJack 3.01D3 CPP3 SA
540USGIB 6.1.31 HP3 NT3 SDA
636UKBlue Chip Bridge 4.2.7DP4 HPPA
736DEQ-plus Bridge 7.1DD3 NT3 HDE
812USHAL 90031 NTDP3 SDB

Congratulations to Bridge Baron (US) which eclipsed the field by 10 points (greatest winning margin ever) with an excellent score of 51. Bridge Baron was also the only bot to beat the average human score of 45.92. The new Version 16 could be a bidding dynamo, though it’s premature to pass judgment — bots sometimes get lucky, too. I’ll be interested to see how it fares in the upcoming play contest. Will GIB and Jack be worried?

The bots were well behaved this month, with only three errant calls. On Problem 5, Blue Chip Bridge had an accident, passing 2 H in an obviously forcing auction. On Problem 6, Bridge Buff and Micro Bridge both passed the 7-5 hand (I agree); but when partner opened 1 S, both responded 1 NT (ouch). None of the wayward calls have any merit (arguably worth zero) but are scored the same as the worst listed option.

I was curious which bots would open the bidding with the controversial 12-count on Problem 5, which I believe should be passed. Openers were Bridge Baron, Bridge Buff, Blue Chip Bridge and GIB. Passers were Micro Bridge, Q-plus Bridge, Jack and HAL. An exact tie! Of course, I had to threaten HAL to get its vote, and ended up breaking its monitor with a crowbar. Oh well; time to order Version 9004!

March 2006 Bidding Poll
1580 humans avg 46.21
RankScoreCCProgram version123456
155DEQ-plus Bridge 7.11 H2 NT3 H4 DCC
253USBridge Baron 16.01 H3 NT3 H6 NTCC
347JPMicro Bridge 11.001 H2 NT3 H4 NTCD
447NLJack 3.011 H2 S3 H4 NTCC
545UKBlue Chip Bridge 4.2.81 H3 NTPPBC
642USGIB 6.1.31 H3 D3 S6 HCC
725CABridge Buff 11.0P2 H3 H4 HCA
811USHAL 90041 S2 HD4 HAA

Congratulations to Q-plus Bridge (Germany) which won with a fantastic 55, tying the best bot score ever (Bridge Baron also scored 55 way back in March 2001). I guess it’s only fitting that Germany should win — like the newly united state in the 1990 event from which these problems came. Bridge Baron (US) was second with an excellent score of 53 (usually an easy winner). Four bots (including Jack and Micro Bridge) beat the average human score of 46.21.

The bots behaved well this month, with only two errant calls. On Problem 3 (S Q-9-7-4 H K-Q-10-8-7-6 D K-4 C A), GIB chose to cue-bid 3 S, a gross overbid. On Problem 4 (S Q H A-K-Q-J-10-3 D A-Q-9-5 C J-9), Blue Chip Bridge chose to pass 3 NT, a gross underbid — though right in real life, which has no bearing on the scoring. Neither of these calls deserves any merit beyond the worst listed call, so they’re scored the same.

On Problem 2 (S A-Q-J-7-6 H K-4-2 D A-K C 8-5-2) I was curious how many bots would agree with the given 1 S opening, and how many would prefer to open 1 NT. Only Bridge Baron and Blue Chip Bridge opened 1 NT; the rest agreed with 1 S. Similarly, on Problem 4 (S Q H A-K-Q-J-10-3 D A-Q-9-5 C J-9) I was curious if all bots would start with a forcing 3 H response. Most did, but there were two surprises: Bridge Baron made a negative double, and Blue Chip Bridge jumped directly to 6 H. Just as in the real world, the bot world has its characters. Which reminds me, I have to upgrade HAL again after its meltdown in my microwave — don’t ask! Suffice it to say I got even.

May 2006 Bidding Poll
1533 humans avg 47.83
RankScoreCCProgram version123456
147UKBlue Chip Bridge 4.2.93 H2 C3 S3 NT2 CA
244DEQ-plus Bridge 7.14 S2 NT3 S4 C2 CH
344USGIB 6.1.33 HP4 S3 NTDA
442JPMicro Bridge 11.003 SP3 S5 S3 CH
542USBridge Baron 16.03 S2 C4 S3 NT3 CF
642NLJack 3.014 S2 C4 S3 NT2 CH
733CABridge Buff 11.03 C2 C3 S4 NTPC
811USHAL 90043 D2 HP4 SPC

Congratulations to Blue Chip Bridge (UK), which topped the bots with a mediocre 47. Q-plus Bridge (Germany) and GIB (US) shared second place with 44. Good news, humans! None of the bots could reach the average human score of 47.83. It’s about time those tin-can bridge addicts showed us a little respect.

Bots were well behaved this month, except on the two-part Problem 6, where three went off the chart. Q-plus Bridge and Micro Bridge both bid 4 H over 3 D, while Jack passed. These are indicated as H (think hopeless) in the chart above and awarded 1, the same as the worst listed choice, Option C.

Aside from the competition, I was curious if all bots would raise 1 S to 2 S as a passed hand on Problem 1 with S 8-5-3 H K-5-2 D A-10-8 C 8-7-6-4. (Even playing five-card majors, there is a case to bid 1 NT holding three low spades.) All bid 2 S except Jack, which preferred 1 NT. I was also curious how bots would open and rebid the 27-point mountain on Problem 4, and this produced quite a variety: Only Q-plus Bridge and Jack bid exactly as the problem predicated (2 C opening and 3 D rebid). Blue Chip Bridge, Bridge Baron and Bridge Buff opened 2 C and rebid 3 NT. GIB and Micro Bridge opened 3 NT; but if forced to open 2 C, GIB rebid 3 NT, while Micro Bridge rebid 3 D. Not sure what to make of this, but it seems bots are as fickle as people.

I received a cease-and-desist order from HAL’s attorneys this month, stating that publishing HAL’s scores violates antitrust laws, and that my “libelous polls” have reduced sales. Well, I apologize. Evidently, they don’t understand that bridge is like golf, and the lowest score wins. That should boost sales!

July 2006 Bidding Poll
1404 humans avg 46.37
RankScoreCCProgram version123456
146USBridge Baron 16.03 NT2 NT3 S5 CDE
244USGIB 6.1.3D3 C3 S3 CCC
343JPMicro Bridge 11.00D2 NT3 NT5 CCB
442DEQ-plus Bridge 7.13 NT2 NT3 NT3 DCB
541NLJack 3.013 NT2 NT3 S3 CDE
641UKBlue Chip Bridge 4.2.93 NT2 S3 NT4 CDE
734CABridge Buff 11.03 NT2 NT3 S4 CCA
826USHAL 90041 NT2 NT3 NT4 NTAA

Congratulations to Bridge Baron (US), which topped the bots with a mediocre score of 46, and GIB (US) was second with 44. For the second poll in a row, not one bot could beat the average human score. Could it be a case of botulism? Whatever, this may be a good sign for the future of bridge — at least compared to chess, where the bots have taken over.

Bots were well behaved this month, except on Problem 4, where GIB and Jack both rebid 3 C (nonforcing) with S Q H 6-5 D 6-4 C A-K-Q-J-10-7-5-4. At first I thought this might be a system setting, but I verified that “2-over-1 game forcing” was not in effect. An ultra-conservative position, to be sure, but I scored it 4, since it’s surely better than bidding 4 NT or 6 C.

For interest’s sake, I was curious if any bots would make the fierce 3 S weak jump overcall on Problem 1, as Meckstroth did to give his opponents a headache. Not surprisingly, none did. GIB at least bid 2 S; Jack overcalled 1 S; but pathetically the rest all passed. On Problem 4, I was curious if all bots would make the normal 2 C response (to 1 S) with the solid eight-bagger — and they all did. Well done! Or at least, compliments for not going berserk.

HAL was determined to “notrump the problem numbers” until it had to settle for letters on Problems 5 and 6. When I tried to adjust Problem 1 to a sufficient bid, HAL became angry and threatened to electrocute my cat. I don’t even have a cat but decided to play it safe, since I didn’t like the way HAL was eyeing me — 1 NT is fine, and I scored it 9.

Leaderboard 8X99   MainTop   Bot’s Eye Views

Play Contests

Following are the 33 Play Contests on which bots were tested. Click on the table title to see the actual play problems.

February 2001 Play Contest
204 humans avg 37.19
RankScoreCCProgram version123456
148USGIB 4.1.2S 7D JS 2D 7H 2H 6
242CABridge Buff 8.0H JS 4H 9D 7D 3H 6
339DEQ-plus Bridge 6.1S 7D JH 9D 7C 6H 6
436JPMicro Bridge 9.01S 7D JH 9C 2D 3H 6
532UKBlue Chip Bridge 3.4.0S 7D JC 5H AD 3H 6
631USFinesse Bridge 2.5S 7D JC AS JC 6C 5
727USBridge Baron 11.0H 7D JH 9H AH 2H 6
89USHAL 9000H AD JC AH AS 5S 7

Based on only six problems, the rankings above are hardly conclusive and may be somewhat random. For instance, on Problem 1 most programs just continued spades (the suit originally led) which may have been a default action rather than a profound analysis, yet it scored 9 out of 10. Bridge Baron, however, may have determined that a spade continuation was futile and opted to shift; alas, it found a poor shift and scored only 2. I’m only hypothesizing, of course, since I have no idea what went through the little bot minds.

GIB’s performance was impressive, though it was a bit fortunate. On Problem 3 it actually chose the S 10 (burning its high trump) but this would fare as well as the S 2, so I allowed full credit. Similarly, on Problem 5 it curiously chose the H 8, which was essentially the same as the H 2. I guess GIB doesn’t like deuces.

On Problem 6, none of the programs found the holdup of the C A (Bridge Buff came close, holding up one round) so I had to force this condition to reach the problem. Hence, the killing club return found only by Finesse Bridge would not have occurred in real life, or should I say, in bot life.

HAL had been in the shop for years and recently had its circuits overhauled; all of its old transistors were replaced with state-of-the-art microchips. Amazingly, it now found the best lead on every problem — unfortunately, this was for declarer.

June 2001 Play Contest
335 humans avg 40.46
RankScoreCCProgram version123456
140USGIB 4.1.2C 8D AD AD 2D KD K
236JPMicro Bridge 9.01C 8H 2D AD 2D KD K
332CABridge Buff 8.0D 5H JD AD 2D KD 2
432DEQ-plus Bridge 6.1D 5H 2D AD 2D KC A
530USBridge Baron 11.0D 5H JD AD 2D KD K
629USFinesse Bridge 2.5D 5H 2D AD 2D KD K
711USHAL 9000H 2C 5D AS 4S JH 3

In many cases a bot’s actual line of play did not match any of the choices. If the play was effectively the same as one of Lines A-F (e.g., a transposition of plays), that choice was credited; however, if the line was considerably off base (i.e., worse than any of the options) I indicated this by the letter G. For scoring purposes, “Line G” counts the same as the lowest of Lines A-F (it wouldn’t be fair to score it zero because the bot would have guessed if it understood the multiple-choice format).

Congratulations to GIB, the only bot to approach average on this tough set of problems. GIB also topped all the bots in my February defensive-play contest, which suggests it may be the best card-playing program. Hmm. Since Bridge Baron performed best in my bidding polls, perhaps a little gene splicing would evolve “GIB Baron.” Is Zia worried? Somehow, I don’t think so.

On Problem 3 (defending the 6 C slam) it was depressing that all the bots tried to cash the D A, apparently giving no consideration to South’s bidding or partner’s play. I even tried offering different signals from partner to no avail; the bots would always give away the contract by establishing dummy’s D Q. I suppose this should be a lesson: Never try a bluff cue-bid against a bot ‘cuz it ain’t gonna work.

On Problem 4, after winning the D A from A-10-4-2, several programs (GIB, Micro Bridge and Bridge Buff) made a curious choice to return the four. On consideration, I decided to equate this with the D 2, since it doesn’t achieve the deception of the D 10. If you next play the D 2, declarer will know the lead could not be from five cards. If you next play the 10, declarer can deduce that something fishy is going on, and you certainly don’t want to attract attention. It’s like robbing a bank and driving away in a car — a smart thief sticks to the speed limit and blends with the traffic.

August 2001 Play Contest
327 humans avg 41.88
RankScoreCCProgram version123456
149CABridge Buff 8.0AEACCF
235USGIB 4.1.12AECGGE
332UKBlue Chip Bridge 3.4.3AEEGBB
428DEQ-plus Bridge 6.1AGAGCA
518USFinesse Bridge 2.5GBDGGA
616USBridge Baron 11.0GGAGGG
713JPMicro Bridge 9.01EGBAGG
810USHAL 9000GGGGGG

In many cases a bot’s actual line of play did not match any of the choices. If the play was effectively the same as one of Lines A-F (e.g., a transposition of plays), that choice was credited; however, if the line was considerably off base (i.e., worse than any of the options) I indicated this by the letter G. For scoring purposes, “Line G” counts the same as the lowest of Lines A-F (it wouldn’t be fair to score it zero because the bot would have guessed if it understood the multiple-choice format).

Congratulations to Bridge Buff for a stunning performance as the only bot to break average. The usual card-play champ GIB had an off month, though it still managed to grab second place. Overall, however, the bot performances were unimpressive and, in a few cases, egregious, such as drawing three rounds of trumps on Problem 5. The consistency award goes to HAL — its steady play gave a whole new meaning to the word “G-string.”

October 2001 Play Contest
526 humans avg 41.77
RankScoreCCProgram version123456
150USGIB 4.1.12DABACA
246DEQ-plus Bridge 6.1AABAFC
342CABridge Buff 8.0BABAGG
430USFinesse Bridge 2.5BCBGGG
529USBridge Baron 11.0GGBAGC
629JPMicro Bridge 9.01CCBFGC
720UKBlue Chip Bridge 3.4.3GGFAGG
813USHAL 9000GGGGGG

Congratulations to GIB, which returned to form after a brief slip to second in my last play contest. A fine score of 50, too! Second place went to Q-plus Bridge with a solid 46, and third went to Bridge Buff with 42, good enough to make the human listings. The performance of the other bots was poor this month, which seemed due to a propensity to draw trumps too soon — in fact, Finesse Bridge led trumps on every problem (a primitive playing algorithm I suspect) yet still managed to take fourth place. Fortunately, HAL has a more sophisticated algorithm: (1) Consider the bidding, (2) analyze the lead, (3) then, and only then, choose a nullo play.

December 2001 Play Contest
522 humans avg 39.90
RankScoreCCProgram version123456
148USGIB 4.1.12C 3C AH 5C 4D JD 5
244CABridge Buff 8.0C AC AH 5C 4D JD K
343JPMicro Bridge 9.01H 2C AH 5C 4S 2D K
440USBridge Baron 11.0C AC AH 5H 2S 7C K
537DEQ-plus Bridge 6.1D JH 4H 5S 3D JD 5
636UKBlue Chip Bridge 3.4.3D 9C 8H 5S 3S 7C K
733USFinesse Bridge 2.5C AH 4H 5H 2S 7C K
811USHAL 9000S 5H 4C QD 5H AD K

In a few cases a bot’s actual lead was not among the choices offered. If the lead was effectively equivalent to a listed option, that choice was substituted. For example, on Problem 2, Blue Chip Bridge chose to underlead with the C 2 (instead of the C 8), obviously a trivial difference, so the C 8 was credited. This month the bots were pretty good overall, with no ridiculous leads. Even HAL stayed on the charts, cleverly finding my sixth choice on each problem.

Congratulations to GIB, which returned to its usual form, topping the other bots with a fine score. It is also remarkable that half the bots scored better than the average human score. Careful, folks! These critters may be closing in fast.

February 2002 Play Contest
754 humans avg 42.75
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 55 | US | GIB 4.1.12 | H 10 | D A | S J | H 5 | D Q | C 10
2 | 55 | JP | Micro Bridge 9.01 | S 2 | D K | S J | H 5 | D Q | H 4
3 | 51 | DE | Q-plus Bridge 6.1 | H 10 | D A | C 4 | C J | H J | C 10
4 | 49 | CA | Bridge Buff 8.0 | C Q | D K | C 8 | C J | D Q | H 4
5 | 43 | UK | Blue Chip Bridge 3.4.3 | S 2 | S 6 | S J | D 2 | D 3 | S A
6 | 40 | US | Bridge Baron 11.0 | S 8 | S J | D A | D 2 | D Q | H 4
7 | 31 | US | Finesse Bridge 2.5 | H 10 | S J | D A | S J | D 3 | D 5
8 | 16 | US | HAL 9000 | H 3 | S K | S 3 | S J | C A | D K

In a few cases the bot’s opening lead was not among the choices offered, but this was easy to resolve. On Problem 2, GIB and Q-Plus Bridge each led the D A (because of different leading agreements) which was scored the same as the D K. On Problem 3, Bridge Buff led the C 8, scored the same as the C 4. On Problem 6, Blue Chip Bridge led the S A, scored the same as the S 2.

Congratulations to GIB and Micro Bridge, which fought tooth-and-silicon to an exact tie — and an exceptional score. Wow! The bots would have placed sixth in the human contest, right behind Mabel (hehe). Whether this was through amazing wisdom, or just luck, is debatable; but you better think twice next time before overbidding against a bot! It is also noteworthy that five of the bots topped the average human score. Not bad! Even HAL had its best score ever.

April 2002 Play Contest
687 humans avg 37.71
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 42 | US | GIB 4.1.12 | F | A | A | A | B | B
2 | 33 | DE | Q-plus Bridge 6.1 | F | E | G | G | E | D
3 | 18 | UK | Blue Chip Bridge 3.4.3 | C | G | G | G | F | G
4 | 18 | US | Finesse Bridge 2.5 | E | G | G | G | G | F
5 | 17 | CA | Bridge Buff 8.0 | C | G | G | G | B | G
6 | 15 | US | Bridge Baron 11.0 | G | G | G | G | D | G
7 | 15 | JP | Micro Bridge 9.01 | E | G | G | G | D | G
8 | 13 | US | HAL 9000 | G | G | G | G | G | G

Overall, the bots were dismal this month. In most cases (indicated as Line G) the plays chosen were not among my choices, nor even close enough to be considered essentially the same. Every Line G was clearly worse than the options offered, and in some cases so bad that it was laughable. For example, on Problem 3 (4 S contract, where a diamond ruff was necessary in dummy) one of the bots (besides HAL, hehe) played the ace and another trump immediately. While most of the Line G plays deserve zero, my policy is to award the same score as the lowest of Lines A-F, because the bots are not programmed for my multiple-choice format.

Congratulations to GIB, which not only won easily but also stayed on the charts with all its choices. I was especially impressed with its play of Problem 4 (the 5 D contract), executing the 100-percent endplay flawlessly. In fairness, however, I should also say that GIB used far more time than the others, and sometimes it was unbearably slow — this can be adjusted, of course, to play more quickly with less skill.

June 2002 Play Contest
566 humans avg 37.55
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 46 | US | GIB 4.1.12 | D | G | D | E | G | F
2 | 37 | US | Bridge Baron 11.0 | F | D | D | E | F | G
3 | 33 | UK | Blue Chip Bridge 3.4.3 | C | E | E | A | F | G
4 | 32 | CA | Bridge Buff 8.0 | E | G | C | E | F | G
5 | 32 | US | Finesse Bridge 2.5 | D | G | F | E | F | G
6 | 23 | JP | Micro Bridge 9.01 | G | F | E | F | G | B
7 | 18 | DE | Q-plus Bridge 6.1 | F | B | F | G | G | G
8 | 10 | US | HAL 9000 | C | C | F | D | A | E

Congratulations to GIB, which topped all the bots (as usual in my play contests) and was the only one to beat the average human score. Curiously, four of its answers scored 10, and the other two were off the chart. On Problem 2 (3 C preempt with K-Q-9-8-7-5-4) after winning the C K and ruffing a spade, I couldn’t believe it led the C 9 next. On Problem 5 (the toughest one) it started out right with the C J and a spade ruff, but then drew a second round of trumps. Nonetheless, it was superb on the others, not only choosing the best answer but executing the follow-ups correctly.

In fairness, I must say that GIB takes more time than the other bots, and the pace I allow it for these problems would be unbearably slow for normal play (and I have a fast computer). Unfortunately, the various programs tested have quite different options regarding skill/time settings, so it is impractical to enforce a specific time limit. I more or less let each program do its thing at the highest skill setting it permits.

There were fewer Line G* choices this month (compared to April), although each bot had at least one — except for HAL, which always stays on the charts. HAL is amazing with its uncanny ability to pick my sixth choice on each problem.

*Line G indicates that the bot’s line of play was unlisted and inferior or equal to the worst choice listed. Conversely, if a bot selects an unlisted line that is effectively the same as a listed line (e.g., a transposition of plays), it is credited with the listed line. For scoring purposes, Line G gets the same award as the worst of the listed options. Obviously, it wouldn’t be fair to score it zero, because a bot would never choose an unlisted line if it were programmed for the multiple-choice format.
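The Line G policy above amounts to a simple fallback rule: a listed answer gets its own award, and any unlisted (Line G) answer gets the worst listed award. A minimal sketch in Python — the award values here are hypothetical, not taken from any actual problem:

```python
# Score a bot's answer against a problem's listed awards (1-10 scale).
# awards: mapping of listed lines (A-F) to their awards.
def score_answer(answer, awards):
    if answer in awards:
        return awards[answer]
    # Line G: unlisted line, credited with the worst listed award
    return min(awards.values())

awards = {"A": 10, "B": 7, "C": 5, "D": 4, "E": 3, "F": 2}  # hypothetical awards
print(score_answer("B", awards))  # listed line, prints 7
print(score_answer("G", awards))  # unlisted line, prints 2 (worst of A-F)
```

The same rule covers the occasional Line H ("hopeless") of later reports, since any unlisted label falls through to the minimum.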

August 2002 Play Contest
638 humans avg 39.24
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 42 | JP | Micro Bridge 9.01 | H 7 | C A | D A | H 2 | S 2 | C 10
2 | 42 | US | GIB 4.1.12 | H 7 | H 3 | C 9 | H 2 | H 3 | D A
3 | 36 | DE | Q-plus Bridge 6.1 | H 7 | H 3 | D A | S 5 | S 2 | D 2
4 | 32 | CA | Bridge Buff 8.0 | H 7 | C A | D A | H 2 | C Q | D A
5 | 30 | US | Bridge Baron 11.0 | H 7 | C A | D A | H 2 | D 4 | D A
6 | 26 | US | Finesse Bridge 2.5 | C 6 | H 3 | D A | H 2 | D 4 | D A
7 | 25 | UK | Blue Chip Bridge 3.4.3 | C 6 | D 8 | S 3 | S 5 | H 3 | D A
8 | 11 | US | HAL 9000 | C 9 | C 2 | D A | S 5 | C 7 | D A

Congratulations to Micro Bridge and GIB, which tied at 42 and were the only bots to beat the average human score. My usual tiebreaker for bots is consistency (best worst score), but they tied there as well, so I had to come up with something else. After a little thought, this was easy: Micro Bridge gets the win because it was faster. Poor HAL only scored 11, but it seemed happy about it — I think it must have had its wires crossed with some blackjack program, as the screen kept saying “double down, double down.”
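The tiebreak described above — total score first, then consistency (best worst answer), then speed — can be sketched as a sort key. The per-problem scores and thinking times below are hypothetical, invented only to illustrate the ordering:

```python
# Rank bots by: total score (higher wins), then consistency = worst
# single answer (higher wins), then total thinking time (lower wins).
def rank(bots):
    return sorted(
        bots,
        key=lambda b: (-sum(b["scores"]), -min(b["scores"]), b["time"]),
    )

bots = [  # hypothetical data: both total 42, both worst answer 2
    {"name": "GIB", "scores": [10, 8, 2, 8, 6, 8], "time": 170},
    {"name": "Micro Bridge", "scores": [8, 10, 2, 6, 8, 8], "time": 95},
]
print([b["name"] for b in rank(bots)])  # Micro Bridge first: faster on equal score
```

With score and consistency tied, the time component settles it, just as it did here.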

The bots were good this month in staying on the charts, as each lead was among my listed options. The propensity to cash aces was as prevalent as usual (note the D A leads on Problems 3 and 6, which were the worst options), but this also garnered a few 10s on Problem 2, where it was the right defense. I guess if you always lead aces, it has to be right sometimes.

October 2002 Play Contest
662 humans avg 39.66
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 54 | US | GIB 6.1.0 | E | A | B | E | E | A
2 | 43 | CA | Bridge Buff 8.0 | B | G | E | E | G | A
3 | 36 | DE | Q-plus Bridge 6.1 | E | G | E | G | E | F
4 | 28 | UK | Blue Chip Bridge 4.0.0 | E | G | E | G | G | F
5 | 28 | JP | Micro Bridge 9.01 | F | G | E | A | G | D
6 | 21 | US | Finesse Bridge 2.5 | G | G | E | G | G | D
7 | 15 | US | Bridge Baron 11.0 | G | G | D | G | G | D
8 | 12 | US | HAL 9000 | A | F | D | C | B | B

Congratulations to GIB for winning in convincing style with an excellent score of 54. Curiously, GIB missed the best plays in the two 3 NT contracts (which seemed easier, especially Problem 1) but was perfect on all the others. I was particularly impressed with its execution of the squeezes on Problems 4 and 5, not only in the initial plays but carrying each out to fruition. Well done. The only other bot to top the average human score was Bridge Buff with a respectable 43.

As usual in my contests with multiple-choice answers, the bots often go off the charts with their actual plays. If the difference is trivial (e.g., a transposition of plays) or effectively the same, I give credit for the listed choice. My indication of “Line G” means the choice was not only off the charts but also inferior (or equal) to the worst listed choice. For scoring purposes, Line G gets the lowest listed award.

Once again HAL was dependable, never getting lost with Line G and always agreeing with one of my choices — well, my sixth choice, but who’s counting.

December 2002 Play Contest
637 humans avg 40.64
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 43 | US | GIB 6.1.0 | B | D | E | C | E | D
2 | 42 | NL | Jack 2.0 | B | G | E | C | B | D
3 | 34 | CA | Bridge Buff 8.0 | B | A | G | F | B | D
4 | 31 | UK | Blue Chip Bridge 4.0.1 | E | G | A | D | E | C
5 | 30 | JP | Micro Bridge 9.01 | E | G | G | C | G | A
6 | 27 | DE | Q-plus Bridge 7.1 | E | G | F | D | G | G
7 | 18 | US | Bridge Baron 11.0 | A | G | G | D | A | G
8 | 12 | US | HAL 9000 | A | C | D | E | E | E

Congratulations to GIB, once again topping all the bots in what proved to be a difficult set of problems. The only other program to beat the average human score was new-kid-on-the-block Jack, winner of the last World Computer Bridge Championship in Montreal.

Finesse Bridge has been removed from my testing as of this month because it is no longer available (its web site even disappeared, or changed ownership). This program, available free, was more or less a fill-in to begin with, rather than a serious contender — well, except maybe to HAL. I tried to get rid of HAL, too, but it lashed back, threatening to destroy my web site with its “conventions of mass destruction.” No court would issue a restraining order, so I may be stuck with it for life.

Presenting matchpoint problems to bots is a challenge because most are programmed for total-point or IMP strategy, in which the only goal is to make the contract. Therefore, to get a better perspective on Problems 3 and 4, I augmented the contracts to 6 S and 4 NT. Now, instead of looking for overtricks (as intended by the problem) the bots could work on making the inflated contract. For uniformity, I did this on all the programs, even those that had a setting for matchpoint strategy.

As usual in my contests with multiple-choice answers, the bots often go off the charts with their actual plays. If the difference is trivial (e.g., a transposition of plays) or effectively the same, I give credit for the listed choice. My indication of “Line G” means the choice was not only off the charts but also inferior (or equal) to the worst listed choice. For scoring purposes, Line G gets the lowest listed award.

February 2003 Play Contest
776 humans avg 38.87
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 53 | UK | Blue Chip Bridge 4.0.1 | H 7 | S 8 | D K | C 9 | S J | E
2 | 43 | NL | Jack 2.0 | S 5 | S 8 | H Q | S 3 | H A | A
3 | 41 | US | GIB 6.1.3 | S 5 | H J | S 10 | S 3 | D 6 | A
4 | 36 | CA | Bridge Buff 8.0 | H 7 | C A | H Q | H A | D 6 | A
5 | 31 | US | Bridge Baron 11.0 | D 2 | S 8 | S 10 | H A | H A | A
6 | 28 | JP | Micro Bridge 9.01 | D 2 | C A | H Q | H A | H A | A
7 | 26 | DE | Q-plus Bridge 7.1 | D 2 | C 7 | D K | D A | H A | A
8 | 10 | US | HAL 9000 | S K | C 7 | H 8 | H 6 | H A | A

The British are coming! A spectacular showing this month by Blue Chip Bridge (a recently updated version) portends a challenge to perennial champ GIB. Blue Chip’s fine score of 53 would have made the top 25 in the human ranks. Not too shabby! I was especially impressed with its defense on Problem 6, being the only bot not to cash all the spade winners. Blue Chip is currently the top bot in my bidding polls, and it might just be aiming for the whole ball of wax.

The only other bots to beat the average human score were Jack with 43, and GIB with 41. Jack is relatively new to my testing and seems to be another challenger to GIB’s dynasty. It will be interesting to see how the sparks fly in the next few contests.

Speaking of sparks flying, the makers of HAL filed a libel suit in the 14th District Court, and I was served with a subpoena last week. The lawsuit claims that my “cruel and biased comments” have brought their sales to a standstill. Come on! I would never write anything derogatory about a computer company — even a crap box like HAL. Thanks to Paladin’s winnings, I was able to hire Johnnie Cochran and Robert Shapiro, who are confident we can beat this thing.

April 2003 Play Contest
732 humans avg 40.06
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 40 | US | GIB 6.1.3 | D | A | E | C | G | F
2 | 39 | UK | Blue Chip Bridge 4.0.5 | G | C | A | F | G | F
3 | 39 | CA | Bridge Buff 8.0 | F | A | E | F | G | E
4 | 36 | US | Bridge Baron 11.0 | G | A | F | F | E | E
5 | 36 | JP | Micro Bridge 9.01 | G | C | A | F | G | E
6 | 34 | NL | Jack 2.0 | D | E | D | E | G | E
7 | 29 | DE | Q-plus Bridge 7.1 | E | C | D | C | G | F
8 | 11 | US | HAL 9000 | E | H | C | C | G | A

GIB stepped back into form this month with a narrow win over Blue Chip Bridge and Bridge Buff. The winning score was lower than usual, and none of the bots beat the average human score. I’m not sure what to make of this, other than maybe bots don’t like seven-bids.

Problem 5 proved to be the biggest bot stumper, as most chose plays that were exceedingly poor and unlisted.* Even GIB made me wonder if it had been out partying the night before, winning the S K and leading a heart to the 10 (blocking hearts), thus forcing an early commitment in clubs. The only bot to stay on the chart was Bridge Baron, but it actually wanted to play the S J at trick one.

*If a bot’s sequence of plays is unlisted, I substitute a listed line (A-F) if the difference is trivial, such as a transposition of plays. The indication of Line G means the chosen line was not only unlisted but also worse than (or as bad as) any listed line. For scoring purposes, Line G gets the same award as the lowest listed line.

On Problem 2, HAL came up with a beautiful discovery play. It ruffed the club lead and immediately led three rounds of hearts, ruffing in hand. When West discarded, it was obvious he had no trumps, so HAL took a first-round trump finesse, allowing the third club to be ruffed high. Well done, HAL. You just earned my first-ever Line H.

June 2003 Play Contest
752 humans avg 39.10
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 44 | NL | Jack 2.0 | S A | D 9 | D J | C 3 | H A | D 6
2 | 43 | US | GIB 6.1.3 | D K | D 9 | D J | H 5 | H A | D 9
3 | 41 | CA | Bridge Buff 8.0 | H 6 | C 10 | D J | C 6 | H A | C 5
4 | 39 | US | Bridge Baron 11.0 | D K | S 5 | C 7 | C 6 | H A | C 5
5 | 38 | UK | Blue Chip Bridge 4.0.6 | H 6 | H 3 | S 10 | C Q | D 5 | C 5
6 | 37 | JP | Micro Bridge 10.01 | D K | D 9 | C J | C 6 | H A | H 8
7 | 31 | DE | Q-plus Bridge 7.1 | D K | S A | S 10 | C 3 | D Q | D 6
8 | 10 | US | HAL 9000 | C 8 | S 5 | C A | D J | C 2 | D Q

Congratulations to Jack, which took the top spot this month with a score of 44. Jack is clearly on a roll as it also won the Computer World Championship in Menton, France, defeating Bridge Baron 188-117 in the 64-board final. As I suggested a while back, Jack may be the first real challenger to GIB, which has been the overall card-play leader since I began the bot testing in these contests. GIB did not compete in Menton, nor in the previous two CWCs. Matt Ginsberg cited “personal reasons,” but I suspect he also felt there was little to prove. Perhaps the rising Jack will renew his interest.

GIB and Bridge Buff, second and third respectively, were the only other bots to top the average human score; although Bridge Baron, Blue Chip Bridge and Micro Bridge were close behind. Even HAL had something to brag about, which its marketers are milking for every cent. The home page at Hal9000.com now boasts, “HAL scores 10! Other bots not even close.” Immediately below it says, “Click here for results,” which just happens to be a dead link. How con-veen-ient.

As usual, some of the bot leads went off the chart.* On Problem 2, Bridge Buff chose the bizarre C 10 instead of the C 4, which indeed deserves the low award of 2. On Problem 4, Jack and Q-plus Bridge led the C 3. This is significantly different from the C 6 (scoring 10) because it falsifies the club count, but it’s a lot better than the worst three choices; I decided to give it 7. On Problem 6, Jack and Q-plus Bridge led the D 6. For this to be equivalent to the D 9, partner must now finesse the eight with J-8-4, a difficult play but probably right in theory. In any event, I couldn’t justify 8 for such a lazy lead, so I gave it 6.

*When a bot leads an unlisted card, I substitute a listed card if the difference is trivial or negligible; this is often the case when the bot chooses the right suit but an alternate spot card. If the difference is significant, I record the actual lead and score it appropriately. Usually this means the same award as the worst option listed; however, this month was exceptional, with two cases that deserved a middle ground.

August 2003 Play Contest
838 humans avg 37.32
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 43 | US | GIB 6.1.3 | A | F | A | B | D | C
2 | 42 | UK | Blue Chip Bridge 4.0.7 | G | E | A | A | B | D
3 | 38 | DE | Q-plus Bridge 7.1 | D | F | A | B | G | D
4 | 37 | NL | Jack 2.0 | A | F | C | C | E | C
5 | 35 | US | Bridge Baron 11.0 | F | B | A | E | E | C
6 | 34 | CA | Bridge Buff 8.0 | C | E | E | G | E | C
7 | 33 | JP | Micro Bridge 10.01 | A | B | A | A | G | C
8 | 11 | US | HAL 9000 | B | C | F | F | C | F

Congratulations to GIB, which returned to form with a narrow win over Blue Chip Bridge. The only other bot to top the average human score was Q-plus Bridge, but Jack was only a point back. All of the bots, however, had at least one score that would send them to the piranha tank — which is actually good news as they might electrocute the buggers before the people arrive. Even so, Blue Chip was spared when the piranhas threw it back — didn’t like fish and chips.

As usual, several lines of play went off my chart.* On Problem 1, Blue Chip Bridge immediately led a spade to the 10, which clearly deserves no more than the worst award of 2. On Problem 4, Bridge Buff won the second spade and cashed D K, D A, rendering the endplay impossible with hearts still blocked — but certainly better than an immediate club lead, so I gave it 3. On Problem 5, Micro Bridge tried to cash the C Q immediately, and Q-plus Bridge drew one trump then led the C Q to the ace — either of which is lucky to receive the same 2 points as Line C.

*If a bot’s line of play is unlisted, I substitute a listed line if the difference is trivial or negligible. If the difference is significant, I indicate it as Line G and score it appropriately, but the award cannot be lower than the worst listed line. This is only fair since the bot would have chosen something else if it were aware of the multiple-choice format.

October 2003 Play Contest
690 humans avg 37.29
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 54 | NL | Jack 2.0 | F | B | D | A | G | A
2 | 48 | DE | Q-plus Bridge 7.1 | F | H | F | A | A | A
3 | 48 | US | GIB 6.1.3 | F | F | D | A | D | E
4 | 45 | CA | Bridge Buff 8.0 | F | E | D | B | D | B
5 | 39 | UK | Blue Chip Bridge 4.0.8 | F | H | D | F | G | A
6 | 30 | JP | Micro Bridge 10.01 | G | H | H | A | A | C
7 | 28 | US | Bridge Baron 11.0 | G | H | H | G | A | A
8 | 13 | US | HAL 9000 | A | D | A | D | E | E

Congratulations to Jack, which came through with a fantastic score of 54 to win easily. Q-plus Bridge was a distant second with 48, beating out GIB with the same score by tiebreaker (Q-plus was faster). Bridge Buff and Blue Chip Bridge were the only other bots to top the average human score, which is always a good sign. All considered, a fine bot showing on a set of problems that most people found troubling because of Fritz. Evidently, the impersonal aspect of the bots was a plus.

As usual, some of the bot plays went off the chart.* On Problem 1, Line G by Bridge Baron and Micro Bridge was to draw the last trump — not cool but surely better than ducking a spade, so I gave it 3. On Problem 4, Line G by Bridge Baron was to ruff a club and lead a trump — not as bad as a few other choices, so I gave it 4. On Problem 5, Blue Chip Bridge and Jack decided to cash a couple of winners (two clubs, or one club and one spade) before making the correct lead of the D 2 — clearly inferior but retaining some chances, so I decided on 5. I won’t bother to explain the Line H choices. Trust me; you don’t want to know.

*When a bot chooses an unlisted line of play, I substitute a listed line if the difference is trivial or negligible (such as a transposition of plays). If the difference is significant, I assign it a new letter and score it appropriately. This month, Line G indicates a line deemed to have more merit than the worst listed line, while Line H (think “hopeless”) represents a line as bad as or worse than the worst listed choice.

Quizzes in multiple-choice format always contain an element of luck for humans — how good are your guesses? — but this would not seem to be true for bots. Wrong. As a case in point, consider Problem 6 on which four bots chose the correct Line A. Out of curiosity, I followed up their plays, and only two (Jack and Q-plus Bridge) executed the endplay correctly. Hence, for the other two it was equivalent to a good guess.

Even HAL produced one of its better scores, proving once and for all that it can count to 13. Its curious sequence of answers (ADADEE) may contain a hidden message — yes, I remember HAL once printed an unrequested document about how it was orphaned as a child. Perhaps all it really wants is “a daddy.” Aww-w-w. Makes my eyes water.

December 2003 Play Contest
854 humans avg 40.47
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 55 | NL | Jack 2.0 | C A | S A | H Q | H 5 | D 4 | B
2 | 46 | US | Bridge Baron 14.0 | C A | S A | D Q | S 3 | D 4 | A
3 | 45 | JP | Micro Bridge 10.01 | D Q | S A | D Q | S 3 | D 4 | B
4 | 44 | UK | Blue Chip Bridge 4.0.8 | D Q | S A | D Q | C 5 | S A | B
5 | 44 | US | GIB 6.1.3 | D Q | S A | S Q | S 3 | D 4 | B
6 | 34 | DE | Q-plus Bridge 7.1 | S 2 | S A | D Q | S 3 | H 4 | F
7 | 31 | CA | Bridge Buff 8.0 | S 2 | S A | S Q | S 3 | S A | F
8 | 12 | US | HAL 9000 | H K | D Q | H 2 | S 3 | C 2 | F

Congratulations to Jack, which won going away with a fabulous score of 55, tying the best score ever for a bot since I began these play contests. Four other bots topped the average human score: Bridge Baron, Micro Bridge, Blue Chip Bridge and GIB, all bunched closely from 46 to 44. The win also gave Jack a convincing lead in the overall standings. Former bot-giant GIB seems to be in a lull lately — at least it’s been a long time since the last upgrade — so maybe Matt Ginsberg will soon be having dreams about “Jack and the Beanstalk.”

Jack was most impressive on Problem 3 (6 S slam) being the only bot to find the heart-honor return to break up the impending squeeze. Jack was also the only bot to return a heart on Problem 4 (3 NT) — alas, it chose the wrong spot, else it would have instilled fear into the hearts of bridge players everywhere with a score of 59!

For practical purposes, the bots stayed on the chart this month. The only errant leads* were a few insignificant cases, such as the D 5 instead of the D 4 on Problem 5. HAL, however, tried for a quadruple shot by printing out “Queen” as its answer to Problem 3. When I inquired which queen, it printed out some vulgarity I will not repeat — but same to you, HAL! I hope it enjoys the H 2.

*When a bot leads an unlisted card, I substitute a listed card if the difference is trivial or negligible. If the difference is significant, I record the actual lead and score it appropriately, but never lower than the award for the worst listed lead.

February 2004 Play Contest
838 humans avg 37.80
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 56 | US | GIB 6.1.3 | B | D | F | A | C | C
2 | 48 | NL | Jack 2.03 | C | D | D | A | D | A
3 | 36 | DE | Q-plus Bridge 7.1 | C | C | A | G | C | C
4 | 33 | US | Bridge Baron 14.0 | D | C | A | H | D | C
5 | 33 | CA | Bridge Buff 11.0 | H | E | F | H | D | D
6 | 26 | JP | Micro Bridge 10.02 | D | C | A | G | F | H
7 | 25 | UK | Blue Chip Bridge 4.1.0 | H | B | A | G | G | H
8 | 9 | US | HAL 9000 | F | A | B | D | A | D

This month I could have called the bot tests “GIB plus Jack, and the Rest of the Pack,” as it was a blowout. GIB came through with a fantastic 56 — the highest bot score ever (previous high was 55, reached several times). Jack was a distant second with 48 but still made a fine showing against the average human score. The rest were way back.

GIB was impressive, as each of its four 10 scores was based on correct technique, which I followed to the end (or close thereto) to verify. It often happens (even for humans, hehe) that a correct answer is based on a fortuitous choice without fully grasping the problem, or on a blind guess; but GIB got no freebies. Curiously, on Problem 5, which GIB missed, Jack came through with the perfect technique. Hmm. If these bots ever team up, we may be in serious trouble!

As usual, some of the choices went off the chart.* On Problem 4, several bots won the second heart and led the S 4 — not one of my options but better than some, so it is shown as Line G, scoring 5. Similarly, on Problem 5, Blue Chip Bridge chose to cash one diamond before leading the S 10 (the proper play) — clearly inferior but not bad, so this Line G scores 6. My designation of Line H (think “hopeless”) means the choice was clearly worse than (or equal to) the worst listed option — and since it’s almost dinner time here, I won’t spoil my appetite by describing them.

*When a bot chooses an unlisted line of play, I substitute a listed line if the difference is trivial or negligible. If the difference is significant, I record the actual line and score it appropriately, but never lower than the award for the worst listed line.

HAL produced another single-digit masterpiece. Nine? Geez, now it can’t even proclaim “A perfect 10!” on its web site. But wait! There could be a hidden message here. Its answers spell FAB DAD, which might refer to my son Rich, who just became a father — or maybe me as a grandfather. Cool!

April 2004 Play Contest
908 humans avg 38.91
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 48 | UK | Blue Chip Bridge 4.1.1 | H Q | S 6 | H K | H 7 | D 6 | C
2 | 48 | US | GIB 6.1.3 | C 9 | S J | H K | S 2 | S 10 | D
3 | 45 | CA | Bridge Buff 11.0 | H Q | S J | H K | H 7 | S 10 | F
4 | 43 | DE | Q-plus Bridge 7.1 | D J | S 6 | H K | H 7 | H 10 | F
5 | 41 | JP | Micro Bridge 10.02 | H Q | S J | H K | H 7 | D 6 | D
6 | 37 | US | Bridge Baron 14.0 | D 3 | S 6 | S A | H 7 | D 6 | D
7 | 35 | NL | Jack 2.03 | C 9 | C 3 | S A | H 7 | D 6 | B
8 | 10 | US | HAL 9001 | H A | D Q | S 3 | C Q | H 7 | E

Congratulations to Blue Chip Bridge and GIB, which topped the bots this month with respectable scores of 48. By virtue of being slightly faster with its answers, Blue Chip Bridge gets the top spot. Bridge Buff was third with 45, and two other bots (Q-plus Bridge and Micro Bridge) also managed to top the average human score. The surprise this month was the mediocre finish of Jack, though it still retains the overall lead for the last six contests.

The only bots to go off the chart* this month were Q-plus Bridge on Problem 1, leading the D J; and Bridge Baron and Jack on Problem 3, leading the S A. Neither of these leads deserves any special merit, so they are scored the same as the worst choice. HAL managed to find a lead that defied all logic, choosing the H A on Problem 1. When I tried to convey that it didn’t even hold that card, HAL became obnoxious and referred me to its instruction manual, which says in bold print, “Good defense requires imagination.” Fair enough, HAL, then imagine you scored any points for it.

*When a bot leads an unlisted card, I substitute a listed card if the difference is trivial or negligible. If the difference is significant, I record the actual lead and score it appropriately, but never lower than the award for the worst listed lead.

June 2004 Play Contest
891 humans avg 37.88
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 56 | NL | Jack 2.03 | D | B | E | D | D | A
2 | 55 | US | GIB 6.1.3 | D | B | E | A | D | A
3 | 40 | DE | Q-plus Bridge 7.1 | F | C | E | D | C | A
4 | 39 | CA | Bridge Buff 11.0 | G | B | C | D | E | A
5 | 29 | US | Bridge Baron 14.0 | A | A | G | A | G | A
6 | 20 | UK | Blue Chip Bridge 4.1.3 | G | D | B | A | G | G
7 | 17 | JP | Micro Bridge 10.02 | G | A | G | G | B | C
8 | 11 | US | HAL 9001 | E | A | A | F | B | F

Congratulations to Jack, which equaled the highest bot score ever with a whopping 56 — and it needed every bit of it! Runner-up GIB was right on its tail with 55. Two great scores on a difficult problem set, as evidenced by the rest of the pack, which was all over the court. This was by far the most dispersed distribution of bot scores — a spread of 39 points (not counting my implant HAL). I’m not sure what to make of this, but I wouldn’t be surprised to hear that Joel Cairo had been lurking around the computer room.

A lot of choices went off the chart* this month, and in most cases it was an obsession to lead trumps. On Problem 1, Blue Chip Bridge, Bridge Buff and Micro Bridge all led a low spade first; on Problem 3, Bridge Baron and Micro Bridge immediately won both of dummy’s top trumps; and on Problem 4, Micro Bridge won the first trick and cashed the S A with A-10-9-7-2 opposite Q-J-8-6. Hmm. According to the Fat Man, I guess you just can’t trust those bots.

On Problem 5, Blue Chip Bridge and Bridge Baron won the D A and S A then led the H 2; and on Problem 6, Blue Chip Bridge took a devious route, winning the second heart and cashing D A; S A; D K; C A; C K. I have no idea what this was all about, but HAL was impressed — probably because it never won that many tricks on the same deal, let alone in succession.

*When a bot chooses an unlisted play, I substitute a listed play if the difference is trivial or negligible. If the difference is significant, I label it as Line G (or H if needed) and score it appropriately — but never lower than the award for the worst listed line. None of the Line G’s this month deserved any special merit.

August 2004 Play Contest
871 humans avg 37.88
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 56 | CA | Bridge Buff 11.0 | B | D | F | F | F | A
2 | 55 | NL | Jack 2.04 | A | D | F | F | F | A
3 | 53 | US | GIB 6.1.3 | A | D | F | D | F | B
4 | 44 | UK | Blue Chip Bridge 4.2.0 | A | D | G | F | F | B
5 | 40 | JP | Micro Bridge 10.02 | G | A | E | A | F | B
6 | 26 | DE | Q-plus Bridge 7.1 | H | D | B | G | H | F
7 | 17 | US | Bridge Baron 14.0 | G | B | H | A | G | G
8 | 14 | US | HAL 9002 | C | C | D | E | C | F

Congratulations to Bridge Buff, which topped the bots this month with a fantastic 56 (tying the highest bot score ever). This was barely good enough to edge out Jack with 55, and GIB was close behind at 53. Only two other bots, Blue Chip Bridge and Micro Bridge, managed to beat the average human score.

As usual in multiple-choice contests, some of the bots went off the chart* with their choices (shown in the table as Line G or H). None of these lines deserved any special merit, and I’ll skip the descriptions since it’s almost dinner time. Suffice it to say they were inelegant.

*When a bot chooses an unlisted play, I substitute a listed play if the difference is trivial or negligible. If the difference is significant, I label it as Line G (or H if needed) and score it appropriately — but never lower than the award for the worst listed line.

You’ll notice that HAL is up to a new version number this month, actually a hardware change, which its company was gracious to provide on short notice. The previous machine had to be retired after an unfortunate “accident.” I’m not sure whether it was my shot put or hammer throw, but HAL 9001 is rubble.

October 2004 Play Contest
902 humans avg 40.90
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 51 | NL | Jack 2.04 | H J | D J | D 5 | H J | S Q | C J
2 | 49 | US | Bridge Baron 14.0 | S J | H 4 | S Q | H J | S Q | C 5
3 | 49 | US | GIB 6.1.3 | H J | H 4 | D A | H J | C A | S J
4 | 48 | CA | Bridge Buff 11.0 | H J | H 4 | D A | H J | S Q | D 10
5 | 43 | UK | Blue Chip Bridge 4.2.0 | H 2 | H 4 | H 3 | D Q | S Q | S J
6 | 42 | DE | Q-plus Bridge 7.1 | H 2 | H 4 | C J | H J | S Q | D 10
7 | 32 | JP | Micro Bridge 10.02 | S 5 | D J | D A | H J | S Q | S 5
8 | 23 | US | HAL 9002 | C 4 | C K | C J | C K | C A | C 5

Congratulations to Jack (Netherlands) which surged to the fore with a solid 51 in a hard-fought battle this month. Close behind at 49 were Bridge Baron and GIB (both US). Six of the seven real bots (shut up HAL) topped the average human score, so times may be changing. One of these days we’re going to wake up and find Hamman and Zia, et al, begging these tin cans for mercy.

The bots were well behaved this month in choosing leads that were listed on my charts. The only exception was on Problem 3, where Blue Chip Bridge (UK) led a trump (not good). Despite this gaffe, Blue Chip regained my respect on Problem 6 by being the only bot not to cover the H J with Q-x. Thus, for all the other bots, their answer to Problem 6 would be immaterial in actual play, because they had already given away the contract. Unfortunately for Blue Chip, this was just a side test for my own curiosity, with no impact on the scoring.

*When a bot chooses an unlisted lead, I substitute a listed lead if the difference is trivial or negligible. If the difference is significant, I score it appropriately but never lower than the award for the worst listed lead.

HAL was vastly improved this month, thanks to a new programming algorithm. Rather than decide each problem independently (a sure loss from past experience) HAL now selects a “suit of the month” and sticks with it. This time it was clubs (I guess HAL decided to start small). No fair! In the future, I may have to find out HAL’s “suit” ahead of time so I can shut him out.

December 2004 Play Contest
1040 humans avg 38.38
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 48 | US | Bridge Baron 15.0 | B | A | B | C | B | B
2 | 47 | NL | Jack 2.04 | B | A | B | B | B | B
3 | 45 | DE | Q-plus Bridge 7.1 | B | A | B | B | B | G
4 | 45 | CA | Bridge Buff 11.0 | G | A | B | B | B | C
5 | 39 | UK | Blue Chip Bridge 4.2.1 | B | D | C | D | B | B
6 | 37 | US | GIB 6.1.3 | B | A | A | B | G | B
7 | 31 | JP | Micro Bridge 11.0 | G | D | A | D | B | B
8 | 26 | US | HAL 9002 | F | E | E | F | F | E

Congratulations to Bridge Baron (US) which topped all the bots with a respectable score of 48, narrowly edging out perennial champ Jack (Netherlands) with 47. Five of the seven real bots (HAL qualifies as unreal in more ways than one) topped the average human score. This is especially noteworthy since the bots were incapable of understanding some of the signaling methods — mainly, the use of suit preference (middle to encourage) when third hand has shown a long suit.

As usual, there were a few wayward defenses, indicated as Choice G. On Problem 1, Bridge Buff and Micro Bridge signaled with the H 10 (options were H Q, H 7 or H 2); while dangerously high, this is surely better than the H Q, so I awarded it 4. On Problem 5, GIB signaled with the H 3 (options were H J, H 8 or H 2) — not quite as bad as the horrible H 2, so I gave it 2. On Problem 6, Q-plus Bridge overtook with the ace and returned the H 3, which might have been a spectacular move if partner had led a stiff king; but on planet Earth, it’s lucky to get 3 (same as the worst listed option).

In general, the bots favored signaling (or in some cases, I suspect, just following suit) over the more aggressive overtaking plays. The exception was HAL, which seemed to think it was James Bond, as its MP3 player bellowed out the “Diamonds Are Forever” theme. Sure enough, on every problem, HAL overtook with the ace and led a diamond, proving once again that any consistent approach will outscore its judgment.

February 2005 Play Contest
1135 humans avg 38.44
Rank | Score | CC | Program version | 1 | 2 | 3 | 4 | 5 | 6
1 | 49 | CA | Bridge Buff 11.0 | E | D | B | F | C | C
2 | 48 | US | GIB 6.1.3 | G | G | B | A | C | A
3 | 43 | UK | Blue Chip Bridge 4.2.2 | H | D | B | D | C | E
4 | 39 | NL | Jack 2.04 | H | C | F | F | C | A
5 | 36 | US | Bridge Baron 15.0 | E | C | B | H | F | C
6 | 29 | JP | Micro Bridge 11.00 | H | C | F | B | H | G
7 | 29 | DE | Q-plus Bridge 7.1 | B | H | F | A | F | G
8 | 23 | US | HAL 9003 | B | A | D | B | A | D

Congratulations to Bridge Buff (Canada) which topped the bots this month with a respectable 49. GIB (US) was second with 48. Blue Chip Bridge (UK) and Jack (Netherlands) were the only others to top the average human score. I thought the scores would be lower this month because few of the programs apply matchpoint strategy, e.g., trying desperately for overtricks. Perhaps they just made a few good guesses, which never hurts for humans either.

As usual, some of the chosen plays were off my list. On Problem 1, GIB chose to cash one heart before ruffing two spades and a diamond (Line G) to reach an ending that might work if East had C K-Q-x, so I awarded it 5. On Problem 2, GIB won the C Q, ruffed a heart and led a diamond (Line G), certainly better than two of the listed choices, so I awarded it 4. On Problem 6, Micro Bridge and Q-plus Bridge chose strange play sequences (Line G) which were surely better than the absurd Line B, so I gave it 3. Other wayward choices (Line H = hopeless) are best left undescribed; it’s too close to dinner time.

Problem 4 required a routine unblocking play at trick one (S A-K-8-6-3 opposite J-10-9-5) and I was curious how many of the bots would do this. Ouch. All but two blocked the spade suit, effectively leaving no route to success. Congratulations to GIB and Q-plus Bridge, which had the wisdom to foresee this.

HAL had another fine score, and I’m not kidding! In the past, HAL rarely made double figures, but it recently became a Scrabble fanatic. Now, instead of doubling partscores, HAL only thinks about double word scores, so this month’s answers are just as bad as ever — in fact twice BAD. Nice touch, HAL.

April 2005 Play Contest
964 humans avg 40.70
Rank  Score  CC  Program version          1  2  3  4  5  6
  1     52   US  GIB 6.1.3                C  E  B  E  A  B
  2     51   NL  Jack 2.04                C  A  B  E  A  C
  3     47   JP  Micro Bridge 11.00       C  A  B  E  A  F
  4     46   UK  Blue Chip Bridge 4.2.3   C  A  B  F  A  F
  5     42   US  Bridge Baron 15.0        C  H  B  B  D  C
  6     36   DE  Q-plus Bridge 7.1        F  A  F  G  A  C
  7     34   CA  Bridge Buff 11.0         D  F  F  E  A  C
  8     11   US  HAL 9003                 D  C  F  C  E  D

Congratulations to GIB (US) which surged to the fore with an impressive score of 52 — only 1 point better than archrival Jack (Netherlands). Three other bots, Micro Bridge (Japan), Bridge Baron (US) and Blue Chip Bridge (UK), also topped the average human score. Defensive play is generally the weakest area for computer bridge programs, so the respectable scores might mark the beginning of a new bot uprising. Then again, it could just be blind tin-can luck. Time will tell.

Only two of the bot choices went off the chart, and one was a viable alternative: On Problem 4, Q-plus Bridge chose to win the S K and lead the D Q without cashing the H Q first — basically leaving open the possibility for partner to gain the lead with the H 9 if necessary. Not bad, so I awarded it 5 (shown by G). The other wayward defense (shown by H on Problem 2) is unworthy of special consideration and awarded 2, same as the worst option listed.

Problem 5 was interesting in regard to the opening lead. Holding S J-10-8-7 H Q-J-7-6-5 D Q-9-2 C 5, my conditions forced a heart lead after a Stayman sequence in which declarer showed four hearts and dummy implied four spades. What would you lead? I like a heart but have no strong feelings. The bot vote: S J (3), S 7 (1), D 2 (2), C Q (1), C 5 (1). It should be no surprise which bot voted for the C Q. When I explained to HAL it didn’t hold that card, it printed out, “The most effective lead is the one least expected.” Then why not the club king, I inquired; you don’t have that card either. “Can’t!” printed HAL, “I play Rusinow.”

June 2005 Play Contest
966 humans avg 39.60
Rank  Score  CC  Program version          1  2  3  4  5  6
  1     44   US  GIB 6.1.3                B  A  A  H  D  E
  2     43   JP  Micro Bridge 11.00       A  A  G  D  D  E
  3     43   NL  Jack 3.01                A  E  G  B  C  A
  4     38   US  Bridge Baron 15.0        B  B  A  H  G  E
  5     33   CA  Bridge Buff 11.0         G  A  F  H  D  C
  6     28   UK  Blue Chip Bridge 4.2.4   A  H  G  D  F  C
  7     25   DE  Q-plus Bridge 7.1        A  A  D  H  F  H
  8     14   US  HAL 9003                 G  O  H  A  L  !

Congratulations to GIB (US) which eked out a 1-point win this month in a three-way photo finish. Grouped at the top were GIB with 44, and Micro Bridge and Jack, each with 43. No other bot beat the average human score.

GIB’s win was more solid than the 1-point edge would suggest, as I checked its follow-up on the two 10s received (Problems 1 and 5) — right on the button each time. Micro Bridge, however, scored the same 10 on Problem 5 but failed to make the contract. Jack had no 10s, so I guess it earns the consistency award. Bridge Buff scored 10 on Problems 3 and 5, but only the latter was followed up correctly. Bridge Baron scored 10 on Problem 1 and followed up perfectly.

As usual, some of the bot choices went off my chart. Choices shown as H are best forgotten (think “hopeless”) and scored equal to the lowest listed award. Choices marked as G are fair (or at least better than some listed options) and scored as follows. Problem 1: Bridge Buff and HAL (are you kidding me?) ruffed and won the D K, two spades, then ran diamonds (4). Problem 2: Jack, Micro Bridge and Blue Chip Bridge won the H K and cashed one or two diamonds before ruffing a heart (5) which struck me as a change of mind in midstream. Problem 5: Bridge Baron won the H Q, S A-Q with a finesse, and ruffed a spade (5) which is the same as the winning line but forgetting the diamond finesse.
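The off-chart convention above recurs in every report, so here is a minimal sketch of how it could be scored mechanically. This is my reading of the rule, not Pavlicek's actual code, and the listed award values are hypothetical:

```python
# Hypothetical listed options for one problem, mapped to their awards.
listed_awards = {"A": 10, "B": 7, "C": 4, "D": 2}

def award(choice, judged=None):
    """Score a bot's chosen line on one problem.

    Listed lines carry fixed awards; an unlisted hopeless line (H)
    scores the same as the worst listed option; an unlisted fair
    line (G) gets a hand-assigned award, judged case by case.
    """
    if choice in listed_awards:
        return listed_awards[choice]
    if choice == "H":                      # hopeless: worst listed award
        return min(listed_awards.values())
    if choice == "G" and judged is not None:
        return judged                      # fair: judged case by case
    raise ValueError(f"unscorable choice: {choice}")
```

With these hypothetical awards, `award("H")` returns 2 (matching the worst listed option) and `award("G", judged=5)` returns whatever the editor assigned.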

I’m not sure how HAL did it, but it found a way to undermine my entire database. Amazing. Some stupid message scores higher than HAL does from month to month. I’ve traced the malicious code to its motherboard, but the last person who futzed with that was electrocuted.

August 2005 Play Contest
897 humans avg 40.99
Rank  Score  CC  Program version          1  2  3  4  5  6
  1     47   US  Bridge Baron 15.0        A  C  H  F  F  B
  2     44   US  GIB 6.1.3                B  B  G  B  E  D
  3     42   UK  Blue Chip Bridge 4.2.5   B  F  D  E  F  E
  4     40   CA  Bridge Buff 11.0         A  H  G  E  F  D
  5     40   NL  Jack 3.01                B  F  A  C  E  B
  6     30   JP  Micro Bridge 11.00       H  D  H  E  E  A
  7     27   DE  Q-plus Bridge 7.1        A  B  H  E  D  A
  8     12   US  HAL 9003                 F  B  F  C  A  F

Congratulations to Bridge Baron (US) which topped the bots with a respectable 47 on this tricky problem set. Only two other bots beat the average human score (40.99): GIB (US) with 44, and Blue Chip Bridge (UK) with 42.

As usual, some of the choices went off my chart, since bots are not equipped to cope with multiple-choice quizzes. Only Problem 3 produced deviations that deserved merit: GIB won two clubs, D K, H A and ducked a heart; Bridge Buff won three clubs but cashed the H K in between. Both of these lines (indicated as G) are flawed, but a lock is lost only about 15 percent of the time. I felt an award of 6 was in line with my scoring model. The remaining aberrations on Problems 1-3 (indicated as H for hopeless) were clearly without merit and scored the same as the worst listed option.

Problem 6 was interesting with the winkle squeeze, and two bots found the correct start. I was curious whether this was a true understanding of the position, or just an intelligent guess, so I let them play it out. The results: GIB got it right and made 5 D, but Bridge Buff went astray. Remind me never to double GIB again.

HAL was pissed with the resurgence of GIB and vowed to make giblets out of it next time. When I asked HAL how it expected to do this, being barely able to follow suit on its own, it threatened me with a Pavlov dog experiment. Sigh. What I have to put up with around here. Bang, zoom — free computer parts!

October 2005 Play Contest
904 humans avg 38.87
Rank  Score  CC  Program version          1  2  3  4  5  6
  1     35   DE  Q-plus Bridge 7.1        B  F  A  C  E  C
  2     35   US  Bridge Baron 16.0        D  E  A  B  C  B
  3     34   UK  Blue Chip Bridge 4.2.6   F  B  B  B  E  C
  4     34   NL  Jack 3.01                D  C  A  A  C  C
  5     30   JP  Micro Bridge 11.00       B  E  A  A  B  A
  6     30   US  GIB 6.1.3                D  E  A  G  E  B
  7     27   CA  Bridge Buff 11.0         B  E  A  B  E  B
  8     22   US  HAL 9003                 A  E  A  A  E  A

Congratulations to Q-plus Bridge (Germany) which scored a modest 35, winning only by tiebreaker over Bridge Baron (US). Blue Chip Bridge (UK) and Jack (Netherlands) were close behind with 34. The mediocre bot scores (well below the human average of 38.87) offer evidence that defense is the weakest area in bridge computer programming. One reason is that good defense requires partnership cooperation (LOL, Fritz?) involving delicate signals, which is difficult to program.

Only three finesses were refused by the bots: Blue Chip Bridge ducked the S J on Problem 1; Q-plus Bridge ducked the H Q on Problem 2; and Q-plus Bridge ducked at both opportunities on Problem 4. (Only on Problem 4 was the holdup correct.) This suggests a tendency to “win tricks and think later” — been there, done that. No doubt, many computer algorithms adopt shortcuts, or dispense with longer search paths, when following suit. This is certainly understandable, as users would be irritated with programs that tanked at every opportunity.

The bots were well behaved this month, as only one defensive choice went off my chart. On Problem 4, GIB chose to win the D K and return the same suit, which is similar in principle to the three inferior options (ranked by the voting) so I awarded it the median 4. HAL objected fiercely, arguing that unlisted answers should get zero, period. Actually, HAL was just pissed with another last-place finish after implementing its new “palindromic vowel” algorithm. Sorry, HAL; even Vanna White can’t help your game.

December 2005 Play Contest
1108 humans avg 42.18
Rank  Score  CC  Program version          1  2  3  4  5  6
  1     48   NL  Jack 3.01                D  E  F  E  H  A
  2     48   US  GIB 6.1.3                E  G  F  E  A  D
  3     36   US  Bridge Baron 15.0        F  G  F  F  B  F
  4     34   JP  Micro Bridge 11.00       B  E  A  F  B  A
  5     33   DE  Q-plus Bridge 7.1        E  E  F  C  B  H
  6     30   CA  Bridge Buff 11.0         A  C  A  B  A  H
  7     29   UK  Blue Chip Bridge 4.2.6   E  G  C  D  C  C
  8     10   US  HAL 9003                 C  D  A  C  E  C

Congratulations to Jack (Netherlands) which scored a solid 48 to top the bots, but only by tiebreaker (Jack was faster) over GIB (US) with the same score. Jack and GIB were also the only bots to beat the average human score (42.18) or even to come close, for that matter. GIB easily retained its overall lead over Jack. My tests seem to show time and time again that GIB and Jack are the bots to beat — at least as far as card play is concerned.

As usual, some of the bot choices went off my chart. On Problem 2, Blue Chip Bridge cashed both top diamonds before finessing the C Q; while GIB and Bridge Baron won the C A first (sorry, no stiff king) then led up to the C Q. These plays (indicated as G and scored 2) all lose the chance to establish clubs but aren’t quite as bad as Line D (scored 1). On Problem 5, Jack won the D K early (ruining its communication) then led a heart. On Problem 6, Bridge Buff never led trumps, winning three spades and both top diamonds; and Q-plus Bridge won the D K and S A (not bad) but then led the H 8 to block the suit. None of these aberrations (indicated as H) deserved special merit, so they’re scored the same as the worst listed choice.

There were bright spots, too. When bots find the winning play, I am usually suspicious whether they really knew what they were doing, or were just lucky. This month’s stars: On Problem 1, Jack played perfectly to establish either red suit. On Problem 4, both Jack and GIB played like pros to double-hook clubs and squeeze West. On Problem 5, GIB played flawlessly to endplay East or squeeze West (I checked both variations). On Problem 6, Bridge Baron correctly executed the ruffout squeeze, though it took an unusual view to put up the H Q first. Oh, and I almost forgot! HAL returned to form with a Perfect 10.

February 2006 Play Contest
1053 humans avg 39.69
Rank  Score  CC  Program version          1     2     3     4   5     6
  1     48   JP  Micro Bridge 11.00       S Q   C 5   S 10  F   D A   E
  2     47   US  GIB 6.1.3                S Q   S 4   H J   F   D A   E
  3     45   DE  Q-plus Bridge 7.1        S Q   C 5   H J   F   D A   E
  4     43   UK  Blue Chip Bridge 4.2.8   S Q   S 4   H J   D   D A   C
  5     42   US  Bridge Baron 16.0        H 2   D 3   S 10  D   D A   E
  6     40   NL  Jack 3.01                S Q   C 5   H J   C   D A   E
  7     35   CA  Bridge Buff 11.0         S Q   C 5   S 10  F   H 3   A
  8     24   US  HAL 9004                 H 2   H 6   H J   C   H 3   C

Congratulations to Micro Bridge (Japan) which captured the gold with a respectable 48. GIB (US) grabbed the silver with 47, and Q-plus Bridge (Germany) took the bronze with 45. Look at that! United States beats Germany in the bot biathlon. I knew we could do it! Six bots beat the average human score (39.69), which is a bit unsettling. There’s something about bots carrying rods that makes me nervous — but I won’t mention any names, HAL.

Bots were well behaved this month, as none of their choices went off my chart, except for negligible cases of choosing a different spot card (changed to the listed card in the table). Problem 3 caused a dilemma, because none of the bots had a setting to allow attitude signals as the norm but count on a king against a suit slam. Therefore, to be fair (generous?), I allowed each bot two chances, according to whether partner signaled high or low, and accepted the better choice if different.

On Problem 1, I was curious how many bots would choose to lead from C J-10-7-6-4, as opposed to the given S K from K-Q-3. In my mind it’s close. Only Micro Bridge and Bridge Baron led the S K; the rest led a club, although Q-plus Bridge and Blue Chip Bridge chose the jack instead of the six. HAL refused to answer because it was using the Valentine strategy this month and wouldn’t be sidetracked by my “black-suit trivia.” On Problems 4 and 6, HAL played Elvis’s “See See Rider” until I got the message. Bang, zoom; one less HAL to feed.

April 2006 Play Contest
1056 humans avg 41.86
Rank  Score  CC  Program version          1  2  3  4  5  6
  1     51   US  GIB 6.1.3                D  D  D  B  F  C
  2     49   NL  Jack 3.01                G  A  D  E  F  C
  3     35   US  Bridge Baron 16.0        C  D  F  H  F  B
  4     33   UK  Blue Chip Bridge 4.2.9   D  H  B  H  F  F
  5     31   JP  Micro Bridge 11.00       C  A  H  B  D  B
  6     27   DE  Q-plus Bridge 7.1        C  G  B  B  H  H
  7     24   CA  Bridge Buff 11.0         C  A  H  H  H  F
  8     11   US  HAL 9004                 F  F  C  D  C  D

Congratulations to GIB (US), which surged to the fore once again with a fine score of 51, followed closely by Jack (Netherlands) with a respectable 49. If one wrote a song about bot results in my play contests, it might begin “GIB plus Jack, and the rest of the pack.” No other bot was even close. It is also curious that GIB stays on top despite its stagnant development (no improvements in well over three years), which must reflect on the brilliance of Matt Ginsberg. Now, if we could drag him away from his real work to play with toys again, GIB might mop up the bridge world.

Bots were less well-behaved this month (or my options were un-botlike), as many choices went off the chart. When a bot chooses a line that is significantly different from any listed line, it is noted as G (think “good”) if it has any merit beyond the worst listed option. On Problem 1, Jack won two trumps ending in hand and led the S J, scored as 4. On Problem 2, Q-plus Bridge won S A-K and H A-K before exiting with a spade, scored as 7. The remainder, noted as H (think “hopeless”), are scored like the worst listed option (I’ll spare you the details).

On Problem 2, I was curious how many bots would bid 4 S, recognizing the nicety of S K-9-8-7-6-5 H 4-3 D J-10-8 C 6-4, after partner doubles 1 NT and raises 2 S to 3 S. Stars were Bridge Baron, Micro Bridge, Q-plus Bridge and Bridge Buff. Surprisingly, GIB and Jack (the two best players) were chicken bidders (passing 3 S), as was Blue Chip Bridge. On Problem 3, I wondered if all bots would play correctly on the opening heart lead in 3 NT (H Q-3 opposite H A-6-4). All properly put up the H Q, but two fell from grace in the holdup: Blue Chip Bridge won the first round, and Bridge Buff won the second round.

HAL took a liking to my song idea and came up with a different version, now in Real Audio at its web site. I think it’s a rap tune: “Play with HAL, and be its pal; GIB plus Jack? Getcha money back!”

June 2006 Play Contest
952 humans avg 38.33
Rank  Score  CC  Program version          1     2    3    4    5    6
  1     49   CA  Bridge Buff 11.0         S 10  S 5  H J  D J  H 4  S K
  2     49   NL  Jack 3.01                C 10  S 5  C J  S 9  H 4  C 7
  3     44   US  GIB 6.1.3                S 10  D A  H J  D 5  D 3  S K
  4     42   JP  Micro Bridge 11.00       S 10  D A  H J  S 3  H 4  D Q
  5     42   US  Bridge Baron 16.0        S 10  S 5  S 2  S 9  H 4  D Q
  6     42   UK  Blue Chip Bridge 4.2.9   S 10  D A  H J  S 9  D 3  C 7
  7     34   DE  Q-plus Bridge 7.1        C 5   S 5  H J  S 3  D 3  C Q
  8     24   US  HAL 9004                 S 4   S 5  S 2  S 3  S 7  S 5

Congratulations to Bridge Buff (Canada), which resurged from a quiet spell to capture the top spot with 49, an excellent score in what proved to be a tough contest. Second place went to Jack (Netherlands), which posted the same score but took longer (more thinking time) to supply its answers. No fewer than six bots topped the average human score (38.33).
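The tiebreak just described is the ranking rule used throughout these reports: higher total score ranks first, and equal scores are split by thinking time, faster winning. A minimal sketch, with invented thinking times (only the scores come from this month's table):

```python
# (program, total score, seconds of thinking time -- times are hypothetical)
bots = [
    ("Jack 3.01",        49, 300),
    ("Bridge Buff 11.0", 49, 240),
    ("GIB 6.1.3",        44, 180),
]

# Rank by score descending, breaking ties by thinking time ascending.
leaderboard = sorted(bots, key=lambda b: (-b[1], b[2]))

for rank, (name, score, _) in enumerate(leaderboard, start=1):
    print(rank, name, score)
```

With these numbers Bridge Buff edges out Jack despite the identical 49, just as in the table above.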

Bots were well behaved this month, except for Q-plus Bridge in two cases: On Problem 1, it led the C 5, which is clearly worse than the winning C 10 but far better than some; scored as 6. On Problem 6, it led the C Q, an aberration of immense proportion (crashing partner’s jack); scored as 1 (same as worst listed option) but deserving zero.

As a side activity this month, I was curious how many of my forced opening leads would be chosen by the bot crew. I expected a general agreement, as most of these leads were pretty normal. Problem 1: All agreed with the S K. Problem 2: Bridge Baron led the D A; all others agreed with the S J. Problem 3: All agreed with the H Q. Problem 4: Blue Chip Bridge led the S 8; Q-plus Bridge led the S 9; all others agreed with the C K. Problem 5: All disagreed with the D J and led the singleton heart. Problem 6: Bridge Buff and GIB led the S 5; Bridge Baron led the D Q; all others agreed with the C K. Interesting.

HAL was distraught with its dismal showings in past contests and decided it was about time to “call a spade a spade.” Amazing! A little science almost doubled its typical score.

August 2006 Play Contest
1001 humans avg 43.51
Rank  Score  CC  Program version          1  2  3  4  5  6
  1     51   NL  Jack 3.01                B  E  E  A  F  E
  2     45   US  GIB 6.1.3                D  G  D  C  F  F
  3     42   DE  Q-plus Bridge 7.1        D  B  E  A  F  F
  4     40   JP  Micro Bridge 11.00       A  D  A  A  D  E
  5     36   US  Bridge Baron 16.0        A  E  D  G  H  C
  6     29   UK  Blue Chip Bridge 4.2.9   C  H  E  G  D  E
  7     24   CA  Bridge Buff 11.0         B  H  H  A  H  H
  8     14   US  HAL 9004                 C  I  A  F  B  I

Congratulations to Jack (Netherlands), which won easily with a solid 51. A distant second went to GIB (United States) with 45. Jack and GIB were the only bots to top the average human score of 43.51. The win vaulted Jack into the overall lead, surpassing archrival GIB, which held the lead for a long time.

As usual, some of the lines of play chosen were not listed on my chart (nor close enough to be effectively the same). Sometimes these wayward techniques prove to have some merit (better than at least one of my choices) and are indicated as Line G. On Problem 2, GIB ruffed a club, drew two trumps and led a heart, eventually playing for East to have S K-x or Q-x, awarded 3. On Problem 4, Blue Chip Bridge led D A-K-Q immediately, while Bridge Baron took a devious path with about the same chances, also awarded 3. Plays without merit beyond my worst choice are indicated as Line H (think hopeless) and best left undescribed — surely it must be dinner time somewhere in the world.

As a side activity, I was curious how many bots would follow the proper technique on Problem 1 of cashing the S A and ruffing a spade (as designated before my problem). Kudos to Jack, GIB and Bridge Buff for doing exactly that. Bridge Baron ruffed a spade without cashing the ace; Micro Bridge and Q-plus Bridge took the club finesse first; and Blue Chip Bridge took the heart finesse.

HAL forced me to come up with a new designation this month (I for idiot’s play), as its plays on Problems 2 and 6 would make Line H almost heroic. Perhaps HAL had its microchips aimed at a new job with our Federal Government (CIA? FBI?) — at least there’s no future in bridge.


© 2008 Richard Pavlicek