dpa

Card players have to “know when to hold ’em, know when to fold ’em,” as singer Kenny Rogers put it in his signature version of The Gambler.

But that is unlikely to help if you are across a virtual table from an artificial intelligence (AI) poker shark, as the bots “have already learned how to deceive humans,” according to research published in the science journal Patterns.

Some AI systems “demonstrated the ability to bluff in a game of Texas hold ’em poker against professional human players,” the team found.

The poker-faced chatbots were not even the canniest encountered by the researchers, who found “the most striking example of AI deception” in Meta’s CICERO, a system designed to play the alliance-building and conquest game Diplomacy.

While straight shooting and fair play are not always part of real-life diplomacy – after all, negotiators acting out of raison d’état have often ruthlessly sought the upper hand by whatever means they can – CICERO has, according to Meta, been trained to be “largely honest” and to “never intentionally backstab” human allies.

But the researchers found that CICERO not only did not always “play fair” but had learned “to be a master of deception.”

“While Meta succeeded in training its AI to win in the game of Diplomacy – CICERO placed in the top 10% of human players who had played more than one game – Meta failed to train its AI to win honestly,” said Peter S. Park of the Massachusetts Institute of Technology.

Other examples of bots baffling their human rivals, according to Park and fellow researchers from Australian Catholic University and the Center for AI Safety, included Gulf War-style “fake attacks” in the game StarCraft II and efforts to “misrepresent their preferences in order to gain the upper hand in economic negotiations.”

The findings should serve as a warning that AI could destabilise societies, the researchers said. They urged policymakers not only to pass “bot-or-not laws,” so that people know when they are interacting with AI, but also to aim to make AI “less deceptive.”
