Artificial intelligence is smart, but does it play well with others?

Artificial intelligence (AI) has far outperformed the best human players of games like chess and Go. But for these “superhuman” AIs, collaborating with humans may prove harder than competing against them. Can the same technology get along with people? Researchers at MIT Lincoln Laboratory wanted to find out by having humans play Hanabi, a cooperative card game, alongside an AI model specially trained to play with teammates it had never met. In single-blind experiments, participants played two versions of Hanabi: one with the AI agent as their teammate and one with a rule-based bot, a bot programmed by hand to follow a fixed set of conventions.

The results surprised the researchers. Scores with the AI teammate were no better than those with the rule-based agent, and participants disliked playing with their AI partner, finding it unpredictable, unreliable, and untrustworthy. They felt negatively even when the team scored well. A paper describing the study was accepted to the 2021 Conference on Neural Information Processing Systems.

Ross Allen, coauthor of the paper and a researcher in the Artificial Intelligence Technology Group, said the study highlights the distinction between creating AI that performs objectively well and creating AI that is subjectively trusted and preferred. Although it may seem like these goals should go hand in hand, the study revealed that they are two distinct problems that need to be tackled separately.

The fact that humans can come to hate their AI teammates should concern researchers designing this technology to one day work with people on real problems, such as performing complex surgery or defending against missiles. This capability, called teaming intelligence, relies on a particular kind of AI called reinforcement learning.

A reinforcement learning AI is not told which actions to take. Instead, it discovers which actions yield the greatest numerical reward by trying out scenarios again and again. This technology is what has produced the superhuman chess and Go players. Unlike rule-based AI, these agents are not programmed to follow hand-written rules, because the possible outcomes of the human tasks they are slated to handle, such as driving a car, are far too numerous to code.

Reinforcement learning is a more general-purpose method of developing AI. An agent trained to play chess won’t be able to drive a car, but, as Allen points out, the same algorithms can be used to train a different agent to drive a car, given the right data. In theory, the possibilities of what it could do are limitless.
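To make the trial-and-error idea above concrete, here is a minimal tabular Q-learning sketch in Python. The toy environment, action names, and reward values are hypothetical illustrations, not the agents or tasks used in the study; the point is only that the agent is never told the right action and instead learns it from repeated numerical reward.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch: the agent is never told which action is
# correct; it estimates the value of each (state, action) pair from the
# numerical reward it receives while trying actions again and again.

ACTIONS = ["left", "right"]            # hypothetical action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

q_table = defaultdict(float)  # maps (state, action) -> estimated value

def step(state, action):
    """Hypothetical environment: reaching state 5 by moving 'right' pays off."""
    next_state = min(state + 1, 5) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == 5 else 0.0
    return next_state, reward, next_state == 5

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state, reward, done = step(state, action)
        # Nudge the value estimate toward reward plus discounted future value.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
        state = next_state

# After training, the learned policy prefers "right" in every state.
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(5)})
```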

Bad hints, bad plays

Today, researchers use Hanabi to evaluate the performance of reinforcement learning models developed for collaboration, much the way chess served as a benchmark for testing competitive AI for decades.

Hanabi is similar to Solitaire, but it is played as a group. Players team up to stack cards of the same suit in order. They cannot see their own cards; they can only view the cards of their teammates. Each player is strictly limited in what they can communicate to their teammates to get them to pick the best card to play next, as sketched below.
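The following simplified Python sketch (an illustration only, not the experiment code, and not the full Hanabi rules) captures the two constraints described above: players cannot see their own cards, and information flows only through a limited pool of hint tokens.

```python
from dataclasses import dataclass
from typing import List

# Simplified sketch of Hanabi's core constraint: each player holds cards they
# cannot see, and information is shared only through a limited supply of hint
# tokens that point out a color (or rank) in a teammate's hand.
# Card values and hand sizes here are illustrative placeholders.

@dataclass
class Card:
    color: str
    rank: int

@dataclass
class GameState:
    hands: List[List[Card]]   # hands[p] is player p's hand
    hint_tokens: int = 8      # shared, limited supply of hints

    def visible_to(self, player: int) -> List[List[Card]]:
        """A player sees every hand except their own."""
        return [hand for p, hand in enumerate(self.hands) if p != player]

    def give_hint(self, target: int, color: str) -> List[int]:
        """Spend one hint token to point out all cards of a color in a teammate's hand."""
        if self.hint_tokens == 0:
            raise ValueError("no hint tokens left")
        self.hint_tokens -= 1
        return [i for i, card in enumerate(self.hands[target]) if card.color == color]

state = GameState(hands=[[Card("red", 1), Card("blue", 2)],
                         [Card("red", 3), Card("white", 1)]])
print(state.visible_to(0))        # player 0 sees only player 1's cards
print(state.give_hint(1, "red"))  # indices of red cards in player 1's hand -> [0]
```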

The AI and rule-based agents used in the experiment were not created by Lincoln Laboratory researchers; both represent the best in their fields for Hanabi performance. In fact, when the AI model was previously paired with an AI teammate it had never played with before, the pair achieved the highest score ever recorded for Hanabi play between two unfamiliar agents.

“That was an important outcome,” Allen says. “If these AIs that have never met before can come together and play really well, then we should also be able to bring in people who know how to play well with the AI, and they’ll do well too.” The team expected the AI teammate to perform better, and expected humans to prefer it, since people generally like things that work well.

Neither expectation was borne out. There was no statistical difference in scores between the AI agent and the rule-based one, and all 29 participants indicated in surveys that they preferred the rule-based agent. Participants were not told which agent they were playing with in which games.

Jaime Pena, a researcher in the AI Technology and Systems Group and an author of the paper, said one participant was so stressed by the AI agent’s bad play that they got a headache. Another said they found the rule-based agent dumb but workable, whereas the AI agent showed it understood the rules but made moves that were not cohesive with how a team plays. To them, it was giving bad hints and making bad plays.

Human creativity

This perception that the AI makes “bad plays” links to surprising behavior researchers have previously observed in reinforcement learning. For example, when DeepMind’s AlphaGo defeated a top-ranked Go player in 2016, one of its most widely praised moves, move 37, was so unusual that many human commentators believed it was a mistake. Later analysis found the move to be extremely well calculated and described it as “genius.”

Such moves might be praised when an AI opponent makes them, but they are less likely to be celebrated in a team setting. The Lincoln Laboratory researchers found that moves that seemed illogical or outlandish were the most damaging to trust between humans and their AI teammate. These moves not only diminished players’ perceptions of how well they and the AI worked together, but also how willing they were to work with the AI at all, especially when any potential payoff wasn’t immediately obvious.

Hosea Siu, also an author of the paper and a researcher in the Control and Autonomous Systems Engineering Group, said there was a lot of commentary about giving up, with comments like “I hate working with this thing.”

Participants who considered themselves Hanabi experts, as the majority in the study did, were more likely to give up on the AI player. Siu sees this as a concern for AI developers, because domain experts will be the key users of this technology.

Suppose you train a super-smart AI guidance aid for a missile defense scenario. It isn’t something you hand to a trainee, but rather to experts who have been doing this for 25 or more years. If there is a strong expert bias against it in gaming scenarios, he adds, it is likely to show up in real-world operations.

Squishy humans

The researchers note that the AI used in this study was not designed with human preferences in mind. But that is part of the problem: few collaborative AI models are. Like most, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance.

If researchers don’t focus on subjective human preference, Allen says, “then we won’t create AI that humans actually want to use.” It is easier to work on AI that improves a very clean number; it is much harder to work on AI that operates in the mushier world of human preferences.

Tackling this harder problem is the goal of the MeRLin (Mission-Ready Reinforcement Learning) project, funded through Lincoln Laboratory’s Technology Office in collaboration with the U.S. Air Force Artificial Intelligence Accelerator and the MIT Department of Electrical Engineering and Computer Science. The project aims to determine what has kept collaborative AI technology from moving beyond the game space into messier reality.

The researchers believe that the ability of the AI to explain its actions will help build trust. This will be the focus of their work for the next year.

“You could imagine we rerun the experiment, but after the fact, and this is much easier said than done, the human could ask: ‘Why did you make that move? I didn’t understand it,’” Allen says. If the AI could give insight into what it thought would happen based on its actions, the researchers believe humans would come to trust it. Their results would change, he says, even without altering the AI’s underlying decision-making.

This kind of exchange, like a huddle after a game, is often what helps humans develop camaraderie and cooperation as a team.

“Maybe it’s also a staffing bias,” Siu says, laughing. Most AI teams, he notes, lack people who want to work on soft problems and squishy humans; they are filled with people who love math and optimization. “That’s the foundation, but it’s not enough.”

Mastering a game like Hanabi between AI and humans could open the door to a universe of possibilities for teaming intelligence. But until researchers can close the gap between how well an AI performs and how much humans like it, the technology may well remain machine versus human.
