How gamification sparked the AI era in tech

A look at how the fun side of AI lit the fuse of the latest tech boom.

This is the time of the AI tech boom.

The AI market is currently worth $241.8 billion, according to Statista. For scale, AI already dwarfs Statista's estimate for the global movie industry ($79.22 billion) and is neck-and-neck with the video game industry ($282.30 billion).

Since the auspicious November 2022 day when OpenAI unleashed ChatGPT on the masses, AI products have been in the hands of consumers as both tools and toys. These products have sparked an entire era of tech that is even forcing Apple to say "AI," but the speed of this boom was propelled by waves of imperfect tools being honed in the public eye with varying degrees of success.

AI critics are rightly hard on the current crop of AI tech for its tendency to hallucinate or otherwise screw up. Ed Zitron, for instance, has dismissed the AI hype machine as "the theoretical promises of any number of monotonous billionaires that want to turn every website into fuel for a machine that continually gets things wrong." That's more or less fair, and I'm with Zitron that the public should demand more and better from this tech.

But something that always bears keeping in mind is that AI's errors and odd responses are super fun, and that understated fact has created a giant game environment where everyone with an interest in tech is playing.

Some of this "gaming" is literal, and these games lay out the basic framework of the broader, global game. For instance, I'm addicted to a literal AI game created by Google called "Say What You See." In this game — possibly intended more as "demo" than "game" — a little cartoon brain mascot prompts the player with AI-generated images, which must then be described in 120 characters or less. That description is then fed back into the image generator, and if the new generation is a plausible replica of the original, you get a passing score. It's a sort of visual, AI-mediated game of telephone.
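
For the curious, the loop itself is simple. Below is a minimal, text-only sketch of it; the helper functions, the word-overlap scoring, and the passing threshold are my own illustrative stand-ins, not anything Google has published.

```python
# A toy, text-only version of the "Say What You See" loop. The "image" is just
# a string, generate_image() stands in for a real text-to-image model, and
# similarity() is a crude word-overlap score in place of a perceptual metric.
# Every name and number here is illustrative, not Google's implementation.

def generate_image(prompt: str) -> str:
    # Stand-in for a text-to-image model; the real game returns pixels.
    return prompt.lower()

def similarity(a: str, b: str) -> float:
    # Jaccard overlap of words, a stand-in for image similarity (0.0 to 1.0).
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def play_round(target: str, description: str,
               limit: int = 120, threshold: float = 0.8) -> bool:
    """One round: describe the target, regenerate it, compare the replica."""
    if len(description) > limit:
        raise ValueError(f"description exceeds the {limit}-character limit")
    replica = generate_image(description)
    return similarity(target, replica) >= threshold

print(play_round("a wooden horse figurine on a stand",
                 "wooden horse figurine on a wooden stand"))  # True: passes
```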

The rules of "Say What You See" are kindergarten simple — the title contains all the instructions you need to play — and yet it gets demonically hard as it goes along, forcing the player to dig deep in their brain for words typically used to describe objects of beauty and transcendence, and cram them into a cold, emotionless machine.

It goes like this: You're shown an object you might call a "little brown horse" if you saw one on a shelf. But the prompt "little brown horse" will produce nothing close to the image you need. So your thought process starts to resemble the following: Is that a "wooden horse figurine on a wooden stand, depicted walking?" Is it actually loping? Technically is it tölting? I think it might be tölting, but does the robot need me to be that pedantic? Would the AI respond better to "statuette" than "figurine?" It did! Wtf? Why? Etc, etc...

Why do I find this game of trial and error fun rather than frustrating? After all, practical AI is perceived by its critics as dreary and designed to make us better worker bees, and to an extent, that perception is accurate. Having fun with AI should theoretically be hard, but fun isn't something that gets shoehorned into AI once in a while.

In fact, a form of gamification has always been front and center in AI's development, and it's been the secret to AI's viral success.

The AI game's Nintendo origins

Way back in 2013, computer scientist and YouTuber Tom Murphy VII, a.k.a. Tom7, a.k.a. suckerpinch, published a paper called "The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel . . . after that it gets a little tricky." In it, he showcased the use of classic Nintendo games as a benchmark for the AI agents of the time. Another way of putting this: he made computers play retro video games. Forcing these systems through a game over and over, Murphy demonstrated, can allow researchers to glimpse the capabilities and limitations of their systems.

It was far from the first time AI and games were combined (famous examples abound of scientists using computers to hack games like chess and Go), but it's an example of what I call AI gamification that helped set the tone for the current crop of uncanny, consumer-oriented AI applications.

Strictly speaking, the term "gamification" means ordering specific goals into a game structure, perhaps in order to make onerous activities more attractive, or to make social media apps more addictive. But the definition of gamification when it comes to AI should be expanded to include the tendency of a piece of tech to constantly turn into a game accidentally. Tom7 was using a computer system to play a game, but he was also gamifying his computer system — discovering the hilarious and mind-expanding trial and error game we're all playing more than a decade later.

As Tom7 explained in the YouTube video released with his paper: winning has to be defined numerically if you want it to be automated, which means it won't necessarily look like anything recognizable as "playing" or "fun."
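
To make that concrete: the paper's "lexicographic orderings" boil down to picking memory locations whose values tended to rise while a human played, then treating a byte-by-byte comparison of those locations as the score. Here's a toy sketch of just that comparison; the RAM contents and byte indices are invented for illustration, not lifted from Tom7's code.

```python
# A minimal sketch of Tom7's core move: define "winning" as a number. Given
# two snapshots of the console's RAM, compare only the bytes the system has
# decided matter (ones that increased during human play), in lexicographic
# order. The RAM values and indices below are made up for illustration.

def is_progress(old_ram: list[int], new_ram: list[int],
                ordering: list[int]) -> bool:
    """True if new_ram beats old_ram under the lexicographic ordering."""
    old_key = tuple(old_ram[i] for i in ordering)
    new_key = tuple(new_ram[i] for i in ordering)
    return new_key > old_key  # Python compares tuples lexicographically

old = [0] * 8
new = list(old)
new[3] = 2  # pretend byte 3 is a level counter that just ticked up
print(is_progress(old, new, ordering=[3, 5]))  # True: "winning" went up
```

Everything downstream of a definition like that is search: the system just hunts for button inputs that make the number go up, whether or not the result looks anything like play.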

This, in turn, causes the terms "play" and "fun" to slip into a special uncanny valley. Goal-oriented, procedurally focused machines will fail utterly at having fun, and the global AI game came about in part because this phenomenon is deeply, endlessly fascinating to behold.

Automated Mario intuitively finds shortcuts no human player would, but just as often falls into traps and detours no human would. After a short time, he can kill enemies with ninja techniques only a speedrunner would know, but he pointlessly breaks blocks, fixates on jumping backwards, and can't seem to get over a totally jumpable pipe. And when Murphy's system tries to play Tetris, it just racks up points by building a tower of pieces — the exact opposite of a good human strategy — and then pauses the game for eternity. To the computer, that's a win.

Any human's puzzled reaction to a computer failing to understand a video game — perhaps conveying the idea "try to increase your score slowly, over a long period of time, through strategy" — will inform the computer's next go, but it also forces us to define the whole point of gameplay, something that never needs defining for most humans. Fun, after all, is fun. Helping the computer improve its ability to achieve goals in games gives us humans an ever-changing funhouse mirror view of "fun." And that's also fun.

Gamification in the generative AI era

Trying to make a computer into a better player is a good distillation of reinforcement learning from human feedback (RLHF) — one of the bedrock techniques that make AI tech usable. As computer scientist Giovanni Mariotta explained in a 2020 Medium post called "Gaming ML/AI-based on Reinforcement Learning," reinforcement learning enables AI agents to learn optimal behaviors, and "helps humans," by putting otherwise human-generated decisions into what he calls an "automatic flow." A computer that's stumbling through a game can be seen, Mariotta notes, "identifying hidden patterns, and observing multiple state variables impossible to be detected easily by human beings."
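
RLHF proper adds a reward model trained on human preference rankings, but the kernel underneath is the loop Mariotta describes: try an action, receive a number, adjust. Here's a textbook toy version, plain Q-learning on an invented five-square corridor, not anyone's production system.

```python
# Textbook Q-learning on a made-up five-square corridor: the agent starts at
# square 0, the goal is square 4, and the only reward is 1.0 for reaching it.
# Purely illustrative; no real AI product trains this way at this scale.
import random

N_STATES, ACTIONS = 5, (-1, +1)          # corridor squares; step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3    # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore sometimes; otherwise take the action with the best estimate.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Nudge the estimate toward reward plus the best estimated future value.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda act: Q[(0, act)]))  # prints 1: "go right" wins
```

Swap the corridor for Mario's level, and the hand-coded reward for one inferred from human feedback, and the family resemblance to the systems discussed here becomes clear.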

But no actual game-playing is required to play the global AI game, because trying to make a putatively intelligent machine achieve any human goal, particularly a creative one, is a game. Watching the system break down in unexpected ways, failing toward the goal of "general intelligence," whatever that is, seems like it should be an unrewarding grind — like perfecting your pandemic-era sourdough by baking loaf after inedible loaf — but in the global AI game, each level has proven more entertaining than the last.

In 2018, for instance, Nvidia rolled out StyleGAN, which could create eerily accurate images of human faces. The viral website This Person Does Not Exist, created by software engineer Phillip Wang, simply produced and published human faces at the URL ThisPersonDoesNotExist.com — refresh the page; get a new human face with no actual corporeal form. Wang made it because he was worried about "how much people are in the dark about AI and its potential," he told Inverse.

On the face of it, This Person Does Not Exist was a warning about the magnitude of AI's success. The game, however, came along when someone noticed its failures.

"I Draw Side Demons" was a Twitter (now X) account focused on the ubiquitous visual artifacts in most "This Person Does Not Exist" images, which tended to be confusing fragments of larger objects apparently just to the person's side, but partially cropped out of the final image. With disturbing consistency, these partial objects looked like fleshy lumps of shuffled facial anatomy, suggestive of — as the title implies — some kind of horror movie demon.

The game for this user was to draw the rest of the demon that the original image only hinted at. The joy in viewing these images stemmed not just from the hideousness of the final image, but from the fact that it wasn't purely a product of the artist's imagination. It was as if the neural network that conjured the friendly human face had a sort of evil twin consciousness that was being suppressed. The anonymous side demon artist was just a midwife for the hidden evil that was there all along.

Gamifying AI for the betterment (hopefully) of humanity

At its best, the game at the center of the AI explosion is the jester in the court of the king, relentlessly and hilariously popping the hype balloon with each new step toward the supposedly inevitable future. Janelle Shane, an engineer and blogger, has been blogging about what she calls "AI Weirdness" for over a decade. In a 2019 interview, she told IEEE Spectrum why she felt it was important for AI commentators to focus on the weird — and mind you, this was three years before the release of ChatGPT:

"Many of the impressive examples of AI have a really narrow task, or they’ve been set up to hide how little understanding it has. There’s a motivation, especially among people selling products based on AI, to represent the AI as more competent and understanding than it actually is."

In 2019, OpenAI teased the public about GPT-2 being too dangerous to release, and gave ominous quotes to the press about it. "The rules by which you can control technology have fundamentally changed," said Jack Clark, who was OpenAI's head of policy at the time before leaving to co-found Anthropic. But then Clark and company went ahead and released it anyway. A game was clearly afoot.

GPT-2 wasn't packaged for easy public consumption, but I found a way to tinker with it online, and I quickly discovered that, yes, it was as creepy as its rollout implied. By that I don't mean it was unsettlingly human-like, or that the technology was creepily sophisticated — I just mean it kept saying scary things and talking, unprompted, about blood and death. In an effort to find out why, and because it was fun, I solicited stories from the public and received numerous unsettling examples from strangers who had played the game I had invented. Jack Clark himself told me I might be getting lots of violence in my results because "GPT-2 has read a non-trivial amount of fan fiction online [which] tends to involve lots of sex and/or violence."

Three years later, OpenAI released ChatGPT and, I probably don't need to tell you, the global AI game kicked off in earnest. The public found endless ways to probe the system's uncanny combination of brilliance and shocking stupidity. We invented jailbreaks to make it lie, say offensive things, or create blueprints for supervillainy. We were amazed by its ability to blurt out instant Shakespearean sonnets and Python code that actually worked. But we pushed it past its breaking point by asking it to do simple arithmetic.

Twitter (now X) was inundated with screenshots of ChatGPT results, some of which were impressive (at the time).

But just as many, it seemed, were less favorable.

But everyone was playing the game. All this engagement drew the world's attention to OpenAI, which attracted Wall Street interest, and before we knew it, the company was valued at $80 billion and its CEO was feuding with the most famous actress in the world.

Last month, Google — the mega-company that has emerged as OpenAI's fiercest competitor — added AI Overviews to millions of users' search results, and wound up getting a taste of the global AI game. Users quickly started generating search results pages with hilariously wrong answers: put glue in your pizza, eat rocks, that sort of thing.

In a blog post from late last month, Google wrote that "there’s nothing quite like having millions of people using the feature with many novel searches," and then pointed out what the AI game players were doing, mentioning "nonsensical new searches, seemingly aimed at producing erroneous results" — the essence of the AI game, in other words.

Business leaders are preparing the public for an era of unprecedented automation through AI, and in our economic system, businesses are not democracies. Most of the public doesn't get a say in whether this software makes incursions into the workplace. Worse still, the applications are flawed — sometimes critically flawed. The global AI game isn't a cure-all, but it gives me a glimmer of hope about the public having a say in how all of this AI software shapes up. After all, the public goofing on AI turned Google into the business world's punching bag for a week. That's not easy to do in normal circumstances.

Whatever your feelings may be about AI's "inevitable" rise, the global AI game is probably a force for good. Let's hope it is, anyway, because as long as AI remains this fun to engage with, it will never end.
