How To Tell If That Recipe You Want To Try Was Written By AI
There are many concerns about the sudden proliferation of artificial intelligence, or AI. Some people believe that our chatbots and large language models will gain sentience and turn against us. Other, more reasonable people believe that greedy CEOs will fire everybody they can and replace them with shoddy, environment-destroying robots so they can hoard ever more grotesque amounts of wealth. One somewhat more immediate concern is that AI has flooded our search engines with nonsense, making us sift through a bunch of computer-generated blather in order to find what we're looking for. Unfortunately, recipes are hardly immune — but you can keep your eyes peeled for signs that a seemingly cool new recipe is actually bogus.
The thing about AI is that it's not really intelligent. When you ask ChatGPT for a meatloaf recipe, it doesn't actually "know" that it would be delicious with a maple glaze. Instead, it's a souped-up version of the autocomplete function on your phone: It's been trained on huge amounts of text, predicts which combination of words would most likely satisfy your prompt, and spits it back out. In a vacuum, this is obviously impressive, but it is far from perfect. AI can be trained on faulty data, and if it can't find what it's looking for, it just makes stuff up (or "hallucinates").
This is how you get stories where an AI recipe tells someone to glue cheese to a pizza, or where an AI-generated mushroom guide ends up getting somebody killed. You should be on the lookout for bizarre flavor combos, such as angel food cake with mayonnaise frosting, or bad titles, like a "diabetic-friendly recipe" with lots of sugar in it. If something doesn't sound right, it probably isn't. Remember: AI doesn't have a mind of its own, but you do.
Beware of the uncanny
Sometimes, however, there aren't any obvious signs that a recipe is AI-generated. It's not telling you to put highly divisive Marmite on your ice cream, and it's not telling you to try some poison dart frog legs. So how can you tell? In some cases, you just get a gut feeling: The writing is too neat, too precise; it gravitates toward particular words, such as "delve"; and it overuses certain rhetorical devices, such as negative parallelism ("it's not X — it's Y"; if you read any AI-generated writing, you see this all the time). It's close to something a person might write, but there's something just a little bit off in a way that makes you uneasy.
The same is true of AI-generated pictures, which often accompany bogus recipes. AI images have come a long way since the infamous Will Smith spaghetti video, but there are still some tells. Does it look unnaturally glossy, like a magazine ad you might see in a dream? Are there strange, half-formed objects in the background? Does the fork look kind of weird? These are all signs that you're dealing with a phony food picture.
Of course, one of the biggest problems in our AI-saturated present is that it's getting harder and harder to know for sure. While the techno-evangelist promise of AI becoming indistinguishable from human work may or may not come to pass, the technology is improving: You can't just count the fingers in a picture to see if it's real anymore. Maybe one day, our gut check will fail us, and we will simply have no way of knowing what's real and what isn't — a state of affairs which will surely be cool and fun for everybody.