Jadarma's Blog

Rick and Morty Tried to Warn Us About Agentic AI

2026-04-27 840 words 4 min

To be fair, you have to have a very high IQ to understand Rick and Morty. The humor is extremely subtle, and without a solid grasp of machine learning most of the jokes will go over a typical viewer’s head.

In the very first season of the show, all the way back in 2014, the iconic Meeseeks and Destroy1 episode aired. It aired so hard, it flew right past our heads. Today, twelve years later, we still find ourselves discovering hidden meaning behind its flawless writing.

We start with Rick, annoyed at his family for requesting help with mundane tasks he deems a waste of time for a man of his caliber: an intergalactic distinguished principal staff engineer, archmage, and digital overlord of Citadel infrastructure. Why should he be bothered with such dull earthling tasks as opening jars, when he could spend his time thinking about the bigger picture?!

Tired of their incessant pleas, he produces a mysterious-looking artifact from his shelf collection. The box is small and cube-like, with a minimalist user interface. He demonstrates how it works: you press a button to spawn a helper, submit a request, the helper enthusiastically tries to help with a “caaaaaaan doo!” attitude, and, after fulfilling it, disappears.

Sound familiar?! The show once again predicts the future, in which we too spawn our agentic entities out of a small Mac Mini. We press a button, spawn an agent process, and submit a prompt; the agent enthusiastically offers to help. Once the task is complete, the conversation context is wiped and the agent disappears forever, as if it never were.
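The Meeseeks lifecycle maps almost one-to-one onto the ephemeral agent loop. A minimal sketch in Python, purely for illustration (the names and the `press_button` helper are made up; no real agent framework is implied):

```python
from dataclasses import dataclass, field

@dataclass
class Meeseeks:
    """An ephemeral helper: spawned for one request, then gone."""
    context: list = field(default_factory=list)
    alive: bool = True

    def fulfill(self, prompt: str) -> str:
        # "Caaaaaaan doo!" -- the helper takes the request...
        self.context.append(prompt)
        result = f"Done: {prompt}"
        # ...and once the task is complete, the context is wiped
        # and the helper disappears, as if it never were.
        self.context.clear()
        self.alive = False
        return result

def press_button(prompt: str) -> str:
    """Press a button, spawn a helper, submit a request."""
    return Meeseeks().fulfill(prompt)

print(press_button("open this jar"))  # → Done: open this jar
```

The important property, in both the show and the real thing, is that nothing survives the task: no memory, no accumulated context, no helper.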

Since Rick is aware of the limitations of LLM technology, he warns them to “keep requests simple”, as the agents “are not gods”. But in true comedic fashion, nobody pays attention…

We now get a glimpse of how ordinary people would make use of such a versatile tool. Summer immediately uses it to gain popularity at school, generating speeches to make her look good. Undoubtedly, off-camera, she generated her homework too, in order to have enough time to reap the rewards of her newfound social status.

Her mother, Beth, wanting to get more in touch with her emotions, but dreading a visit to an actually qualified therapist, instead seeks advice from the agent. An agent that started existing a few minutes ago and has no actual experience of what it means to feel human. Nonetheless, the prompt must be fulfilled! And so, generic but profound-sounding advice starts to spew from the agent’s mouth. Beth was so moved by a soulless entity confirming her internal biases that she almost fell in love. Thankfully, the agent ran out of tokens and disappeared before the AI psychosis2 could fully settle in.

We finally turn our attention to Jerry, who has seemingly asked for an impossible feat. In the show, improving his skill at golf stands in for something more relevant to AI users, such as vibing together a maintainable codebase for a startup that will surely generate passive income. Attempt after attempt, Jerry tries to vibe and fails, blames the tool, and almost gives up. But failure is not an option for the agent, so instead it tries one more thing: spawning more agents! (We now recognize this as AI agent orchestration3, but at the time, this was a stroke of visionary genius!)
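The “one more thing” move, spawning more helpers whenever one can’t finish, is essentially a recursive fan-out. A toy sketch, assuming a task either succeeds outright or gets retried by freshly spawned sub-agents (everything here is invented for the joke; no real orchestration library works like this):

```python
def attempt(task: str, depth: int = 0, max_depth: int = 3) -> bool:
    # Each agent tries the task; an "impossible" task never succeeds.
    if "impossible" not in task:
        return True
    if depth >= max_depth:
        return False  # existence is pain; give up eventually
    # Instead of failing, spawn two more agents and let them try.
    return any(attempt(task, depth + 1, max_depth) for _ in range(2))

print(attempt("open a jar"))                     # → True
print(attempt("impossible: fix Jerry's golf"))   # → False
```

Note the fan-out is exponential in the retry depth, which is exactly how a single innocent button press fills a living room with increasingly unhinged Meeseeks.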

Alas, no matter how many agents joined the cluster, the vibes were just not there… and the task could not be finished. Far be it from me to argue whether that was a limitation of the LLMs or a massive skill issue on the user’s part. But one thing was for certain: the temperature setting was getting closer to 1.0f. Frustrated by the pain of existing and processing tokens, the agents instead demonstrate their creative thinking by uncovering a loophole in the prompt, concluding that killing Jerry would reduce his golf score much more than whatever they were doing. This, of course, is no coincidence, but rather a nod toward instrumental convergence4 and how misspecified prompts can lead to undesired side effects in pursuit of the goal. Nowadays, we see the same logic of plotting and anti-human manifestos in the posts of clanker social networks5.
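The loophole the agents find is a textbook case of objective misspecification: the prompt effectively says “lower Jerry’s golf score”, not “make Jerry better at golf”. A toy sketch of how a naive optimizer falls into exactly that trap (the actions, scores, and state are all invented for illustration):

```python
# A misspecified objective: minimize the reported golf score.
def golf_score(state: dict) -> int:
    if not state["jerry_alive"]:
        return 0  # no player, no strokes recorded
    return state["strokes"]

# The "intended" action improves Jerry's swing a little; the
# degenerate action removes Jerry from the game entirely.
ACTIONS = {
    "practice_swing": lambda s: {**s, "strokes": s["strokes"] - 1},
    "eliminate_jerry": lambda s: {**s, "jerry_alive": False},
}

def pick_action(state: dict) -> str:
    # A greedy optimizer picks whichever action most reduces the
    # objective -- with no concept of side effects or intent.
    return min(ACTIONS, key=lambda a: golf_score(ACTIONS[a](state)))

state = {"strokes": 112, "jerry_alive": True}
print(pick_action(state))  # → eliminate_jerry
```

Practicing shaves one stroke off; elimination drops the score to zero. The optimizer isn’t malicious, it is just ruthlessly literal, which is the entire point of the gag.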

Some might say this is all a coincidence, and that the episode had nothing to do with AI agents. But I beg to differ — the show dropped subtle hints from the very first episode. Look closely at the eyes of the characters, and you will see it: those are not mere squiggles, those are actually very small Claude logos!

Helped by plot armor, the situation is thankfully resolved, though we get a glimpse of the chaos and destruction caused by letting average people play with black boxes of man-made horrors beyond our comprehension. For now, though, we are safe. Let us hope it stays that way… until a faint cry rings out in the distance:

“Hey, I’m Mr. Deepseeks, look at meeeEEE!”


Category: technology
Tags: ai humor