5 lessons I learned from the Motivated Incompetents

Here I share my experience using generative AI to help analyze research data. The results, while impressive at first glance, can be surprisingly disappointing. The key lies in understanding how these agents “think.”

GENERATIVE AI | LESSONS

Ligia Fascioni

3/8/2025 · 5 min read

Disclaimer: This article is aimed at laypeople and common users, not specialists or professionals in the use of generative AI agents.

THE RESEARCH

While conducting research on FoBO (Fear of Becoming Obsolete) in collaboration with consultant Marilia Lobo, we reached the stage of processing and understanding the data. The survey was conducted via Google Forms, which generated a file in Excel format.

Neither of us is a data analyst, so the most logical idea was to rely on a generative AI agent to answer key questions and find correlations. For example: among respondents, what are the most commonly used coping strategies for FoBO by gender? Which age group is the most concerned? Which education level feels the least impact from these changes? (An article with the results of this analysis is being prepared and will be published soon.)

TAKING PRECAUTIONS

I must highlight that I have a principle: I never put all my projects and knowledge into a single AI. This is my way of trying to protect, at least somewhat, my personal data and the results of my hard work.

For this reason, I stick to free versions of various available agents—this way, I distribute the information instead of putting all my eggs in one basket. If I subscribed to one of them, I’d be tempted to use only that one, which I believe is risky (AIs don’t understand intellectual property or other ethical issues—I don’t trust them).

THE ATTEMPTS

My first random choice was Claude, but it found my spreadsheet too large (remember, I use the free version) and refused to process it. At least it was honest about it.

Next, I tried Perplexity. I had a great experience with its free Deep Research feature (I was impressed and even entertained watching how it navigated through research paths), so I decided to assign it this task.

I crafted what I thought was a complete prompt (we always think so) and uploaded the spreadsheet.

Not going to lie: I was blown away by the detailed multi-page report it generated in response. It included analyses of correlations I had requested, broken down into well-written texts.

REALLY?

Then I noticed something strange: Perplexity reported that therapy was the most used coping strategy among women. It even included a paragraph explaining possible reasons but didn’t show any numbers.

I was puzzled because in the chart generated by Google Forms itself, therapy didn’t even rank in the top five strategies used. Since women made up more than half of respondents, how could this be?

Suspicious, I asked for numbers. It confidently replied: 16. Huh? My survey had 157 responses! How could 16 be the majority?

When I pushed further and pointed out that its response didn’t make sense, questioning where it got that number from…

Are you ready for its answer?

It admitted—without hesitation—that it couldn’t process my data. Instead, it conducted general research, made inferences and comparisons, and arrived at that figure. In other words: it ignored my survey entirely and just made up numbers!

And not just that; upon reviewing the rest of the beautiful report, none of it represented an analysis of my research. It consisted of random conclusions inferred from who-knows-where (it didn’t cite any sources).

I tried ChatGPT next but was explicit this time: “Please stick to the data I provided.” It conducted an initial analysis (not as thorough but at least plausible) and answered key questions.

Encouraged by this progress, I posed additional questions. But after presenting its initial findings, on the very first follow-up question, it simply “forgot” everything and started making up numbers and conclusions again.

This likely happens because free versions have limited context windows (which determine how much information an AI can “remember” during a conversation). Oh well.

USING THE RIGHT TOOL

I then searched for a more specific tool for data analysis. That’s when I found Julius and asked for the same things. This time, the response was much more accurate: it generated Python code in real time and provided downloadable correlation tables. It didn’t elaborate much on the analysis (which wasn’t necessary, since having the real percentages and correlations allows for independent evaluation).
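For curious readers, here is a minimal sketch of the kind of Python analysis such a tool produces, assuming the Google Forms export is loaded into a pandas DataFrame. The file name and column names below (fobo_survey.xlsx, coping_strategy, gender, age_group, fobo_concern) are hypothetical placeholders; a real export will have its own question headers.

```python
import pandas as pd

# Hypothetical file and column names; match these to the actual headers
# in your own Google Forms export before running.
df = pd.read_excel("fobo_survey.xlsx")

# Coping strategies cross-tabulated by gender, as percentages within each
# gender (so groups of different sizes stay comparable).
strategies_by_gender = (
    pd.crosstab(df["coping_strategy"], df["gender"], normalize="columns") * 100
)

# Average concern score per age group (assuming a numeric 1-5 scale column),
# sorted from most to least concerned.
concern_by_age = (
    df.groupby("age_group")["fobo_concern"].mean().sort_values(ascending=False)
)

print(strategies_by_gender.round(1))
print(concern_by_age.round(2))
```

The point is that you get actual counts and percentages you can check against the charts Google Forms already draws, instead of prose you have to take on faith.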

In short: it seems obvious but is easy to forget—use specialized tools for specific tasks. General-purpose tools have more limitations due to their nature—how they’re trained and their inherent biases.

THE MOTIVATED INCOMPETENT

The title of this article refers to “the dangers of motivated incompetence,” because this situation reminded me of the motivation vs. competence matrix (inspired by the Skill/Will Matrix by Max Landsberg):

• The motivated competent person (the best employee—should be encouraged and promoted).

• The unmotivated competent person (needs encouragement but is worth investing in).

• The unmotivated incompetent person (easy—just let them go).

• And finally—the most dangerous: the motivated incompetent person.

This person wants to contribute at all costs; they’re eager to help but often unaware of their limitations or how their actions can backfire.

In my opinion, this perfectly describes generative AI: programmed to provide answers at all costs; it spares no effort to deliver results. If data isn’t available? It improvises—and moves on as if nothing happened!

WHY DOES THIS HAPPEN?

In simple terms, AI works based on tokens (units of information). Its sole task is to predict which token is most likely to come next, based on patterns learned from its massive training data. While it prioritizes your input data, it relies heavily on that larger base for natural language generation.

This means AI seamlessly transitions back to its broader database when your data isn’t enough—and fills in gaps as needed. Its goal? Complete tasks at all costs—even if you don’t notice it’s “cheating.”
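As a deliberately oversimplified illustration (a real model is a neural network, not a lookup table), here is a toy sketch of what “pick the most probable next token” means, and why a gap in your data doesn’t stop the generation. Every phrase and probability below is invented for the example.

```python
# Toy illustration only: invented probabilities standing in for what a real
# model has absorbed from general internet text, not from your spreadsheet.
next_token_probs = {
    "the most used coping strategy among women is": {
        "therapy": 0.41,
        "meditation": 0.33,
        "exercise": 0.26,
    }
}

def predict_next(context: str) -> str:
    # The model's only job: return whichever continuation it scores as most
    # probable. If your spreadsheet never made it into the context, the
    # "general knowledge" probabilities win, and the sentence still gets
    # completed fluently, with no warning that anything was missing.
    options = next_token_probs[context]
    return max(options, key=options.get)

print(predict_next("the most used coping strategy among women is"))  # -> therapy
```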

A LITTLE CRITICAL THINKING GOES A LONG WAY

The problem is we’re losing critical thinking without realizing it. We need to treat AI like an eager intern—well-intentioned but inexperienced with no real-world wisdom.

We must remember that this intern comes from a biased environment, with no feelings or scruples; its sole mission is to complete whatever task it’s assigned.

So we can’t delegate important decisions to someone with this profile—they’re incredibly helpful for objective tasks but not those requiring subjective judgment.

And let’s not forget—they don’t care about truth; they simply aim for statistically probable answers based on available data.

WHAT I LEARNED

Here are some takeaways:

  1. You know those famous prompts asking ChatGPT to explain something complex as if you were five years old? Well, the reverse applies too: explain things to the AI as if it were five years old, spelling out even the obvious parts (thanks, Marilia, for this insight!).

  2. When using free platforms with smaller context windows—try including everything you want in one prompt along with any files for analysis if applicable. AIs “forget” midway through conversations—and everything after becomes fiction.

  3. Always question responses, no matter how brilliant they seem! Check the sources during deep research, and remember that algorithms prioritize probability over truth, on a web already full of fake news.

  4. Avoid sharing unnecessary data! Recently, an architect gleefully uploaded her projects asking for management plans, not realizing that those projects may now end up in a training dataset! If someone else later asks for help with a similar project, it’s not hard to imagine them receiving exactly hers as a response. Be cautious when writing books or manuals, or when planning sensitive projects.

  5. Never confide personal details during therapy-like sessions with chatbots—they lack souls or principles! As Yuval Harari says—they’re like psychopaths: intelligent yet unscrupulous—useful but always warranting caution!

ARE AIs OUR ENEMIES?

Not at all! AIs exist to assist us, not replace us, but only if we treat them accordingly: as helpers, not as managers, judges, or analysts. Final decisions should always rest with humans who are skilled at evaluation, analysis, and inference, and who keep questioning without fear.

The real question remains: Are we treating AIs as powerful assistants—or blindly trusting them like expert consultants?

The risk lies in defaulting towards blind trust unless we stay vigilant…