AI Workslop is a new buzzword in corporate circles. As we always do, let us begin by defining the term.

AI Workslop Definition

AI Workslop is sloppy work that employees produce with AI and pass on – work that looks good, is properly formatted and grammatically correct – but doesn’t actually make sense.

You like the look of it – it appears professionally presented – but when you try to read or analyze it, you realize it’s sloppy work done by AI. Hence the term: AI Workslop.

The Social Media Example

If that sounds complex, let us simplify it with the example of social media. There was a time when one would read a post diligently because a human had put their time and effort (if not blood and sweat) into putting their thoughts into words. While some excellent writers, even today, write their own stuff and use AI only to add a touch of polish, there are about ten times as many who believe that writing anything themselves is a proper waste of time. So they toss a topic to AI and then copy and paste everything it throws up into a post that looks and reads much like the dozen posts you’ve just scrolled past.

This phenomenon is not limited to social media. It has found its way into the corporate sector, where it’s now earned the endearing nickname, AI Workslop!

What Causes AI Workslop?

Several companies are pushing their employees to use AI to work faster. (An increase in productivity improves the company’s bottom line, which in turn enhances shareholder wealth – an obvious indicator of top management’s performance.) Employees follow the dictum and use AI for their work. However, not all of them use it judiciously. Some “save time” by getting AI to generate good-looking slop, which they then send along. The recipient must now slosh through the slush and perhaps rework it so that it starts making sense.

[Cartoon: an employee being sucked into the whirlpool of AI Workslop created by another employee.]

According to a survey by Stanford University, 40% of employees said they had received Workslop – 40% say they received it from peers, and 16% say they got it from those who report to them.

Let me ask the question that’s dangling from the tip of your tongue.

“How long would it take for the remaining human creators to decide that they would be better off creating their own AI Workslop?”

Ill-Effects of AI Workslop

The following three things happen when we receive AI Workslop at work.

  1. We can’t decipher it, so we have to spend our own time to do the required research.
  2. Trust levels go down, so we become wary of anything that lands on our table (or in our inbox) from that specific employee. This means we double-check even the stuff that doesn’t require double-checking.
  3. We must make decisions that elevate our stress levels. We must decide:
    • Should I confront the originator of this document – and if I do, how do I get them to acknowledge the problem?
    • Should I rewrite it myself – and if I do, how much time will it take, and am I even the best person to do it?

So, what would happen if this was allowed to grow unchecked?

Long-term Impact of AI Workslop

  • Trust levels within teams will decline, and intra-team harmony will erode.
  • Those reworking AI Workslop will find themselves doing extra work – in effect, they are penalized for not creating workslop themselves. Operant conditioning tells us where this leads: the workslop producers keep producing it because the behavior costs them nothing, while the conscientious workers receive environmental feedback that producing workslop is acceptable. How that ends is anyone’s guess.

Containing AI Workslop

Train employees on:

  • using AI the right way for doing the right thing (use the RAD Method – detailed discussion in the book “For the Love of Instructional Design”).
  • engineering prompts that result in good, usable output.
  • reviewing the work of AI to ensure it’s useful, relevant, and focused.

And please don’t ask them to jump into the AI ocean without first teaching them how to swim in its waters.

Source: HBR