There is a certain kind of criticism I receive from time to time, usually on social media, sometimes beneath a blog post:
“This is just generic AI slop.”
It is often meant as a full dismissal. No further engagement needed. No interest in whether the underlying ideas are true, useful, or actually mine. The text has a certain shape, perhaps a slightly polished rhythm, perhaps too much structure for someone’s taste — therefore it must be empty.
I understand the suspicion. Truly, I do.
The web is filling up with hollow content at a frightening pace. Articles written to occupy search results rather than to say something. LinkedIn posts that simulate insight. Entire streams of motivational sludge, fake expertise, invented anecdotes, and bland “thought leadership” generated from prompts no deeper than: Make me sound impressive.
That is a real problem.
But it is not the same as what I do.
Yes, I work with AI. A lot.
I use AI constantly. In writing. In programming. In strategic work. In systems design. Sometimes to test an idea, sometimes to sharpen a thought, sometimes to accelerate execution.
For my blog, the process is often quite simple in principle: I bring the substance. AI helps shape it.
I may start with rough notes, a personal observation, a stream of thoughts, a few examples, and the structure of what I want to express. Then I work through it conversationally. I refine the angle, add context, identify stronger formulations, discard weaker ones, and eventually use AI to help produce a coherent draft. After that, I edit it again. I remove what sounds too generic. I add my voice where it is missing. I sharpen the thoughts where they have become too smooth.
That is not automation replacing thinking.
It is tooling compressing the distance between thinking and publication.
Why some of my posts may feel unusual
A recent example was my post about reading roughly 800 headlines a day.
A few people dismissed it as AI slop, or as blatant showmanship. Yet the underlying essay was not fictional self-mythologizing. It was a structured reflection on something I quite literally do every day: maintaining an unusually broad and deliberate international news diet, scanning a large number of headlines and teasers across regions, languages, and domains to keep a high-resolution picture of the world.
Could I have written that piece fully by hand? Of course.
Would I have had the time to write it in that shape while running a company, building products, refining my positioning, writing software, developing media formats, and keeping countless other systems alive? Probably not.
And that is precisely the point.
My life contains a drawer full of essays, observations, unfinished frameworks, old projects, and ideas I considered worthwhile but never managed to publish properly. Not because they lacked substance. Because expression is expensive. Turning a genuine thought into a readable article takes time, and time is always competing with creation elsewhere.
AI opens that drawer.
It allows me to externalize more of what already exists.
The difference between slop and assisted work
To me, the distinction is not whether AI touched a text. That will become an increasingly useless criterion.
The relevant question is:
Where did the substance come from?
There is a world of difference between these two prompts:
“Write me a blog post bragging about how I read hundreds of newspapers every day so I sound exceptionally intelligent. Make it appear realistic.”
and:
“Here are my notes on my actual daily news consumption, why I built this habit, what it gives me cognitively, where it may be excessive, and how I think about breadth versus noise. Help me turn this into a clear essay.”
The first is performance without reality.
The second is editorial assistance.
That distinction matters.
I do not use AI to invent a persona for me. I use it to make my existing thinking more legible.
I do not use it to fabricate depth. I use it to publish depth that would otherwise remain stuck in fragments, private notes, or half-finished drafts.
I do not use it as a ghost that speaks instead of me. I use it as a team member that helps me get from raw material to communicable form.
Much closer to a media team than to a vending machine
In media production, the person with the core vision does not usually do every single operational step alone. An idea becomes notes. Notes become a treatment. A producer, writer, editor, camera team, designer, or researcher may each shape part of the result. The final work can still clearly belong to the person whose thought, taste, and direction initiated it.
That is much closer to how I use AI.
I provide the conceptual architecture. AI helps with assembly.
Sometimes it is a sparring partner. Sometimes an editor. Sometimes a junior researcher. Sometimes an intern who hands me a first draft that is useful, but not yet publishable without review.
Anyone who works intensively with AI will recognize that some of my texts are AI-assisted. Certain sentence patterns, certain transitions, certain structural gestures can remain visible. I do not consider that a scandal. I consider it part of working with a still-young toolchain in public.
But I also do not publish blindly. When a draft feels too smooth, too abstract, too much like generalized “good writing,” I pull it back toward myself. I add specific examples. I reintroduce edge. I make sure it says what I mean, not merely something reasonable.
I work the same way in code
This is not limited to writing.
I use AI inside my IDEs as well. To draft functions. To scaffold classes. To generate repetitive structures. To explore implementation paths. To accelerate the boring parts, and sometimes to widen the option space when thinking through architecture.
But no serious developer would argue that every AI-assisted codebase is therefore fake software.
The responsibility remains mine. The system design remains mine. The review remains mine. The integration remains mine. The judgment about what belongs and what does not remains mine.
AI saves time. It does not absolve me of authorship.
The same applies to writing.
People have always used tools
Every generation normalizes the tools it grew up with and becomes suspicious of the next acceleration.
Spellcheckers did not end writing. Search engines did not end research. Digital photography did not end visual judgment. Code libraries did not end programming. Templates did not end design.
They changed the baseline.
AI changes the baseline more dramatically than most tools before it, because it operates unusually close to language, reasoning, and synthesis. That makes it powerful. It also makes it dangerous. It can blur fiction and reality. It can mass-produce pseudo-thought. It can lower the cost of deception.
So skepticism is warranted.
But skepticism should remain precise.
A text is not worthless because AI helped draft it. A text is worthless when there is no real thought behind it, no honest grounding, no meaningful judgment, and no one willing to stand behind it.
Even the older tech guides mattered
Some of my older technical guides may read as more recognizably AI-assisted than what I would write today. That is fair. My process has evolved.
But I still stand behind publishing them.
A guide on setting up Time Machine backups over Samba. A walkthrough of building a high-precision NTP server with GPS PPS. A reflection on rendering OpenStreetMap tiles over several weeks. These are not imaginary content farms. They document things I actually built, solved, configured, and learned from.
Could the wording be more distinctly mine? Yes.
Did AI make it possible to publish useful, technically solid articles without sacrificing days of time I simply did not have? Also yes.
And when people find those guides, use them, and solve their own problems faster because they exist, I consider that a net positive.
Not every valuable text needs to be handcrafted sentence by sentence in artisanal isolation.
Sometimes it is enough that a real problem was solved, the solution was documented responsibly, and someone else benefits.
My actual standard
My standard is not “no AI.” That would be performative and, for the kind of work I do, frankly irrational.
My standard is closer to this:
- The underlying idea, experience, or argument must be genuinely mine.
- AI may help me clarify, structure, expand, or compress it.
- Factual claims must remain grounded and reviewable.
- The final piece must express what I actually think.
- If the draft loses my voice, I take it back.
That is the line I care about.
Not purity. Integrity.
A tool for publishing more of what would otherwise stay invisible
I have always produced more thinking than I could formally publish. Notes. Systems. Half-written essays. Technical experiments. Strategic reflections. Observations that might matter to five people, or five thousand, but remained trapped because every finished article competed with more urgent work.
AI changes those economics.
It does not replace my mind. It reduces the editorial friction around it.
That means more of my actual thinking can leave the drawer. More of my experiments can become usable guides. More of my internal architectures can become communicable. More small but meaningful insights can enter the world instead of decaying in private files.
For me, that is not slop.
That is leverage.
And, used with care, it is one of the most beautiful things this new toolset makes possible.
This blog post has been written by me with the assistance of AI (GPT 5.5 Thinking).