Using LLMs to Identify Harmful Thinking Biases #
I’ve been thinking about an interesting product idea lately: what if we could use large language models to help us identify the biases in our own thinking? Not to eliminate them entirely, because I don’t think truly bias-free thinking actually exists. Bias is subjective, after all; what one person considers bias, another might call common sense.
The real opportunity here is building something that helps identify the biases that are actually harmful to us, the ones that genuinely impede our thinking and decision-making process.
The Foundation: Learning from Established Research #
There’s a book, “The Art of Thinking Clearly” by Rolf Dobelli, that I read many years ago, and it really stuck with me. The sheer number of biases humans have in their thinking was eye-opening. Dobelli catalogs these cognitive biases with solid examples. It would make sense to use LLMs to identify similar patterns and generate a large set of synthetic training examples. The product would work like a graph system, similar to other decision-making tools I’ve been exploring, where each node represents a potential bias in your thinking pattern.
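As a rough sketch of that graph idea, here is how a single bias node and its synthetic-example hook might look. The field names, the example prompt, and the caller-supplied `generate()` function are placeholders I’m inventing for illustration, not the actual pipeline.

```python
# A back-of-the-envelope sketch of the bias graph, assuming each node holds one
# bias from Dobelli's catalog plus a prompt for generating synthetic examples.
# Everything here (field names, prompt text, the generate() hook) is illustrative.
from dataclasses import dataclass, field

@dataclass
class BiasNode:
    name: str                      # e.g. "confirmation bias"
    description: str               # short definition, paraphrased from the catalog
    example_prompt: str            # prompt used to ask an LLM for synthetic examples
    related: list[str] = field(default_factory=list)  # edges to other bias nodes

confirmation = BiasNode(
    name="confirmation bias",
    description="favoring evidence that supports what you already believe",
    example_prompt=("Write a short first-person paragraph in which the author "
                    "dismisses data that contradicts their existing opinion."),
    related=["availability heuristic"],
)

def synthetic_examples(node: BiasNode, generate, n: int = 20) -> list[dict]:
    """Ask an LLM (via a caller-supplied generate() function) for n labeled
    training examples for a single bias node."""
    return [{"text": generate(node.example_prompt), "label": node.name}
            for _ in range(n)]
```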
Why This Matters #
My work has been evolving more and more toward algorithmic decision-making, which is quite different from where I started. It’s becoming a fascinating series of exercises, and I feel there’s enormous potential in this space because, frankly, we don’t have good solutions right now. I’ve read extensively on this topic, and the gap is clear.
The Technical Approach #
The system would analyze your reasoning process and flag patterns that match known harmful bias categories. Instead of trying to make you bias-free (impossible), it would help you recognize when your biases might be working against your best interests. Think of it as a thinking partner that says “hey, you might want to double-check this reasoning because it looks like confirmation bias,” or whatever the case might be.
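To make the “thinking partner” idea concrete, here’s a minimal sketch of the flagging step, assuming some `classify()` function exists that maps a passage of reasoning to a (bias label, confidence) pair. The labels, threshold, and nudge messages are illustrative placeholders, not a finished design.

```python
# A rough sketch of the flagging wrapper. classify() is assumed to return a
# (label, confidence) pair for a single sentence; the nudges and the 0.7
# threshold are placeholders chosen for illustration only.
NUDGES = {
    "confirmation_bias": "you might want to double-check this reasoning; "
                         "it looks like you're only weighing evidence that agrees with you",
    "sunk_cost_fallacy": "past investment seems to be doing the arguing here; "
                         "would you still choose this if you were starting fresh?",
}

def review_reasoning(text: str, classify, threshold: float = 0.7) -> list[str]:
    """Split the reasoning into rough sentences, flag the ones that match a
    known harmful bias pattern, and return gentle nudges rather than verdicts."""
    flags = []
    for sentence in text.split("."):
        sentence = sentence.strip()
        if not sentence:
            continue
        label, confidence = classify(sentence)
        if label in NUDGES and confidence >= threshold:
            flags.append(f"Hey, {NUDGES[label]} (flagged: {label}, p={confidence:.2f})")
    return flags
```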
My Solution #
I started with a local LLM, built up a series of toy examples covering each of the biases highlighted in the book, and used that example database to retrain a small model. The retrained model acts as a classifier wrapped around another LLM. I’m not happy with the results so far, so I’ll need to revisit this project when the models get better, and expand the training dataset in the meantime.
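For reference, retraining a small pretrained model as a bias classifier could look roughly like the sketch below, using the Hugging Face `transformers` and `datasets` libraries. The file name, base model, label set, and hyperparameters are assumptions for illustration, not the exact setup I used, and the CSV’s label column is assumed to already hold integer indices into the bias list.

```python
# A minimal fine-tuning sketch, assuming a labeled file bias_examples.csv with
# "text" and "label" columns, where "label" is an integer index into BIASES.
# Base model, label set, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

BIASES = ["confirmation_bias", "sunk_cost_fallacy", "availability_heuristic", "none"]

dataset = load_dataset("csv", data_files="bias_examples.csv")["train"]
dataset = dataset.train_test_split(test_size=0.1)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(BIASES))

def tokenize(batch):
    # Pad/truncate so every example fits the same input length.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="bias-classifier",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"])
trainer.train()
```

The trained classifier is what gets wrapped around the other LLM: its output feeds something like the `review_reasoning()` sketch above.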
Moving Forward #
This ties into my broader exploration of decision-making algorithms. There’s something powerful about having systems that can help us think better, not by replacing our judgment, but by making us more aware of our own patterns. The technology is finally at a point where this kind of analysis becomes feasible.
The question now is how to build something practical that people would actually use. But that’s the exciting part of working in this space right now. We’re at the beginning of figuring out how AI can genuinely improve human thinking, not just automate tasks.