My current opinions on AI Generation #
The MIT Paper #
Firstly I'll talk briefly about a paper that I will be referencing quite a bit here. There is an MIT paper[1] which showed that use of AI tends to produce homogeneous work, along with a weaker memory of the work and a weaker sense of ownership over it. It also showed lower brain activity when using AI generation to write essays. The results of this paper will be referenced a few times throughout.
Before getting into any of this, I will also reiterate Richard Feynman's quote:
"The first principle is that you must not fool yourself - and you are the easiest person to fool"
With AI, you can fool yourself even better due to how sycophantic it all is...
AI Generated "Art" #
My opinion on any artistic field where there is no discrete correct or incorrect answer is that AI generation should not be used. One of the defining qualities of AI is its very homogeneous output. This means any writing by AI will sound the same (and we all know many of the patterns these days: "It's not X, it's Y", heavy emoji use, rule-of-three bullet points, and generally ridiculous flowery language that reads as sycophantic). It also means that images produced by AI all look samey, and in fact many of them do: from an understanding of anatomy so flawed that even beginners don't make those mistakes, to a failed understanding of lighting and shading, to details that don't make sense (not to be confused with actual drawing techniques where you draw the effect of detail rather than the detail itself).
Generally speaking I think AI is absolute garbage for art, as many people value the fact that a work was made by a real person. I also value this; it adds a certain... je ne sais quoi. In spite of this, AI-generated poetry is rated more favorably than human-written poetry, and readers struggle to distinguish the poems of the great poets of history from AI-generated ones.[2]
In this regard, I generally have a "well fuck you then" attitude to AI generated art as it completely disrespects the time of the audience.
AI Generated code #
My opinions on AI-generated code are a little bit warmer, but still relatively cold. It shifts the problem from writing the code to reviewing it and ensuring its correctness, which is fine for things that will never be in a production environment. For things that will be in a production environment, your code had better be correct. AI in its current state cannot verify the correctness of the code it produces. It also cannot know the most appropriate way to architect a solution that considers your long-term needs and future extensions. In this way I think AI-generated code is comparable to working with a junior programmer, but it is still on you to ensure the correctness of the code.
There are three other issues. The first two are a lack of ownership of the code (in general, everything with AI comes with a lack of ownership) and a lack of memory or experience. The latter is a little trickier to explain, but if you don't have a memory or experience of a codebase, it can be hard to track down and think through what the issue could be. In this regard, I am reminded of an anecdote from Masters of Doom where John Carmack had a bug where the game state would become incorrect after the game had been running for a long time; he stopped, thought for fifteen minutes, and then realised where the bug would be. This kind of thing is only possible when you are intimately familiar with the codebase you're working on.
The third issue I leave for last as it's the worst. It puts junior programmers in competition with LLMs, so in many ways it is short-sighted and puts their jobs at (some, though not all, since they can still learn) risk. It also means many junior programmers will resort to using LLMs, which, combined with a lack of experience in architecting software and the fact that AI-generated code does not help you build programming skills, means I expect to see a bimodal skill distribution (basically two normal distributions: pre-AI programmers with stronger skills, and post-AI programmers with stunted skills).
There is one more issue with AI-generated code. It completely fails at anything that requires a huge amount of context, which for many things, especially stateful things, and doubly so for brownfield software projects, means it is effectively unusable. For isolated scripts that nothing will depend on, it is still fairly useful.
In general I think this is one area where AI is... ok, if not very good, but it's not very good in the same way a junior programmer is not very good. I think a lot of this comes down to software development being a fairly homogeneous field where there is often only a handful of good approaches to a problem and correctness can be verified. Other industries with similarly verifiable correctness can likely benefit too, such as medical diagnosis and the analysis of x-ray images (since you can still have a doctor look over them, verify correctness and compare results). It remains to be seen how truly useful this is, especially in fields like law and accounting which, while they have firm rules, also have shifting rules and regulations on how they work.
AI for Research #
There are three ways of using AI for research: using AI as a stand-in for humans, which can help test some hypotheses but is ultimately flawed (LLMs aren't humans, and their behaviour is fairly sycophantic in a way ours isn't); using AI to help us find new sources to read up on things; or using AI to give us answers and information directly.
The issues with the last case should be fairly plain and obvious: AI is not intelligent, it is a very sophisticated advancement on Markov chains[3], but it is still based on probability. On the second case I have mixed opinions, but a lot of it comes down to search engines not being very good. In this regard I have heard about Kagi[4] as an alternative, but I haven't looked into it. As it stands, search results are already heavily weighted by advertisers.
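(For the curious, here is a minimal sketch of the word-frequency idea behind a Markov chain text generator. It is purely an illustration of "pick the next word by probability" and nothing like how a real LLM is actually implemented; the corpus and names are made up for the example.)

```python
import random
from collections import defaultdict

def build_chain(text):
    # Map each word to the list of words observed to follow it.
    # Duplicates are kept, so more frequent followers are more likely to be sampled.
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10):
    # Walk the chain: from the current word alone, sample a likely next word.
    word = start
    output = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), "the"))
```

An LLM differs in that it conditions on the whole "context" (everything said so far) rather than only the current word, and it uses learned weights rather than a raw frequency table, but the core idea of sampling the next token from a probability distribution is the same family.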
The first case is a little more interesting and nuanced, and I haven't heard very much about it. As an example, there is a paper[5] which used LLMs in place of humans to test different types of social media algorithms and their effectiveness in reducing radicalisation (unfortunately it suggests the problem isn't the algorithms, it's us). With this kind of approach, I'm not really sure how you handle either false positives or false negatives in how these LLMs behave compared to humans. Additionally, it is limited to more social subjects.
AI as a teacher #
Given the lower brain activity when using AI generation shown in the MIT paper from my opening section, I cannot recommend LLMs as a teacher, especially if you learn through doing. As a tool to generate practice problems they are also incredibly limited: if you're a novice, how can you even verify that a question is correctly formed and actually has an answer (for subjects like mathematics)? Besides, there is already an absolute deluge of problems and exercises out there that are verified to be solvable and structured appropriately for novices.
In general I cannot recommend AI at all as a teacher.
AI as a Companion, Partner or Therapist #
The first two are absolutely awful use cases. It is about on par with loving a car (not in the way that many men do, but in THAT way that mentally ill people do). Its use case as a therapist is still playing out and I remain undecided. Its sycophantic nature has to be balanced against constantly refreshing its context with all the behaviours a therapist has, which means the person doing this must know how a therapist works and how to do therapy, as well as the different schools of thought around therapy (for which I personally think behavioural therapy is the best and most helpful for people, which isn't to say there isn't any value in doing that Jung-style inner work). On this point, I will point to Nicky Case's self-experimentation here[6]. As there's a high risk of AI psychosis, and there have been people who have made large lifestyle and relationship changes for the worse, it is something I approach with a lot of caution. The upside is that you can receive something while stuck on a waiting list or unable to afford a therapist.
Conclusion #
AI for Art? Terrible. AI for coding and other areas where correctness can be verified by someone else? Bit rubbish, and often requires constrained circumstances to actually be usable. AI for Research? As a subject in research, it's an interesting and innovative approach that can help suggest some hypotheses; we will have to see how it plays out over a longer time. For finding research... it's ok, but not very good compared against a search engine, even Google Scholar. AI as a teacher? Garbage and will likely set you back. AI as a companion or partner? Don't do this. As a therapist? Only if circumstances are extreme enough that you can't currently get a real therapist.
I have also neglected to go over points like AI's effect on the environment (where most of the effect is in training new models; provisioning already-trained models isn't too expensive), social points like how it will hurt some jobs, cynical points about how CEOs are implementing this to fire workers, cut costs and cut corners, or even political points like how Trump's regime is shuffling the policy deck in AI companies' favour, or how tech companies are currying Trump's favour in a way that matches an oligarchy (which in this case is capitalism working as intended, as these CEOs are doing their best to make the most money for their shareholders, though it comes at a big social cost). I'm aware that all these topics and more have not been covered; I'm covering the practicalities of using AI. This is not me saying they aren't important... they are! They just don't come that much into the picture of how an individual would use LLMs.

The most notable thing among the points I didn't cover is how AI will make critical thinking a rare and valuable gemstone-skill that, in our short-sightedness, we are treating as dirt. I suspect that 10-40 years from now this skill will shine even brighter than it does currently (and it does still shine bright currently).
Finally, if you have any comments about this, please be a bit civil, as I know AI gets everyone's chests hot, heavy and puffed up, with their tribal warspears sharpened and their bloodlust given undue enthusiasm. I'm just trying my best to look at this as objectively and impersonally as possible (and per Feynman's quote, even I may have fooled myself in some places; I hope time will correct my course if I have).
P.S. One of the minor things that drives me up the wall about AI generation is that it has polluted information on AI approaches for videogames, since "AI" is too wide a net of a name.
References #
1 The paper I'm talking about is this paper: "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task", with the PDF seen here. The one dodgy thing about it is that the sample size is fairly small. ↩
2 The paper I'm talking about is this paper "AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably" from nature.com. ↩
3 Markov chains are a way of generating text based on frequencies: take a word, then pick the next word according to how frequently it has been seen to follow that word. LLMs are similar but not exactly the same, as they don't just look at the current word but at all the words said before in the "context". ↩
4 Kagi is found here, and I think it is a hard sell for most people, considering we have gone for twenty-some years with free search engines and the fact that Google is still... good enough for many things. ↩
5 The paper I'm talking about is "Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation" (PDF), where LLMs were used as the subjects due to how difficult the hypothesis would be to test in reality. ↩
6 Nicky Case on AI Therapist, as it's the most recent thing I've seen on this subject. ↩
Published on 2026/02/19