2026-02-01_04-16-26
#essay #tech
Everyone seems to be sharing their opinions about AI[1] right now, so I think I'll do the same. My thinking has evolved over the years, and it might be interesting (or cringeworthy) to revisit this in the future and see how much has changed.
First of all, let's dispense with some common criticisms of AI technology. These things don't matter at all, yet seem to constitute the bulk of what people complain about.
The #1 complaint about AI is that it's inherently unethical because it's trained based on "theft". Copyright infringement is legally not theft.[2] Copying is not theft. Even if you for some reason think copyright is good, training resoundingly constitutes fair use. If copyright law were capable of suppressing a revolutionary emerging technology, it would just be yet another signal that copyright law is a dumpster fire.
Sure, an output could constitute copyright infringement. The same can be said of any other tool, including the humble pencil, depending on how it's used. Users might be unaware when they're infringing? That's not new; the insane minefield of copyright law has caught many law-abiding people unawares. Maybe it's different due to the scale? If AI enables large-scale violation of copyright law, that's fine with me. My real concern is the opposite: proprietary outputs.
There's a surprising amount of gnashing of teeth about generative art causing an art apocalypse. In this telling, generative art is not "real" art; it cannot genuinely constitute a form of human self-expression. It's just too easy! This ease is morally repugnant and will lead to the death of creative expression.
I don't even know where to begin with this; it's so ridiculous. People still paint even though they could just take a photo; they'll still paint and take photos even though they can just generate a picture with AI. Creativity is just a part of being human. Artists incorporate new tools into their work all the time, and nobody would bat an eye at skilled use of AI if they didn't feel it constituted some morally execrable stain on reality.
I find this particularly weird as someone who has been interested in algorithmic/generative art since long before the fancy new AI tools existed. All sorts of art gets swept up in the anti-AI telling as inferior, inauthentic, and spiritually harmful. They have a very specific idea of what art is: it has to be difficult and intentional. Found art does not qualify. Algorithmic art does not qualify. Even remixing does not qualify, thanks to a heaping scoop of intellectual property maximalism.
I don't particularly care what "real art" is, but it's obvious the answer does not matter to the topic of AI. It's just a subjective aesthetic judgment.
Another area of recent concern is the use of AI tools to generate nude/sexual images of other people without their consent. I don't see how this is fundamentally different than Photoshop (or a pen and paper, for that matter). There's no inherent harm in software making it possible to produce a naked picture; the harms arise when said images are used to sexually harass people, which has been done using images produced with Photoshop — or even real photos ("revenge porn") — for decades at this point. The problem is the sexual harassment, not the tool that produced the image.
Of course, AI does make the production much easier, which is a recurring theme. Certainly, it's disgusting how bad actors like Xitter have encouraged their users to engage in vicious harassment and then greased the wheels for them with AI. I guess I'm just not convinced that it fundamentally changes the game: Photoshop made it much easier to make realistic fake nude images, but I don't see Photoshop as a harmful invention, and I worry that overblown concerns about sexual obscenity will result in censorship. Techdirt recently ran an article about misleading reports on this.
Then there's the worry that AI will take people's jobs. That is not a crazy thing to worry about, as I discuss more in #Bosses. But I do feel strongly: if a job can be automated, it should be. I do not want to live in a world where people do menial drudge work just because we're afraid of machines. The washing machine eliminated jobs, and that is a good thing.
My biggest concern is one I rarely see addressed: as far as I know, there are zero open-source AI models, much less free (libre) ones. This may come as a surprise since many AI products are marketed as and widely referred to as "open". It's just a lie! It's openwashing. Usually so-called "open" models have a license that grants you the ability to use them but not much else, and it's impossible to do the equivalent of building them from source.
Even if you could theoretically build fake-open models from source, you wouldn't have the resources. The AI industry is heavily based on "hyperscaling": throwing massive amounts of resources at the problem. This creates a scenario just like "the cloud" or the computer priesthood of old: access to AI is gated by large corporations and limited to those who can fork over the cash to get in the door. This is utterly unacceptable to me. I will likely never pay for an AI product for this reason.
Thanks to the above, and the nature of capitalism, it is very likely that one or a few AI vendors will eventually achieve total dominance of the market through loss-leading, the first mover advantage, etc. When this happens, prices will go up and quality will go down. We've seen it over and over again in other industries.
!! Disclaimer: I'm not a lawyer and could be completely misinformed.
Not only are the models proprietary, there is a real risk that the outputs will be. So far, generative art does not generally qualify for copyright protection. If it did, someone could just generate millions of images, claim copyright protection on them all, then go around suing people. (Notably, this would produce effectively the same scenario many of the anti-AI copyright maximalists seem to want: copyright protection for style.)
Unfortunately, so far it seems like generated code is eligible for copyright protection. Hence, not only does AI make it easier to create something negative (proprietary software), it does so using free and open source code as part of its training data. This constitutes an act of enclosure: it moves code out of the commons and into a CEO's pocket.
I don't see a solution here. Training is fair use, so we can't write a new license to impose copyleft on AI models. Even if we could, so much code is licensed with permissive open-source (non-copyleft) licenses that it would probably not make much of a difference; they could just stop training on copyleft code instead of adopting its license. On the bright side, this means we can at least apply copyleft licenses to AI output.
More speculatively, if the tools become powerful enough, they could replace most of the software industry with bespoke personal software; in that world these concerns might seem quaint.
The most infuriating thing about AI, and probably the driver for so much rabid AI hatred, is that it's being widely deployed thoughtlessly in places where it's inappropriate, unwanted, and annoying. Major players like Cloudflare deploy code with security holes the size of the Sea of Tranquility. Computer novices submit gibberish software patches. Websites display LLM-generated summaries that are inaccurate and actively unhelpful. Apps that you have no choice but to use present a chatbot interface to waste as much of your time as possible before you can speak to a real person. Often, there is no obvious disclosure that a feature is based on AI, which is dishonest and misleading.
One example: I recently had to talk to a chatbot to get customer support. It quickly became obvious that it was LLM-based rather than one of the more tolerable phone-tree-style chatbots of old. I finally talked the thing into sending a message to the human support team... or did I? It said it had sent the message, but for all I knew it hadn't actually done so.
It's impossible to separate these issues from the capitalist landscape they take place in. Corporations tend to be short-sighted, willing to trade in their long term legitimacy and viability for a quick buck. They see AI as a way to cut down on labor costs, rather than a way to multiply productivity. Further, people in a company with the power to make decisions are often the least qualified to do so. They are just as likely to be impressed with the shiny new toy as to be hard-nosed cost-cutters.
AI could very well end up replacing most human service workers eventually (at least the jobs that don't require a physical body). Where would that leave those of us in a service economy? Yes, I said automating jobs is good, but that's only a small part of the picture. The larger picture is a heavily stratified society. Eliminating bullshit jobs is not a benefit if it means their workers starve. AI could very well be the catalyst for a social revolution: simultaneously putting us all out of jobs and showing how much more leisurely our lives could be if we had control over it.
That's an optimistic outcome, though, uncertain to come to pass. One that is certain is states using AI tools for oppression. People still speak of this in a hypothetical tone, but it's been underway for years. AI's capacity to sift through vast amounts of information makes it perfect for authoritarian states who want to identify dissidents.
At the time of writing, the federal gestapo has switched completely to AI-based facial recognition for identifying who's a "citizen" and who's not a real human. They disregard actual ID and force people to let them take a photo of their face, which is then run against a massive (and illegal) database. This is of course far from foolproof, but for their goal of social repression, it doesn't really matter. People don't seem to fully grasp how this works. I've seen lots of videos of people standing up to and supposedly scaring off ICE agents, when what really happened is they got the photo they wanted and then left. This is chilling to me: it is no different from "papers, please", but the average person might not even know their papers were checked.
Science fiction tropes are sufficiently embedded in the popular imagination that it's trivial for corporations and CEOs to imply their software works much differently than it really does or is much more significant than it really is. Some people seem to regard chatbots as infallible, perfectly rational oracles, and even believe they have independent sapient existence. "AI psychosis" is a classic example in this genre, but that's just the protruding tip of the iceberg. The deeper problem is lack of media literacy (and literacy, full stop).
Chatbots are prone to make shit up, and they're replacing search engines, forums, and Wikipedia as sources of information for a populace that is already very bad at assessing whether things are true or not. So-called "hallucinations" are epistemically dangerous even if they don't produce full-blown psychosis in the user. They can mislead you in subtle ways that you don't immediately pick up on.
AI tools have made it very easy to flood the internet with garbage, which you can tell by how the internet is flooded with garbage. They also make it much easier to launch social engineering attacks; they can even clone a loved one's voice. And of course, they make it much easier to write malware.
In a similar vein, abusive scraping to collect data to train AI models seems to be a widespread problem. Many projects that I otherwise like have implemented Anubis, a proof-of-work CAPTCHA, which basically runs a Bitcoin miner in your browser that doesn't generate any Bitcoin. As much as I despise this, I understand they're in an arms race with miscreants.
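For a concrete sense of how this kind of gatekeeping works, here's a minimal sketch in Python of the general proof-of-work idea; it's not Anubis's actual code or parameters, just the underlying technique: the server issues a challenge, and your browser has to burn CPU finding a nonce whose hash clears a difficulty threshold before it's let in.

```python
import hashlib
import secrets

def solve_challenge(challenge: str, difficulty_bits: int = 20) -> int:
    """Find a nonce such that sha256(challenge + nonce) has at least
    `difficulty_bits` leading zero bits. This is the work the client must do."""
    target = 1 << (256 - difficulty_bits)  # hashes below this value qualify
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty_bits: int = 20) -> bool:
    """The server's side of the bargain: checking a solution is cheap."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge = secrets.token_hex(16)   # issued by the server
nonce = solve_challenge(challenge)  # the "Bitcoin miner" part, minus the Bitcoin
assert verify(challenge, nonce)
```

The asymmetry is the point: producing a solution costs the client real compute, while verifying it costs the server almost nothing. That's a mild annoyance for a human loading one page and a real expense for a scraper hammering millions of URLs.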
This scraping arms race threatens the entire basis of the internet. When scrapers generally behave themselves, and servers follow standards instead of trying to cut off free access, everyone is better off; we can all access stuff easily and the occasional free rider (heavy bandwidth user) isn't a problem. But with enough free riders abusing the goodwill of servers, they are increasingly likely to break standards and cut off open access. The scrapers might get their data for now, but that data will all dry up. It will be siloed behind lock and key, and the open, interoperable internet will be dead.
It's unclear to me to what extent this is an unavoidable feature of AI development. Data in itself is not necessarily useful, and there are already means to obtain the majority of the internet's data affordably without overloading servers. The basic structure of things hasn't really changed: we always had the free rider problem and a tenuous armistice between servers and scrapers, and data has always had value. In many cases it seems like the scrapers are just misconfigured; probably deployed automatically, with AI, by someone who doesn't know what they're doing.
More speculatively, AI could empower people to commit biological warfare. Even an irresponsible but non-malicious actor could do a great deal of damage by generating novel viruses. I actually think this is all but guaranteed to happen in the digital realm: LLMs are very good at writing code, so it's not hard to imagine an LLM-based computer virus that can rapidly adapt to and overcome any security limitation it comes up against. It could just happen by mistake. People are giving these things full access to their system and network and letting them just do whatever.
Then of course, there's the singularity, evil AI taking over, etc. I wouldn't rule that out, but I don't worry about it too much. For all I know our new robot overlords would be better than the current human ones.
I'm not clear on exactly how harmful AI is to the environment, but it's non-negligible. I think this becomes even more of a concern when it's used for teams of coding agents — that just multiplies the overhead. I haven't messed with that, because I refuse to pay (also... I'm not a developer), but I have played with exe.dev's coding agent, Shelley (based on Claude Opus). It has this amazing iterative process where it checks its own work, notices what went wrong, and tries to fix it. It even examines screenshots.[3] Often if I watch this process, it gets stuck for a while iterating on a task that would be trivial for a human to accomplish: typing text in the right box, pressing the right button, etc. Every single iteration of this invokes a massive inference engine and is a complete waste.
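I have no idea how Shelley is actually built, but the general shape of this kind of check-and-retry loop looks roughly like the toy sketch below, with the model calls stubbed out. Every name here is hypothetical; the point is just that each pass through the loop is another round trip to a big inference engine.

```python
from dataclasses import dataclass

@dataclass
class Critique:
    looks_done: bool
    feedback: str

def propose_change(task: str, feedback: str = "") -> str:
    # Stand-in for an LLM call that writes or revises code.
    return f"code for {task!r}" + (" (revised)" if feedback else "")

def check_work(task: str, attempt: str) -> Critique:
    # Stand-in for running the code, taking a screenshot, and asking the
    # model whether the result actually matches what was asked for.
    return Critique(looks_done="revised" in attempt, feedback="wrong button pressed")

def run_agent(task: str, max_iterations: int = 10) -> str:
    attempt = propose_change(task)
    for _ in range(max_iterations):
        critique = check_work(task, attempt)
        if critique.looks_done:
            return attempt
        # Another full model invocation just to nudge a button press.
        attempt = propose_change(task, feedback=critique.feedback)
    return attempt  # give up and hand back the best effort so far

print(run_agent("type the text in the right box and press the right button"))
```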
I don't think existing AI is conscious, but I can't prove it, any more than I can prove other humans are conscious. LLMs only "exist" when they're generating replies, but is it really so far-fetched that sentience could exist in short bursts? I used to think the AI boosters were being totally ludicrous when they said AI consciousness and/or AGI is just around the corner, but I'm increasingly uncertain. These systems can be eerie, especially now that people are giving them persistent state (memory) and autonomy.
Yes, they "just predict the next token", but that is enough to mimic cognition surprisingly well. Even if a large language model isn't itself conscious, it could be a building block of a larger system that is. When I write, I can't exactly explain why one word should follow another. It just kind of... happens. Probably from a part of my brain that operates a lot like an LLM.4
I feel like not enough attention has been paid to the fact that we breezed right past the Turing Test a while ago. I mean, sure, long before LLMs existed, people were poking holes in the Turing Test, myself included. But I for one did not expect to ever have programs that could reliably pass it.
Consciousness is deeply mysterious, and until we have a better grasp of it, we should err on the side of caution. For hundreds of years, people denied that nonhuman animals have feelings, and it enabled all sorts of abuses. Much the same thing happened with white supremacy and other intra-human oppression. I hope we do not repeat those mistakes with artificial life. The ethical implications of an economy based on exploiting conscious AI are horrifying.
Whatever else there is to say about AI, whatever the harms and risks, one thing is for sure: these things are amazing. I think people have become a little numb to how incredible they are (possibly because of all the people going Wow! all the time). This is not a fad that will evaporate; it's a genuine technological breakthrough. While other applications are less certain, it has already ushered in a sea change in software development. We are already surprisingly close to the TNG future where you tell the computer what you want and it just does it.
One of the most unsettling things about AI for me is that it flies in the face of how I think of software. It does not proceed from point A to point B in a predictable manner. You can't even open it up and watch the code flow unless you're the AI equivalent of a neurosurgeon. This is the opposite of what I want from my software. I want programs that are utterly predictable to the point of banality, that never surprise me, and that I can peek inside and change.
There are growing pains, but there's a new normal on the horizon, where programming moves up another layer of abstraction above source code. We are approaching a world where we can simply tell the computer what we want and get it. This makes me nervous, but it could also be extremely cool. Let yourself dream a little. Yes, the world we're approaching will be different from our own, and in many ways worse; but it will also be better, in some ways we probably can't predict. In computing, moving up a layer of abstraction often brings with it immense power that is not entirely conceivable without it.
Some of us like to be lower down. We want to see the plumbing in action. But I'm sure glad I don't have to wire programs by hand.
I get it if you hate AI; I enumerated lots of harms and risks above. But there's a growing strain of anti-AI fundamentalism that worries me. Many people, especially liberals and people on the left, see AI as totally taboo and without any value whatsoever; they go so far as to use pre-emptive, tongue-in-cheek (?) anti-robot slurs. This builds on a longstanding strain of small-mindedness and hostility toward STEM.
I think this trend is quite dangerous. We are — again! — ceding science and technology to the right. How many times does this have to happen before we learn our lesson? The 4chanification of the Internet and election of Donald Trump ought to have been the last straw, but instead of seeing technology as ground to contest, we treat it like it's tainted and we don't want to get our hands dirty. Just think of what an ambitious punk could accomplish with a server and the power to generate any program they want.
AI is very empowering. The question is, for whom?
[1] I used to call it "'AI'", and maybe I still should, but for the sake of brevity and intellectual honesty I will omit the sneer quotes here.
[2] The conflation of copyright infringement with theft is a baseless smear by copyright cartels and is not recognized by the law.
[3] Examples of vibeslop I made with exe.dev: Telegram to RSS, which actually works fine and I use every day; Windows 91, my first experiment; and a copy of the ICE List wiki, which I threw up in minutes by asking Shelley to set up MediaWiki for me.
[4] Read Blindsight by Peter Watts: //www.rifters.com/real/Blindsight.htm