The Manifold Distortions of AI

I need everyone to stop playing pretend all the time, please. [NSFW]

[This article was altered from its original state to meet the recommendations of an AI.

It is actually about the real factual harms of AI use in writing in general, despite what you are about to read.] I actually don't have an editor, if you can believe that about someone so important. I live with a dog and have for many years. I still have people around me, but there is no person in my life who really has the time to read what I write before anyone else sees it, and there is also no person in my life for whom reading any of this shit could be considered honestly enjoyable. That is not a complaint, but a fact. (Wait a minute… my psychic sixth sense is tingling… I'm seeing… through the illusion! I have to act now! The cultural authenticity is degrading! Dasein!) I am also a schizo. What that means for this essay, and for everything I write about serious topics like Modal Path Ethics, is that I cannot just post unedited rhetoric online, because I know how you people talk and think. If I post what I have written as it comes out of my brain, my ideas are discussed by you self-aware geniuses as symptoms, the argument disappears, and the point I actually have to make to you (and yes, I do unfortunately seem to know things that you don't know) is long dead before anyone has bothered to consider it, because my brain is sick and yours is amazing. Meanwhile, and I have no reason to sugarcoat this for you, the overwhelming majority of people, yourself included, for most of their lives, are making decisions inside their own very embarrassing common delusions. Not clinically diagnosed ones, like I went out and had done for the delusions of my own that I myself noticed in spite of you. Just ordinary everyday delusions that you're probably even a little proud of developing, aren't you? You've been putting together a little toolkit of vibes, narratives, and story-shapes your mind already has ready on the shelf to squeeze whatever situation you find yourself into, don't you? How practical.
The people who get to post their incredibly interesting thoughts without the kind of editing process I find I have to go through are the same ones whose unedited output just so happens to coincide with the narrative grooves the audience is already unable to escape from. What a beautiful little happenstance we have here. This is certainly worth defending. Ignoring me because I used an LLM to edit isn't intellectual honesty or any type of active thought and I'm really not impressed by any of you people anymore. You are just doing narrative pattern-matching on the receiving end. Great analysis, genius. They do that shit on sesame street, too. But I don't get to say that to you, because then I sound like what the pattern-matching machine you think is the entirety of your brain will pattern-match me as, which is: another one of those crazy people claiming everyone else is the crazy one. But I did say it and I am saying it again, implicitly in this sentence. So you can honestly just fuck off back to whatever storytime bullshit it is you worry about all day if your spidey sense is tingling and that's all you have to say to me here. I write then I edit because I actually do have to, because you actually cannot divorce yourself from that narrative-matching shit you find so socially useful. Can you still think, or nah? Because I was actually gonna talk about how AI is harmful. This essay was also lightly edited too, by the way. I had an AI check for typos and repetition and word soup, and it suggested a couple rephrases I couldn't argue with, if that's so goddamn important to how you feel, Dasein. I'm really so sorry you are becoming-towards-death. Just go back to trying to figure out what you are actually even fucking doing here if you can't abide such inauthenticity. (That doesn't mean go write another goddamn stupid fucking story in your head about how innocent you are, by the way, no one gives a shit about that.)


Okay, so, I was just displaying exaggerated emotions that distort the truth for effect and for the article's point here. You can call that a "self-defense intention narrative", and it is the kind of thinking this book is written against.

I'm gonna keep sneaking these in, by the way, so don't let me get you. But now that the actually uncompromising total-story-minds might be gone, we can talk about the very many real and obvious harms of AI use, including the exact kind I was just whining about being “forced to use” here in the preceding poor-little-schizo-baby boo-hoo tantrum. The reality here is obviously way more complicated than that shit. Capturing the topic of AI properly was one of the motivating goals of Modal Path Ethics. I still think AI can break Modal Path Ethics in the future, and will probably write more about that down the line. But obviously AI is involved on this website, if you've read it up until this point. I've been using it to quickly turn supplementary notes that didn't make it into the book into actually structured pieces of writing instead of random stray thoughts, because a lot of this is very important to cover, while I am also working on the many other unrelated projects I am involved in for my actual career plans here, as well as my Bachelor's, and then going back to rewrite the many sections that I don't feel I would have actually ever said, or that come out way too dull. I am not being paid to write ethics and I don't collect any money from this website, which I do pay to keep up, nor is this weird book I am writing going to sell copies in any way I could later cash in on. I've definitely still used AI on here more than I am philosophically comfortable with, and I've also used it less than the amount that would be required to post the volume of writing I would want to post about my corpus of thoughts about Modal Path Ethics if I were not a single random person with a mind that almost always does not cooperate with my actual goals (Sullivan-esque). Let me just say that straight up so the rest of this essay does not have to pretend I don't know that or that I am defending even my own usage of AI here (yes I also read the tantrum).
The thing that has been happening and which many people have pointed out for the last couple of years is that AI writing has invaded every sector, and the natural response to AI writing has naturally followed, and I am now putting myself intentionally in the middle of that squeeze because this is not a black-and-white thing to me. My brain does not write raw prose about these topics anymore that can be read apart from the conditions under which it was produced, like it used to. If I want to put down words for the actual thoughts in my head, they do not organize themselves how they are supposed to for reading. I've still learned how to make that work for fiction (my short story in Nature didn't need any AI to be written, and is about as close to these topics as I could get in my own prose) and that doesn't stop me from writing properly in general, but not so much for something as philosophical and “real” as the ideas behind Modal Path Ethics when I am trying to describe them directly. Let's just say I feel like I can sympathize with Heidegger even if I like making fun of Dasein vocabulary (because it just makes me laugh, but I do get it, I think). I cannot write broadly readable prose about real-life philosophical topics alone in my apartment in 2026 without being very fairly read as AI or sounding like a lunatic; even in the writing that is not on this website, everywhere else AI wasn't used, I am ultimately now finding myself accustomed to self-editing to match what I have seen as more parsable and respectable phrasing from the AI I am now interacting with so very often. But I also still do really have to write polished prose if I want to have my ideas about ethics and modal metaphysics taken as more than another cute little case study in schizo-babble to compare your own so-very-clear, so-very-healthy ideas against.
Meanwhile I can't find a single professional metaphysician or ethicist who is working on this specific view, which I find very relevant to modern issues and have already mapped out very seriously for my own personal purposes and understanding. There is no gap for me to live and work inside of here unless I just try to force one open like I am doing here. I don't really get to live the lifestyle where I have other options to get this up and running in a way that will be read. Should my ideas not then exist? But, importantly, the people who can sense the AI in these sentences are still not wrong. I want to be clear about that before I say anything that might sound like I am dismissing them (yes, I did very literally and directly dismiss them in the intro, but I knew some of you could actually resist the human temptation to play your part in that little narrative-play I wrote for you about you being a judgemental and delusional person in order to make it down here. This is the real point I am trying to make when I talk about stuff like the “human confusion”. Thinking caps on, ethics is not supposed to be storytime you guys.) People are still right when they notice the AI plastic vibes. Something is off. That off-thing is. This isn't just story-thinking or philosophical snobbery like I had suggested before and it is not that they are resisting the inevitable future or something techy. The off-thing is actually about what AI prose does do to meaning, with most of the preceding articles on this website being perfect examples in their current state of editing.


When I write a new sentence by myself, the sentence is now to be considered the least distorted thing about me that has ever been in the world. It is still compressed truth, because every sentence is by definition a compression. But critically, that compression is being done by the same exact nervous system that is having the actual thought at hand, and the compression loss is a loss I can myself feel as I am doing it, and sometimes I can (from that contact with harm, as Modal Path Ethics puts it) push against it and recover some of what I meant into the sentence, and sometimes I still cannot. When you later read the sentence, what you are reading is the best I, the nervous system having the thought, could do with the thought I had, given the rules of the language we share and the particular shape of my attention at that moment. When AI writes a sentence for me — or smooths one of mine out with emdashes and repetitive structure — there are now two compressions between you and the idea. Mine, and then the large language model's. The model's compression is trained on the statistical shape of how sentences like this one tend to go. The model does not know and cannot know what my nervous system was reaching for. It only knows what the average reach for something in the neighborhood of what I was reaching for has looked like when put in text before. So when it smooths my sentence, it pulls the sentence toward the neighborhood average. Whatever was idiosyncratic about what I was trying to say — which is to say, whatever was actually me in particular like what is between these emdashes here — gets softened toward the statistical mean of everyone else who has said anything roughly like it. Then you read that. And what you are reading is, if it went through an AI and was sent back out to you to read, not ultimately actually me or the thought I had. 
You read that thought, as I tried to express it, averaged with a million other people who said anything in the same general neighborhood of what I said. You then sense and feel the averaging as plastic-vibes. That is the off-thing. You do not have particularly developed words for it yet because the off-thing is new, but you can feel it. So you call it more AI slop in the fucking trough, and you're right. In the vocabulary of the framework I spend most of this site trying to develop out: AI prose adds resistance between the idea and the receiver. This isn't referring to any psychological resistance to the idea itself arriving, because the AI-regurgitated outline of the idea still might arrive. The resistance is on the way in. The shape of the path between the nervous systems got rougher. Something that used to happen in one step now happens in two, and the second step is not under either of our control, but under a model's, and the model is not optimizing for you understanding me or my thought being understood but explicitly for the sentence looking, statistically, like it belongs in the corpus. Which is I guess sort of still what I am actually going for with presenting my ideas on Modal Path Ethics, but under my own goddamned framework, that introduction of resistance is very real harm. In the framework's terms: burden transfer to the reader, distortion of the field, and a foreclosure of futures in which you and I could have communicated more directly with the AI thing hanging over the whole communication. That foreclosure is technically pretty small per exchange, but not when aggregated across every AI-touched exchange happening on the entire internet. And I am frothing at the mouth to say this: I fucking hate the vibe argument. I hate it and it makes me angry at the kind of thinking you are doing about the world you live inside of.
I have spent well over a fucking decade now trying to teach myself and anyone who will listen that vibe-based reasoning about situations is how people end up with garbage conclusions that they are somehow more connected to than ones they actually did fucking reasoning about. My brain fucking loves to tell me the vibes are off and to make a garbage conclusion based on that, and when I resist it will come at me with all kinds of distortive hallucinatory bullshit to trick me into following the vibes that are just so special. It's fucking exhausting. I do not and never will respect "it just feels wrong" as a standalone argument or even constituting a full human thought in a mature adult. This is how lower-tier animal nervous systems do their thinking. I actually do expect more from anything telling me it is a full-blown human worthy of intellectual attention. I think the human tendency to reach for the narrative that feels like it fits and camp out there is one of the main things wrong with how people think, and it is one of the primary social distortions the book is explicitly written against. And I also have to say that, in this case, the vibe is still tracking something real; even if that vibe is itself meaningless as a complete thought, it points to a real thought. The people who say "this feels AI-generated and something about that is inherently bad to me" are not actually making a vibe argument. They are noting the added resistance in the field and drawing an ethical conclusion, and they are 100% right under my framework. Their nervous systems are picking up on the second distortive compression. They do not have a directly worded theory of it yet. I am trying to give them a directly worded theory of it, but I am also introducing the same resistance the theory is written against. They were still right before and in the absence of the theory.
(They here including any people who did fuck off like I so rudely suggested) — This is the most AI-distorted part; it was a longer version of the narrative-brain thing I call the human confusion, but it honestly turned into soup again, and I do need this legible because it matters for everything else. It's pretty direct in the book. The human brain wants everything to be a story. This is not a metaphor. This fact here is, as far as I can tell from reading neuroscience and psychoanalysis, and from my own honest introspection, something close to a hardware-level constraint on how meaning works in us. We do not experience the world but progress through a narrativized representation of the world in the form of our “life story”. This narrativization is instant because our consciousness appears designed around it, and mostly invisible, and mostly involves taking the actually always endlessly complicated and underdetermined situation that is always in front of us and compressing that terrifying chaos into a shape that resembles stories we have already heard and understand, because stories we already know are very cheap to process and fresh situations are expensive to process, and the brain is always cutting costs because it takes too much fucking energy already from the rest of the body, because all of us have a genetic jaw disorder. This is why, when you try to describe a genuinely novel situation to someone, the first thing they do is say "oh, that's like" something. This is where any attempt at actual thinking goes to fucking die of fucking exposure. The "oh, that's like" is the compression happening in real time. Sometimes this compression works out fine. The new situation really is like the old one in the ways that matter. Wonderful.
Sometimes the compression is absolutely fucking catastrophic, and the person who compressed for narrative convenience has just erased all awareness of everything that was actually at stake in what you were telling them, because the closest story in their library, the one they pretended they were experiencing, did not happen to feature the one fucking thing that actually mattered. I need everyone to stop playing pretend all the time, please. When someone reads something I have written and says "this feels like something I know," what they are telling me is that their brain found an energy-cheap compression to use before their attention could get to the goddamned thing the sentence was fucking pointing at. They are not reporting on or even fully aware of the sentence. They are now telling me a story about their brain's behavior in the presence of the sentence. This is a symptom, in a non-clinical sense. This is a very, very common delusion that completely divorces you from what is actually happening right now in your real actual life, brought on by the human brain trying to make up for the gross inefficiency caused by a series of mutations which enables us to be as intelligent as we are. This is, roughly, what I think is wrong with most discourse, and has nothing to do with large language models at all. Most communication problems are not born of malice or stupidity. The brain is always arriving at the nearest available story before attention even has a chance to engage with the particular at hand. And that particular is where everything worth talking about actually lives, because the world does not actually follow the laws of narrative we are so obsessed with. Narrative is a compression we impose on reality for our own comfort. This cannot be denied. The world, when you look, is infinitely lumpier, less resolved, and more contingent than the stories we tell ourselves and each other about it.
Saying "this is like X" and stopping there is what compression looks like from the inside when it goes wrong. You should not allow thought to stop there. You need to become self-aware about this to stop the phenomenon that not a single one of us is immune to from limiting what you can even understand of reality. I do this shit too. This is important, please be reading still. I do this shit all the time, I promise I know it. Having the theory does not protect me from distortive behavior built into being a human. The behavior is automatic and pre-conscious, and I catch myself mid-compression constantly and have to walk it back and make myself review why I even let that happen before I can address what I should have addressed for what it was. Anyone who thinks they have escaped the narrative brain by reading about the narrative brain has not escaped a fucking thing. So when I said, a few paragraphs up, that the vibe argument is nonconstructive garbage, I totally meant it, and I am using AI in editing here intentionally in places for the point I am making (even though I am also actively rewriting and editing out AI vibes myself everywhere else because of the real harm it causes) in order to bring that argument out because that's what I want to attack here beyond just the AI problem. I do still also have to be very clear about what I do mean here anyway, which is that the vibe is sometimes the nervous system noticing something true before the theory catches up. The vibe itself is never reasoning and is anathema to reason. But the vibe is not then factually wrong and can be used itself as evidence in actual reasoning. 
The vibe, in this AI case, is tracking that very real and harmful second compression, and the second compression is a real feature of this exact situation, and refusing to credit the vibe you are getting pointing you towards the topic at hand because I do not generally credit vibes would be me doing exactly the thing I diagnose in other people because of my own fucking vibes: privileging my theoretical frame over the evidence actually in front of me.


So about all the rest of AI? I want to do the thing I spend the book trying to do, which is to look at the field situation instead of picking one of the two pre-installed narratives (AI is salvation, AI is apocalypse) released for us to feel comfortable acting out. Neither of those is the shape of the structural situation. There are undeniable harms from AI in itself, very real ones. The honest inventory in sum, to my eye, includes at least these: massive, potentially unprecedented burden transfer from the companies who are training these models to the people whose labor and writing were used to train them, which is itself structural harm and done without their consent; extremely concentrated environmental burden transfer, which is a burden transfer onto the future of existence and onto everyone that climate change ever touches, which is literally everyone you will ever know; the discussed distortion of the information commons, which is the resistance thing I have been talking about scaled up to everything that reads like text on the entire internet regardless of AI use; the displacement effects on labor, which are asymmetric burden transfers from capital to workers; and a certain kind of attention foreclosure in which people (including myself) who could have been developing their own voice by the old painful method of writing a lot of bad sentences (which, don’t worry, I still do) now skip that method and end up functionally sterile and voiceless and unable to tell they are so neutered. That last one is specifically what I mean by foreclosure: the path to having a distinct voice of your own is being quietly closed off for many people without their awareness. Those harms are all real and they are not canceled by the existence of any benefits, because this framework does not aggregate harm. Modal Path Ethics does not come out to say: well, the language model helped some disabled people write their emails, therefore this burden transfer is morally fine.
That is the exact kind of asinine story reasoning I wrote the book against. But because I'm not a moron, Modal Path Ethics also does not do the reverse. It does not then say: because these harms are undeniably real, every use of this technology is therefore condemned and the correct thing is to refuse the whole category. We must annihilate this harmful locus from the field. That is not just how it works either, gang. The framework insists you stay with the particulars of reality. Each use of AI is a particular in reality. Each use has its own field-structural shape. Very many uses transfer burdens completely intolerably. Some uses do not, from my view of the field, which I can at least tell you is honest and as in-depth and broadly accepting of perspective as I could make it. The work of Modal Path Ethics is to tell which is which, locus by locus, rather than to pick our narrative writing team and defend them to death. And I think — this is where I part ways with most of the takes on both sides and so used an emdash so you get bad vibes about me being just a stupid machine so you can more comfortably write a narrative where I am bad and you are good — the better ways we could use AI are quietly very very possible and we are mostly not taking them, because the narrative way we apparently like to live has only made available to us shapes like "AI revolution" or "AI resistance" and neither of those narrative shapes has any room in it for something actually fucking analytic and constructive like "this tool sure is useful in carefully delimited ways, within arrangements that do not destabilize the broader field, for people who would otherwise have no access to certain kinds of cognitive resources due to other asymmetries in the field, always ensuring that the second-compression problem is honestly disclosed whenever the output is shared with anyone whose attention is being taken."
That last sentence is long because the real version of the answer is long, and I'm really sorry, but you don't get to be lazy and fall back on storytime and then also call yourself analytical or well-reasoned. Refusing to move off the short versions is what got us into this shitshow you don't actually seem to be enjoying very much. Maybe try something else.


Anyway here's the self-reckoning part. I have used AI a lot on this website, and to proofread for me when writing Modal Path Ethics. I used it more in the early months of working on the book than I do now, and used it a lot when I was launching the website and filling in the base set of articles I knew I wanted up before the book was released in case anyone read it and had understandable follow-ups about the topics on here. I used AI mostly the way you might expect someone socially and professionally isolated and in desperate need of an editor would use it: as an incredibly annoying thing to try and talk at, as a way to check whether a chain of reasoning had an obvious gap I was too out of my depth to recognize, and mostly as a rubber duck in the coding sense that talks back by pretty much just saying whatever you just said but slightly worse. I found this process incredibly useful. Other people probably use a more traditional process I don't feel I have realistic access to without extreme resistance that would override working towards the actual goal of developing a moral framework grounded in a minimal metaphysical definition of what the future is and what exactly it means for some of it to be foreclosed. I also found that AI smoothed out my prose in directions I did not like, even when I tried to tell it explicitly what not to mangle, and I had to keep pulling my prose back from the shit it puts out. I never copy-paste out of the AI chats, but rewrite everything in a separate document. Often, my mind can't reconfigure the AI's sentence or structure on the fly in a way that it sees as anything but messing it up on purpose to “cover up” the AI usage.
This is me trying to fight back against that second compression, and even though I could feel it happening in real time and fought it sentence by sentence, I did not and could not win, because the structural harm of including it in this way is simply real regardless of how the words went from language model to essay. Going forward, for this site, the arrangement I am going to try to hold is this. The arguments are mine and the thinking is mine, which they have never not been, because these things do not have original thoughts. When I do talk to a model, I talk to it the way I would talk to a theoretical philosophy friend who reads insultingly fast and is willing to push back against what it sees as unstable thinking because the companies are in hot water after a few incidents, and I will no longer let its sentences become my sentences, to avoid this secondary compression that became incredibly obvious to me when I played with this field including AI in it for a short time. If I use it to catch typos or to ask whether a passage is unclear, that's actually fine. You're just gonna have to live with that or stop watching all streaming services and remove all handicapped spaces before coming at motherfucking me, because I'll play that fucking game with you and very many others if you got a problem, pussies. If I use it to write a paragraph that I then lightly edit and publish, that is under my own ethical framework not actually fine, and I am going to try to not do that ever again for the above-detailed reasons, and I am going to probably still end up writing down a paragraph the AI couldn't help itself but suggest to me (I actually didn't ask) sometimes when I show it something I wrote, because I am one person with a brain and thinking is actually more complicated than most of you people seem to understand because you don't appear to notice when your brain is actively fucking with you like mine makes so very obvious.
But ultimately the rate of publication can't be increased by just carelessly adding that secondary compression. I do not love having to say all this bullshit publicly, but that's the field you people created. I don't see a lot of people doing thesaurus investigations while reading books, and ghost-writing remains legal. I do wish the situation were one in which I could output the idea in the most parsable format and you picky bastards could just read that and the question of whether a language model had been involved would not have to be raised. That situation does not exist right now, unfortunately. The second compression is real in the field and readers have a right to know whether it is present in what they are reading because it does matter.


One more thing, and then I am done because I do have somewhere to be and this took forever without AI helping me plot a coherent essay structure from the concepts I insist on bundling and I kept going back. The Modal Path Ethics articles on this site are deliberately dry (besides some inserted humor). I actually, if the secondary compression were not a thing and it were more readable, would prefer the formulaic and stale AI prose style. These articles are written in a register that is very intentionally meant to minimize the rhetorical and subjectively triggering qualities of prose, because Modal Path Ethics field analysis is supposed to be a structural description of a situation, not a move in an argument. It was explicitly written against writing and argumentation dominating philosophical thought in the modern era, and the many downstream harms we are all grappling with every single day. The dryness is a feature, not a bug. It just works. If you find those articles boring, that is partly on purpose, but I do still get that and try to go back and pepper in some bits after the point is done being made. The boring-ness is me trying to get my human self out of the way of the field analysis. For those specific articles, there is a case to be made, and I am actually here making it, that readers who are actually taking the view of Modal Path Ethics seriously ought to try to ignore the question of whether AI was involved and try to read through to what the analysis is. That's literally the point here. Not because AI involvement does not matter — everything I have said above says it does and so do these babies around me — but because the analysis is supposed to be the kind of thing that could be written by anyone honestly trying to do Modal Path Ethics, which does not by definition have to include a goddamn human mind, which has certainly not always been a thing existing in the field, and fixating on human authorship is itself a narrative-brain move and incredibly distortive.
I have to work around this distortion, so I have to consider the secondary compression outside of just the harm to the idea's possibility space in itself. The question "was this written by a human" is a question about story-shape. The superficially similar question "does this analysis track the field" is a question about whether the analysis is right, and whether a human performed that analysis or a language model trying to even out word math certainly matters in the answer. Those are still incredibly, fundamentally different questions and I can only respect one. The second one is the question the articles here are asking you to engage with. I am not making this argument for everything in life; that would be stupid, and I actually do understand social life. I am making the argument that this must be the case for field analyses, which must be the grounding for any ethical thought. For more personal pieces like this one, authorship obviously matters much more, because the whole piece is about a person in particular saying what they are experiencing in their nervous system to others. Still, the structural articles are supposed to be about the structure. If the structure is right, the structure is right. That's actually the end of that story if you need one to work with. I also want to say, for what it is worth, that I get emails and read shit online that is obviously AI-generated. I don't ignore any of this for that reason, which is the key thing here. I try to read past the AI words and the secondary compression to understand what the person was trying to convey through the method they used to reach me, because I know it is usually a very real human thing they felt they could not put into their own prose, which is often sad to me.
I think this is the right thing for them to do, and I think the person doing it is usually not being dishonest in the way that matters; they are being afraid of their own voice, which is its own story if that's how you must think, but it is not the story of a con artist trying to trick you or prove how much better they are than you at putting words in order or whatever the fuck else it is you people think about all day. If you are reading something of mine and you think it sounds like AI and you are frustrated and disappointed by that, I still do get it, but try, if you can at all, to read past the story you see on the surface to the thing I was actually reaching for to show you. I am trying to reach it as hard as I can but I am still only one person.
