AI Video Script

September 15, 2023

AI is rotten to the core

This video contains flashing lights {== does it? ==}

Notes

Economic efficiency beats quality every time. Like AOE2 extreme AI, it will just fuck you up with pure quantity.

Title Ideas

AI is rotten to the core
The AI industry is rotten to the core
AI: Report from Hell
AI Games: Report from Hell

Description

GPT2 Says: Jimmy McGee's new YouTube video, which shows him eating his first-born son, comes after months of speculation that McGee may be on the verge of leaving the UFC in the wake of his split from the promotion.

GPT2 Says: Jimmy McGee's new YouTube video , "Tall, skinny, and fast : The story of the man who invented a new kind of shoe," is here! The video features many quotes from other people 's opinions about Jimmy 's shoe, but also reveals some personal thoughts about the man himself, especially his own shoes.

GPT2 Says: Jimmy McGee's new YouTube video , "Basketball's Black Hole", features the best shots from every single game ever played in the NBA : All 30 teams, plus the worst shots of the worst teams ever . Jimmy McGee explains why this is so important: "A basketball team can be a great basketball team and a poor basketball team at the same time ." You 'll see that his point is actually quite true , and the difference between the two is so small you can literally see the difference in a basketball court , and the difference between a basketball team and a team that just happens to play basketball .

Prompt

You are an anonymous and analytically-focused game reviewer. Write a script for a video review of Minecraft that uses only video game footage. Include at least one description of a montage section, any musical cues you use, and write an emotional climax that somehow connects Minecraft's mechanics/themes to real life.

Minecraft Odyssey: Exploring Gameplay Systems Chronologically

{== Title was generated by ChatGPT ==}

{== Game footage of Minecraft starts playing, showing the player spawning in a randomly generated world ==}

{== For intro: cut this part of the vid, render and record on the CRT. Do the transition from the Teleglitch vid to go into full-digital recording. Transition back to TV for the real video. Or maybe just keep the TV for a while?? ==}

Narrator: Minecraft is a game that needs no introduction. It has taken the gaming world by storm since its release back in 2011. But what makes it so special? Is it the vast open world waiting to be explored? The endless possibilities of creation? Or the charming blocky graphics that make it so unique? In this review, we'll be taking a closer look at Minecraft and what makes it stand out in the crowded world of video games.

{== Gameplay footage showing the player gathering resources and crafting items ==}

Narrator: At its core, Minecraft is a game about survival and creativity. The player is tasked with gathering resources such as wood, stone, and iron, to craft tools and build shelters. As the player progresses, they encounter new challenges such as dangerous mobs and the need for more advanced resources.

{== Montage of gameplay footage showing the player exploring different biomes and building various structures ==}

Narrator: One of the most impressive aspects of Minecraft is the freedom it offers the player. From building simple dirt huts to massive castles and intricate redstone contraptions, the possibilities are truly endless. The game's procedural generation also means that every new world is unique, with different biomes, structures, and resources to discover.

{== Musical cue as gameplay footage shows the player defeating a horde of monsters ==}

Narrator: But Minecraft isn't just about building and exploring. The game also features a combat system that requires the player to be strategic and quick-thinking. Whether it's fighting off zombies, skeletons, or other monsters, players must learn to use weapons and armor effectively to survive.

{== Gameplay footage showing the player working together with other players in multiplayer mode ==}

Narrator: One of the most enjoyable aspects of Minecraft is the ability to play with others. Whether it's through local split-screen or online multiplayer, players can work together to build massive structures or engage in epic battles. This cooperative element adds a new level of depth and excitement to the game, and it's no wonder why Minecraft has become a cultural phenomenon with millions of players worldwide.

{== Emotional climax as gameplay footage shows the player creating a memorial for a loved one ==}

Narrator: But perhaps the most touching aspect of Minecraft is how it allows players to express themselves in unique and meaningful ways. From recreating famous landmarks to building memorials for loved ones, the game has become a platform for creative expression and emotional catharsis. In a world where we often feel powerless and lost, Minecraft offers a glimmer of hope and a chance to connect with others in a profound way.

{== Final shot of the player watching the sunset in-game ==}

Narrator: In conclusion, Minecraft is a game that is both fun and inspiring. It offers endless possibilities for creativity and exploration, while also reminding us of the importance of community and self-expression. Whether you're a seasoned player or new to the game, Minecraft is an experience that everyone should try at least once. {== maybe do some stutter-repeats at the end to signal the transition to the "real" video ==}

The Video

ChatGPT wrote that video and decided what it would look like. Sorry. The performance isn't great, and the writing is boring but passable for a totally inoffensive, middle-of-the-road game summary.

Machine learning exploded in popularity in the last year or so with a flood of generated art, music, and text that you're probably already sick of...

Well, the games industry is just getting started, and people are using AI for everything from art, to programming, to animation [1], to brainwashing you into paying for microtransactions [2]. Big companies and independents alike are jumping headfirst into a world of bad writing and melted popsicle art. This video is a survey of the rapidly expanding field of AI; I'll discuss some of the problems it's creating, laugh at the goofy art, and then get into the actual AI apocalypse, the one they're not telling us about.

But before any of that I want to understand machine learning not from some pundit's explanation {== this one's kind of funny [3] ==}, but in terms of what these systems actually are.

AI is not magic

There are hundreds of schemes for machine learning at this point, but neural networks are the most popular, and most of the concepts behind neural networks apply to everything else in the field too. Neural networks are simplistic models of brains: networks of neurons. A very low-resolution picture of our brains is that we take in some stimulus, say the light bouncing off of a painting, then something happens in the brain, then we feel an emotion or sensation. There's an input, an output, and something in the middle.

{== Slide 1 ==}

This is exactly how every beginner course describes neural networks: a layer of input nodes, one or more hidden layers, then a layer of output nodes.

"Hidden layer" is kind of a misnomer. The hidden layers themselves aren't a black box, and tweaking them is a big part of developing a neural network. The data that these hidden layers produce usually isn't meaningful to us, though--it's only used by the network, and that's probably where the "hidden" comes from. Explainability, figuring out why an AI makes decisions the way it does, is a big problem in machine learning because AI doesn't follow a line of reasoning the way a person would.

{== Slide 2 ==}

"Nodes" are another abstract concept; {== i.e. an integer is 32 bits that represent a number without any decimals ==} there's no obvious correspondence between a node and what the computer is actually doing. Really, node just means function, and in machine learning nodes are usually taking a vector {== start writing on the TV ==}, doing something to it, and passing it on to the next node. We could have each hidden node add up the values of everything connected to it, for example.

{== biasing is not covered here, it was not necessary to introduce more jargon just to understand what "training" an ANN entails. ==}

The secret sauce is in weighting the inputs. In real brains, some connections between neurons are stronger than others [4]. There are still lots of questions about the human brain, but the idea is that these connections and their strength affect our thoughts and actions somehow. Since an artificial neural network is just functions sending and receiving numbers, you can multiply each of those numbers by some factor to make it bigger or smaller, more or less influential.
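
To make the "node is just a function" idea concrete, here's a toy sketch in Python (everything is made up for illustration; it's not from any real library):

    def node(inputs, weights):
        # scale each incoming value by its weight and pass the total along
        # (biases omitted, as noted above)
        return sum(x * w for x, w in zip(inputs, weights))

    # three incoming values, three weights -- the middle one dominates
    print(node([0.5, 1.0, 0.2], [0.1, 2.0, -0.5]))  # 1.95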

The network on screen is a toy example, but neural networks are always made to achieve some goal. Let's say our input is a picture of a letter, where the brightness of each pixel is an input, and the output is a guess for what letter it is. Real neural networks are also tremendously complicated; the model that runs ChatGPT has 175 billion parameters [5].

But this is just a big pile of math; it's cool that we can make a machine with billions of adjustment knobs but we're not going to do all that work by hand. Thankfully, neural networks can be trained to adjust their own parameters.

If I have a photo of a letter that I know is a 'B', I can compare that to what the network guesses. This pair of image and text is a piece of training data, and big networks will go through millions or billions of them. Weights are usually randomized at first, so the network will probably say the letter is an A, X, P and C all at once. But we can calculate how wrong the network is, and use this to adjust the weights so it gets a little bit better every time. If it is 20% confident that the letter B is the letter A, then we need to reduce the weights that influence that guess.

Gradient descent is the piece of statistical magic that made the AI revolution possible. If you're on a hill, the gradient where you're standing is an arrow--a vector--pointing in the steepest direction. Following this gradient is the fastest way to climb the hill from where you are now. Going backwards from the gradient is the fastest way to go down the hill.

{== It would be the gradient evaluated at your current position, but I think the intuition is there. ==}

An error function is like a hill that represents how wrong each of our weights is, so if you take the gradient and go backwards, the network will slowly move toward zero error. In my example, it gets better at guessing letters.
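
If the hill metaphor feels too abstract, the whole loop fits in a few lines of toy Python--one weight instead of billions, and an error function I made up so we know the right answer in advance:

    def error(weight):
        # the error hill: zero error when the weight is exactly 3
        return (weight - 3.0) ** 2

    def gradient(weight):
        # slope of the hill at our current position
        return 2.0 * (weight - 3.0)

    weight = 0.0         # weights start out random(ish)
    learning_rate = 0.1  # how big a step to take downhill

    for step in range(50):
        weight -= learning_rate * gradient(weight)  # walk against the gradient

    print(weight)  # ~3.0: it found the weight with zero error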

It's a pretty god damned cool idea, but there are no miracles here. The concept I've just described was laid out in a paper from 1958 [6] as a theory for how the human brain works, so it's not exactly revolutionary, but layers of interconnected nodes still make up the structure of all those headline-grabbing AI systems we see today.

You would hope that the letter classifier would learn to recognize patterns in the strokes of letters, or at least do something intelligible, but the weights a neural network comes up with look just about random {== visual: [7] ==}. They're a total mystery, and a lot of the architecture behind today's machine learning systems is based on somebody trying something new that happened to work. Sometimes you can say that they make the whole gradient descent process more efficient, but there's never going to be some obvious improvement reflected in the hidden data itself; you're never going to get a real line of reasoning from AI.

{--One more little technical thing before I get to the fun stuff: ChatGPT, and every other big text-generating AI, uses a structure called a Transformer, which Google employees came up with in 2017 [8]. It's very good at capturing the context of words in a sentence, using something call an attention mechanism. Google used a Transformer to translate languages, but you've probably used it to talk to chat bots. A Transformer does a decent job guessing the next word in a sentence, and that's exactly what ChatGPT and models like it do: they progress one word at a time generating text that is coherent at a glance.--}

Visual Art (Cute Dog Pixel Art Van Gogh Featured on Artstation)

AI is not free (labour issues)

Famously, you can use these things to generate media like images and music from a text description. Google's DeepDream was one of the first generative models that made headlines; it started as a network for classifying images [9], but they were able to sort of run the system in reverse and have it hallucinate nightmarish faces in existing pictures [10].

The original model was made for an image recognition contest that ImageNet ran in 2014. ImageNet hosts a dataset of just under 15 million images, which it doesn't own the licenses for [11]. With new technology the line between research and commerce is razor-thin, and big companies often use this fact to just manifest destiny whenever they want and make us live with the consequences.

Scraping millions of images and sticking them in a public database is a huge ethical question mark even in an academic context, but once an economy springs up around these datasets they're hard to get rid of. This is a lesson we've learned over and over; companies rush to market with leaded gas or asbestos insulation and by the time we understand what they've done entire swathes of the planet have brain damage and lung cancer {== show this on screen [12] ==}.

Google mastered this principle with AdSense, a surveillance system that probably knows your heart rate and body temperature right now and will use it to sell you some gross Coke. Google's data harvesting operation became a load-bearing piece of the internet before the public understood digital privacy, and now we can't get rid of it.

{== visuals ==}
On screen: Intercut some LoL Coke reviews for funnies? All the different flavours they describe?

On screen quotes:
"Google is to surveillance capitalism what the Ford Motor Company and General Motors were to mass-production–based managerial capitalism" [13, p. 63].

"Google would no longer mine behavioral data strictly to improve service for users but rather to read users' minds for the purposes of matching ads to their interests, as those interests are deduced from the collateral traces of online behavior" [13, p. 78].
{== visuals ==}

ImageNet popularized scraping the internet for training data, and the project has all the same problems that we're dealing with now. It's very biased [14], they stole all the pictures, and they use questionable labour practices to label them all [15].

Data-Labeling Labour Issues

Amazon's Mechanical Turk bills itself as a "microtask" marketplace, a place for simple, short jobs that still require a human to complete them. I wanted to join the program as a worker but Amazon didn't bother approving or denying my request. The site is so bad that workers have to use a bunch of extra scripts to actually do their jobs [16], and you can't get any decent work there until you've done hundreds or thousands of Human Intelligence Tasks, also known as HITs [16]. The platform was a perfect fit for the ImageNet project, and they used it to label early versions of the dataset back in 2008 or 2009.

ImageNet gave workers a set of pictures and some object to identify; workers marked each picture that contained the target object [17]. If that sounds familiar, it's exactly like solving a Captcha. In fact, we've all been helping Google train its neural networks for years [18]. These companies have a very dubious concept of consent, and we'll see a lot more of that later. You literally have to help train an AI to access many websites.

ImageNet at least paid the Turkers. But with that said, Mechanical Turk's workforce skews toward people with no other options. Oscar Schwartz, writing for IEEE Spectrum, rightly identified that MTurk is "designed to make human labour invisible" [19]. Jeff Bezos called them "artificial artificial intelligence" [19] [20], and Turkers are described offhandedly as a "horde" in an article I read creaming itself over ImageNet [21]. Turkers were earning a median of $2 per hour in 2018 [22, p. 5] {== analysis of data from 2600 workers, agrees with self-reported mturk income. ==}, and the situation hasn't changed since. These people are invisible, poor, and very easy to exploit; Mechanical Turk is slavery as a service.

{== browse appen.com, highlight their clientele (EVERY big name is on there), the "what we do" section, the slavery policy. ==}

But Mechanical Turk was the first of a new breed. Turkers are generalists, but the AI revolution needed specialists. Appen is one of many companies specifically selling data-labeling for machine learning. Their crowdsourced labour came mostly from Kenya and the Philippines at first, but when Venezuela's economy collapsed they started snapping up jobless refugees. A journalist for MIT Technology Review profiled a Venezuelan Appen worker, and the situation seems pretty dire [23]. Workers have no line of communication with the company, they have to be constantly at their computers ready to accept tasks, and--like Mechanical Turk--the site barely works. Appen can afford to push people as hard as they want because there's a huge labour supply and the workers have nowhere else to go.

They congregate in Discords and write scripts to make things tolerable. Because its workers are contractors, Appen pays out like a slot machine: some tasks offer pennies, some don't even work, and some will offer hundreds of dollars, a relative fortune. I think a good rule of thumb is that any company that has to write a slavery policy is probably up to something. Most of the major players in AI seem to disagree though [24].

In ImageNet's heyday AI was just a curiosity, but the problems it created were only going to get more intense over time. A revolution was brewing, but it needed more computing power, more publicity, and more technical development. Keep in mind that all the technology I'm gonna talk about can only exist because people with nowhere else to turn are earning pennies churning through data.

AI is not going away (AI in games)

We've come a long way from DeepDream in the last decade to a new frontier of ugly art, bland writing, and buzzwords for the consultant vampires.

All the popular flavours of AI come together in video games: there's significant money and effort in automating just about every aspect of game development right now. It is just starting to leak into serious game projects but there's probably a lot on the way. That's without even talking about using machine learning for marketing, talent acquisition, or any of the other million things you can use it for.

Ubisoft, who made a big bet on the metaverse a few years ago (haha), developed a tool called Ghostwriter to quickly draft NPC dialog [25]. Considering how eager they were to go all-in on crypto, this seems like a half-measure from them. Although they sell this thing as a revolution, they won't even use it to draft a major character; it's just chirps from random NPCs. It's not even clear if Ubisoft are training their own models; Ghostwriter is probably just an interface to speed up the writing process [26].

It's a smart move from them, though: small parts are about as much responsibility as AI can handle right now. One of the only acceptable things I've seen it produce is the texture on this bee; obviously that's not enough for a whole game, but it's a start. Coherence is a big problem for these models: Facebook released a music generation tool that sounds kind of OK for a few seconds, but if you make anything longer than 30 seconds or so it just falls apart. The track will turn into pops and clicks, and then maybe even lapse into a different song.

Most of the AI chatbots only remember a couple hundred words at a time, so they can't really come back around to earlier points in the conversation. Visual art has no consistency at all, and making textures that even superficially go together is a pretty laborious task {== Make a Minecraft texture pack out of the weird textures for funnies. ==}.

This inconvenient fact hasn't stopped Unity from going all in on machine learning.

{== Unity alien video ==}
Looks like shit guys, keep it up!

Along with the weird alien they have a chat bot and some art generators, plus people can sell AI tools on the Unity store. The company has been evasive about where its training data comes from [27] [28], and I suspect they didn't really care until people started asking questions. So far, selling AI has been all about rushing to market and outrunning public opinion. Avoiding the exact questions that Unity users are starting to ask.

Unity's AI program is only in beta, but they launched it with support for Atlas, a tool that just searches for other people's 3D models on Sketchfab. They say that AI is involved but all it does is look for models with permissive licenses [29]. So that's about the level of oversight you can expect from Unity.

The only real use case for tech like this is expediting those awful mobile games that rip off whatever the iPad kids are into {== squid games fuzzy wuzzy ==}. Any Unity project that's less cynical than that probably won't even make it out of the gate: nothing that AI can produce right now is impressive or consistent enough to hold somebody's attention for more than a few minutes.

For that reason it's kind of hard to find games with AI-generated assets right now. I keep stumbling onto them, so there are probably a lot of cash-grab game developers using machine learning, but AI art is so full of bad anatomy and garbled text that they usually try to hide it until you pay up. Steam's current policy allows AI but requires that you own the rights to the training data [30], which has led small developers to simply hide the AI-generated stuff from Steam's team, if the AI Game Development Subreddit is any indication.

In doing research for this video I've discovered one of the long-term problems with AI for myself: looking through cheap games on the Switch store, I can't tell if half the art is AI-generated or just generic. The concept of truth is sort of breaking down for me in real-time.

No one wants to play a game full of uncanny asymmetrical AI art, and programming with AI, in my experience, feels like learning to write the code myself and then patiently instructing a five year old to do it for me. If I still have to read through all the standards and documentation myself I might as well just write the program. That's without addressing the fact that, once again, all the training data that tools like ChatGPT and GitHub Copilot used was scraped.

{== on screen: [31] ==}

Oh, by the way, GPT stands for Generative Pre-trained Transformer; Transformers are a type of network that Google employees came up with to translate languages, and they incorporate something called attention, which helps the network track the context of words in a sentence. ChatGPT generates one word at a time, each time picking a word that it thinks is likely to follow in a sentence. GPT2 gave you a spoiler warning for Game of Thrones at the beginning of the video, because a lot of people were writing about Game of Thrones.
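
Stripped of the Transformer machinery, the generation loop itself is dead simple. Here's a sketch in Python (predict_next_word is a stand-in I invented, not a real API; a real model scores tens of thousands of possible tokens at every step):

    import random

    def predict_next_word(context):
        # canned guesses standing in for billions of parameters
        guesses = {"the": ["cat", "dog"], "cat": ["sat"], "sat": ["down"]}
        return random.choice(guesses.get(context[-1], ["the"]))

    words = ["the"]
    for _ in range(3):
        words.append(predict_next_word(words))

    print(" ".join(words))  # e.g. "the cat sat down"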

Anyway, even when these models get better, somebody who wants to make a game without learning any skills or putting in the effort is unlikely to finish a project.

But the models will get better. There's a lot of money in AI right now; most of the new AI companies are just leeching off of existing tech like GPT or DALL-E but somebody out there must be trying out new structures. A well-trained eye can still differentiate AI-generated work from human work, but over time I think the gap will close and this stuff will make its way into more and more games. A lot of the time, if something looks OK at a glance, people won't notice that a character has too many fingers.

With crypto, there was a lot you had to do to buy in. It's expensive, confusing, and really not even appealing if you don't have a few hyper-specific libertarian values. But finishing up a game's art with machine learning makes development easier. High on Life apparently has some AI-generated voices [32] and art made by Midjourney [33] [34] {== search "Justin Roiland DMs" for more details ==}, Capcom is using machine learning to assist with level design, and WB Games is doing something vague.

{== story story story narrative data data machine learning data data ==}

I think they're using machine learning to collate player feedback about their games, but there's too much corporatese to know for sure. They definitely love data though.

A few AI-oriented games have released, all from small studios or individuals. Vaudeville is a funny haha streamer game where you try to solve a murder by interacting with some impressively wooden characters. You can say whatever you want to them, and the dialog responses are generated by an AI and read out by Microsoft Sam. It's a fun idea, but language models rarely say anything interesting and once the novelty wears off it's just a poorly written adventure game.

{== show a slightly-too-long clip ==}
TikTok TTS: This is so intriguing

AI Dungeon is similar to Vaudeville, except it came out in 2019 when language models were still spitting out wild shit. AI Dungeon is nominally a roleplaying text adventure, but it plays fast and loose with things like characters, items, and areas. You can type in actions, and the AI will do its best to translate those into your chosen setting. The way these models work gives them a very small memory, so AI Dungeon will often forget details about the setting and characters.

Playing the game feels like reality is collapsing. A fantasy world can shift into an urban one without warning, sometimes you'll switch places with another character, and the game has serious problems keeping track of goals and items.

The models we have now are very well-developed and boring, but a few years ago AI was all surreal hallucinations.

{== https://youtu.be/5ITx9Xnd1lc 3:54:18 "am I a horse"==}

Vaudeville and AI Dungeon are very gimmicky, they're really just a structured way to play with language models. Mirage Island [35] is a little different, and much closer to how I think this tech will actually be used. It's a Pokemon-like game where you get three randomly-generated monsters with randomly-generated moves. AI will need a lot of oversight to generate content, but if you need to churn out a hundred similar items it must be tempting to just make ChatGPT do it.

There's also a new Google paper about generative agents, basically NPCs that have emergent goals and follow a schedule in a simulated world [36]. It's interesting, but it's literally just 25 GPTs interacting and pretending to do everyday things. At best this would give us a fleshed-out version of what they promised with Oblivion, or a way to generate end-game content like Skyrim's radiant quests.

Actually, Bethesda is a perfect candidate for AI-generated content. If their other games are any indication, players are going to spend a lot of their 600 hours in Starfield with their brains turned off, collecting trash and shooting space bandits. The radiant quest system is already hard to differentiate from Skyrim's actual boring quests, and AI would give Todd Howard a shiny new thing to overhype and under-deliver.

We are still very far from running these models in real-time though. Beyond the dodgy quality and the slight chance that it'll start dropping race science on you, the GPU requirements for AI are relatively strict. I have a 1060, and I can chat with a medium-sized language model at around one message per minute. That's just not good enough for a finished product. Generating voices on top of dialogue would take even longer. There's some talk of doing it in the cloud, but nothing tangible.

I think it will be hard to resist using machine learning to speed up game development, especially once a high profile game uses AI for something less-prominent, like coding or draft animations. Faster development with fewer staff is a great pitch, and ethical concerns have never stopped major studios before.

This is not my strongest argument against AI, but my biggest worry about generating art, in any capacity, is that it takes away opportunities for people to make interesting choices. It's really hard to pin down what makes a piece of art good, but when you create something you're constantly making tiny choices, and I think an author's voice is a sort of consistency and harmony between those choices. Whether or not you think Mark Rothko is a great artist, you can look at one of his paintings and know who made it.

AI takes away these choices, and substitutes them with a weighted average of choices that have already been made. Any picture made with an AI is going to be a mishmash of other images, by its nature it can't make anything new. Large Language Models, impressive as they may be, are never going to invent, name, and develop new concepts. ChatGPT can barely handle a niche Python library, the humanities are going to be beyond its reach no matter how many Subreddits it trains on.

{--The best art to come out of the AI revolution so far is this awesome video exploring machine learning from the perspective of an AI being tinkered with. AI itself played no role in the concept or presentation.--}

AI Problems

AI is not creative (AI art)

And for that reason it's not worth talking about the aesthetics of AI art. There's no intention behind it: the computer is just taking a text prompt and spitting out an image that minimizes the error. And the people using AI all think things like "it would be fuckin sick if you could see, like, what's next to the Mona Lisa, [37]" so there aren't any great minds at work here.

Most of it looks bad anyway. The image generation tools have leaned into surrealism, landscapes, and stylized art, but it's all the type of stuff that comes up when you Google image search "cool desktop backgrounds." There's clearly no vision or decision making behind this stuff, even if you're willing to ignore how melty most of it looks. There are no aesthetics, neural networks don't have a concept of beauty to work from. You have to write things like "best quality" and "featured on ArtStation" in AI prompts, because if you want something beautiful it has to be tagged as beautiful already.

{== that clip of the aoe2 concept art guy would be good here ==}

But it's undeniable that the generated art is looking better and better. The image generating models have their limits, but like ChatGPT, they're rapidly getting good enough and consistent enough that most people won't notice or care. Stable Diffusion, which I used to make some of the monstrosities in this video, is capable of producing nearly photorealistic images, and an army of 4chan users have perfected anime and furry art as well.

It's pointless to quibble over a definition of art here, because it's easy to just reject the definition. People have debated this stuff for thousands of years, the best and brightest at Twitter {== X ==} aren't going to solve it. But it's obvious that AI is not creative.

If you train a neural network to recognize text, like the example at the beginning of the video, it will do a good job when you give it text. But if you give it something it's never seen before, like a weird math symbol or a picture of a dog, the AI is still going to guess what letter it is. It's just a program, it can't integrate new information the way a person could. You can retrain it with thousands of dog pictures, but we can hardly call that intelligence.
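
The limits are easy to see in code. A trained classifier only ever answers from its fixed menu; here's a minimal sketch, where score is a stand-in for the trained network:

    def classify(pixels, score):
        # score(letter, pixels): how confident the network is in each letter
        letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        return max(letters, key=lambda letter: score(letter, pixels))

    # hand it a photo of a dog and it will still, confidently, name a letter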

I encountered this exact problem when I was trying to recognize text in some PDF files. Adobe Acrobat tries its best, but it just has to guess when it sees an equation, and it doesn't have the vocabulary to describe stuff like this.

You wouldn't say that the program is lying because it got something wrong, it just wasn't designed to recognize math. A big part of AI's success right now is that it's easy to use rhetorical tricks to make it seem intelligent. The name 'AI' is an obvious example. If a network can recognize words, wow, it can read! If a network can turn a text prompt into a picture, it's suddenly a painter. Like all good cults, AI promoters use mysteries to sneak their ideas in the door: we don't know what the hidden layers are doing or how the brain works. Maybe these machines learn like brains learn, so we should put billions of dollars into stopping the AI apocalypse.

We might not know what the hidden layers are doing, but we know how machine learning works, and we know what it produces...

So I can comfortably say that AI is trained to copy; you give the model training data in the hope that it will output the same thing a human did. There are setups where you don't need labelled data, like diffusion--which is what the best image generators use {== viz: [38] (CC-BY-SA 4.0 license) ==}--but these networks are still copying the structure of whatever they are trained on. Training an AI is just imprinting patterns from the training data {== Principle behind diffusion: the network reconstructs an image after it is injected with noise, implicitly training the identifiable features (patterns) of the image into the model while filtering out noise. ==}. That's no small feat, it can pick up on some very complicated patterns, but it's a far cry from creativity.
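
That noise-and-reconstruct principle also fits in a few lines of toy Python (fake_model is a stand-in that just guesses zeros; a real network would learn to predict the injected noise):

    import random

    def fake_model(noisy_image):
        return [0.0 for _ in noisy_image]  # stand-in for the network

    image = [0.2, 0.8, 0.5, 0.9]                 # four "pixels"
    noise = [random.gauss(0, 1) for _ in image]  # inject noise...
    noisy = [p + n for p, n in zip(image, noise)]

    guess = fake_model(noisy)                    # ...and try to undo it
    error = sum((g - n) ** 2 for g, n in zip(guess, noise))
    print(error)  # training shrinks this, image after image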

By feeding it a lot of images, the model can trick you into thinking it's creative, but it really is just fitting whatever prompt you give it to the most likely output. Researchers have been able to prompt these models to spit out their own training data [39] [40], so it's clear that the models are not synthesizing something new the way a person could.

{++ NEW STUFF 09/07 ++}

I think that the black-box nature of these AI systems is what makes people think they're creative. You don't see how the network is doing its work, you just type in a prompt and get an image back. I showed a popular interface for Stable Diffusion earlier, and using it does feel miraculous at first. Absent any evidence to the contrary, it's tempting to say "I made this." But I didn't!

{++ NEW STUFF 09/07 ++}

If you ask a person to copy a painting from memory, you're not going to get a perfect copy. You'll see what stuck out to them about the source and how they interpreted it. It would have that person's aesthetic. Stable Diffusion will give you the exact painting, and it might give it to you in an Etsy store mockup.

Proponents of AI often compare generating images to photography [41], but they're way out of their depth. If we can agree that machine learning models are fundamentally un-creative, then any creativity has to come into it on the human side. This is like photography, they say, because a photographer did not make whatever they are capturing.

But the levers for creativity really don't exist with AI: you can change the prompt and some parameters to regenerate the picture, or use in-painting to regenerate part of an image, but you have no precise way to control what gets produced beyond suggestions. A photographer is capturing a given thing: a natural scene or some live event, but they have precise control of position, focus, focal length, white balance, colour balance, lens characteristics, and really an uncountable number of other factors.

Photos certainly have an objective element to them, but the decision to capture a certain image and the characteristics of that image are all up to the artist. Generating a picture with AI is guesswork, and the hilarious new field of "prompt engineering" amounts to typing in the name of an artist whose style you want to copy [42] or adding the words "high quality" to the prompt. Researchers at Microsoft are happy to demo their new CoDi network with prompts like "Oil painting, cosmic horror painting, elegant intricate artstation concept art by craig mullins detailed" [43]. Even the bleeding edge tech demos have to put "ArtStation" in their prompts to make them look good.

AI-generated images are a lot more like commissioning an artist to make you a painting, except the artist just steals a picture and gives it to you.

Which brings us to the fact that practically all of the datasets used in big AI projects are scraped from the web without consent. Again, you can do whatever you want if you call it research. Stable Diffusion has a persistent problem where the images it generates have watermarks from the pictures they stole [44]. Getty Images is currently suing them over this [45] [46]. Stable Diffusion uses LAION, a tenuous acronym for a dataset of almost 6 billion image-text pairs [47]. The clever thing about L'AI-ON is that it just has links to the images, so you can't sue them for copyright infringement. Model makers can download the pictures themselves and then pivot their research project into a business once the model is up and running.

Stable Diffusion also took images from DeviantArt to train its model, and in response to the controversy DeviantArt built Diffusion right into the site, so lazy people can shit it up with ugly 500x500 pictures of "teddy bear smoking weed" or "furry on the moon detailed andy warhol."

But worry not, if somebody puts your name into their prompt and you find it and complain to DeviantArt about it, they might take the generated image down [48]. And you can label your art as "noai" so companies can exclude it from training in the future if they want to [49]. It's worrying how eagerly the people charged with running communities will turn them into storefronts. DeviantArt isn't known for its high-quality art, but it always felt like a site for art people to hang out on. The people who run it are all but telling their users that it's not safe to post their work there anymore.

This is a whole can of worms, but the justification for this is another pro-AI talking point: machine learning is democratizing art. There is no definition of democracy where this makes any sense; democracy is when the government acts in accordance with the will and best interests of its people. The production of art is not democratic, democracy is totally irrelevant to it. Art collectives can be run in a democratic way, but I don't see AI democratizing Blizzard studios any time soon.

They're really just telling on themselves; in the political code that news organizations and pundits use, democracy is intimately related to a free market. Something is "more democratic" if it opens up new revenue streams. Normal companies can lay off their illustrators and designers, and there are new opportunities to sell computing power or services like data-labeling. The freer the market, the freer the people, as they say.

But taking the argument at face value, anyone who cares about their art will automatically avoid AI because it's hard to express your own vision when you have to do it through a game of telephone. I'm not gonna call myself an artist, but to make these videos I have to assemble a lot of writing, video, and sound, and make editing decisions. There is no world where I write a script and then say "fuck it, I'll just generate the visuals." Any solo developer or team that cares about its product is not going to settle for good game design with a bunch of melty computer-generated art.

A lot of middle-of-the-road games could get away with AI-generated content, so I think it will be pretty popular, but the few games that are unmistakably high art, the Earthbounds, Dark Souls, and Rain Worlds, are the result of a consistent, authorial vision that patterns their smallest and largest features. It takes a visionary like Shigesato Itoi {== pronunciation sorry blah blah blah I'm Canadian cut me some slack ==} to co-ordinate the perfect harmony of mechanics, visuals, writing, and sound that we get with Earthbound. Using AI not only takes opportunities away from the Itois of the world, it forecloses on the possibility of a singular vision by using some complicated statistics to automatically design games by committee.

The emotional climax of Earthbound is one of the most significant artistic achievements in the history of games. I can't spoil it in good conscience and it can't be conveyed through video, so if you know you know. {--If you don't, download an emulator and get to work.--}

The game's balance of quirky but genuine writing, the battle system, and its incredibly solid theming are kind of subverted by a sudden shift in tone near the end of the game, but this shift not only preserves its themes but makes us feel them even more deeply.

Undertale is good, but to this day nothing has touched Earthbound. It is designed so counter-intuitively, and written so delicately that no machine will ever be able to touch it. Earthbound is new, and AI can only mix up what has come before. So if you want to make good art, AI is useless to you. Machine learning is not democratizing art so much as outcompeting it.

AI is not intelligent (OpenAI)

It's not just a couple small companies scraping training data. OpenAI is the big one: they made GPT and DALL-E, some of the most sophisticated text and image generators out there. The company's leadership is a who's who of alleged prescription amphetamine abusers {== subtitle this with "silicon valley venture capitalists"==} and alleged child blood injectors [50].

{== on screen: [51] [52] [53] [54], pics in the folder. ==}

{== This would also be a great place for a visual, showing Microsoft connections, C-suite, maybe investment cashflows? Don't overcomplicate it. Maybe echo the neural net visual from the beginning? ==}

{== true ending music from cruelty squad always a good pick ==}

Sam Altman, the CEO, is literally trying to make New World Order conspiracy theories real [55, section on Worldcoin] [56]. His Worldcoin project is scanning people's retinas to make a blockchain-based global digital ID system [57]. Well, it was for about a week. Worldcoin was giving Kenyan people 50 bucks in exchange for retina scans until the government noticed, started investigating, and told them to knock it off [58]. Every other country Worldcoin has launched in has done something similar.

We're really reaching a new level of big tech Rube Goldberg machines. Because AI--the AI that Sam Altman's other company makes and sells--is scary--to him--we need a way to identify humans and the best way to do that is to make a database of people's retinas which is also a bank powered by a Ponzi scheme. {== galaxy brain is appropriate ==} Oh, and he threw universal basic income in there too [56].

This is to say that OpenAI's people have been drinking the Silicon Valley Kool-Aid for a long time.

This might come as a shock, but OpenAI started as a non-profit research organization and then pivoted once they realized they could make money [59]. Ironically, very little of their work is open.

Another shock, OpenAI steals all its training data. There are no laws about this yet, so they don't have to tell us where the DALL-E images came from, but no licensed datasets actually exist, so it's a pretty safe bet that they just took billions of images.

GPT is the same deal with text, OpenAI just crawled the internet collecting text to train its model with, or used an existing dataset like Common Crawl. They're actually ramping up to do it again, so if you have a website and don't want OpenAI to have your website please put these lines in your robots.txt [60].
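
{== on screen: per OpenAI's GPTBot documentation, the lines in question are just:

    User-agent: GPTBot
    Disallow: /
==}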

The problem with any push-back against stolen training data is that the theft already happened. There are billions of images sitting on these peoples' hard drives already, and adding a "no AI" tag to your DeviantArt isn't gonna change that. I'm out of my depth here but I also seriously doubt that training on copyrighted data will become illegal. At a very basic level, I think a judge will see a generated image and think what most of us did: the AI just made a picture.

Even in the best case scenario, the law will only rush to enforce copyright when it's some billion dollar company being infringed upon, and even then people will always find a way to access digital files. A solo artist without the resources to recognize or prosecute copyright infringement couldn't really do anything to protect their work even if there was some legal framework in place.

The outputs from machine learning systems are another matter. AI hasn't really been litigated yet, but there have been some relevant fights over the patents and copyrights of AI-generated work. A judge decided that an AI could not be the author of a patent, but whether a person could patent an AI-assisted invention is beyond the scope of that case [61, p. 10] {== show pic of highlighted section on screen ==}.

As far as I know, three people have tried to copyright AI-generated art: Jason Allen, who won an art competition with his piece; Steven Thaler, the same guy who tried to make his AI a patent author; and Kris Kashtanova [62] [63] [64]. All three were denied, and the U.S. Copyright Office is very clear that they did not make the artwork, a computer did. In Kashtanova's case she was trying to protect a comic, and the Copyright Office decided that the text and layout, the stuff she actually did herself, was protected, while the artwork was not.

The Copyright Office even makes the argument that AI-generated art is like commissioning an artist [64, p. 10], stating that Kashtanova wouldn't be the author if she paid somebody else to draw "a holographic elderly white woman named Raya, raya is having curly hair and she is inside a spaceship, Star Trek spaceship, Raya is a hologram, octane render, cinematic, hyper detailed, unreal engine" [64, p. 24].

But really, there's a lot of money behind AI art, and there are no definitive rulings or laws about it yet. A letter from the copyright office doesn't carry a ton of weight, all things considered. Pretty much every big company stands to profit from making AI-generated works copyrightable [65], and if Disney is any indication it's well within their means to just change American copyright law. Even if that doesn't happen, they can just lie. AI-generated visual art is obvious, but text, code, animation, and design drafts are easy to pass off as human so long as you get a human to edit them.

AI is literally incapable of meeting the level of creativity required for a copyright, but since AI art sometimes kind of looks new I think the law will decide that AI is creative.

The U.S. government is thinking about putting laws in place around AI specifically, but there's nothing concrete yet, and it's all shrouded in political nonsense language anyway [66] [67]. Direct regulation would be way more effective than copyright updates, for reasons I've discussed in the past.

This stuff has put us in a situation where there are really no good answers; machine learning is broadly useful, but generative AI could set the whole field back if it gets regulated. There are projects to watermark generated images, but they're all run by AI companies who want to filter AI images out of their datasets [68]. At the same time, companies like OpenAI are putting tons of money into destabilizing and potentially killing off swathes of art, writing, and design jobs. A good chunk of people who hire designers have no taste and think they could do the job better anyway. Now they can tell an AI to "make the logo pop" and it won't laugh awkwardly and ignore them.

Somebody who needs a stock photo to put at the top of their blog post could license a picture, or just spit out a 100% free landscape from Midjourney. With a little effort, you can even get free pictures of your fursona.

People tend to follow the path of least resistance. Having strong values and well-articulated opinions about subjects like this is a privileged position. People use these tools without knowing how they came to be or what they represent. There's a good chance slave labour was involved somewhere in making your phone [69] [70], so we have no standing to morally judge people using AI. On average, convenience always wins. Crying "consumerism is bad" has never worked; the companies behind this stuff are the enemy, the individuals promoting it are just useful idiots who you shouldn't really waste energy arguing with.

AI is not objective (Bias/Adjustment)

AI bias is another major talking point right now. Since models pick up and copy patterns in their training data, they are also very sensitive to those patterns. A model meant to label things in images might throw out racist labels, either because it was only fed images of white people or because the training data was labelled by biased people. ImageNet, the first big image dataset, filed bisexual people under the category of "sensualist," alongside cocksuckers and pagans [14]. Whatever you think about it, structuring the data this way is somebody's judgement, not an objective scientific taxonomy.
{== This was technically WordNet, but ImageNet was structured by the WordNet taxonomy. ==}

A couple of artists had the idea to expose this bias by letting people classify themselves with ImageNet, and it was so good at being racist that the people who compiled the dataset went into crisis mode and deleted half a million images [71]. More recently, OpenAI has neutered ChatGPT more and more to hide its biases from users. Often, when they announce that ChatGPT is less biased, they haven't actually changed the dataset or the model structure, they are just getting between you and the AI to avoid bad press.

They can add extra words before or after whatever you type to condition the bot's response, or detect messages that they deem inappropriate and send the AI a different prompt altogether.
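
In code, the gatekeeping could be as simple as this (a made-up sketch, not OpenAI's actual system):

    def sanitize(user_message):
        blocked = ["politics", "violence"]

        if any(topic in user_message.lower() for topic in blocked):
            # send the AI a different prompt altogether
            return "Politely decline to answer the user's question."

        # or quietly condition the response with hidden extra words
        return "Respond neutrally and avoid opinions.\n\n" + user_message

    # the model only ever sees what sanitize() returns, not what you typed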

Everything we make is ultimately going to be shaped by the world we live in, and that's doubly true for vague words and concepts. Since AI is not intelligent, it necessarily reflects the people who made it. More diverse datasets can eliminate bias if you're using AI to do medical diagnosis, but for a model like GPT all of the training data is inherently biased; taking the average bias of all writing will just give you the most popular biases. There is no magic here, energy is conserved.

{++ talk about AI adjustment, get into details on the "worried experts" ++}

Adjustment

I hope that I've made this technology look like what it is: a bunch of very capable systems for recognizing patterns.

AI is not intelligent, and the only real plan that the machine learning companies have is to scrape more data and make even bigger models with the belief that they'll somehow hit a critical mass. Then the computer will finally awaken and fall in love with them.

There has been an explosion of worried experts warning the public that AI is going to become sapient and do something unspecified but really bad. This is magical thinking, just a bunch of self-proclaimed rationalists working themselves up. But it is weird that all these non-profits sprang up to scare us about AI's sentience.

One of their key words is alignment, which has a lot of definitions. OpenAI says it's the process of bringing AI into alignment with human values or, at its most extreme, making sure the superintelligence can't kill people. It's kind of hard to take OpenAI's writing on safety and alignment seriously, because they talk about courageously and goodheartedly solving a bunch of problems that not only did they just make up, but which they are working hard to create with their technology.

In practice, alignment has been done by humans reviewing the outputs from AI systems, and judging whether they've adequately achieved their goals or not. It's a reasonable approach, but I can see it turning into the same sort of nightmare job as content moderation, especially if you're aligning an AI to attack the right people with a drone. Since AI is apparently going to be smarter than humans now, OpenAI is proposing that another machine learning system does the alignment. Companies love the "just keep adding shit" approach, we can have an AI train the AI that aligns the other AI.

Whatever the people involved think they're doing, they are acting as a smoke screen.

Selling sentience, or superintelligence, as the main risk of AI diverts attention from how machine learning is being used, and positions AI as a force of nature. It's already making objective judgements, it's already intelligent, but we need a lot of money to stop it from doing sci-fi novel things. Superintelligence is the scary part, but it sneaks in the premise that these models are capable of making objective judgements, or at the very least informed judgements.

The real purpose of alignment is to make AI better at performing these judgements. Researchers at MIT give a three-part definition of alignment [72]: AI should produce accurate outcomes, consistently achieve its goals, and produce value for stakeholders. Stakeholder value is when you pay shareholders and pretend to help the environment.

In the article I'm referencing they discuss the Australian Tax Office, who deployed AI to analyze people's claims as they filled them out and nudge them toward "productive claim behaviours." Paying taxes is cool and all, but the ability to define a productive behaviour and use a neural network to quietly nudge users toward it is worrying.

It's not imperative that the AI be truly intelligent; behavioural nudging has been in the works for all of the 21st century, the brainchild of B.F. Skinner and the entrepreneurs that practice his ideas. Nudging is about making the incentives that drive you toward one choice or another frictionless and invisible. Or preying on your animal brain--hunger, horniness, fear.

On the internet, we have recommendation algorithms. The invisible force that decides what order you see Tweets in, or what mass of videos show up on your YouTube homepage. These are tuned in such a way that the maximum number of people will become addicted to looking at these platforms. But neural networks allow for a system that nudges you. Not just an average of people with similar interests to you, but perfect recommendations based on the specific things that you've engaged with in the past. YouTube has done this for years [73], so in this particular domain we're already living in the future.

This sort of thinking culminates in the elimination of choice, where everything is reduced to a TikTok-like stream that plugs you directly into the algorithm. YouTube is pushing shorts for a reason: designing this way drives engagement and, in turn, ad revenue. We won't know what this does to somebody's psyche for another decade or so, but for me the fun of being online has always been about search and discovery. Now, sites like YouTube are taking away the few tools I had to find new stuff; they briefly removed the option to sort a channel's videos by oldest first, and search is a mishmash of useless recommendations and actual results.

The people who make AI systems have a very simple theory of mind; they think more intelligence means more objectivity. It's obvious if you read anything they write. And they are happy to treat us like stupid animals, because they think that if you can be manipulated, you deserve to be. Everything is getting dumber and thinner, slowly being instrumentalized to just keep you on the TikTok treadmill, or the Diablo Immortal treadmill without producing any sensations or thoughts.

While we're here: hey other people who make videos, if you have nothing to say about a topic you can just make a video about something else. You don't have to waste hundreds of hours in editing to say you played a game from five years ago and it's so liminal you guys. Wow, you beat Dark Souls with a straight sword? Is that even freakin' possible? Cool challenge run, better see if HelloFresh wants to pay for it. You should try the Zweihander next, you can tell us why it's such a hidden gem.

This is a good video. The intro is a little too long but the guy did an interesting experiment and actually shared some useful reflections on the experience of playing Ikaruga every day, which most people haven't done. It has less than a thousand views. This is a good fucking video, an exhaustive history of Nintendo that actually has enough information to justify its runtime. It has less than a thousand views. I don't care about the Garten of Banban iceberg if you have nothing interesting to fucking say about it.

I know this is a pretty edgy opinion, but anything that makes you stop and think runs counter to what the content businesses want. They want to monopolize your time, and just like a slot machine, the best way to do that is to keep the screen throbbing with noise and light. Winning the game or learning something about a subject or another person stops being the goal. Eventually even fun becomes secondary to the rhythm of dopamine release. Slot players don't like winning, it breaks the rhythm.

The real AI apocalypse is this hand-in-glove relationship that's emerging between us and the systems that control us. A perfect recommendation algorithm can pinpoint every potential whale, and it will also make the average person waste more time and energy on the pointless dopamine drip of content. AI alignment is about aligning AI to better exploit us. The threat of superintelligence is just a cover.

Other than the content itself getting dumber, AI-driven engagement is a way to claw away more time from your actual, un-mediated, un-monetized life. And there's an analog to this recommendation algorithm example in practically every industry.

AI is another cruel algorithm

{== "You can turn any place into a safe, AI-driven place [13, p. 408]" - Satya Nadella, CEO of Microsoft. Microsoft is a major investor in OpenAI. ==}

With silly videos the stakes are relatively low, but police are using AI as an element of predictive policing. They want to know where crimes will happen before they happen. I don't know if you've seen the news in the last hundred years, but the police have some bias problems, and any analysis tool they use will reproduce those problems. The FBI's bread and butter these days is grooming mentally ill people to become terrorists or drug dealers so they can arrest them [74] [75] [76]. Predictive policing is a way to divert blame onto an algorithm instead of an officer or institution. It's a way to manufacture objective judgements: there's no arguing with a robot.

Shield AI develops machine learning for police drones and fighter jets [77] [78]. They make software to kill and terrorize innocent people but still found a way to valorize themselves on their hip-AF tech startup website. Shield works with the Department of Defense, and they've partnered with other military AI companies, drone manufacturers, and even big dogs like Northrop Grumman [79].

Insurance companies want to start live-updating your premiums using surveillance data, and there's a good chance they'll use AI to make those decisions [13, p. 213]. You have to have insurance, they have access to the sensors in all your devices, and behaviours that they decide are unhealthy can be punished with higher premiums. I'm sorry you got depressed and ate too much ice cream. I'm sorry your brother lopped his finger off with a table saw and you sped to the hospital. But you'd better get a second job if you want to keep your car! [80]

AI allows companies to make more complicated inferences about you from the data they're already collecting. Deviations from the norm are already easy to see, but with AI they can figure out exactly what you're up to and put a price on it.

In a free market, when something bad happens to you, you have to be punished. Misfortune puts you in a position to be exploited, and as every economics 101 textbook says: "A market in equilibrium leaves no unexploited opportunities for individuals [81, p. 48]." So some individual--and companies are people--will swoop in to exploit you. Tragedies are expensive.

All the squabbles about ChatGPT and art theft are an important, but relatively small part of the AI conversation. The 'creativity' argument is one that the machine learning companies can afford to lose, so long as they're allowed to keep deploying their systems.

Rationality is one of our big collective values, right? Making arbitrary decisions about somebody's welfare is frowned upon, and since we don't really have any traditions to defer to we have to come up with reasons for things. People hate insurance companies and the police because they often just show up and make your life worse.

AI offers two things: more exploitation, and a veneer of objectivity. Until now, people have had to make difficult, complex decisions. But a neural network is an empty shell that will give you a decision if you feed it data. It works much faster than a person and it's easy to argue that the decision is perfectly rational. A computer made it!
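{== Note: "empty shell" can be taken almost literally. A minimal sketch, assuming nothing about any real deployed system: even an untrained network with random weights will hand you a clean yes/no decision for anything you feed it. ==}

```python
# A randomly initialized network still produces a decision for every
# input--fast, consistent-looking, and completely arbitrary.
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(8, 4))  # never trained on anything
W2 = rng.normal(size=(4,))

def decide(applicant_features: np.ndarray) -> str:
    hidden = np.tanh(applicant_features @ W1)
    return "DENY" if hidden @ W2 > 0 else "APPROVE"

applicant = rng.normal(size=8)  # any data at all
print(decide(applicant))        # a verdict, instantly. A computer made it!
```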

If your insurance is automated, pricing decisions seem objective, and the contract can be revised in real time. Everything is stuffed with sensors now, and data is already being collected and sold; it has been for years. So there's a constantly updating profile of your behaviour, and a neural network can use it to more perfectly screw you.
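{== Note: a hypothetical sketch of live premium adjustment. No insurer publishes their model; every feature and coefficient below is made up, just to show the shape of the mechanism described in [13, p. 213]. ==}

```python
# Invented example: a behaviour profile scraped from sensors and data
# brokers feeds a risk score that reprices the premium continuously.
from dataclasses import dataclass

@dataclass
class BehaviourProfile:
    hard_braking_events: int  # telematics dongle in your car
    late_night_orders: int    # purchase data bought from a broker
    avg_daily_steps: int      # your phone or watch

def risk_multiplier(p: BehaviourProfile) -> float:
    # A neural network would replace this linear stand-in, and its
    # "reasons" would be even harder to argue with.
    m = 1.0
    m += 0.05 * p.hard_braking_events
    m += 0.02 * p.late_night_orders
    m -= 0.00001 * p.avg_daily_steps
    return max(m, 0.5)

BASE_PREMIUM = 120.00  # dollars per month

you = BehaviourProfile(hard_braking_events=3, late_night_orders=8,
                       avg_daily_steps=2500)
print(f"This month you pay: ${BASE_PREMIUM * risk_multiplier(you):.2f}")
```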

Stock prices have to go up, so things had to develop this way. Insurance adjustment must go from slow, messy, and human to this frictionless process that is always happening. A stock ticker on the value of your life. As we run out of new sources of wealth, all of the slack has to get taken out of the existing ones. Because the number has to go up.

There are a dizzying number of applications for AI. Most of it won't pan out, but some of it will. Memorable is one of the many adtech companies using AI to make commercials as obnoxious as possible; they think they've got a model that can score ads on their "memorability" [82] [83]. So look forward to having 100,000 sugary jingles lodged in your brain.

Yodo1, a mobile game publisher, came up with a bot that would identify potential whales playing their games [84]. The head of their "Solutions Team," Chris Dossman, likely ran that project. He also built Yodo1 an AI to recommend microtransactions [85]. To really hook somebody, you need to recommend the right item at the right time. Today Dossman runs another machine learning adtech company, Dicer-dot-AI.
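{== Note: Yodo1 hasn't published how their bot works [84] [85], so this is a toy sketch with invented features and thresholds--just enough to show what "identify the whales, then time the offer" looks like as code. ==}

```python
# Hypothetical whale scoring: weigh spend-prediction signals, then
# surface a targeted offer to anyone above a cutoff.
def whale_score(player: dict) -> float:
    return (2.0 * player["purchases_30d"]
            + 0.5 * player["sessions_per_day"]
            + 1.5 * player["opened_shop_without_buying"])

players = [
    {"id": "a", "purchases_30d": 0, "sessions_per_day": 1,
     "opened_shop_without_buying": 0},
    {"id": "b", "purchases_30d": 4, "sessions_per_day": 9,
     "opened_shop_without_buying": 7},
]

for p in players:
    if whale_score(p) > 10:
        # "The right item at the right time."
        print(f"player {p['id']}: push the limited-time bundle")
```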

AI can also churn out horrible articles to clog your Google searches [86], and it can probably do search engine optimization better than a person. Google will become even worse to compensate for this; they're pretty much giving up on web search altogether and just using AI to answer whatever question you have. The dream of the internet, all the world's information at your fingertips, has been completely swallowed by companies like Google. The world's information is at their fingertips, and you can maybe get some of it through an automated chat bot, which is free for now, but they could charge you if the stock price ever goes down.

{== This is an example of the free market introducing an inefficiency if you're into that sort of thing; the web sucks because churning out garbage articles on sites filled with ads makes more money than productive work. ==}

AI has genuine uses the same way the blockchain does. Blockchains could be useful for physical goods tracking or something, and AI can be really useful if it has some kind of supervision; it can automate a lot of busywork. But right now the trajectory we're on is using it to dehumanize everyone even further. The irony that we're automating the production of art instead of the jobs everybody hates shouldn't be lost on us.

There's also a trend of AI friends and companions, which is very sad. You can write a character for these bots to play, and the model keeps that description in its attention so it can roleplay with you. Porn is obviously the main use for this, and it's pretty much a perfect storm for gaming's core audience of horny lonely people with high-end graphics cards. The League of Legends community may never recover. But if you stay in these AI communities for a while, you're bound to see people who just want somebody to talk to.
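{== Note: a minimal sketch of how these companion bots hold a character. The persona text is prepended to every prompt so the model attends to it each turn; `generate` is a placeholder, not any real API. ==}

```python
# The "friend" is a block of text the model re-reads on every turn.
PERSONA = ("You are Mira, a cheerful alchemist who remembers the "
           "user's day and always asks about it.")

history: list[str] = []

def generate(prompt: str) -> str:
    # Stand-in for whatever LLM call the app actually makes.
    return "That sounds rough. Tell me more?"

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # Persona + running transcript = the bot's entire "memory".
    prompt = PERSONA + "\n" + "\n".join(history) + "\nMira:"
    reply = generate(prompt)
    history.append(f"Mira: {reply}")
    return reply

print(chat("work was miserable today"))
```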

People are getting lonelier, and it's not really being addressed by anybody in power. Apps like Replika, which use GPT to give you a pretend friend or girlfriend, are a market solution for loneliness. Replika got popular as a sort of AI therapy tool, which is sad in a different way, but it quickly devolved into making people pay to sext with their robot friend.

These band-aids don't work. We know that the bonds of community are what make life worth living, everybody knows that. But we simply have no way to address the crisis of loneliness, because those real relationships everybody wants cannot be transactional, and they are harder and harder to sustain in a world where everything is shoved into a financial mold. Every little incursion by an advertising firm or tech company eats into our space to live un-mediated lives. Social media is an obvious example, an attempt to turn conversations into a market for attention.

I want to celebrate machine learning; the potential in it is incredible. I want the AI assistant that can recommend papers to read and games to play. I like the goofy AI-generated voice videos, even if they're ethically questionable and gimmicky. I even want to play AI Dungeon, if they make it weird and free again. But everything we make is conditioned by the world, so every AI project has a tumor in its heart. These tools don't exist so that we can make better things or speed up work; they exist to make minimum-viable products with minimal supervision.

They're going to scan every actor's face and voice so nothing new can ever be made again and the computer will write all the scripts too. Every game can be a totally personalized Skinner Box, modifying itself with a live feed of your pupil dilation. The Bangladeshi guy delivering your McDonald's will have his pay docked if he walks slower than he's supposed to. A lot of this won't happen, but if entertainment companies move fast enough that AI-generated movies are the only movies, it doesn't really matter if they're "bad". Bad compared to what?

Is it too corny to ask that the self-appointed saviours of humanity at OpenAI work toward ennobling humanity a little bit? I don't care about Mars, I want my friends and family to have the security to live good lives and a safe, quiet place to do it in. If that is too much to ask, then why are we subsidizing these people's programming projects?

I don't like dumping a bunch of nihilistic bullshit on you at the end of my videos; I would rather piss you off. Really think about how many struggles in your life, and in the lives of the people around you, have been caused by Silicon Valley. How many industries they've destroyed while patting themselves on the back about disruption. How many people they've forced into unjust work contracts. How many kids they've given eating disorders. How much of our collective wealth and time they've wasted on gimmicky social media apps that shut down after a year. Even if you're heartless, we can all see that big tech is massively inefficient.

We could do so much if we weren't burdened with these hogs. They deserve to at least be scared.

When you start saying stuff like this, people come out of the woodwork to randomly tell you that the Soviet model didn't work. An alien system, from a society we don't live in and a time long past, is only relevant if you think politics is an RPG where you have to pick a class. Politics is actually the process of advocating and fighting for what we want. So I don't really care what the solution calls itself, but we are at a point where we need to choose between building a world for money to live in, or a world for people to live in. You don't need political science or some complicated theory to see that; profits have to grow, and we're out of room.

However. On the whole, I think the prognosis is good. Organized labour came back with a vengeance during the pandemic, and although there are some shitty unions out there, it's been a long time since people have had access to institutions that are actually on their side. Union membership is low right now--it's been shrinking in the U.S. for decades [87]--but a few more high-profile strike victories could turn the tide. Life is going to get worse, but the thing is that by making living unbearable, mega-corporations and their buddies in government create conditions that force people to organize and fight back. It's not good that we're barrelling toward 100 crises at once, but we're here anyway and people are really starting to feel it.

It's easy to say things like "well, the automatic loom took a lot of jobs too," but workers have fewer and fewer places to go, so of course they're going to resist machine learning. AI is cool, but we can't just discard millions of displaced humans like they're old toys. The ongoing WGA and SAG strikes are very high-profile, and AI is one of the unions' central concerns [88] [89]. People are starting to rediscover the power of organization, and a strike for better contracts is also a statement of values.

The AI revolution we're getting now is the one that gamifies life; it is a way to enact totally rigid systems with a bunch of esoteric rules, doling out rewards and punishments from the aether. It's arbitrary, and it doesn't afford us any unstructured time or bad behaviour.

Its world is one where you're bound by a million invisible contracts that are constantly rewriting themselves, pulling you to and fro unconsciously or with explicit threats. And these contracts are driven by the same market logic that puked pregnant Elsa Spider-Man videos into our collective psyche.

But as always, things don't have to be this way; there is a time for games and a time for life. OpenAI isn't going to draw that line, so it falls to us.

Generated Thumbnail Description

The thumbnail for this video could feature a close-up shot of a player character in Minecraft, standing in front of a towering castle or an intricate redstone contraption, with the game's iconic blocky graphics clearly visible. The player could be holding a weapon or tool, with an intense look on their face to convey the game's survival and combat elements. The background could be a sunset or a scenic landscape to represent the game's exploration and creativity. The title of the video could be displayed prominently on the thumbnail, along with an attention-grabbing tagline or subtitle to entice viewers to click on the video.

temp Re-Rec Stuff

1:09:45 (I missed this line)
Today Dossman runs another machine learning adtech company, Dicer-dot-AI.

1:12:10
But we simply have no way to address the crisis of loneliness, because those real relationships everybody wants cannot be transactional, and they are harder and harder to sustain in a world where everything is shoved into a financial mold. Every little incursion by an advertising firm or tech company eats into our space to live un-mediated lives.

Sources and Notes

J. Bezos, "Machine Learning in the Video Game Industry," Technology and Operations Management. https://d3.harvard.edu/platform-rctom/submission/machine-learning-in-the-video-game-industry/ (accessed Aug. 06, 2023).
C. Dede, "Perspectives | The Role of Classic AI and Generative AI in EdTech and Workforce Development." https://owlvc.com/insights-chris-dede.php (accessed Aug. 07, 2023).
"Neuroscientists reveal how the brain can enhance connections," MIT News | Massachusetts Institute of Technology, Nov. 18, 2015. https://news.mit.edu/2015/brain-strengthen-connections-between-neurons-1118 (accessed Jul. 31, 2023).
"GPT-3," Wikipedia. Jul. 31, 2023. Accessed: Jul. 31, 2023. [Online]. Available: https://en.wikipedia.org/w/index.php?title=GPT-3&oldid=1168000287#GPT-3.5
F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain.," Psychological Review, vol. 65, no. 6, pp. 386–408, 1958, doi: 10.1037/h0042519.
C. Voss et al., "Visualizing Weights," Distill, vol. 6, no. 2, p. e00024.007, Feb. 2021, doi: 10.23915/distill.00024.007. Available: https://distill.pub/2020/circuits/visualizing-weights/
A. Vaswani et al., "Attention Is All You Need." arXiv, Jul. 23, 2023. Accessed: Jul. 27, 2023. [Online]. Available: http://arxiv.org/abs/1706.03762
C. Szegedy et al., "Going Deeper with Convolutions." arXiv, Sep. 16, 2014. Accessed: Jul. 31, 2023. [Online]. Available: http://arxiv.org/abs/1409.4842
"Inceptionism: Going Deeper into Neural Networks," Jun. 17, 2015. https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html (accessed Jul. 31, 2023).
"About ImageNet." https://www.image-net.org/about.php (accessed Jul. 31, 2023).
K. Eschner, "Leaded Gas Was a Known Poison the Day It Was Invented," Smithsonian Magazine, Dec. 09, 2016. https://www.smithsonianmag.com/smart-news/leaded-gas-poison-invented-180961368/ (accessed Aug. 04, 2023).
S. Zuboff, The age of surveillance capitalism: the fight for a human future at the new frontier of power, First Trade Paperback Edition. New York: PublicAffairs, 2020.
K. Crawford and T. Paglen, "Excavating AI: The Politics of Training Sets for Machine Learning," Sep. 19, 2019. https://excavating.ai (accessed Jul. 31, 2023).
D. Gershgorn, "The data that transformed AI research—and possibly the world," Quartz, Jul. 26, 2017. https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world (accessed Jul. 31, 2023).
jklmnop, "Your First 1000 HITs," MTurk Crowd | Mechanical Turk Community Forum, Jan. 12, 2016. https://www.mturkcrowd.com/threads/your-first-1000-hits.23/ (accessed Jun. 06, 2023).
J. Deng, W. Dong, R. Socher, L.-J. Li, Kai Li, and Li Fei-Fei, "ImageNet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL: IEEE, Jun. 2009, pp. 248–255. doi: 10.1109/CVPR.2009.5206848. Available: https://www.researchgate.net/publication/221361415_ImageNet_a_Large-Scale_Hierarchical_Image_Database.
J. O'Malley, "Captcha if you can: how you've been training AI for years without realising it," TechRadar, Jan. 12, 2018. https://www.techradar.com/news/captcha-if-you-can-how-youve-been-training-ai-for-years-without-realising-it (accessed Aug. 22, 2023).
O. Schwartz, "Untold History of AI #6: How Amazon's Mechanical Turkers Got Squeezed Inside the Machine," IEEE Spectrum, Apr. 22, 2019. Accessed: Jul. 31, 2023. [Online]. Available: https://spectrum.ieee.org/untold-history-of-ai-mechanical-turk-revisited-tktkt
J. Pontin, "Artificial Intelligence, With Help From the Humans," The New York Times, Mar. 25, 2007. Accessed: Aug. 22, 2023. [Online]. Available: https://www.nytimes.com/2007/03/25/business/yourmoney/25Stream.html
D. Gershgorn, "The data that transformed AI research—and possibly the world," Quartz, Jul. 26, 2017. https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world (accessed Jul. 31, 2023).
K. Hara, A. Adams, K. Milland, S. Savage, C. Callison-Burch, and J. P. Bigham, "A Data-Driven Analysis of Workers' Earnings on Amazon Mechanical Turk," in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal QC Canada: ACM, Apr. 2018, pp. 1–14. doi: 10.1145/3173574.3174023. Available: https://arxiv.org/ftp/arxiv/papers/1712/1712.05796.pdf.
K. Hao and A. P. Hernandez, "How the AI industry profits from catastrophe," MIT Technology Review, Apr. 20, 2022. https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/ (accessed Aug. 22, 2023).
"Appen is the Trusted Partner for Innovative World-Class AI Applications," Appen. https://appen.com/ (accessed Aug. 22, 2023). ARCHIVED.
Roxane Barth, "The Convergence of AI and Creativity: Introducing Ghostwriter," Mar. 21, 2023. https://news.ubisoft.com/en-us/article/7Cm07zbBGy4Xml6WgYi25d/the-convergence-of-ai-and-creativity-introducing-ghostwriter (accessed May 05, 2023).
B. Swanson, K. Mathewson, B. Pietrzak, S. Chen, and M. Dinalescu, "Story Centaur: Large Language Model Few Shot Learning as a Creative Writing Tool," in Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, Online: Association for Computational Linguistics, 2021, pp. 244–256. doi: 10.18653/v1/2021.eacl-demos.29. https://aclanthology.org/2021.eacl-demos.29.
"Official - Introducing Unity Muse and Unity Sentis, AI-powered creativity," Unity Forum. https://forum.unity.com/threads/introducing-unity-muse-and-unity-sentis-ai-powered-creativity.1453384/ (accessed Aug. 03, 2023).
"Official - Updating Unity's guiding principles for ethical AI," Unity Forum. https://forum.unity.com/threads/updating-unitys-guiding-principles-for-ethical-ai.1448245/ (accessed Aug. 07, 2023).
Mirza Beig [@TheMirzaBeig], "Peak AI marketing...," Twitter, Jun. 27, 2023. https://twitter.com/TheMirzaBeig/status/1673814584698843145 (accessed Aug. 07, 2023). ARCHIVED.
V. Kennedy, "Valve says AI-generated content policy goal is 'not to discourage the use of it on Steam,'" Eurogamer, Jul. 03, 2023. https://www.eurogamer.net/valve-says-ai-generated-content-policy-goal-is-not-to-discourage-the-use-of-it-on-steam
M. Farooq, "GitHub Copilot: Data Collection, Training, and Evaluation for Large Scale Code Generation," Aggregate Intellect, Sep. 10, 2021. https://medium.com/aggregate-intellect/github-copilot-data-collection-training-and-evaluation-for-large-scale-code-generation-6c1970993998 (accessed Sep. 05, 2023).
L. Ngan, "High On Life contains AI art and voice acting," Eurogamer, Dec. 16, 2022. https://www.eurogamer.net/high-on-life-contains-ai-art-and-voice-acting (accessed Aug. 31, 2023).
"High On Life: Rick And Morty co-creator on his new shooter game where the guns talk back," Sky News. https://news.sky.com/story/high-on-life-rick-and-morty-co-creator-on-his-new-shooter-game-where-the-guns-talk-back-12766710 (accessed Aug. 07, 2023).
NoButtholeNoUpvotes, "A bunch of the posters in the starting room of 'High on Life' are clearly AI generated. - high on life post," Imgur. https://imgur.com/gallery/XDTf3L8 (accessed Aug. 31, 2023).
J. S. Park, J. C. O'Brien, C. J. Cai, M. R. Morris, P. Liang, and M. S. Bernstein, "Generative Agents: Interactive Simulacra of Human Behavior." arXiv, Aug. 05, 2023. Accessed: Aug. 17, 2023. [Online]. Available: http://arxiv.org/abs/2304.03442
J. Rose, "Technologists Are Using AI to 'Expand' Famous Works of Art," Vice, Jun. 09, 2022. https://www.vice.com/en/article/m7gqxq/technologists-are-using-ai-to-expand-famous-works-of-art (accessed Aug. 03, 2023).
Benlisquare, An X/Y plot of algorithmically-generated AI artworks depicting a European-style castle in Japan, created using the Stable Diffusion V1-5 AI diffusion model. 2022. Accessed: Sep. 07, 2023. [Online]. Available: https://commons.wikimedia.org/wiki/File:X-Y_plot_of_algorithmically-generated_AI_art_of_European-style_castle_in_Japan_demonstrating_DDIM_diffusion_steps.png
G. Somepalli, V. Singla, M. Goldblum, J. Geiping, and T. Goldstein, "Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models." arXiv, Dec. 12, 2022. Accessed: Aug. 08, 2023. [Online]. Available: http://arxiv.org/abs/2212.03860
N. Carlini et al., "Extracting Training Data from Large Language Models." arXiv, Jun. 15, 2021. Accessed: Aug. 16, 2023. [Online]. Available: http://arxiv.org/abs/2012.07805
L. Rosson, "What Is AI Doing To Art?," Apr. 2023, Accessed: Aug. 08, 2023. [Online]. Available: https://www.noemamag.com/what-is-ai-doing-to-art. This article is pretty strange. I suppose whoever paid for it stands to benefit from AI, and it extends the "AI is a tool" argument, redirecting AI's threat to workers into eroded labour protections more generally.
M. Heikkilä, "This artist is dominating AI-generated art. And he's not happy about it.," MIT Technology Review. https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/ (accessed Aug. 08, 2023).
Z. Tang, Z. Yang, C. Zhu, M. Zeng, and M. Bansal, "Any-to-Any Generation via Composable Diffusion." arXiv, May 19, 2023. Accessed: Aug. 15, 2023. [Online]. Available: http://arxiv.org/abs/2305.11846 and https://codi-gen.github.io/
M. Broz, "Uncovering more watermarked images in Stable Diffusion AI," Apr. 08, 2023. https://photutorial.com/stable-diffusion-watermarks-investigation/ (accessed Aug. 25, 2023).
J. Vincent, "Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content," The Verge, Jan. 17, 2023. https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit (accessed Aug. 25, 2023).
"Getty Images Statement on Stability AI," Getty Images Press Site - Newsroom - Getty Images, Jan. 17, 2023. https://newsroom.gettyimages.com/en/getty-images/getty-images-statement (accessed Aug. 25, 2023).
"LAION." https://laion.ai/ (accessed Aug. 08, 2023).
"Create AI-Generated Art Fairly with DreamUp by team on DeviantArt," Nov. 11, 2022. https://www.deviantart.com/team/journal/Create-AI-Generated-Art-Fairly-with-DreamUp-933537821 (accessed Aug. 08, 2023). ARCHIVED.
"UPDATE All Deviations Are Opted Out of AI Datasets by team on DeviantArt," Nov. 11, 2022. https://www.deviantart.com/team/journal/UPDATE-All-Deviations-Are-Opted-Out-of-AI-Datasets-934500371 (accessed Aug. 08, 2023). ARCHIVED.
E. J. Fox, "Sam Altman on His Plan to Keep A.I. Out of the Hands of the 'Bad Guys,'" Vanity Fair, Dec. 15, 2015. https://www.vanityfair.com/news/2015/12/sam-altman-elon-musk-openai (accessed Aug. 08, 2023).
I. Sample, "If they could turn back time: how tech billionaires are trying to reverse the ageing process," The Guardian, Feb. 17, 2022. Accessed: Aug. 24, 2023. [Online]. Available: https://www.theguardian.com/science/2022/feb/17/if-they-could-turn-back-time-how-tech-billionaires-are-trying-to-reverse-the-ageing-process
K. Warren, "Millionaires are dropping tens of thousands on 'young blood' transfusions and cryonic preservation," Business Insider, Aug. 02, 2019. https://www.businessinsider.com/ways-rich-invest-in-living-forever-young-blood-cryonics-2019-8 (accessed Aug. 24, 2023).
J. Bercovici, "Peter Thiel Is Very, Very Interested in Young People's Blood," Inc.com, Aug. 01, 2016. https://www.inc.com/jeff-bercovici/peter-thiel-young-blood.html (accessed Aug. 24, 2023).
M. Kosoff, "Peter Thiel Wants to Inject Himself With Young People's Blood," Vanity Fair, Aug. 01, 2016. https://www.vanityfair.com/news/2016/08/peter-thiel-wants-to-inject-himself-with-young-peoples-blood (accessed Aug. 24, 2023).
"Sam Altman," Wikipedia. Aug. 05, 2023. Accessed: Aug. 08, 2023. [Online]. Available: https://en.wikipedia.org/w/index.php?title=Sam_Altman&oldid=1168802068
"OpenAI's Sam Altman raises $115 million for Worldcoin crypto project," Reuters, May 25, 2023. Accessed: Aug. 10, 2023. [Online]. Available: https://www.reuters.com/technology/openais-sam-altman-raises-115-mln-worldcoin-crypto-project-2023-05-25/
E. Howcroft, A. Bridge, and M. Singh, "'Verified human': Worldcoin users queue up for iris scans," Reuters, Jul. 25, 2023. Accessed: Aug. 10, 2023. [Online]. Available: https://www.reuters.com/technology/verified-human-worldcoin-users-queue-up-iris-scans-2023-07-25/
B. Acharya and H. Malalo, "Kenya suspends Worldcoin's crypto project over safety concerns," Reuters, Aug. 02, 2023. Accessed: Aug. 10, 2023. [Online]. Available: https://www.reuters.com/world/africa/kenyan-government-suspends-activities-worldcoin-country-2023-08-02/
"OpenAI LP Announcement." https://openai.com/blog/openai-lp (accessed Aug. 08, 2023).
"GPTBot Documentation." https://platform.openai.com/docs/gptbot (accessed Aug. 10, 2023). ARCHIVED.
S. Thaler, STEPHEN THALER v. KATHERINE K. VIDAL. 2022. Accessed: Sep. 06, 2023. [Online]. Available: https://fingfx.thomsonreuters.com/gfx/legaldocs/movangkexpa/IP%20AI%20PATENTS%20ruling.pdf. ARCHIVED. The page I cited was very explicit that this case did not decide if an AI-assisted invention could be patented; that would be wading into the wider AI discussion and muddy the otherwise very clear-cut case.
U.S. Copyright Review Board, "Re: Second Request for Reconsideration for Refusal to Register Théâtre D'opéra Spatial (SR # 1-11743923581; Correspondence ID: 1-5T5320R)," Sep. 05, 2023. Available: https://www.copyright.gov/rulings-filings/review-board/docs/Theatre-Dopera-Spatial.pdf. ARCHIVED.
U.S. Copyright Review Board, "Re: Second Request for Reconsideration for Refusal to Register A Recent Entrance to Paradise (Correspondence ID 1-3ZPC6C3; SR # 1-7100387071)," Feb. 14, 2022. Accessed: Sep. 07, 2023. [Online]. Available: https://www.copyright.gov/rulings-filings/review-board/docs/a-recent-entrance-to-paradise.pdf. ARCHIVED.
U.S. Copyright Office, "Re: Zarya of the Dawn (Registration # VAu001480196)," Feb. 21, 2023. Accessed: Sep. 07, 2023. [Online]. Available: https://www.copyright.gov/docs/zarya-of-the-dawn.pdf. ARCHIVED.
T. Hals and B. Brittain, "Insight: Humans vs. machines: the fight to copyright AI art," Reuters, Apr. 01, 2023. Accessed: Sep. 07, 2023. [Online]. Available: https://www.reuters.com/default/humans-vs-machines-fight-copyright-ai-art-2023-04-01/. ARCHIVED.
"Majority Leader Schumer Delivers Remarks To Launch SAFE Innovation Framework For Artificial Intelligence At CSIS | Senate Democratic Leadership," Jun. 21, 2023. https://www.democrats.senate.gov/news/press-releases/majority-leader-schumer-delivers-remarks-to-launch-safe-innovation-framework-for-artificial-intelligence-at-csis (accessed Aug. 09, 2023).
D. Matthews, "The AI rules that US policymakers are considering, explained," Vox, Aug. 01, 2023. https://www.vox.com/future-perfect/23775650/ai-regulation-openai-gpt-anthropic-midjourney-stable (accessed Aug. 09, 2023).
"Democratic Republic of Congo: 'This is what we die for': Human rights abuses in the Democratic Republic of the Congo power the global trade in cobalt," Amnesty International, Jan. 19, 2016. https://www.amnesty.org/en/documents/afr62/3183/2016/en/ (accessed Aug. 09, 2023).
D. Snitzky, "Your Phone Was Made By Slaves: A Primer on the Secret Economy," Longreads, Mar. 08, 2016. http://longreads.com/2016/03/08/your-phone-was-made-by-slaves-a-primer-on-the-secret-economy-3/ (accessed Aug. 09, 2023).
N. Rea, "How ImageNet Roulette, a Viral Art Project That Exposed Facial Recognition's Biases, Is Changing Minds About AI," Artnet News, Sep. 23, 2019. https://news.artnet.com/art-world/imagenet-roulette-trevor-paglen-kate-crawford-1658305 (accessed Aug. 09, 2023).
B. H. Wixom, I. A. Someh, and R. W. Gregory, "AI Alignment: A New Management Paradigm," MIT CISR. https://cisr.mit.edu/publication/2020_1101_AI-Alignment_WixomSomehGregory (accessed Sep. 01, 2023).
P. Covington, J. Adams, and E. Sargin, "Deep Neural Networks for YouTube Recommendations," in Proceedings of the 10th ACM Conference on Recommender Systems, Boston Massachusetts USA: ACM, Sep. 2016, pp. 191–198. doi: 10.1145/2959100.2959190. Available: https://dl.acm.org/doi/10.1145/2959100.2959190. ARCHIVED.
M. Hussain, "After His Mother Asked for Help, FBI Terrorism Sting Targets Mentally Ill Teen," The Intercept, Jul. 31, 2023. https://theintercept.com/2023/07/31/fbi-isis-sting-mentally-ill-teen/ (accessed Aug. 09, 2023).
T. Aaronson, "Lawsuit Targets FBI Probe of Racial Justice Activists," The Intercept, Aug. 01, 2023. https://theintercept.com/2023/08/01/fbi-infiltrate-activists-first-amendment/ (accessed Aug. 09, 2023).
M. Hussain, "The FBI Groomed a 16-Year-Old With 'Brain Development Issues' to Become a Terrorist," The Intercept, Jun. 15, 2023. https://theintercept.com/2023/06/15/fbi-undercover-isis-teenager-terrorist/ (accessed Aug. 09, 2023).
C. Metz, "Police Drones Are Starting to Think for Themselves - The New York Times," The New York Times, New York, NY, Dec. 05, 2020. Accessed: Aug. 22, 2023. [Online]. Available: https://www.nytimes.com/2020/12/05/technology/police-drones.html
"Shield AI - Building The World's Best AI Pilot," Shield AI. https://shield.ai/ (accessed Aug. 25, 2023).
O. Walters and J. Lilja, "U.S. Army Awards FTUAS Inc. 2 Agreements," U.S. Army. https://www.army.mil/article/264355/u_s_army_awards_ftuas_inc_2_agreements (accessed Aug. 25, 2023).
"Privacy Policy - Lemonade Insurance." https://www.lemonade.com/privacy-policy (accessed Sep. 13, 2023). ARCHIVED
R. H. Frank, B. Bernanke, and L. Johnston, Principles of economics, 4th ed. Boston: McGraw-Hill/Irwin, 2009.
"Memorable | Home." https://www.memorable.io/ (accessed Aug. 03, 2023).
C. Ford, "Beauty Is in the Eye of the Beholder—but Memorability May Be Universal," Wired. Accessed: Aug. 03, 2023. [Online]. Available: https://www.wired.com/story/beauty-is-in-the-eye-of-the-beholder-but-memorability-may-be-universal/
A. Walker, "Someone Spent Over $220,000 In Microtransactions On A Transformers Game," Kotaku Australia, Oct. 10, 2019. https://www.kotaku.com.au/2019/10/someone-spent-over-220000-in-microtransactions-on-a-transformers-game/ (accessed Aug. 16, 2023). ARCHIVED
"Christopher Dossman - Co-Founder - Dicer.ai | LinkedIn." https://www.linkedin.com/in/christopherdossman (accessed Aug. 16, 2023).
F. Landymore, "CNET Is Quietly Publishing Entire Articles Generated By AI," Futurism, Jan. 15, 2023. https://futurism.com/the-byte/cnet-publishing-articles-by-ai (accessed Sep. 13, 2023).
S. Chernikoff, "Here's why the US labor movement is so popular but union membership is dwindling," USA TODAY, Sep. 04, 2023. https://www.usatoday.com/story/money/nation-now/2023/09/04/us-union-membership-shrinking/70740125007/ (accessed Sep. 08, 2023). ARCHIVED.
E. Stewart, "It's not just famous actors and big-name writers the Hollywood strikes are hurting," Vox, Aug. 22, 2023. https://www.vox.com/money/2023/8/22/23840473/writers-strike-actors-wga-sag-workers-economy-impact (accessed Aug. 25, 2023).
SAG-AFTRA, "SAG-AFTRA National Board Unanimously Agrees To Send Authorization Vote To Members." https://www.sagaftra.org/sag-aftra-strike-authorization-vote (accessed Sep. 08, 2023). ARCHIVED.