CHAPTER 4: Side Effects and Pitfalls
"The vitality of democracy depends on popular knowledge of complex questions."
—S.S. McClure
Writing this chapter, in which I present what many see as the "bad news" of AI, was simultaneously depressing and encouraging. Depressing because, at the time I'm writing, a relatively small number of large corporations are deploying AI into our lives as fast as possible. And it's all pretty opaque. Encouraging because major change from AI has yet to happen. There is time for you, me, our loved ones to shape change for the better. To be a driver, not a passenger. You teach the machines.
The words came easily, but I became dejected while building a point of view from facts, interpretation of facts, and theories to explain what is not publicly available. There's a lot that is behind a curtain. You intuitively know AI will reshape your life. Simultaneously, you don't understand how. You can be overwhelmed by this combination of knowledge and uncertainty. I became overwhelmed and depressed as I considered the negative implications of this new technology, accelerated by a generational deployment of capital: concentration of wealth, erosion of education, disruption of jobs, and shifting global security. My editor stepped in and coached me to focus on the specific, the actionable. Always good advice.
In this chapter, you'll see my editorial point of view come through, so be a critical reader. Know that I remain an AI optimist, so I try to balance points of potential doom with action you can take. The legal and publicity departments at the companies I discuss may argue with what I write. In many ways I am rooting for these same companies to succeed. They're doing incredibly difficult and historic work. I invite them to help make a second edition of this book even better. But a corporation is legally obligated to seek one simple outcome: Maximize profit. The reality is that better human outcomes depend entirely on you, me, your parents, your kids, the values we teach, and the decisions we make. I try to give you at least some idea of how you can be part of the solution to the problems I discuss. But if there is one thing you should take away from this chapter, it is that you need to prepare for the unknown. Prepare by taking stock of your first principles. Mine are "Be nice. Get stuff done. Make things less crappy." Medical professionals go with "Above all, do no harm." What are yours? We're in for a lot of change, currently driven by corporations in effect experimenting and gambling with our economy and lives. Anchor yourself with clear principles that can steer you when unexpected change from AI hits. Humans are built to adapt. We're going to do a lot of it in the coming decades.
A side effect is an unintended bad thing you experience from doing something else. A headache from taking antibiotics, maybe. A pitfall is a known hazard you allow yourself to fall into. A headache from drinking too much.
An unintended side effect in the world of AI? Depending on your point of view, the relative reduction in investment in renewable energy in favor of investment in nuclear energy. A corresponding pitfall "we" knowingly step into with more nuclear energy? The coming increase in solid nuclear waste stored on site at nuclear energy plants, at least in the U.S., because we as a society, represented by the people we've elected for the past twenty years, are politically unable to pull off long-term consolidated storage. See Yucca Mountain.
But even this side effect can have a balancing upside. Investment in nuclear energy is bringing real innovation in the form of more efficient, cleaner nuclear reactors. And if you consider a reduction of investment in renewable energy a side effect because of climate change, then you have to consider that use of nuclear energy is better than burning more fossil fuels.
Regrets
In May of 2023, Geoffrey Hinton resigned from Google. Eleven years earlier, he and his students at the University of Toronto had built the breakthrough neural network that launched modern deep learning. They founded a company that was quickly bought by Google for $44 million. Dr. Hinton went to work for Google to advance the research. A decade later, at the time of his resignation, he stated that a part of him regretted his life's work (Kleinman & Vallance, 2023).
The main inventor of modern AI regrets his life's work. Sit with that.
Geoffrey Hinton, an insider's insider, knows AI as well as or better than anyone else on the planet. He resigned from Google, the original industrial AI company, so he could speak freely about the hazards he sees at Google and beyond.
Dr. Hinton was a hero of sorts to me and my colleagues working in health AI long before he stood on principle. He worked for decades against conventional wisdom to prove the power of computer programs that, modeled on how neurons in the brain work, learn skills by analyzing data. After Google joined the AI arms race started by Microsoft's investment in OpenAI in 2020, he became concerned that his company and its competitors were moving too fast, given the stakes for the rest of us.
He grew concerned that rapid proliferation of "fake" AI-generated text, video, and voice would make it impossible for us to know what was true. He grew very concerned that we would lose our jobs and incomes as AI replaced or cheapened the labor of paralegals, analysts, call center workers, writers, lawyers, financial experts, doctors, nurses, engineers, and software programmers. He became very, very concerned with the weaponization of AI into autonomous killing machines. Dr. Hinton wasn't alone. Even before his resignation, over a thousand technology leaders called for a moratorium on training advanced AI. They wanted time to understand possible side effects and work to minimize the harm of known pitfalls.
Too late. A year later, Microsoft effectively bought a nuclear power plant. Bezos, Musk, Pichai, Nadella, Altman, and Cook—the modern-day Stanfords, Rockefellers, Dukes, and Morgans—couldn't risk someone else winning. Shareholders demanded returns. Not just some mysterious shareholder "other," but each and every one of us invested in the tech-heavy U.S. stock market. Google ignored the call for a moratorium and rolled out AI-generated search answers at the top of their search page.
Change
Side effects and pitfalls flow naturally from change. Artificial intelligence is a miles-long freight train of change driven by hundreds of billions of dollars. You, I, your parents, your kids are locked in a stalled car at the railroad crossing. Artificial intelligence is changing or will soon change how you write a report for work or an essay for school, how firms improve their profits by automating junior associate work, how you drive a car, how mental health problems are identified, how insurance coverage is denied, how you get your electricity, how you trust or mistrust information, how you experience art and entertainment, and how wars are fought.
Which changes will bring side effects? Which have known pitfalls? Wouldn't it be nice to take a minute and think about it? Like the experts wanted "way back" in 2023?
Practical AI went from invention to industry in ten years. Modern neural networks broke through in 2012 and became scalable five years later with the Transformer in 2017. Corporate industrialization into a financially and politically intertwined handful of corporations? Five years between 2017 and 2022. What took more than one hundred years for the first Industrial Revolution took only ten for AI. As I write this in 2025, the Big AI companies are in a race to remake the knowledge economy. How many quarterly earnings reports do you think they're willing to produce before they can report returns to their impatient investors? The leadership and shareholders of the Big AI companies in the U.S. alone are betting hundreds of billions of dollars that they can return trillions as fast as possible. Look at the concentration of wealth in the hands of the leaders of these companies and their investors. Again, they have a legal obligation to maximize profits. Do you think they're truly, fundamentally interested in growing the whole pie?
Nothing like AI has ever happened before. It went from theory to practice in ten years. The economy of AI as it's currently playing out means the richest corporations control the means of production right up front. Contrast this with the rise of the internet and World Wide Web. Public communication protocols arose out of publicly funded research and were taken up by anyone with a computer and a phone line. Web browsers and server software freely available to all allowed people to use their existing phone lines to build their own websites at home. Internet service providers sprang up at the local town level. The web quickly became of, by, and for the people.
Artificial intelligence is on the opposite track. The Big AI corporations possess barely comprehensible financial power. They use real and perceived expertise to gain political influence based in part on a popular assumption that AI is central to national security. Multiple sessions of Congress and multiple presidents have come and gone with no new regulatory guardrails in the U.S. Hundreds of billions of dollars already at stake demand returns. It's as if Gutenberg and the early printing press experts had never been chased out of Mainz during an unrelated religious power struggle, the scattering that in reality let the printing press disseminate organically. It's as if, instead, they formed a corporate combine, an industrial business group that held absolute power over the manufacture and use of the printing press. It's as if they cozied up to and contributed hundreds of millions in dark money to the most powerful leaders in Europe of every party to insulate themselves from regulation and maximize profits. It's as if they relaxed standards on control of misinformation in the books they printed to gain influence. It's as if human knowledge and skill were captured, transferred to others, and used for good and bad in entirely unprecedented new ways by corporations concerned only with winning the trillions of thalers at stake.
Rapid societal, cultural, and economic change directed by profit. Maybe it'll be fine. Let's look at puppies and covet the lives of others on social media instead of worrying.
With that rant out of the way, it's time to buck up and be part of the solution. "You teach the machines" could mean you're passively milked of your data and money. Or it could mean you're in front of the room, directing and taking charge. You're the windshield, not the bug. You teach the machines.
Dr. Hinton's Fears
Geoffrey Hinton gives us a framework for the first major known side effects and pitfalls we'll discuss: misinformation, job loss, and killer robots (the use of AI in war). In my professional life, I've spent a lot of time working on the first two, albeit focused on the health sector. Thankfully, I've never met a killer robot in a "hot war" but have had direct experience with scary "cold war" AI threats and harm.
Misinformation
Consider the same internet search three months apart for "european causes of accidental death" using the Google search page, which is how around ninety percent of us search globally. I have screenshots of everything to prove this actually happened.
On November 19, 2024, Google's AI Overview listed the causes as:
- road traffic injuries
- drowning
- falls
- burns
- poisoning
It stated that "road crashes are the most significant cause," which I took to mean this was a rank ordered list.
On January 27, 2025, Google's AI Overview reordered and changed the causes to:
- road traffic accidents
- falls
- drowning
- poisoning
- work-related accidents
In three months, falls overtook drowning and burns dropped from the list to be replaced by work-related accidents.
Which is true? Turns out neither, according to my own research on the website of Eurostat, the statistical office of the European Union, which Google seemed to point to as a source for both AI Overview results. A half hour spent with the freely available data there revealed the causes, in order of decreasing death rate (deaths per 100,000 people), to be:
- falls
- other
- transportation accidents
- poisoning
- drowning
"Other" includes a scary list of things like struck by falling objects, exposure to animate mechanical forces, and overexertion.
Generative AI by its very nature will always give you (generate) an answer, an image, a song, a diagnosis. Truth doesn't matter in subjective "eye of the beholder" circumstances, like composing an advertisement when you just need a starting point. Truth matters in the realm of life-and-death facts and figures. Say you're a busy European legislator and need a sound bite for your speech about accidental death. You draft the speech with results from the top of the search page and understandably miss the small-font disclaimer that "Generative AI Is Experimental." This is what Dr. Hinton was talking about when he resigned. People lose sight of what is really true with unreliable or manipulated AI as an intermediary.
You'll hear the term "hallucination" thrown around when AI presents something that you figure out is straight up wrong. I prefer "drunk uncle" because I'd rather think of AI in the context of the example above as an inebriated relative spouting off malarkey and conspiracy theories. Hallucination is a kind word used by some people who don't want you to think too hard about the fact that they are putting out technology that is inherently flawed, that they know it, and that they are more interested in profits than in the integrity of information. A lie is an intentionally false statement. I have to believe Google knows its AI Overview makes false statements, yet they put it out there intentionally. Hallucination, drunk uncle malarkey, or lies? You decide. And by so choosing, exert influence.
I'm going to leave you to extrapolate to other important situations where you or someone you trust uses an unreliable intermediary like AI Overview. School, health, work, personal finance. A single screwy web search may seem harmless until you multiply eight and a half billion searches per day by this demonstrated potential to be both inconsistent and wrong. Is this OK?
Let's be the windshield, not the bug, and—to mix a metaphor but keep it automobile-related—put ourselves in the driver's seat. What's your first principle for truth? When do facts matter to you? What's your threshold for trusting an intermediary? When does it matter if you believe your drunk uncle or not? Start by deliberately picking and choosing when you take information at face value when it comes to you via AI. Are the stakes low or high for whether the information is true? Would you go with what your drunk uncle tells you or ignore him? Be a skeptic at whatever level is right for you in that situation. When objective truth matters, AI should be considered wrong until proven right, at least as it's being rolled out to us in the mid-2020s. If you're going to trust AI, consider verifying through a third party that the AI has controls in place to detect and remove misinformation.
Fake text, audio, and video are easily generated with AI. A one-time spike in downloads of my podcast originated in a foreign country a few months before I started writing this book. At the time, we'd published thirty-three episodes, with only one or two previous downloads from that same country. I was excited to see an unexpected bump in downloads, but something looked fishy. All thirty-three episodes had been downloaded at once, something that had never happened before. I looked for information using Google, and it turned out the podcast community sees this type of activity regularly, and not just from foreign countries. The accepted explanation is that these bulk downloads are data harvests by AI companies working on voice generation AI. The hard truth is that I am at increased risk of a "deep fake" of my voice because my recordings have been harvested by an unknown entity in a country sometimes viewed as an adversary to my homeland. For that reason, we don't use last names for guests on the show.
Misinformation can be more subtle, too. I have a couple of friends, one young, one… not so young, who are single and trying to meet people to date. Instagram shows them both a steady stream of content along the lines of "You don't need anyone! You're strong, independent, and don't need anybody!" TikTok feeds another friend a steady stream of "The opposite sex is controlling and mean!" Major social media apps show us what they think will grab and keep our attention. In the olden days of the web, when I worked at Ask Jeeves, we talked about "engagement" and "stickiness" of a website. Could we gain "eyeballs" and keep them looking at a website longer so we could show an ad or sell something? Social media makes billions on the same principle. Behind the scenes of Meta, the understandable set of rules that Instagram and Facebook started with (my friend is interested in boats, so maybe I will be, too) has been replaced entirely by AI. And that AI will do whatever it takes to gain and hold our attention. It learns that negative stereotypes and "us vs them" division will grab lots of people. We're evolutionarily hardwired to respond more to differences than similarities. So that's what social media AI feeds us.
When it comes to truth, social media AI is effectively unmanageable. It may be politically expedient for the leadership of social media companies to say they are increasing or pulling back on their fact-checking efforts, depending on which party is in power. But the reality is they've lost control and couldn't do the fact-checking if they wanted to. The machines they taught to gain and hold our attention move too fast and are too complex to govern. How have you seen misinformation spread in your life? Where could AI have played a role? Did you contribute to the spread? Remember, you teach the machines.
What's the windshield stance with more subtle misinformation? Decide what your first principles are. When do you care that you're being manipulated to gain your attention? Turn awareness into action and make more deliberate choices about what social media you use and how you interpret and consume social media content. Run an experiment: Click on a series of negative or divisive suggestions from the AI. Search for a divisive or negative topic. Observe how your feed changes. Do your friends and loved ones use social media? Have a conversation with them about what you discovered in your experiment. Don't like what you see? Engage with social media AI on your own terms. Vote with your feet and seek alternatives.
Job Loss: Automation over Augmentation?
Artificial intelligence machines can be taught to do work that once only humans could do. A friend asked me for help understanding AI. He was in a new job and had been tasked with learning the alphabet soup of AI: LLM, GPT, GPU, NLP, GenAI. I happily obliged over lunch. We had a great discussion, during which he shared public information about his company's products. One of the products is an AI that watches video feeds from multiple hospital rooms. Its job is to watch the patient on the video and automatically alert a single human mental health monitor (babysitter) to bad stuff—bad stuff like patients trying to harm themselves. The promise was a reduction in the number of humans required to care for at-risk patients. Buy this AI, save even more money because you won't have to pay as many people. On one hand, I am all about reducing the cost and improving outcomes of mental health care so maybe we can have more of it. I've also been in one of those rooms. I've experienced the healing power of human connection and warmth a mental health monitor provides while keeping the patient safe. The business could have sold this new technology as a way to help existing mental health monitors do an even better job, get an even better outcome. Collaborate with mental health monitors to come up with ways to augment their care. Maybe by giving them insight into hidden signals in the patient's behavior and mental state so they can intervene and provide the right support before a crisis occurs. Instead, the business was making the case for saving money by automating work required by regulators and accepting the loss of human connection. Pure automation over augmentation.
We've been here before, just not for knowledge work. The summer I turned nineteen, I earned money as a brick-making machine operator in a factory that made specialty bricks used in high-temperature smelters and kilns. I would press a big red start button and step to the end of a conveyor belt. The brick machine stamped out three bricks and deposited them on the belt every few minutes of my eight- to twelve-hour swing shift, minus two fifteen-minute breaks and thirty minutes for lunch. The raw bricks were fragile and crumbly, made of an exotic (for bricks) mix of specialty clays and minerals. I picked up each brick with very flat hands pressing evenly on two sides of the brick and deposited it on a metal rack that, when full, was whisked away to an oven by forklift. I was probably one of the last summer workers to be hired because the first robotic brick handler was installed that summer. The ten or so full-time operators were offered training as robot technicians. One, a man named Baker, took the opportunity that summer. The other nine refused with reasons varying from "It'll never work for all the types of brick" to "I'm no geek; Baker, you're a nerd!" I left at the end of August to go back to school, never to return. If the company is still in business, my guess is it's with all its brick machines operated by robots, no manual operators in sight.
A few years after I worked at the brick factory, my first job in technology was to get data out of databases. I learned to write code in "Structured Query Language" to select rows and columns from different parts of the database. The language was and remains a well-standardized and powerful way to tell a database what you want (the "query"). The trick was knowing both the language and what the data meant. I understood biology, so was able to quickly and accurately retrieve the right data from a database full of complex biological information. I earned a good living with these skills and others as I built a career solving problems with data and technology.

In the two years leading up to writing this book, I saw the same dynamic from the brick factory play out in my technology career. One person in ten embraced new AI tools that "understood" both the structured query language and the meaning of rows and columns in a database. The remaining nine were some combination of frightened, comfortable, and skeptical. My job was to drive adoption of these AI tools because they would allow more scientists to work directly with the data they needed to do critical research in child health. This would scale up and speed up science previously bottlenecked by the small (relative to what was needed) number of technical experts who had for decades been the only intermediaries between scientists and databases. Resistance came from both camps—the technical experts and the scientists. Both would have to step out of their comfort zones. The technical experts would need to give up some autonomy and replace soon-to-be-obsolete skills with new, more advanced knowledge. The scientists would need to learn to use the new AI tools rather than rely on an expert. It is very human to be uncomfortable with and resist change, but the resistance was discouraging nonetheless, given what was at stake: increased pace and breadth of discovery in child health. I got a grant to create new training programs, but outside of early adopters, the central tendency was to stick with the old model. I am not without empathy. It's hard to learn new skills, especially when you're already working full time, have kids, and have a life outside of work. But just as automation changed manufacturing work, AI is changing knowledge work. Windshield or bug?
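For readers curious what that query work actually looks like, here is a minimal sketch of "telling a database what you want." The database, table, and column names are invented for illustration; they are not taken from any real project of mine.

```python
import sqlite3

# Build a tiny throwaway database so the sketch runs on its own.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE gene_expression "
    "(sample_id TEXT, gene_symbol TEXT, tissue TEXT, expression_level REAL)"
)
conn.executemany(
    "INSERT INTO gene_expression VALUES (?, ?, ?, ?)",
    [("s1", "TP53", "liver", 182.0),
     ("s2", "BRCA1", "liver", 64.5),
     ("s3", "TP53", "kidney", 140.2)],
)

# Structured Query Language: you describe WHAT rows and columns you want,
# not HOW to fetch them. Knowing the language is half the skill;
# knowing what the columns mean biologically is the other half.
query = """
    SELECT sample_id, gene_symbol, expression_level
    FROM gene_expression
    WHERE tissue = 'liver'
      AND expression_level > 100.0
    ORDER BY expression_level DESC
"""

for row in conn.execute(query):
    print(row)   # ('s1', 'TP53', 182.0)

conn.close()
```

The new AI tools I was asked to champion aim to write queries like this one from a scientist's plain-language question, which is exactly why the technical experts felt the ground shifting.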
Do you remember Louis Winthorpe III and Billy Ray Valentine? Both are fictional characters in the movie Trading Places, a 1983 comedy about wealth disparity. Winthorpe, played by Dan Aykroyd, was a privileged genius at making money trading frozen concentrated orange juice, bacon, and other commodities. Valentine, played by Eddie Murphy, was a smart-mouthed hustler living on the streets of Philadelphia. I won't spoil the plot, just share that it's a great window into the 1980s in lots of ways. If you've never seen it, or it's been a while, watch and throw the AI that recommends movies in your streaming service for a loop. As you watch, ponder which characters would have the same job in the age of AI, and which would be out of or in a drastically changed job. Hint, Winthorpe wouldn't make it if he held tight to his colorful blazer and open outcry trading on a chaotic market floor covered in slips of paper. He probably wouldn't make it even if he made the transition to electronic trading at a desk somewhere far away from the old stock market exchanges with their bells and hand signals.
The frozen concentrated orange juice and bacon commodities markets are where buyers and sellers bet on the price of breakfast next month or six months from now. Sellers auction off the right to buy their orange crop or hog herd at some future date. This allows farmers to hedge—insure—against the risk that prices drop before the harvest comes in. Buyers bid to buy the future crop, betting they'll be able to resell juice and bacon for a premium when it's actually time to put breakfast on the table. The auction used to be run by people yelling out, "Who'll give me three hundred dollars a ton for two hundred tons of orange juice for December delivery?" "I'll give you two-eighty a ton!" "I'll give you two-ninety!" "Sold for two-ninety!" Today, the auction happens at nearly light speed between computer programs taught to get the best deal on either side of the trade. This automation also reduces the transaction cost to nearly zero, which benefits both buyer and seller. Winthorpe would be on the street running cons with Valentine. Of course, Winthorpe is a caricature. Humans adapt, including financial traders.
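If you're curious what "computer programs taught to get the best deal on either side of the trade" boils down to, here is a toy sketch of electronic order matching. The prices and quantities are invented, and no real exchange's matching engine is this simple.

```python
# Orders waiting to trade: (price per ton, tons). Invented numbers.
bids = [(290, 50), (285, 100)]   # what buyers are willing to pay
asks = [(288, 80), (300, 200)]   # what sellers are willing to accept

bids.sort(reverse=True)  # best (highest) bid first
asks.sort()              # best (lowest) ask first

trades = []
# Keep matching as long as the best bid meets or beats the best ask.
while bids and asks and bids[0][0] >= asks[0][0]:
    bid_price, bid_qty = bids[0]
    ask_price, ask_qty = asks[0]
    qty = min(bid_qty, ask_qty)
    trades.append((ask_price, qty))          # trade at the resting ask price
    bids[0] = (bid_price, bid_qty - qty)     # shrink or remove filled orders
    asks[0] = (ask_price, ask_qty - qty)
    if bids[0][1] == 0:
        bids.pop(0)
    if asks[0][1] == 0:
        asks.pop(0)

print(trades)  # [(288, 50)]: matched in microseconds, no yelling required
```

The pit trader's skill at reading the crowd became a few lines of logic plus the data feeding it, which is the whole point of the Winthorpe thought experiment.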
I have a friend, Gerry, who is one of the smartest people I know. They are the first person I know to have their knowledge work completely automated. (Don't worry, the story has a happy ending.) Gerry's career as a trader started in open outcry on the floor of the Chicago Board of Trade, transitioned to electronic trading at a desk upstairs, and was ultimately replaced by AI. I remember one phone call where they asked me, "Hey, how do you speed up a computer network? The algorithm is making money; now we need it to make money faster!" Gerry's work had evolved from making trades directly based on knowledge of risk and return in the market to using their knowledge to help computer scientists build automated systems. They taught the machine to make more money with less risk than they could manually. This got boring, so after twenty-five years, Gerry transitioned to a new career in high-end building renovation and construction. Lots of other traders had to make the shift much earlier, like our intransigent fictional Winthorpe. They were "old school" traders who didn't want to or couldn't contribute to the automation of their jobs like Gerry did.
It's no surprise that some of the biggest of the Big AI directly focus their data and AI efforts on only two areas of the economy: finance and healthcare. AI in other areas is left to partners, startups, and other big companies already working in the area, like Monsanto in agriculture. Financial services (including insurance) and healthcare each represent trillions of dollars a year in the U.S. alone; healthcare spending by itself is nearly $5 trillion. Finance and healthcare are also almost entirely based on expert humans performing knowledge work. The same Big AI companies investing hundreds of billions of dollars choose to dig into the two areas of the economy with the most valuable (in dollars) knowledge work. Don't get me wrong. They've set it up so they'll get a piece of everything. Startups and big companies in law, publishing, advertising, engineering, software development, and entertainment incorporate Big AI foundation models into their own products and pay a toll to Big AI. But healthcare and finance are where Big AI is focused "in-house." Finance and healthcare are where they're looking for "partners" to "co-invest" in data aggregation and training AI. You better believe each company intends to win the race to develop powerful and lucrative foundation models specific to healthcare and finance. The prize is the biggest possible share of potentially trillions of dollars transferred from human worker payrolls to Big AI service fees.
My friend Gerry lived through the leading edge of this inevitable trend. As their knowledge of the commodities market and work as a trader became less valuable, they adapted and taught the machine to analyze risk and return and make trades. Gerry moved on to finding creative new trading opportunities, figuring out how to make money, then taught the machine the new stuff. And on and on until it was more interesting to move to a completely different field.
People who do work based on knowledge and cognition watched automation drastically change the economic prospects of their fellow citizens who worked in what little domestic manufacturing remained in the U.S. Large corporations grew wealthy or just remained in business through robotic automation of what they couldn't move to lower-wage economies. The same is happening and will happen to knowledge workers. Call the customer service number of your bank to combine the balance of two accounts and close one. Twenty years ago, you talked to someone in Texas or Minnesota. Ten years ago, you interacted with someone in Mumbai, India. Now you interact with AI that is rapidly improving its ability to solve your problem. If the AI can't help you, that failure is recorded and you're directed to a person in a low-wage or low-regulation economy. The audio of your interaction with that person and the resulting actions they take to deal with your accounts are recorded and used to teach the machine to solve the new problem. That is, unless you have a lot of money on deposit with the bank; then you may speak with a real person in Texas. For now.
The best software engineers I've known in my career live by the mantra "My job is to put myself out of a job." Here is a real story from the stress-detection project I expand on later in the chapter: The job is to retrieve heart rate data from two watches and combine it into one dataset where the data from one watch can be compared to the data from the other. The best engineer will write the most efficient, powerful, and user-friendly software possible so they never have to solve this same problem again. New watch? Someone else can adjust a couple of settings and it's done. Other, lesser software engineers might write unique code for each watch, code that requires them to fiddle and be in the loop. With the advent of AI, the best software engineers I know are immediately figuring out how the AI can write code for them so they can move on to solve harder, more important problems. They'll continue to do this, just like Gerry did with trading.
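Here is a minimal sketch of the watch example in that put-yourself-out-of-a-job spirit. It is not the project's actual code; the column names, time formats, and sample readings are assumptions invented for illustration. The point is that everything watch-specific lives in one small settings table, so a new watch means a new entry, not new code.

```python
import csv
import io
from datetime import datetime

# Everything that differs between watches lives here, not in the logic.
WATCH_SETTINGS = {
    "watch_a": {"time_col": "timestamp", "hr_col": "hr_bpm",
                "time_fmt": "%Y-%m-%d %H:%M:%S"},
    "watch_b": {"time_col": "Time", "hr_col": "HeartRate",
                "time_fmt": "%m/%d/%Y %H:%M"},
}

# Stand-ins for the CSV files each watch would export (invented readings).
EXPORTS = {
    "watch_a": "timestamp,hr_bpm\n2025-03-01 09:00:00,72\n2025-03-01 09:01:00,75\n",
    "watch_b": "Time,HeartRate\n03/01/2025 09:00,70\n03/01/2025 09:01,74\n",
}

def load_heart_rate(csv_text, watch_name):
    """Turn one watch's export into rows of time, heart rate, and source."""
    s = WATCH_SETTINGS[watch_name]
    rows = []
    for record in csv.DictReader(io.StringIO(csv_text)):
        rows.append({
            "time": datetime.strptime(record[s["time_col"]], s["time_fmt"]),
            "heart_rate": float(record[s["hr_col"]]),
            "watch": watch_name,
        })
    return rows

# One combined dataset, sorted by time, so the two watches can be compared.
combined = (load_heart_rate(EXPORTS["watch_a"], "watch_a")
            + load_heart_rate(EXPORTS["watch_b"], "watch_b"))
combined.sort(key=lambda r: (r["time"], r["watch"]))
for r in combined:
    print(r["time"], r["watch"], r["heart_rate"])
```

Supporting a third watch means adding one entry to the settings table, which is exactly the kind of routine follow-on work the best engineers are now happy to hand to AI.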
If you're a knowledge worker, the value of your current work is going to change. Do you read, summarize, and write reports? Do you write or edit words? Do you write or test code? Do you provide financial advice? Do you analyze data repeatedly? Do you research and make recommendations? Do you answer the phone and help people? Do you teach? Do you decide where to invest money? Do you make podcasts? Do you interpret lab results and make a diagnosis? Do you act? Do you edit video? Do you read x-rays and identify bone fractures or other problems? Do you design interior or exterior spaces? Do you talk to people about their cognitive and behavioral problems on the phone or video? Do you compose or perform music? Value real estate? Design bridges? At least the routine aspects of your work will be performed better, faster, or more cheaply by a machine. There is too much money to be saved or made for it not to happen, at least in the U.S. You will have to adapt or see your income go down. Use AI in your work in any way and you are already adapting. Windshield or bug?
You choose windshield! What to do?
Start by reflecting on work you do repeatedly but with your brain instead of your hands. We're used to thinking of repetitive labor as something that happens on an assembly line. Repetitive physical labor is subject to automation by physical machines with moving parts. Repetitive cognitive labor is subject to automation by AI machines with cognitive parts.
Before getting to knowledge work, consider how AI will change the time-is-money business of artists and skilled craftspeople. If you're an artist, a sculptor, a furniture-maker, a metalworker, a stonemason, a gardener, a chef, or a pipe fitter, you need to use AI and AI-enabled software to work smarter, not harder. Artificial intelligence running on your smartphone can collect and analyze data about your work to help you make more money from your time. Artificial intelligence-enabled purchasing services can help you find the best deals on materials. Artificial intelligence can help you write your own advertising copy and generate your own graphics. Many of the business services you depend on, or wish you could tap into, may become cheaper because of AI. At the same time, be prepared for your customers who don't work with their hands to go through economic upheaval.
Most important? Do. Not. Give. Away. Your. Data. I cannot emphasize enough the degree to which the current U.S. corporate economy is harvesting the data of workers so that it can put them out of a job at some point in the future. This is not because these corporations are somehow evil. It's because they are intrinsically and legally obligated to survive and thrive using new technology to increase productivity. And productivity is calculated as output per worker. Dollars of revenue or profit divided by number of workers.

In an ideal world, the whole pie will grow fast enough that every worker will continue to be employed and maybe even see their income grow faster than inflation. This happened for the most part during the bottom-up phase of the first Industrial Revolution. The AI-driven industrial revolution of the knowledge economy is starting with top-down industrialization, as we've discussed. Increased productivity through automating the work of expensive analysts, billing experts, doctors, lawyers, engineers, nurses, accountants, and home health workers is the model for success I see gaining traction with the people who sign the checks when AI vendors come calling in my professional life, at least in the U.S. Cost reduction is the first stop for entrenched organizations that have difficulty innovating. We've already offshored or automated just about everything else possible, so knowledge work comes next.

So what leverage do you and I have? The data we generate in the course of our work. Consider yourself an independent contractor, and the expertise you've gained over a career to be your intellectual property. The data you generate can be used to teach a machine what you know and how to do your work. Your employment contract almost certainly says that your employer owns this data. They do unless you speak up about it.

Are you a lawyer? Ask the partners in your firm how they plan to compensate you for the use of your briefs, memos, and contracts to train AI. Are you a doctor, nurse, or billing agent? Ask your practice or hospital administrators how they plan to cut you in on the savings they hope to achieve through using your notes, diagnoses, reports, and discharge summaries to automate aspects of your work. Are you an investment advisor? Ask your managers how they plan on making you whole from the use of your client communications over the phone or via email to automate aspects of your relationship-building. Ask your leadership if their corporate contract with Big AI allows those companies to use your documents, emails, or code to teach AI about your industry, company, or specific job. If the answer is yes, then ask how much money they saved on the contract by agreeing to those terms. Suggest that perhaps they could share that with you and your fellow workers. If you're an author, carefully read the terms of the self-publishing service you use to put out your first book to figure out if your writing will be used to teach a machine to write books about AI.
Sound radical? The concept of a "data collective" is emerging among economists as a potential counterbalance to even out the economic risks posed by AI. In theory, fair distribution of economic gains that are only possible through the use of human data to train AI would grow the whole pie. Allow us all to share in the profits from this amazing new technology. It's not just theoretical. In 2025, the concept is just starting to be tested via the first lawsuits against Big AI companies for copyright infringement. Turns out if you're a publisher or a clothing designer (Gerkin, 2023; Rogelberg, 2024), you may think it's economically unfair when your text and pictures are harvested, potentially without your knowledge, incorporated into a foundation model, and then monetized by a Big AI company so people can write books and design clothes derived from your work. The question is, if the New York Times wins its suit against OpenAI, will it distribute a portion of the proceeds to its journalists, copywriters, and editors? Maybe. Maybe not. Ironically, twenty-five years ago, as the defendant in another lawsuit, the New York Times argued against its current position versus OpenAI (Pope, 2024). At that time, the U.S. Supreme Court decided against the Times and in favor of a group of freelance journalists whose articles the Times had sold piecemeal to database companies without permission (Supreme Court of the United States, 2001). We've been here before. You teach the machines.
Other economies may attempt to manage their AI industrialization following the example of Japan's Meiji Restoration in the late 1800s. The new leadership of Japan actively managed the transfer of industrial technology into Japan to maximize benefit and minimize harm. Keep an eye out for countries that deliberately manage the AI Industrial Revolution and at least try to benefit everyone. In the U.S., and by extension many of the places where Big AI companies dominate, it's a free-for-all driven by a few corporations with lots of political influence.
As I write this, my three children are just about to graduate from college, in college, and just about to enter college. All of them are entering an era of change. Guidance I (try to) give them? I start with my own first principles. Be nice. Get stuff done. Make things less crappy. When it comes to AI, I try to help them see that it will bring change. Stick to first principles when the ground shifts under you. Embrace the change and you'll be just fine. I emphasize to them the importance of two things: They need to use AI, and they need to learn about a specific area of the world they're interested in, one that is constantly changing, the faster the better. To succeed in an AI world, you have to be good at augmenting your work and your life with AI so that you can invest your time in what AI can't do (yet). Use AI so you know what's possible. Use it consistently so you know when it can do something new and useful, when it gets something wrong, or when it gets something right it previously got wrong. Use AI so you're not competing with it to do work that can be automated. Dream up and experiment with creative uses of AI nobody else thought of. Be the person who figures out how to use AI and help your family, friends, coworkers, and employer adopt and adapt. Use AI to pursue interests previously out of reach.
Regarding interests, I try to guide my kids to constantly learn about something they find intriguing. Their current interests are biology, art, and creating value in a business, respectively. The more they know about their interests, the more they can tell when AI gets things right and when it gets them wrong. The more they know about the world outside the bubble of AI, the more they understand AI isn't perfect and they can be informed and critical consumers.
Regarding change, I try to guide them to be more than comfortable with change. I say they must excel at finding opportunity and value in disruption and changing circumstances. Because things are going to change a lot faster over the next couple of decades than they did over the last couple.
I also encourage my kids, and you, to learn to do things with your hands. To be creative. My kids probably don't think I'm serious, but I made a standing offer to pay for them to learn a trade in addition to their higher education—welding, carpentry, plumbing, electrical wiring, house painting, stone masonry, tilework. Makeup artistry, hair styling. Avalanche control, emergency medical response. Knowing how to use AI and do valuable work with their hands means they have a killer combination. Combine AI, hands-on work, and creativity and you're futureproof. Pure knowledge work is likely to pay relatively less on average in the medium term. If the AI Industrial Revolution plays out on the more pessimistic track, wealth will continue to concentrate, but wealthy people still clog their toilets, want new kitchens, and value artistry.
Want a good outcome? Use AI. Learn constantly. Pursue change. Don't give away your data.
Killer Robots
You're not going to hear much about the use of AI in weapons of war. The Terminator movie franchise made robots with guns into a public relations nightmare for the AI industry. That doesn't mean it isn't happening. Geoffrey Hinton called this out as one of his reasons for resigning from Google. This could mean he knows something of what may be happening behind a top-secret curtain. Before we get to AI actually blowing people up, it's important to consider that not all conflict involves the guns and missiles of a "hot war."
In my professional life I have encountered credible hacking threats and actual attacks on critical health infrastructure. I've briefed three-letter agencies, not on defending against theft, but, scarily, on detecting the theft of what I consider to be some of the most sensitive human information in existence. That means I was asked to come up with ways to figure out if information had been stolen after the unseen and undetected theft took place. By a state-sponsored attacker.
Cyberattacks, hacking, ransomware. Whatever you call it, state-sponsored attacks on people's digital lives have grown into a new Cold War. Real examples of attacks include tricking you or me into giving access to our bank account, encrypting a hospital's patient data so it can't be used and then extorting a ransom for the decryption key, planting misinformation about a political candidate in social media, causing power outages, and remotely destroying nuclear weapons manufacturing equipment. "State-sponsored" means that a government or agency in a government looks the other way, pays for, staffs, or even directly carries out attacks. Up to a certain point in political conflict, "good guy" and "bad guy" depend on your point of view. When it comes to cyberattacks, everyone does it. Artificial intelligence lowers the bar for what it takes to fight in this new, digital Cold War. You don't have a bunch of sophisticated hackers to write malicious computer code? Use AI to write the code instead. A hard-liner is up for election in a rival country? Use freely available AI to generate and post fake video of the candidate doing something sketchy in a hotel room. An economic competitor is moving in on an emerging market you want for yourself? Use AI to manipulate the local stock market with automatic trades. AI scales up information warfare. Flyers dropped from airplanes, propaganda broadcasts over the radio, and even "troll farms" are being replaced by AI trained to push issues through social media, finance, education, anywhere people interact with a stream of information.
Military AI emerges somewhere between information warfare and shooting warfare. The military uses, or will use, AI for lots of the same purposes for which it's being utilized in other areas. Logistics, intelligence gathering, management. The primary concern, however, is a category of military AI known as Lethal Autonomous Weapons Systems (LAWS). To the layperson, a weapons system is lethal and autonomous when a machine picks a target and fires a gun or missile without a human in the loop. Policy and military experts, not surprisingly, have to wrestle with a gray area. In 2025, the United Nations is acting as an intermediary for governments to try to agree on a definition of autonomous weapons systems. They publish information that is useful for thinking about AI in general, not just weapons.
Autonomous weapons systems require "autonomy" to perform their functions in the absence of direction or input from a human actor. Artificial intelligence is not a prerequisite for the functioning of autonomous weapons systems, but, when incorporated, AI could further enable such systems. In other words, not all autonomous weapons systems incorporate AI to execute particular tasks. Autonomous capabilities can be provided through pre-defined tasks or sequences of actions based on specific parameters, or through using artificial intelligence tools to derive behavior from data, thus allowing the system to make independent decisions or adjust behavior based on changing circumstances. Artificial intelligence can also be used in an assistance role in systems that are directly operated by a human. For example, a computer vision system operated by a human could employ artificial intelligence to identify and draw attention to notable objects in the field of vision, without having the capacity to respond to those objects autonomously in any way. (United Nations Office for Disarmament Affairs, 2023)
There's some particularly helpful wording in the above paragraph: "…using artificial intelligence tools to derive behavior from data, thus allowing the system to make independent decisions or adjust behavior based on changing circumstances." Rewritten in a more human context, we get: "using your eyes and ears (data) to figure out how to behave (derive behavior), allowing you to make your own decisions or adjust what you do based on changes in the world around you (changing circumstances)." Truly autonomous driving lines up nicely with this definition. Think of AI driving a car in a safety-critical situation with potential injury or death from a traffic accident if the AI decides to accelerate or brake at the wrong time. When a weapon meets the same definition, AI makes another kind of life-and-death decision.
People often have an intuitive, negative reaction to drone warfare. A missile fired from a drone piloted by a flight officer hundreds or thousands of miles away can be viewed as wrong relative to a missile fired from an airplane by its pilot in the cockpit. The invasion of Ukraine by Russia took modern drone warfare to a whole other level from its origins in post-9/11 conflict. Suddenly, hundreds of thousands of drones were deployed—at first to drop grenades from above on nearby enemy positions and vehicles, then for one-way flights of explosives directly into the same targets, destroying the drone in the process. Both Russia and Ukraine used drones, but Ukraine, with its smaller population, leaned on the technology more. A soldier is safer piloting an explosive drone into a bunker from a few hundred yards away than fighting their way across an open battlefield to deliver the same explosives. Ukraine also embraced the "first-person view" (FPV) drone early in the conflict. A soldier wears goggles that display the view from a camera on the front of the drone. They "see" what's in front of the drone through the goggles as if they were right there. The soldier uses a hand-held remote control with joysticks and buttons to pilot the explosive-laden drone into the tank, bunker, or concentration of enemy soldiers. The visual immersion of the goggles enables the soldier to make more precise and rapid flight maneuvers than if they were looking at a screen. In fact, the FPV drone gained its first popularity in drone racing a decade earlier. A second, larger observation drone is used to view the attack from above and look for the next targets for a stream of one-way explosive drones to hit. Recordings of FPV drone attacks are regularly posted to social media.
What does this all have to do with global geopolitical concerns about military AI? Why did United Nations leaders suddenly gain traction on the issue in 2023, after first raising it in reports to the Human Rights Council ten years earlier? What does any of this have to do with Geoffrey Hinton calling out autonomous weapons systems when he resigned from Google in 2023?
In the U.S., we can look backward at an intertwined cast of characters starting in the first Obama administration and continuing through multiple administrations of both parties. Our players are Eric Schmidt, Robert Work, and Peter Thiel—all patriots and, I believe, acting in good faith, though one could argue in apparent if not actual conflict of interest. The following may be hard to follow because the natural revolving door between government, the military, and industry spins so quickly:
- Eric Schmidt, then-CEO of Google, was appointed by President Obama in 2009 to the President's Council of Advisors on Science and Technology.
- Along with others, under Eric Schmidt's leadership, Google funded (and continues to fund) the Center for a New American Security.
- Robert Work, a retired Marine Corps colonel, served as Under Secretary of the Navy from 2009 to 2013, leaving to run the Center for a New American Security for about a year until he was appointed Deputy Secretary of Defense in 2014.
- During his tenure at the Center for a New American Security, Robert Work advocated that the U.S. address the long-term threat of adversaries gaining an advantage in military AI.
- Peter Thiel, a longtime technology executive and investor, co-founded Palantir Technologies in 2003 to develop security software and now military AI. After the 2008 financial crisis, the first Obama administration paid Palantir for software used to detect fraud in stimulus funding and Medicare payments.
- Throughout both Obama administrations, Eric Schmidt maintained a professional relationship with Peter Thiel, with the two appearing on multiple technology panels together.
- During Robert Work's tenure in the Obama administration and subsequent first Trump administration, Google received a contract to develop military AI for the Pentagon. After employee protests, Google transferred the contract to Palantir in 2019.
- Eric Schmidt continued as chairman and then technical advisor to Google's parent company, Alphabet, until 2020.
- From 2019 to 2021, Eric Schmidt and Robert Work co-chaired the bipartisan National Security Commission on AI, which advocated for investment in AI for national security.
- In 2021, Eric Schmidt funded and founded the Special Competitive Studies Project, which has Robert Work on its Board of Directors.
- In 2022, Google and Palantir announced a strategic partnership.
- In 2022, Eric Schmidt and Peter Thiel launched America's Frontier Fund, an investment fund focused on national security technologies such as the microchip manufacturing critical to AI, weapons, and the economy. Their new investment organization lobbied the government for $1 billion in funding and won a lead role in an international investment fund led by the former CEO of In-Q-Tel, the Central Intelligence Agency's venture capital arm.
- Official Pentagon policy and Palantir's publicly stated goal is that military AI augments human decision making. Palantir continued to win defense contracts from the U.S. and other governments throughout the first half of the 2020s.
I lay out these circumstantial points to illustrate that the historical relationship between the technology industry and national security apparatus is alive and well in the age of AI. I also lay out these circumstances to prepare you for a theory I'll get to shortly.
The revolving door is a good thing if you take seriously the national security threat posed by hostile use of AI (which I do). It also presents by its nature potential conflicts of interest and the need for transparency to stakeholders like you and me. Stakeholders who also have a responsibility to educate ourselves and participate, not just sit back and throw around conspiracy theories like your drunk uncle. In the National Security Commission on AI's final report, delivered before the commission was dissolved in 2021, Chairman Schmidt and Vice Chairman Work stated, "Americans have not yet seriously grappled with how profoundly the AI revolution will impact society, the economy, and national security." They were and unfortunately continue to be right. It's a scary world out there.
Russia stated in 2020 that it intended to replace soldiers with lethal autonomous weapons systems. Russia has established a drone weapons program in China, resulting in the U.S. placing sanctions on Chinese entities. Simultaneously, the Ukrainian military's success rate for drone attacks reportedly rose from fifty percent before 2023 to more than eighty percent in 2024, in large part through the use of drones running Palantir software. Turns out killer robots weren't so sci-fi after all.
From a purely professional technical standpoint, I consider the video of drone attacks coming out of the Russian invasion of Ukraine to be priceless for training AI. The repeated nature of the attacks under varying landscape, foliage, weather, and countermeasures provides a rich sample of real-world images for teaching machines to identify targets. Surveillance drones' simultaneous recording of attack drone strikes provides built-in labeling of the result—hit or miss. The recording of front-line soldiers' commands and flight telemetry over and over during actual combat drone maneuvers paired with video and geospatial location under varying conditions provides a near-ideal, unique data set for teaching machines to fly drones into targets.
Russia also uses FPV attack and surveillance drones and is in a position to collect the same unique data and share it with its allies, or at least with those who live by "the enemy of my enemy is my friend."
As I mentioned a few paragraphs ago, I'm going to advance an informed theory that attempts to explain this landscape. Because of the top secret, or at least highly confidential, nature of the situation, my theory is certain to be a simplification of complex events.
The theory: Eric Schmidt followed in the footsteps of patriotic industrialists before him. He and others identified the risk that adversaries would develop military AI that could defeat the U.S. and its allies in the future. "They" would have more or better killer robots than the U.S. "They" would have more or better AI-enabled cyberweapons. Because he is simultaneously a patriot and a capitalist, Schmidt aligned Google with the Pentagon to drive research and development through multiple presidential administrations. Simultaneously, Peter Thiel aligned Palantir with the Pentagon and other U.S. government agencies to develop AI for national security, also through multiple presidential administrations. When Google's employees forced a change because of ideological disagreement over the company's involvement in military AI, Eric Schmidt and Peter Thiel worked together at the top of their companies to successfully transition the project from Google to Palantir. Schmidt and his original national security partner, Robert Work, continued to build the case for U.S. investments in military AI. Palantir developed its business through contracts with Israel and other U.S. allies, in addition to the U.S. A goldmine of unique training data became available from drone warfare in Ukraine. The U.S. directly and indirectly facilitated systematic and large-scale acquisition by Palantir of drone warfare data from Ukraine. Palantir used this data to rapidly accelerate development of military AI for drone warfare. Palantir simultaneously contracted directly with the Ukrainian government and military to develop and field systems that enable front-line soldiers to continue to train and fine-tune the performance of its military AI. United States intelligence agencies obtained evidence that Russia had been sharing drone warfare video data with China in return for access to Chinese drones equipped with military AI.
Now, in a lethal feedback loop, China also gets data from front-line Russian soldiers on the performance of its AI. The U.S. and China cannot take the risk that the other develops lethal autonomous weapons systems, so a secret military AI arms race is under way. Palantir, as a private company, is able to research and develop lethal autonomous AI, and the hundreds of billions of dollars in unrestricted aid going to Ukraine can be used to pay for it. United Nations policy makers receive reports of the pursuit of lethal autonomous AI by both the U.S. and China. On the U.S. side, the AI arms race, at least in part, involves Google through its partnership with Palantir, and almost certainly other Big AI companies as well. The United Nations starts to develop and advocate for policy to reduce the harm of military AI. I'm an outsider, so this is just a theory. It is also my theory that Geoffrey Hinton knows enough from his insider status that he decides to resign.
As a child of the 1980s, I lived with the reality of nuclear deterrence through mutually assured destruction. Ten years before I was born, the U.S. and the Soviet Union came to the brink of nuclear war during the Cuban missile crisis, itself a response to perceived escalation by the U.S. placing weapons closer to the Soviet Union. We, as global citizens, eventually de-escalated through reciprocal nuclear arms control and non-proliferation treaties. We're not there with military AI. We don't yet have (and hopefully never will) the equivalent of the horrific example set by the use of nuclear weapons in the bombing of Hiroshima and Nagasaki at the end of World War II, followed by decades of living under the real fear of nuclear holocaust. However, state or state-sponsored actors have deployed AI in cyberattacks. Russia has targeted civilians with drones in isolated terror attacks. It is a matter of scale to get to swarms of cheap, fully autonomous lethal drones overwhelming military defenses or, worse, being used in a terror attack on a stadium full of people. Possibly worse yet, the use of powerful AI to degrade or destroy military and civilian digital infrastructure. Or trust in our institutions and fellow citizens. This is the through line Geoffrey Hinton, many other technology leaders, and the United Nations are concerned about.
What can you do? I strongly suggest buying a cup of coffee or something stronger for the Chief Information Security Officer or equivalent at your company; they're dealing with more than you can imagine. After that, get informed and come to your own conclusion on whether or how military AI in any of its forms is acceptable. It is certainly inevitable. You may go on to demand more transparency about military AI programs, or at least about how the money flows in what is an evolution of the Cold War military-industrial complex. You may advocate politically against an AI arms race, in favor of mutual defense agreements, or for spending on offensive and defensive capabilities. You may talk to your peers and kids about the importance of cybersecurity and data protection. You may work toward a global ban on military AI. You may choose a career developing effective AI weapons. It's up to you. You teach the machines.
"Other"
Geoffrey Hinton may also be concerned with other side effects and pitfalls. A few of the more fundamental ones follow.
Bias
Artificial intelligence is as biased as the data and the people who teach it. The machine learns from the people who tell it what counts as correct in its training data. If that data reflects unfair biases because it contains only a biased view of the world, then the AI will have those biases. If the people teaching the machine hold a biased view of what's correct and that view makes it through to the machine during training, then the AI will have those biases.
Imagine you take videos of cats. Like, a weirdly large number of cat videos. Say two thousand and counting. You prefer orange tabby and black cats, especially when they're sleeping (so cute!). Your two thousand videos contain mostly sleeping orange and black cats, with the rest showing brown, gray, and spotted cats mostly not sleeping—play fighting with each other, chasing birds, and eating. Your videos are in a cloud service hosted by a Big AI company. The same Big AI company also sells a home smart speaker device recently upgraded with a camera. Pat, a bright young marketing person, pitches a new feature for their camera devices: "Bad Kitty: your AI cat babysitter!" Even better, Pat gets Nick Bruel, author of the beloved Bad Kitty children's books, on board to co-brand the thing. Pat's vice president loves it and greenlights the project. Since it's legal, you've accepted the terms of use, and you live in the U.S., the Big AI company gives your cat videos to the AI team without asking or informing you. The team sends your videos overseas for labeling by low-wage AI workers, again without asking or informing you. The workers dutifully tag the videos of sleeping orange and black cats with "good." They tag the videos of mischievous brown, gray, and spotted cats with "bad." Pat's VP is in a hurry to publicly announce the product at the company's rapidly approaching sparkly launch event in Las Vegas, so the AI team quickly teaches a machine to automatically recognize cats misbehaving on camera and yell "NO! BAD KITTY!" through the speaker. Just before the Vegas event, Nick Bruel pulls out of the deal. Turns out he has an orange cat that is a complete jerk and an angel of a brown cat. The pre-release camera speaker he got in the mail gives his orange cat a pass when it knocks the toothpaste cap down the drain and yells nonstop at his poor brown cat. The VP is embarrassed, fires Pat, and cats everywhere breathe a sigh of relief because they don't have to live in a surveillance state.
A silly make-believe example, but an entirely realistic illustration of how harmful bias can make its way into AI out in the world. Look for articles about bias in AI and you'll find plenty of examples in much more critical areas of our lives. I encountered a near-miss formative example early on in my work.
In a research study I did together with a behavioral health scientist, we wanted to figure out how to help caregivers and teachers by detecting stress in nonverbal children who couldn't otherwise say, "Hey! I'm freaking out over here!" We taught AI to recognize stress from changes in heart rate detected by commercially available heart-rate watches. We had an intern with brown skin working on the otherwise all-white research team. She figured out that one of the otherwise ideal brands of watch gave bad readings on her skin but not on white skin. We immediately excluded the watch, recruited study participants with different skin tones, thanked the science gods for the luck of having that intern, and kicked ourselves when we thought about what could have happened without her. The research continues, the AI has promise, but it could have started out with a very damaging implicit bias against people with brown skin.
Wondering how we taught the machine to recognize stress without doing harm to vulnerable nonverbal children? We collected a bag of tricks, including a remote-control rubber rat that darted across the floor, a puzzle that couldn't be solved, a jack-in-the-box with a scary clown face and spooky music, and a wind-up jumping rubber spider. Research participants, first adults, then children with their parents, were put in a room wearing the heart-rate watch and subjected to jump scares. We collected heart-rate data and time-synched video so we could see when changes in heart rate co-occurred with the rat darting across the floor or the creepy jack-in-the-box popping. Not my favorite moment as a researcher, but everyone was informed and consented ahead of time and the work was for a very good cause. Can't make this stuff up.
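For the technically curious, the labeling step can be sketched in a few lines of Python. This is not our actual study code, and every number below (the timestamps, heart rates, and fifteen-second window) is invented for illustration. The idea is simply that the time-synched video tells you when a scare happened, and the readings shortly after each scare become the "stressed" examples the machine learns from.

    # Hypothetical sketch: use jump-scare times from the synced video to label
    # heart-rate readings for supervised learning. All numbers are invented.
    heart_rate = [(0, 72), (10, 74), (20, 73), (30, 95), (40, 99), (50, 80), (60, 76)]  # (seconds, beats per minute)
    jump_scares = [28]  # seconds when the remote-control rat darted across the floor

    LABEL_WINDOW = 15  # readings within 15 seconds after a scare count as "stressed"

    def label_reading(t):
        return "stressed" if any(0 <= t - scare <= LABEL_WINDOW for scare in jump_scares) else "calm"

    labeled = [(t, bpm, label_reading(t)) for t, bpm in heart_rate]
    for row in labeled:
        print(row)  # the elevated readings at 30 and 40 seconds get the "stressed" label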
What can you do about harmful bias in AI? Turn awareness into questions that ultimately drive expectations and accountability. Ask your doctor if they're using AI to help them care for you. If they say yes, ask them how the AI has been taught to avoid harming patients from any one demographic. Pay attention to proposed regulations and legislation to require transparency and mitigation of harmful biases. When your employer rolls out new AI, ask how it was trained to avoid harmful bias. Better yet, ask how it will be monitored for harmful bias going forward. Pessimistically, recognize that AI could be used to deliberately reinforce bias in conjunction with misinformation. Bias toward buying a brand of toothpaste, for example. Bias against voting for one candidate over another. Be on the lookout and question how AI may reinforce your own very human biases to manipulate you. Optimistically, recognize that AI can help reduce bias, too. A teacher who is biased to be more sympathetic to kids with one skin color may tend to ignore signs of distress in kids with a different skin color. Artificial intelligence that can detect stress in kids with skin of any color is a powerful equalizer to get a better outcome for all children.
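Before we leave bias: if you like to tinker, here is the Bad Kitty failure in miniature, sketched in Python. Everything in it is made up to mirror the story above; the tiny dataset, the fur-color feature, and the "good"/"bad" labels are hypothetical, and real systems are vastly more complex. The point is simply that a model trained on skewed labels faithfully learns the skew.

    from collections import Counter, defaultdict

    # A made-up training set mirroring the story: (fur color, behavior, label).
    # The labels carry the labelers' bias: orange and black cats were mostly
    # filmed sleeping and tagged "good"; brown, gray, and spotted cats were
    # mostly filmed misbehaving and tagged "bad".
    training_videos = [
        ("orange", "sleeping", "good"),
        ("orange", "sleeping", "good"),
        ("black", "sleeping", "good"),
        ("orange", "knocking things over", "good"),
        ("brown", "play fighting", "bad"),
        ("gray", "chasing birds", "bad"),
        ("spotted", "eating plants", "bad"),
        ("brown", "sleeping", "bad"),
    ]

    # "Train" the simplest possible model: for each fur color, remember the
    # most common label it received.
    votes = defaultdict(Counter)
    for color, behavior, label in training_videos:
        votes[color][label] += 1

    def bad_kitty_model(color):
        # Predict the label most often attached to this fur color in training.
        return votes[color].most_common(1)[0][0] if color in votes else "unknown"

    # The model never looks at behavior; it judges cats by fur color, because
    # that is the pattern the biased labels reward.
    print(bad_kitty_model("orange"))  # "good" -- even when it's knocking things over
    print(bad_kitty_model("brown"))   # "bad"  -- even when it's sound asleep

Swap fur color for skin tone or a zip code and the same mechanics show up in medicine, hiring, and lending, which is why the articles you'll find are about far more than cats.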
Electricity
Industrial AI companies need more than the existing electricity grid you depend on to light your home and run your coffee maker. The biggest AI companies are like aluminum producers in that industrial AI needs high-volume, stable electricity. Aluminum smelters are often located right next to big fossil fuel, hydroelectric, or geothermal electricity generators—sometimes built just for that purpose, whether nearby communities wanted them or not. The parallel runs deep. Aluminum companies bring bauxite ore to aluminum smelters. Raw data flows into GPU computers. Aluminum smelters need lots of electricity to smelt that ore into primary aluminum billets. Big AI data centers need lots of electricity to train foundation model AI from raw data. Aluminum producers' billets are subsequently turned into cars, soda cans, and fishing boats. Big AI's foundation models are subsequently turned into chatbots, x-ray diagnostics, and search assistants.
In 2025, AI companies are already using a lot of electricity from fossil fuels to train AI. They know burning even more coal and natural gas for their rapidly expanding data centers will put a lot of carbon into the atmosphere. They know this will make it harder for them to make money from customers concerned about climate change. They know that what they're doing will add to the risk of floods, hurricanes, droughts—extreme weather affecting the economy they depend on and the data centers full of their AI computers. Industrial AI is already mitigating this side effect with nuclear energy and its low- to no-carbon output. The pitfall? More nuclear waste in the form of spent fuel. Although fuel reprocessing technologies promise to reduce the final volume of high-level radioactive waste, there will still be more to safely dispose of for the hundreds to thousands of years it will take to lose its radioactivity. Consensus favors burial deep underground in stable geological formations without nearby groundwater. Absent such sites, high-level waste will continue to accumulate in "casks" at the reactor site. China, Japan, Canada, and many EU countries are in the process of selecting and designing long-term burial locations. The U.S. has one but, for political reasons, has been unable to license it for use. What can you do? Decide how this reality matches up with your values and make choices accordingly.
Cooling Water
Data centers use water to manage heat. Effectively, all the electricity consumed to train and operate AI is turned into heat. Data goes into the GPU. The GPU burns electricity to do calculations. The process releases heat. The GPU produces data.
Think about how your laptop, phone, or desktop computer gets hot when it works hard. Global estimates for 2025 put annual electricity usage by data centers above five hundred terawatt hours (Masanet et al., 2020), enough to run two trillion of those red heat lamp bulbs keeping french fries warm for an hour apiece. Graphics processing units get slow when they get hot, necessitating quick removal of heat from the data center. This process requires billions and billions and billions of gallons of water, ultimately evaporated to achieve actual cooling (think about pictures of nuclear power plant cooling towers; that's what's going on). The need for data center cooling water comes on top of the need for water to cool the nuclear reactors that generate the electricity in the first place. Usually, this is fresh water because salt water is corrosive and bad for pipes and machinery. This is fresh water that would otherwise stay in the ground, a river, or a lake and be available for fish to swim in and people to drink.
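If you want to check the heat lamp comparison yourself, here is the back-of-the-envelope arithmetic in Python. The 250-watt bulb is my assumption (a common wattage for food-warming lamps), not a number from the Masanet paper.

    # Back-of-the-envelope check of the heat lamp comparison.
    DATA_CENTER_TWH_PER_YEAR = 500  # rough global estimate used in the text
    BULB_WATTS = 250                # assumed power draw of a red heat lamp bulb

    watt_hours = DATA_CENTER_TWH_PER_YEAR * 1e12   # 1 terawatt-hour = 1e12 watt-hours
    bulb_hours = watt_hours / BULB_WATTS            # total hours of bulb glow
    bulbs_lit_all_year = bulb_hours / (365 * 24)    # or: this many bulbs burning nonstop

    print(f"{bulb_hours:.1e} bulb-hours")                     # ~2.0e12, i.e., two trillion
    print(f"{bulbs_lit_all_year:,.0f} bulbs lit year-round")  # roughly 230 million

Either way you slice it, that is an enormous amount of heat to move out of buildings, which is where the water comes in.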
What to do? Same as with electricity. No matter what you think, electricity and cooling water are costs, so companies have an incentive to minimize usage. Decide how water usage by AI companies aligns with your values, and make purchasing decisions accordingly, or ask your employer or stockbroker to do the same. The point is that every industry we depend on in our modern lives consumes resources. Artificial intelligence is no different. Just as with packaging, transportation, and housing, your responsibility is to be informed and make decisions about how you consume AI that align with your values.
Individual Rights
Your unique data is likely the most valuable asset held by any technology company you've knowingly given it to, or that has it anyway because you clicked through obscure Terms of Service (at least in the U.S.). Mark Gorenberg and Ash Fontana were two of the first venture capitalists to invest in AI. They wrote an article in 2016 about the new rules for venture capital in technology, including the following on the requirement for a start-up to have a data strategy:
Data Strategy: A crucial part of your plan to build an intelligent software product. A clean, unique data set is a competitive advantage in itself (so don't sell it!). From there, you can start building predictive models with your customers' data and turning successful experiments into features that help them make decisions. Finally, you will have a product that uses incremental data to improve models; making the product better, attracting more customers, getting more data and so on — a "Virtuous Loop."
Use of your data to gain a competitive advantage is a fundamental pillar of AI. Your individual rights to correct, mask, prevent use of, or simply delete that data are in direct opposition to the profit motive of technology companies. At its best, this remains a virtuous cycle where use of your data directly benefits you. There are no guarantees, especially in the U.S.
While traveling abroad in the EU, I checked the privacy policy of a major U.S.-based news website I subscribe to. Since I was accessing the news site from the EU, I found (and have a great screen capture of) a page that stated the news site and ninety-four of its "partners" would store and access any personal data they could collect through the site to do all sorts of things, including "understand audiences through statistics or combinations of data from different sources." I was also able to see the full list of ninety-four partners and the exact use they make of my data. I had the choice of opting in to this arrangement. No sharing would happen if I did nothing or rejected the option. When I got back to the U.S., I went to the privacy policy to find I did not have the same rights to control my data. In place of the relatively clear EU-mandated language, there was a confusing and ultimately opaque set of terms that kind of disclosed what would happen with "partners" unless I was a resident of California, Oregon, or Nevada, in which case I had some other rights. But not at home in Pennsylvania!
The hard reality is that there's a massive global marketplace where your data is collected, used to maximum competitive advantage, sold, traded, and monetized. Friends who work in marketing, politics, and philanthropy have access to scarily detailed subscription-based databases that can be used to profile you and your behavior down to the most minute detail. Up until the mid-2010s, I had to call my bank to file an "international travel plan" when I was traveling abroad so they would allow credit and debit card transactions to go through. Now? The bank has an AI model that's been taught everything about me. It'll allow a late-night poutine purchase in Quebec City to dilute the beer with cheesy gravy fries, no travel plan required!
Banks aren't the only large organizations that know, or are able to know, everything about you recorded in the digital world. PRISM is the code name for a surveillance program run by the U.S. National Security Agency. Under oversight, government agencies are able to search through your and my individual-level data processed by U.S. technology companies, including all the Big AI players. PRISM is the program brought to light by Edward Snowden in 2013. It continues and is the basis for a series of legal actions brought in the EU by a private nonprofit called None Of Your Business (NOYB), a comical name if the issue weren't so serious. The organization was founded by Austrian lawyer Max Schrems who, while studying at Santa Clara University in Silicon Valley, was blown away by Facebook's lack of respect for individual privacy. Schrems used the EU's privacy laws to get his hands on Facebook's store of his own personal data. As a result of his actions, a patchwork collection of oversight was put in place to allow citizens of the EU to safely use Facebook, Apple, Google, Microsoft, etc. The oversight is fragile and was gutted in the early months of the second Trump administration. It remains to be seen if and how the oversight is reinstated. I encourage you to spend some time reading about the privacy rights action being taken by NOYB on their website: www.noyb.eu.
This all gets creepier when you consider that the companies subject to PRISM collect and can be forced to share pictures, videos, and voice recordings of you, your parents, your kids. Not to mention your fingerprints. Travel by air in the U.S. lately? Notice the "optional" face scan going into use at airport security checkpoints? On a recent trip through the airport, the security worker was rapidly and emphatically instructing passengers to insert their state-issued ID and look into the camera to be scanned. The facial recognition scan is technically optional, as stated in the fine print on the sign off to the side, but what I saw did not at all appear to be voluntary. My line didn't have it in place yet; otherwise, I would have refused and had more to share here! The same systems are going into place on some European airlines. China isn't the only surveillance state.
Your individual copyright is also at risk from AI. A copyright gives an author or creator "the exclusive legal right to reproduce, publish, sell, or distribute the matter and form of something (such as a literary, musical, or artistic work)" per Webster's dictionary. Copyright protects you and me from someone taking our writing, music, art, or performance and using it for their own financial gain. Three publishers, including the New York Times, have brought a lawsuit before a U.S. federal court, arguing that OpenAI and Microsoft are making money from copying and processing news articles without asking or paying for the right. The roughly fourteen million authors and editors of Wikipedia articles may also have a complaint, as their work was copied and used to train OpenAI's ChatGPT, among other AI. What if the corporations had to pay all the authors of Wikipedia a toll for redistributing their knowledge?
There are more known side effects and pitfalls you could inform yourself about. By all means, go dig in and be more informed. But AI, in the U.S. especially, is happening whether you like it or not. Your job is to maximize benefit and minimize harm according to your first principles. There are many unknown side effects and pitfalls ahead. You must accept uncertainty and be prepared to respond to events as they unfold. You teach the machines.
Knowledge, Uncertainty, and Ignorance
In 2002, then U.S. Secretary of Defense Donald Rumsfeld was probably referencing a beautiful Persian poem by Ibn Yamin in a press conference when he made his (in)famous "known unknowns" remarks. The poem wrestles with knowledge and uncertainty. Mr. Rumsfeld was attempting to help people process the run-up to the invasion of Iraq and the uncertainty that came with it. The poem gives us a gift: a framework for living in an uncertain world. Here is the poem, translated by theoretical physicist Niayesh Afshordi (2016):
One who knows and knows that he knows…
His horse of wisdom will reach the skies.
One who knows, but doesn't know that he knows…
He is fast asleep, so you should wake him up!
One who doesn't know, but knows that he doesn't know…
His limping mule will eventually get him home.
One who doesn't know and doesn't know that he doesn't know…
He will be eternally lost in his hopeless oblivion!
Knowledge and Action
"One who knows and knows that he knows…
His horse of wisdom will reach the skies."
The straightforward side effects and pitfalls are the ones you know of, understand, and act on to avoid or reduce harm. When you see something bad and do something about it, your "horse of wisdom will reach the skies." Pay attention to known problems with AI that have solutions, and if you're motivated and able, be part of the solution.
Knowledge with Uncertainty
"One who knows, but doesn't know that he knows…
He is fast asleep, so you should wake him up!"
Harder are the side effects and pitfalls you can see but not fully understand or know how to address. You know something bad is going to happen but not exactly how to do something about it. You are fast asleep, so you should wake up! Trust your instincts. If you think there is a problem with AI in your work or life, but don't know for sure, you're probably right. Follow the "see something, say something" rule. Bring it up, talk about it. Try not to catastrophize; try to understand instead.
Knowledge with Your Head in the Sand
"One who doesn't know, but knows that he doesn't know…
His limping mule will eventually get him home."
This is the head-in-the-sand or profits-before-principles category that is the most disturbing to me: Ignoring known risks in a mad dash for "creative destruction" by corporations and their leaders—the same Masters of the Universe who brought you the dot com crash, the 2008 financial crisis, and subsequent recessions. Mark Zuckerberg, knowing he'll be OK in the end, plows ahead despite the risks. "His limping mule will eventually get him home." It's not just the Big AI "them," but increasingly also the leadership of every corporation or large nonprofit built on human knowledge work. Geoffrey Hinton is a canary in the coal mine, warning us that there is internal knowledge with external denial. Your job is to ask the hard questions in the all-hands meeting, the school board meeting, or in the customer satisfaction survey. Bring up solutions to go with known problems. Show leadership even if you're no AI expert.
Ignorance
"One who doesn't know and doesn't know that he doesn't know…
He will be eternally lost in his hopeless oblivion!"
There are unknown risks. Things we can't see, often because of groupthink. You're completely ignorant of what might happen. Worse, you aren't even aware of your ignorance. You're blind to what might happen and may have a false sense of security. With stakes as high as our world faces with AI, it's important to consider where we may have a blind spot. Nassim Nicholas Taleb's book The Black Swan is helpful again in this context. Often, individuals will call out a risk that their group can't see. Pay attention to these lone voices, often seen as contrarian or disruptive. What can they tell you? Would it really cost that much to mitigate a seemingly unthinkable risk, considering the downside?
In conclusion, AI is brand new but has a lot of momentum behind it. Ibn Yamin's poem goes against our instinct, our drive to stay in our comfort zone of knowledge, to put our heads in the sand. Artificial intelligence brings much uncertainty and holds unseen challenges of which we are ignorant. The best defense is a good offense built on as much knowledge, understanding, and practice as we can pull off. Which brings you to the final chapter—using AI in your life!
References
Afshordi, Niayesh, 2016. He Will Be Eternally Lost in His Hopeless Oblivion! (Retrieved on April 24, 2025, from https://nafshordi.com/2016/07/26/he-will-be-eternally-lost-in-his-hopeless-oblivion/)
Fontana, Ash, & Mark Gorenberg, 2016. Growing Up in the Intelligence Era. TechCrunch. (Retrieved on May 9, 2025.)
Gerkin, Tom, 2023. New York Times Sues Microsoft and OpenAI for 'Billions.' BBC. (Retrieved on May 2, 2025.)
Kleinman, Zoe, & Chris Vallance, 2023. AI 'Godfather' Geoffrey Hinton Warns of Dangers as He Quits Google. BBC. (Retrieved on April 22, 2025.)
Masanet, Eric, Arman Shehabi, Nuoa Lei, et al., 2020. Recalibrating Global Data Center Energy-Use Estimates. Science 367, 984–986. (Retrieved on April 24, 2025.)
Pope, Audrey, 2024. NYT v. OpenAI: The Times's About-Face. Harvard Law Review, April 10. (Accessed on May 14, 2025, from https://harvardlawreview.org/blog/2024/04/nyt-v-openai-the-timess-about-face/)
Rogelberg, Sasha, 2024. Fashion Giant Shein Has Been Slapped with Yet Another Lawsuit Alleging Copyright Infringement, Data Scraping, and AI to Steal Art: 'It's Somewhat Shocking That They've Been Able to Get Away With It.' Fortune, April 16. (Retrieved on May 2, 2025.)
Supreme Court of the United States, 2001. New York Times Company, Inc., et al., Petitioners v. Jonathan Tasini et al. (Retrieved on May 14, 2025, from https://www.law.cornell.edu/supct/pdf/00-201P.ZO)
United Nations Office for Disarmament Affairs, 2023. Lethal Autonomous Weapon Systems (LAWS). (Retrieved on April 23, 2025, from https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/)