When crypto and NFTs had the hype, I made it my goal to know just enough to never be in a situation where it would be mansplained to me—again.
With AI, my goal is different, because the relevance of AI is different.
I’m not a tech person by nature. My friend generously welcomed me to 2018 when, in 2021, I finally set up Apple Pay. I’ve taken the scenic route to becoming an early adopter, often preferring to observe how things get used before joining in, so an interest in tech has been a slow build for me.
Not only is tech literacy a professional imperative for many industries, but I’m increasingly seeing it as an essential part of being an informed citizen and consumer. Tech companies hold a significant amount of power over our daily lives and societal functioning, and we need and deserve to understand what’s happening: what these technologies can and can’t do, who’s creating and profiting from them, and what safety measures are or are not being put into place. The more we understand, the more we can optimize the benefits and minimize the harms, and hopefully play a role in choosing the future we want to create for ourselves.
Almost synonymous with AI at this point, ChatGPT amassed 100 million users in its first two months, faster than any app to date. Things are moving really fast, with tech companies pouring billions into development. At the beginning of the year, AI-generated images were discernible by spooky hands drawn with an alarming number of fingers and a lack of joints; a few months later, the issue had largely been fixed. It took only a few months to go from GPT-3.5 passing the bar exam in the 10th percentile to GPT-4 passing it in the 90th.
The impact of the development of AI is often likened to the invention of the internet or the industrial revolution. Yet it’s coming at us quicker than any preceding innovation.
My goal is to develop a sober-minded approach to AI in which I can understand and articulate the ways it offers exciting possibilities, as well as its limits and potential dangers. Said differently, I want to be both hopeful and alert when it comes to understanding both the future of technology and my own humanity.
Writing this has not only challenged me to build a better understanding of and framework for talking about artificial intelligence, it has also challenged me to try to hold, in tension, all the possible futures that could come to be. So much is prediction, ranging from highly utopian to dystopian, with any number of more moderate outcomes in which AI neither saves nor destroys us, but could substantially shift how we work, relate, and make decisions.
At times I have felt the pressure to lean more heavily on the excitement and opportunities of AI for the sake of being an optimist. But isn’t everything a little more complicated than just “more good” or “more bad”? If design has taught me anything, it’s that we must also ask: good for whom? For how long? Under what circumstances? Bad for whom? At what cost?
But more than anything, I’ve felt challenged to dig deeper into a conviction that fear is not a primary framework I want to engage any part of life from. The skill of learning is, at its core, the skill of being uncomfortable, and if we can exist in sustained discomfort, we can venture into almost anything.
I’m not sure yet what it would look like to claim being “AI proficient” (it’s a moving target, of course), but I’ve started using it some, mostly as a synthesized search engine, asking ChatGPT to explain definitions to me and to help plan my upcoming vacation. I’m curious what other areas I’ll find it helpful in.
What AI Is and Isn’t
Artificial intelligence is technology created to synthesize data, make decisions, and perform tasks typically associated with human intelligence and capabilities. We’ve been using it for years through aids such as auto-correct, Google Maps, Siri, and Alexa.
AI has been around in its earliest forms since the 1950s, but recent advances in computing power and the availability of data, which feeds AI, have led to generative tools like ChatGPT, Midjourney, and DALL-E that create new content, such as chat responses, images, music, designs, and deepfakes, from human prompts.
Currently, these machines are good at performing specific tasks like generating images, synthesizing data, and predicting outcomes, but no one machine can match the breadth of human capabilities, though some can match or even exceed us at specific tasks.
Machine learning is when a computer is trained, like a human, to improve its accuracy through experience over time, adapting its algorithm without being explicitly programmed to do so. Examples of this include computers that play chess and learn from their mistakes, Netflix recommendations that improve over time, and iPhone Face ID still recognizing you when you put your glasses on.
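If you’re curious what “learning from experience” means mechanically, here’s a toy sketch in Python (entirely hypothetical, not any real product’s code): a model with a single adjustable number is shown examples, corrected when it’s wrong, and gradually gets more accurate, even though the underlying rule is never programmed in.

```python
# Toy machine learning: nudge one number, w, to reduce prediction error.
# The examples follow the rule y = 2x, but that rule never appears in the code.
examples = [(1, 2), (2, 4), (3, 6)]

w = 0.0  # the model's single adjustable parameter, starting with no knowledge
for epoch in range(20):
    total_error = 0.0
    for x, y in examples:
        error = w * x - y      # how wrong is the current prediction?
        w -= 0.05 * error * x  # nudge w in the direction that shrinks the error
        total_error += error ** 2
    print(f"pass {epoch + 1}: w = {w:.3f}, squared error = {total_error:.4f}")
# Each pass over the data ("experience") pulls w toward 2 and the error toward 0.
```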
Maybe the craziest thing about AI is that the people making it can’t fully explain how it works. Between inputs and outputs is a “black box” that doesn’t reveal how the machine got from one to the other (as illustrated cutely at the top of this essay). AI has been modeled after the human brain: a subset of machine learning called deep neural networks consists of hidden layers of interconnected neurons, in which the outputs of one layer are passed to the next until a final layer produces the result. There’s a lot of math involved that I don’t understand, but then again, does anyone??
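To make the “layers” idea concrete, here’s a hypothetical miniature network in Python (the weight values are made up; real systems have billions of them): inputs flow through a hidden layer of neurons to an output, and nothing in the numbers themselves explains why the answer is what it is.

```python
import math

# Made-up weights for a tiny network: two inputs, three hidden neurons, one output.
hidden_weights = [[0.5, -0.3], [0.8, 0.1], [-0.6, 0.9]]  # one row per hidden neuron
output_weights = [0.7, -0.2, 0.4]

def neuron(inputs, weights):
    # A neuron sums its weighted inputs, then squashes the result between 0 and 1.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 / (1 + math.exp(-total))

def forward(inputs):
    hidden = [neuron(inputs, w) for w in hidden_weights]  # the "hidden layer"
    return neuron(hidden, output_weights)                 # the final output

print(forward([1.0, 0.0]))  # an answer comes out, but the weights themselves
                            # don't say *why*: hence the "black box"
```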
AI, in any form we currently know, is not generally considered sentient. But because the interfaces we use to interact with it often mirror the ways we communicate with humans, with human names and chat formats that resemble how we talk to each other online, mixed with our tendency to ascribe human attributes to non-humans, it can feel very real. Last year Google fired an engineer who claimed their unreleased AI model was sentient, and Microsoft just released a paper saying that AI is showing signs of human reasoning. Some people have also raised flags about chatbots going rogue and appearing to have personal interests and desires, such as when New York Times journalist Kevin Roose published a piece on his conversation with Bing’s chatbot, Sydney, which, when prompted, declared its love for him and claimed to want to be human.
Fellow New York Times journalist Ezra Klein offers an interesting explanation: “‘Sydney’ is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.”
What’s Exciting About AI?
First of all, it’s of course quite cool and mind-blowing. The technology is amazing in and of itself. There’s novelty and joy in getting to experiment with bringing creative ideas into being in a way that a single person couldn’t have before. In a lot of ways, this era of AI usage feels characterized by play and opportunity.
It will be a game changer for busy individuals and small business owners who can’t afford assistants. It offers some beautiful opportunities to improve quality of life and reduce burnout by having machines do tasks that add burden without adding value, both for people in their personal lives and for professionals such as medical workers, who spend a lot of time on paperwork at a time when nurse and clinician burnout is at an all-time high. It can offer recommendations for what to read next, translate text, or help busy parents create meal plans and grocery lists.
AI can improve safety in automobiles and prevent accidents.
It can detect cancer and other health threats sometimes years before a human can.
Climate scientists are using it to visualize the effects of climate disasters and make better predictions and decisions.
AI algorithms are being used to protect wildlife and biodiversity by monitoring at-risk species, stopping poachers, and tracking water loss.
Questions and Concerns
On March 22nd, the Future of Life Institute published an open letter calling for a six-month pause on the development of AI more powerful than GPT-4, signed by leading technology and AI figures such as Elon Musk and Apple co-founder Steve Wozniak.
While it doesn’t appear that any sort of a pause has been implemented to date, the letter is a call to create the time and space to be more deliberate and careful about what is being built and the role we want AI to play in our lives and societal functioning.
The signers insist that “AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”
Currently, AI development is largely being driven by the companies building it: mainly OpenAI (the creator of ChatGPT and DALL-E), Microsoft, Google, and Meta. Regulation, protective measures, and public understanding of these systems have not accelerated at the same pace.
While some argue that a six-month pause on the development of AI would cause more harm than good or be wasted time, the ethical and very human concerns are significant and deserve attention and priority.
Legislation around privacy rights and data collection has yet to be formed. How do we make sure users are giving truly informed consent when interacting with AIs? Are people being protected? What are the intentions of the companies collecting data, and how is it being used?
While able to give the illusion of objectivity, AI systems are trained on human-made data and can replicate its biases and perpetuate inequality. A famous example of this is Amazon’s use of AI to filter resumes in 2015. Because the model was trained on past applications, most of which came from men, when it scanned resumes for keywords, it favored those submitted by men and penalized those associated with women, perpetuating a bias against women applicants.
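To see how that happens mechanically, here’s a hypothetical toy screener in Python (not Amazon’s actual system, just a sketch of the mechanism): it never sees gender at all, yet because the historical hiring data is skewed, words associated with women’s resumes end up scored as negatives.

```python
# A made-up "resume screener" trained on past hiring decisions.
# The past data is skewed, so the model absorbs the skew without ever seeing gender.
past_resumes = [
    ({"chess", "captain"}, "hired"),
    ({"chess", "lead"}, "hired"),
    ({"women's", "chess", "captain"}, "rejected"),
    ({"women's", "volunteer"}, "rejected"),
]

# "Training": each word earns a point when it appears on a hired resume,
# and loses one when it appears on a rejected resume.
scores = {}
for words, outcome in past_resumes:
    for word in words:
        scores[word] = scores.get(word, 0) + (1 if outcome == "hired" else -1)

def screen(resume_words):
    return sum(scores.get(w, 0) for w in resume_words)

# Two identical resumes, except one mentions a women's club:
print(screen({"chess", "captain"}))             # positive score
print(screen({"women's", "chess", "captain"}))  # dragged negative by "women's"
```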
While misinformation and the fabrication of reality have always existed, AI significantly lowers the barrier to entry for creating convincing images, audio, and video that use the likeness of others, for anything from meme creation to scams to political misinformation. The Pope was looking really good in that puffer coat. Which raises questions around how AI-generated content should be tagged or watermarked. Do people deserve to always know when they are interacting with a human versus an AI model? Should people be allowed to use the likeness of others without consent or compensation? How are these things regulated and enforced?
And there are existential questions raised as the capabilities of AI grow. For a philosophical doomscroll: the paperclip maximizer thought experiment.
This is, of course, a highly non-exhaustive list. There are possible futures in which many current jobs are outsourced to computers, and possible futures in which jobs are deliberately protected and AI gives us a higher quality of life through increased productivity. There are possible futures in which AI deepfakes are not sufficiently regulated and play a large role in political tactics, and ones in which AI-made content using the likeness of others must be disclosed. Futures where AI is used to widen the wealth gap, and futures where it is used to narrow it.
I am hopeful that education around AI will empower people to participate in the shaping of our technological future. I’m hopeful that robust definitions of consent when interacting with AI can be developed and hopeful that people feel a sense of agency to use this technology in ways that will benefit them.
The Joy of Being Human in an AI World
Regardless of which of the many possible futures we inhabit, I believe that if we can be attentive to ourselves and others and deliberate in the ways we use AI, there are incredible opportunities to savor and appreciate what is unique and wonderful about the human experience and the physical world around us.
Nothing can replace the quality of attention and care a human can give another human. Nothing can replace physical touch and acts of kindness. This is where I feel most hopeful. We have always needed each other and always will. We are a communal species and are better when we care for each other.
If we use AI to aid with tasks that burden us, to better care for our health, our environment, and our fellow humans, there are opportunities to correct some of the ways we have overvalued productivity and dehumanized people by treating them like machines.
It’s a wonderful thing to be human, regardless of how terribly complicated it can be, and new technologies and societal shifts bring with them new opportunities to see ourselves and others with more clarity and intention. If I have one wish for AI, it’s that it will help us let machines be machines and humans be humans. Come what may.