Recently, OpenAI’s Amanda Askell, Miles Brundage, and Jack Clark joined Rob Wiblin on the 80,000 Hours podcast to discuss a wide range of topics related to AI philosophy, policy, and publication norms.
During the conversation, they also discussed where to start if you’re trying to understand AI and AI policy. It was a topic that spoke to me directly, since I’m interested in the field but totally overwhelmed by the resources (or lack thereof) that are available.
A common theme in their responses was that the highest-leverage opportunity for someone trying to wrap their head around AI was to create content that’s useful to laypeople, government staffers, and researchers around specific news and topics within the field.
In this article, I summarize and expand on the reasons why they recommend this path, and the ways to go about doing it yourself.
I still strongly recommend giving the whole podcast a listen. They discuss ways that non-technical people can work their way into the field (without getting a PhD), and some of the characteristics and soft skills you need to be effective. The whole podcast is excellent, but the discussion around “getting into the field” starts around 1:51:00.
Jack Clark is the policy director at OpenAI, splitting his time between San Francisco and Washington D.C. (which he refers to as “the happiest place on earth”).
He’s also the author of the Import AI newsletter, a weekly email that summarizes news in the AI community and explains why it matters in a way that’s readable and useful to non-experts.
With one foot in research and the other in policy, Clark is as qualified as anyone to speak from experience on what the AI policy world needs more of.
And according to Clark, what it needs is content.
“Any staffer for any politician in any country I’ve been to has made mention of needing more materials to read to get them into AI and AI policy,” says Clark. Specifically, summaries of “AI and its relevance to policy within a specific, tightly-scoped domain.”
Ironically, the medium he recommends couldn’t be further from the trailblazing media and messaging that AI itself is producing.
“In the glorious AI future, the cutting edge is text-based emails that have no images in them.”
He’s not wrong. According to a survey of 273 Hill staffers shared in The Atlantic, newsletters that address a topic “related to issues [their] boss is active on” will be read “regardless of where it’s coming from”, and staffers “typically read text-only newsletters immediately.”
That’s right — plaintext emails are the new responsive HTML.
But it’s not just Hill staffers and policymakers. As the field of AI grows, there will be more opportunities for folks who can communicate AI news and advancements to laypeople.
Amanda Askell sees the lack of a set curriculum for people interested in AI policy as a key issue to growing the field. “Increasingly I hope that there’s going to be more material for people that are interested in the field as it grows.”
“If you feel like your skill set is you’re really good at communicating and synthesizing the latest innovation to a public audience,” says Askell, “that’s probably going to be a skill set that’s really useful.”
The bottom line: there’s a clear need for more sophisticated, clearly written content related to AI policy, for everyone from the general public to congressional staffers to AI institutions looking to make it easier for smart, interested individuals to join the field.
Short of actually performing the research or working directly with policymakers, creating that content is the highest-leverage, lowest-barrier-to-entry place to start for anyone looking to dip their toes into AI and AI policy.
Writing about AI policy is still a useful endeavor even if nobody reads it. The mere act of putting your thoughts into words helps you organize them and understand the topic better.
Most broadly, writing forces you to:
- Choose a single, defined topic
- Research that topic enough to decide what’s important about it
- Understand it well enough to either have an opinion on it or summarize it in a way that’s useful and clear
Within the realm of AI policy, writing requires you to synthesize complicated topics in a way that’s clear enough for a layperson to understand, since presumably, you yourself are starting as a layperson on that topic.
If you look at writing as a form of teaching (where you’re the teacher and the reader is the student), the effect that writing has on your knowledge becomes clearer. Many studies have shown that teaching a topic is one of the most effective ways to learn the topic itself.
But where should you start? Askell shares some advice: “start by finding a problem that’s interesting and relevant, then write something and form an opinion on it.”
To illustrate, take my experience in writing this article. As someone with a background in content marketing, Jack’s advice on starting a newsletter hit close to home. Since “writing a newsletter” is a topic within AI policy that I can connect with, it made for a good micro-topic that I could unravel.
Going through this process has helped me find my footing and build some confidence. If I can speak the same language as AI researchers, even on a topic that doesn’t substantively get into AI or AI policy at all, I have a place to anchor myself and build from.
The other primary advantage of writing about AI policy is that it provides proof of your knowledge on the subject, and gives you assets on which you can build credibility.
Clark’s Import AI newsletter is a case in point. According to Clark, it’s been one of the best tools he has for building his network.
“I’ve met quite a substantial number of people in policy, not through my OpenAI affiliation, but through the fact that they subscribe to my newsletter,” says Clark. “It’s also increased my belief that just producing stuff that’s designed to be useful for your target audience is a really good action to take to let you get evidence about how to meet and who to meet.”
Creating content builds credibility in a number of ways.
Most importantly, writing is proof of your ability to think through a complicated topic and connect the dots to produce something useful. AI is a challenging topic to learn about; Askell likens it to getting “a Ph.D. in 7 different subjects” since the effects touch everything from technical programming to economics to philosophy.
Clark agrees: “If you like looking at multiple bits of information from multiple domains and bringing [it] all together into some theory of the world or theory of change, I think you’ll do better in AI as a consequence.”
Writing something on a specific topic in a way that’s thorough and accurate requires that you understand how these different disciplines interact and overlap. And doing so in a way that’s clear and concise is an important skill that you’ll need to demonstrate for many of the roles you might be interested in.
By creating a shareable asset (in this case, a text-based article), you’re now in a position to use it to start productive conversations with knowledgeable people in the field that you may hope to work for and work with down the road. This is the easiest way to start building a network from scratch.
Askell recommends the approach; once you’ve written something, “reaching out can be really fruitful, because you’ve shown interest, and roughly what you can do in terms of the research you’re engaged with.”
There’s a balance, though, between being well-prepared and being too risk-averse in your outreach. Brundage explains, “I think, generally, people should not be afraid to reach out and get feedback and circulate ideas.”
Obviously, you should be familiar with the work of the person you’re reaching out to as it relates to what you’re reaching out about, but it’s probably better to err on the side of connecting sooner rather than later.
According to Clark, one of the easiest ways you can display your understanding of the material is by engaging with research put out by AI organizations and finding a place where you disagree with or have questions on the conclusions.
Clark expands: “That could be reading some of the blogs from Microsoft calling for federal regulators to look at facial recognition, or it could be Google’s governance of AI paper… and then responding to it. Because none of those things are entirely correct documents or blogs. They have points that you can disagree with.”
According to Clark, being able to form your own opinion on a subject and “identify the logical inconsistencies” in materials shared by AI organizations is a barometer by which he measures how well someone understands the topic.
The reason? Having an opinion gives others an example of the way you think, and “people underestimate how valuable it can be to produce an example of your thinking.”
A natural offshoot of developing an opinion is being able to ask thoughtful questions.
“I’ve always found that a good way to tell if I understand something is if I can go up to an expert in that domain and ask them a question that is relevant and sophisticated,” says Clark.
Armed with content you’ve created, you can start the most productive, beneficial kind of conversation with someone in the field: one where you ask a relevant question.
Long-form articles are not the only form of content that folks in the AI field respond to. “An uncommonly large amount of the primary technical researchers use Twitter to announce results, talk to each other, and exchange ideas,” says Clark.
Following their conversations gives you insight into the topics they’re discussing and the way they’re discussing them. The more you’re exposed to these conversations, the better your antennae will be at picking up what’s important and how to talk about it in a way that resonates with the people you’re trying to connect with.
Given how multi-faceted the field is, a prerequisite for anyone looking at “getting into AI” is having genuine, self-motivated interest in the field.
“It’s important to have excitement in the field and the work that’s being done,” says Askell. “All of us in our spare time have either self-educated in this, and on an ongoing basis have tried to build our own things, even though we’re not technical researchers.”
Unless you’re already in the field, you’ll most likely have to start with self-education. Creating content is an easy way to display those efforts.
At the end of the day, if you’re serious about dipping your toe into the world of AI and AI policy, you need to be producing something.
Written content is the most accessible tool in your toolbox that gives you:
- The opportunity to organize your own thoughts on the subject
- An asset you can use to get feedback and start conversations with experts
- Confidence in your ability to get out of the dark on a broad subject area
So start writing.