
AI is now part of my toolkit for science communication, just like cameras, editing software, and drawing tablets.
It’s not a replacement for scholarship, and it’s not a magic oracle. It’s a tool I use carefully to share deep history with more people, more clearly.
This page explains:
- Why I use AI at all
- What I do and don’t use it for
- How I handle accuracy, ethics, and artists’ labor
- Where I draw the line, and how I plan to evolve these practices
I’m laying this out because I care about both truth and trust. If I’m asking you to take my work seriously, you deserve to know how it’s made.
1. Why I Use AI in Science Communication
My work sits at the intersection of:
- Human origins and deep history
- Public-facing education and outreach
- Limited time, limited funding, and a lot of ideas
AI helps me:
- Translate complex research into accessible language
- Create conceptual visuals quickly (when I don’t have the budget or time to hire an artist)
- Experiment with narrative and visual formats that might reach people who’d never pick up an academic article
I use AI to amplify my ability to communicate—not to replace critical thinking, scholarship, or human creativity.
2. What AI Images Are (and Are Not) in My Work
What they are
When I use AI-generated images in posts, talks, or social media, they are:
- Conceptual / illustrative: visual metaphors, mood pieces, or stylized depictions meant to support a point, not stand in as literal data.
- Tools for engagement: a way to grab attention so I can then talk about fossils, methods, uncertainty, and real evidence.
- Clearly my responsibility: I treat every image I publish as something I am accountable for, regardless of the tool that helped generate it.
What they are not
AI images in my work are not:
- Fossil evidence
- Direct reconstructions of specific individuals or species (unless clearly stated as such and based on referenced work)
- Substitutes for actual site photos, artifact images, or primary data
Whenever I show a real fossil, site, or artifact, I aim to:
- Use actual photos or well-documented reconstructions
- Label them as such
- Keep AI out of that part entirely, or clearly separate it
3. How I Handle Accuracy and Misrepresentation
AI is very good at making things that look plausible and very bad at knowing when it’s wrong.
To avoid misleading people:
For images
- I don’t present AI images as literal, data-level reconstructions unless they are explicitly grounded in referenced research.
- When an image is AI-generated, I:
  - Treat it as concept art, not evidence.
  - Avoid “overconfident realism” that might confuse viewers about what we actually know.
  - Add clarifying text in captions/posts when needed (e.g., “conceptual illustration” or “AI-generated conceptual scene”).
For text
- I use AI tools to:
  - Brainstorm structure
  - Simplify wording
  - Draft accessible summaries
- I do not use AI to:
  - Fabricate sources
  - Invent data
  - Replace reading the primary literature
At the end of the day, I am responsible for the claims, citations, and interpretations. AI is a drafting assistant. The judgment is mine.
4. Artists, Labor, and Why I Still Support Human Creativity
I take seriously the concern that AI systems are trained on massive datasets that include the work of human artists who were not asked for consent.
My current commitments:
- When I can hire artists, designers, or animators, I do, and I’m happy to credit and pay them.
- AI visuals are a stopgap, not my ideal state. They fill the gap when:
  - Funding is limited
  - Turnaround is tight
  - The alternative is “no visuals at all”
I’m also open to:
- Collaborating with artists who want to build AI-informed but human-led projects
- Updating these practices as the legal and ethical landscape evolves (e.g., models trained with explicit consent, more transparent datasets, better protections for creators)
If you are an artist with specific concerns—or interested in collaborative work—reach out. I’d rather have a real conversation than argue via comment sections.
5. Transparency and Labelling
Transparency is the minimum ethical standard for using AI.
My commitments here:
- If an image is AI-generated, I will:
  - Treat it as conceptual
  - Label it clearly where the context could confuse people
- If text heavily involves AI assistance, I will:
  - Maintain final editorial control
  - Ensure all claims are checked against real sources
  - Make clear, when relevant, that AI was used in the drafting process
My goal is that a reader or viewer should never have to guess whether something is evidence or illustration.
6. Respect, Harm Reduction, and Sensitive Topics
Working in human origins and deep history means I touch on:
- Human remains
- Past populations who have living descendant communities
- Themes of race, identity, and humanity
My AI practices sit inside a broader ethical framework:
- I do not use AI to:
  - Generate sensationalized or disrespectful images of past or present people
  - Create clickbait around trauma, colonization, or genocide
- When material involves human remains or descendant communities, I aim to:
  - Stay grounded in respectful, accurate representation
  - Avoid dehumanizing or “monsterizing” depictions
  - Prioritize clarity about what is known, what is hypothesized, and what is imagined
AI is not an excuse to be careless with human stories.
7. Continuous Revision: This Is a Living Document
AI and its ethics are moving targets. The point of this page isn’t to claim I’ve solved it once and for all. It’s to:
- Make my current principles visible
- Give you something concrete to critique or discuss
- Hold myself publicly accountable to better standards over time
I expect to revise these guidelines as:
- New tools emerge
- Better ethical frameworks develop
- I learn from colleagues, artists, students, and community members
If you have specific, constructive concerns—for example:
- “This image misrepresents X fossil because…”
- “This caption could mislead people about what we actually know…”
then I’m very open to hearing them. Vague “AI bad” comments are not very actionable. Concrete critique is.
8. Summary: What You Can Expect from My AI Use
If you follow my work, here’s what you can expect:
AI is used:
- To brainstorm and draft text, which I then edit and fact-check
- To generate conceptual illustrations and memes that support educational content
AI is not used:
- As a replacement for reading the literature
- As a source of data or evidence
- To make unlabelled “realistic” reconstructions that blur the line between concept art and evidence
I aim to:
- Be transparent about AI use
- Avoid misleading audiences
- Respect human creators and communities
- Update my practices in response to well-founded critique
If we’re going to tell the story of us—humans, in all our weird, ancient, creative glory—then the tools we use should be handled with care, honesty, and a willingness to improve. That’s what I’m committing to here.