A Collaboration between Anna Madill & Alexis Braly James
Introduction: Why AI Needs an Equity Lens
Sitting in a coffee shop together, Anna and I reflected on the conundrum we face as B Corp businesses needing to scale and systematize: AI is a tool with so much promise, but how do we use it with integrity? That conversation sparked the idea for this commentary.
For those of us who’ve spent our careers building equitable systems, AI presents an uncomfortable call to adventure. We didn’t ask for this. Many of us would prefer to wait it out. But we’re being asked to decide: Will we engage with this imperfect tool, or will we let others shape how it’s used?
As a Black woman, and the leader of Construct the Present, I’ve spent my career guiding organizations through building equitable systems for their people and embedding culture strategies. I’ve seen firsthand how hard it is to undo inequitable practices once they’re baked into operations. AI is no different.
The organizations that will do this well aren’t the ones who rush in without thinking or stand on the sidelines critiquing. They’re the ones who engage thoughtfully, build guardrails first, and stay committed to their values even as they experiment with new tools.
Your concerns about AI and equity are valid. But so is the cost of inaction.
As a result, the question we face in 2025 isn’t whether to engage with AI; it’s how to do it in a way that advances equity rather than undermining it, and how to build the organizational capacity to keep doing that as the technology evolves.
AI isn’t neutral. It reflects the data and systems that built it. Without intentionality, it can reproduce harm. As B Corps, we have a responsibility to model ethical innovation that centers people and the planet, not just productivity. Let’s explore how integrating AI thoughtfully can create space for deeper relationships, creativity, and collective liberation.
As the leader of a small business, I often find myself balancing liberation (friends, family, impact & rest) with logistics. The vision I hold for my work is rooted in freedom, justice, and care. Yet the day-to-day demands of running a business can pull me into systems that value output over connection. Emails, scheduling, and constant communication can quietly drain the energy needed for creativity and relationship-building. Learning to integrate AI with an equity lens has helped me reclaim time and redirect it towards deeper work: nurturing people, building trust, and imagining what’s possible beyond the inbox.
My work reminds me that the practice of equity isn’t just about systems, it’s about relationships. Deep, meaningful relationships are how I lead, how I build trust, and how I create impact. But as my business grew, I started to notice how much of my time was being consumed by logistics instead of connection.
Email became one of my biggest time sinks. I was scheduling meetings, replying to requests, following up, rewriting the same messages over and over. Each response pulled me further from what matters most in my role: listening, reflecting, showing up for my community, and being fully present in conversations.
When I started integrating AI intentionally into my workflow, I was hesitant. I didn’t want to lose the authenticity and warmth that come from personal communication. But I learned that using AI with an equity lens isn’t about replacing humanity, it’s about reclaiming it. By batching prompts and creating communication templates, I was able to automate the repetitive tasks that drained me, while keeping my voice and values intact.
Now, instead of spending hours buried in my inbox, I have time to attend community events, read deeply, and think more strategically about how Construct the Present can liberate workplaces and expand belonging. AI gave me back the hours I needed to slow down and reconnect: with myself, my people, and the purpose behind the work.
Grounding technology in liberation means asking:
How can this tool serve human connection instead of replacing it?
For me, that looks like using automation not to escape my relationships, but to make more space for them.

How to Stay Human-Centered and Creative in the Age of AI
For the past couple of years, our partners at Avenue have been reflecting on how AI is reshaping our industry, drafting and publishing our AI Manifesto, and continuing to innovate our services and operations to meet our team and clients where they are and to respond to what our current world demands. We keep coming back to one central question: How do we embrace the power and opportunity of these tools while preserving what makes our work distinctly human?
At Avenue, we’re constantly navigating this balance between innovation and authenticity, and we’re having so much fun continuing to adjust and re-create how we work. There’s no denying that AI is transforming how we create, research and execute campaigns. What strikes me, though, is that the heart of truly impactful creative work remains unchanged. That creativity still emerges from lived experience, unique perspectives and genuine human connection.
While AI excels at generating content, it quite often misses the nuanced understanding that drives truly effective creative work. AI can’t read the room the way humans do, or pick up on those subtle shifts in cultural tone that inform our best campaigns. When we think about the campaigns that have had the most impact (both for our clients and in the broader market) they often came from someone’s willingness to take a calculated risk, to hold back when others might push forward or to include an unexpected detail that made the whole piece resonate more deeply. And sometimes AI simply produces a wrong or inaccurate answer for the situation, which makes it imperative that someone still proofreads and fact-checks the outputs.
The most powerful creative breakthroughs we’ve witnessed often emerge from those quiet moments between meetings, or from wrestling through the messy middle of a complex challenge. They’re shaped by deep listening, gut instincts and genuine care for the outcome. I’m not convinced these can be automated by AI yet, though we’re open to being surprised as the technology evolves.

Balancing Innovation with Environmental Stewardship
AI is powerful, but it comes with a real environmental cost. In Oregon, where Avenue is based, the growing demand for data centers, from Google in The Dalles to Amazon in Boardman and Meta in Prineville, is putting pressure on local water and energy supplies. These facilities rely heavily on electricity and water to keep massive server farms cool, and communities nearby are beginning to feel the strain. If we’re going to use AI to increase our capacity, then we also need to take responsibility for its environmental footprint. That means making smarter choices and encouraging our clients and community to do the same.
At Avenue, we take this seriously. As a Certified B Corporation and member of 1% for the Planet, we invest in environmental nonprofits to help offset our footprint while prioritizing efficient, low-impact AI tools and practices. We streamline workflows, reduce unnecessary output, and choose partners with strong sustainability commitments. We also apply conscious habits internally, refining and batching prompts to minimize excess queries, reusing content instead of regenerating it, and pausing before prompting to ask, “Do I need AI for this?” These small choices add up.
According to MIT Technology Review, training one large AI model can emit as much carbon as five cars do over their entire lifespans. And generating a single image can consume as much energy as fully charging your smartphone. While AI contributes to emissions, it also holds immense potential for environmental monitoring and sustainability efforts, from tracking greenhouse gas emissions to predicting natural disasters.
Our goal is to leverage that potential while lessening its negative impact. For example, we need electricity to power the lights in our office or home for our day-to-day pursuits, but rather than leave them on when leaving a room, we intentionally use only what we need and turn them off when we leave. The same principle applies to our AI usage. We are living in an AI-forward world now, and we will absolutely lean in and harness the technology to help power our clients’ impact into the future; however, we will do so intentionally and find ways to minimize our AI usage in the process.
At Avenue, we recognize both the incredible potential and the profound responsibility that comes with AI. It is not just a tool; it is a transformative force that enhances how we create, manage, and optimize our work. By using it responsibly and transparently, we can drive innovation without compromising the health of our planet. Purpose-driven brands have a role to play in setting that example by asking better questions, choosing more sustainable tools, and ensuring that progress does not come at the planet’s expense.
Let’s Get Concrete…
My first introduction to AI wasn’t through ChatGPT or Claude; it was through simple tools like Calendly (2013) and Doodle polls (2007). At the time, I didn’t think of them as artificial intelligence. I just knew they were helping me manage board meetings and complex client schedules without the constant back-and-forth of email. Looking back, that was my first glimpse into how AI could quietly expand capacity without erasing humanity.
Now, I help nonprofits and mission-driven organizations make that same shift: turning AI from a threat into a tool that advances their values-driven mission.
For instance, a national nonprofit I worked with was drowning in manual data processing during a culture audit. Their leadership team spent hours trying to sort through hundreds of staff responses, which meant less time for the conversations that actually change workplace culture. We used AI to summarize and identify themes in the feedback, which cut their analysis time by seventy percent. The result? Leaders spent those reclaimed hours listening, reflecting, and having the kind of hard, heart-level conversations that led to a twenty-point increase in psychological safety scores within six months.
Another client’s HR team was stuck in an endless loop of repetitive email communications: onboarding reminders, policy updates, benefits questions. We automated those routine exchanges, freeing up fifteen hours per week that staff now use to mentor early-career employees and host storytelling lunches that build real connection and trust. Their employee engagement scores jumped, and their retention of first-year staff improved by thirty percent.
This is what applying an equity lens to AI looks like in practice: asking who the tool serves and who it might silence. Using technology to amplify human care, not replace it. AI should buy you time for the work only humans can do.
Because here’s the truth: we can automate what machines do best so we have more time for what only humans can do. The hugging. The listening. The expanding of our collective mindset. Oxytocin heals. No algorithm can do that.
In my work with organizations navigating this territory, I’ve learned that practicing equity in AI means using inclusive datasets, naming and crediting your sources, and being transparent about where your content and decisions come from. It means reviewing your processes and prompts through this lens and noticing what changes. The impact is immediate: better trust, stronger teams, and technology that reflects our shared humanity.
Start with these Reflection Questions:
- How does our use of AI align with our personal and organizational values?
- In what circumstances do we agree that using AI to produce work is ok, even encouraged? [See page 4 of this document to get you started]
- What is the environmental footprint of our AI use? (AI data centers have a huge environmental impact, and some types of AI consume far more energy than others.)
- How are we going to be transparent with our colleagues about when/how we use AI?
- What does that look/sound like?
- Where does the impetus to rely on AI intersect with existing norms at our organization, particularly around urgency and competing priorities?
Practical Ways to Integrate AI Intentionally
Batch with Purpose
- Use batching for recurring tasks, like writing captions for social posts, emails, or reports. This saves time while maintaining brand voice and cultural nuance.
- Include examples of ethical prompt engineering (e.g., prompts that reference DEIB frameworks, B Corp values, or community impact).
Sample Prompts:
- “Respond as if you are part of a justice-oriented consulting firm committed to belonging, equity, and sustainability.”
- “Review this paragraph for biased language or assumptions that might exclude marginalized readers. Suggest alternatives rooted in cultural humility.”
- “Rewrite this blog introduction to center on collective benefit rather than individual achievement.”
- “Write an auto-reply for when I’m attending community events that encourages people to connect later and shares my commitment to staying grounded in relationships.”
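For teams comfortable with a little scripting, the batching idea above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the template wording, task list, and function names are hypothetical, and the payloads follow the chat-style message format common to many AI tools rather than any one vendor's API.

```python
# A minimal sketch of "batching with purpose": build one equity-framed
# prompt per recurring task so repetitive drafting happens in a single
# pass instead of scattered, one-off requests. All names and templates
# here are illustrative assumptions.

SYSTEM_PROMPT = (
    "Respond as if you are part of a justice-oriented consulting firm "
    "committed to belonging, equity, and sustainability."
)

def build_batch(template: str, items: list[dict]) -> list[list[dict]]:
    """Turn one reusable template plus per-task details into a batch of
    chat-style message payloads, each carrying the same values-framing
    system prompt."""
    batch = []
    for item in items:
        batch.append([
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": template.format(**item)},
        ])
    return batch

# Example: three social captions drafted in one batch instead of three
# separate sessions.
caption_template = "Write a warm, plain-language caption announcing: {event}"
batch = build_batch(caption_template, [
    {"event": "our fall storytelling lunch"},
    {"event": "a new mentorship cohort"},
    {"event": "our annual impact report"},
])
print(len(batch))  # one payload per recurring task
```

The design choice worth noting: the values framing lives in one shared system prompt, so every batched request inherits it automatically instead of relying on each person remembering to add it.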
Build Templates that Free Up Emotional Labor
- Create reusable templates for communications you repeat: introductions, outreach, internal updates, etc.
- Emphasize that AI can handle structure, so humans can handle connection.
- Encourage teams to use these saved minutes for slower, more meaningful in-person interactions.
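As one concrete illustration of the template idea, here is a small Python sketch of a reusable auto-reply. The wording, field names, and contact details are assumptions you would adapt to your own voice; the point is that the structure is automated while the values stay fixed in the text.

```python
from string import Template

# A reusable auto-reply: scripting or AI handles the structure, while
# the sender's voice and commitments stay fixed in the template text.
# All names and details below are illustrative placeholders.
AUTO_REPLY = Template(
    "Hi $name,\n\n"
    "Thanks for reaching out. I'm currently at $event, staying grounded "
    "in the relationships that power this work. I'll reply when I'm back "
    "on $return_date; if it's urgent, please contact $backup.\n\n"
    "With care,\n$sender"
)

message = AUTO_REPLY.substitute(
    name="Jordan",
    event="a community listening session",
    return_date="Monday",
    backup="hello@example.org",
    sender="Alexis",
)
print(message.splitlines()[0])  # "Hi Jordan,"
```

Filling in five fields takes seconds; writing the same warm message from scratch each time takes the energy this section is trying to reclaim.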
Redistributing Time for Impact
AI can’t build empathy, but it can build capacity. The hours saved through automation are an invitation to reinvest in what machines can’t replicate: human connection, care, and community. When organizations use AI thoughtfully, they free up time and energy that can be redirected toward work that expands empathy and justice. Use that capacity to volunteer your expertise with local nonprofits, mentor emerging leaders navigating systemic barriers, or join coalitions advancing liberation and climate justice.
Start today:
1) identify one community partner your team can support,
2) dedicate one AI-saved hour each week to service or learning, and
3) measure success not by productivity alone, but by your collective impact on people and the planet.

Training AI with an Equitable Lens
AI systems are trained on data that too often excludes Black, Brown, Indigenous, queer, and disabled voices. When organizations use AI uncritically, they risk reinforcing the very biases they’re working to dismantle. Integrating an equity lens isn’t optional. It’s essential to protect your mission and your reputation.
A small education nonprofit I worked with started using AI to draft grant language and community updates. Within weeks, they noticed a problem: the outputs were technically correct but culturally hollow. The warmth and context that made their work resonate with funders and families was missing.
Here’s what we did differently. First, we used AI to handle the time-consuming parts: researching funder priorities, drafting boilerplate sections like organizational background and budget narratives, and formatting proposals to match specific requirements. This freed up about 12 hours per grant cycle that the development director had been spending on administrative tasks.
With that reclaimed time, we built a cross-functional review team to shape how AI was being used:
We assembled diverse voices strategically. Staff from different departments, board members with varied expertise, and two parents the organization serves met for 90 minutes initially, then 30 minutes monthly. This wasn’t a committee that reviewed every output. It was a team that created the guardrails so AI could work unsupervised for routine tasks while flagging what needed human oversight.
We started with pattern identification, not document review. The team looked at five AI-generated grant drafts and identified consistent gaps: missing cultural references, generic language where specific community stories belonged, tone that felt corporate rather than relational. These patterns became prompts. Instead of “write about our after-school program,” prompts now included: “Use warmth and specificity. Reference the Friday storytelling circle. Emphasize relationships with families, not just academic outcomes.”
We built decision criteria, not approval processes. The team created a simple framework: AI handles research, first drafts of standard sections, and formatting. Humans write anything involving community stories, cultural context, or strategic positioning. High-stakes grants get human review. Renewals and smaller asks flow through with spot-checks.
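The decision criteria above are simple enough to sketch as a small routing function. This is purely illustrative: the task categories mirror the framework described here, but the dollar threshold for "high-stakes" and the function names are assumptions any team would set for themselves.

```python
# A minimal sketch of the review team's decision criteria as code.
# Categories follow the framework in the text; the $100,000 threshold
# for "high-stakes" is an illustrative assumption, not a standard.
AI_HANDLES = {"research", "standard_draft", "formatting"}
HUMAN_WRITES = {"community_story", "cultural_context", "strategic_positioning"}

def route(task_type: str, grant_value: int, is_renewal: bool) -> str:
    """Return who handles a task under the team's simple framework."""
    if task_type in HUMAN_WRITES:
        return "human writes"
    if task_type in AI_HANDLES:
        # High-stakes grants get full human review; renewals and
        # smaller asks flow through with spot-checks.
        if grant_value >= 100_000 and not is_renewal:
            return "AI drafts, human reviews"
        return "AI drafts, spot-check"
    return "human decides"

print(route("standard_draft", 250_000, False))  # "AI drafts, human reviews"
```

Writing the criteria down this explicitly, whether as code or as a one-page table, is what lets routine work flow unsupervised while the exceptions reliably reach a person.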
The development director became a strategic editor, not an administrative assistant. Instead of spending hours on research and formatting, she now spends that time strengthening the narrative, weaving in authentic community voice, and building funder relationships. Grant quality improved because she had energy for the parts that actually matter.
Once you’ve built the guardrails for AI and clarified who does what, the next step is making your commitments visible. An AI Equity Pledge isn’t about perfection; it’s about transparency. It tells your staff, your board, and the communities you serve exactly how you intend to use AI and what standards you’re holding yourself to.
As Anna mentioned in Avenue’s example above, the most effective pledges create accountability while remaining simple enough for every team member to remember. Think of it as the foundation that turns your values into daily practice.
Equitable AI Use Pledge
At [Organization Name], we believe technology should serve people, not the other way around. We commit to integrating artificial intelligence (AI) in ways that honor equity, protect privacy, and deepen human connection.
Our Commitments
1. We use AI to enhance, not replace, human creativity.
We view AI as a tool to expand capacity; not as a substitute for human thought, care, or relationships. Automation helps us make space for reflection, collaboration, and innovation led by people.
2. We protect personal and community data.
We never enter personal, confidential, or identifying information into AI tools. This includes employee records, client names, private emails, or internal data. We understand that once information enters these systems, it may no longer be private or secure.
3. We center equity and justice in every prompt.
We intentionally design and test prompts to reflect inclusive, anti-oppressive language. We ask: Whose voice is missing? Who benefits? Who might be harmed? Equity is our lens for how we generate and interpret AI outputs.
4. We commit to transparency and accountability.
When AI supports our work, we name it. We will be open about where and how automation plays a role, maintaining honesty in how our content and processes are created.
5. We reinvest time saved into community and climate action.
Efficiency without justice is empty. We dedicate a portion of the time saved through automation to volunteering, mentoring, and advocacy with organizations advancing liberation and climate justice.
6. We evolve as technology evolves.
We will continue to learn, test, and adjust our practices to ensure our use of AI remains ethical, equitable, and aligned with our values.
Signed,
[Organization Name]
Date: _____________
Where to Start
AI can’t liberate us, but it can create the time we need to do the liberating work ourselves. The goal isn’t to work faster. It’s to work with more intention.
If you’re ready to integrate AI with care, start here:
1. Create an AI Equity Pledge with your team. Make your commitments visible. Keep personal data private. Hold yourselves accountable to the communities you serve.
2. Batch and template repetitive communications. Automate what’s mechanical so you can reinvest that time in genuine community connection and relationship building.
3. Support organizations advocating for AI governance and environmental justice. Technology doesn’t regulate itself. Add your voice to those shaping policy and holding tech companies accountable.
4. Use only what you need. Just as you wouldn’t leave the lights on when leaving a room, apply that same intentionality to every AI interaction. The environmental cost is real.
Balance is the path forward. We can embrace AI’s potential while remaining mindful of its impact. These are the strategies we teach through Construct the Present. We help teams use technology without losing touch with what matters most. Together, we can shape technology to serve liberation, not limit it, and use every saved minute to build a more just, sustainable world.
At Avenue, we believe the path forward is about balance: embracing the transformative potential of AI while remaining deeply mindful of its environmental cost. Just as we would never leave the lights on when leaving a room, we commit to using only what we need and applying that same intentionality to every AI interaction, a commitment we detail in Avenue’s AI Manifesto, which we invite you to read. By making thoughtful choices, asking better questions, and prioritizing sustainability at every step, we can harness AI to power progress that uplifts both people and the planet.
Resources for Equitable AI
- Oregon State Government Artificial Intelligence Advisory Council — In Oregon, Avenue’s home state, this council is crafting a plan with guiding principles emphasizing equity, transparency, and oversight.
- Responsible AI Institute (RAI Institute) — Provides training, assessments, and toolkits to help organizations scale AI in trustworthy ways.
- Western Resource Advocates (WRA) — They’ve published policy reports showing how data centers strain water and energy in the American West and suggest ways to regulate their growth responsibly.
- The Green Grid — An industry nonprofit consortium that pushes for more efficient resource use in data centers (including metrics like water usage effectiveness).
- AI for Nonprofits Resource Hub: https://www.nten.org/learn/resource-hubs/artificial-intelligence
- World Economic Forum: https://www.weforum.org/publications/a-blueprint-for-equity-and-inclusion-in-artificial-intelligence/


