Social Technology Lab is an applied AI lab that builds technology for organizations working on complex societal problems. We bridge the gap between what AI can do and what institutions actually use, helping organizations in climate, democracy, healthcare, and education understand, control, and act on complex data.
We are self-funded and independent. No venture capital. No outside investors. No one to answer to but ourselves and the institutions we serve.
This is a deliberate choice. Independence lets us say no to bad incentives, bad timelines, and bad trade-offs. It means we can choose work based on what matters, not what maximizes revenue.
AI isn't neutral. Neither are we.
Most AI is built to maximize convenience, automation, and, ultimately, dependency. The AI market treats dependency as a feature: proprietary models, opaque pipelines, undocumented decisions, systems no one internally understands, and the quiet assumption that "you'll need us to maintain this."
This is normalized. It's also increasingly unacceptable, politically and institutionally.
Climate organizations, democratic institutions, healthcare systems, and educational bodies face a specific challenge: they need advanced tech capability, but they cannot afford to lose control. They need to understand what they're using, be able to modify it, and take responsibility for it.
Meanwhile, AI capability is advancing faster than most institutions can absorb. The gap between what's possible and what's actually used keeps growing — not because the technology isn't ready, but because no one is translating it into systems these organizations can actually operate.
The market doesn't serve these institutions well. Large consultancies optimize for billable hours. AI vendors optimize for lock-in. Startups optimize for market share and scale. None of them are built to translate cutting-edge AI into systems these institutions can actually own.
STL exists to close that gap. We call it optimizing for institutional sovereignty.
We work with organizations that meet these criteria:
- Climate, democracy, public health, education, social justice. If the work doesn't contribute to a better society, it's not for us.
- Because of public accountability, ethical stakes, or institutional responsibility, they can't accept solutions they don't understand or control.
- They're not looking for a vendor to take the problem away; they want to become capable themselves.
Our primary clients are government agencies, NGOs, research institutions, and mission-driven healthcare and climate organizations. We also selectively work with commercial organizations on public-interest problems — chosen carefully, disclosed honestly.
Our work is built on beliefs that differ from the mainstream:
| What we believe | So we... |
|---|---|
| AI fails at the human-technology boundary | Start with the human context, not technical specs |
| Data problems are framing problems, not just engineering problems | Invest in understanding what to build and how, not just in building it |
| Adoption is as hard as building | Stay through adoption, not just delivery |
| Dependency is not a feature | Build for handover, not lock-in |
| Technology is not neutral | Take responsibility for what we build and for whom |
Five principles that define how we work:
1. We don't chase scale. We stay small enough to do deep work and say no to projects that don't align.
2. We build systems our clients can own, understand, and maintain, even if that's harder than handing them a black box.
3. Technology embeds choices. We don't pretend otherwise. We take responsibility for what we build and for whom.
4. We don't offer plug-and-play solutions. We invest in understanding each client's specific context and problem before building anything.
5. We do work that matters, and we fund it by running a sustainable business, not by relying on grants or donations.