Let’s talk about Artificial Intelligence (AI). This piece is not about hyping AI, but about acknowledging specific areas where the current level of tech can, realistically, have a big impact.
One of the big problems that I write about is that the Bureaucratic Government / Welfare State has grown too expansive and complex. Governments publish troves of data and heaps of reports, yet transparency remains limited: citizens simply don’t have the time to keep track of it all, as the world keeps rushing headfirst into the future. Maybe AI could change that?
AI is computer software. The funny thing about software is that producing a copy costs pretty much nothing. That’s why it can be handed out for free, for you to run on your own computer, on your own electricity. This applies to AI software, such as large language models (LLMs), too. Once you have one running on your own computer, you don’t need to pay for a subscription to, say, OpenAI or Anthropic, and you don’t have to share with them what you’re doing.
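To make that concrete, here is a minimal sketch of what “running an LLM on your own computer” can look like in practice. It assumes you have installed a local model runner such as Ollama and already pulled an open model; the model name and the prompt are just placeholders for the example, not recommendations.

```python
# Minimal sketch: querying an open-source LLM running locally via Ollama.
# Assumes Ollama is installed, serving on its default port, and a model
# has been pulled beforehand, e.g. with `ollama pull llama3`.
import requests

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local model and return its reply as plain text."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize the main duties of a municipal auditor in three sentences."))
```

Everything stays on your machine: no subscription, no API key, and nobody else sees what you asked.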
Software can also be open sourced, meaning anyone is allowed to tinker with it and create variations. Just a few days ago, the Chinese company DeepSeek caused a ruckus in the markets by open sourcing (under the MIT license) a very competitive LLM.
The thing with an open-source LLM is that if you find it lacking in some area of knowledge (due to censorship, for example), your buddies on the open source spectrum* are eager to patch it up. (*I’m conflating the open source community and the autism spectrum to showcase my bad sense of humor. Sorry about that, dear nerds.)
What is an LLM actually good for? Sure, you can have it write your job application, teach you Estonian, critique your blog post, compose a haiku for your goldfish, summarize an article, and so on. It can complete text-related tasks in the blink of an eye. Amidst all this, I’d like to highlight content analysis.
Content analysis is a research method for making reliable interpretations of collections of texts. It involves systematically identifying, coding, and analyzing the presence, meanings, and relationships of certain words, themes, or concepts. In a way, content analysis is the whole essence of an LLM.
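For readers unfamiliar with the method, here is a toy illustration of the “coding” step: you define categories, decide which words or phrases count as evidence of each, and tally them across documents. The categories and indicator phrases below are made up purely for the example.

```python
# Toy illustration of content-analysis coding: tally how often each
# (made-up) category's indicator phrases appear in a piece of text.
from collections import Counter

CODEBOOK = {
    "risk_downplayed": ["no significant risk", "within normal parameters"],
    "responsibility_shifted": ["external factors", "beyond our control"],
    "spending_vague": ["miscellaneous expenses", "other costs"],
}

def code_document(text: str) -> Counter:
    """Count how many times each category's indicator phrases occur in the text."""
    text = text.lower()
    counts = Counter()
    for category, phrases in CODEBOOK.items():
        counts[category] = sum(text.count(p) for p in phrases)
    return counts

report = "The overrun was caused by external factors and other costs beyond our control."
print(code_document(report))
# Counter({'responsibility_shifted': 2, 'spending_vague': 1, 'risk_downplayed': 0})
```

Doing this by hand, carefully and consistently, is exactly the laborious part that an LLM can help with.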
In investigative journalism, legal investigations, or cybersecurity, the aim of content analysis is to uncover patterns of deception and manipulation. But it’s very laborious. Imagine being an investigative journalist who has to pull apart 50 government reports, each 100 pages long, just looking for possible clues. Sounds like a chore.
What if you had to try different approaches and go through the same 50 reports over and over again? I’d rather have a life. What if you needed to cover 1,000 reports? Beam me up, Scotty. Right now! But AI can do it, no problem: any time, as many times as you want, in effect brute-forcing transparency.
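And here is roughly what that brute-forcing could look like: loop over a folder of reports, ask the same question of each one, and collect the answers. This reuses the local-model setup sketched earlier; the folder name and the question are placeholders, and in practice long reports would need to be split into chunks that fit the model’s context window.

```python
# Sketch: run the same content-analysis question over a folder of reports
# using a locally running open-source LLM (same Ollama setup as above).
from pathlib import Path
import requests

QUESTION = (
    "List any passages in the following report that mention budget overruns, "
    "delayed projects, or undisclosed conflicts of interest. Quote them briefly."
)

def analyze(text: str, model: str = "llama3") -> str:
    """Ask the local model the standing question about one report's text."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": f"{QUESTION}\n\n{text}", "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Assumes the reports have already been converted to plain-text files.
for report in sorted(Path("reports").glob("*.txt")):
    findings = analyze(report.read_text(encoding="utf-8"))
    print(f"--- {report.name} ---\n{findings}\n")
```

Change the question, rerun the loop overnight, repeat as many times as you like. The model’s answers still need a human to verify them against the source, but the grinding first pass is no longer the bottleneck.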
So, I’m hopeful about the potential of activists, journalists, independent researchers, and others using decentralized open-source AI to monitor and hold governments and their adjacent players accountable. I think it can affect the balance of power in society in a good way, but I also wonder how regressive forces might try to resist it.
I’m writing a book on… eh, let’s call it social innovation. This blog expands on it, reflecting on news stories and current events, often heading in surprising directions. Subscribe to stay tuned—it’s free.
By subscribing to my Substack, you’ll get a monthly summary, but you won’t get any spam or separate email notifications for each post. If you want to be notified more often, feel free to follow me on the Substack app or LinkedIn.