Mark Headd recently published an excellent post on applying the principles of microservice architecture to the infrastructure of civic tech.
I think the idea there is fundamentally correct and quite important. However, one paragraph and image in the post bothered me immediately, and my brain has been gnawing at it ever since. Since a couple of thoughts finally solidified this morning, I wanted to outline them here.
Here’s the paragraph that bothered me:
The centerpiece of this approach is not a government open data portal, but rather a centralized hub for routing requests and controlling access to different microservices – essentially, a reverse proxy. Governments would manage this new civic tech infrastructure by managing access to different resources through this proxy.
The idea of inserting a single government-managed piece of infrastructure to handle all routing and access control seems really problematic to me, purely on an architectural basis, especially given that it would have to span many different local, state and federal governments as well as civic tech companies, non-profits and volunteers. Not only is it potentially a performance bottleneck, it is almost guaranteed to be an innovation bottleneck. In a way, the very idea of a central hub that does all of this itself violates microservice principles.
If we tease out the specific functions that this central connector needs to perform, I think we come up with three key pieces: discovery, routing and access control. Without any claim to having thought through it all systematically, I want to briefly explore how we might address each of these pieces through separate, smaller services.
I’d begin by arguing that we can drop routing from the requirements entirely – as long as endpoints are expressed as standard URLs, we can leave it up to the existing internet infrastructure to handle the routing.
Next, the vast majority of civic tech infrastructure services are publicly available, so they don’t need access control so much as a way to authenticate identity for non-anonymous access, perhaps in order to regulate usage. This is easily accomplished using a rich array of standard authentication services, including OpenID and a large number of OAuth providers like Facebook, GitHub, Google, Microsoft, etc. As a best practice, civic tech endpoints should support several – modern tools make this easy to do. Where needed, true access control for non-public (internal or specially authorized) users requires additional functionality, but arguably it should still be implemented as an external microservice using standard protocols.
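To make this concrete, here is a minimal sketch of what multi-provider token acceptance on a civic tech endpoint might look like. It assumes Node.js with Express and TypeScript, and the provider names, the /transit/stops route and the verifier stubs are all hypothetical placeholders; a real service would delegate verification to each provider's API or to an existing middleware library rather than hand-rolling it.

```typescript
import express, { NextFunction, Request, Response } from "express";

// Each provider gets its own verifier; a real implementation would call the
// provider's token-introspection API or use an off-the-shelf library.
type Verifier = (token: string) => Promise<{ userId: string } | null>;

const verifiers: Record<string, Verifier> = {
  // Placeholder verifiers; they reject everything until wired to a real provider.
  github: async () => null,
  google: async () => null,
};

// Attach an identity when a recognizable bearer token is present; otherwise
// let the request through anonymously, since most civic data is public.
async function authenticate(req: Request, res: Response, next: NextFunction) {
  const [scheme, token] = (req.headers.authorization ?? "").split(" ");
  if (scheme !== "Bearer" || !token) return next();

  for (const verify of Object.values(verifiers)) {
    const identity = await verify(token);
    if (identity) {
      res.locals.user = identity; // downstream handlers can rate-limit per user
      return next();
    }
  }
  res.status(401).send("Unrecognized token");
}

const app = express();
app.use(authenticate);
app.get("/transit/stops", (_req, res) => res.json([]));
app.listen(3000);
```

The point is simply that identity can be layered onto an otherwise public endpoint without any centralized gatekeeper sitting in front of it.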
Finally, my main aha! moment this morning was about endpoint discovery. It occurred to me that we have a rather nice model for this already in the npm software package manager. What is interesting about the model is that it offers a centralized repository for resources without requiring centralized coordination of the contributors of those resources. Although originally developed specifically to deliver Node.js modules, npm has expanded to become a much more general tool for delivering assets needed by software projects.
I suspect we could actually create a functioning civic tech services registry today using an npm repository.
From the standpoint of a developer of a civic tech service that needs to leverage other services, npm makes it easy to list dependencies and have needed artifacts installed automatically, accounting for any version requirements. What is installed might be as simple as a JSON file with the endpoint URL and information about the protocols and standards the endpoint implements. Or it might include software that delivers functionality ranging from simple connection management to analytics to full-on embeddable interactive applications.
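For illustration, the installed artifact for a hypothetical transit endpoint could be as small as a descriptor like this one; every name, URL and field here is made up, since no such convention exists yet:

```json
{
  "endpoint": "https://transit.example.gov/api/stops",
  "protocols": ["REST", "GeoJSON"],
  "standards": ["GTFS"],
  "documentation": "https://transit.example.gov/api/docs"
}
```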
If we also standardize on some set of expected civic tech-related fields in the package file, we could enable quite sophisticated endpoint discovery. We might, for example, query the registry for all endpoints that provide a particular type of data or all endpoints within a specific geographic area. Such queries aren’t necessarily supported by npm right now, but it provides the underlying functionality on which additional tools can be built.
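To sketch what such a query could look like today, the snippet below searches the public npm registry for packages tagged with a hypothetical civic-endpoint keyword and then filters on an assumed civic.coverage field in each package's manifest. The keyword, the field names and the geographic filter are my own assumptions, not an existing convention.

```typescript
// Search the npm registry for packages tagged "civic-endpoint" (a made-up
// keyword), then keep only those whose manifest claims coverage of a given
// US state. The "civic" manifest field is an assumption, not a standard.
async function findEndpoints(state: string) {
  const search = await fetch(
    "https://registry.npmjs.org/-/v1/search?text=keywords:civic-endpoint&size=250"
  ).then((r) => r.json());

  const matches: { name: string; endpoint: string }[] = [];
  for (const { package: pkg } of search.objects) {
    // Pull the full manifest for the latest published version of each hit.
    const manifest = await fetch(
      `https://registry.npmjs.org/${pkg.name}/latest`
    ).then((r) => r.json());
    if (manifest.civic?.coverage?.state === state) {
      matches.push({ name: pkg.name, endpoint: manifest.civic.endpoint });
    }
  }
  return matches;
}

findEndpoints("IL").then((endpoints) => console.log(endpoints));
```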
I’ll end this here since my goal is not so much to argue for the specific solutions I’ve outlined as to spur further conversation and, ideally, help make it possible to realize the vision presented in Mark’s post sooner rather than later.