[supplied title]

what there is and what there could be

I haven't kept close watch on the efforts to regulate tech since leaving my job at a tech-adjacent institution about a year and a half ago. My job had technical aspects, but it wasn't a technical job. It wasn't a policy job either, but the materials I worked with, spanning the history of computing, gave me reason to reflect on the growth of the tech industry and its role in society. And my day-to-day commute brought me face-to-face with the landscape of contemporary tech as represented in the suburbs and exurbs of Slurban America.

When I started looking for a new job two years ago, it wasn't clear to me that I'd stay in libraries and archives. I was more committed to staying near family than to a particular profession. One possibility I considered was going into public policy work, though that would have been more of a long-term goal, possibly requiring additional education. It's not a path I've ended up taking. So in lieu of analyses or recommendations on what to do about tech, I'll just briefly write up my hopes for tech regulation here.

My hope is that the regulatory framework that gets built will look beyond the "what to do about" questions that put the focus on particular companies. Company X may be an enormous company with enormous reach, but "what should we do about Company X?" is a narrow question. Instead, I hope that regulation will be driven by broader concerns about the kinds of work people do in the tech industry (contract and otherwise), and about what it should be like to go online in a society.

What kinds of social (political, commercial) interactions do we want to support? What should be discouraged or prohibited? What promotes a healthy civil society? We can, and should, spend time evaluating specific policies of specific companies, but we shouldn't lose sight of what we would want social media or search or commerce to be like if there weren't already companies dominating these spaces. What are the ways we could get there?

If regulation proceeds according to narrower goals, like simply limiting the size a company could be, the most likely outcome will be a landscape dominated by companies of exactly the maximum size, with a sidebar of litigation about how precisely to determine a company's size for regulatory purposes. If an outcome of regulation is that every social media company must now have an oversight board, that will solidify the positions of only the companies large enough to support oversight boards.

But if regulation is driven by broader principles, then instead of asking "should Company X be broken up?" the question becomes more like "do Company X's practices or business models threaten the fabric of society?" If the answer is yes, then I suppose Company X would have to transform itself or, failing that, break up. It's not the responsibility of regulation to protect a given business model just because it already exists. There's no constitutional or moral right to scale.