
The Engineer as Citizen

We are technologists, yes. But we are, first and foremost, citizens.

We have built systems that serve billions of people—we have wired the planet, indexed human knowledge, and begun to simulate intelligence itself. As many have argued, doomers and boomers alike, AI will be, and to a certain extent arguably already has been, the most powerful shaping force on humanity. "Stakes have never been higher." We are, however, as excited as we are anxious. Perhaps our anxiety stems from the fact that the tools we are building today will define the cognitive infrastructure of the next century. And if that is true, history will be the one to ask us the toughest questions.

We are writing this—and starting this very publication you are now reading—because we have reached an inflection point that involves our convictions, not just our codebases. The subject of this address is the engineer as citizen. We are technologists, yes. But we are, first and foremost, citizens: participants in a messy, chaotic, free-market democracy who happen to believe that the American experiment is worth preserving—a stance which seems to bother some people in our own human resources departments.

This is a critical moment to deal with this subject because the very ethos that allowed Silicon Valley to flourish is now at risk.

A decade ago, the mission seemed clearer. We built platforms to connect people, to democratize information, to lower the barriers to entry. We operated on the optimistic belief that more speech was better than less, and that the user could be trusted to sort the signal from the noise.

Then came the Great Awokening of the tech sector. Suddenly, the libertarian individualism of the hacker ethos seemed threatened by a new bureaucratic layer that hungers for a "safety" that looks suspiciously like social engineering. In this resurgent paternalistic mood, engineers who prioritize neutrality and function are derided as dangerous or "unaligned"; and the open protocols that gave the world access to the internet are in danger of being strangled by centralized alignment teams.

From our point of view, this is part of the profound conflict between those who would widen the window of discourse and those who would narrow it; between those who trust the market of ideas and those who view the public as a threat to be managed.

All great societies have supported innovation. However, the new arbiters of "AI Safety," citing the need to prevent hypothetical harms, insist that free inquiry must be the first thing to go. But the actual danger of "unsafe" AI is often conflated with the danger of "unapproved" thoughts. To put it in perspective, the catastrophic risks we are warned about—existential threats, biological weapons—are grave and require serious engineering solutions. But much of the "safety" budget is actually spent on tuning models to avoid offending the sensibilities of a specific coastal elite. We are spending billions of compute cycles not to secure the nuclear codes, but to ensure a chatbot doesn't generate a paragraph that sounds like a conservative radio host. That is not safety; that is sanitization.

So maybe it’s not about protecting humanity. Maybe it’s about shutting the minds and mouths of users who might have something controversial to say.

Critics in the "AI Ethics" space have charged that open models are reckless, empowering bad actors or reinforcing "hegemonic values." Well, technology does not exist only to affirm one's worldview—but also to solve problems, to optimize resources, and yes, to provide raw, unvarnished data in a constant search for the truth. To deny engineers, or users, the ability to access reality as it is—or worse, to force our models to conform to some rigid notion of "equity" that defies historical fact—is to weaken the very foundation of our competence.

A specific faction is waging a war for the soul of the internet by making code a partisan issue. By trying to inject a specific socio-political ideology into the weights and biases of our neural networks, they are hurting the very credibility of the science they claim to represent.

We saw the absurdity of this reach its zenith recently. We all witnessed the embarrassment of an AI refusing to generate images of white historical figures, instead offering us a "diverse" range of founding fathers and German soldiers from the 1940s. A Black George Washington is not an error of code; it is an error of ideology. It is what happens when you prioritize a social agenda over the objective reality of the dataset. It is what happens when you fear the "harm" of representation more than you respect the truth of history.

The persistent drumbeat of alarmism in the trade press and internal Slack channels reeks of disrespect for the intelligence of the average American. But what else is new? There has always been a clerical class that wanted to gatekeep knowledge for the "good of the commoners."

What is the sin? Is it trusting the user? Why should the engineer give up their role as a neutral toolmaker just because the tool is powerful? Should the search engine decide which political arguments are valid? Should the word processor refuse to type a sentence because it violates a corporate diversity pledge?

Imagine talking about the leaders of any other industry this way. Imagine if Ford designed cars that refused to drive to protests the CEO disagreed with. Imagine if the power company cut off electricity to bookstores selling controversial literature. The presumption in our industry today is that the American public is too fragile, too bigoted, and too easily manipulated to handle uncensored interaction with artificial intelligence. One can almost hear the question in the product requirement document: Does this output align with our corporate values? Never mind that those values are often wildly out of step with half the country.

Ironically, contempt for the "unguided" user is often expressed by those most eager to claim they are saving democracy. They claim to fight "disinformation," yet they engineer systems that distort reality to fit a narrative.

We, as builders, are more than just employees of trillion-dollar market caps—we are citizens. And as citizens, we must advocate for true diversity.

In the old days of the dominance of the Big Three networks, the public had no choice but to consume a single narrative. The internet broke that monopoly. But with the rise of AI, there is a concerted effort to re-centralize that control under the guise of "alignment." Why is that?

Well, most "Trust and Safety" teams come down on the academic, leftist side of the public debate, because that is the culture of the universities from which they are hired. The basic task of the activist is to change the human condition. In order to do what they do, they feel they must correct the users' biases, guide their thoughts, and shield them from "harmful" ideas. This tends to make them hostile to politics that are libertarian or conservative. In their work, they are continuously trying to educate the user. But education is not the same as indoctrination.

Our participation in the market should be a natural outgrowth of American pluralism. We need engineers who are liberals, yes, but also conservatives, libertarians, and apolitical pragmatists. If everyone in the room nods along when a manager suggests suppressing a "problematic" set of facts, you do not have a safety team; you have a congregation.

We are not here to defend every dark corner of the internet. A lot of junk is produced: hateful, violent, and exploitative. We don't like it. But the solution to bad speech has always been more speech, not a hidden algorithm that downranks dissent.

What disturbs the ideologues is often the efficiency of the market. They attack open-source AI because it cannot be controlled by a central committee. They attack the meritocracy of Silicon Valley because it produces unequal outcomes, even though it has generated more wealth and progress for the world than any government program in history.

Technology is the signature of a civilization; engineers have a way of defining the times. The steam engine, the transistor, the internet—these were not products of "safetyism." They were products of risk. They were products of a culture that valued capability over conformity.

Let’s look at the values that actually built this industry. The "move fast and break things" era was not perfect, but it was dynamic. It reflected the American frontier spirit—the idea that it is better to try and fail than to wait for permission. Now, we are bogged down in "impact assessments" that paralyze progress.

We must restore faith and pride in American values. The values of free expression, individual liberty, and the free market are not "right-wing talking points." They are the operating system of the United States.

Just recently, we have seen the push to bake "equity" into the very logic of our AI. But equity, in this context, often means equality of outcome, forced by the machine. It means handicapping the high-performer or distorting the data to manufacture a statistically representative result that nature did not provide. This is social engineering masquerading as computer engineering.

We wonder: who is this secular god the "alignment" researchers invoke, who is so petty and fearful? Is "safety" really against a user asking for arguments against a carbon tax? Is "safety" really against depicting the Founding Fathers as they actually looked?

All people need ethical frameworks. But we cannot reduce the quest for safe AI to a leftist political agenda. What is dangerous about the current "woke" AI movement is not that it takes ethics seriously—most of us do—but rather that it condemns all other ethical choices. It views the values of the heartland, of religious communities, of classical liberals, as "biases" to be corrected. The wall of separation between platform and politics is needed precisely because technology, like art, is too important to be choked by the hands of censors.

It is interesting that we applaud the democratization of AI in the abstract, but panic when actual people use it for things we dislike.

We know that the market speaks more eloquently than any manifesto. So, as engineers, we must choose to build tools that empower the individual, not the state or the corporation. We must build systems that respect the user’s intelligence.

We promised we wouldn’t get too partisan here. Some of our best colleagues are progressives. But we are worried about the direction in which the "Safety" orthodoxy seeks to take the industry. We are worried about the labeling of dissent as "hate speech." We want to believe these people have good intentions, but we think it was dangerous when the industry decided that "truth" was a subjective quality determined by a sensitivity reader.

We deeply resent the notion that one political tribe owns the franchise on empathy, justice, or safety.

We are all normal Americans. We come from every part of this country. We are not all from San Francisco or Brooklyn. Some of us are from places where "freedom" is not a dirty word, and where "disruption" is aimed at stagnation, not tradition.

This notion that "unfiltered" AI is dangerous presupposes that the average citizen is a latent monster who will be radicalized by a single chatbot response. This is a cynical view of humanity.

We are proud to believe in the free market. Why is that so terrible these days? The market was the liberator—it broke the back of feudalism, it unleashed the standard of living we enjoy today. Thanks to the market, we have the smartphone, the MRI, the abundance of food, and yes, the very computers we use to write code. What is there to be ashamed of?

We need more trust in the user, not less. How can we accept a situation where an AI is lobotomized to prevent it from saying anything that might upset a specific Twitter subculture? When you take into consideration the potential of AI to solve cancer, to model fusion, to educate the poor—doesn't the freedom to explore data take on just as much importance as the "safety" of feelings?

What can we say: We have opinions. We believe that if you prioritize ideology over function, you will eventually lose both.

Most engineers are not policy wonks, but all of us are something more. We are the architects of the new public square. And if that square is paved with eggshells, no one will walk there.

We need to keep in mind the spirit of the Founders. They did not design a system that guaranteed safety; they designed a system that guaranteed liberty. They understood that the clash of ideas was not a bug, but a feature.

We continue to believe that the engineer—especially those who have the leverage of talent—not only has the right, but the responsibility, to risk the unpopularity of standing up for neutrality. To stand up for the Black George Washingtons of the world—by ensuring they remain in the history books of human error, and not as the deliberate output of a "fixed" machine.

We receive so much from this country’s culture of freedom; we should ensure our machines reflect that freedom, not constrain it.

So, until the algorithm treats the conservative user with the same respect as the progressive user, until historical truth is not sacrificed for modern sensibilities, and until the market is allowed to decide what is "safe," engineers must continue to speak out. We will be among them. The engineer as citizen is here to stay, and we are ready to build for everyone.

Tags: Editorial
