It has always been inevitable that regulation would come to the internet just as it has come to every major communications platform that preceded it. Once the Web became critical infrastructure and started down the path of consolidation to just a handful of large platforms, it was only a matter of time.

What is less clear is exactly what that regulation should look like and how much the platforms will be trusted to self-regulate rather than having an outside party set the rules. Here are the most important points to keep in mind as technology and government parley over the coming months.

The Takeaway
Technology has given us the ability for the first time to police nuanced issues around speech and information. We have abdicated decision-making to the big technology platforms, which are in an unwinnable position—forced to try to resolve big, complicated social questions. How do we move forward from here?

Consolidation and machine learning have opened a Pandora’s box of historically unimaginable speech and memory issues that society is ill-equipped to address.

It is easy for Americans to agree that freedom of speech is broadly good. There are, however, an enormous number of nuanced issues surrounding speech, information and ownership about which there is little understanding and less agreement.

How do we value information freedom versus information security? What rights do others have over my right to self-expression? If I say something, who has the right to repeat it, and in what ways? What rights do I have to my own memory if that memory includes other people? Who owns data or insights derived partially from “my” data and mixed with data provided by others? In a world where I use tools to generate content, what rights can the tools claim? What liability is involved in not catching signals from speech that could save lives? What is unacceptable speech? What rights do people have versus the state to private communication? What can and can’t people agree to? How sacred is contract law in dealings between people and companies?

The list goes on and on. Where once these were all academic questions, we now have the technology, tools, and consolidation to police, and therefore regulate, speech at this level.

We see this all around us today.

A seemingly tiny question like whether or not Mark Zuckerberg should be allowed to “un-send” messages sent over Facebook Messenger is just a taste of an enormous issue with deep social impact—what rights do people have to “take back” digitally shared content, and how do those rights trade against a person’s right to remember conversations?

A seemingly good idea like allowing people to upload images of their bodies to help platforms block the sharing of “revenge porn” unlocks a tool for moderation that will almost certainly then be stretched to more challenging uses.

The question of whether you as a user of a platform should be allowed to take your “data” beyond the walls of one service to another may seem reasonably easy to decide. At least, until you get into the details of how exactly you define “your” data versus that of others, and versus information derived from “your” data.

Then there is the even bigger issue of contract law. How sacred is it? The Europeans have decided that there are some terms of service that consumers don’t have the freedom as individuals to agree to. Do we want to go down that fundamentally un-American path?

We now enjoy new technological superpowers. But as a society, we have to decide how we want those powers used. And we come from a place where almost no one is informed enough to debate these deep issues, let alone come to sensible agreement.

Because we can’t agree as a society on how to handle these issues, we have abdicated decision-making to the technology platforms. They can’t possibly satisfy all parties in the U.S., let alone globally, and are set up to fail.

On one level I feel for the leadership teams at technology companies who have been left to try to sort out these issues on their own. They are in an unwinnable position.

Every affirmative policy or decision they try to enact about fundamental speech, safety or freedom comes with massive loss and liability and will enrage a large group of people who disagree with the implications.

Americans as a group couldn’t agree on these rules—how could a global population? And I strongly believe that the issues are far too complicated for our representative democracy to sort out, especially in this era. And so we leave the decisions to the private sector, knowing full well that any decision they make will be seen by a large group of people as both very important and fundamentally wrong.

Just a few years ago a large chorus agitated for data openness, fearing Facebook’s powerful walled garden. A few years later the small steps that were made toward that end are ridiculed as unsafe.

It feels like a game of rule-making hot potato. The leaders are set up to fail.

Compounding their unwinnable position, the technology platforms have tried to placate all parties and in the process created an overly complicated and nearly incomprehensible set of rules

On Friday, Jessica wrote an excellent column about how technology companies have made a big policy mistake by trying to be loved rather than trusted. I couldn’t agree more, and I think you can see the hallmarks of this not just in their PR strategy but also in their product decisions.

Big platforms have attempted to placate all parties over these maddeningly complex issues by adding more rules, conditions, monitoring and settings—instead of taking simple and trustable “stands” on key issues.

Their desire to placate regulators, be “liked” by users, and have an answer for each discrete issue or question as it comes up leads to a conflicted and inconsistent overall system.

If you are willing to be disliked by some, you can instead build simple, logical models that people can choose to use.

If we decide to take decision-making power on these issues out of the hands of the technology platforms and come up with formal laws, we will likely squash innovation and cement the power of the existing companies

It should be lost on no one that the regulation of big internet companies would in many ways be an enormous gift to the incumbents.

History tells us that this is what regulatory capture looks like. Big companies run up against big issues, the government steps in, and rules get created that the biggest existing players can afford to live by—but which startups cannot.

This most recently happened around hedge funds. The chorus from fund managers went quickly from “please don’t regulate us” to “we need regulation.” And since that time the big funds have been cemented in place, able to afford the accounting and compliance costs. Meanwhile, it has become much harder and more expensive to start a fund.

Let’s pretend that, for the sake of security, the government decides to force platforms to adopt banking-like “know your customer” (KYC) standards—where, to sign up and serve someone, you must collect state and federal IDs and reams of paperwork.

Apple, Google, and Facebook might not like this. It would cost them users and also disenfranchise all sorts of populations, like illegal immigrants, whom they might want to serve. But they have the business models and reach to comply (in fact, Facebook volunteered just in the last few days that it would do this for pages and advertisers). Startups don’t have those luxuries.

What will happen? Fewer startups? New business for the big companies validating identities for startups? The results are hard to predict, but it is clear that a law like this will only help the existing winners.

If you worry today that big companies have too much of an advantage on key next-generation technology like AI because of the scale of their data sets, just wait until regulation makes it dramatically harder for the next data sets to be built.

Any regulation that gets passed down needs to be comprehensible, simple, enforceable and self-consistent—which is a challenging if not impossible equilibrium to maintain

Just because this is all very complicated doesn’t mean I don’t think we should do anything.

Over a year ago, right after President Trump won the election, I proposed in a column that Facebook needed to make an open and searchable database of all the ads run on its platform, along with their targeting criteria, so that watchdog organizations could understand what messages were being delivered to people. About 10 months later, Facebook announced that it would do just that.

Requiring this would be a good law that perhaps should be passed at a national level. If you pay money to deliver a message to other people, what the message said and who you said it to should be public record. The law is easy to write and understand, easy to comply with, and highly enforceable. It might ever so slightly advantage incumbents who didn’t need to abide by the rule on their way up, and who can afford the extra reporting requirements. But the burden is small and the advantages are clear.

What I worry about, however, is more of the GDPR-style regulations that have a host of opposite problems—creating regulatory frameworks that are hard to monitor and enforce, and have potentially very strange outcomes for not just technology but how we think about memory and speech.

Oversight into algorithms, controls around what people can or can’t say or remember, content blocking or removal, and murky moral guidelines are tempting trees to bark up. But when you get down to the details of how things like this would be encoded and regulated, the tendency is toward rules that are hard or impossible to understand and enforce at scale.

Further, any rules set in stone will have negative consequences for some. For example, KYC regulations will undeniably disenfranchise people. The tendency of laws will be to balloon and become increasingly conditional and complex as elected officials look to solve for distasteful corner cases—as the privacy frameworks for large technology platforms have to date. This will end in disaster.

Any regulation that gets passed needs to be forward compatible with new technology on the horizon, and acknowledge that the U.S. is at best a regional player in a global game—both of which are hard pills to swallow

We are a small part of the world’s population. The technology of the internet today isn’t fixed and will keep evolving. If new rules make our internet companies globally uncompetitive, then the U.S. will not be the place where global technology platforms are based.

More challenging still, if rules are set that make the current framework of centralized services less competitive, then decentralized systems, blockchain-based solutions and the like will simply grow more quickly. And those are far harder to regulate, if regulating them is possible at all.

There are already cracks in the internet as a global force. China has largely opted out of the global internet and Europe has made moves in its own direction, if less radically. What about us and the rest of the world?

I certainly believe a few well-constructed rules that apply only to Americans and American politics could work. But I fear that the moves we make—tipping toward closing down our open internet borders just as the current administration is backing away from global free trade—could rapidly devolve into a full fracturing of the internet along national lines.

This would be a tragedy in my mind, and would lead to a net loss of American reach and power in the world.

Crypto as a Template for Defensive Design

So, contemplating all of these challenges, what can we do to move forward? How should platforms self-regulate, and what types of rules could governments look to enforce?

After a long period of Silicon Valley over-optimism, I think we as a culture need to embrace defensive system design, with a clear understanding of what rules become natively enforceable and what things aren’t possible to control.

We can’t rely upon concepts like “best efforts” in a globalized world. We can’t rely upon paper contracts and agreements that live outside of code to save us.

Instead, we need to bake the core principles of what we believe and how systems should work deeply into the technology itself, and acknowledge that where we can’t encode the rules directly into the system, we probably can’t reasonably regulate—or can do so only at a cost that fundamentally alters the system itself.

Cryptocurrencies in general, and bitcoin in particular, are a good recent example of how this can be done. The crypto ecosystem starts with the belief that its systems will constantly be under assault by sophisticated actors, and that every exploit that can be tried will be tried. This limits crypto, but it also makes it fundamentally powerful, and it forces the developers of crypto products to focus intently on the extreme cases that will “break” their systems.

We can’t make regulations that, for instance, limit the data that people or systems choose to remember or store. Those types of regulations—while possibly emotionally appealing in some cases—are fundamentally unenforceable.

For the same reason, we also can’t write laws that sandbox data, saying that it can only be used by certain actors for certain purposes. Once data exists, it is extremely hard to control where it goes, whose hands it ends up in, and what it is used for.

What we can do is build defensive technical systems that don’t leak data by accident, and add regulation that works at the interface between technical systems and real-world goods like attention and money. I can contemplate rules that force transparency in paid speech, limit the distribution of (but don’t ban) anonymous speech, and give consumers insight into what companies know about them. These all seem like potential wins.

But in the end, my hope is that regardless of whether we end up with formal regulation or more self-regulation, we center on very simple and unambiguously enforceable rules. If a third grader can’t learn and understand a rule, we have done it wrong.

Facebook is an easy target, but the company isn’t really the main event. The real issue is that our core identity and digital infrastructure is antiquated

This week Mark Zuckerberg will be in Washington to take a public lashing from senators as the most visible representative of the technology industry facing these challenges.

It makes sense.

Facebook is the biggest brand associated in people’s minds with these challenging questions and has been out front for a host of reasons, including an obviously strained relationship with the press, which has forgotten its once-fraught relationship with Google in the early days of the web.

But it is worth remembering in all this that while Facebook is an easy target, it is by no means the most complicated case in the question of data ownership, security and the regulation of bits.

The far more challenging cases are those that collect massive data sets in non-transparent ways, without consumers knowing. That includes the ISPs, the Equifaxes, and the finance companies that build massive treasure troves of consumer data without direct relationships with their customers, without clear terms of service, and without records of what they know and how they share it.

The real issue is that our core societal technology for identity, security, privacy, and data management, built around old, simple, and completely unsophisticated technologies like Social Security numbers and street addresses, has been totally outclassed by new digital technology. Overhauling these core systems is what really should be on trial.

This conversation may open with Facebook on the Hill, but it will not by any means close with it.


Sam is currently a General Partner at Slow Ventures and an intern at The Information. He is also the co-founder of Fin Analytics. He was formerly a vice president of product management at Facebook from 2010 to 2014. Prior to joining Facebook, Sam founded drop.io, a file-sharing platform that was acquired by Facebook in 2010. Before drop.io, Sam was an associate at Bain and Company. In his spare time Sam enjoys skiing and kite-surfing. He is married to Jessica Lessin, founder of The Information.