The first job

Every security researcher has a year zero. Mine was 1994, at a small university on the edge of the North York Moors, where I learned that the most useful thing about a network is how much it will tell you when nobody is watching.
University College Scarborough occupied a cluster of buildings that had clearly been through several previous lives before settling, somewhat uncertainly, into higher education. In 1994 it was not yet part of the University of Hull. It was its own small thing, with its own small network, and I was the person responsible for keeping that network running. I was twenty-two. I had no idea what I was doing. I learned fast.

The web, in 1994, was not yet the web most people know. Mosaic was new. Netscape did not yet exist. The internet was largely text: email, Usenet, Gopher, FTP. The browser that would change everything had only just arrived, and most people had not noticed it yet. What we had was a TCP/IP network connecting a cluster of Unix workstations, a few Windows PCs, and some printers, and the job was to keep it all speaking to each other with minimal drama.

What early network administration taught you

What I did not appreciate at the time, and only came to understand years later, was how much that environment taught simply by being small enough to hold in your head. I could trace every cable. I knew every IP address. When something stopped working, there were not many places to look. You developed, almost without realising it, a mental model of the whole system — where data was supposed to flow, where it actually flowed, and the gap between the two.

That gap is where security lives. Not in products, not in policies, but in the difference between what you think your network does and what it actually does. In 1994 I had no vocabulary for this. Nobody used the phrase "threat model" in a university IT office. But the habit of building a complete mental model of a system — small enough that you can hold it, detailed enough that anomalies register — is one I have never lost.

The tools were primitive by modern standards. Ping and traceroute. netstat. A very early version of tcpdump that I had compiled from source because no packaged version existed for the OS we were running. Log files that wrote to local disk and had to be read manually because nothing aggregated them. You learned to read logs the way you learn to read any language: slowly at first, then faster, until the patterns became almost subconscious.
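The manual log reading described above amounts to a simple loop: split each line into its fields, tally what you see, notice when the tallies change. A minimal sketch of that habit, using invented syslog-style lines (the hostnames, accounts, and format here are illustrative, not taken from the original network):

```python
from collections import Counter

# Hypothetical syslog-style lines, as might sit in a local log file
# circa 1994. All names and addresses are invented for illustration.
LOG_LINES = [
    "Mar  3 09:12:01 moor1 login: user jsmith logged in on ttyp0",
    "Mar  3 09:15:44 moor1 ftpd: connection from 192.168.4.20",
    "Mar  3 23:58:10 moor2 login: user adavis logged in on ttyp2",
    "Mar  4 00:02:31 moor2 ftpd: connection from 192.168.4.20",
]

def events_by_host(lines):
    """Count log events per host. In this syslog-style format the
    hostname is the fourth whitespace-separated field."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) >= 4:
            counts[fields[3]] += 1
    return counts

counts = events_by_host(LOG_LINES)
print(counts)  # two events apiece for moor1 and moor2 in this sample
```

Nothing clever happens here, which is the point: with no aggregation layer, the administrator was the aggregation layer, and a tally like this was built by eye, line after line.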

The thing it got right

What Scarborough got right, inadvertently, was scale. There were perhaps four hundred users on that network. When one of them did something unusual, it was visible. A workstation suddenly generating traffic at midnight. A login from an account that had been dormant for months. A file transfer that looked wrong for its type. These things registered because the baseline was known.

The hardest part of security work in large organisations is that the baseline is never fully known. There is too much. The signal drowns in the noise. At Scarborough there was enough signal and little enough noise that you could read the difference, and reading that difference, day after day, built an instinct that no amount of classroom instruction would have given me.
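The detection logic implied above — a known baseline, and a flag for anything outside it — can be sketched in a few lines. The baseline and events below are invented for illustration; the two anomaly types mirror the ones in the text: an account active at an unusual hour, and an account with no baseline at all.

```python
# Baseline: for each account, the set of hours (0-23) at which logins
# were routinely seen over preceding months. Invented sample data.
BASELINE_HOURS = {
    "jsmith": {9, 10, 11, 14, 15, 16},
    "adavis": {10, 11, 12, 13},
}

# New login events as (account, hour-of-day) pairs.
EVENTS = [
    ("jsmith", 10),   # within jsmith's usual hours: not flagged
    ("adavis", 0),    # midnight login, outside the baseline
    ("mbrown", 11),   # no baseline at all: dormant or unknown account
]

def flag_anomalies(events, baseline):
    """Return events outside the known baseline: either an unusual
    hour for a known account, or an account never seen before."""
    flagged = []
    for account, hour in events:
        usual = baseline.get(account)
        if usual is None or hour not in usual:
            flagged.append((account, hour))
    return flagged

print(flag_anomalies(EVENTS, BASELINE_HOURS))
# [('adavis', 0), ('mbrown', 11)]
```

The sketch works precisely because the baseline dictionary is small and complete — the four-hundred-user condition. At enterprise scale, `BASELINE_HOURS` is never fully known, and that is the whole problem.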

I stayed for just over a year before moving to something larger and louder and more expensive. But the instincts formed in that first year — the model-building, the anomaly-detection, the habit of reading logs before drawing conclusions — those came with me. They still come with me. The tools have changed beyond recognition. The instincts have not.

Every PING starts somewhere. Mine started in a small office in Scarborough, watching a network I could hold entirely in my head, learning to read the gap between what it was supposed to do and what it actually did. That gap, it turns out, is the whole subject.