Try not to take Zero Trust too literally
Alex Fields
The world of modern medicine uses the term “minimal effective dose,” or MED. The idea is to find the optimal balance between efficacy and toxicity, or benefit and cost, of a given intervention. For example, what is the minimum dose or concentration of a drug required to produce the desired biological response? And at what point are we just multiplying the risk of side effects without additional benefit?
It is worth noting that there are always side effects in pharmacology—listen to the end of any drug commercial. Therefore, in so-called “Conventional Medicine,” much of the expertise that present-day doctors bring to the table is a sort of “balancing act”—weighing known risks and rewards for their patients when putting them on various prescription drugs.
Anyway, I submit that this same balancing act applies to cybersecurity.
On Efficacy vs. Toxicity
I love consulting work. As a consultant, I am never “sitting still,” and I get to experience a lot of different corporate cultures because I work with different types of customers every single day. And there is one thing that I have observed over and over again: organizations with “stricter” IT departments tend to have more cultural problems. Specifically, I have observed that excessive surveillance and control tends to breed more revolt, and sometimes more toxicity, within an environment.
Let me describe a recent example—a “day in the life” so to speak. On this particular engagement, I was brought in by a member of the executive management team with the specific request to help “sort out” some of the issues they were having, specifically with Microsoft 365 and their collaboration experience. This was a mid-sized organization (hundreds but not thousands of seats), so I met with the internal IT team, and I conducted separate interviews with key stakeholders from other departments within the organization.
According to the internal IT team, everything was perfectly fine (huh—wouldn’t you know it). The real problem here, I was told, wasn’t the technology. The problem was with those stupid end-users. After my initial conversation with IT, I didn’t have a whole lot to go on. I was just told some version of “They just don’t understand what they’re talking about,” and besides, “we have to keep them safe, because Zero Trust,” or something.
And what did the end-users say? Well, to put it lightly, they were frustrated. They felt that their complaints were not being heard—falling on deaf ears. I will share with you some paraphrased quotes that I remember:
- “I am using WhatsApp groups because Teams is basically useless to me.”
- “I never had this many problems at my last job, and we used all the same software, so it must be something those IT dolts are doing wrong.”
- “I think it’s possible Microsoft just sucks but it’s the only thing IT knows, and they can’t think outside that box to help us get our work done any other way.”
- “I never get actual answers to my questions, so I don’t even bother asking IT anymore.”
When I returned to IT and confronted them with some of these sentiments, they reacted… poorly. And of course, they had an answer or excuse for everything:
- “What the hell are they even talking about? We have a process to request a Team, even one that allows external users (!), but it has to go through the proper channels. If they would just use the process we gave them in the first place, there would be no problem here… Now we have to remind everyone again that using unapproved apps goes against the Acceptable Use policy.”
- “Well, if someone said there were ‘no problems’ at their former employer, then their IT department was probably doing it all wrong! I bet it was like the Wild West and everyone had local admin privileges and could install anything they wanted to. That’s just not how we work here… we’re a ‘Zero Trust’ department.”
- “If they are really so frustrated with Microsoft, how is that our fault? We never said they couldn’t go shopping for another app if there is something they don’t like, but we need to be part of that conversation—we’ve had issues before when departments just adopted some new web app without asking us first, and here we are today, supporting that application anyway.”
- “Why are you taking their side?” (I hadn’t taken any sides to my knowledge when this comment came out). “Don’t assume they asked meaningful questions to begin with, or that they could even comprehend a good answer if they heard one.”
Very quickly I determined that I had accidentally stepped into a minefield of toxicity. Mind you, all I was doing was talking to people, asking questions, and then bringing the (repressed?) responses back out into the open.
Anyway, through more probing I learned there wasn’t really a process being followed when adopting new safeguards or security controls. It was more like, “Oh hey, I just found out we can turn off the ability for people to use sharing links,” or “Let’s go ahead and make it so nobody can invite guests into a team without asking permission first.” Therefore, if Microsoft presented a dial or button that could maybe make something safer, the IT team felt that they just had to press it. After all, “Zero Trust,” and all that.
Sometimes an email went out alerting people to some change or another—for example, “Attention: there’s a ‘New Teams Request’ form at this link.” Oh, and if you read the whole email, you’ll learn it is best to complete this request from a laptop, and not a mobile device—they were “still having problems” with displaying this form properly on mobile devices.
And this was the norm for the organizational culture. It was just like, BAM! Suddenly stuff works differently than it did before. “You’re safer, and you’re welcome. Sure, things are harder now, but that’s life!”
Ultimately, of course, I did side with the end users. The real magic of consulting is sometimes taking sides while maintaining respect among all the parties involved, which can be… challenging. You see, in this case one of the core problems was that IT imposed too many security and compliance controls too quickly and without proper communication or managing expectations. End users naturally became frustrated, and ultimately went elsewhere to get their work done. Really it was very simple.
But let’s go just one level deeper into this problem. See, IT assumed up front that end users were “stupid,” irrational people who needed to be saved (not only from the scary attackers out on the internet, but also from themselves). The problem with this thinking is that when you assume people are stupid, you create the very conditions for them to behave “stupidly” in the end. And that might mean conducting business on WhatsApp instead of Teams.
So how do we fix this?
The first thing I had to do was explain to the IT folks, and Management, that they had fallen into the Pit of End User Despair. Climbing out of it means establishing and then maintaining a certain level of mutual trust and respect with your user base. The fact that I walked into a bunch of name-calling (on both sides) was proof that their strategy, to date, had failed.
But this wasn’t just an IT problem. The Management team also had to shoulder the blame. After all, IT is not supposed to make unilateral decisions about cybersecurity and risk tolerance for the business. That should be Management’s job. But they wanted to ignore anything that seemed even remotely technical, and just “leave it to IT.” This is an example of the tail wagging the dog.
In a way, I get it: the Execs wanted to stay out of the toxic minefield I described stepping into earlier, so they would tiptoe around the problem instead of addressing it head-on. I mean even calling me in initially was in some ways manifesting this same pattern: ultimately, they admitted their hope was that an outside, neutral consultant like me could look through their settings and say, “Oh this button right here will make all the problems go away. A simple misconfiguration!” But it wasn’t really like that.
So, once I was able to name the real problem, which was more of a people and process problem, we could start working on the solution. At this point, I took them back to “Square One” (which is how I came up with the name of our peer group by the way). Since they did not have a CISO, I suggested we start an “Office of the CISO” instead, which was chaired by a representative from the Management team, and attended as well by a rep from the IT department. Since they were not following a formal cybersecurity framework yet, I introduced them to NIST and the CIS Controls.*
I reminded everyone that the goal of this new endeavor would not be to impose as many restrictions as possible, but rather to address the risks that matter the most, and to “right-size” their approach given the business objectives, budget, and so on. Pursuing every safeguard to the greatest extent would come with a much higher price tag, and potentially more friction for the end users, which is a hole that we were already going to have to dig ourselves out of.
Therefore, IT’s role in this new office was only to inform: explain the risks and potential mitigations, with benefits and drawbacks, and that’s about it. Management ultimately owns the decisions.
In the end, as a result of the first couple of “Office of the CISO” meetings, we reversed many of the changes IT had made in the name of “Zero Trust.” Management ultimately agreed with my viewpoint, which is that restricting a user’s capacity to work with simple invitations, sharing links and so on, was a trade-off that came with more harm than benefit. We still had some strong security controls in place (e.g., Conditional Access), but the user could also “move around” a bit more freely in the ecosystem provided to them by their employer.
Don’t take Zero Trust too literally
During this engagement, I can’t tell you how many times I heard the term “Zero Trust” from the lips of the members of the IT department. It was their “bottom line” justification for every restrictive control they had put into place. At one point, in a moment of frustration, I just blurted out, “You’re taking Zero Trust too literally.”
I think my tone shocked those present at the time. I suspect they were also genuinely surprised to hear someone challenging the very notion of Zero Trust—it’s almost as though I had uttered an unspeakable blasphemy. I calmed myself down to explain the reasoning further.
You see, all of us are dependent on trust to survive. Without it, we could not have any semblance of commerce whatsoever. I wouldn’t even feel safe enough to leave my house and walk down the street if we truly lived in a world without trust. Therefore, “Zero Trust” taken literally would mean having no communications or business dealings with anyone. Ever. Nothing safer than that, am I right?
So, let’s just agree that there always has to be some basis of trust on which to operate at all. Which is why I had exclaimed, accurately, that these folks had taken the term (which is more of a marketing thing really) a bit too literally. Instead, I suggested, let’s examine what we actually mean when we say, “Zero Trust,” which is more like, “Never take trust for granted.” Microsoft uses three pillars or concepts in their literature on the subject:
- Verify explicitly: Don’t assume trust, but do verify it (e.g., Conditional Access)
- Least privilege: Only grant enough access to do the job needed (but, and this is key, in a way that still makes the job easy to do).
- Assume breach: Regardless of what other protections or safeguards you have in place already, pretend there could be someone lurking in your environment who doesn’t belong there right now.
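To make the first pillar a little more concrete: “verify explicitly” usually shows up in Microsoft 365 as a Conditional Access policy. Here is a minimal sketch of the JSON body such a policy takes in the Microsoft Graph API (the `conditionalAccessPolicy` resource); the helper function name is my own, and a real deployment would of course scope and pilot the policy deliberately rather than copy this verbatim.

```python
import json

# Sketch: build a Microsoft Graph Conditional Access policy body that
# "verifies explicitly" by requiring MFA for all users and all apps.
# The JSON shape follows Graph's conditionalAccessPolicy resource;
# the function name build_mfa_policy is illustrative only.
def build_mfa_policy(display_name: str) -> dict:
    return {
        "displayName": display_name,
        # Start in report-only mode so you can observe the impact on
        # users before enforcing anything (managing expectations!).
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }

policy = build_mfa_policy("Require MFA for all users")
print(json.dumps(policy, indent=2))
# This body would be POSTed to
#   https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
# using an app token with Policy.ReadWrite.ConditionalAccess.
```

Note the report-only state: that one setting is the difference between the “BAM! Stuff works differently now” rollout style described above and a change users actually get warned about.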
I submit that we can apply these principles without disrupting users to the point that they feel compelled to revolt and run away to WhatsApp (or wherever).
This means you approach risk strategically: you address the risks that are most important to your business, and at the same time, you accept some risks in the spirit of finding that right balance for your particular needs and situation. Like, maybe it’s okay for employees to invite a guest to co-author on a document or collaborate in a Team without jumping through a bunch of internal hoops first, but you still require those outside guests to authenticate (i.e., no ‘anonymous’ entries in your audit log).
I think this balancing act is a little bit like applying the concept of “Minimal Effective Dose” to cybersecurity. There are some things you have to do for your “health” just to maintain a basic level of cyber hygiene, but remember that not all medicine is created equal. Other cybersecurity activities come with (potentially severe) side effects. At some point, the toxicity of these accumulated side effects may negatively impact the business in both perceptible and imperceptible ways.
*Soon I will publish a new course/kit all about building an MSP practice based on the CIS Controls (v8). Stay tuned!