A Red Team is a non-hostile group or individual that challenges a system or an organization by simulating a potential adversary. This is done in order to uncover security vulnerabilities that are often not apparent to organizational insiders. The term comes from the Cold War practice of US forces taking a Soviet, i.e. “red”, perspective. The Soviets did the same thing, calling the group that took the US perspective a “blue team”.
Red Team exercises can be applied in both the cyber and the physical realm (and in the cyber-physical arena), but this article will deal with physical Red Teaming.
The biggest problem I’ve encountered in the field of Red Teaming is that the concept means different things to different organizations (and even to different people within the same organization). It can be implemented in many different ways, for a number of different reasons and with a number of different goals. The diversity of views on this topic can, I can tell you from experience, open you up to some pretty embarrassing situations if everyone isn’t on the same page.
I’ve been involved in private sector Red Teaming in one way or another for around thirteen years. Red Team exercises have long been a part of HighCom’s field supervision program. I started out as an operator in this program and then ran it for a number of years. I’ve also applied Red Teaming to organizations other than HighCom and have managed various types of Red Team-type activities in the covert protection and surveillance detection fields (both in training and in operations).
With this in mind, I’m going to try to explain a few things about Red Teaming, but I want to make it clear that I am in no way claiming to be an objective authority on the topic. These are my own insights and recommendations, based on my own experiences.
One Size/Type Doesn’t Fit All
Red Teaming can be done for the purposes of testing, training, drilling, evaluating or all of the above. It can be applied to facilities, campuses, residences, event venues, travel routes and more. It can be applied in situations where no security personnel are present or in situations where entire security teams are being tested. It can be done when security forces are ready for it or in cases when they’re not; in highly controlled scenarios or in more open-ended environments. Red Team exercises can test anything from outer-perimeter visual control to inner-circle access control. They can take the form of anything from Red-Zone hostile surveillance role-play (mobile or static) to inner-perimeter penetration testing.
It’s also worth considering that Red Team exercises can be applied as a singular one-time snapshot test, as a set of tests that gives a less anecdotal picture, or as ongoing tests that measure change over time.
Each case has its own goals, methods and measurements of success. There’s no one size, one goal, one method, one measurement of success, to fit all.
What Is The Goal?
As mentioned, Red Team exercises can have various goals, and methods for reaching those goals. Here are three different types (there are others):
- Surveillance Mapping—gaining a better understanding of the area around the property (the Red Zone): locating potential vulnerabilities and avenues of attack in the property/area in question, and identifying the surveillance vantage points which can be used to observe said vulnerabilities.
- Outer Circle testing—checking to see if security operators are: a) maintaining deterrence by appearance, b) detecting people observing the property (by paying attention to surveillance vantage points and to individuals spending time in the area), and c) exposing/acknowledging the individuals they’ve detected.
- Penetration testing—checking to see how easy/difficult it is to penetrate into the property in question, and possibly into sensitive locations within the property (server room, CEO office, Board room, etc.). Keep in mind that penetration usually only follows a stage of external surveillance, which, as mentioned above, can be assessed with Outer Circle testing.
Adversaries Don’t Surveil and Penetrate For No Reason
I’ve seen too many Red Team exercises that supposedly “succeed” in revealing security gaps, but that don’t make much sense from the perspective of a realistic adversary who weighs their risk/benefit ratio. To mimic an adversary that doesn’t have well-defined goals balanced against the risk of failure is to present an unrealistic picture, which makes for a less than adequate Red Team exercise.
So what if a Red Team member managed to get into the building? For an actual adversary, getting into the building is a tool, not the goal. If they can’t see what they potentially have to gain by getting themselves in there, and see no viable avenue of escape and exploitation afterwards, they’re not very likely to enter in the first place. It’s just not worth the risk.
So, if the exercise is supposed to mimic reality, but the Red Team’s goal is only to get into the building, it might not be a very good exercise because it doesn’t mimic what actual adversaries are more likely to do.
Follow a Hostile Planning Process
Adversaries (or anyone else for that matter) don’t just take risks for no good reason. The way an adversary most often works is by following some form of hostile planning. Even if this process is short and not very professional, at the very least, it’ll be based on information leading to a decision leading to execution leading to escape and exploitation.
Even if the Red Team exercise only tests one part of the process, in order to make it more realistic, this part should at least theoretically fit into some hostile planning process.
A Red Team exercise that doesn’t take these factors into account isn’t going to be very realistic:
- Actual hostile surveillance isn’t just conducted for fun, it’s a necessary intelligence gathering tool for a hostile planning process.
- Criminals don’t just penetrate into facilities or surveil individuals for fun, they do it in order to attain certain goals.
- Hostile goals need to be balanced against hostile risks—hostiles aren’t trying to fail or get caught.
- Hostile goals need to be balanced against hostile costs and resources—hostiles aren’t going to invest infinite amounts of time, effort and money on undefined goals, or an unwise return on investment.
Any Security Measure Can Be Breached
We all know there are no impregnable facilities or foolproof security programs. A skilled enough Red Team operator can almost always figure out a way to penetrate through defenses. The question is: is the Red Team operator just trying to score points and show how skilled they are, or are they trying to uncover gaps in a security program that are more likely to be exploited by actual adversaries?
Red Team operators (and also investigative reporters) often reach unfair conclusions about gaps in security, based on anecdotal cases in which they managed to get into a facility or breach security. Sure, if you’re a calm, friendly, well-dressed, intelligent, well-spoken individual who has extensive prior knowledge of the facility in question and utilizes this knowledge to establish a solid cover story, then yes, it’s likely that you’ll be able to get in. But hostile individuals rarely possess even one of those attributes, let alone all of them. Showing off your own personal capabilities with some ‘gotcha’ penetration result doesn’t really serve your client because it doesn’t represent a realistic adversary.
I myself have penetrated into every type of facility, campus, special event and private home. I’ve “visited” a neighboring Middle Eastern country without a passport—twice. I’ve talked myself out of Egyptian police custody after being arrested for ammunition smuggling (long story…). I’ve received a year-long visa with a work permit in Japan after establishing an assumed Canadian identity (more on this in my book). And on top of all that, I also have extensive, in-depth knowledge and experience in private security operations.
So, yes, if I put enough time and effort into it, I can probably get into your facility or “breach” your security. But what would be the point of it? To show that Ami Toben can penetrate your defenses? If it doesn’t give you a realistic representation of what an actual, likely adversary might do (an adversary with criminal goals), then it’s not worth much. (Though admittedly, it can be quite fun…)
In order to keep things realistic—and therefore useful—try to fit your Red Team exercises to more realistic adversary goals, resources and capabilities.
Red Team Exercises for Training
Though Red Team exercises will always be in the form of an adversarial test, they can also be utilized for the purposes of training. Challenging security operators with Red Team scenarios on a regular—yet unpredictable—basis can cause them to perform better. This is why hostile surveillance role-play is so important during surveillance detection training. It’s one thing to talk about how to detect a hostile observer, it’s quite another to experience it (even if it’s just a simulation).
HighCom has been implementing this strategy with success for over fifteen years, with each facility being tested once or twice a week. Now, one of the questions that comes up regarding this idea is what value there is in Red Team-testing security operators who already recognize the Red Team members.
The answer to this question has to do with goal definition. When your goal is to test-train security operators on their attention to people in the environment—people in surveillance vantage points and people trying to get into the property—then there’s still value even when the Red Team members are recognized. This is because a security operator can only recognize a Red Team member if they detect them, and they can only detect them if they’re paying attention to the relevant locations in the first place.
The follow-up action that a security operator performs when they recognize a Red Team member (usually in the form of a wave) isn’t as important as the attention we want them to pay to people in the environment. Remember, in order to recognize someone, you have to detect them first. Even if all this strategy gives you is security operators who pay more attention and detect people in the environment, then you’re doing quite well.
We always follow up the Red Team portion of this training with an immediate face-to-face evaluation of how the operator did. And this is followed with on-the-job field training on how to improve visual control, detection and response if the operator’s performance was lacking. The results of each check-up are put into a dedicated evaluation form, and these forms factor into the operator’s employee evaluation program.
I can tell you from experience that by implementing this type of test-train-evaluate trifecta over time, I’ve gotten security operators to the point where no one can get within two blocks of the facility they’re securing without getting detected, usually within thirty seconds. I would often get immediately detected as I was still approaching the facility in my car. Security operators that have a level of visual control that enables them to immediately detect and recognize a Red Team member as soon as they get even near the area in question are not likely to miss an actual hostile observer (who will have to spend a considerable amount of time observing their target).
Remember, attacks and/or penetrations don’t come out of nowhere. They usually depend on initial external observation in order to collect information (hostile surveillance). Security operators that can detect and expose people who are doing this can nip the hostile planning process in the bud, and prevent it from reaching the later stage of penetration and/or attack.
Important Mistakes to Avoid
Losing control over the situation.
Good examples of this are cases where the Red Team exercise includes placing a suspicious item that security operators need to detect. Leaving an item on the property (no matter how well it’s marked as harmless/for training purposes only) can backfire if it gets discovered by a non-security employee, visitor or passerby. This can also happen if it’s the Red Team member that’s detected placing a suspicious item, observing the property or trying to penetrate. People who don’t realize what’s going on can even call the police in some cases.
Though this can also be viewed as a positive result, I can tell you that individuals who didn’t realize it was harmless, to say nothing of police officers that respond to a call, are not going to appreciate it. And neither will the client.
Taking it too far.
I’ve seen cases where Red Team members have used ladders to break into facility windows at night or dressed up as armed Islamist militants to test security response. I wish it were needless to say that this kind of stuff can really backfire (sometimes dangerously so) and is a terrible idea. I know of cases where the police have been called by people who didn’t know it was a Red Team exercise, and one case where the facility actually went into lockdown mode.
Do not go overboard with your role-play and always maintain control over a Red Team exercise. Make no assumptions about what’s going to happen and make sure you can quickly identify yourself and explain what’s going on if the need arises.
Learn more about this subject—and many others—in my master class on Hostile Activity Prevention.
Utilizing Israeli know-how and delivered by me, Ami Toben, this online course teaches actionable, time-tested methods of prevention, detection and disruption of hostile attacks.