Interview: Hacker OPSEC with The Grugq

The Grugq is a world-renowned information security researcher with 15 years of industry experience. Grugq started his career at a Fortune 100 company, before transitioning to @stake, where he was forced to resign for publishing a Phrack article on anti-forensics. Since then the Grugq has presented on anti-forensics at dozens of international security conferences, as well as giving talks on numerous other security topics. As an independent information security consultant the Grugq has performed engagements for a wide range of customers, from startups to enterprises and the public sector. He has worked as a professional penetration tester, a developer, and a full time security researcher. The Grugq's research has always been heavily biased towards the counterintelligence aspects of information security. His research has been referenced in books, papers, magazines, and newspapers. Currently an independent researcher, the Grugq is actively engaged in exploring the intersection of traditional tradecraft and the hacker skillset, learning the techniques that covert organisations use to operate clandestinely and applying them to the Internet. You can follow him on Twitter at @thegrugq.

John Little: You blog and have given conference presentations on Hacker OPSEC. You started doing this before the recent NSA revelations (and the general hysteria surrounding intelligence collection) but you were already warning hackers that states had superseded them as the internet's apex predator. In just a couple of years we've moved from the seeming invincibility of LulzSec, to high profile busts, and now onto serious concerns being raised about every aspect of the internet's architecture, security models, and tools. Rock solid OPSEC is a refuge but maintaining it for long periods of time under significant pressure is very difficult. The deck is obviously stacked against anyone trying to evade state surveillance or prosecution so where do freedom fighters and those with less noble intentions go from here?

The Grugq: You raise a number of interesting points. I’ll ramble on about them in a moment, but before that I’d like to clarify for your readers a bit about where I am coming from. Firstly, I am not a “privacy advocate”, I am an information security researcher. My career in information security has been mostly focused around denial and deception at the technical level.

Recently, however, I became aware that this “fetishizing the technology” approach is simply not effective in the real world. So I turned to studying clandestine skills used in espionage and by illicit groups, such as narcotics cartels and terrorist groups. The tradecraft of these clandestine organizations is what I am trying to extract, inject with hacker growth hormone, and then teach to those who need real security: journalists; executives traveling to adversarial environments; silly kids making stupid life altering mistakes, etc.

The media has actually expressed a lot of interest in improving their security posture, and I am engaged in helping some journalists develop good OPSEC habits. Or at least, learn what those habits would be, so they have some idea of what to aspire to. There is a strange intransigence with some who reject improved security with the line: "but we're not criminals! Why do we need this?" Well, the only answer I have is that OPSEC is prophylactic: you might not need it now, but when you do, you can't activate it retroactively. As I phrased it in my "The Ten Hack Commandments" — be proactively paranoid, it doesn't work retroactively.

So, that’s how I’ve arrived at hacker tradecraft, and where I’m trying to take it. On to the issues you’ve raised about good OPSEC and living a clandestine life.

The stress of the clandestine lifestyle is something that people tend to gloss over all too easily. This is an observation that comes up frequently in the literature about terrorist groups, espionage agents, and revolutionaries. There are a lot of compound issues which combine to make this sort of “good OPSEC” lifestyle very unhealthy for the human mind:

1. Isolation
2. Compartmentation of the ego
3. Paranoia related stress

Isolation provides the strongest security, and all good security involves a significant investment in maintaining a low profile, "going underground", "off the grid", etc. This means that the clandestine operative has reduced visibility over the social and political landscape, and their telemetry will suffer. Degraded telemetry means they will be unable to self-correct and reorient to what is happening around them. If they are part of a cell, a group of operatives in communal isolation, they will tend to self-reinforce their ideology, effectively radicalizing and distancing themselves further from the mainstream norms of society. This additional isolation can create a feedback loop.

If the operative isn't living a completely isolated clandestine lifestyle in their Unabomber cabin, they will have to isolate parts of their individual selves to compartment the different aspects of their lives. There will be their normal public life, the one face they show to the world, and alongside it a sharded ego for their clandestine life. Maintaining strict compartmentation of the mind is stressful; the sharded individual will be a sum less than the total of the parts.

As if that wasn't enough, there is the constant fear of discovery, that the clandestine cover will be stripped away by the adversary. This leaves the operative constantly fretting about the small details of each clandestine operational activity. Coupled with the compartmentalization of the self, the operative also has to stress about each non-operational activity: will this seemingly innocent action be the trigger that brings it all crashing down?

Seriously, maintaining a strong security posture for prolonged periods of time is an extremely stressful and difficult act. Operatives working for the intelligence agencies have a significantly easier time of it than those on the other side of the protection of the state, e.g. their agents, hackers, terrorists, and narcos. The "legal" operatives have peers that they can confide in and unwind with thanks to the protections of the nation state. The true clandestine agents must be guarded with their peers, the public, and the adversary. Any peer might be an informant, either now or in the future. Opening up and being friendly with their peers is part of what led to the unraveling of the LulzSec hacker group.

This leaves people who need to operate clandestinely and use the internet with a real problem. How can you be on the Internet and isolated? Well, compartmentation is the only answer, but it is expensive and fragile, even a single error or mistake can destroy the whole thing. This is why I’ve advocated that people who seek to operate clandestinely combine deception, that is, multiple covers, for their compartmented activities. It is possible to embed tripwires into the cover identities and be alerted when they’re blown.
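A minimal sketch of what such a tripwire might look like in practice, assuming a canary-style design where each cover identity is seeded with a unique, unguessable URL token; any request for that token means someone is pulling on that cover. The token values, cover names, port, and alerting mechanism below are all invented for illustration, not a description of any specific tool:

```python
# Hypothetical cover-identity tripwire: each compartmented cover is
# seeded with one unguessable URL token (e.g. embedded in a document
# or account profile unique to that cover). A hit on the token means
# that cover is being investigated, so the listener raises an alert.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

TRIPWIRES = {
    "/t/9f3a1c77": "cover-alpha",   # invented tokens and cover names
    "/t/b52e08d1": "cover-bravo",
}

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

class TripwireHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cover = TRIPWIRES.get(self.path)
        if cover:
            # Alert: this cover identity is being probed.
            logging.warning("TRIPWIRE: %s probed from %s (UA: %s)",
                            cover, self.client_address[0],
                            self.headers.get("User-Agent", "?"))
        # Always answer with an innocuous 404 so the prober learns nothing.
        self.send_error(404)

    def log_message(self, *args):
        pass  # suppress default per-request logging; we log selectively

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TripwireHandler).serve_forever()
```

The same idea extends to unique email aliases or credentials seeded one per cover: because each token maps to exactly one compartment, an alert also tells you which cover has been blown.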

My thinking these days is that an operative must minimize the time that they are engaged in a clandestine operation. Something like the theory of special operations, the period of vulnerability only grows the longer the operation goes on. Clandestine operational activity must be compartmented, it must be planned, it must be short in duration, and it must be rehearsed (or at least, composed of habitual actions). It is possible to do, and I believe that even non-experts can pull it off, but it must be limited in scope and duration. Prolonged exposure to underground living is caustic to the soul.

John Little: There is a significant amount of paranoia circulating in hacker and activist communities right now. How much of it is justified? More importantly, how should people go about conducting a realistic personal risk assessment before they start piling on layer after layer of OPSEC? How can they strike a balance between the tedium and isolation of strict OPSEC and security that is "good enough"?

The Grugq: There is certainly a great deal of paranoia, some of it justified, some of it unjustified, and some of it misdirected. I think it is important to remember that paranoia is unhealthy, it is paralyzing, it is divisive, and it is harmful to operational effectiveness. The goal to aim for is caution. Allowing the adversary to inflict paranoia on you, or your group, gives them an easy psychological operation "win". So let's drop the paranoia and figure out what security precautions we must take in order to operate safely and effectively.

As you bring up, the core of effective security is performing a risk assessment, deciding what information is most important to protect, and then developing mitigation strategies to safeguard that information. There are books and manuals that go into this in great depth, so I won't spend a lot of time on the details.

A risk assessment should focus on the most high impact items first. To determine this, you list your adversaries and group them by intent and capability. So the NSA would have a very high capability, but probably has a low intent of targeting you. Then you make a list of information about your secrets, what you are trying to protect, and group that based on the negative impact it would have if it were in the hands of an opponent. The most damaging information must be protected from both the most likely and the most capable adversaries.
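As a toy illustration of that triage, here is a minimal sketch that scores invented adversaries by capability and intent, scores secrets by impact, and ranks the pairings worth mitigating first. All of the names and numbers are made up for the example:

```python
# Toy risk-assessment triage: likelihood ~ capability * intent,
# risk ~ likelihood * impact. Scores are invented for illustration.
adversaries = {
    "NSA":            {"capability": 10, "intent": 2},
    "local police":   {"capability": 4,  "intent": 8},
    "doxxing trolls": {"capability": 2,  "intent": 9},
}

secrets = {                      # impact if exposed, on a 0-10 scale
    "real identity":        9,
    "physical location":    8,
    "operational activity": 7,
}

# Score every adversary/secret pairing and rank the worst first.
pairs = [
    (adv["capability"] * adv["intent"] * impact, name, secret)
    for name, adv in adversaries.items()
    for secret, impact in secrets.items()
]

for risk, name, secret in sorted(pairs, reverse=True):
    print(f"risk={risk:4d}  protect '{secret}' against {name}")
```

The ranking makes the point concrete: a moderately capable but highly motivated adversary can out-rank a far more capable one with little intent to target you.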

Generally speaking, if you’re engaged in a clandestine activity that you want to protect, the core information to secure is:

1. Your identity
2. Your clandestine activity
3. Your association with the activity

So let's take the example of the Dread Pirate Roberts, who's been in the news recently after he got arrested. His adversaries were highly capable, including a wide range of law enforcement officials from across the globe. They were highly motivated, because DPR and his site were very high profile. So you have high capability, and high intent. Not looking good so far.

The information that was most important was his personal real world identity, followed by his location. Protecting that information would require:

1. Robust compartmentation
2. Reducing his exposure to the most capable adversaries (e.g. leave the USA)
3. A strong disinformation campaign
4. Limiting his time in "the dragonworld" (to use J. Bell's term for the underground)

For most people engaged in a clandestine activity this list is probably what they will want to follow. The exact mitigation enacted for each component in the list is case dependent. As we discussed earlier, and as you’ve said, we need to find a good balance between an aggressive security posture and living a rewarding life.

Remember, the goal is to reduce the quantity and the quality of information available to the adversary.

John Little: So a point which both of us comment on with some regularity is the fact that security is rooted in behavior rather than technology. That’s always been true to some extent but never more than now. Tools are suspect, almost across the board. And a lot of assumptions about security have to be tossed aside. But one thing is certain, hackers adapt to the adversary. Terrorists do this well too. An attacker who can successfully parse all this and adapt is going to be a very significant threat. How can states counter the advanced threats? How can they counter hackers who know how to manage OPSEC and technical security to evade detection?

The Grugq: HUMINT. More of it.

The role of SIGINT in intelligence has basically been this weird bubble, starting around WWII when the love affair with SIGINT began, and lasting until recently, when some SIGINT capabilities are starting to go dark. SIGINT is much more attractive than HUMINT. Signals don't lie. They don't forget. They don't show up late to meetings, or provide intelligence information that is deliberately deceptive. SIGINT is the heroin of intelligence collection. The whole world got hooked on it when they discovered it, and it has had a very good run… it will probably continue to be useful for decades more, but really… the real utility of SIGINT will start to diminish now. It has to. The amount of encryption being deployed means that many mass collection capabilities will start to go dark. I, of course, am in total favour of this. I think that the privacy and protection of the entire Internet are more important than the ability of the US government to model the "chatter" between everyone using the Internet. The reduced security that the US government has tried (and succeeded) to force on the entire world makes all of us less safe against any adversary.

SIGINT is really the sort of intelligence collection technique that needs to lose its prominence in the pantheon of intelligence gods. It is very easy for a serious adversary to defeat: basic tradecraft from the days of Allen Dulles will work (leave the phone behind, have the meeting while taking a walk). This tradecraft technique is described by Dulles, in 50 year old KGB manuals, and by Hizbollah operatives last year. The only way to catch people who are capable of any sort of OPSEC / tradecraft is via: a) mistakes that they make (it is very easy for amateurs to make mistakes), or b) HUMINT. Spies catch spies, as the saying goes. It might be updated to: spies catch clandestine operatives.

Historically, the value of HUMINT has been very hit and miss, but those "hits" are extremely valuable. The major successes of the Cold War were almost all the result of human beings who became spies for the opposition: Ames, Hanssen, Walker, Howard, Tolkachev, etc. There are myriad cases with terrorist groups as well: informants are the best weapon against them. Relying on SIGINT is essentially relying on the adversary (terrorist groups) having poor tradecraft and terrible counterintelligence practices. This is simply not the case, at least not with sophisticated dangerous groups.

Double down on HUMINT and scale back SIGINT. SIGINT can be evaded, but HUMINT, essentially exploiting trust relationships, will always bite you in the ass.

John Little: Hackers are going to have to evolve in the same direction though aren’t they? Technology isn’t their salvation from an OPSEC perspective, in fact it is really the weakest link in their security model, so they will have to fully embrace good old-fashioned tradecraft and deception to avoid detection. Do you see an appreciation of that in the hacking community? It seems like a lot of big name hackers are still making fairly simple OPSEC mistakes.

The Grugq: Exactly, this is really the understanding that needs to sink in: technology alone will not save you. Hacker culture, almost by definition, is technology obsessed. We fetishize technology and gadgets, and this leads us to the deep-seated belief that if we just use the right tool, our problems will be solved. This mindset is fundamentally wrong. At best, I would call it misguided, but really I believe that most of the time it is actually counterproductive.

Trust is the weakest link in the security chain, it is what will get you in the most trouble. This goes double for trusting in technology (even when, as Bruce Schneier says, you "trust the math"). Tech is not the path to security. Security comes from the way that you live your life, not the tools. The tools are simply enablers. They're utilities. OPSEC is a practice.

Expecting the tools to provide security for you is like buying a set of weights and then sitting around waiting for your fitness to improve. The fallacy that technology will provide the solution has to be seen for what it is, a false promise. There is nothing that will protect secrets better than not telling them to people!

Good OPSEC is founded on the same basic principles that have governed clandestine activities since the dawn of time. Hackers might be new, but good hackers require the same set of skills as the second oldest profession. Good OPSEC is timeless, and it stems from the application of the principles of clandestine operation, using caution and common sense.

The “73 rules of spycraft” by Allen Dulles was written before the Internet, before hacker culture (even phreaker culture) existed. I believe it is one of the most valuable guides available to understanding how to implement OPSEC. (As an interesting aside, harking back to one of my previous points, Dulles recommends taking vacations to get away from the stress of “work”.)

There are a lot of very public hackers who exhibit terrible security practices. Many of them are techno fetishists rather than espionage geeks, consequently they fail to understand how limited their knowledge is. It's the Dunning–Kruger effect at full tilt. They don't do the research on their opposition and don't know what sort of techniques will be used against them. By the time they figure it out, they are usually just an opportunity for the rest of us to practice Lessons Learned analysis. Of course the great tragedy is that many of the hacker community suffer from hubris that prevents them from actually learning from others' failures.

A friend of mine paraphrased Brian Snow (formerly of the NSA): "our security comes not from our expertise, but from the sufferance of our opposition". As soon as the adversary is aware of the existence of secrets worth discovering, and has the resources available to pursue them, hackers rapidly learn how good their OPSEC is.

John Little: I’ve always been amazed at the very public profiles of some hackers, especially where conferences are concerned. Granted, most are legitimate security researchers but there are also many in the community who occupy a grey area that is guaranteed to draw attention from intelligence or law enforcement agencies. Are hackers largely underestimating the skill with which intelligence agencies can penetrate, encircle, and absorb aspects of their community? Are we in for significant changes in the relationship between IC/LE and hackers, how hackers view themselves from a security standpoint, and how hackers engage each other?

The Grugq: Yes, very much so. There is a growing awareness of the altered threat landscape, and the need for an improved security posture. For decades the hacker community has been myopically focused on SIGINT threats, the sorts of technical attacks that have technical solutions. The HUMINT threat has been misunderstood, or ignored completely. That is changing as the hacker community is starting to learn and practice counterintelligence.

It is a difficult transition though, as some core counterintelligence principles run directly counter to the hacker ethos. There are a lot of factors at play, but one of the important ones is that hacker culture is very much a research culture. There is a great deal of knowledge exchange that goes on rather freely within various segments of the community. The problem, of course, is that the trading of information, which is so central to hacker culture, is the antithesis of a strong security posture. Many hackers realize this, so they only share with trusted friends, who then only share with their trusted friends, who then… and then suddenly everyone is on lists and someone is going to jail.
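That transitive-trust failure is easy to model: treat "who shares with whom" as a directed graph, and a secret's exposure is everyone reachable from its source. The sketch below, with invented names, shows how a single informant anywhere in the reachable set compromises the originator:

```python
# Toy model of the transitive-trust problem: each hacker shares only
# with "trusted friends", but the secret propagates along the whole
# reachable graph, so one informant anywhere in that set exposes the
# original source. All names are invented for the sketch.
from collections import deque

trusts = {                       # who shares with whom
    "alice": ["bob", "carol"],
    "bob":   ["dave"],
    "carol": ["eve"],            # eve is an informant
    "dave":  [],
    "eve":   [],
}
informants = {"eve"}

def exposure(source):
    """Everyone who eventually learns a secret originating at `source`."""
    seen, queue = {source}, deque([source])
    while queue:
        for friend in trusts[queue.popleft()]:
            if friend not in seen:
                seen.add(friend)
                queue.append(friend)
    return seen

reached = exposure("alice")
print(f"secret reaches {len(reached)} people: {sorted(reached)}")
print("compromised: informant in reach" if reached & informants
      else "safe (for now)")
```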

Security conferences are important events for hackers where they disseminate their research and findings, and socialize. This makes these events very target rich environments for intelligence agencies looking to build dossiers on hackers. They can see who is socializing with whom, attempt to recruit people, elicit information on capabilities, install malware on computers, collect intel from computers, and so on. That hackers would expose themselves to these activities seems very counterproductive for robust security. What gives?

The hacker community has a slightly different set of moral and ethical guidelines than mainstream society, which leads to problems with the authorities. Broadly speaking, few hackers view breaking into a system as unethical or morally wrong. Damaging the system, stealing information, or otherwise abusing the system is wrong. Simply accessing it is a challenge. The police, of course, view things differently: an illegal act is an illegal act.

For hackers the secret knowledge that they discover from active research is something to be proud of, and so we're very excited to brag about our findings, activities or capabilities. This information is treated as something that will be kept within the community, bound by the FrieNDA. Of course, this is all based on trust, which is a very dangerous foundation for any security system. As Dulles says, the second greatest vice is vanity, the third is drink. Security conferences are not the places to avoid those vices!

So there is certainly this dynamic of wanting to brag about our discoveries from active research, but at the same time the tension of "what will happen if this leaks?". These days we know what will happen: overzealous law enforcement and prosecution (weev, Aaron Swartz, Stephen Watt, Dan Cuthbert, etc.). The authorities view hackers as modern day witches, something to be feared and destroyed. It is unfortunate for the hacker community in many ways. Intelligent people who could contribute to mainstream society have their lives destroyed. So the repercussions of what are generally harmless activities can be devastating and life altering. Unfortunately, the protections that hackers turn to tend to be technological, but the problem is humans.

The hacker community is easy prey for law enforcement and the intelligence community. Very few hackers are savvy enough to spot a recruitment pitch, or to understand that what they think is amusing others view as criminal. I think this is starting to change. These days there is a lot less discussion about illegal hacking of systems (whether for monetary gain or not), and more about how to protect against the massive Internet surveillance that has been made public.

In this, I think, the hacker community and the general public are finding a lot of common cause against the LE/IC. There is a lot of good that will come out of this realization that the technology of privacy is actually important and should be ubiquitous, and easy to use. The default should be secure. Of course, as we know, this won’t help that much if someone is going around making basic OPSEC errors. So strong privacy protections for everyone will make the job of the LE/IC a bit harder, but it will also make everyone safer. I think that is a fair trade off.

Similarly, I think a lot of hackers would be quite happy to help the LE/IC community with technology support and ideas. The problem is that the relationship is a difficult one to establish. The IC is a black hole, sucking in information and returning nothing. I don't know how there can be meaningful engagement between the two communities, which I believe is a tremendous shame. There is a lot that can be learned on both sides, and I would love for the IC to contribute back. Law enforcement doesn't interest me that much. Personally, my interest in LE begins and ends with studying their tools, techniques, and procedures for counterintelligence purposes. Something that, historically at least, few other hackers actually do. That is changing.

Hackers are learning to tighten up their security posture, they are learning about the tools, techniques, and procedures that get used against them, and they are learning how to protect themselves. Of course, the preponderance of criminal activity is committed in places where lax enforcement of computer crime laws allows blackhats to operate inside "protected territory". In the long term, this is an extremely dangerous situation for those guys, of course, because without an adversarial environment they won't learn how to operate securely. When the rules change, they will be caught out, completely unprepared.

The intelligence agencies and law enforcement departments have decades of organizational history and knowledge. The individual members can display wide ranges of skill and competence, but the resources and core knowledge of the organization dwarf what any individual hacker has available. Many of the skills that a hacker needs to learn, his clandestine tradecraft and OPSEC, are the sort of skills that organizations are excellent at developing and disseminating. These are not very good skill-sets for an individual to learn through trial and error, because those errors have significant negative consequences. An organization can afford to lose people as it learns how to deal with the adversary; but an individual cannot afford to make a similar sacrifice — after all, who would benefit from your negative example?

The skills that hackers do have, the highly technical capabilities they can bring to the game, are not useful against an adversary whose primary skill is manipulating other people. Knowing how to configure a firewall, use Tor, encrypt everything, etc. isn't going to do much good if you attend a conference without a highly tuned, functioning spook-dar and a working knowledge of anti-elicitation techniques. The hackers are hopelessly outclassed at this game. Hell, the majority of them don't even know that they're playing!

Times are changing though, and hackers are starting to learn: OPSEC will get you through times of no crypto better than crypto will get you through times of no OPSEC.

Life in the Cold – Discussing the Psychology of Spying with Former Mossad Officer Michael Ross

Michael Ross was born in Canada and served as a soldier in a combat unit of the Israel Defence Forces prior to being recruited as a "combatant" (a term designating a deep-cover operative tasked with working in hostile milieus) in Israel's legendary secret intelligence service, the Mossad. In his 13-year career with the Mossad, Ross was also a case officer in Africa and South East Asia for three years, and was the Mossad's counterterrorism liaison officer to the CIA and FBI for two-and-a-half years. Ross is a published writer and commentator on Near Eastern affairs, intelligence and terrorism. He is the author of The Volunteer: The Incredible True Story of an Israeli Spy on the Trail of International Terrorists. You can follow him on Twitter at @mrossletters. John Little tweets at @blogsofwar.

John Little: John le Carre's The Spy Who Came in from the Cold is a damning and deeply cynical take on the intelligence profession and government's use of intelligence and intelligence operatives. It is an uncomfortable and exaggerated (but not entirely untrue) look at the difficult human dimension of this business that will always be relevant even as technological sources of intelligence continue to advance. In a world where relationships are often built on an inherent dishonesty, where empathy for the source is secondary to achieving one's goal (or non-existent), and where success may also mean that lives are damaged or lost in the process, how does an intelligence officer succeed and walk away relatively undamaged? Is that even possible?

Michael Ross: I think the great achievement of "The Spy Who Came in from the Cold" is that it lifted the Bondian veil and revealed that spies aren't all suave Aston-Martin driving sex addicts who gamble at high end casinos but what Le Carre's anti-hero Alec Leamas describes as "a bunch of seedy squalid bastards like me." While I think he is being a bit harsh in his assessment, I believe the underlying point he is making in this part of the book (and the brilliant subsequent film with Richard Burton) is that people searching for some deeper, altruistic motive behind the actions of their intelligence services will be readily disabused of these notions when confronted with the reality of the profession. Le Carre posited that intelligence services are the sub-conscious of the nation they serve and when examined as such, you see that he has also revealed another hard truth about this milieu. I would only add to Le Carre's observation by saying that intelligence services are also the dissociative aspect of a nation's sub-conscious. Policy makers have a tendency to only ask questions about methods when things go awry.

A spy's job is to meet the expectations set by his nation's national security agenda (and in specific instances I include economic security under this umbrella) and part of this includes targeting people as sources of human intelligence who will assist you in meeting these expectations. From the dock-worker in Tartous to the network administrator for a European telecom provider, they all have to be spotted, assessed, developed, recruited, and handled by a spy in person. This involves forming a bond and workable relationship, but for obvious reasons, these relationships can only go so far. There can also be a great deal of warmth and empathy in these relationships that is often misinterpreted by the source (I heard of more than one case where a female source fell in love with her case officer), but it can never be reciprocated to the degree that it interferes with the primary objective of the relationship. A HUMINT case officer who lacks empathy and is unable to make some kind of bond with his source will never achieve the full potential of the relationship.

Things do go wrong from time to time and sources get caught, and in our area of operation, this often means torture and death. I never saw a case officer remain unscathed by such an experience and I think one of the great fears of practitioners is to lose a source. Some case officers are less moved by relationships with their sources than others but in the end, it's a question of balance; be the person that your source wants to spill his secrets to but don't take it so far in the direction of camaraderie that your source is also your best friend. People know when sincerity isn't genuine. Recruiting human sources of intelligence is as much an emotional and psychological construct as it is an intelligence gathering one.

John Little: The psychological dynamics of these relationships really run the whole spectrum so it’s difficult to generalize. However, agents seem to be burdened with most of the psychological stress. Once that line has been crossed and they’ve betrayed their country the case officer is both a lifeline and in some ways a potential (if not outright) threat. It seems like a really unstable dynamic. How were you prepared for this? Can role play and classroom time really prepare a potential case officer for the challenge or does it have to be mastered in the field?

Michael Ross: Let there be no mistake; it's the source that bears almost all the risk. How often do you hear in the news that a Mossad, CIA, or MI6 case officer has been captured and/or executed? By the same token, being a case officer has its stresses and dangers (one of my Mossad colleagues was shot by a turned source during a meeting in Brussels and we all know what happened at the CIA base near Khost), but by comparison, it's negligible compared to what the source must endure waiting for the local security goons to get wise. The worst thing a case officer can do is be the cause of his source's capture. It's why we do surveillance detection routes, have good cover, and make damn sure we're not the reason the source is discovered. The recent episode with Ryan Fogle in Moscow is a good example of what happens when you don't take the HUMINT recruitment process seriously. You can laugh at Fogle and his wig but you have to wonder who trained him and, even more importantly, who thought he was a case officer worthy of deployment in Russia.

There is no replacement for experience but training is a very big part of success in the field. There is a lot of time devoted to role-playing during training. I can’t speak for other services but our role-playing consists primarily of real-life scenarios based on what happens when things go sideways. You have sources who balk, demand more money, threaten to go to their own authorities etc. I recall one Mossad case officer sitting calmly with his Arab source in a hotel in Zurich and the source engaging in histrionics and complaining bitterly about his lot in life. The Mossad case officer just smiled and reassuringly told the source in Arabic, “I kiss the words that come out of your mouth”. Sometimes all a source wants is reassurance and a chance to vent. A good HUMINT service always remembers that it’s dealing with human beings with all their failings and idiosyncrasies. A good case officer is able to evaluate very quickly what type of person he’s dealing with and conduct himself accordingly.

John Little: So maintaining a productive relationship with the source requires a lot of work. Does all the effort that goes into maintaining security and managing the agent’s psychological state help the case officer maintain the necessary emotional distance? It’s never really a “normal” relationship and it would seem that those extra layers of activity would constantly reinforce that.

Michael Ross: That's an excellent way of framing the relationship. As a case officer, there is so much to be done in the professional domain that the logistics and requirements of the job prevent the relationship with your source from becoming a true "friendship". It's important to also remember that a case officer has other sources on the go at varying degrees of development and is therefore too busy managing each relationship like a plate spinner to somehow turn work into the kind of relaxing and fun construct that real friendship entails. The essence of real friendship is effortless; the essence of being a good case officer is making it look effortless when it's not.

Having said that, there’s moments to debrief and there’s times when you can hit the bars and relax with your source. Some case officers are fun people and some are very businesslike. It’s a question of personal style and if it works, then nobody will question it. I’m an introvert at heart but the work forced me to overcome that part of myself and become someone else for the purpose of getting the job done. I actually enjoyed that transformation and still do on the rare occasions that I still have to step out on myself.

John Little: How does this dynamic change when a case officer’s leadership gets involved? It’s not too difficult to imagine a scenario where all three players have different expectations from the relationship. Do these kinds of breakdowns occur? Are there common strategies for managing this problem?

Michael Ross: The Mossad, because of its size and the small cadre of case officers at its disposal, has to be really selective about the sources that it recruits. This means that the case officer's leadership is involved in much of the process in a collaborative way. Having said that, I remember taking two senior managers from HQ, who had never visited before, to a country in Africa to meet some sources, and one of the managers – who had served in France – made a comment about the conduct of my source that I took rather personally. In defence of my source I made an angry comment about Africa not being exactly the same as Europe. I knew the terrain and the local attitudes and my manager was looking at it from the perspective of his experiences in Europe. I received a stern rebuke and the moment was instructive. I endeavored thereafter to educate my managers about the way business is done in the places I chose to serve and also to remember that yes, a source needs his case officer to be his advocate with his own people from time to time.

Even in the most collaborative environment, the friction between field and HQ will always exist.

John Little: It sounds rare but when the collaboration does break down, and the case officer and leadership find themselves at odds, is there a specific approach to working that out or do they eventually end the conversation, pull rank, and force the case officer to carry out their instructions? And while we are on the topic, is it fair to say that headquarters has to manage its case officers to some degree the same way case officers manage their agents?

Michael Ross: I don't think I ever saw a complete breakdown between case officer and HQ but there are differences of opinion on how to approach a recruitment operation. These details are always hashed out in advance. Case officers are expected to work with little guidance and a fair amount of autonomy but the reporting structure makes sure that there is no real disconnect.

As far as case officer management goes, that's a really interesting question because case officers tend to be people with subtle (and at times not so subtle) powers of persuasion and manipulation. Issues arise when case officers think it's okay to use this finely honed skill in their personal lives and with colleagues at work. It's considered very bad form in the Mossad for a case officer to try and use his skills on colleagues or as a means to advance his or her career. It's extremely rare, but it does happen. Case officers (and combatants) are a special demographic that requires careful, but not overly stringent, management. One of the advantages that the Mossad has is that its senior ranks are not professional bureaucrats but people who have earned their position through successful careers in the field – and these are not people to be trifled with. In fact, it's not unusual for a newly appointed division head to have barely spent any time at all inside Mossad HQ.

John Little: Despite the Mossad's laser focus on its mission, the excellent training, and a generally effective chain of command, I still get the sense that you can personally relate to the source of Alec Leamas' cynicism. Can you, in very general terms, touch on the decisions or outcomes during your career that didn't sit well with you and perhaps still don't?

Michael Ross: As someone whose career was almost entirely based in the field I can very much identify with Alec Leamas and his cynicism. There’s a great (and in my view under-noticed) part in “The Spy Who Came in From the Cold” where Le Carre talks about the essence of being a spy and living a life under cover: “In itself, the practice of deception is not particularly exacting; it is a matter of experience, of professional expertise, it is a facility most of us can acquire. But while a confidence trickster, a play-actor or a gambler can return from his performance to the ranks of his admirers, the secret agent enjoys no such relief. For him, deception is first a matter of self-defense. He must protect himself not only from without but from within, and against the most natural of impulses: though he earn a fortune, his role may forbid him the purchase of a razor; though he be erudite, it can befall him to mumble nothing but banalities; though he be an affectionate husband and father, he must under all circumstances withhold himself from those in whom he should naturally confide.”

For all the cool professionalism of my service as I describe it, there are the petty banalities that one cannot escape; the source you detest and yet must cajole and entertain, the bigot, the venal, the malodorous, and the foul. The constant and monotonous surveillance detection routes (try doing one in Delhi in 42 C. heat, it’s very unglamorous). Then there is your own desk officer who forgets to maintain your commercial cover address and brings your credibility into question within your operational environment, the constant loneliness, and the occasional failure. This is compounded by those instances where you are putting a source and his family at risk yet he knows it and agrees because you can help his family or keep him afloat financially knowing his dependency on you is like a drug. He’s your worst enemy and now your best friend. After someone looks at you in the way a drowning man looks at a life preserver, believe me that it changes you and makes you second guess yourself and who you really are. At the center is a sense of duty. This is the only place where soldiers and spies walk a common road; you are expected to do the worst things because it’s a contract you signed and fulfill because if you don’t, then who will?

You can read more discussions with Michael Ross here.

Interview: Ali-Reza Anghaie and Scot Terban on InfoSec, Hackers, China, and Cyber Hype


Ali-Reza Anghaie (Right) is a Consulting Security Engineer and Senior Analyst with Wikistrat. His varied work in engineering and security has taken him to numerous universities and Fortune 500 companies in the Defense, Energy, Entertainment, and Medical fields. You can follow Ali-Reza on Twitter and Quora. Scot Terban (Left), AKA the gonzo INFOSEC blogger Krypt3ia, blogs at http://krypt3ia.wordpress.com. You can also find him on Twitter. Both host the weekly Cloak & Swagger: Security Unhinged podcast.

John Little: Let's start off with a Skyfall-esque word association game. Ready? "Cyber Pearl Harbor."

Ali-Reza Anghaie: Geraldo. (Yes, that’s my answer. Say `Cyber Pearl Harbor` in his voice and you’ll want to strangle yourself too.)

Scot Terban: Expletive.

John Little: Alright, so what is it about “Cyber Pearl Harbor” that sets you two, and many other infosec professionals, off? What are Panetta, Lieberman, and other Beltway types getting wrong about the legitimate threats we face in the digital domain?

Ali-Reza Anghaie: Let's clarify "getting wrong" – as professionals we encounter `wrong` all the time. ~Intentionally~ exaggerating and obfuscating threats is what has been happening in DC. However, it's also politics – you never hear a politician talk about any issue in a way that satisfies the wider professional community of that issue. That's quite intentional – as the people who really know are absolutely the people that politicians need to play ~against~ to centralize and pull power toward their own spheres of influence.

And that's really the part that burns me – the echo chamber they've built is designed to accommodate just those that will work within the confines of the existing DC dynamic. And so much energy is exhausted in just that posturing that by the time you get to actual technical working groups – you're already on the tail end of resource availability. So, if you're lucky, you'll get through one or two iterations of actual policy driven work before the next manufactured crisis hoovers priority elsewhere.

Since this is the inevitable cycle, I suggest we move straight to the end – private industry needs to step to the plate as a competitive matter because Government, as Government always does, will punish you using whatever laws do or don’t exist as soon as it’s politically tenable. And won’t provide any solutions along the way. Why not just get it over with?

You know – I’d probably be less cynical and in a better mood if you stopped saying “Cyber Pearl Harbor”..

Scot Terban: It’s jingoism at its best. It is propaganda and a tool to get people to react in a knee jerk way.

What are Panetta, Lieberman, and other Beltway types getting wrong about the legitimate threats we face in the digital domain? Everything. They do not comprehend the technologies involved nor the complexities of what they are advocating as the end of the world. They need to let the professionals who deal with this technology and space give the answers. It’s akin to telling a five year old to go on to Meet The Press and explain quantum mechanics.

John Little: There are countless layers to this problem and many of them are not "technical". There are human factors and physical security issues for example. In most cases there are no paths to 100% security. So where, from a national security perspective, should we focus our efforts and dollars? What would get us the most bang for the buck?

Scot Terban: Well, contrary to what a Dave Aitel or lately Schneier might posit, more security awareness for the general populace to start, I think. This is even more so for companies that are within the sights of an APT adversary, but also look at what goes on with crimeware, right? How much of this could be stopped just by making sure people understand the technology that they own and should be managing? We are all supposed to have training and a license to drive a car, so why not at least have a better grasp of the PC and how things work, right?

*waits for Ali's head to explode*

But really, knowledge is power and unfortunately I don't think this will happen either, really. The money will all go into offensive campaigns within CyberComm and we will lag behind on defense. Look at the EO and how the corps responded to it: "Hey yeah, we would like to do less." I know Ali thinks that is all about letting the gubment take over and that is what they want, but I disagree here. I think they do not want the government dictating to them, nor do they want to be responsible for the security of their environments at the level of mandate, because they would be held to it by assessment.

I think in the end your question is moot because nothing will be done that will help us.

Ali-Reza Anghaie: The pounding of the `do the basics` drums needs to be louder than the `sexy` drums..

However, I think the biggest things we can do at a national security level are:

1) Admit defeat at the Government level. Make it clear – CLEAR – that if you’re waiting for Government to combat your hacking problem, you’re going to die.

2) You. Must. Compete. There is a concept called “Intellectual Property Obesity” that has ravaged the American innovators for some time. They spent too much time on Copyright, Patent, and IP theft and not enough on risk analysis, business development, existing means of competition.. concentrate on ~everything else~ that has made America less competitive on a global scale.

In the end, if we’re to suffer a `death by a thousand cuts`, it’s not because of cyber espionage from the Chinese or anyone else. That’s but a small part of the bigger picture.

Now – that speaks to national security at the economic level, which I think is most important – but some conflate this as all purely defense/military in nature. The solutions to that problem set are a bit different and, in part, require actually letting people fail. Not retroactively, but put a pretty solid post in the ground that says: `Hey, if you get hacked and all the IP is stolen, your program funding is going to take a BIG hit. We don't want to tell you how to fix it – we (Government) don't know how. Likewise, if the data gets stolen while with us (again, Government), you're going to get a bit of automatic business helping us or influencing our direct means of securing it`.. something along those lines without the tin-foil gaps.

John Little: Although I know and respect many security professionals the ones that I encounter professionally seem to be bureaucrats rather than technical professionals. They are just lords of a massive fixed documentation process that must be completed whether I’m building a simple web page with public data or a massive mission critical enterprise system. The problem is that I can answer 500 questions about my application and get it approved but at the end of the day there’s nothing about the process that really enhances security. What are your thoughts about how the private sector utilizes InfoSec professionals?

Ali-Reza Anghaie: Firstly – I’m sorry. Really really sorry. You’ll have to file a RC269B exception to ask me this question. It’ll be rejected of course because everyone knows of the `Great RC268T Debacle` of 2012. I have my big red stamp ready to reject your request because email isn’t secure enough and the ColdFusion workflow app we had developed in Bangalore was, of course, developed by non-US Citizens so we can’t really use it. I have spoken.

There is this inherent fear of InfoSec that comes with the noise around incidents right now – similar to how auditors were perceived just after SOX went into effect. Nobody knows what to do with InfoSec except to not piss InfoSec off. Along with that come a lot of non-technical professionals or entry-level professionals enabled with copious amounts of authority and confidence over – well – nothing in particular. So, much like politics, you do exactly what you can get away with without punishment.

This is a cynical view – as my answers have trended so far – but it’s quite normal and recent trends leave me very optimistic.

We're at the tail end of this trend and, as an industry, we're going through it a fair bit quicker than many of our predecessors. Somewhat due to economic constraints, but I sincerely believe the best of the best in InfoSec have taken more responsibility recently for knocking down their own echo chambers. They've seen the charlatans flourish and they know "we" created room for them with ambiguity and hand-waving. "We" want our industry back..

So – to answer your question – I think a huge majority of the private sector is very confused in how to apply InfoSec. And it’s our fault…for now.

Scot Terban: I think we need to differentiate between the INFOSEC folks like an archaeological dig here to start. First off, not all INFOSEC'ers are built the same. I come from the pentesting side AND the policy side as well. I performed many assessments that had a combination of both and understand them both well enough to see where the rubber meets the road, so to speak. Unfortunately not everyone has the skill sets to see both sides of the coin and to work efficiently in the space. So we have people who get into INFOSEC primarily from a "legislative or paper" side of the issue. They understand that security is necessary and there are rules that need to be in place, and that is about it. They follow their checklists and once they have checked the boxes they are good. This is bad, but it is all too often the reality for many folks in corporations who perform audits, from SOX to other government audit standpoints.

Then there are the people who perform just pentests and who often think that rules are just useless. Why? Because the hackers/adversary do not follow the rules, and all too often rules get mired in minutiae that doesn't matter to their attacks. I have heard way too many times, and rightly so, that SOX and other check box security measures are useless. I too have felt the same thing, but too often the pentest crowd is just dismissive of such measures because they are broken and not workable in their present state much of the time. So you can develop an app as you say, the "Bobs" can come in with their checklists, but in the end they have not made the product more secure because they lack the dimension of the attacker perspective.

So we have two camps.. Both out to secure things and neither really can because of a third camp.. Let's call this camp the "Corporation". The corp all too often is motivated not by an innate desire to protect their data, their clients, etc. Their driver is to make as much money as possible, and in doing so security spend is, even today, not what it should be, because it is a cost center. When looking at the options and the legal drivers we can see how it is so easy for a company to go for the check box security approach, mainly because that is what the government and the laws are mandating. It is the "due diligence" mentality and in that, the only due diligence we have primarily is to have the boxes checked so that they can say as much once they get sued or after an incident. THIS is to minimize the legal remunerations that they may incur from lawsuits, and that's the extent of it. Rarely have I seen a company throughout my career that was proactive enough about their security to engage true red teaming and effective policies, procedures, and audit to ensure a modicum of security.

It's mostly set and forget, as well as getting drones who check SOX boxes every year. Aye, there's the rub, huh? This is where you have the paper CISSPs and others who really do not have a grasp of the adversarial INFOSEC that needs to be in place to protect yourselves, and this is where the engine of popularity and money has made a glut of people who don't really have the chops to be in the business doing business. So yeah, you could create an application and the SOX types come along and ask questions, but they really aren't coders, nor do they understand application code security, right? They do their bit but they don't see the whole picture, and you, you could totally hoodwink them into believing that your application is up to standard because this is the only appsec that they are carrying out.. Asking questions and not validating code?

To me, that says that the system is broken. What we need is a middle road where true application security people are involved in your case. In other cases I would like to see people who have a good grasp of security (defense as well as offense) in the roles of audit. Will this happen? Probably not, and that is because, as was lamented recently, "Defense isn't sexy". Add to that that the corps aren't looking to do anything but be "risk averse" and you have a broken system.

John Little: So we have a system that is broken and seems bound to stay that way. With the increasing complexity and distributed nature of data and applications, the vast number of application users (a good portion of the planet now), the rapid advancement of technology, and the challenges involved in building and maintaining an even barely adequate cadre of INFOSEC professionals how will the future not become even more of a hacker’s playground?

Ali-Reza Anghaie: The problem space is going to continue to grow at an accelerating pace. We will drown in more data and we won't ever have enough bodies to throw at the problem. Government "regulation" will likely further exacerbate the staffing problems. Generally we've shown ourselves incapable of effective security automation. Woe is me?

There is a difference between a hacker’s playground and an unmanageable risk. Like any other type of crime, society will compensate in some areas and not in others. Some regions will do better with the same `door locks` and other regions will need `burglar bars` on all windows. So the question isn’t if the attack surface will continue to outpace us – it certainly will – the question is how will we compensate, as an industry and society, elsewhere?

This goes to the very root of competition – and we're stuck with this idea that InfoSec is absolute. You're either not using computers or you're pwned. In no other aspect of life or society do we so readily say that to customers, through Governments, and in our daily routines.

So I would say that hackers will hack and that’s OK. If you aren’t viable and complete even under hacker fire – I’d say you were never actually viable or complete.

Scot Terban: It shall be just as it is now. The only answer is to become a new age Luddite and live in a bunker awaiting the end…

John Little: A significant portion of the cyber-chatter inside the Beltway and in the media is focused on China. How would you characterize the threat Chinese hackers (official or not) pose to the U.S. and how should we be talking about it?

Ali-Reza Anghaie: Let's be clear – the Chinese threat is real and it's aggressive. It is also entirely irrelevant.

We’re at such an early stage of secure architecture and software that concentrating on a given foe is foolish for all but a small core of defense and intelligence agencies. Along those lines, Government emphasizing a given nation-state threat also leaves people with the false impression that these threats ~require~ a nation-state to execute. And…. wait for it… a nation-state level response.

About now big red spinning alarms should be going off in your head. THAT is the problem with “the Chinese threat” – it’s become a political football that has turned into a lobby interest that has turned into a disadvantage to an already painfully broken field. It creates whole classes of C-levels looking at the wrong problems, wrong solutions, and wrong people to deliver those solutions.

Scot Terban: How would I characterize the Chinese threat… Well, they are a threat because they are just persistent and mostly sneaky. Not all of the teams are uber ninjas as portrayed in the news media or in a Mandiant self-propaganda piece, but they are pretty good (some of them). What the question really should be, though, is how I would characterize the attacked.. Not the attacker. We are, on the whole, not prepared to deal with attacks either in the MIL space or the private sector whatsoever. Companies are reticent to fix their infrastructures because it would cause loss of productivity, they hold on to old technologies like XP and IE6 for way too long, and they generally are not, as a whole, security savvy.

So.. How hard is it for the average Chinese hacker to get someone to click on a link, pwn a machine, enter a poorly managed network, and steal them blind? Furthermore, how hard is it then to keep persistence?

Meh.

John Little: You both raise a very important point. While the debates over terminology, doctrine, and threats rage on, the assets are going unprotected. We hear case after case of hackers having an easy time with their targets because of laziness, ignorance, and irresponsibility on the part of individual users, software developers, and network owners. It seems like we could eliminate most threats by shifting the focus away from "external" threats and back to our own behavior and business practices.

Ali-Reza Anghaie: Some years ago various groups started referring to de-perimeterisation as an inherent system design goal – that is to say that every system function should act as if it's facing the "outside" world. From the outset I thought that should be the data protection goal as well – trust no one, period. Everything should have a forensic trail, least-privilege model, etc. Insiders can become your outsiders – prepare as such.
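A deliberately simplified sketch of that "trust no one, audit everything" posture, with invented roles, permissions, and logging sink: every operation is checked against an explicit least-privilege grant, and every attempt, allowed or denied, leaves a forensic trail:

```python
# Minimal illustration of least-privilege plus a forensic trail:
# insiders and outsiders alike pass through the same check-and-log
# gate. Roles, grants, and functions are invented for the sketch.
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s AUDIT %(message)s")

GRANTS = {                       # explicit allow only; default is deny
    "analyst": {"read_report"},
    "auditor": {"read_report", "read_audit_log"},
}

def requires(permission):
    def decorator(func):
        @wraps(func)
        def wrapper(user, role, *args, **kwargs):
            allowed = permission in GRANTS.get(role, set())
            # Forensic trail: log every attempt, allowed or not.
            logging.info("user=%s role=%s perm=%s allowed=%s",
                         user, role, permission, allowed)
            if not allowed:
                raise PermissionError(f"{user} ({role}) lacks {permission}")
            return func(user, role, *args, **kwargs)
        return wrapper
    return decorator

@requires("read_report")
def read_report(user, role, report_id):
    return f"contents of report {report_id}"

print(read_report("mallory", "analyst", 42))   # allowed, and logged
try:
    read_report("mallory", "intern", 42)       # denied, and logged
except PermissionError as exc:
    print(exc)
```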

Now, that was naive of me – cost applies. So I think it comes down to appropriate risk assessments in the complete context of your business, legal, and technical resources – which is non-trivial for multinationals and small business alike.

So – the "right" answer to your question is – we still have an accountability problem, period. Internally or externally, the risk assessments, valuations, and models just aren't being done appropriately on a reliable basis for most organizations. The good news is that the body of work on these topics is increasingly reliable – we can fix the overall scheme of things. Where fixing doesn't always mean absolute security as the goal.

I’d like to thank Blogs of War for taking the time to put together this interview. It’s been great and I really enjoy your various feeds.

Scot Terban: The answer is "yes", but I would also hasten to say that it's not just accountability but a more encompassing problem of OPSEC altogether. The point being that many people today lack understanding of the need for, never mind the practice of, OPSEC. So we have all these private and public entities that really have no concept of the security landscape in the first place and why it is important to protect their data, so how do you expect them to be aware of internal or external threats? While the military and government space have an idea, they too suffer from lackadaisical attitudes and a lack of comprehension of the technologies that they are using to manipulate, store, and use data. I tend to think of it as a human nature issue in general that we need to tackle just to bring people to the security table in the first place, before we can make them aware enough to think about and secure their assets. Once people are on the same page with the technologies (not just the tech folks we all work with but the end users), then we will have a discussion over the internal versus the external threats posed.