
Why Are Security Problems So Hard to Solve?

source link: https://blog.cimicorp.com/?p=4750

Why are network, application, and data security problems so difficult to solve? As I’ve noted in previous blogs, many companies say they spend as much on security as on network equipment, and many also tell me that they don’t believe that they, or their vendors, really have a handle on the issue. “We’re adding layers like band-aids, and all we’re doing is pacing a problem space we’ve been behind in from the first,” is how one CSO put it.

Staying behind by the same amount isn’t the same as getting ahead, for sure, but there’s not as much consensus as I’d have thought on the question of what needs to be done. I’d estimate that perhaps a quarter or less of enterprises really think about security in a fundamental way. Most are just sticking new fingers in new holes in the dike, and that’s probably never going to work. I did have a dozen useful conversations with thoughtful security experts, though, and their views are worth talking about.

If you distill the perspective of these dozen experts, security comes down to knowing who is doing what, who’s allowed to do what, and who’s doing something they don’t usually do. The experts say that hacking is the big problem, and that if hacking could be significantly reduced, it would immeasurably improve security and reduce risk. Bad actors are the problem, according to the enterprise experts, and their bad acting leaves, or could/should leave, behavioral traces that we’re typically not seeing or even looking for. Let’s try to understand that by looking at the three “knows” I just cited.

One problem experts always note is that it’s often very difficult to tell just who is initiating a specific request or taking a specific action on a network or with a resource. Many security schemes focus on identifying a person rather than identifying both the person and the client resource being used. We see examples of the latter in many web-based services, where we are asked for special authentication if we try to sign on from a device we don’t usually use. Multi-factor authentication (MFA) is inconvenient, but it can serve to improve our confidence that a given login is really from the person who owns the ID/password, and not an impostor who’s stolen it.
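To make that concrete, here is a minimal sketch (Python, standard library only) of that two-part check: recognize the device as well as the person, and fall back to a time-based one-time password when the device is unfamiliar. The names here (KNOWN_DEVICES, verify_totp, login) are purely illustrative, not any particular vendor's API.

```python
import hmac, hashlib, struct, time

# Devices each user has previously authenticated from (would normally be persisted).
KNOWN_DEVICES = {"alice": {"laptop-7f3a"}}

def verify_totp(shared_secret: bytes, code: str, window: int = 1) -> bool:
    """RFC 6238-style time-based one-time password check (30-second steps)."""
    step = int(time.time()) // 30
    for offset in range(-window, window + 1):
        msg = struct.pack(">Q", step + offset)
        digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
        pos = digest[-1] & 0x0F
        value = (struct.unpack(">I", digest[pos:pos + 4])[0] & 0x7FFFFFFF) % 1_000_000
        if hmac.compare_digest(f"{value:06d}", code):
            return True
    return False

def login(user: str, password_ok: bool, device_id: str, totp_code: str, secret: bytes) -> bool:
    """Accept a login only if credentials check out AND the device is trusted or a second factor is given."""
    if not password_ok:
        return False
    # Unfamiliar device: require the second factor before trusting the session.
    if device_id not in KNOWN_DEVICES.get(user, set()):
        if not verify_totp(secret, totp_code):
            return False
        KNOWN_DEVICES.setdefault(user, set()).add(device_id)
    return True
```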

The problem of having someone walk away and leave their system accessible to intruders would be largely resolved if multi-factor authentication were applied via a mobile phone as the second “factor”, since few people would leave their phones. However, if an application is left open, or if a browser tab that referenced a secure site/application is open and it’s possible to back up from the current screen into the secure app, there’s a problem. There are technical ways of addressing these issues, and they’re widely understood. They should be universally applied, and my group of experts say that more time is spent on new band-aids than on making sure the earlier ones stick.
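As a rough sketch of the two widely understood fixes just mentioned, the snippet below expires idle sessions and keeps authenticated pages out of the browser cache, so "backing up" into a secure app after walking away or logging out fails. The fifteen-minute limit and the names (Session, no_cache_headers) are assumptions for illustration, not a standard.

```python
import time
from dataclasses import dataclass, field

IDLE_LIMIT_SECONDS = 15 * 60  # force re-authentication after 15 idle minutes

@dataclass
class Session:
    user: str
    last_activity: float = field(default_factory=time.time)

    def touch_or_expire(self) -> bool:
        """Return True if the session is still valid; refresh its idle timer."""
        now = time.time()
        if now - self.last_activity > IDLE_LIMIT_SECONDS:
            return False          # stale session: the caller should demand a fresh login
        self.last_activity = now
        return True

def no_cache_headers() -> dict:
    """Response headers that keep authenticated pages out of the browser cache."""
    return {
        "Cache-Control": "no-store, no-cache, must-revalidate",
        "Pragma": "no-cache",
        "Expires": "0",
    }
```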

The network could improve this situation, too. If a virtual-network layer could identify both user and application connection addresses and associate them with their owners, the network could be told which user/resource relationships were valid, and could prevent connections not on the list—a zero-trust strategy. It could also journal all attempts to connect to something not permitted, and this could be used to “decertify” a network access point that might be compromised.
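A toy version of that allow-list-plus-journal idea might look like the following; the policy table, the denial threshold, and the "decertify" trigger are all placeholders for whatever a real zero-trust controller would actually use.

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO)
journal = logging.getLogger("zero-trust-journal")

# Explicit allow-list: (user, resource) pairs that are permitted to connect.
ALLOWED = {
    ("alice", "payroll-db"),
    ("alice", "crm-app"),
    ("bob", "crm-app"),
}

denied_per_access_point: Counter = Counter()
DECERTIFY_THRESHOLD = 5  # repeated denials from one access point look like probing

def authorize(user: str, resource: str, access_point: str) -> bool:
    """Permit only relationships on the allow-list; journal and count everything else."""
    if (user, resource) in ALLOWED:
        return True
    journal.warning("DENIED %s -> %s via %s", user, resource, access_point)
    denied_per_access_point[access_point] += 1
    if denied_per_access_point[access_point] >= DECERTIFY_THRESHOLD:
        journal.error("Access point %s exceeded denial threshold; candidate for decertification", access_point)
    return False
```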

Journals are also a way of looking at access patterns and history, something that AI could facilitate. Depending on the risk posed by a particular resource/asset, accesses that break the normal pattern could be a signal for a review of what’s going on. This kind of analysis could even detect the “distributed intruder” style of hack, where multiple compromised systems are used to spread out access scanning to reduce the chance of detection.
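As a sketch of what that journal analysis could look like at its simplest, the snippet below builds a per-user baseline of which resources they normally touch and flags accesses that break the pattern. Real systems would use far richer features (and likely AI/ML models); the one-percent threshold and the class name are placeholders.

```python
from collections import defaultdict

class AccessBaseline:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # user -> resource -> hit count

    def record(self, user: str, resource: str) -> None:
        """Replay journal entries through this to build the baseline."""
        self.counts[user][resource] += 1

    def is_anomalous(self, user: str, resource: str, min_history: int = 20) -> bool:
        """Flag an access to a resource this user has rarely or never touched before."""
        history = self.counts[user]
        total = sum(history.values())
        if total < min_history:
            return False  # not enough history to judge
        return history.get(resource, 0) / total < 0.01

baseline = AccessBaseline()
# After replaying the journal, new events can be screened, e.g.:
# if baseline.is_anomalous("alice", "finance-archive"): escalate for review
```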

A special source of identity and malware problems is systems and devices that are used both in the office and elsewhere, since the use of, and traffic associated with, those devices isn't visible when they're outside a protected facility. That problem can be reduced if all devices used to access company assets are on the company VPN, with the kind of zero-trust access visibility I've described. If the WFH strategy for systems outside the office puts them inside the zero-trust boundary, the risk of misbehavior is reduced because the chances of detecting it are much higher.

The “dualism” of devices, the fact that many are used for both personal and business reasons, is one of the major sources of risk, one that even zero-trust network security can’t totally mitigate. Many of the security experts I’ve talked with believe that work and personal uses of devices absolutely should not mix, and that business devices should not be able to install any applications other than those approved by IT. Those same experts are forced to admit that it’s virtually impossible to cut off Internet access, however, and that creates an ongoing risk of malware and hacks.

One suggestion experts had was to require that all systems used for business, whoever owns them, access all email through a company application. Emailed malware, either in the form of contaminated files or links, represents a major attack vector, and in fact security experts say it may well be the dominant way that malware enters a company. The problem here again is the difficulty in enforcing the rule. Some who have tried, by blocking common email ports, have found that employees learn how to circumvent these rules using web-based email. Others say that social-media access, which is hard to block, means that it may not be worthwhile to try to control email access to avoid malware.
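For what it's worth, the port-blocking approach mentioned above amounts to an egress policy along these lines; the port list and the "approved gateway" address are examples only, and, as the experts note, web-based mail slips right past it.

```python
BLOCKED_EGRESS_PORTS = {25, 465, 587, 110, 995, 143, 993}  # SMTP, POP3, and IMAP variants
APPROVED_MAIL_GATEWAYS = {"10.0.0.25"}                      # example company mail relay

def allow_egress(dest_ip: str, dest_port: int) -> bool:
    """Permit direct mail-protocol traffic only to the approved company relay."""
    if dest_port in BLOCKED_EGRESS_PORTS:
        return dest_ip in APPROVED_MAIL_GATEWAYS
    return True
```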

So what’s the answer to the opening question: why is security so hard? Because we’ve made it hard, not only by ignoring things that were obviously going to be problems down the line (like BYOD), but also by fixing symptoms instead of problems when the folly of the “head-in-the-sand” approach became clear. I think that we need to accept that while the network isn’t going to be able to stamp out all risk, it’s the place where the most can be done to actually move the ball. Zero-trust strategies are the answer, and no matter how much pushback they may generate because of the need to configure connectivity policies, there’s no other way we’re going to get a handle on security.

