I’ve often had people ask me, “How can hackers be stopped?”, immediately expecting a quick, succinct list of items that will make an organization completely hack-proof. This is simply not the case. Preventing an application or infrastructure from being hacked is like preventing a car crash. The car and the road can be checked for defects, the vehicle can be validated to have the correct configuration for the environment, and the drivers can be trained in defensive driving techniques. Even with these precautions, defects can be missed, the environment can change, and drivers can make mistakes. Any of these could result in a car crash. Accepting this risk is one of the requirements of being on the road. Likewise, it’s impossible to completely remove the risk of being hacked if your organization is on the information superhighway, but there is a lot you can do to reduce the likelihood and impact.
Testing for Defects
Within the information security field, testing for defects is usually referred to as a “Vulnerability Assessment”, “Ethical Hacking”, “Penetration Testing” (aka “Pen Testing”), or “Red Teaming”. Each of these terms involves something different, but all seek to discover defects that could potentially allow attackers to do bad things. These defects are more commonly referred to as “security vulnerabilities”.
What should be tested for defects?
Within the car example, defects in the vehicle, tire configuration, road, stop-lights, or signs could all result in a car crash. Likewise, in the information security arena, a defect in almost anything on or connected to your network (routers, servers, databases, applications, facilities, wires, processes, and people) could result in your organization being hacked. What should be tested is generally broken into four categories: Network, Application, Social, and Physical.
Network assessments focus on the “roads”, “stop-lights” and “signs” that make up the environment. This includes firewall configurations, server ports, and known problems with common applications that make up the infrastructure (Router Software, Web Server, Application Server, etc.).
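To make the “server ports” piece of this concrete, here is a minimal sketch in Python of checking which TCP ports on a host accept connections. The host and port list are placeholders, and a real network assessment would use purpose-built tooling and proper authorization rather than a script like this:

```python
import socket

def check_open_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Placeholder target -- only scan hosts you are authorized to test.
print(check_open_ports("127.0.0.1", [22, 80, 443]))
```

An unexpected open port found this way is exactly the kind of “unlocked door” a network assessment flags for follow-up.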
Application assessments focus on the “cars” that use the roads. This includes each of the individual applications used by the business (web sites, mobile applications, desktop applications, etc.).
Social assessments focus on the “drivers” that use the applications. This includes anyone who uses any of the applications anywhere within the organization, and involves trying to get them to do things that they should not.
Physical assessments also focus on the “roads” and “cars”, but from a physical standpoint. This includes seeing whether a person can physically get to sensitive locations within your office building to gain access to the network, servers, or other sensitive data.
Testing all four areas gives reasonable assurance that it would be difficult to get hacked. Focusing on only one of these areas could be a formula for disaster. Consider a newly paved road with an old clunker driving down it on bald tires, leaking gasoline, its dragging bumper throwing sparks. What is the probability that the vehicle will be in an accident if it’s used frequently? The information security equivalent of this happens all the time: an organization will get a network assessment of its infrastructure, and then assume the applications on its network are also secure. This is rarely true, and this thinking has contributed to at least a handful of the large data breaches.
Who should test for defects?
Within the car industry, the expectation is that the manufacturer is going to test any new vehicle for safety issues and create a safe product. This is regulated by the government, and there is clear legal precedent holding the manufacturer accountable when it has not been done. This allows consumers to feel confident that a new vehicle is reasonably safe without needing to know much about vehicles themselves. Within the information security field, it is a completely different story.
There aren’t currently any regulations that require software manufacturers to build software to a clearly defined security standard. In addition, the legal precedent is still being established to determine how much, if at all, software manufacturers can be held accountable for security defects. As a result, while the software manufacturer *SHOULD* test for security vulnerabilities, there isn’t typically a strong incentive to do so. Add in the fact that there is a wide range of experience levels among individuals creating software, and that even among computer science college graduates only a small percentage have taken a single class containing even one section on secure programming, and it seems that almost anyone who uses an application should at least kick the tires.
How thoroughly should defects be tested?
How thoroughly defects should be tested for is directly tied to the impact a flaw would cause. For example, if a car were only going to be driven one mile, at 15 mph, once a month, on a completely straight road with almost no traffic, a defect that could pop a tire would not be as big of a deal. Likewise, if an application is only used by a handful of users, and a full compromise of the machine would not reveal any sensitive data or prevent business from being done, then it may make sense to do little or no testing. If, on the other hand, the application allowed the transfer of millions of dollars, or controlled the life support of an individual, it would probably be advisable to have at least one fairly extensive ethical hacking engagement performed on the software. This would probably make sense even if the manufacturer of that software stated they had their own assessment done.
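The proportionality idea above can be sketched as a toy decision rule: rate an application’s impact and exposure, and let the product suggest a testing depth. The ratings, thresholds, and tier names here are invented for illustration and are not a formal risk methodology:

```python
# Hypothetical sketch: map rough impact and exposure ratings (1 = low,
# 3 = high) to a suggested depth of security testing. The scoring and
# tier names are invented for this example, not an industry standard.
def suggested_testing(impact, exposure):
    score = impact * exposure
    if score >= 6:
        return "extensive ethical hacking engagement"
    if score >= 3:
        return "targeted vulnerability assessment"
    return "little or no testing"

# A seldom-used internal tool vs. a money-transfer application
print(suggested_testing(impact=1, exposure=1))  # little or no testing
print(suggested_testing(impact=3, exposure=3))  # extensive ethical hacking engagement
```

The specific numbers matter far less than the habit: decide the testing depth from what a compromise would cost, not from what testing happens to be convenient.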
The risk of being hacked can never be completely eliminated, but it can be greatly reduced. The important thing is to look at the risks to your organization in the four different areas of information security, and be sure that your organization is doing something in each area appropriate to its level of risk. This may mean requiring your vendors to go through third-party security audits, having your own mini-assessments conducted, or having an extensive assessment conducted. If you need help determining what your risks are, what to ask your vendors to do, or having actual work performed, please feel free to reach out to us. We offer a free one-hour initial consultation for new customers, and can refer you to other organizations if what you require is not in our area of expertise. The information superhighway would be a much safer place if more organizations did some tire kicking.