In the previous articles, we discussed the basic process of conducting a security assessment and its final output, the security solution. But what skills are needed to actually design security solutions? This article describes methods that can be used in practice and discusses them from a white hat's perspective.
Principle of Secure by Default
In the design of security solutions, the most basic and most important principle is “Secure by Default.” It should be kept in mind in any security design; whether the whole plan is secure enough depends to a great extent on how well this principle is applied. In practice, “Secure by Default” comes down to the choice between whitelists and blacklists: the more a system relies on whitelists, the more secure it becomes.
For example, when making a network access control policy, if the site only provides web services, then the correct approach is to expose only the web server’s ports 80 and 443 to the Internet and block all other ports. This is a whitelist approach. If a blacklist approach is used instead, problems may arise. Assume the blacklist policy is: do not allow the SSH port to be open to the Internet. An audit would then check whether the default SSH port, 22, is reachable from the Internet. In practice, however, it is often found that some engineers, out of laziness or for convenience, change the SSH listening port without permission, for example from 22 to 2222, thereby bypassing the security policy.
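The difference between the two policies can be sketched as a toy comparison in Python. The port numbers come from the example above; the policy functions themselves are illustrative, not a real firewall configuration.

```python
# Whitelist vs. blacklist port policy, as a minimal sketch.
ALLOWED = {80, 443}   # whitelist: only the web ports are exposed
BLOCKED = {22}        # blacklist: only the default SSH port is blocked

def whitelist_allows(port):
    # Anything not explicitly allowed is denied.
    return port in ALLOWED

def blacklist_allows(port):
    # Anything not explicitly blocked is allowed.
    return port not in BLOCKED

# An engineer quietly moves SSH from port 22 to 2222:
print(whitelist_allows(2222))  # False: still blocked by the whitelist
print(blacklist_allows(2222))  # True: the blacklist is bypassed
```

The whitelist fails closed: the renamed SSH port is still unreachable. The blacklist fails open, which is exactly the bypass described above.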
Similarly, in a production server environment, arbitrary installation of software should be restricted, and unified rules for software installation should be developed. These rules can also be built as a whitelist: the software and versions required by the business are listed, and everything else is prohibited. If engineers are allowed to install arbitrary software on servers, they may introduce vulnerabilities and enlarge the attack surface.
In web security, whitelists are used everywhere. For example, when an application processes rich text submitted by users, it must perform a security check against XSS. Common XSS filters generally parse the input into tag objects and then match them against a set of rules, and this rule list can be either a blacklist or a whitelist. A blacklist might prohibit tags such as <script> and <iframe>, but it may never be complete, because browsers keep adding support for new HTML tags that are absent from the blacklist. A whitelist avoids this issue: the rules allow the user to input only tags such as <a> and <img>.
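The whitelist filtering idea can be sketched with Python's standard `html.parser`. The allowed-tag set and the decision to drop attributes are simplifying assumptions for illustration; this is not a production-grade XSS filter.

```python
# A minimal whitelist-based rich-text filter: tags on the whitelist pass
# through, everything else is dropped, and text content is always escaped.
from html import escape
from html.parser import HTMLParser

ALLOWED_TAGS = {"a", "img", "b", "i", "p"}  # whitelist: all other tags are removed

class WhitelistFilter(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED_TAGS:
            self.out.append(f"<{tag}>")  # attributes dropped for simplicity

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(escape(data))    # text content is always escaped

def sanitize(html_text):
    f = WhitelistFilter()
    f.feed(html_text)
    return "".join(f.out)

print(sanitize('<b>hi</b><script>alert(1)</script>'))
```

Because the decision is "allow only what is listed," a newly invented browser tag is rejected automatically, without any rule update.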
However, implementing a whitelist does not guarantee complete safety. This may seem contradictory, since the whitelist is supposed to solve the security issue. Let us therefore analyze the thought process behind security mentioned earlier: the nature of security issues is a question of trust, and a security program is built on a basis of trust. Choosing a whitelist to design security solutions is a safer bet, as it is comparatively more effective; but once the basis of trust no longer exists, the security vanishes.
Principle of Least Privilege
Another aspect of Secure by Default is the principle of least privilege, which is also a basic principle of security design. The principle of least privilege requires that the system grant users only the permissions they actually need and never over-authorize them; this effectively reduces the chance of errors in systems, networks, applications, and databases.
For example, on a Linux system, a good operating practice is to log in with an ordinary account and, when an operation requires root privileges, use the sudo command to perform it. This reduces the risk of misuse: the consequences of abusing an ordinary account and abusing the root account are completely different.
To apply the principle of least privilege, you need to carefully sort out the permissions the business actually needs; in many cases, developers do not realize that users are being over-authorized in the name of business requirements. When interviewing stakeholders to understand the business, you can ask a number of probing questions, such as: “Are you sure you need to access the Internet?” Such questions help determine the least privilege the business truly requires.
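Least privilege can be illustrated with a toy role-permission model; the role names and permission strings here are invented for the sketch and are not from any specific system.

```python
# Each role is granted only the permissions it needs; anything not
# explicitly granted is denied (least privilege, whitelist-style).
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete"},
}

def is_allowed(role, action):
    # An unknown role gets the empty permission set, i.e. nothing.
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("editor", "write"))   # True: needed for the job
print(is_allowed("viewer", "delete"))  # False: never granted
```

Note the default for unknown roles is the empty set, so the model fails closed, in the spirit of Secure by Default.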
Principle of Defense in Depth
Like Secure by Default, Defense in Depth is an important guideline for designing security programs.
Defense in Depth consists of two aspects. First, security measures should be implemented at various levels and in different aspects to avoid omissions, and the different measures should work together to constitute a whole. Second, we must do the right thing in the right place, that is, implement targeted security measures that counter the fundamental problem.
A mineral water advertisement shows the production process of a drop of water: ten layers of security filters remove harmful substances, and eventually we get a drop of drinking water. This multilayered filtering system is akin to the three-dimensional, layered security solution that Defense in Depth provides.
Defense in Depth does not mean that the same security measure should be implemented twice or more; it means building the overall solution by implementing measures at all levels and from all angles. We often hear of the “bucket theory”: how much water a bucket can hold depends not on its longest board but on its shortest, the so-called short board. What the design of security solutions fears most is a short board. The boards are the various security measures with their different roles, and they must fit closely together to form a watertight bucket.
In common intrusion cases, web application vulnerabilities are exploited most often: an attacker first obtains a low-privilege web shell, then uploads more files through the web shell and tries to execute system commands with higher privileges, perhaps even to escalate to root on the server; next, the attacker attempts to penetrate the database server.
In such intrusion cases, if any link in the attack chain encounters an effective defense measure, the intrusion fails. But since no single measure is a panacea, the risk must be spread across all levels of the system. In defending against intrusion, we need to consider web application security, OS security, database security, and network environment security; these measures at different levels together constitute the whole defense system, which is what Defense in Depth is all about.

Defense in Depth also means doing the right thing in the right place; to do that, one must understand the nature of the threat in order to take the right action. The development of XSS defense technology illustrates this: several different ideas were tried over the years before XSS defense gradually matured and unified. Early programs mainly filtered special characters, so that input such as “<<Swordsman>>” would become “Swordsman”—the angle brackets were simply filtered out, changing the meaning of the user’s original text.
This blunder resulted from failing to “do the right thing in the right place.” For XSS defense, filtering at the point where user input is received is not appropriate, because the harm of XSS occurs when the server outputs, and the user’s browser renders, an HTML page injected with malicious code. Only when the HTML output is assembled can the system know the semantics of the HTML context and determine whether something is wrong. “Doing the right thing in the right place” therefore means installing the defense at the most appropriate place to solve the problem.
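The “right place” idea can be sketched as escaping at output time, when the HTML context is known. The page template here is invented for illustration; the escaping itself uses Python's standard library.

```python
# Encode user data at the moment the HTML is assembled, where the
# context (element content) is unambiguous -- not at input time.
from html import escape

def render_comment(user_input):
    # escape() neutralizes <, >, and & so the data cannot become markup;
    # the original text (brackets included) is preserved for the reader.
    return "<p>{}</p>".format(escape(user_input))

print(render_comment('<script>alert(1)</script>'))
# The tag arrives in the page as inert text, not executable code.
```

Unlike input-time character stripping, this keeps “<<Swordsman>>” readable to the user while still preventing injection.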
In recent years, to meet market needs, security vendors have launched products called UTMs (unified threat management). A UTM integrates almost all the major security functions, such as firewall, VPN, antispam, IDS, and antivirus. For SMEs that are not capable of developing their own security programs, a UTM can, to a certain extent, raise the security threshold. But a UTM is not a panacea: many problems are not suited to being solved at the network layer or the gateway, so the effect may not be as good as expected; to users, it mainly means more peace of mind. For a complex system, Defense in Depth is a necessary step toward a safer system.
Principle of Data and Code Separation
Another important security principle is the separation of data and code. This principle is widely applicable to all kinds of injection issues. In fact, a buffer overflow can also be regarded as a violation of this principle: the program executes data on the stack or the heap as code, which results in security problems.
Many web security problems are caused by injection, such as XSS, SQL injection, CRLF injection, XPath injection, and so on. For such problems, “a truly secure solution” can be designed in accordance with the principle of separating data and code, because this principle gets at the very nature of the vulnerability.
In the following code, $var is a variable the user can control:

<html>
<head>test</head>
<body>
$var
</body>
</html>

Here $var is a fragment of user data embedded in the page the program outputs. If this user data fragment is executed as code, it will lead to security problems.

For example, when the value of $var is:

<script>alert(/XSS/);</script>

the user data are injected into the code snippet. The browser executes it, treating the user data wrapped in the <script> tag as code; this is clearly not what the program’s developer intended.
In accordance with the principle of separating data and code, the user data $var needs security handling; filtering, encoding, and similar means can be used to eliminate anything that could be confused with code—in this case, the < and > symbols. Some may ask: what if there is a genuine need to execute a <script> tag that pops up a piece of text, such as “Hello!”? In that case the boundary between data and code changes; following the principle of data and code separation, we should rewrite the code fragment as:
<script> alert("$var1"); </script>
Here the <script> tag has become part of the code fragment, and the user data can control only $var1, which prevents the security problem from occurring.
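On the server side, keeping the user data confined to the $var1 slot can be sketched as follows. The template helper is hypothetical; `json.dumps` is used here as a simple encoder for a JavaScript string literal, which is one possible choice rather than the book's prescribed method.

```python
# Keep the <script> wrapper as fixed code; user data only ever fills
# the string-literal slot, encoded so it cannot break out of it.
import json

def render_alert(user_text):
    # json.dumps escapes quotes, backslashes, and control characters,
    # so the user data stays a string literal inside the script.
    # (A fuller solution would also guard against "</script>" in the data.)
    return "<script> alert({}); </script>".format(json.dumps(user_text))

print(render_alert('Hello!'))
print(render_alert('"); evil(); //'))  # quotes are escaped, not executed
```

The fixed code (the <script> tag and the alert call) never mixes with the variable data, which is the essence of the separation principle.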
Principle of Unpredictability
Several principles have been described so far: Secure by Default is the general principle and should always be kept in mind; Defense in Depth views the problem more comprehensively, from all angles; the separation of data and code views the problem from the angle of how vulnerabilities are formed; the next principle, unpredictability, looks at the issue from the perspective of countering the attack.
Over the years, Microsoft’s Windows users have suffered from buffer overflows; in newer versions of Windows, Microsoft has taken many measures against this. Since Microsoft cannot guarantee that the software running on the system is free of vulnerabilities, the approach it takes is to make exploitation of those vulnerabilities fail. For example, it uses DEP to ensure that the stack is non-executable, and ASLR to randomize the stack base address, so that an attacker is unable to guess memory addresses—which greatly raises the bar for attacks. Practical testing has proved that this idea is effective: even when the code cannot be repaired, a method that renders the attack invalid can be regarded as a successful defense.
ASLR is used not only by Microsoft but is also available in newer versions of the Linux kernel. Under ASLR, every time a program starts, its stack base address is different, with a certain degree of randomness, which makes it unpredictable to attackers.
Being unpredictable is an effective technique against attacks that rely on tampering and forgery. Consider the following case.
Assume that the serial numbers of the articles in a content management system ascend in numerical order, for example, id = 1000, id = 1002, id = 1003, … This ordering allows an attacker to easily traverse all the article numbers in the system: find one integer, then count upward. If an attacker wants to delete these articles in bulk, he just needs to write a simple script and can easily achieve his goal. However, if the content management system applies unpredictability and the value of the id becomes unguessable, what happens?
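The attacker’s enumeration script mentioned above might look like the following sketch. The URL pattern and endpoint are invented for illustration; a real attack would issue HTTP requests rather than print the targets.

```python
# Hypothetical sketch of the batch-delete enumeration: because the ids
# are sequential, building the full target list is trivial.
BASE = "http://example.com/admin/delete?id={}"  # assumed endpoint

def build_targets(start=1000, count=3):
    # Sequential ids make every article enumerable from one known id.
    return [BASE.format(i) for i in range(start, start + count)]

for url in build_targets():
    print(url)  # a real script would send the request here
```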
id = asldfjaefsadlf, id = adsfalkennffxc, id = poerjfweknfd……
The id value becomes completely unpredictable; if the attacker wants to delete articles in bulk, the only way is to crawl all the pages, extract the ids, and analyze them one by one, which raises the bar for the attack.
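Such unpredictable ids can be generated from a cryptographically secure random source. This is one illustrative approach using Python's `secrets` module; the text does not prescribe a specific algorithm.

```python
# Unguessable, URL-safe identifiers from a secure random source.
import secrets

def random_id(nbytes=12):
    # token_urlsafe(12) yields 16 URL-safe characters of entropy;
    # knowing one id tells the attacker nothing about the others.
    return secrets.token_urlsafe(nbytes)

print([random_id() for _ in range(3)])
```

In practice a system would also map each random id to its record in a database, since the ids no longer encode any ordering.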
The principle of unpredictability can also be cleverly used to protect sensitive data. In CSRF defense, for example, a token is usually used to mount an effective defense. The token can successfully defend against CSRF because the attacker cannot predict its value; this requires the token to be sufficiently complex.
Unpredictability is often implemented with the help of encryption algorithms, random number generators, and hash algorithms; making good use of this principle can greatly assist in the design of security solutions.