Layered information security system for “office” type computer systems


1. GENERAL CHARACTERISTICS OF THE WORK

1.1 Relevance

1.2 Purpose

1.3 Objectives

2. MAIN CONTENT OF THE WORK

2.1 Defense in depth

2.2 Components of a layered information security system

2.2.1 Antivirus programs

2.2.2 Logging and auditing

2.2.3 Physical protection

2.2.4 Authentication and password protection

2.2.5 Firewalls

2.2.6 Demilitarized Zone

2.2.7 VPN

2.2.8 Intrusion detection system

3. MAIN RESULTS OF THE WORK

LIST OF INFORMATION SOURCES USED


1. GENERAL CHARACTERISTICS OF THE WORK

1.1 Relevance

The study of layered information security in “office” type computer systems is relevant because of the constant growth in the number of attacks on the networks of large organizations, aimed, for example, at copying databases that contain confidential information. Such a security system is a powerful tool against attackers and can effectively deter their attempts at unauthorized access to the protected system.

1.2 Purpose

The purpose of this work is to study a layered protection system for “office” type computer systems.

1.3 Objectives

To achieve this goal, it is necessary to solve the following tasks:

Study the principles of construction and operation of a layered security system;

Study independent security systems included in a layered information security system;

Determine the requirements for the protection systems.

2. MAIN CONTENT OF THE WORK

2.1 Defense in depth

Defense in Depth is an information assurance concept in which several different layers of protective systems are installed throughout a computer system. Its purpose is to provide redundant security for a computer system in the event that a security control fails or an attacker exploits a particular vulnerability.

The idea of defense in depth is to protect the system against any attack by using a number of independent methods, usually applied sequentially.

Initially, defense in depth was a purely military strategy: its aim was not to anticipate and prevent an enemy attack but to delay it, buying time to position various defensive measures correctly. An example makes this clearer: barbed wire effectively holds back infantry, but tanks drive over it easily. A tank, however, cannot pass anti-tank obstacles, while infantry simply walks around them. Used together, these obstacles stop both tanks and infantry from getting through quickly, and the defending side gains time to prepare.

The placement of security mechanisms, procedures, and policies is intended to enhance the security of a computer system, where multiple layers of protection can prevent espionage and direct attacks on critical systems. From a computer networking perspective, defense in depth is intended not only to prevent unauthorized access, but also to provide time in which an attack can be detected and responded to, thereby reducing the consequences of a breach.

An “office” type computer system can process information with different levels of access - from freely available information to information constituting a state secret. That is why, in order to prevent unauthorized access and various types of attacks, such a system requires an effective information security system.

Next, we will consider the main layers of protection (echelons) used in layered defense systems. It should be noted that a defense system consisting of two or more of the following systems is considered layered.

2.2 Components of a layered information security system

2.2.1 Antivirus programs

An antivirus program (antivirus) is a specialized program for detecting computer viruses and unwanted (malicious) programs in general, for restoring files infected (modified) by such programs, and for prevention, that is, blocking the infection (modification) of files or the operating system by malicious code.

Antivirus software belongs to the software tools used to protect, by non-cryptographic methods, information constituting state secrets and other restricted-access information.

Antivirus products can be classified according to several criteria:

According to the anti-virus protection technologies used:

Classic antivirus products (products that use only signature detection methods);

Proactive antivirus protection products (products that use only proactive antivirus protection technologies);

Combined products (products that use both classic, signature-based protection methods and proactive ones).

By product functionality:

Antivirus products (products that provide only antivirus protection)

Combination products (products that provide not only anti-malware protection, but also spam filtering, encryption and data backup, and other functions);

By target platforms:

Antivirus products for Windows operating systems;

Anti-virus products for *NIX OS family (this family includes BSD, Linux OS, etc.);

Antivirus products for the MacOS family of operating systems;

Antivirus products for mobile platforms (Windows Mobile, Symbian, iOS, BlackBerry, Android, Windows Phone 7, etc.).

Antivirus products for corporate users can also be classified by protection objects:

Antivirus products to protect workstations;

Anti-virus products to protect file and terminal servers;

Anti-virus products to protect email and Internet gateways;

Antivirus products to protect virtualization servers.
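To make the distinction between the signature-based and proactive (behavioral) technologies listed above more concrete, here is a minimal Python sketch; the byte signatures and the suspicious-API heuristic are invented for illustration and do not come from any real antivirus product.

```python
# Minimal illustration of two detection approaches used by antivirus products.
# The signatures and heuristics below are invented for demonstration only.

SIGNATURES = {
    b"\xde\xad\xbe\xef": "Demo.Trojan.A",   # hypothetical byte pattern
    b"EVIL_MACRO_V1":    "Demo.Macro.B",
}

SUSPICIOUS_CALLS = {"CreateRemoteThread", "SetWindowsHookEx", "WriteProcessMemory"}


def signature_scan(file_bytes: bytes) -> str | None:
    """Classic detection: look for known byte patterns in the file."""
    for pattern, name in SIGNATURES.items():
        if pattern in file_bytes:
            return name
    return None


def proactive_scan(observed_calls: set[str]) -> bool:
    """Proactive (behavioral) detection: flag a process whose observed API
    calls overlap too much with a list of suspicious operations."""
    return len(observed_calls & SUSPICIOUS_CALLS) >= 2


if __name__ == "__main__":
    sample = b"header" + b"\xde\xad\xbe\xef" + b"payload"
    print(signature_scan(sample))                      # -> Demo.Trojan.A
    print(proactive_scan({"WriteProcessMemory",
                          "CreateRemoteThread"}))      # -> True
```

A combined product, in these terms, simply applies both kinds of check to the same object.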

Requirements for anti-virus protection tools include general requirements for anti-virus protection tools and requirements for security functions of anti-virus protection tools.

To differentiate the requirements for the security functions of anti-virus protection tools, six protection classes of anti-virus protection tools have been established. The lowest class is sixth, the highest is first.

Anti-virus protection tools corresponding to protection class 6 are used in personal data information systems of classes 3 and 4.

Anti-virus protection tools corresponding to class 5 protection are used in personal data information systems of class 2.

Anti-virus protection tools corresponding to class 4 of protection are used in government information systems that process restricted information that does not contain information constituting state secrets, in personal data information systems of class 1, as well as in public information systems of class II.

Anti-virus protection tools corresponding to protection classes 3, 2 and 1 are used in information systems that process information containing information constituting state secrets.

The following types of antivirus protection tools are also distinguished:

type “A” - anti-virus protection tools (components of anti-virus protection tools) intended for the centralized administration of anti-virus protection tools installed on information system components (servers, automated workstations);

type “B” - anti-virus protection tools (components of anti-virus protection tools) intended for use on information system servers;

type “V” - anti-virus protection tools (components of anti-virus protection tools) intended for use on automated workstations of information systems;

type “G” - anti-virus protection tools (components of anti-virus protection tools) intended for use on stand-alone (autonomous) automated workstations.

Anti-virus protection tools of type “A” are not used in information systems on their own and are intended for use only in conjunction with anti-virus protection tools of types “B” and (or) “V”.

The purpose of defense in depth is to filter malware at different levels of the protected system. Consider:

Connection level

At a minimum, an enterprise network consists of a connectivity layer and a core. At the connectivity layer, many organizations have firewalls, intrusion detection and prevention systems (IDS/IPS/IDP), and defenses against denial-of-service attacks. These solutions implement the first level of protection against the penetration of malware. Out of the box, firewalls and IDS/IPS/IDP tools have built-in inspection functionality at the application-protocol level. Moreover, a built-in antivirus that scans incoming and outgoing traffic has become the de facto standard for UTM solutions, and in-line antiviruses in firewalls are also becoming the norm; such options appear more and more often in new versions of well-known products. Many users, however, forget about the built-in functions of their network equipment, even though activating them usually requires no additional spending on expansion options.

Thus, optimal use of the built-in security functions of network equipment and activation of additional anti-virus control options on firewalls will create the first level of defense in depth.

Application protection level

The application protection level includes both gateway solutions for anti-virus scanning and security tools that are initially aimed at solving non-anti-virus problems. Similar solutions are presented on the market and certified according to the requirements of FSTEC of Russia. These products do not require significant implementation costs and are not tied to the types of content being checked, and therefore can be used in organizations of any size.

Solutions whose main function is not anti-virus scanning can also serve as a second level of malware filtering. Examples are the widespread gateway solutions for filtering spam and protecting web services: URL filtering, Web Application Firewalls, and load balancers. They can often perform anti-virus scanning of the processed content using engines from several vendors of malicious-content filtering. A typical case is anti-virus scanning implemented at the level of mail systems or spam-filtering gateways. When several antivirus products are applied sequentially, the efficiency of filtering viruses in incoming and outgoing correspondence can approach 100%.
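A rough worked example (with hypothetical detection rates) illustrates why sequential scanning approaches 100%: if each of two independent engines misses 5% of malicious samples, the combined miss rate is about 0.05 × 0.05 = 0.0025, i.e. roughly 99.75% of malware is caught, and a third comparable engine would push the figure to about 99.99%. In practice the engines' misses are correlated, so the real gain is smaller, but the principle holds.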

With this approach, serious malware-filtering performance can already be achieved at the first and second levels of defense in depth. In other words, if the anti-virus protection system is implemented adequately, the lion's share of malware is filtered out at the level of gateway solutions of one kind or another before it ever reaches the user.

Host security level

Host protection means implementing anti-virus scanning functions on servers and user workstations. Since employees use many desktop and mobile devices, all of them need to be protected. Moreover, a simple signature-based antivirus has long ceased to be considered a serious protection tool, which is why many organizations have switched to Host IPS technology, which adds further control and protection mechanisms through the functionality of a firewall and an IPS system (behavioral analysis).

While the protection of user workstations is already well regulated, implementing Host IPS on application servers (physical or virtual) is a task of its own: on the one hand, Host IPS technology should not significantly increase the server load; on the other, it must provide the required level of security. A reasonable balance can be found only by pilot-testing the solution on the specific set of applications and the hardware platform in question.

2.2.2 Logging and auditing

Logging refers to the collection and accumulation of information about events occurring in an information system: for example, who tried to log into the system and when, how the attempt ended, who used which information resources, which information resources were modified and by whom, and so on.

An audit is an analysis of accumulated information, carried out promptly, almost in real time, or periodically.

The implementation of logging and auditing has the following main goals:

Ensuring user and administrator accountability;

Ensuring the possibility of reconstructing the sequence of events;

Detection of attempted information security violations;

Providing information to identify and analyze problems.

Thus, logging and auditing belong to the registration and accounting subsystem. Using these operations, you can quickly and effectively find problems and vulnerabilities in an existing information security system. Depending on the security class of the automated system, this element can take different forms and solve different registration and accounting tasks, such as registration and accounting of:

Entry (exit) of access subjects into (from) the system (network node) - at all levels;

Launch (completion) of programs and processes (tasks, jobs) - for classes 2A, 1G, 1V, 1B, 1A;

Access of access subjects' programs to protected files, including their creation and deletion, and their transmission over lines and communication channels - for classes 2A, 1G, 1V, 1B, 1A;

Access of access subjects' programs to terminals, computers, computer network nodes, communication channels, external computer devices, programs, volumes, directories, files, records, and record fields - for classes 2A, 1G, 1V, 1B, 1A;

Changes in the privileges of access subjects - for classes 1V, 1B, 1A;

Creation of protected access objects - for classes 2A, 1V, 1B, 1A;

Signaling of attempted security violations - for classes 1V, 1B, 1A.

Thus, when building defense in depth, logging and auditing systems must be installed at the border of the protected system, on the server, on each workstation, as well as on all authentication devices (for example, when entering the protected territory).
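As a minimal sketch of what such registration might look like at any of these echelons, the fragment below writes structured audit records (who, when, what, outcome) with Python's standard logging module; the field names and the sample event are illustrative assumptions rather than a prescribed format.

```python
import json
import logging
from datetime import datetime, timezone

# One audit logger per protected node (border gateway, server, workstation, ...).
audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("audit.log"))


def record_event(subject: str, action: str, obj: str, success: bool) -> None:
    """Append one registration/accounting record: who did what to which object."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "subject": subject,      # user or process identifier
        "action": action,        # e.g. "login", "file_access", "privilege_change"
        "object": obj,           # protected resource involved
        "success": success,      # outcome, needed to detect violation attempts
    }
    audit_log.info(json.dumps(entry))


# Example: a failed login attempt that an auditor would later review.
record_event("ivanov", "login", "workstation-12", success=False)
```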

2.2.3 Physical protection

This includes measures to prevent theft of devices and storage media, as well as protection against negligence and natural disasters:

Checkpoint on the border of the protected area;

Installation of fences, prohibition signs, vehicle height restrictors, various barriers, etc. along the perimeter of the protected territory;

Positioning strategically important objects so that an attacker would have to cross a large open space to get to them;

Lighting of the protected area, namely gates, doors and other strategically important objects. It should be taken into account that dim light distributed throughout the entire territory is more effective than single bright spots of spotlights, since spotlights have blind spots. A backup power supply system should also be provided in case the main one is disconnected;

Security posts at the entrances to the premises;

Installation of an alarm system and sensors (motion, touch, glass break sensors). It should be noted that this system must work in tandem with a video surveillance system to eliminate false sensor alarms;

Installation of a video surveillance system;

Various access control tools, from locks to biometrics;

Means for controlling the opening of equipment (seals on cases, etc.);

Securing equipment at workplaces using specialized locks.

2.2.4 Authentication and password protection

Authentication is a procedure for verifying authenticity: for example, verifying a user by comparing the password they entered with the password stored in the user database; confirming the authenticity of an email by checking its digital signature against the sender's signature verification key; or checking a file's checksum against the value declared by the file's author. In Russian usage, the term is applied mainly in the field of information technology.

Given the degree of trust and security policy of the systems, the authentication performed can be one-way or mutual. It is usually carried out using cryptographic methods.

There are several documents establishing authentication standards. For Russia, the following is relevant: GOST R ISO/IEC 9594-8-98 - Fundamentals of authentication.

This standard:

Defines the format of authentication information stored by the directory;

Describes how to obtain authentication information from the directory;

Establishes prerequisites for methods of generating and placing authentication information in the directory;

Defines three ways in which application programs can use such authentication information to perform authentication, and describes how other security services can be provided using authentication.

This standard specifies two types of authentication: simple, using a password as a verification of a claimed identity, and strong, using credentials created using cryptographic methods.

In any authentication system, there are usually several elements:

The subject who will undergo the procedure;

The characteristic of the subject is a distinctive feature;

The owner of the authentication system, who is responsible and controls its operation;

The authentication mechanism itself, that is, the principle of operation of the system;

A mechanism that grants certain access rights or deprives the subject of them.

There are also 3 authentication factors:

Something we know: a password. This is secret information that only the authorized subject should possess. A password can be a spoken word, a text string, a combination for a lock, or a personal identification number (PIN). A password mechanism is fairly easy to implement and is cheap, but it has significant drawbacks: keeping a password secret is often difficult, and attackers constantly invent new ways to steal, crack, and guess passwords, which makes the password mechanism weakly protected.

Something we have: an authentication device. What matters here is that the subject possesses some unique object. It could be a personal seal, a key to a lock, or, for a computer, a data file containing the characteristic. The characteristic is often built into a special authentication device, for example a plastic card or smart card. It is harder for an attacker to obtain such a device than to crack a password, and the subject can report the device immediately if it is stolen. This makes the method more secure than the password mechanism, though the cost of such a system is higher.

Something that is part of us: biometrics. The characteristic here is a physical feature of the subject - a portrait, a fingerprint or palm print, a voice, or a feature of the eye. From the subject's point of view this method is the simplest: there is no password to remember and no device to carry. However, a biometric system must be sensitive enough to confirm an authorized user while rejecting an attacker with similar biometric parameters, and the cost of such a system is quite high. Despite these shortcomings, biometrics remains a rather promising factor.

Let's take a closer look at authentication methods.

Authentication using reusable passwords.

It consists of entering a user identifier, colloquially called a "login" (the user's registration name or account), and a password, i.e. some confidential information. The valid (reference) login-password pair is stored in a special database.

Basic authentication has the following general algorithm:

The subject requests access to the system and enters a personal ID and password.

The entered unique data is sent to the authentication server, where it is compared with the reference data.

If the data matches the reference data, authentication is considered successful; if it differs, the subject returns to step 1.
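A minimal sketch of step 3 of this algorithm, under the assumption (not stated above) that the reference database stores salted password hashes rather than plain-text passwords:

```python
import hashlib
import hmac
import os

# Hypothetical in-memory "reference database": login -> (salt, hash).
USER_DB: dict[str, tuple[bytes, bytes]] = {}


def register(login: str, password: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    USER_DB[login] = (salt, digest)


def authenticate(login: str, password: str) -> bool:
    """Step 3 of the algorithm: compare the entered data with the reference."""
    if login not in USER_DB:
        return False
    salt, stored = USER_DB[login]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, stored)


register("ivanov", "correct horse battery staple")
print(authenticate("ivanov", "wrong password"))                # False
print(authenticate("ivanov", "correct horse battery staple"))  # True
```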

The password entered by the subject can be transmitted on the network in two ways:

Unencrypted, in clear text, based on the Password Authentication Protocol (PAP)

Using SSL or TLS encryption. In this case, the unique data entered by the subject is transmitted securely over the network.

Authentication using one-time passwords.

Having once obtained the subject's reusable password, the attacker has constant access to the hacked confidential information. This problem is solved by using one-time passwords (OTP - One Time Password). The essence of this method is that the password is valid for only one login; each subsequent access request requires a new password. The authentication mechanism using one-time passwords can be implemented either in hardware or software.

Technologies for using one-time passwords can be divided into:

Using a pseudo-random number generator that is common to both the subject and the system;

Use of timestamps together with a synchronized time system;

Use of a database of random passwords shared by the subject and the system.
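As an illustration of the counter-based variant, here is a sketch in the spirit of HOTP (RFC 4226), where both sides derive the same short code from a shared secret and a counter; the secret value and the six-digit format are illustrative assumptions.

```python
import hashlib
import hmac
import struct

SECRET = b"shared-secret-between-subject-and-system"  # illustrative value


def one_time_password(counter: int, digits: int = 6) -> str:
    """Counter-based OTP in the spirit of HOTP (RFC 4226):
    subject and system derive the same short code from a shared secret and counter."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(SECRET, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)


# Subject and system keep their counters in step; each password is valid once.
for c in range(3):
    print(c, one_time_password(c))
```

A time-based scheme replaces the counter with the current 30-second time step, which is the idea behind TOTP.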

Multi-factor authentication.

Recently, so-called extended, or multi-factor, authentication has been increasingly used. It is built on the joint use of several authentication factors. This significantly increases the security of the system. An example is the use of SIM cards in mobile phones. The subject inserts their card (authentication device) into the phone and enters their PIN (password) when turned on. Also, for example, some modern laptops and smartphones have a fingerprint scanner. Thus, when logging into the system, the subject must go through this procedure (biometrics) and then enter a password. When choosing a particular factor or authentication method for a system, it is necessary, first of all, to take into account the required degree of security, the cost of building the system, and ensuring the mobility of the subject.

Biometric authentication.

Authentication methods based on measuring a person's biometric parameters provide almost 100% identification, solving the problems of losing passwords and personal identifiers.

The most used biometric attributes and corresponding systems are:

Fingerprints;

Hand geometry;

Iris;

Thermal image of the face;

Keystroke (typing) dynamics.

At the same time, biometric authentication has a number of disadvantages:

The biometric template is compared not with the result of the initial processing of the user's characteristics but with whatever arrives at the point of comparison, and a lot can happen along the way.

The template database can be modified by an attacker.

It is necessary to take into account the difference between the use of biometrics in a controlled area, under the watchful eye of security, and in “field” conditions, when, for example, a dummy can be brought to the scanning device, etc.

Some human biometric data changes over time (as a result of aging, injuries, burns, cuts, illness, amputation, etc.), so the template database needs constant maintenance, which creates certain problems for both users and administrators.

If your biometric data is stolen or compromised, it is usually for life. Passwords, despite their unreliability, can be changed as a last resort. You cannot change a finger, an eye or a voice, at least not quickly.

Biometric characteristics are unique identifiers, but cannot be kept secret.

The authentication procedure is used when exchanging information between computers, and very complex cryptographic protocols are used to protect the communication line from eavesdropping or substitution of one of the participants in the interaction. And since, as a rule, authentication is necessary for both objects establishing network interaction, authentication can be mutual.

Thus, several families of authentication can be distinguished:

User authentication on PC:

Encrypted name (login)

Password Authentication Protocol, PAP (login-password combination)

Access card (USB with certificate, SSO)

Network authentication:

Secure SNMP using digital signature

SAML (Security Assertion Markup Language)

Session cookies

Kerberos Tickets

X.509 Certificates

Operating systems of the Windows NT 4 family use the NTLM protocol (NT LAN Manager), while Windows 2000/2003 domains use the far more advanced Kerberos protocol.

From the point of view of building defense in depth, authentication is used at every level of protection. It must be required not only of personnel (when entering the protected facility or special premises, when receiving confidential information on any medium, when logging into a computer system, and when using software and hardware), but also of each individual workstation, program, storage medium, device connected to a workstation, server, and so on.

2.2.5 Firewalls

A firewall (FW) is a local (single-component) or functionally distributed software (or hardware-software) tool (complex) that controls information entering and/or leaving the firewall. The FW protects an automated system (AS) by filtering information, i.e. analyzing it against a set of criteria and deciding, on the basis of given rules, whether it may be passed into (out of) the AS, thereby delimiting the access of subjects of one AS to objects of another AS. Each rule prohibits or permits the transfer of information of a certain type between subjects and objects. As a result, subjects of one AS gain access only to permitted information objects of another AS. The set of rules is interpreted by a sequence of filters that allow or deny the transmission of data (packets) to the next filter or protocol layer. Five FW security classes are established.
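A minimal sketch of the rule-based filtering described above; the rule set and packet fields are invented for illustration and do not correspond to any particular firewall product.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network


@dataclass
class Packet:
    src: str
    dst: str
    dst_port: int
    proto: str  # "tcp" or "udp"


# Rules are checked in order; the first match decides. Final rule: deny all.
RULES = [
    ("allow", ip_network("0.0.0.0/0"),  ip_network("192.168.10.5/32"), 443, "tcp"),
    ("allow", ip_network("10.0.0.0/8"), ip_network("192.168.10.0/24"),  25, "tcp"),
    ("deny",  ip_network("0.0.0.0/0"),  ip_network("0.0.0.0/0"),      None, None),
]


def filter_packet(pkt: Packet) -> bool:
    """Return True if the packet may be passed on to the next filter/layer."""
    for action, src_net, dst_net, port, proto in RULES:
        if (ip_address(pkt.src) in src_net
                and ip_address(pkt.dst) in dst_net
                and (port is None or pkt.dst_port == port)
                and (proto is None or pkt.proto == proto)):
            return action == "allow"
    return False


print(filter_packet(Packet("203.0.113.7", "192.168.10.5", 443, "tcp")))  # True
print(filter_packet(Packet("203.0.113.7", "192.168.10.5",  22, "tcp")))  # False
```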

Each class is characterized by a certain minimum set of requirements for information protection.

The lowest security class is the fifth, used for the secure interaction of class 1D automated systems with the external environment; the fourth is used for class 1G, the third for class 1V, and the second for class 1B; the highest, the first, is used for the secure interaction of class 1A automated systems with the external environment.

The requirements for firewalls do not replace the requirements for computer facilities (SVT) and automated systems laid down in the FSTEC of Russia guidance documents “Computer facilities. Protection against unauthorized access to information. Indicators of security against unauthorized access to information” and “Automated systems. Protection against unauthorized access to information. Classification of automated systems and requirements for information protection.”

When a firewall is included in an AS of a certain security class, the security class of the combined AS, obtained from the original one by adding the firewall, should not be lowered.

For AS of classes 3B and 2B, firewalls of at least class 5 must be used.

For AS of classes 3A and 2A, depending on the importance of the information processed, firewalls of the following classes must be used:

When processing information classified as “secret” - no lower than class 3;

When processing information classified as “top secret” - no lower than class 2;

When processing information classified as “special importance” - no lower than class 1.

The firewall is a standard element of a system of protection against unauthorized access; as a means of controlling information flows, it belongs to the access control subsystem.

From the point of view of echeloning, firewalls can be located both on the perimeter of the protected system and on its individual elements, for example, software firewalls installed on workstations and servers.

2.2.6 Demilitarized zone

A DMZ (demilitarized zone) is a technique for protecting the information perimeter in which servers that respond to requests from the external network are placed in a special network segment (the DMZ) and their access to the main network segments is restricted by a firewall, in order to minimize the damage if one of the public services located in that zone is compromised.

Depending on security requirements, the DMZ can have one, two or three firewalls.

Configuration with one firewall.

In this scheme, the internal network, the DMZ, and the external network are connected to different ports of a router (acting as a firewall), which controls the connections between the networks. The scheme is easy to implement and requires only one additional port. However, if the router is hacked (or misconfigured), the network becomes vulnerable directly from the external network.

Configuration with two firewalls.

In a dual firewall configuration, the DMZ connects to two routers, one of which restricts connections from the external network to the DMZ, and the second controls connections from the DMZ to the internal network. This scheme allows you to minimize the consequences of hacking any of the firewalls or servers interacting with the external network - until the internal firewall is hacked, the attacker will not have arbitrary access to the internal network.

Configuration with three firewalls.

There is a rare configuration with three firewalls. In this configuration, the first of them takes over requests from the external network, the second controls the DMZ network connections, and the third controls the internal network connections. In such a configuration, usually the DMZ and the internal network are hidden behind NAT (Network Address Translation).
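The policy common to all three configurations can be summarized as a small matrix of permitted connection directions between zones. The sketch below is conceptual, with invented zone names; real deployments may additionally allow narrowly defined DMZ-to-internal flows.

```python
# Zones: "external" (Internet), "dmz" (public servers), "internal" (main network).
# Each entry lists the destination zones a source zone may initiate connections to.
ALLOWED_FLOWS = {
    "external": {"dmz"},             # outsiders reach only the public services
    "dmz":      set(),               # a compromised DMZ host cannot reach inward
    "internal": {"dmz", "external"}  # internal users may go out and use DMZ services
}


def connection_allowed(src_zone: str, dst_zone: str) -> bool:
    return dst_zone in ALLOWED_FLOWS.get(src_zone, set())


print(connection_allowed("external", "dmz"))       # True
print(connection_allowed("external", "internal"))  # False
print(connection_allowed("dmz", "internal"))       # False: limits damage after a hack
```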

One of the key features of the DMZ is not only traffic filtering on the internal firewall, but also the requirement of mandatory strong cryptography in the interaction between the active equipment of the internal network and the DMZ. In particular, there should be no situations in which it is possible to process a request from a server in the DMZ without authorization. If the DMZ is used to ensure the protection of information inside the perimeter from leakage from within, similar requirements are imposed for processing user requests from the internal network.

From the point of view of building defense in depth, the DMZ can be regarded as part of the firewall echelon; however, a DMZ is needed only when some data must be publicly accessible (for example, a website). Like the firewall, the DMZ is implemented at the edge of the protected network, and, if required, there can be several such zones, each with its own security policy. The first echelon here is the protection encountered when accessing the DMZ, and the second is the protection of the internal network. Thanks to this, gaining access even to the DMZ itself is difficult, and even under a fortunate combination of circumstances an attacker is almost unable to disrupt the internal network.

2.2.7 VPN

A VPN (Virtual Private Network) is a general name for technologies that allow one or more network connections (a logical network) to be provided on top of another network (for example, the Internet). Even though communications run over networks with a lower or unknown level of trust (for example, public networks), the level of trust in the constructed logical network does not depend on the level of trust in the underlying networks, thanks to the use of cryptographic tools (encryption, authentication, public key infrastructure, and means of protecting the messages transmitted over the logical network against replay and modification).
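A toy sketch of what a VPN adds on top of an untrusted network: authenticated encryption plus a sequence number for replay protection. It assumes the third-party cryptography package is available; real VPNs use protocols such as IPsec or TLS rather than this simplified scheme.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # shared between the two tunnel endpoints
tunnel = Fernet(key)


def send(seq: int, payload: bytes) -> bytes:
    """Encapsulate a logical-network packet for transport over the public network."""
    return tunnel.encrypt(seq.to_bytes(8, "big") + payload)


_last_seen = -1


def receive(blob: bytes) -> bytes | None:
    """Decrypt, verify integrity, and drop replayed or reordered packets."""
    global _last_seen
    plaintext = tunnel.decrypt(blob)            # raises if the data was tampered with
    seq, payload = int.from_bytes(plaintext[:8], "big"), plaintext[8:]
    if seq <= _last_seen:
        return None                             # replay protection
    _last_seen = seq
    return payload


packet = send(1, b"confidential traffic of the logical network")
print(receive(packet))   # payload delivered
print(receive(packet))   # None: the same packet replayed is rejected
```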

Depending on the protocols used and the purpose, a VPN can provide three types of connection: host-to-host, host-to-network, and network-to-network.

Typically, VPNs are deployed at levels no higher than the network level, since the use of cryptography at these levels allows transport protocols (such as TCP, UDP) to be used unchanged.

Microsoft Windows users often use the term VPN to denote one particular virtual network implementation, PPTP, which is frequently used for purposes other than creating private networks.

Most often, to create a virtual network, the PPP protocol is encapsulated in some other protocol - IP (this method is used by the PPTP implementation - Point-to-Point Tunneling Protocol) or Ethernet (PPPoE) (although they also have differences). VPN technology has recently been used not only to create private networks themselves, but also by some “last mile” providers in the post-Soviet space to provide Internet access.

With a proper level of implementation and the use of special software, a VPN can provide a high level of encryption of the transmitted information. When all components are configured correctly, VPN technology also provides anonymity on the Internet.

VPN solutions can be classified according to several main parameters:

According to the degree of security of the environment used:

Secure - the most common version of virtual private networks. With its help, it is possible to create a reliable and secure network based on an unreliable network, usually the Internet. Examples of secure VPNs are: IPSec, OpenVPN and PPTP.

Trusted - used in cases where the transmission medium can be considered reliable and it is only necessary to solve the problem of creating a virtual subnet within a larger network. Security issues become irrelevant. Examples of such VPN solutions are: Multi-protocol label switching (MPLS) and L2TP (Layer 2 Tunnelling Protocol).

By implementation method:

As special software and hardware: the VPN is implemented using a dedicated set of software and hardware. This implementation provides high performance and, as a rule, a high degree of security.

As a software solution: a personal computer with special software provides the VPN functionality.

Integrated solution - VPN functionality is provided by a complex that also solves the problems of filtering network traffic, organizing a firewall and ensuring quality of service.

By purpose:

Intranet VPN is used to unite several distributed branches of one organization exchanging data via open communication channels into a single secure network.

Remote Access VPN is used to create a secure channel between a corporate network segment (central office or branch) and a single user who, working at home, connects to corporate resources from a home computer, corporate laptop, smartphone or Internet kiosk.

Extranet VPN - used for networks to which “external” users (for example, customers or clients) connect. The level of trust in them is much lower than in company employees, so it is necessary to provide special “lines” of protection that prevent or limit the latter’s access to particularly valuable, confidential information.

Internet VPN - used to provide access to the Internet by providers, usually if several users connect via one physical channel. The PPPoE protocol has become the standard in ADSL connections.

L2TP was widespread in the mid-2000s in home networks: at that time, intranet traffic was not paid for, and external traffic was expensive. This made it possible to control costs: when the VPN connection is turned off, the user does not pay anything. Currently (2012), wired Internet is cheap or unlimited, and on the user’s side there is often a router on which turning the Internet on and off is not as convenient as on a computer. Therefore, L2TP access is becoming a thing of the past.

Client/Server VPN - it provides protection for transmitted data between two nodes (not networks) of a corporate network. The peculiarity of this option is that the VPN is built between nodes located, as a rule, in the same network segment, for example, between a workstation and a server. This need very often arises in cases where it is necessary to create several logical networks on one physical network. For example, when it is necessary to divide traffic between the financial department and the human resources department accessing servers located in the same physical segment. This option is similar to VLAN technology, but instead of separating traffic, it is encrypted.

By protocol type, there are implementations of virtual private networks for TCP/IP, IPX and AppleTalk. But today there is a tendency towards a general transition to the TCP/IP protocol, and the vast majority of VPN solutions support it. Addressing in it is most often selected in accordance with the RFC5735 standard, from the range of TCP/IP Private Networks.
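For reference, the private ranges usually meant here are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 (RFC 1918, also listed among the special-use addresses of RFC 5735); Python's standard library can check membership directly:

```python
import ipaddress

# Typical VPN addressing is drawn from the RFC 1918 private ranges.
for addr in ("10.8.0.1", "172.16.5.20", "192.168.1.10", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
# The first three print True; 8.8.8.8 is a public address and prints False.
```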

By network protocol level - based on comparison with the levels of the ISO/OSI reference network model.

VPN-based defense in depth must include two (or more) security perimeters on equipment from different manufacturers. In this case, the method of exploiting the “hole” in the line of defense of one supplier will not be applicable to the solution of another. This two-tier solution is a mandatory technology requirement for many corporate networks, and is particularly common in Swiss banking networks.

Figure 1. Scenarios of interaction between protected perimeters.

Figure 2. Symbols used in Figure 1.

Within the framework of a two-tier network security architecture based on Cisco and CSP VPN products, the following basic scenarios for the interaction of protected perimeters are implemented (Figure 1):

Internet access for corporate users.

Secure interaction between external perimeters.

Secure access for remote users to the external perimeter network.

Secure access for remote users to the internal perimeter network.

Secure interaction of internal perimeters.

Creation of internal secure circuits and protection of client-server applications.

Technical implementations of basic scenarios are varied and depend on:

what we protect (what are the protected objects, how are they authenticated),

where the security objects are located (network topology),

how we want to apply security measures (access control policy and IPsec tunnel structure).

2.2.8 Intrusion detection system

An intrusion detection system (IDS) is a software or hardware tool designed to detect unauthorized access to, or unauthorized control of, a computer system or network, primarily via the Internet. Intrusion detection systems provide an additional layer of protection for computer systems.

Intrusion detection systems are used to detect certain types of malicious activity that may compromise the security of a computer system. Such activity includes network attacks against vulnerable services, attacks aimed at privilege escalation, unauthorized access to important files, and malicious software (computer viruses, Trojans, and worms).

Typically, an IDS architecture includes:

A sensor subsystem designed to collect events related to the security of the protected system

An analysis subsystem designed to detect attacks and suspicious actions based on sensor data

Storage that provides accumulation of primary events and analysis results

A management console that allows you to configure the IDS, monitor the state of the protected system and IDS, and view incidents identified by the analysis subsystem
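A minimal sketch of this architecture: a sensor produces events, the analysis subsystem applies signature rules, matches are accumulated in storage, and the console reads them back. The rule contents and event fields are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Event:
    source: str        # e.g. a sensor on a network segment or host
    payload: bytes


@dataclass
class Incident:
    rule: str
    event: Event


# Analysis subsystem: deliberately simple signature rules (illustrative only).
RULES = {
    "suspicious-sql": b"' OR 1=1",
    "demo-exploit":   b"\x90\x90\x90\x90",
}

storage: list[Incident] = []          # accumulates analysis results


def analyze(event: Event) -> None:
    for name, pattern in RULES.items():
        if pattern in event.payload:
            storage.append(Incident(name, event))


def console_report() -> None:
    for inc in storage:
        print(f"ALERT {inc.rule} from {inc.event.source}")


# The sensor subsystem would feed events like this one:
analyze(Event("net-sensor-dmz", b"GET /login?user=admin' OR 1=1--"))
console_report()
```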

There are several ways to classify IDS depending on the type and location of sensors, as well as the methods used by the analysis subsystem to identify suspicious activity. In many simple IDSs, all components are implemented as a single module or device.

Using the classification of IDS by sensor location, a layered IDS can be built at the levels of the network (network IDS), the server (protocol IDS), and the host (host IDS). By the same classification, hybrid IDS can immediately be regarded as layered, since they satisfy the basic separation requirements.

In a network IDS, sensors are located at critical points of the network, often in the demilitarized zone or at the edge of the network. A sensor intercepts all network traffic and analyzes the contents of each packet for malicious components. Protocol IDSs are used to monitor traffic that violates the rules of particular protocols or the syntax of a language (for example, SQL). In a host-based IDS, the sensor is usually a software agent that monitors the activity of the host on which it is installed. There are also hybrid versions of the listed types of IDS.

A network-based IDS (NIDS) detects intrusions by inspecting network traffic and monitors multiple hosts. It gains access to the traffic by connecting to a hub, to a switch configured for port mirroring, or to a network TAP device. An example of a network-based IDS is Snort.

A Protocol-based IDS (PIDS) is a system (or agent) that monitors and analyzes communication protocols with associated systems or users. For a web server, such an IDS usually monitors HTTP and HTTPS protocols. When using HTTPS, the IDS must be located on such an interface to view HTTPS packets before they are encrypted and sent to the network.

Application Protocol-based IDS (APIDS) is a system (or agent) that monitors and analyzes data transmitted using application-specific protocols. For example, on a web server with an SQL database, the IDS will monitor the content of the SQL commands sent to the server.

A host-based IDS (HIDS) is a system (or agent) located on a host that detects intrusions by analyzing system calls, application logs, file modifications (executables, password files, system databases), host state, and other sources. An example is OSSEC.

A hybrid IDS combines two or more approaches to building an IDS. Data from agents on hosts is combined with network information to create the most complete picture of network security. An example of a hybrid IDS is Prelude.

Although both an IDS and a firewall are information flow control tools, a firewall differs in that it restricts certain types of traffic to a host or subnet to prevent intrusions and does not monitor intrusions that occur within the network. IDS, on the contrary, passes traffic, analyzing it and signaling when suspicious activity is detected. Detection of a security breach is usually carried out using heuristic rules and analysis of signatures of known computer attacks.

3. MAIN RESULTS OF THE WORK

In the course of the work, the basic principle of constructing a layered information security system was studied - “echelon”. This means that many independent components of the information security system should be installed not in one place, but “echeloned” - distributed across different levels (echelons) of the protected system. Thanks to this, a state of “overprotection” of the system is achieved, in which the weaknesses of one component are covered by other components.

The independent means themselves used in the formation of layered information security in “office” type computer systems were also studied:

Antivirus programs;

Logging and auditing;

Physical protection;

Authentication and password protection;

Firewalls;

Demilitarized Zone;

VPN;

Intrusion detection system.

For each component, the levels at which it should be installed were considered, and the applicable requirements from FSTEC of Russia documents were presented.

LIST OF INFORMATION SOURCES USED

Guiding document of the FSTEC of Russia “Computer facilities. Protection against unauthorized access to information. Indicators of security against unauthorized access to information.” dated March 30, 1992;

Guiding document of the FSTEC of Russia “Automated systems. Protection against unauthorized access to information. Classification of automated systems and requirements for information protection” dated March 30, 1992;

Guiding document “Computer facilities. Firewalls. Protection against unauthorized access to information. Indicators of security against unauthorized access to information” dated July 25, 1997.



Defense in depth

Defense in Depth is an information insurance concept in which several different layers of protection systems are installed throughout a computer system. Its purpose is to provide redundant security to a computer system in the event of a malfunction of the security control system or when an attacker exploits a certain vulnerability.

The idea of ​​defense in depth is to protect the system from any attack, using, usually sequentially, a number of independent methods.

Initially, defense in depth was a purely military strategy, which made it possible not to anticipate and prevent, but to postpone an enemy attack, and to buy a little time in order to correctly position various protective measures. For a more complete understanding, we can give an example: barbed wire effectively restrains infantry, but tanks easily drive over it. However, a tank cannot drive through anti-tank hedges, unlike infantry, which simply bypasses them. But if they are used together, then neither tanks nor infantry will be able to get through quickly, and the defending side will have time to prepare.

The placement of security mechanisms, procedures, and policies is intended to enhance the security of a computer system, where multiple layers of protection can prevent espionage and direct attacks on critical systems. From a computer networking perspective, defense in depth is intended not only to prevent unauthorized access, but also to provide time in which an attack can be detected and responded to, thereby reducing the consequences of a breach.

An “office” type computer system can process information with different levels of access - from free to information constituting a state secret. That is why, in order to prevent unauthorized access and various types of attacks, such a system requires an effective information security system.

Next, we will consider the main layers (echelons) of protection used in layered defense systems. Note that a defense system is considered layered if it includes two or more of the following subsystems.

Components of a layered information security system

Antivirus programs

An antivirus program (antivirus) is a specialized program for detecting computer viruses and unwanted (malicious) programs in general, for restoring files infected (modified) by such programs, and for prevention, i.e. preventing the infection (modification) of files or the operating system by malicious code.

Antivirus software belongs to the class of software tools used to protect, by non-cryptographic methods, information containing data constituting a state secret and other restricted-access information.
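To make the difference between the signature-based ("classic") and proactive (heuristic or behavioral) detection technologies used in the classification below more concrete, here is a minimal sketch; the signatures, API names, threshold and verdicts are invented for illustration and do not correspond to any real product.

```python
# Toy illustration of "classic" (signature-based) vs "proactive" (heuristic)
# detection. All signatures, API names and verdicts are invented examples.
from typing import Optional, Set

SIGNATURES = {
    b"EVIL_BYTE_PATTERN_1": "Example.Trojan.A",  # hypothetical signature
    b"EVIL_BYTE_PATTERN_2": "Example.Worm.B",
}

SUSPICIOUS_APIS: Set[str] = {"CreateRemoteThread", "WriteProcessMemory", "SetWindowsHookEx"}

def signature_scan(data: bytes) -> Optional[str]:
    """Classic detection: search the file for known byte patterns."""
    for pattern, name in SIGNATURES.items():
        if pattern in data:
            return name
    return None

def heuristic_scan(imported_apis: Set[str]) -> bool:
    """Proactive detection: flag files that import several 'dangerous' APIs."""
    return len(imported_apis & SUSPICIOUS_APIS) >= 2

def combined_scan(data: bytes, imported_apis: Set[str]) -> str:
    """A 'combined' product in these terms simply runs both checks."""
    verdict = signature_scan(data)
    if verdict:
        return "infected: " + verdict
    if heuristic_scan(imported_apis):
        return "suspicious (heuristic detection)"
    return "clean"

print(combined_scan(b"...EVIL_BYTE_PATTERN_2...", set()))                        # infected
print(combined_scan(b"harmless", {"CreateRemoteThread", "WriteProcessMemory"}))  # suspicious
```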

Antivirus products can be classified according to several criteria:

According to the anti-virus protection technologies used:

Classic antivirus products (products that use only signature detection methods);

Proactive antivirus protection products (products that use only proactive antivirus protection technologies);

Combined products (products that use both classic, signature-based protection methods and proactive ones).

By product functionality:

Antivirus products (products that provide only antivirus protection)

Combination products (products that provide not only anti-malware protection, but also spam filtering, encryption and data backup, and other functions);

By target platforms:

Antivirus products for Windows operating systems;

Anti-virus products for *NIX OS family (this family includes BSD, Linux OS, etc.);

Antivirus products for the MacOS family of operating systems;

Anti-virus products for mobile platforms (Windows Mobile, Symbian, iOS, BlackBerry, Android, Windows Phone 7, etc.).

Antivirus products for corporate users can also be classified by protection objects:

Antivirus products to protect workstations;

Anti-virus products to protect file and terminal servers;

Anti-virus products to protect email and Internet gateways;

Antivirus products to protect virtualization servers.

Requirements for antivirus protection tools fall into two groups: general requirements and requirements for the security functions of such tools.

To differentiate the requirements for the security functions of anti-virus protection tools, six protection classes of anti-virus protection tools have been established. The lowest class is sixth, the highest is first.

Anti-virus protection tools corresponding to protection class 6 are used in personal data information systems of classes 3 and 4.

Anti-virus protection tools corresponding to class 5 protection are used in personal data information systems of class 2.

Anti-virus protection tools corresponding to class 4 of protection are used in government information systems that process restricted information that does not contain information constituting state secrets, in personal data information systems of class 1, as well as in public information systems of class II.

Anti-virus protection tools corresponding to protection classes 3, 2 and 1 are used in information systems that process information containing information constituting state secrets.

The following types of antivirus protection tools are also distinguished (the type letters below transliterate the Russian designations "А", "Б", "В", "Г"):

type "A" - antivirus protection tools (components of antivirus protection tools) intended for centralized administration of the antivirus protection tools installed on information system components (servers, automated workstations);

type "B" - antivirus protection tools (components of antivirus protection tools) intended for use on information system servers;

type "V" - antivirus protection tools (components of antivirus protection tools) intended for use on automated workstations of information systems;

type "G" - antivirus protection tools (components of antivirus protection tools) intended for use on stand-alone automated workstations.

Antivirus protection tools of type "A" are not used in information systems on their own and are intended for use only in conjunction with antivirus protection tools of types "B" and (or) "V".

The purpose of defense in depth is to filter malware at different levels of the protected system. Let us consider these levels in turn.

Connection level

At a minimum, an enterprise network consists of a connectivity layer and a core. At the connectivity layer, many organizations deploy firewalls, intrusion detection and prevention systems (IDS/IPS/IDP), and protection against denial-of-service attacks. The first level of protection against malware penetration is implemented on the basis of these solutions. Firewalls and IDS/IPS/IDP tools have built-in protocol inspection functionality “out of the box”. Moreover, a built-in antivirus that scans incoming and outgoing traffic has become the de facto standard for UTM solutions, and in-line antiviruses in firewalls are also becoming the norm; such options appear more and more often in new versions of well-known products. However, many users forget about the built-in functions of network equipment, even though enabling them, as a rule, does not require additional spending on expansion options.

Thus, making optimal use of the built-in security functions of network equipment and enabling additional antivirus control options on firewalls creates the first level of defense in depth.
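As an illustration of the kind of first-echelon filtering this equipment performs, the sketch below applies a first-match-wins rule list to a connection attempt, in the spirit of a packet filter; the rules, networks and ports are invented for the example.

```python
# Minimal sketch of connection-level filtering (firewall-style rule matching).
# Rules, networks and ports are invented for illustration only.
from ipaddress import ip_address, ip_network

RULES = [
    # (action, permitted source network, destination port or None for "any")
    ("allow", ip_network("10.0.0.0/8"), 443),   # internal clients to HTTPS
    ("allow", ip_network("10.0.0.0/8"), 25),    # internal clients to the mail gateway
    ("deny",  ip_network("0.0.0.0/0"), None),   # default deny for everything else
]

def check_connection(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule (first-match-wins)."""
    src = ip_address(src_ip)
    for action, net, port in RULES:
        if src in net and (port is None or port == dst_port):
            return action
    return "deny"  # implicit default deny

print(check_connection("10.1.2.3", 443))      # allow
print(check_connection("198.51.100.7", 22))   # deny
```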

Application protection level

The application protection level includes both gateway solutions for antivirus scanning and security tools that are primarily aimed at solving non-antivirus tasks. Such solutions are available on the market and are certified to the requirements of FSTEC of Russia. These products do not require significant implementation costs and are not tied to the types of content being checked, and therefore can be used in organizations of any size.

Solutions whose main function is not antivirus scanning can also act as a second level of malware filtering. Examples are the widespread gateway solutions for spam filtering and for protecting web services: URL filtering, Web Application Firewall, and load-balancing tools. They often can perform antivirus scanning of the processed content using engines from several malicious-content-filtering vendors; a typical case is antivirus scanning implemented at the level of mail systems or spam-filtering gateways. When several antivirus products are used sequentially, the efficiency of filtering viruses in incoming and outgoing correspondence can approach 100%.

With this approach, substantial malware-filtering effectiveness can already be achieved at the first and second levels of defense in depth. In other words, if an adequate antivirus protection system is implemented on the path to the user, the lion's share of malware will be filtered out at the level of gateway solutions of one kind or another.
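The sequential (multi-vendor) scanning described above can be sketched as a simple chain of engines; the "engines" and detection names here are placeholders, whereas a real gateway would call vendor SDKs or ICAP services.

```python
# Sketch of sequential (multi-vendor) antivirus scanning at a mail gateway.
# The "engines" and detection names are stand-ins for real scanning services.
from typing import Callable, Iterable, Optional

Engine = Callable[[bytes], Optional[str]]  # returns a detection name or None

def engine_a(data: bytes) -> Optional[str]:
    return "Worm.Hypothetical.A" if b"MARKER_A" in data else None

def engine_b(data: bytes) -> Optional[str]:
    return "Trojan.Hypothetical.B" if b"MARKER_B" in data else None

def scan_attachment(data: bytes, engines: Iterable[Engine]) -> Optional[str]:
    """Run the engines in sequence; the first detection quarantines the message."""
    for engine in engines:
        verdict = engine(data)
        if verdict:
            return verdict
    return None

verdict = scan_attachment(b"...MARKER_B...", [engine_a, engine_b])
print("quarantine:" if verdict else "deliver", verdict)
```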

Host security level

Host protection means implementing antivirus scanning functions on servers and user workstations. Since employees use many desktop and mobile devices in their daily work, all of them need to be protected. Moreover, a simple signature-based antivirus has long ceased to be considered a serious protection tool, which is why many organizations have switched to Host IPS technology, which adds further control and protection mechanisms during scanning through firewall and IPS functionality (behavioral analysis).

If protecting user workstations is by now a well-regulated task, implementing Host IPS on application servers (physical or virtual) is a more specific one. On the one hand, Host IPS technology must not significantly increase the server load; on the other hand, it must provide the required level of security. A reasonable balance can only be found through pilot testing of the solution on a specific set of applications and a specific hardware platform.
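A minimal sketch of the behavioral (Host IPS-style) control mentioned above: a rule that flags a process modifying an unusually large number of files within a short time window, which is one simple ransomware-like indicator. The window, threshold and event format are invented for the example.

```python
# Sketch of a behavioral (Host IPS-style) rule: flag a process that modifies an
# unusually large number of files within a short time window.
# Window, threshold and event format are invented for illustration.
from collections import defaultdict, deque
from typing import Deque, Dict, Optional
import time

WINDOW_SECONDS = 10
MAX_WRITES_PER_WINDOW = 100

_write_events: Dict[int, Deque[float]] = defaultdict(deque)  # pid -> write timestamps

def on_file_write(pid: int, path: str, now: Optional[float] = None) -> bool:
    """Record a file-write event for a process; return True if it looks suspicious."""
    now = time.time() if now is None else now
    events = _write_events[pid]
    events.append(now)
    # Drop events that fall outside the sliding window.
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    return len(events) > MAX_WRITES_PER_WINDOW
```

A real Host IPS would receive such events from a kernel driver or audit subsystem and combine many rules; the sketch only shows the windowed-counting idea and how little overhead a single rule of this kind adds to the host.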

There is no guarantee that the latest US air attack weapons, on whose development millions of dollars are being spent, will be able to overcome Russian air defense and missile defense systems, writes the American magazine The National Interest. According to the publication's conclusions, the American army has no experience of confrontation with a high-tech enemy, so the outcome of a potential military conflict with the Russian Federation cannot be predicted. In the event of an operation, modern stealth aircraft and cruise missiles such as the Tomahawk would be at risk of interception, the author of the article emphasizes. More on the capabilities of American weapons and the potential of Russian air defense in this RT material.

American aircraft B-2 Spirit

The Pentagon's investments in the creation of aircraft with extensive use of stealth technologies will not give a guaranteed result against the Russian system of “limiting and denying access and maneuver” (A2/AD - anti-access and area denial). The American publication The National Interest writes about this.

A2/AD is a term common in the West which implies that a state has long-range strike systems capable of intercepting air attack weapons at distances of tens and hundreds of kilometers from its borders and of launching preventive strikes on enemy ground and sea targets.

The overseas publication notes that Russia has an “air minefield” that “NATO will have to somehow neutralize or go around in the event of a conflict.” Moscow’s main advantage is its layered air defense system, the main “advantages of which are range, accuracy and mobility.”

As The National Interest writes, in case of an invasion of Russian airspace, not only the latest American aircraft will be vulnerable, but also sea-based cruise missiles (we are talking about Tomahawks). For this reason, “the best way to counter air defense systems is to avoid them,” the magazine concludes.

Control of the sky

There is a widespread view in the Western press about the vulnerability of NATO aircraft and missiles to the air defense and missile defense systems in service with Russian troops. According to military expert Yuri Knutov, it is based on the habit of the United States of beginning military operations only after achieving complete air superiority.

“The Americans never invade a country without first destroying command posts and air defense systems. In the case of Russia, this is an absolutely impossible situation. That's why they are so annoyed by the current state of affairs. At the same time, the process of preparing for a possible war with us in the United States has never ended and the Americans continue to improve aviation and weapons,” Knutov stated in a conversation with RT.

According to the expert, the United States has traditionally been ahead of our country in the development of aviation technology. However, for half a century, domestic scientists have been creating highly effective weapons that are capable of intercepting the latest NATO aircraft and missiles and causing serious interference with their electronic equipment.

Enormous attention was paid to the creation of a layered air defense/missile defense system in the Soviet Union. Until the early 1960s, American reconnaissance aircraft flew almost unhindered over the USSR. However, with the advent of the first anti-aircraft missile systems (SAM) and the destruction of the U-2 near Sverdlovsk (May 1, 1960), the intensity of American Air Force flights over the territory of our country decreased noticeably.

Enormous amounts of money have been invested in the formation and development of air defense, as well as missile attack warning systems (MAWS). As a result, the USSR managed to ensure reliable protection of the most important administrative centers, key military infrastructure, command posts and industrial zones.

A variety of radar stations (airspace monitoring, target detection, reconnaissance), automated control systems (processing of radar information and its transmission to the command), jamming systems and fire destruction systems (anti-aircraft missile systems, fighters, electronic warfare systems) were adopted.

At the end of the 1980s, the regular strength of the USSR air defense troops exceeded 500 thousand people. The Soviet Union was defended by the Moscow Air Defense District, the 3rd Separate Missile Warning Army, the 9th Separate Air Defense Corps, the 18th Separate Space Control Corps, as well as eight air defense armies with headquarters in Minsk, Kyiv, Sverdlovsk, Leningrad, Arkhangelsk, Tashkent, Novosibirsk, Khabarovsk and Tbilisi.

In total, over 1,260 air defense missile divisions, 211 anti-aircraft missile regiments, 28 radio engineering regiments, 36 radio engineering brigades, 70 air defense fighter regiments, numbering over 2.5 thousand combat aircraft, were on combat duty.

After the collapse of the USSR, owing to changes in the geopolitical situation and a change in military doctrine, the number of air defense troops was reduced. Now the Aerospace Forces include units of the Space Forces (responsible for the missile attack warning system), the 1st Air and Missile Defense Army (which protects the Moscow region) and five air force and air defense armies covering the south of the Russian Federation, the western regions of Central Russia, the Far East, Siberia, the Volga region, the Urals and the Arctic.

According to the Ministry of Defense of the Russian Federation, in recent years Russia has restored a continuous radar field “in the main missile-hazardous directions” and strengthened its air defense thanks to deliveries to the troops of the latest S-400 “Triumph” and “Pantsir-S” systems, as well as modernized versions of the “Tor” and “Buk”.

In the coming years, the military plans to complete the modernization of the A-135 Amur missile defense system and launch serial production of the S-500 complex, capable of intercepting almost all known targets, including orbital aircraft, satellites, intercontinental ballistic missiles and their warheads.

“Does not bring breakthrough results”

In a conversation with RT, Vadim Kozyulin, a professor at the Academy of Military Sciences, noted that in the United States there is an ongoing debate about the wisdom of relying on the low radar signature of aircraft and missiles. According to him, there are growing concerns in the United States that modern radars (primarily Russian ones) can easily detect so-called “invisible” (stealth) aircraft in the air.

“This raises the question of whether it makes sense to work so hard in this area if it does not bring breakthrough results. The Americans were pioneers in the development of stealth technology. Hundreds of billions of dollars were spent on “invisible” projects, but not even all production samples met expectations,” Kozyulin said.

Low radar signature is achieved by reducing the radar cross-section (RCS; in Russian sources EPR, the effective scattering area). This figure depends on the use of flat geometric shapes in the aircraft's design and of special radar-absorbing materials. An aircraft with an RCS of less than 0.4 sq. m is usually called “invisible”.
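The dependence this paragraph relies on can be made explicit with the standard radar range equation (a textbook relation, not quoted in the article), from which detection range grows only as the fourth root of the RCS σ:

```latex
% Textbook form of the radar range equation: detection range vs. RCS (\sigma).
% P_t - transmitted power, G - antenna gain, \lambda - wavelength,
% P_{\min} - minimum detectable echo power.
R_{\max} \;=\; \left( \frac{P_t \, G^2 \, \lambda^2 \, \sigma}{(4\pi)^3 \, P_{\min}} \right)^{1/4},
\qquad\text{hence}\qquad R_{\max} \propto \sigma^{1/4}.
% Example: reducing \sigma by a factor of 10^4 shortens the detection range
% only by a factor of 10, all other parameters being equal.
```

This fourth-root dependence is why even drastic reductions in RCS shrink the detection range far less dramatically than the raw RCS figures might suggest.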

The first US production stealth aircraft was the Lockheed F-117 Nighthawk tactical bomber, which first took to the skies in 1981. It took part in operations against Panama, Iraq and Yugoslavia. Despite an RCS remarkable for its time (from 0.025 to 0.1 sq. m), the F-117 had many significant shortcomings.

In addition to its extremely high price and complexity of operation, the Nighthawk was hopelessly inferior to earlier US Air Force aircraft in terms of combat load (just over two tons) and range (about 900 km). In addition, the stealth effect was achieved only in radio silence mode (with communications and the friend-or-foe identification system switched off).

On March 27, 1999, the American high-tech aircraft was shot down by a Soviet S-125 surface-to-air missile system of the Yugoslav air defense forces, a system already considered obsolete at the time. This was the F-117's only combat loss. Since then, military personnel and experts have continued to debate how such an incident became possible. In 2008, the Nighthawk was retired from the US Air Force.

The most modern examples of American aviation are also “invisible” aircraft. The RCS of the first fifth-generation fighter, the F-22, is 0.005-0.3 sq. m; of the newest F-35 fighter, 0.001-0.1 sq. m; and of the B-2 Spirit long-range bomber, 0.0014-0.1 sq. m. At the same time, the S-300 and S-400 air defense systems are reported to be capable of registering air targets with an RCS of around 0.01 sq. m (no exact data are available).

Kozyulin noted that Western and domestic media often try to find out whether Russian anti-aircraft systems can intercept American aircraft. According to him, anti-aircraft combat is simultaneously influenced by many factors; it is impossible to predict its outcome in advance.

“The RCS changes depending on the altitude and flight range of the aircraft. At one point the aircraft may be clearly visible, at another not. However, the great popularity of Russian air defense systems on the world market and the Americans’ concerns about the capabilities of the S-400 indicate that Russia’s air defense is coping with its assigned tasks, that is, protection against any means of air attack,” Kozyulin concluded.

Annotation: The lecture describes the methodological aspects of protecting information systems.

Requirements for IS architecture to ensure the safety of its operation

The ideology of open systems has significantly affected the methodological aspects and direction of development of complex distributed information systems. It is based on strict adherence to a set of profiles, protocols and standards de facto and de jure. Software and hardware components according to this ideology must meet the most important requirements of portability and the ability to coordinate and collaborate with other remote components. This allows for compatibility of components of various information systems, as well as data transmission media. The task comes down to the maximum possible reuse of developed and tested software and information components when changing computing hardware platforms, operating systems and interaction processes.

When creating complex distributed information systems, designing their architecture and infrastructure and selecting components and the connections between them, one should take into account, in addition to the general requirements (openness, scalability, portability, mobility, investment protection, etc.), a number of specific conceptual requirements aimed at ensuring the security of operation of the system itself and of its data:

  • the system architecture must be sufficiently flexible, i.e. it should allow, without fundamental structural changes, relatively simple development of the infrastructure, changes in the configuration of the tools used, and growth of the functions and resources of the information system as the scope and tasks of its application expand;
  • the security of the system's operation against various types of threats and reliable protection of data from design errors, destruction or loss of information, as well as user authorization, workload management, data and computing resource backup, and restoration of the functioning of the information system as quickly as possible must be ensured;
  • it is necessary to ensure comfortable, maximally simplified user access to services and the results of IS operation based on modern graphic tools, mnemonic diagrams and visual user interfaces;
  • the system must be accompanied by updated, complete documentation that ensures qualified operation and the possibility of developing the IS.

We emphasize that technical security systems, no matter how powerful they are, cannot by themselves guarantee the reliability of the software and hardware level of protection. Only an IS architecture focused on security can make the integration of services effective, ensure the controllability of the information system, its ability to develop and withstand new threats while maintaining such properties as high performance, simplicity and ease of use. In order to fulfill these requirements, the IS architecture must be built on the following principles.

Designing the IS on the principles of open systems, adhering to recognized standards, using proven solutions, and organizing the IS hierarchically with a small number of entities at each level all contribute to the transparency and good manageability of the IS.

Continuity of protection in space and time and the impossibility of bypassing the protective measures; exclusion of any spontaneous or induced transition into an unsafe state: under any circumstances, including abnormal ones, a protective means either performs its functions in full or completely blocks access to the system or to a part of it.

Strengthening the weakest link, minimizing access privileges, and separating service functions and personnel responsibilities. It is assumed that roles and responsibilities are distributed in such a way that no single person can disrupt a process critical to the organization or create a security gap, whether through ignorance or on the orders of attackers.

As applied to the software and hardware level, the principle of privilege minimization prescribes that users and administrators be granted only those access rights that they need to perform their official duties. This reduces the damage caused by accidental or deliberate incorrect actions of users and administrators.
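A minimal sketch of the privilege-minimization principle as a role-based check; the roles and rights are invented for the example.

```python
# Minimal sketch of the least-privilege principle: each role is granted only
# the rights needed for its duties. Roles and rights are invented examples.
ROLE_PERMISSIONS = {
    "operator":      {"read_reports"},
    "accountant":    {"read_reports", "edit_invoices"},
    "administrator": {"read_reports", "manage_users"},  # manages the system, not the business data
}

def is_allowed(role: str, permission: str) -> bool:
    """Check whether the given role holds the given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("accountant", "edit_invoices")
assert not is_allowed("operator", "edit_invoices")        # not needed for the operator's duties
assert not is_allowed("administrator", "edit_invoices")   # admins get no business-data rights
```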

Layering of defense, variety of protective means, and simplicity and controllability of the information system and its security subsystem. The principle of defense layering prescribes not relying on a single defensive line, however reliable it may seem: physical protection must be backed by software and hardware protection, identification and authentication by access control, and these in turn by logging and auditing.

Defense in depth is capable not only of stopping an attacker but also, in some cases, of detecting him thanks to logging and auditing. A variety of protective means implies creating defensive lines of different natures, so that a potential attacker would have to master diverse and, where possible, mutually incompatible skills.
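The idea of successive, mutually independent lines combined with logging can be sketched as follows; the checks themselves are placeholders, and only the layering and the audit trail matter here.

```python
# Sketch of layered (echeloned) checks: each request must pass every layer,
# and every attempt is logged, so a blocked attacker still leaves a trace.
# The authentication and authorization checks are placeholders.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("defense_in_depth")

def authenticate(user: str, password: str) -> bool:
    return (user, password) == ("alice", "correct-horse")       # placeholder check

def authorize(user: str, resource: str) -> bool:
    return resource in {"alice": {"/reports"}}.get(user, set()) # placeholder ACL

def handle_request(user: str, password: str, resource: str) -> bool:
    layers = [("authentication", lambda: authenticate(user, password)),
              ("access control", lambda: authorize(user, resource))]
    for name, check in layers:
        if not check():
            log.warning("blocked at %s layer: user=%s resource=%s", name, user, resource)
            return False
    log.info("granted: user=%s resource=%s", user, resource)
    return True

handle_request("alice", "wrong-password", "/reports")   # stopped at authentication, logged
handle_request("alice", "correct-horse", "/payroll")    # stopped at access control, logged
```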

Simplicity and controllability of the IS in general and of its protective means in particular. Only in a simple and controllable system can the consistency of the configuration of its various components be verified and centralized administration carried out. In this regard, it is worth noting the integrating role of the Web service, which hides the variety of objects being served and provides a single, visual interface. Accordingly, if objects of some kind (for example, database tables) are accessible via the Internet, direct access to them must be blocked, since otherwise the system becomes vulnerable, complex and poorly manageable.

Thoughtful and orderly structure of software and databases. The topology of internal and external networks directly affects the achieved quality and security of information systems, as well as the complexity of their development. With strict adherence to the rules of structural design, it is significantly easier to achieve high quality and safety indicators, since the number of possible errors in implementing programs, failures and equipment malfunctions is reduced, and their diagnosis and localization is simplified.

In a well-structured system with clearly defined components (client, application server, resource server), control points are identified quite clearly, which solves the problem of proving the sufficiency of the security measures used and ensuring that it is impossible for a potential violator to bypass these means.

The high requirements for the formation of architecture and infrastructure at the IS design stage are determined by the fact that it is at this stage that the number of vulnerabilities associated with unintentional destabilizing factors that affect the security of software, databases and communication systems can be significantly minimized.

The analysis of IS security in the absence of malicious factors is based on the model of interaction of the main components of the IS (Fig. 6.1) [Lipaev V.V., 1997]. The following are considered vulnerable objects:

  • dynamic computational process of data processing, automated preparation of decisions and development of control actions;
  • object code of programs executed by computers during the operation of the IS;
  • data and information accumulated in databases;
  • information provided to consumers and actuators.


Fig. 6.1.

Complete elimination of these threats is fundamentally impossible. The challenge is to identify the factors on which they depend, to create methods and means of reducing their impact on IS security, and to allocate resources effectively so as to ensure protection that is equally strong against all negative impacts.

Standardization of approaches to information security

It is almost impossible for information security specialists today to do without knowledge of the relevant security profiles, standards and specifications. The formal reason is that the need to follow certain standards (for example, the cryptographic standards and the “Guiding Documents” of the State Technical Commission of the Russian Federation) is enshrined in law. There are also compelling practical reasons: standards and specifications are a form of accumulating and applying knowledge, primarily about the procedural and software-technical levels of information security and of information systems; they record proven, high-quality solutions and methodologies developed by the most qualified companies in the field of software development and security tools.

At the top level, two significantly different groups of standards and specifications can be distinguished:

1. evaluation standards, intended for evaluating and classifying information systems and security tools according to security requirements;

2. specifications regulating various aspects of the implementation and use of protection means and methods.

These groups complement each other. Evaluation standards describe the concepts and aspects of information systems that are most important from an information security standpoint, playing the role of organizational and architectural specifications. Specialized standards and specifications define exactly how to build an IS of the prescribed architecture and fulfill the organizational and technical requirements for information security (Fig. 6.2, Fig. 6.3).


Fig. 6.2.


Fig. 6.3.

Among the evaluation standards, one should single out the “Trusted Computer System Evaluation Criteria” of the US Department of Defense and its interpretation for network configurations, the “Harmonized Criteria of European Countries”, the international standard “Evaluation Criteria for Information Technology Security” and, of course, the “Guiding Documents” of the State Technical Commission of the Russian Federation. This group also includes the US federal standard “Security Requirements for Cryptographic Modules”, which regulates a specific but very important and complex aspect of information security.

Technical specifications applicable to modern distributed information systems are created primarily by the Internet Engineering Task Force (IETF) and its security working group. The core of these technical specifications are the IP Security (IPsec) documents. In addition, security is addressed at the transport layer (Transport Layer Security, TLS) and at the application level (the GSS-API and Kerberos specifications).
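As a small illustration of the transport-layer protection (TLS) mentioned above, the sketch below opens a certificate-verified TLS connection using Python's standard ssl module; the host name is only an example.

```python
# Minimal sketch: a TLS-protected connection with certificate verification,
# illustrating transport-layer security (TLS). The host name is an example.
import socket
import ssl

context = ssl.create_default_context()   # verifies the server certificate chain and host name
hostname = "example.org"

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("negotiated protocol:", tls.version())
        print("server certificate subject:", tls.getpeercert()["subject"])
```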

The Internet community pays due attention to the administrative and procedural levels of security, creating a series of guides and recommendations: “Guide to Enterprise Information Security”, “How to Choose an Internet Service Provider”, “How to Respond to Information Security Violations”, etc.

In matters of network security, the specifications X.800 “Security Architecture for Open Systems Interconnection”, X.500 “The Directory: Overview of Concepts, Models and Services” and X.509 “The Directory: Public-Key and Attribute Certificate Frameworks” are in demand.

Over the past 15 years, the International Organization for Standardization (ISO) has approved a large series of standards for ensuring the security of information systems and their components. The vast majority of these standards relate to telecommunications, to processes and protocols for exchanging information in distributed systems, and to protecting information systems from unauthorized access. Accordingly, when preparing a protection and safety system, the standards most suitable for the entire life cycle of the specific software system project should be selected.

The next chapter, “Technology and Standardization of Open Computing and Information Systems,” will detail the structure and activities of ISO and its technical committees, in particular the Joint Technical Committee 1 (JTC1), designed to form a comprehensive system of core standards in the field of IT and their extensions for specific areas of activity. Depending on the problems, methods and means of protecting computer and information systems, ISO international standards can be divided into several groups [V. Lipaev., http://www.pcweek.ru/themes/detail.php?ID=55087].

The first group of standards - ISO/IEC JTC1/SC22, “Retrieval, transmission and management of information for open systems interconnection (OSI)” - was created and is being developed under the leadership of the SC22 subcommittee. The standards of this group are devoted to developing and detailing the OSI concept. Information protection is considered in this group as one of the components that make full implementation of that concept possible. To this end, security services and mechanisms have been defined at the levels of the basic OSI reference model, and standards have been published and are being developed that consistently detail both the methodological foundations of information security and specific security protocols at the various levels of open systems.

The second group of standards - ISO/IEC JTC1/SC27 - is being developed under the guidance of the SC27 subcommittee and is focused primarily on specific security methods and algorithms. This group combines methodological standards on information security and cryptography independently of the basic OSI model. The specific methods and means of protection are brought together into a system for organizing and managing IS protection.

When planning and designing a software system for IS protection, it is advisable to use the third group of standards - the most general methodological standards, presented below, which regulate the creation of protection complexes. Because the goals of these standards are similar, their concepts and content partially overlap and complement each other; it is therefore advisable to use the standards together (to form a profile of standards), selecting and adapting their components in accordance with the requirements of a specific IS project.

1. ISO 10181:1996, Parts 1-7. “OSI. Security frameworks for open systems.” Part 1. Overview. Part 2. Authentication framework. Part 3. Access control framework. Part 4. Non-repudiation framework. Part 5. Confidentiality framework. Part 6. Integrity framework. Part 7. Security audit framework.

2. ISO 13335:1996-1998, Parts 1-5. IT. “Guidelines for the management of IT security.” Part 1. Concepts and models for ensuring IT security. Part 2. Managing and planning IT security. Part 3. Techniques for the management of IT security. Part 4. Selection of safeguards (security means). Part 5. Security of external communications.

3. ISO 15408:1999, Parts 1-3. “Methods and means of ensuring security. Evaluation criteria for information technology security.” Part 1. Introduction and general model. Part 2. Security functional requirements. Part 3. Security assurance requirements.

The first standard in this group, ISO 10181, consists of seven parts and opens with a general concept of ensuring the security of open information systems, developing the provisions of ISO 7498-2. The first part introduces the basic concepts and general characteristics of protection methods and emphasizes the need to certify the IS security system when it is put into operation. It then briefly describes the basic means of ensuring the security of information systems, the particulars of the work involved in creating them, the foundations of the interaction of protection mechanisms, and the principles for assessing possible denials of service to information system tasks under the protection conditions. Examples of constructing general schemes of IS protection in open systems are given. The content of the parts of the standard is quite clearly defined by their titles.

The second standard, ISO 13335, covers a wide range of methodological problems that must be solved when designing a security system for any information system. Its five parts focus on the basic principles and techniques for designing IS protection systems that are robust against various types of threats. This guide quite fully systematizes the basic methods and processes of preparing a security design for the subsequent development of a specific comprehensive system for ensuring the security of IS operation.

The presentation is based on the concept of risk from threats and from any negative impacts on the IS. The first part of the standard describes the functions of security tools and the actions necessary to implement them, vulnerability models, and the principles of interaction between security tools. When designing protection systems, it is recommended to take into account: the necessary protection functions; possible threats and the likelihood of their realization; vulnerabilities; the negative impacts of realized threats; risks; protective measures; and resources (hardware, information, software, human) with their limitations. The remaining parts of the standard propose and develop a concept and model for managing and planning the construction of a protection system, the interaction of whose components is presented in general form in Fig. 6.4.
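The risk concept described above can be illustrated with a toy qualitative calculation; the scale, the multiplication of factors and the threat list are invented for the example and are not prescribed by ISO 13335.

```python
# Toy qualitative risk calculation in the spirit of the risk concept described
# above (risk depends on threat likelihood, vulnerability and impact).
# The 1-5 scale, the weighting and the threat list are invented examples.
THREATS = [
    # (threat, likelihood 1-5, vulnerability 1-5, impact 1-5)
    ("malware infection of workstations", 4, 3, 4),
    ("loss of a backup tape",             2, 2, 5),
    ("denial of service on the web site", 3, 4, 3),
]

def risk_score(likelihood: int, vulnerability: int, impact: int) -> int:
    """A simple multiplicative score: higher means the threat deserves attention sooner."""
    return likelihood * vulnerability * impact

# Rank threats so that protective measures address the highest risks first,
# subject to the resource limitations mentioned in the standard.
for name, l, v, i in sorted(THREATS, key=lambda t: -risk_score(*t[1:])):
    print(f"{risk_score(l, v, i):3d}  {name}")
```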

ISO 13335 identifies functional and security components and how they interact. Security management processes should include: change and configuration management; risk analysis and management; function traceability; registration, processing and monitoring of incidents. General requirements for assessing security results are provided, as well as possible options for organizing the work of specialists to comprehensively ensure IS security.

The standard also sets out the policy and techniques for planning, selecting, building and using security means so as to keep risk within acceptable limits under various schemes of interaction between protection means. Various approaches and strategies for creating protection systems and supporting their subsequent development are recommended. The content of the parts of the standard elaborates on the general concepts and is defined quite precisely by their titles. It is advisable to concretize the security planning model set out in the standard and to use it as a fragment of the overall IS development project.


Fig. 6.4.

Criteria for assessing security mechanisms at the software and hardware level are presented in the international standard ISO 15408-1999 “The Common Criteria for Information Technology Security Evaluation”, adopted in 1999. This standard consolidated the basic principles of standardization in the field of information security and received further development in a series of standards discussed below.

The first part of the standard presents the goals and concept of security, as well as a general model for constructing IS protection. The concept is based on the standard life-cycle scheme of complex systems and on the successive detailing of requirements and component specifications. It distinguishes: the environment; the objects; the requirements; the specifications of functions; and the tasks of the security system's tools. The general requirements for the criteria used to assess protection results and for the Protection Profile are set out, along with the purposes of assessing the requirements and of using the assessment results. A draft set of general goals, objectives and criteria for ensuring IS security is proposed.

The second part presents a paradigm for constructing and implementing structured and detailed functional requirements for IP security components. Eleven groups (classes) of basic IS security tasks have been identified and classified. Each class is detailed by sets of requirements that implement a certain part of the security goals and, in turn, consist of a set of smaller components for solving particular problems.

The classes describe in detail the principles and methods of implementing the requirements for security functions: cryptographic support; protection of communications and of information transfer (transactions); input, output and storage of user data; identification and authentication of users; management of the security functions; protection of data privacy; enforcement of restrictions on the use of computing resources; assurance of the reliability of routing and of communication between security functions; and several other classes of requirements.

For each group of tasks, recommendations are given on using the most effective set of components and procedures for ensuring IS security. To achieve the security goals of an IS with a given level of assurance of protection quality, it is recommended to combine components of functional requirements and the methods of their implementation into reusable Protection Profiles.
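The idea of assembling functional components into a reusable profile can be sketched as a simple data structure; the component identifiers and descriptions below are simplified illustrations in the spirit of ISO 15408, not exact Common Criteria component names.

```python
# Sketch of a reusable "protection profile" assembled from functional components,
# in the spirit of ISO 15408. Identifiers and descriptions are illustrative only.
from dataclasses import dataclass, field

@dataclass
class FunctionalComponent:
    identifier: str      # simplified label, not an exact Common Criteria name
    cc_class: str        # the requirement class it belongs to
    description: str

@dataclass
class ProtectionProfile:
    name: str
    components: list = field(default_factory=list)

    def add(self, component: FunctionalComponent) -> None:
        self.components.append(component)

office_profile = ProtectionProfile("Office IS baseline")
office_profile.add(FunctionalComponent("IA-1", "identification and authentication",
                                       "users are authenticated before any action"))
office_profile.add(FunctionalComponent("AU-1", "security audit",
                                       "security-relevant events are logged"))
office_profile.add(FunctionalComponent("CS-1", "cryptographic support",
                                       "stored credentials are hashed"))

print([c.identifier for c in office_profile.components])
```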

These Protection Profiles can serve as a basis for the further specification of functional requirements in the Security Target of a specific IS project and help avoid gross errors in formulating such requirements. Generalized assessments of the requirements specification in the Security Target should enable the project's customers, developers and testers to draw a general conclusion about the level of its compliance with the functional requirements and with the requirements for assuring IS protection. Extensive appendices provide recommendations