Virtual unit layout: is the final result the physical layout or the logical layout?
A virtual unit layout ultimately yields both a physical layout and a logical layout.

Physical Layout:

The TIA-942 Data Center Standards Overview describes the requirements for data center infrastructure. At its simplest, a Tier 1 data center is basically a computer room that follows basic guidelines for the installation of computer systems. The most stringent level, a Tier 4 data center, is designed to host mission-critical computer systems, with fully redundant subsystems and compartmentalized security zones controlled by biometric access. Another consideration is placing the data center underground, for data security as well as environmental considerations such as cooling requirements.

A data center can occupy one room of a building, one or more floors, or even an entire building. Most of the equipment is housed in 19-inch rack cabinets, which are placed in rows that form aisles between them so that people can access the front and rear of each cabinet. Servers vary greatly in size, from 1U rack servers to large free-standing storage silos that occupy many floor tiles. Some equipment, such as mainframe computers and storage devices, is often as big as the racks themselves and is placed alongside them. Very large data centers may use shipping containers packed with 1,000 or more servers each; when repairs or upgrades are needed, the whole container is replaced rather than individual servers. Local building codes may govern minimum ceiling heights.

The physical environment of the data center is tightly controlled:

Air conditioning controls the temperature and humidity of the data center. ASHRAE's "Thermal Guidelines for Data Processing Environments" recommends a temperature of 20-25 °C (68-75 °F) and humidity of 40-55%, with a maximum dew point of 17 °C as optimal for data center conditions. The power supplies themselves heat the air in the data center; unless that heat is removed, the ambient temperature keeps rising and power supply units eventually fail. By controlling the air temperature, the server components at the rack level are kept within the manufacturer's specified temperature and humidity range. The air conditioning system helps control humidity by cooling the return air below the dew point. If the air is too humid, water begins to condense on internal components. If the air is too dry, auxiliary humidification systems add water vapor, since humidity that is too low causes electrostatic discharge problems that damage components. Underground data centers can keep computer equipment cool while spending less than conventional designs. (A small sketch of checking readings against this envelope appears after the numbered list below.)

1. Modern data centers try to use economizer cooling, in which outside air is used to keep the data center cool. Several data centers in Washington State currently cool all of their servers with outside air for 11 months of the year; they use no chillers or air conditioners, creating potential energy savings in the millions.

2. Backup power consists of one or more uninterruptible power supplies and/or diesel generators.

3. To prevent single points of failure, all elements of the electrical systems, including the backup systems, are typically fully duplicated, and critical servers are connected to both an "A-side" and a "B-side" power feed. This arrangement is often used to achieve N+1 redundancy. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure (a small sizing sketch appears after this list).

4. Data centers typically use a raised floor of 60 cm (2 ft) removable square tiles. The trend is towards 80-100 cm (31.5-39.4 in) of underfloor clearance to better ensure uniform air circulation. The underfloor space circulates conditioned air as part of the cooling system and also provides room for power cabling. Data cabling in modern data centers is usually run in overhead cable trays, although some operators still place it under the raised floor for security reasons, which makes it necessary to add cooling systems above the racks. Smaller or less expensive data centers without raised flooring may use anti-static floor tiles instead. Computer cabinets are often organized into hot-aisle/cold-aisle rows to maximize airflow efficiency.

5. A feature of data centers is their fire protection systems, which include both passive and active design elements as well as fire prevention procedures in operations. Smoke detectors are installed to give early warning of a developing fire by detecting smoke from smoldering components before flames appear. This allows investigation, interruption of power, and manual fire suppression with a hand-held extinguisher before the fire grows large. An automatic sprinkler system is often provided to control a fire that does grow large; sprinkler systems require 18 in (46 cm) of clearance below the sprinkler heads. In contrast to sprinklers, clean agent gaseous fire suppression systems are sometimes installed to suppress a fire at an earlier stage. Passive fire protection elements include fire walls around the data center, so that a large fire can be contained more easily if the active fire protection systems fail or are not installed.

6. Physical security also plays a large role in data centers. Physical access to the site is usually restricted to selected personnel and is covered by security access control systems. Video surveillance and permanent security guards are commonly used in large data centers or in data centers holding confidential information.
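As a rough illustration of the environmental limits described above, the sketch below checks a set of readings against the recommended envelope of 20-25 °C, 40-55% relative humidity, and a 17 °C maximum dew point. The function name, threshold constants, and sample readings are assumptions made for this sketch, not part of any particular monitoring product.

# Minimal sketch: compare sensor readings with the ASHRAE-recommended
# envelope quoted above (20-25 °C, 40-55 % relative humidity, 17 °C max dew point).
# Names and sample values are hypothetical.

RECOMMENDED_TEMP_C = (20.0, 25.0)        # dry-bulb temperature range, °C
RECOMMENDED_HUMIDITY_PCT = (40.0, 55.0)  # relative humidity range, %
MAX_DEW_POINT_C = 17.0                   # maximum dew point, °C

def envelope_violations(temp_c, humidity_pct, dew_point_c):
    """Return a list of violations; an empty list means the reading is in range."""
    problems = []
    lo, hi = RECOMMENDED_TEMP_C
    if not lo <= temp_c <= hi:
        problems.append(f"temperature {temp_c} C outside {lo}-{hi} C")
    lo, hi = RECOMMENDED_HUMIDITY_PCT
    if not lo <= humidity_pct <= hi:
        problems.append(f"relative humidity {humidity_pct} % outside {lo}-{hi} %")
    if dew_point_c > MAX_DEW_POINT_C:
        problems.append(f"dew point {dew_point_c} C above the {MAX_DEW_POINT_C} C maximum")
    return problems

if __name__ == "__main__":
    # Example reading: slightly too humid, otherwise within range.
    for issue in envelope_violations(temp_c=23.5, humidity_pct=58.0, dew_point_c=15.0):
        print("ALERT:", issue)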
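The N+1 arrangement in item 3 can likewise be shown with a small sizing sketch. The load and module rating figures below are made-up assumptions; the point is only that one more UPS module is installed than the load strictly requires, and that dual-corded servers draw from both the A-side and B-side feeds.

import math

# Illustrative N+1 sizing; the figures are hypothetical, not from the article.
critical_load_kw = 180.0   # total IT load to be protected
ups_module_kw = 50.0       # rating of a single UPS module

modules_needed = math.ceil(critical_load_kw / ups_module_kw)   # N
modules_installed = modules_needed + 1                         # N + 1

print(f"N   = {modules_needed} modules are enough to carry the load")
print(f"N+1 = {modules_installed} modules installed, so any single module can fail or be serviced")

# Dual-corded server: each power supply unit is fed from a different side,
# so losing either the A-side or the B-side feed alone does not drop the server.
server_feeds = {"psu1": "A-side", "psu2": "B-side"}
assert set(server_feeds.values()) == {"A-side", "B-side"}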

Logical Layout:

Linux treats drives, files, folders, and devices differently from other operating systems. Instead of labelling each drive with a letter, everything is treated as a branch under the root filesystem.

The default directories visible on a Debian-based distribution are as follows (a short sketch for exploring them follows the list).

boot: contains the Linux kernel and other packages needed to boot the Pi.

bin: Binaries belonging to the operating system, such as those required to run the GUI, are stored here.

dev: This is a virtual directory and is not actually stored on the SD card. All devices connected to the system including storage devices, sound cards and HDMI ports can be accessed from here.

etc: Stores configuration files, including user lists and encrypted passwords.

home: Each user has a subdirectory in this directory to store all personal files.

lib: Stores library files, which are shared code required by different applications.

lost+found: a special directory to store fragments of lost files in case of a system crash.

media: directory for removable storage devices, such as USB memory sticks or external CD drives.

mnt: This folder is used for manually mounted storage devices, such as external hard drives.

opt: This is used to store optional software that does not come with the operating system itself. New software you install on the Pi will usually go here.

proc: This is another virtual directory that contains information about running programs, which are processes in Linux.

selinux: Files related to Security-Enhanced Linux (SELinux), a mandatory access control (MAC) security mechanism provided in the Linux kernel, originally developed by the US National Security Agency.

sbin: A directory where special binaries are stored, mainly used by the root (superuser) account when performing maintenance on the system.

sys: A directory where operating system files are stored.

tmp: Temporary files are automatically stored here.

usr: This directory provides storage for user-accessible programs.

var: A virtual directory that programs use to store values or variables that change as they run.
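To tie the directory descriptions above together, the short sketch below lists what is actually present at the top of the root filesystem and marks the virtual directories (dev, proc, sys) that are generated by the kernel rather than stored on the SD card. It assumes it is run on a Linux system such as the Pi; the exact set of entries varies between distributions.

#!/usr/bin/env python3
"""Minimal sketch: list the top-level directories of the root filesystem on Linux."""
from pathlib import Path

# Directories typically provided by the kernel as virtual filesystems,
# i.e. not actually stored on the SD card (see dev, proc, and sys above).
VIRTUAL = {"dev", "proc", "sys"}

for entry in sorted(Path("/").iterdir()):
    if entry.is_dir() and not entry.is_symlink():
        tag = " (virtual)" if entry.name in VIRTUAL else ""
        print(f"/{entry.name}{tag}")

# /proc exposes each running program (process) as a numbered directory.
processes = [p for p in Path("/proc").iterdir() if p.name.isdigit()]
print(f"{len(processes)} processes currently visible under /proc")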