Internet Domain Name System

posted on May 14, 2021



Before DNS existed, every host had to download a hosts file containing host names and their corresponding IP addresses. As the number of hosts on the internet grew, so did the size of this file, and downloading it generated ever more traffic. The DNS system was introduced to solve this problem.

The Domain Name System resolves host names to addresses. It uses a hierarchical naming scheme and a distributed database of IP addresses and their associated names.

IP Address

IP address is a unique logical address assigned to a machine over the network. An IP address exhibits the following properties:

  • IP address is the unique address assigned to each host present on Internet.

  • IP address is 32 bits (4 bytes) long.

  • IP address consists of two components: network component and host component.

  • Each of the 4 bytes is represented by a number from 0 to 255, separated by dots (dotted-decimal notation).

An IP address is a 32-bit number, whereas domain names are easy-to-remember names. For example, when we enter an email address, we type a symbolic string rather than a numeric address.
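The two properties above (32-bit width and the network/host split) can be sketched in Python; the address 192.168.10.25 and the /24 prefix below are made-up example values:

```python
def parse_ipv4(addr):
    """Split a dotted-quad IPv4 address into its four byte values."""
    octets = [int(part) for part in addr.split(".")]
    if len(octets) != 4 or any(o < 0 or o > 255 for o in octets):
        raise ValueError("not a valid IPv4 address: " + addr)
    return octets

def network_and_host(addr, prefix_len):
    """Split the 32-bit address into its network and host components."""
    octets = parse_ipv4(addr)
    value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    host_bits = 32 - prefix_len
    network = value >> host_bits          # the network component
    host = value & ((1 << host_bits) - 1) # the host component
    return network, host
```

With a /24 prefix, the first three bytes form the network component and the last byte identifies the host.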

Uniform Resource Locator (URL)

Uniform Resource Locator (URL) refers to a web address which uniquely identifies a document over the internet.

This document can be a web page, image, audio, video or anything else present on the web.

For example, a URL can point to the index.htm file stored on the tutorialspoint web server under the internet_technology directory.

URL Types

There are two forms of URL as listed below:

  • Absolute URL

  • Relative URL

Absolute URL

An absolute URL is the complete address of a resource on the web. This complete address comprises the protocol used, the server name, the path name, and the file name.

For example, consider http://<server-name>/internet_technology/index.htm, where:

  • http is the protocol.

  • <server-name> is the server name.

  • index.htm is the file name.

The protocol part tells the web browser how to handle the file. Other protocols that can be used in URLs include:

  • FTP

  • https

  • Gopher

  • mailto

  • news
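These URL components can be inspected with Python's standard urllib.parse module; the host www.example.com below is an invented placeholder, not a real server from this article:

```python
from urllib.parse import urlparse

# Hypothetical absolute URL; the server name is assumed for illustration.
parts = urlparse("http://www.example.com/internet_technology/index.htm")

protocol = parts.scheme                    # tells the browser how to handle the file
server = parts.netloc                      # the server name
filename = parts.path.rsplit("/", 1)[-1]   # the file name at the end of the path
```

Each component of the absolute URL maps directly onto a field of the parsed result.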

Relative URL

A relative URL is a partial address of a webpage; unlike an absolute URL, the protocol and server parts are omitted.

Relative URLs are used for internal links, i.e. to create links to files that are part of the same website as the page on which you place the link.

For example, to link to an image on the same site, we can use a relative URL of the form /internet_technologies/internet-osi_model.jpg.
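Resolving a relative URL against an absolute base can be sketched with urljoin; the base URL here is an assumed example, not one from the article:

```python
from urllib.parse import urljoin

# Assumed base page on a hypothetical site.
base = "http://www.example.com/internet_technologies/page.html"

# The relative URL borrows the protocol and server name from the base.
resolved = urljoin(base, "/internet_technologies/internet-osi_model.jpg")
```

Because the relative reference starts with "/", only the path is replaced; the scheme and server carry over from the base URL.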

Difference between Absolute and Relative URL

Absolute URL | Relative URL
Used to link web pages on different websites. | Used to link web pages within the same website.
Difficult to manage. | Easy to manage.
Changes when the server name or directory name changes. | Remains the same even if the server name or directory name changes.
Takes more time to access. | Comparatively faster to access.

Domain Name System Architecture

The Domain Name System comprises Domain Names, the Domain Name Space, and Name Servers, described below:

Domain Names

A domain name is a symbolic string associated with an IP address. Several domain names are available; some are generic, such as com, edu, gov, and net, while others are country-level domain names, such as au, in, za, and us.

The following table shows the Generic Top-Level Domain names:

Domain Name | Meaning
com | Commercial business
edu | Education
gov | U.S. government agency
int | International entity
mil | U.S. military
net | Networking organization
org | Non-profit organization

The following table shows the Country top-level domain names:

Domain Name | Meaning
au | Australia
in | India
cl | Chile
fr | France
us | United States
za | South Africa
uk | United Kingdom
jp | Japan
es | Spain
de | Germany
ca | Canada
ee | Estonia
hk | Hong Kong

Domain Name Space

The domain name space refers to the hierarchy in the internet naming structure. This hierarchy has multiple levels (from 0 to 127), with the root at the top. The following diagram shows the domain name space hierarchy:


In the above diagram each subtree represents a domain. Each domain can be partitioned into sub domains and these can be further partitioned and so on.

Name Server

A name server contains the DNS database, which comprises names and their corresponding IP addresses. Since a single server cannot maintain the entire DNS database, the information is distributed among many DNS servers.

  • The hierarchy of servers mirrors the hierarchy of names.

  • The entire name space is divided into zones.


A zone is a collection of nodes (subdomains) under a main domain. The server maintains a database called a zone file for every zone.


If a domain is not further divided into subdomains, then the domain and the zone refer to the same thing.

Information about nodes in a subdomain is stored in servers at the lower levels; however, the original server keeps references to these lower-level servers.

Types of Name Servers

The following are the three categories of name servers that manage the entire Domain Name System:

  • Root Server

  • Primary Server

  • Secondary Server

Root Server

The root server is the top-level server in the DNS tree. It does not store information about individual domains but delegates authority to other servers.

Primary Servers

A primary server stores the zone file for its zone. It has the authority to create, maintain, and update that zone file.

Secondary Server

A secondary server obtains complete information about a zone from another server, which may be a primary or another secondary server. The secondary server does not have the authority to create or update the zone file.

DNS Working

DNS translates a domain name into an IP address automatically. The following steps describe the domain resolution process:

  • When we type a domain name into the browser, the computer asks the local DNS server for its IP address. This local DNS server is usually at the ISP.

  • If the local DNS server does not have the IP address of the requested domain name, it forwards the request to a root DNS server.

  • The root DNS server replies with a delegation: it does not know the requested IP address, but it knows the address of the com DNS server.

  • The local DNS server then asks the com DNS server the same question.

  • The com DNS server replies that it does not know the IP address either, but it knows the address of the domain's authoritative DNS server.

  • The local DNS server then asks the authoritative DNS server the same question.

  • The authoritative DNS server replies with the IP address of the requested domain.

  • Finally, the local DNS server sends the IP address to the computer that made the request.
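The iterative walk described above (root, com server, authoritative server) can be modelled with a toy in-memory zone table; the names, servers, and address below are all invented for illustration and do not reflect real DNS data:

```python
# Toy delegation data: each "server" maps names (or suffixes) either to a
# more specific server (a referral) or to a final dotted-quad address.
ZONES = {
    ".": {"com.": "com-ns"},                          # root delegates "com"
    "com-ns": {"example.com.": "example-ns"},         # com delegates the domain
    "example-ns": {"www.example.com.": "93.184.216.34"},  # authoritative answer
}

def resolve(name):
    """Follow delegations from the root until an address is found."""
    server = "."
    while True:
        zone = ZONES[server]
        if name in zone:
            answer = zone[name]
            if answer[0].isdigit():   # dotted-quad address: resolution is done
                return answer
            server = answer           # referral to a more specific server
        else:
            # otherwise follow the longest-matching suffix delegation
            for suffix, target in zone.items():
                if name.endswith(suffix):
                    server = target
                    break
            else:
                raise LookupError(name)
```

Calling `resolve("www.example.com.")` walks root, then com, then the authoritative server, mirroring the steps above.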

Internet Protocols Know-How

posted on May 14, 2021


Transmission Control Protocol (TCP)

TCP is a connection-oriented protocol that offers end-to-end packet delivery and acts as the backbone of the connection. It exhibits the following key features:

  • Transmission Control Protocol (TCP) corresponds to the Transport Layer of OSI Model.
  • TCP is a reliable and connection oriented protocol.

TCP offers:

  • Stream Data Transfer.
  • Reliability.
  • Efficient Flow Control
  • Full-duplex operation.
  • Multiplexing.

TCP offers connection oriented end-to-end packet delivery.

TCP ensures reliability by sequencing bytes and using acknowledgement numbers that indicate the next byte the receiver expects. Bytes not acknowledged within a specified time period are retransmitted.
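A minimal loopback echo, sketched in Python, shows the connection-oriented, stream-based behaviour: the client connects (the three-way handshake happens inside `connect`), sends bytes, and receives them back in order:

```python
import socket
import threading

def echo_server(sock):
    """Accept one connection and echo back whatever arrives."""
    conn, _ = sock.accept()
    data = conn.recv(1024)
    conn.sendall(data)          # the byte stream is delivered back, in order
    conn.close()

# Server side: bind to an ephemeral port on loopback and listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Client side: connection setup, then full-duplex byte-stream exchange.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello tcp")
reply = client.recv(1024)
client.close()
```

The same socket can both send and receive, reflecting TCP's full-duplex service.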

TCP Services
TCP offers the following services to processes at the application layer:

  • Stream Delivery Service
  • Sending and Receiving Buffers
  • Bytes and Segments
  • Full Duplex Service
  • Connection Oriented Service
  • Reliable Service

Stream Delivery Service

TCP is stream oriented because it allows the sending process to send data as a stream of bytes and the receiving process to obtain data as a stream of bytes.

Sending and Receiving Buffers

It may not be possible for the sending and receiving processes to produce and consume data at the same speed; therefore, TCP needs buffers for storage at both the sending and receiving ends.

Bytes and Segments

The Transmission Control Protocol, at the transport layer, groups bytes into a packet called a segment. Before transmission, these segments are encapsulated into IP datagrams.

Full Duplex Service

Transmitting data in full-duplex mode means data flows in both directions at the same time.

Connection Oriented Service

TCP offers connection oriented service in the following manner:

  • The TCP of process 1 informs the TCP of process 2 and gets its approval.
  • The TCPs of process 1 and process 2 then exchange data in both directions.
  • After the data exchange completes, when the buffers on both sides are empty, the two TCPs release their buffers.

Reliable Service

For the sake of reliability, TCP uses an acknowledgement mechanism.

Internet Protocol (IP)

The Internet Protocol is a connectionless and unreliable protocol; it offers no guarantee that data is successfully transmitted.

To make transmission reliable, IP must be paired with a reliable protocol such as TCP at the transport layer.

Internet protocol transmits the data in form of a datagram as shown in the following diagram:

Points to remember:

  • The length of a datagram is variable.
  • The datagram is divided into two parts: header and data.
  • The length of the header is 20 to 60 bytes.
  • The header contains information for routing and delivery of the packet.
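The header layout can be illustrated by packing a minimal 20-byte IPv4 header (no options) with Python's struct module; the addresses and field values are arbitrary example data, and the checksum is left at zero for simplicity:

```python
import struct

version_ihl = (4 << 4) | 5   # version 4, header length 5 * 4 = 20 bytes
header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl,             # version + IHL
    0,                       # type of service
    20 + 8,                  # total length: 20-byte header + 8 data bytes
    0, 0,                    # identification, flags/fragment offset
    64,                      # TTL
    17,                      # protocol number (17 = UDP)
    0,                       # checksum (left at zero in this sketch)
    bytes([192, 168, 0, 1]), # source address (example)
    bytes([192, 168, 0, 2]), # destination address (example)
)

first_byte = header[0]
version = first_byte >> 4            # upper nibble: IP version
ihl_bytes = (first_byte & 0x0F) * 4  # lower nibble: header length in bytes
```

Unpacking the first byte recovers the version and the 20-byte minimum header length mentioned above.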

User Datagram Protocol (UDP)

Like IP, UDP is a connectionless and unreliable protocol. It does not require establishing a connection with the host to exchange data. Since UDP is unreliable, there is no mechanism to ensure that sent data is received.

UDP transmits the data in form of a datagram. The UDP datagram consists of five parts as shown in the following diagram:

Points to remember:

  • UDP is used by applications that typically transmit small amounts of data at one time.
  • UDP provides protocol ports: a UDP message contains both source and destination port numbers, which makes it possible for the UDP software at the destination to deliver the message to the correct application program.
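The connectionless exchange can be sketched over the loopback interface; note that no connection setup occurs, and the destination port travels inside the datagram itself (delivery is dependable on loopback, but UDP offers no such guarantee in general):

```python
import socket

# Receiver: bind a datagram socket to an ephemeral loopback port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connect() call is needed; the datagram carries the
# destination address and port on its own.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"small payload", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Each `sendto` is an independent datagram, which is why UDP suits applications sending small amounts of data at a time.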

File Transfer Protocol (FTP)

FTP is used to copy files from one host to another. FTP provides the mechanism for this as follows:

  • FTP creates two processes, a control process and a data transfer process, at both ends, i.e. at the client as well as at the server.
  • FTP establishes two different connections: one for data transfer and one for control information.
  • The control connection is made between the control processes, while the data connection is made between the data transfer processes.
  • FTP uses port 21 for the control connection and port 20 for the data connection.

Trivial File Transfer Protocol (TFTP)

Trivial File Transfer Protocol (TFTP) is also used to transfer files, but it transfers them without authentication. Unlike FTP, TFTP does not separate control and data information. Since no authentication exists, TFTP lacks security features, and its use is therefore not recommended.

Key points

  • TFTP uses UDP for data transport. Each TFTP message is carried in a separate UDP datagram.
  • The first two bytes of a TFTP message specify the type of message.
  • A TFTP session is initiated when a TFTP client sends a request to upload or download a file.
  • The request is sent from an ephemeral UDP port to UDP port 69 of the TFTP server.
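The first-two-bytes rule can be illustrated by building a TFTP read request (RRQ, opcode 1) by hand, following the standard packet layout of opcode, filename, and transfer mode; the filename is an invented example:

```python
import struct

def build_rrq(filename, mode="octet"):
    """Build a TFTP read-request packet: a 2-byte opcode (1 = RRQ),
    then the filename and transfer mode as zero-terminated ASCII strings."""
    return (
        struct.pack("!H", 1)
        + filename.encode("ascii") + b"\0"
        + mode.encode("ascii") + b"\0"
    )

packet = build_rrq("config.txt")
opcode = struct.unpack("!H", packet[:2])[0]  # first two bytes give the message type
```

A server reading this datagram inspects only the first two bytes to decide how to interpret the rest.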

Difference between FTP and TFTP

# | Feature | FTP | TFTP
1 | Operation | Transferring files | Transferring files
2 | Authentication | Yes | No
3 | Transport protocol | TCP | UDP
4 | Ports | 21 (control), 20 (data) | 69
5 | Control and data | Separated | Not separated
6 | Data transfer | Reliable | Unreliable

Telnet

Telnet is a protocol used to log in to a remote computer on the internet. There are a number of Telnet clients with user-friendly interfaces. The following diagram shows a person logged in to computer A and, from there, remotely logged in to computer B.

Hyper Text Transfer Protocol (HTTP)
HTTP is a communication protocol. It defines the mechanism for communication between the browser and the web server. It is also called a request and response protocol because communication between browser and server takes place in request and response pairs.

HTTP Request
An HTTP request comprises the following parts:
  • Request line
  • Header Fields
  • Message body
Key Points
  • The first line, the request line, specifies the request method, i.e. GET or POST.
  • The second line is a header that indicates the domain name of the server from which index.htm is retrieved.
HTTP Response
Like HTTP request, HTTP response also has certain structure. HTTP response contains:
  • Status line
  • Headers
  • Message body
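The request structure can be sketched as a raw string: a request line, header fields, and a blank line separating the headers from the (optional) message body; the host name is an assumed example:

```python
# A minimal HTTP/1.1 request, built by hand for illustration.
request = (
    "GET /index.htm HTTP/1.1\r\n"   # request line: method, path, version
    "Host: www.example.com\r\n"     # header field (assumed host name)
    "\r\n"                          # blank line ends the headers
)

# Parsing it back apart mirrors what a server does on receipt.
request_line, rest = request.split("\r\n", 1)
method, path, version = request_line.split(" ")
```

An HTTP response follows the same shape, with a status line in place of the request line.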
Internet protocols are carefully taken care of by a reliable web hosting provider like LiquidWeb, so you will not face any problems with them.

Data Encryption Basics

posted on May 14, 2021


Encryption is a security method in which information is encoded in such a way that only an authorized user can read it. An encryption algorithm generates ciphertext that can only be read after decryption.

Types of Encryption
There are two types of encryption schemes, as listed below:

Symmetric Key encryption
A symmetric key encryption algorithm uses the same cryptographic key for both encryption and decryption of the ciphertext.
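As a toy sketch of the idea (NOT a secure cipher, just an illustration of one key doing both jobs), a repeating-key XOR encrypts and decrypts with the same key:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Applying the same key twice recovers the plaintext. This is for
    illustration only and offers no real security."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_cipher(b"secret message", b"k3y")   # encrypt with the key
plaintext = xor_cipher(ciphertext, b"k3y")           # decrypt with the same key
```

Real symmetric ciphers such as AES follow the same pattern of a single shared key, but with far stronger mathematics behind them.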


Public Key encryption
A public key encryption algorithm uses a pair of keys, one of which is secret (private) and one of which is public. The two keys are mathematically linked to each other.


In terms of security, hashing is a technique that maps data to hard-to-predict, fixed-size hash values. The hash function generates the hash code, which helps protect a transmission from tampering by unauthorized users.

Hash function algorithms
A hashing algorithm provides a way to verify that the message received is the same as the message sent. It takes a plain-text message as input and computes a value based on that message.

Key Points

  • The length of the computed value is much shorter than the original message.

  • It is possible for different plain-text messages to generate the same value.

Here we will discuss a sample hashing algorithm in which we multiply the number of a's, e's, and h's in the message and then add the number of o's to that value.

For example, take the message "the combination to the safe is two, seven, thirty-five". The hash of this message, using our simple hashing algorithm, is as follows:

    (2 x 6 x 3) + 4 = 40

The hash of this message is sent to John along with the ciphertext. After he decrypts the message, he computes its hash value using the agreed-upon hashing algorithm. If the hash value sent by Bob doesn't match the hash value of the decrypted message, John knows the message has been altered.

For example, suppose John received a hash value of 17 and decrypted the message Bob sent as "You are being followed, use backroads, hurry".

He could conclude the message had been altered, because the hash value of the message he received is:

    (3 x 4 x 1) + 4 = 16
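The sample algorithm above can be written directly in Python; running it on both messages reproduces the values 40 and 16:

```python
def toy_hash(message):
    """Sample hash: multiply the counts of a, e and h, then add the count of o."""
    m = message.lower()
    return m.count("a") * m.count("e") * m.count("h") + m.count("o")

safe_msg = "the combination to the safe is two, seven, thirty-five"
altered_msg = "You are being followed, use backroads, hurry"
```

A single changed character alters the letter counts and, usually, the hash, which is what lets John detect tampering.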

You should know that quality web hosting companies like LiquidWeb use a high level of encryption in order to create the most secure environment for your website.

Features of a Smart Home Security System

posted on May 13, 2021


Your home is meant to be your own private space where you can relax, unwind, have fun with family and friends, and feel safe and comfortable at all times. And while no-one wants to think about being the victim of a home robbery or break-in, the fact is that it does happen, and it happens at a rather alarming rate across the country. This is exactly why so many homeowners install smart home security systems. They give you peace of mind and can act as a strong deterrent for would-be intruders. Let's take a look at some of the top features to look for when picking the right smart home security system for your needs.

Does it have a doorbell camera?

One of the most popular features to look for in a smart home security system today is a doorbell camera. What's great about a doorbell camera is that you can see who is at your door and, in many cases, speak with them, and here's the best part: it all works remotely. That means you can be on vacation in a different country, get an alert that someone is at your door, and then speak to them, giving the impression that you are right there at home.
Smart sensors and detectors

These systems also tend to have smart sensors and detectors that are meant to keep you and the house safe. This includes such things as a water sensor, which can sense a water leak in the home, and a carbon monoxide detector which will alert you if there is a gas leak. These are the kind of safety features you hope to never use, but should they be required, they can literally save your life where a carbon monoxide leak is concerned.

Make use of a Wi-Fi camera

A Wi-Fi camera is not to be confused with a doorbell camera, as this is a separately mounted camera. Because it is Wi-Fi enabled, you'll be able to access the video remotely. It will constantly be recording, and you may also be able to take still pictures, which can come in handy if you need to share information with police. You can also use remote hosting to increase security.

When shopping for the right Wi-Fi camera for your security system, be sure that it features night vision, so you have a clear and crisp picture any time of the day or night. It also needs to be positioned in a way that it will capture the entire image of any would-be intruders.

It should feature a Companion App

Finally, you want to be sure it features a companion app that can be installed on your phone, giving you quick and easy access to all the features, voice control, video, and photos.

These are just a few of the key features to look for when shopping for the ideal smart home security system.

Is Ryzen better than Intel?

posted on May 13, 2021


IT technician here. The question is badly phrased: Ryzen is an AMD brand. AMD's competitor, Intel, is the company that holds the "Core" brand.

Short answer:

The AMD Ryzen CPUs are progressively overtaking the Intel Core processors in every respect, from raw per-core performance to performance-per-watt ratio, while still remaining more affordable.

Also, AMD AM4 motherboard compatibility spans, for the best motherboards, all generations of Ryzen (from 1x00 to 5xx0) if the manufacturer offers the right BIOS updates, while Intel motherboards are limited to one generation of CPUs, even if the architecture barely changes from one generation to the next.

Longer answer:

In general, AMD Ryzen 3, 5, 7 & 9 correspond to Intel Core i3, i5, i7 & i9; these denominations relate to core and thread counts, with each brand and series refreshed generation after generation by AMD and Intel respectively.

Desktop computers:

The latest AMD Ryzen generation (the 4th generation, named "Zen 3" and marketed as "Ryzen 5xx0", forgive AMD's logic lol) is faster than the 10th generation of Intel Core, both in single-threaded and multi-threaded computation (at an equivalent number of cores/threads), thanks to its higher instructions per cycle (+20% compared to Intel's 10th gen), and it is more power-efficient as well, due to the second-generation 7 nm lithography it employs.

What will happen to C/C++ in the next 20 years?

posted on May 13, 2021


If the recent past is any guide:

  •     the C language will remain mostly stable and unchanged
  •     The C++ language will adopt many new things; in 20 years it will probably be 2 to 10 times as complex as it is now

Some alternative programming languages will rise in the domain now dominated by C and C++, but they will not get much traction. The successful ideas from these languages will be incorporated into C++ (and a few unsuccessful ones too), but will be ignored by C.

C will remain dominant in Electrical Engineering curricula and careers (and in the Linux kernel). C++ will become dominant in low-level/high-performance/resource-constrained programming that is not intimately tied to electronics.

Hardware will continue to evolve, hence things that are now done in C and C++ will be done in other, more programmer-friendly but less performant (less CPU-friendly) languages. New application areas will arise that require performant languages to get the most out of the hardware, these things (gadgets? wearables? intelligent dust? who knows) will be programmed in C or C++.

Blade Servers vs Rack Servers vs Tower Servers

posted on May 12, 2021


Servers come in several different configurations. In the data center, decisions about blade server vs. rack server vs. tower server will affect performance, data center space, budgets, and scalability.

This article is a quick start guide to rack servers, blade servers, tower servers: how to understand their advantages and shortcomings, and how each type fits into your server requirements.

Before we go in-depth, let’s look at a quick summary of each:

  • Rack servers are mounted on standardized racks that can reach 10 feet in height, allowing the data center to efficiently deploy dozens of rack-mounted servers.
  • Blade servers are small circuit boards that act as servers within their server enclosure; they are an excellent choice for high processing power in a dense environment.
  • Tower servers come with the capacity for high optimization and customization, allowing organizations to match the server configuration to their needs.

What Is a Rack Server?

A rack server is a server mounted inside a rack. Rack servers are typically general-purpose servers that support a broad range of applications and computing infrastructure. The racks stack servers vertically to save data center floor space. The more equipment that admins can stack vertically, the more equipment they can house.

Standardized racks are measured in units (U) that are 1.75 inches tall and 19 inches wide. Rack servers fit these dimensions in vertical multiples, meaning that rack server heights may be 1U, 4U, 10U, or higher, like the 10-foot-tall 70U rack that came out in 2016. Additional devices are also manufactured to fit the rack unit standard, so companies can make use of empty units in their racks.
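The 1.75-inch unit standard makes capacity arithmetic straightforward; for instance, the 70U figure mentioned above works out to roughly 10 feet:

```python
U_HEIGHT_INCHES = 1.75   # height of one rack unit

def rack_height_feet(units):
    """Total height in feet of a rack with the given number of units."""
    return units * U_HEIGHT_INCHES / 12

seventy_u = rack_height_feet(70)   # about 10.2 feet, matching the 10-foot rack
```

The same arithmetic in reverse tells you how many 1U or 4U servers fit in a rack of a given height.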
Blade vs. Rack vs. Tower Servers

Rack Server Pros

• Self-contained: Each rack server has everything necessary to run as a stand-alone or networked system: its own power source, CPU, and memory. This enables rack servers to run intensive computing operations.

• Efficiency: Rack-mounted servers and other computing devices make highly efficient use of limited data center space. Rack servers can be easily expanded with additional memory, storage, and processors. And it's physically simple to hot-swap rack servers if admins have shared or clustered the server data for redundancy.

• Cost-effective: Smaller deployments offer management and energy efficiency at lower cost.
Rack Server Cons

• Power usage: Densely populated racks require more cooling units, which raises energy costs. Large numbers of rack servers will raise energy needs overall.

• Maintenance: Dense racks require more troubleshooting and management time.

What Is a Blade Server?

A blade server is a server enclosure that houses multiple modular circuit boards called server blades. Most blade servers are stripped down to CPUs, network controllers, and memory. Some have internal storage drives. Any other components—like switches, ports, and power connectors—are shared through the chassis.

The enclosures typically fit rack unit measurements, which allows IT to save space. Admins can cluster blades or manage and operate each individually as its own separate server, such as assigning applications and end-users to specific blades. Their modular architecture supports hot swaps. Blades have small external handles, so it’s a simple matter to pull out or replace them.

Blade servers have high processing power to serve complex computing needs. They can scale to high performance levels, if the data center has enough cooling and energy to support the dense infrastructure.

Blade Server Pros

• Low energy spend: Instead of powering and cooling multiple servers in separate racks, the chassis supplies power to multiple blade servers. This reduces energy spend.

• Processing Power: Blade servers provide high processing power while taking up minimal space.

• Multi-Purpose: They can host primary operating systems and hypervisors, databases, applications, web services, and other enterprise-level processes and applications.

• Availability: The blade server environment simplifies centralized monitoring and maintenance, load balancing, and clustered failover. Hot swapping also helps to increase system availability.
Blade Server Cons

• Upfront costs: Over time, operating expenses are reasonable thanks to simplified management interfaces and lower energy usage. However, initial capital, deployment, and configuration costs can be high.

• Energy costs: High density blade servers require advanced climate control. Heating, cooling, and ventilation are all necessary expenditures in order to maintain blade server performance.

What Is a Tower Server?

Tower servers are servers in a stand-alone chassis configuration. They are manufactured with minimal components and software, so mid-size and enterprise customers can heavily customize the servers for specific tasks. For example, tower servers usually do not come with additional components like advanced graphics cards, large amounts of RAM, or peripherals.

Tower servers are typically targeted to customers who want to customize their servers and maintain a customized upgrade path. For example, customers can configure tower servers as general-purpose servers, communication servers, web servers, or network servers that integrate using HTTP protocols. Buyers may order the customization they need, or do it themselves when the tower server is shipped to their site. Another usage case is a smaller business that needs a single powerful server to run multiple processes and applications.

Externally they resemble desktop towers, and—like desktops—they do not share input devices. Multiple tower installations will require separate keyboards, mice, and monitors; or switches that make it possible to share peripheral devices. They can share network storage like any other type of server.

Tower Server Pros

• Efficient scalability: Tower servers come with minimal configuration, so IT can customize and upgrade them based on business needs. They are less expensive to buy than a fully loaded server.
• Low cooling costs: With their low component density, towers are less expensive to cool than dense racks or blades.

Tower Server Cons

• Upgrade expense: Many customers buy tower servers for the customization and not for low capital costs. High-end hardware components and software will raise the ongoing price considerably.

• Large footprint: These servers do not fit in racks and consume data center space. They require opening the enclosure to troubleshoot and add or upgrade internal components.

• Awkward peripheral management: In multiple tower server environments, IT must invest in switches or re-plug external devices into each separate server.

After Apple’s M1, are the days of x86 over?

posted on May 12, 2021


The M1. It’s freaking insane.

That single-threaded performance don’t lie.


Could be better in multi-threading.

Mind you, the Xeon E5-2687W v3 is a 10-core server CPU with 20 threads. The Ryzen 7 1700, which I bought on release, has 8 cores and 16 threads. These are old CPUs though.

A “fairer” comparison would be against a Ryzen 5600X, that has 91% of the M1’s single-threaded score and is within 0.07% of the M1’s multi-threaded score per thread (not core).

"Oh great, it's just a Ryzen with Apple branding," one would say.

But no! That’s not the point!


This is part of a trend we've seen coming for a while. Apple’s not even the first company to make a good ARM chip for a computer.

Anyone remember the AMD Opteron A1100 from 2016?

The logic has always been the same. ARM is very efficient, but not very powerful. But certain server processes require just that. Amazon is not stupid, so they did the same thing with the Graviton chips—though the first ones were kind of shit.

"But hey, it's coming! ARM is just around the corner. It'll be awesome," we said.

And then this M1 came.

It's Ryzen-level performance at roughly a quarter of the TDP, a quarter of the juice.

The M1 draws about 30 W under a multi-threaded load. A 5600X will go up to 140 W fully loaded.

It’s no longer around the corner. This is here.

I’d be hugely surprised if AMD is not working on a revival of their Opteron ARM ambitions.

It will take some time for compatibility to catch up and SDKs/frameworks to be mature enough, but the incentives are now here. This performance per watt is insane. Much longer battery lives, much cheaper servers (Graviton), more competition in the CPU space. Win, win, win.

It’s not the end for x86/64 but it surely is a new beginning as things heat up.

How can I add SSL Certificate in my host when the domain is somewhere else?

posted on May 12, 2021


The precise answer to the question depends highly on the specific hosting company in use.

But, as general advice, TLS (formerly, SSL) certificates may be installed on any hosting provider (or any server on the Internet) as long as the DNS records point to the server IP address where the HTTPS server is running.

An extra point of confusion is that the company where your domain is registered may be different from the one hosting the DNS records; for consumer services these are typically the same company.

It doesn’t matter who manages your domain names. If your domain name is pointed to your web host (you’ve added the name servers to your domain with your domain registrar), you can install an SSL certificate on your server.

On many hosting providers, you can do it via cPanel or Plesk. In your dashboard, locate the SSL certificates section and import your SSL files. For some servers, you’ll need the OpenSSL utility to configure SSL.
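On the client side, Python's ssl module shows what a browser effectively checks once the certificate is installed: the default context verifies both the certificate chain and the hostname against the DNS name. This sketch only builds the context and makes no network connection:

```python
import ssl

# The default client context enables certificate-chain verification
# and hostname checking against the system's trusted CA store.
ctx = ssl.create_default_context()
```

Wrapping a TCP socket with `ctx.wrap_socket(sock, server_hostname=...)` would then perform the actual TLS handshake against the server where the certificate is installed.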

The most important things to consider when you buy hosting and expect security and performance

posted on May 11, 2021


We exist in the era of the internet. You'll eventually find yourself in a position where you'll need to think about expanding your online presence. That's when you'll want to think about using a web hosting service. The most important things to keep in mind when buying a web hosting service are the 3 S's: Speed, Support, Security.

But there are more things you can't ignore; you could even say they are the most important considerations when buying hosting. Here are the things you should look into before choosing a particular service.

1. Pricing

There are a plethora of service providers offering similar services at various rates. Of course, a variety of factors influence the differences between these plans, but you should also look at a few different options before settling on one.

If you're hosting a simple webpage with no expectation of a lot of concurrent traffic or bandwidth, the cheapest service is always the best option. Other features should be considered if you want to host a more complex website.

2. Tech support

Another critical aspect to consider is the provider's level of technical support. For the most part, this is a significant problem. Consider what would happen if your website went down during a high-traffic hour and you had no idea why or how to fix it quickly.

Of course, guides are still available for assistance, but nothing beats having a real person you can speak to and ask for help. The majority of services guarantee this, so you must ensure that you are not being duped. There are a few things you should consider. Are they, for example, available 24 hours a day, seven days a week? Is there a toll-free number?

3. Control panel

The control panel is the user interface from which you manage and control your website. It's yet another feature provided by your web hosting company, and you should make certain you're getting the best control panel possible. If the control panel is too complicated and you have to call the hosting company every time you need to make a minor adjustment, it can become a major hassle.

As a result, confirm that your service provider uses cPanel, Plesk, or a similar platform. At the very least, make sure the control panel isn't difficult to understand.

4. Shared vs private

This is something else to think about: what kind of hosting do you require? You'll probably be fine with a shared hosting service if you only want to host a basic showcase webpage. Shared plans are less expensive and, for the most part, easier to use and manage.

A shared hosting service is similar to sharing a personal computer in that it allows you to share a server with a variety of other website owners. Private hosting, on the other hand, is needed if you choose to host a more professional or complicated website.

They are more difficult to deal with and cost more, but that is the price you pay for more professional hosting.

5. Hardware

This part does not matter much to most of us, since we're typically hosting small webpages with low to medium traffic and predictable bandwidth. However, as your web project grows and needs to do more than just serve pages, you'll need to start thinking about hardware.

CPUs, GPUs, RAM, and storage (SSD vs SATA hard drives) are just a few of the factors to consider. How much computation does your web application need? What kind of traffic do you anticipate? These are things to think through ahead of time.
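
The questions above can be turned into a back-of-the-envelope estimate. Every number in this sketch is an assumption you would replace with your own figures:

```python
def monthly_bandwidth_gb(avg_page_mb: float, daily_visits: int,
                         pages_per_visit: float) -> float:
    """Rough monthly transfer estimate for sizing a hosting plan."""
    daily_mb = avg_page_mb * daily_visits * pages_per_visit
    return daily_mb * 30 / 1024  # MB per day -> GB per month

# e.g. a 2 MB average page, 1,000 visits/day, 3 pages per visit:
print(round(monthly_bandwidth_gb(2.0, 1000, 3.0), 1))  # 175.8
```

Comparing that figure against a plan's transfer cap tells you quickly whether the cheapest tier is realistic.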

6. Backup

This is an important function. Consider what would happen if all of your website blogs, tweets, and other data were accidentally removed. Most service providers strive to make their services extremely dependable, but incidents are unavoidable.

You might, for instance, delete content by mistake. In any case, the majority of reputable web hosting companies have a reliable backup service. You must ensure that yours does as well. Inquire about your potential provider's disaster recovery plan. One hosting company, A Small Orange, for example, offers free regular automatic backups.
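
Even with provider-side backups, keeping your own copy of critical content is cheap insurance. A minimal sketch using only Python's standard library (the directory names are placeholders):

```python
import tarfile
import time
from pathlib import Path

def backup_site(site_dir: str, backup_dir: str) -> Path:
    """Write a timestamped .tar.gz archive of site_dir into backup_dir."""
    Path(backup_dir).mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = Path(backup_dir) / f"site-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(site_dir, arcname=Path(site_dir).name)
    return archive
```

Scheduled via cron (or your host's task scheduler), a script like this complements whatever automatic backups the provider offers.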

7. Email features

This is another feature to think about. What functionality does the email service provider guarantee? Regardless of what you might have heard or read about social media replacing the function of web email, believe me when I say that email will continue to play a significant role in your online presence.

You must ensure that the email service you receive with your hosting includes essential features such as spam control. Many providers, for example, offer unrestricted email forwarding and auto-response services. It's always a good idea to check with the provider first.

8. Scalability

This is yet another important factor to consider when planning your company. If your company is expanding, you should expect your online presence to expand as well. And as your web presence grows, you'll need to update your service. Anything from your hardware to tech support is included.

Anyway, some providers, such as Interserver, provide automatic scalability, which means that if the current system can't manage the incoming traffic/bandwidth, it will be automatically upgraded. In any case, you can check to see what kind of scalability services your provider provides.

What is LogicMonitor?

posted on May 10, 2021


LogicMonitor is a fast option for getting a server monitoring infrastructure up and running. A key differentiator for the platform is its automated discovery feature, which uses different protocols to rapidly find devices and applications so they can be monitored.

LogicMonitor has both on-premises capabilities for servers as well as hypervisors and cloud deployments.

Monitoring templates are another core capability of the LogicMonitor platform, providing administrators with a pre-configured set of items that other organizations monitor for a given server or application. In the past, some organizations expressed concern about the platform's user reporting module, but overall it is well regarded as a solid server and infrastructure monitoring tool.
Service Description

LogicMonitor uses over 20 standard protocols to identify a given device so it can be onboarded for monitoring.

The platform provides insights into app availability, end user experience and performance metrics for the entire IT infrastructure, including servers, virtual machines, storage databases and apps.
LogicMonitor Pros and Cons

Pros

LogicMonitor is primarily cloud-based, so it’s easy to set up and scale to serve the needs of growing businesses. Any organization looking to rapidly deploy and benefit from server and infrastructure monitoring will find LogicMonitor a valuable tool.

The vendor offers multiple forms of support and training materials to get teams up and running and using the tool efficiently. Support varies based on the subscription tier. Community forums, online documents and product training videos are available to all users. Pro users get chat and email-based support and onboarding support, while enterprise also gets phone support.

Cons

While there are many benefits to using a cloud-based or hybrid-cloud server monitoring platform, it can also cause issues: LogicMonitor requires an Internet connection to operate, so any Internet outage will block your ability to monitor servers.
LogicMonitor Features

Automated Deployment:

Collector technology makes use of industry-standard protocols to deploy and monitor devices.

Real-Time Performance Metrics:

For servers, LogicMonitor tracks multiple metrics, including application requests per second, average and peak response times, latency, and CPU and memory utilization.
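
The average-and-peak pattern is easy to illustrate. This is not LogicMonitor's implementation, just a sketch of how such per-server metrics accumulate:

```python
class ResponseStats:
    """Accumulates average and peak response times from observed samples."""

    def __init__(self) -> None:
        self.count = 0
        self.total = 0.0
        self.peak = 0.0

    def record(self, seconds: float) -> None:
        """Fold one observed response time into the running statistics."""
        self.count += 1
        self.total += seconds
        self.peak = max(self.peak, seconds)

    @property
    def average(self) -> float:
        return self.total / self.count if self.count else 0.0

stats = ResponseStats()
for sample in (0.1, 0.3, 0.2):
    stats.record(sample)
print(stats.peak)  # 0.3
```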

Customizable Dashboard:

The LogicMonitor dashboard can be easily adjusted by administrators to display different data sets.

Alert Escalation Chains:

To make sure alerts of different severity reach the right people, admins can create escalation chains that designate certain types of alerts classifications to the appropriate staff member within the organization. Different alert rules can be configured to identify which escalation chain to use for different devices.
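
At its core, an escalation chain is a mapping from alert severity to an ordered list of recipients. The chain names and recipients below are invented for illustration and are not LogicMonitor's actual configuration format:

```python
# Hypothetical severity-to-recipients mapping; all names are invented.
ESCALATION_CHAINS = {
    "critical": ["oncall-engineer", "team-lead", "ops-manager"],
    "warning": ["oncall-engineer"],
    "info": ["ops-dashboard"],
}

def route_alert(severity: str) -> list[str]:
    """Return the recipients for an alert, falling back to the info chain."""
    return ESCALATION_CHAINS.get(severity, ESCALATION_CHAINS["info"])

print(route_alert("critical"))  # ['oncall-engineer', 'team-lead', 'ops-manager']
```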

Reporting:

Full reporting capabilities on performance metrics, alerts and utilization.

How Much Does LogicMonitor Cost?

LogicMonitor pricing varies depending on each organization’s specific needs and you will need to contact the vendor for a custom quote. However, it offers two general editions of the platform: Pro and Enterprise.

  • The Pro edition gives full access to the platform’s cloud-based infrastructure. The plan supports up to 199 devices.
  • The Enterprise edition adds more AI-based capabilities, including root cause analysis, dynamic thresholds, and forecasting. This plan supports 200 or more devices.


Why and When to Upgrade Servers

posted on May 10, 2021


Like any other asset or device, servers depreciate over time and malfunction when you least expect it. The good news: we know why and when to upgrade servers.

Most upgrades fall into two buckets: upgrading to the newest technology, and replacing existing servers for business continuity. Server complications can be anything from performance decay to limited disk space and an expired warranty. Either way, server administrators are responsible for maintaining and maximizing the technology that fuels our organizations.

We dive into the life of our high-powered computing friends and the multitude of reasons it might be time to consider upgrading your servers.

The Life of a Server

Servers, sadly, have not been designed to live forever. Technical management like part replacement and regular upgrades can extend a server's life, but, in the end, servers typically last only 3-5 years. Depreciation and the hardware life cycle play a role, but so does the RAID storage configuration: adding hard drives can cut life expectancy almost in half.

Adopting an equipment lifecycle management and recycling protocol only helps ease the process of upgrading to the next server.

Should I Upgrade My Server?

Servers are critical resources to business continuity, and their health should be a priority for any managing administrators. We run through the gamut of reasons why and when you’ll want to upgrade your server.

Business Continuity Value

Servers are arguably the most critical component of any organization. As the engines that store data, maintain performance, connect, and protect, their continued performance is essential to business continuity.

We start with this because it is a bit of a catch-all for the remaining reasons to upgrade. If any high-priority server seems at risk of malfunctioning, plan accordingly, knowing the consequences can mean extended downtime, security vulnerabilities, and more.

Up-To-Date Technology

A popular reason for organizations and firms to upgrade is the demand for the newest features servers can offer. Manufacturers release hardware and software on their own cycles, a reality no organization can fully control.

An organization's current server might have another year or two before its expected end of life. During that time, the server will continue receiving manufacturer updates, but the newest server hardware might offer required features sooner rather than later. Best practice here is that administrators should not upgrade on a whim and should do so only with justification.

Server Speed

Server performance declines by roughly 14% annually, which means a server is operating at only about 40% of its initial performance by its fifth year. On the client side, slow performance can mean lagging operating systems that upend staff and customer expectations. There are ways of improving server speed, such as enabling caching, HTTP/2, or a reverse proxy, but doing so can take time and resources that administrators don't have.
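
Under a simple compound-decline model, the annual figure can be projected forward. (The studies behind these numbers may use a different decay model; this sketch only shows the arithmetic.)

```python
def relative_performance(years: int, annual_decline: float = 0.14) -> float:
    """Fraction of initial performance left after compounding an annual
    decline rate over the given number of years."""
    return (1 - annual_decline) ** years

print(round(relative_performance(5), 2))  # 0.47
```

Note that compounding 14% per year leaves roughly 47% after five years, so the 40% figure quoted in some studies implies a somewhat steeper or different decay model.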

Disk Space

Inadequate management of disk space can be a recipe for danger. Insufficient free disk space directly affects the server performance and can lead to instability, degradation of the server, or shutdown. As disk space fills, it’s essential to take steps to remove shadow copies, full backups, and logs that aren’t business-critical. Otherwise, upgrading for additional disk space is an inherent part of maintaining and scaling a business.
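
A scheduled free-space check is an easy guard against creeping disk usage. A minimal standard-library sketch (the 10% threshold is an arbitrary example):

```python
import shutil

def disk_free_percent(path: str = "/") -> float:
    """Percentage of free space on the filesystem containing `path`."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total * 100

if disk_free_percent("/") < 10:
    print("Warning: less than 10% disk space remaining")
```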

Server Noise

While servers are inherently loud, there is a limit to the cacophony. Server administrators will be familiar with their racks' normal noise and should take prompt action to identify the source of any irregular sound. From the rack's frame to the servers and their complementary parts, everything depreciates over time, and the wear and tear can result in obstructed movement within the rack. Finding the noise source can inform the next steps: replacing a damaged or malfunctioning server or another rack component.

Expired Machinery

Servers that have reached the end of their manufacturer warranty can be both a security risk to your infrastructure and costly. Without an extended warranty or upgrade, devices are disconnected from manufacturer support, including critical updates and servicing. Continued manufacturer updates can be the difference between your server catching the newest malware strain and sitting pretty. Add the potential cost of contracting a technician to service the machine, and the cost-benefit analysis could've told you to upgrade sooner.

Cost-Benefit Analysis

For the budget-minded, the task is simple: does the cost of maintaining or updating the current server outweigh the benefits of upgrading to a new server? Many of the other reasons listed are attached to the cost-benefit analysis because server performance directly affects business performance. If the price weren’t a factor, organizations wouldn’t hesitate to upgrade. Because the cost is essential, organizations try to maximize the lives of servers and, when needed, upgrade accordingly.

Resource Intensive Servers

You know that server that consistently causes timeouts and needs extra attention? It's a server that, for little identifiable reason, is wasting valuable organizational resources. Though less visible to staff and customers, server managers who work with a suite of machines can recognize which ones need regular attention. Even if no other reason applies, the organization should consider upgrading to a new server or seeking technical support.

Upgrading: A time and place for everything

Whether you’ve waited long enough, developed a personal relationship with your server, or feel like the extra noise means it’s doing hard work, it might be time to consider an upgrade.

There is a time and place for everything, including an upgrade. With server management best practices, your organization should be able to inspect server health regularly and forecast servicing and end-of-life plans. When problems arise along the way, it’s always best to prepare for the worst and be ready to upgrade. The consequences of waiting are too significant a risk.

Enterprise Service Management Secrets

posted on May 9, 2021


The divide between IT and business is closing.

First, we’re already seeing more and more organizations bring in internal IT teams, utilize software-as-a-service (SaaS) platforms, and roll out self-service models to reduce the dependencies on external, outsourced, and siloed IT resources. Next, IT organizations are now more critically involved in helping other areas of the business meet the need to provide similar self-service capabilities that IT has successfully deployed to their customers.

As these two interdependent entities begin to realize the benefits of collaboration through merged and coordinated tools and processes, there’s never been a better time than today to start adopting enterprise service management (ESM) across the organization.

Business Benefits of ESM

IT service management (ITSM) tools have advanced considerably in response to the demanding, diverse needs of the modern enterprise. ESM broadens the scope of these evolving ITSM tools beyond IT and across the organization into other areas of the business—like human resources (HR), finance, marketing, and facilities—to improve performance, deliver a transcendent customer experience, increase employee engagement, and strengthen business outcomes.

By extending and enhancing the functionality, traceability, and reproducibility of ITSM to other lines of business with ESM, organizations can yield significant business benefits beyond greater efficiencies and cost savings. For example, if businesses want to thrive in the future and survive any seismic changes, they will need to evolve into an autonomous digital enterprise that embraces intelligent, tech-enabled systems across every facet of the organization. ESM helps make rapid innovation achievable within this ecosystem.

Not only that, as we experience generational and cultural shifts that change how we consume technology, applying ITSM principles enterprise-wide can help address the very pressing needs and expectations of internal customers—the employees—and external customers. Replacing unstructured, mundane work with modern, automation-based alternatives through ESM enables personalized, responsive service that creates happy customers and frees up employees to focus on business-critical tasks.

Along this journey from basic automation to automation with deliberate human intervention to a fully autonomous solution, the level of intelligence increases, so it’s fair to characterize that progression as a path of intelligent automation. By using the power of artificial intelligence (AI) and applying methods and algorithms that can learn and adapt from the right set of data, intelligent automation is made real in solutions like ESM.

Now that we’ve uncovered the business benefits of ESM, let’s dive further into how to implement strategies for successful ESM adoption.  

Strategies for Success

Establishing SaaS-based, enterprise-wide, multi-device-compatible service management functions is necessary to adopt processes that drive digital transformation. It’s also helpful in reducing the complexity that may have resulted from hastily scaling ITSM solutions as the world shifted to remote work.

Choosing the right ESM solution can be discouraging and confusing, especially as most tools on the market are simply far-reaching ITSM tools designed for specific use cases. In order to plan wisely and ensure ESM success, consider the following:

  • Follow the tried-and-true guiding principles of ITSM derived from the ITIL 4® framework.
  • But don’t apply ITIL blindly outside of IT; understand what is valuable to stakeholders so that the transition to ESM truly benefits them.
  • Carefully assess the organization in order to determine what to improve or to replicate.
  • Progress iteratively in small achievable steps.
  • Avoid silos by including the right people throughout the process and by being transparent with sharing actionable data and open to constructive feedback.
  • Look for flexibility and functionality but be cautious of tool sprawl.
  • Ensure the ESM solution you choose is flexible enough to deliver the specific functionality for your intended use cases across the organization.
  • Be careful not to create a bespoke ESM experience using a selection of unintegrated ITSM tools, which may lead to more complexity and cost inefficiency.
  • Provide a seamless experience to customers across the organization on devices and channels of their choice.
  • Ready the organization for change.
  • Get senior-level buy-in and create adoption initiatives and training to ensure that the decision to implement an enterprise-wide solution does not come as a surprise.
  • Communicate the change before crossing the finish line.
  • Collaborate with others and work to deliver a solution that employees will want – and want to use.
  • Think long-term when developing a plan for ESM. Longer-term benefits of a more cohesive approach will help ensure better organizational maturity and expansive business benefits. After all, change can be costly and daunting for stakeholders, so it's essential to get it right, not just for today's business demands but also for future technological and cultural disruptions.

When done correctly, ESM solutions help organizations create, manage, and analyze data that improves business performance and delivers insights on growth, competitiveness, and efficiency—while also empowering users with service-oriented experiences delivered by simple, familiar, and fast user interfaces. Now that business and IT alignment is no longer a goal but a requirement for digital competitiveness, it’s time to adopt the very processes that drive digital transformation, beginning with enterprise service management. The right web hosting service provider offering dedicated and VPS servers can help with ESM.

Mixed content blocking in Firefox

posted on May 9, 2021


Firefox protects you from attacks by blocking potentially harmful, insecure content on web pages that are supposed to be secure. Keep reading to learn more about mixed content and how to tell if Firefox has blocked it.

What is mixed content and what are the risks?

HTTP is a system for transmitting information from a web server to your browser. HTTP is not secure, so when you visit a page served over HTTP, your connection is open for eavesdropping and man-in-the-middle attacks. Most websites are served over HTTP because they don't involve passing sensitive information back and forth and do not need to be secured.

When you visit a page fully transmitted over HTTPS, such as your bank, you'll see a padlock icon in the address bar (for details, see How do I tell if my connection to a website is secure?). This means that your connection is authenticated and encrypted, and thus safeguarded from both eavesdroppers and man-in-the-middle attacks.

However, if the HTTPS page you visit includes HTTP content, the HTTP portion can be read or modified by attackers, even though the main page is served over HTTPS. When an HTTPS page has HTTP content, we call that content “mixed”. The page you are visiting is only partially encrypted and even though it appears to be secure, it isn't.
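
Mixed content can be detected mechanically: any `src` or `href` that points at an `http://` URL inside an HTTPS page is suspect. A simplified illustration with Python's standard library (not how Firefox itself implements the check):

```python
from html.parser import HTMLParser

class MixedContentScanner(HTMLParser):
    """Collects (tag, url) pairs for http:// resources found in a page."""

    def __init__(self) -> None:
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http://"):
                self.insecure.append((tag, value))

scanner = MixedContentScanner()
scanner.feed('<img src="http://example.com/logo.png"><a href="https://ok.example">x</a>')
print(scanner.insecure)  # [('img', 'http://example.com/logo.png')]
```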

How can I tell if a page has mixed content?

There are two types of mixed content: mixed passive/display content and mixed active content. The difference lies in the threat level. Look for a padlock icon in your address bar to determine whether the page has mixed content.

No mixed content: secure

You’ll see a gray padlock when you are on a fully secure (HTTPS) page. To see if Firefox has blocked parts of the page that are not secure, click the gray padlock. For more information, see the Unblock mixed content section below.

Mixed content is not blocked: not secure

If you see a padlock with a red line over it, the page contains mixed active content and Firefox is not blocking insecure elements. That page is open to eavesdropping and attacks where your personal data from the site could be stolen. Unless you’ve unblocked mixed content using the instructions in the next section, you shouldn’t see this icon on a secure (HTTPS) website. Note: A padlock with a red line is also shown on unencrypted (HTTP or FTP) websites.

A padlock with a triangle indicates that Firefox is not blocking insecure passive content, such as images. By default, Firefox does not block mixed passive content; you will simply see a warning that the page isn't fully secure. Attackers may be able to manipulate parts of the page like displaying misleading or inappropriate content, but they should not be able to steal your personal data from the site. For your security, you may ask your web hosting service provider for assistance with such issues.

Unblock mixed content

Unblocking insecure elements is not recommended but can be done, if necessary:

  1. Click the padlock icon in the address bar.
  2. Click the arrow in the Site Information panel.
  3. Click Disable protection for now.

To re-enable protection, follow the preceding steps and click Enable protection.

Should I get my SSL certificate from host service or domain registrar?

posted on May 9, 2021


Whether you get your SSL Certificate from a hosting service provider or a domain registrar makes no difference to your website in terms of security. A $6.99-per-year SSL Certificate protects your users’ data the same way a $344.99-per-year one does. The big disparity in prices is due to additional features that come with more expensive SSL Certificates. The famous green bar, the dynamic site seal, and brand equity are just a few factors that contribute to a higher price.

Many reputable hosting providers and domain registrars offer SSL Certificates as an upsell to their main product. Some are even offering free SSL Certificates for a year when you buy a hosting account or a domain name, tempting you to get the SSL certificate as well. While there is nothing wrong with this selling technique and their SSL certificates, I would question their expertise in dealing with SSL Certificates.

Their main focus, the bread and butter if you wish, will always be selling hosting accounts and web domains, so it’s only natural that they will direct all their resources in providing excellent customer support for hosting and domain issues. Sure, they will offer basic support for SSL Certificates as well, but what if you need extensive assistance in installing and configuring them on a specific server? Wouldn’t it be better to get an SSL Certificate from a company that deals exclusively with them?

There are many trustworthy SSL Certificate resellers on the market. Take for instance SSL Dragon. This company is relatively new to the SSL industry, but already a rising star. They cover the whole spectrum of SSL Certificates and work closely with various Certificate Authorities to bring you the best SSL price and solution. Finally, and most importantly, SSL Dragon provides five-star customer service via online live chat and ticketing system. Check them out before getting your SSL Certificate.

What’s new in the programming world?

posted on May 9, 2021


  1. Many companies are rightly shifting their platform from in-house infrastructure to cloud-based solutions like AWS or Azure, meaning software architecture also tends to be more Domain-Driven in order to benefit from such platforms.
  2. Rapid prototyping is seen as increasingly important, both to get a solution out the door quickly and to prove the viability of projects. Hence I have seen a shift toward more language-diverse systems (e.g. Python, Java, and JavaScript), which is perfectly fine since most of them are based on microservice architectures, where components can be replaced with a different implementation without impacting the overall system. You are freer to use the right tool for the job.
  3. Linked to the above points, containerization is increasingly important, allowing applications to be encapsulated in a container without worrying about system dependencies.
  4. Event sourcing is also becoming more popular, allowing changes to application state to be stored as a sequence of events which can be replayed on demand.
  5. Reactive programming is taking over the UI development approach wherever real-time information needs to be displayed, but it is also very commonly used in backend applications working with asynchronous data streams, and many backend frameworks are adapting to this.
  6. Security is still a huge problem for companies, mainly to do with exposing information to the world wide web. I can see there is more awareness of OWASP vulnerabilities; some teams are taking action to adapt their development practices, while others don't and are in danger of malicious attacks or of exposing sensitive information. This is also linked to user data protection, another huge topic to be aware of. There are also new techniques like chaos engineering.
  7. In the testing world, visual testing is gaining importance because of the multitude of devices with different screen sizes, from laptops to smartphones, TV monitors, etc.
  8. Artificial intelligence is also being applied to more and more fields, including the one mentioned above (visual testing).
  9. Modular programming is getting embraced by most programming languages (see JavaScript ES6 or Java 9 for example).
  10. Cross-platform mobile apps: not a new idea, but it has reached a level of maturity that makes it a very good choice over native mobile development in most cases, since libraries like React Native make it easy to release high-performance mobile apps for different platforms.
  11. CI tools: these days there seem to be more options than ever, and no matter what programming language you use, there are very valid alternatives to the old-fashioned Jenkins for automated builds and continuous integration.
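
Event sourcing (point 4 above) is the most code-shaped of these trends: state is rebuilt by replaying an append-only log of events rather than stored directly. A minimal sketch over an invented account domain:

```python
from functools import reduce

def apply(balance: int, event: dict) -> int:
    """Fold a single event into the current state (an account balance)."""
    if event["type"] == "deposit":
        return balance + event["amount"]
    if event["type"] == "withdraw":
        return balance - event["amount"]
    return balance  # unknown events leave state unchanged

events = [
    {"type": "deposit", "amount": 100},
    {"type": "withdraw", "amount": 30},
    {"type": "deposit", "amount": 5},
]

# Replaying the full log on demand yields the current state.
print(reduce(apply, events, 0))  # 75
```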

What metrics do you look at to evaluate the success of your software product?

posted on May 9, 2021


It is extremely important for Product Managers to have their metrics in check to ensure the success of their product! Metrics help you manage your product and measure its performance so you can figure out how to improve it if needed!

Here are the major metrics that matter the most for almost all product people:

  • MAUs / DAUs: Monthly Active Users (MAU) and Daily Active Users (DAU) are a great overview of a digital product’s overall health. They help to track whether your user base is growing or not, and how ‘sticky’ your product is for end-users.
  • Customer Conversion Rate: A low Customer Conversion Rate shows that people are landing on your app/website, and not really finding what they’re expecting, or they’re disappointed. Always aim high!
  • Churn & Customer Retention Rate: This metric refers to the rate at which customers stop doing business with you. What you want is a high Customer Retention Rate, where more people come back than disappear forever.
  • NPS & CSAT Score: Net Promoter Score (NPS) and Customer Satisfaction Score (CSAT) are great ways to measure the sentiments of your users. Your NPS, in a nutshell, tells you how well your product is loved by users. CSAT can also be used to measure how happy users are with individual processes and features.
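
The arithmetic behind churn and NPS is simple to sketch (the sample numbers below are invented):

```python
def churn_rate(customers_start: int, customers_lost: int) -> float:
    """Share of customers lost over a period."""
    return customers_lost / customers_start

def nps(promoters: int, passives: int, detractors: int) -> float:
    """Net Promoter Score: % promoters minus % detractors (-100 to 100)."""
    total = promoters + passives + detractors
    return (promoters - detractors) / total * 100

print(churn_rate(1000, 50))  # 0.05 -> 5% churn
print(nps(70, 20, 10))  # 60.0
```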

It's important to note that no metric in isolation can give you all of the information you need, and there are so many out there which are worth keeping track of. What you keep an eye on will depend on your business strategy, the industry you’re in, and what stage of growth you’re in.

Sometimes, metrics will be decided for you, as it’s important to keep your product’s KPIs and OKRs aligned with those of your company.

Regularly Assessing Your Enterprise’s Cybersecurity Posture

posted on May 8, 2021


Every enterprise has unique security requirements and standards based on its risk profile and tolerance. In fact, according to Gartner, 78% of organizations use 16 or more security tools and spend more than $150B on information security every year.

Despite these tools and spending, it remains very difficult to assess how secure and protected organizations are against constantly evolving cyberattacks.

However, there is a range of methods available to assess your enterprise’s security posture.

Security Scoring

Key performance indicators (KPIs) can be used to assess your cybersecurity posture across all security configurations and controls. KPIs are one way to answer the question of how secure an organization is, whether in absolute terms, relative to its own historical levels, or compared to organizations of similar size, geography, or business. Whether internally developed or established by industry-available tools, KPIs can provide a reasonable relative assessment. But simply being better than average, or improving over time, does not necessarily mean that your security is adequate for your level of risk. As far as web hosting is concerned, a reputable web hosting service provider will take care of your cybersecurity issues.
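
A KPI-based score is often just a weighted average of normalized indicators. The categories and weights below are invented purely to illustrate the idea and are not an industry standard:

```python
# Hypothetical KPI categories and weights (weights sum to 1.0).
KPI_WEIGHTS = {
    "patch_coverage": 0.4,
    "mfa_adoption": 0.3,
    "incident_response": 0.3,
}

def security_score(kpis: dict[str, float]) -> float:
    """Weighted average of KPI values in [0, 1], scaled to 0-100."""
    return sum(KPI_WEIGHTS[k] * v for k, v in kpis.items()) * 100

print(round(security_score(
    {"patch_coverage": 0.9, "mfa_adoption": 0.6, "incident_response": 0.8}
), 1))  # 78.0
```

Tracked over time, such a score supports the historical comparisons described above, though the caveat still applies: improvement does not guarantee adequacy.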

Penetration Testing

Engaging a red team of ethical hackers to attempt to bypass your security configurations, controls, and teams can be very effective in understanding your enterprise’s risk of breach. These groups are experts in the latest tools, techniques, and tactics. They act like cybercriminals and attempt to breach your defenses, which is an excellent way to stress test every aspect of your security, including employee awareness. This approach helps you determine which defenses are strong and which are weak. A key limitation is that it is dependent on the expertise of the red team and it only occurs at a single point for a defined scope of attack.

Breach Attack Simulation

Breach attack simulation (BAS), similar to penetration testing, attempts to assess the totality and effectiveness of your defenses, but it uses automation tools to seek entry, rather than human experts. BAS can be run regularly and broadly, rather than at a single point in time or scope. However, the attacks are more programmatic, so they may be less sophisticated or customized than penetration testing.

Independent Effectiveness Testing

Alongside organization-specific assessments of overall security, expert labs run independent tests of specific security tools. These tests often benefit from a larger sample set of attacks, as they are relevant to a broad set of organizations, and in many cases, will provide comparative scoring for security tools of the same type. The common downside is that they operate in a lab, rather than the real-world where conditions may vary from those of your organization, particularly over time. These assessments also typically focus on just one type of control, such as network security, email security, or endpoint security. They rarely test combinations of controls.

MITRE Engenuity ATT&CK Evaluations

MITRE Engenuity’s ATT&CK Evaluations are another useful tool. These evaluations test a range of security tools, typically in the same security category, and expose them to one or a small number of sophisticated cybercriminal campaigns. These campaigns comprise a series of tactics and techniques designed to accomplish a defined cyber mission. The key benefit of this approach is that enterprise security teams gain visibility into the inner workings of security controls. They can understand not only what the solution detects but also why and how it does so. Seeing their operation can give teams more confidence in the type of protection they deliver. The evaluation goes beyond a single attack, sample set, point in time, or control. Evaluation results also can be combined across controls for a more comprehensive view of coverage or exposure.

The primary drawback is that cybercriminals’ tactics and techniques evolve over time, and the evaluation results are constrained to the timeframe in which the campaigns are run. The evaluations also focus only on detection (and/or blocking) of the attack technique, with no ability to assess what else (including legitimate operations) might be flagged by the control.


Enterprises have a range of options to assess their security posture, control by control or as a whole. If the objective is to do better than the average organization, security scoring is a great tool. If your goal is to push your security posture to higher levels, penetration testing and/or breach attack simulation are great aids. For granular assessments of individual security controls at points of exceptional risk, independent effectiveness testing can help. Lastly, for planning and implementing a rigorous and resilient defense based on capabilities across controls in aggregate, the MITRE ATT&CK Evaluation is a valuable tool.

Learn more about Fortinet’s FortiEDR solution and how it has the unique ability to defuse and disarm a threat in real time, even after an endpoint is already infected.

Server Security Top Practices

posted on May 8, 2021


Your servers are your business. That is a fact in the 21st century. And your servers can make or break your business. Well-maintained servers can drive your business forward and bring in revenue. Poorly managed servers can mean lost business, data, or customer information, and that can be crippling, if not outright fatal, to a company.

Because of the critical role they play, the confidential organizational data and information stored on your servers are extremely valuable. There is a popular saying: “data is the new oil.” Or gold; take your pick.

If you’re not sure how to secure your servers, or if you’re not sure you have covered all the bases, this article will offer some of the security tips that you can use to secure your servers.

Tips for Server Security
Keep the Software and OS Updated

In server security, staying on top of software and operating system security fixes is essential. System hacks and compromises frequently occur through unpatched software. Software vendors usually push out notifications of updates to customers, and you should apply them without delay. Server software is extensively tested before release, although you may still want to test updates for compatibility issues in your own environment. Patch management tools can help, as can vulnerability scanners and other tools that look for security weaknesses.

Automate and Use AI Whenever Possible

To err is human, and the majority of major server outages have been caused by human mistakes. People are overloaded and may miss things. To reduce that risk, automate wherever possible. Most systems support the automatic downloading and installation of patches, for instance, and there is a growing list of AI products to monitor, protect, and upgrade your system.

Use Virtual Private Networks

Private networks are based on private Internet Protocol (IP) address space. A VPN is called private because packets addressed within it are not exposed on the public network; traffic travels through an encrypted tunnel.

A VPN allows you to create secure connections between computer devices located in different places. It lets you carry out operations on your servers in a secure manner.

You can exchange information with other servers on the same account without exposure to outside interception. To keep your server safe, set up a Virtual Private Network.

Consider Zero Trust Networks

One of the weaknesses of firewalls and VPNs is that they don’t prevent internal movement. Once a hacker has breached your walls, they pretty much have free movement throughout the network. That’s where Zero Trust Networks come in. As their name implies, Zero Trust Networks don’t allow a user or device to be trusted to access anything until proven otherwise. This is known as a “least privilege” approach, which requires rigorous access controls to everything.

Encrypt Everything

No data should move around your servers unencrypted. Secure Sockets Layer (SSL) certificates (in practice now implemented by SSL’s successor, Transport Layer Security, or TLS) are security protocols that guard the communication between two systems over the Internet. The same holds true for your internal systems. With SSL/TLS certificates, only the intended recipient has the key to decrypt the information.
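For internal systems, a certificate can be issued by an internal CA or self-signed. As an illustrative sketch, assuming the standard openssl CLI (the subject name and file paths are placeholders; anything public-facing should use a certificate from a trusted CA instead):

```shell
# Issue a self-signed certificate for an internal system.
# A temporary directory keeps this sketch self-contained.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=internal.example" \
  -keyout "$certdir/key.pem" -out "$certdir/cert.pem"

# Inspect the resulting certificate's subject:
openssl x509 -in "$certdir/cert.pem" -noout -subject
```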

When connecting to a remote server, use SSH (Secure Shell) to encrypt all data transmitted in the exchange. Use SSH keys to authenticate with a public/private key pair, using RSA 2048-bit encryption, instead of a more easily broken password.
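A minimal sketch of key-based setup, assuming the standard OpenSSH ssh-keygen and ssh-copy-id tools (the paths, comment, and hostname are placeholders; a temporary directory keeps the sketch self-contained):

```shell
# Generate an RSA 2048-bit key pair for SSH authentication.
# In practice you would write to ~/.ssh and protect the private key
# with a passphrase instead of -N "".
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -f "$keydir/server_key" -N "" -C "admin@example"

# Install the public key on the server (hostname is a placeholder):
#   ssh-copy-id -i "$keydir/server_key.pub" admin@server.example.com
# Then connect with the key instead of a password:
#   ssh -i "$keydir/server_key" admin@server.example.com
```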

To transfer files between servers, use the File Transfer Protocol Secure (FTPS). It encrypts data files and your authentication information.

Finally, require connections from outside the firewall to use a virtual private network (VPN). VPNs use their own private networks with a private IP to establish isolated communication channels between servers.

Don’t Just Use Standard Firewalls

Firewalls are a must-have to ensure that your servers are safe, but there are more options than just on-premises firewalls. There are also managed security service providers (MSSPs) who provide a managed firewall service for your network. Depending on the extent of the service agreement, the MSSP may perform firewall installation, application control, and web content filtering, assisting in determining which applications and web content (URLs) to block. They will also help manage patching and updates. There are literally hundreds of MSSPs to choose from.

Change Defaults

The default privileged account in most systems is the root account, and that is what hackers target. So disable it, or at least disable direct logins to it. The same goes for any account named admin. Don’t use obvious account names on your network.

You can increase server security by reducing your so-called attack surface, which means running only the bare minimum of services needed to operate. The server versions of Windows and Linux come with a myriad of services, and you should turn off any that are not needed.

Wi-Fi access points default to broadcasting their identity (SSID), so any endpoint device in range will see it. Go into the access point’s settings and turn off broadcasting, so anyone who wants to use it must know the access point’s actual name. And don’t use the default name from the manufacturer.

Create Multi-Server or Virtual Environments

Isolation is one of the best types of server protection you can have because if one server is compromised, the hacker is locked into that one server. For example, it is standard practice to separate the database servers from the web application servers.

Full separation would require having dedicated metal servers that do not share any components with other servers. That means more hardware, which can add up. Instead, virtualization can serve as an isolation environment.

Having isolated execution environments in a data center allows what is called Separation of Duties (SoD). SoD operates on the principle of “Least Privilege,” which essentially means that users should not have more privileges than they need to complete their daily tasks. To protect and secure the system and the data, establish a hierarchy of users, each with his or her own user ID and with permissions as minimal as possible.

If you cannot afford or do not require full isolation with dedicated server components, you can also choose to isolate execution environments, otherwise known as virtual machines and containers.

Also, the newest server processors from Intel and AMD have specialized VM encryption so as to isolate a VM from the others. Therefore, if one VM is compromised, the hacker cannot get to the others.

Do Passwords Right

Passwords are always a security problem because people are so sloppy with them. They use the same ones everywhere or use simple, easily guessed passwords like “password,” “abcde,” or “123456.” You might as well not have any passwords at all.

Make it a requirement for passwords to contain a mix of upper- and lowercase letters, numbers, and symbols. Force password changes at regular intervals, and ban the reuse of old passwords.
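On a Linux server, such a complexity policy can be expressed as a pam_pwquality configuration fragment. The values below are illustrative, not a recommendation:

```ini
# /etc/security/pwquality.conf (illustrative values)
minlen  = 12     # minimum total length
ucredit = -1     # require at least one uppercase letter
lcredit = -1     # require at least one lowercase letter
dcredit = -1     # require at least one digit
ocredit = -1     # require at least one symbol
```

Regular rotation can then be enforced separately, for example with password-aging settings such as `chage -M 90 <user>` on Linux.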

Close Hidden Open Ports

Attacks can come through open ports that you don’t even realize are open. Therefore, don’t assume you know every port; that’s impossible to keep in your head. Ports that aren’t absolutely essential should be closed. Windows Server and Linux share a common command, called netstat, which can be used to determine which ports are listening and to reveal details of current connections (the exact options differ between the two platforms; the list below uses the Linux form).

  •     Per-protocol statistics — “netstat -s”
  •     List all TCP ports — “netstat -at”
  •     List all UDP ports — “netstat -au”
  •     All open listening ports — “netstat -l”
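Because netstat output is verbose, a short filter helps when auditing listening TCP ports. Sample output lines are embedded below so the sketch is self-contained; in practice you would pipe real `netstat -at` (or `ss -ltn`) output into the same awk filter:

```shell
# List the listening TCP port numbers from netstat-style output.
printf '%s\n' \
  'tcp  0  0 0.0.0.0:22    0.0.0.0:*  LISTEN' \
  'tcp  0  0 127.0.0.1:631 0.0.0.0:*  LISTEN' \
  'udp  0  0 0.0.0.0:68    0.0.0.0:*' |
awk '$1 == "tcp" && $NF == "LISTEN" { n = split($4, a, ":"); print a[n] }'
# prints: 22 and 631 (the listening TCP ports in the sample)
```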

Do Backups Frequently and Properly

In 2009, a server full of flight simulation files was hacked and its contents destroyed. The site was spread across two servers, which used each other for backup: server A backed up to server B, and server B to server A. The result was that everything was lost.

Don’t be like that site. Not only do you need regularly scheduled backups, but they should go to offsite locations outside your network. Offsite backups are essential, especially against ransomware attacks, where recovery can be as simple as wiping the infected drive and restoring from backup.

Also consider disaster recovery as a service (DRaaS), one of the many as-a-service offerings, which offers backup through a cloud computing model. It is provided by many on-premises vendors as well as cloud service providers.

Whether you have automated backup jobs or do them manually, make sure to test the backups. This should include sanity checks in which administrators or even end users verify that data recovery is coherent.
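A minimal version of such a sanity check can be sketched in shell; temporary files stand in for the real source and backup locations:

```shell
# Sketch of a post-backup sanity check: trust the backup only if the
# restored copy matches the original byte for byte.
src=$(mktemp); bak=$(mktemp)
echo "critical business data" > "$src"
cp "$src" "$bak"              # stand-in for the real backup/restore job
if cmp -s "$src" "$bak"; then
  echo "backup verified"
else
  echo "backup MISMATCH" >&2
fi
# prints: backup verified
```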

Perform Regular and Frequent Security Audits

Without regular audits, it’s impossible to know where problems might exist or how to address them to keep your server fully protected. Check your logs for suspicious or unusual activity. Check for software, OS, and hardware firmware updates. Check system performance. Hackers often cause a spike in system activity, so unusual drive, CPU, or network activity can be a sign of trouble. Servers are not deploy-and-forget; they must be checked constantly.
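As one small example of log checking, failed SSH logins can be tallied per source address. Sample log lines are embedded so the sketch is self-contained; on a real Linux server the input would be /var/log/auth.log or the systemd journal:

```shell
# Tally failed SSH login attempts per source IP.
printf '%s\n' \
  'May  8 10:01:01 srv sshd[100]: Failed password for root from 203.0.113.9 port 4242 ssh2' \
  'May  8 10:01:03 srv sshd[101]: Failed password for admin from 203.0.113.9 port 4243 ssh2' \
  'May  8 10:02:10 srv sshd[102]: Accepted publickey for deploy from 198.51.100.7 port 2222 ssh2' |
grep 'Failed password' | awk '{ print $(NF-3) }' | sort | uniq -c
# prints a count of 2 for 203.0.113.9
```

A repeated, high count from a single address is exactly the kind of unusual activity an audit should surface.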

Using Netsh to Manage Remote Servers and Workstations

posted on May 8, 2021


The network shell (Netsh) of Windows can be a great way to view or manage network-related settings via the Command Prompt. You can use it to run one-off commands or utilize scripts for some automation. And as we’ll discuss today, Netsh can also be used to manage remote workstations and servers, including web hosting servers.
Using the remote functionality of Netsh

The built-in remote functionality of Netsh allows you to send commands to individual machines on the network. You can specify a remote machine to run the command or script on by adding the -r option. If necessary, you can also specify login credentials for the remote connection: -u for the username on the remote machine and -p for the password.

Open a Command Prompt and enter the following command to access the Netsh CLI on a remote machine:

netsh -r hostname -u domain\admin -p password

Once you’ve established that you can gain remote access, you can also run netsh commands directly. For instance, here’s how to obtain the IP configuration:

netsh -r hostname -u domain\admin -p password interface ip show config

For the -r option, you can also use the remote machine’s IP address or FQDN in addition to its host name.

If you run into connectivity issues with remote machines, ensure the Remote Registry service is running on the remote computer. If it is not, then Windows may display a “Network Path Not Found” error message. Additionally, verify File and Printer Sharing for Microsoft Networks is enabled in the network connection properties of the remote machine. As always, ensure there aren’t any firewalls blocking the traffic.

If connectivity issues persist, try the following Registry edit:

  • Open RegEdit on the remote machine and navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
  • Add a new DWORD value called “LocalAccountTokenFilterPolicy,” if it doesn’t already exist, and ensure its value is set to “1”
  • Reboot the remote machine.

Using Netsh via psexec

Netsh lacks an easy way to simultaneously manage multiple remote machines. Though you could get creative with the built-in remote functionality, like incorporating multiple remote addresses in batch files and other scripts, you might have better luck pursuing other options.

For instance, utilizing the PsExec utility from Windows Sysinternals allows you to push out Netsh (or any other commands) to multiple machines at once.

Once you download PsExec, open a Command Prompt to the folder containing it and try the following command to access the CLI of a remote machine:

psexec \\hostname -u domain\admin -p password cmd

If the remote machine is Windows Vista or higher, you may need to use the -h option to have the process run with the account’s elevated token.

Once you’ve established that you can gain remote access, you can also run netsh commands directly, for instance:

psexec \\hostname -u domain\admin -p password cmd.exe /c netsh.exe interface ip show config

If an interactive CLI isn’t needed — for example, if you’re running a command that doesn’t provide output — consider adding the psexec -d option, which tells PsExec not to wait for the remote process to terminate. On the other hand, if you’d like the program to be interactive on the desktop of the remote machine, use the -i option.
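One way to incorporate multiple remote addresses, as suggested above, is a wrapper loop over a host list. Since PsExec itself runs only on Windows, this POSIX-shell sketch is a dry run that just prints the command that would be executed for each host; hosts.txt, the credentials, and the netsh command are all placeholders:

```shell
# Dry-run fan-out: print the PsExec command that would be run against
# each host in hosts.txt (one hostname per line).
printf '%s\n' server01 server02 > hosts.txt
while read -r host; do
  printf '%s\n' "psexec \\\\$host -u domain\admin -p password cmd.exe /c netsh.exe interface ip show config"
done < hosts.txt
```

On a Windows machine, the same loop structure can be expressed in a batch file or PowerShell, with the printed command executed instead of echoed.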