Use a free SSL certificate for 3 months

Create your key (mail.saic.key) and your request (mail.saic.csr):

openssl req -new -newkey rsa:4096 -nodes -subj '/CN=mail.saic.it/O=SAIC, Inc./C=IT/ST=Italy/L=Viadana' -keyout mail.saic.key -out mail.saic.csr

Go to the website below and follow the instructions to get the certificate for your Common Name (mail.saic.it) and the CA certificate:

Certificate Authority https://www.sslforfree.com

I configured my DNS.

I set all the file permissions:

chmod 444 mail.saic.*

then vim /etc/postfix/main.cf

smtp_tls_key_file                         = /etc/ssl/certs/mail.saic.key

smtp_tls_cert_file                        = /etc/ssl/certs/mail.saic.crt

smtp_tls_CAfile                           = /etc/ssl/certs/saic.sslforfree.ca
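
Note that the smtp_tls_* parameters apply to Postfix acting as a TLS client (outbound mail); the certificate presented to inbound connections is set with the smtpd_tls_* equivalents. Either way, a quick sanity check of what the server presents on port 25 (hostname and port assumed) looks like this:

# Reload Postfix, then inspect the certificate chain offered over STARTTLS
postfix reload
openssl s_client -connect mail.saic.it:25 -starttls smtp -showcerts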

Here is the console for renewing the certificate.

Great!

 

It can be useful to check a certificate and key before applying them to your server. The following commands help verify the certificate, key, and CSR (Certificate Signing Request).

Check a certificate

Check a certificate and return information about it (signing authority, expiration date, etc.):

openssl x509 -in server.crt -text -noout

Check a key

Check the SSL key and verify the consistency:

openssl rsa -in server.key -check

Check a CSR

Verify the CSR and print CSR data filled in when generating the CSR:

openssl req -text -noout -verify -in server.csr

Verify that a certificate and key match

These two commands print out md5 checksums of the certificate and key; the checksums can be compared to verify that the certificate and key match.

openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5

 

Self-Signed Certificate: Commands

Create a private key

openssl genrsa -out server.key 4096

Generate a new private key and certificate signing request

openssl req -out server.csr -new -newkey rsa:4096 -nodes -keyout server.key

Generate a self-signed certificate

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 -keyout server.key -out server.crt

Generate a certificate signing request (CSR) for an existing private key

openssl req -out server.csr -key server.key -new

Generate a certificate signing request based on an existing certificate

openssl x509 -x509toreq -in server.crt -out server.csr -signkey server.key

Remove a passphrase from a private key

openssl rsa -in server.pem -out newserver.pem

Parse a list of revoked serial numbers

openssl crl -inform DER -text -noout -in list.crl

Check a certificate signing request (CSR)

openssl req -text -noout -verify -in server.csr

Check a private key

openssl rsa -in server.key -check

Check a public key

openssl rsa -inform PEM -pubin -in pub.key -text -noout
openssl pkey -inform PEM -pubin -in pub.key -text -noout

Check a certificate

openssl x509 -in server.crt -text -noout
openssl x509 -in server.cer -text -noout

Check a PKCS#12 file (.pfx or .p12)

openssl pkcs12 -info -in server.p12

Verify a private key matches a certificate

openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5
openssl req -noout -modulus -in server.csr | openssl md5

Display all certificates including intermediates

openssl s_client -connect www.paypal.com:443 -showcerts

Convert a DER file (.crt .cer .der) to PEM

openssl x509 -inform der -in server.cer -out server.pem

Convert a PEM file to DER

openssl x509 -outform der -in server.pem -out server.der

Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to PEM

openssl pkcs12 -in server.pfx -out server.pem -nodes

Convert a PEM certificate file and a private key to PKCS#12 (.pfx .p12)

openssl pkcs12 -export -out server.pfx -inkey server.key -in server.crt -certfile CACert.crt

Apache vs Nginx: Practical Considerations

Introduction

Apache and Nginx are the two most common open source web servers in the world. Together, they are responsible for serving over 50% of traffic on the internet. Both solutions are capable of handling diverse workloads and working with other software to provide a complete web stack.

While Apache and Nginx share many qualities, they should not be thought of as entirely interchangeable. Each excels in its own way and it is important to understand the situations where you may need to reevaluate your web server of choice. This article will be devoted to a discussion of how each server stacks up in various areas.

General Overview

Before we dive into the differences between Apache and Nginx, let’s take a quick look at the background of these two projects and their general characteristics.

Apache

The Apache HTTP Server was created by Robert McCool in 1995 and has been developed under the direction of the Apache Software Foundation since 1999. Since the HTTP web server is the foundation’s original project and is by far their most popular piece of software, it is often referred to simply as “Apache”.

The Apache web server has been the most popular server on the internet since 1996. Because of this popularity, Apache benefits from great documentation and integrated support from other software projects.

Apache is often chosen by administrators for its flexibility, power, and widespread support. It is extensible through a dynamically loadable module system and can process a large number of interpreted languages without connecting out to separate software.

Nginx

In 2002, Igor Sysoev began work on Nginx as an answer to the C10K problem, which was a challenge for web servers to begin handling ten thousand concurrent connections as a requirement for the modern web. The initial public release was made in 2004, meeting this goal by relying on an asynchronous, event-driven architecture.

Nginx has grown in popularity since its release due to its light-weight resource utilization and its ability to scale easily on minimal hardware. Nginx excels at serving static content quickly and is designed to pass dynamic requests off to other software that is better suited for those purposes.

Nginx is often selected by administrators for its resource efficiency and responsiveness under load. Advocates welcome Nginx’s focus on core web server and proxy features.

Connection Handling Architecture

One big difference between Apache and Nginx is the actual way that they handle connections and traffic. This provides perhaps the most significant difference in the way that they respond to different traffic conditions.

Apache

Apache provides a variety of multi-processing modules (Apache calls these MPMs) that dictate how client requests are handled. Basically, this allows administrators to swap out its connection handling architecture easily. These are:

  • mpm_prefork: This processing module spawns processes with a single thread each to handle requests. Each child can handle a single connection at a time. As long as the number of requests is fewer than the number of processes, this MPM is very fast. However, performance degrades quickly once requests surpass the number of processes, so this is not a good choice in many scenarios. Each process has a significant impact on RAM consumption, so this MPM is difficult to scale effectively. It may still be a good choice if used in conjunction with other components that are not built with threads in mind. For instance, PHP is not thread-safe, so this MPM is recommended as the only safe way of working with mod_php, the Apache module for processing PHP files.
  • mpm_worker: This module spawns processes that can each manage multiple threads. Each of these threads can handle a single connection. Threads are much more efficient than processes, which means that this MPM scales better than the prefork MPM. Since there are more threads than processes, this also means that new connections can immediately take a free thread instead of having to wait for a free process.
  • mpm_event: This module is similar to the worker module in most situations, but is optimized to handle keep-alive connections. When using the worker MPM, a connection will hold a thread for as long as the connection is kept alive, regardless of whether a request is actively being made. The event MPM handles keep-alive connections by setting aside dedicated threads for them and passing active requests off to other threads. This keeps the module from getting bogged down by keep-alive requests, allowing for faster execution. It was marked stable with the release of Apache 2.4.

As you can see, Apache provides a flexible architecture for choosing different connection and request handling algorithms. The choices provided are mainly a function of the server’s evolution and the increasing need for concurrency as the internet landscape has changed.
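
As a concrete sketch (CentOS 7 paths assumed, as elsewhere in these notes), you can check which MPM is active and switch it by editing the MPM module configuration:

# Show the MPM currently in use
httpd -V | grep -i mpm
# On CentOS 7 the MPM is chosen in /etc/httpd/conf.modules.d/00-mpm.conf:
# uncomment exactly one LoadModule line, e.g.
#   LoadModule mpm_event_module modules/mod_mpm_event.so
systemctl restart httpd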

Nginx

Nginx came onto the scene after Apache, with more awareness of the concurrency problems that would face sites at scale. Leveraging this knowledge, Nginx was designed from the ground up to use an asynchronous, non-blocking, event-driven connection handling algorithm.

Nginx spawns worker processes, each of which can handle thousands of connections. The worker processes accomplish this by implementing a fast looping mechanism that continuously checks for and processes events. Decoupling actual work from connections allows each worker to concern itself with a connection only when a new event has been triggered.

Each of the connections handled by the worker are placed within the event loop where they exist with other connections. Within the loop, events are processed asynchronously, allowing work to be handled in a non-blocking manner. When the connection closes, it is removed from the loop.

This style of connection processing allows Nginx to scale incredibly far with limited resources. Since the server is single-threaded and processes are not spawned to handle each new connection, the memory and CPU usage tends to stay relatively consistent, even at times of heavy load.
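
The relevant knobs live in nginx.conf; a minimal sketch (the numbers are illustrative, not recommendations) looks like this:

# nginx.conf (illustrative values)
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 1024;    # connections handled by each worker's event loop
}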

Static vs Dynamic Content

In terms of real world use-cases, one of the most common comparisons between Apache and Nginx is the way in which each server handles requests for static and dynamic content.

Apache

Apache servers handle static content using conventional file-based methods. The performance of these operations is mainly a function of the MPM methods described above.

Apache can also process dynamic content by embedding a processor of the language in question into each of its worker instances. This allows it to execute dynamic content within the web server itself without having to rely on external components. These dynamic processors can be enabled through the use of dynamically loadable modules.

Apache’s ability to handle dynamic content internally means that configuration of dynamic processing tends to be simpler. Communication does not need to be coordinated with an additional piece of software and modules can easily be swapped out if the content requirements change.

Nginx

Nginx does not have any ability to process dynamic content natively. To handle PHP and other requests for dynamic content, Nginx must pass the request to an external processor for execution and wait for the rendered content to be sent back. The results can then be relayed to the client.

For administrators, this means that communication must be configured between Nginx and the processor over one of the protocols Nginx knows how to speak (http, FastCGI, SCGI, uWSGI, memcache). This can complicate things slightly, especially when trying to anticipate the number of connections to allow, as an additional connection will be used for each call to the processor.

However, this method has some advantages as well. Since the dynamic interpreter is not embedded in the worker process, its overhead will only be present for dynamic content. Static content can be served in a straight-forward manner and the interpreter will only be contacted when needed. Apache can also function in this manner, but doing so removes the benefits in the previous section.
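
As an illustration of the hand-off described above, a typical Nginx-to-PHP-FPM block looks roughly like this (the socket path is an assumption and varies by distribution):

# inside a server { } block
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php-fpm/www.sock;    # assumed PHP-FPM socket path
}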

Distributed vs Centralized Configuration

For administrators, one of the most readily apparent differences between these two pieces of software is whether directory-level configuration is permitted within the content directories.

Apache

Apache includes an option to allow additional configuration on a per-directory basis by inspecting and interpreting directives in hidden files within the content directories themselves. These files are known as .htaccess files.

Since these files reside within the content directories themselves, when handling a request, Apache checks each component of the path to the requested file for an .htaccess file and applies the directives found within. This effectively allows decentralized configuration of the web server, which is often used for implementing URL rewrites, access restrictions, authorization and authentication, even caching policies.

While the above examples can all be configured in the main Apache configuration file, .htaccess files have some important advantages. First, since these are interpreted each time they are found along a request path, they are implemented immediately without reloading the server. Second, it makes it possible to allow non-privileged users to control certain aspects of their own web content without giving them control over the entire configuration file.

This provides an easy way for certain web software, like content management systems, to configure their environment without providing access to the central configuration file. This is also used by shared hosting providers to retain control of the main configuration while giving clients control over their specific directories.
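
A typical .htaccess dropped into a content directory might look something like this (purely illustrative, and it only works where AllowOverride permits it):

# .htaccess (illustrative)
RewriteEngine On
RewriteRule ^old-page$ /new-page [R=301,L]

AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/httpd/conf/.htpasswd
Require valid-user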

Nginx

Nginx does not interpret .htaccess files, nor does it provide any mechanism for evaluating per-directory configuration outside of the main configuration file. This may be less flexible than the Apache model, but it does have its own advantages.

The most notable improvement over the .htaccess system of directory-level configuration is increased performance. For a typical Apache setup that may allow .htaccess in any directory, the server will check for these files in each of the parent directories leading up to the requested file, for each request. If one or more .htaccess files are found during this search, they must be read and interpreted. By not allowing directory overrides, Nginx can serve requests faster by doing a single directory lookup and file read for each request (assuming that the file is found in the conventional directory structure).

Another advantage is security related. Distributing directory-level configuration access also distributes the responsibility of security to individual users, who may not be trusted to handle this task well. Ensuring that the administrator maintains control over the entire web server can prevent some security missteps that may occur when access is given to other parties.

Keep in mind that it is possible to turn off .htaccess interpretation in Apache if these concerns resonate with you.
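
Turning it off amounts to something like this in the main Apache configuration (directory path assumed):

<Directory /var/www/html>
    AllowOverride None
</Directory>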

File vs URI-Based Interpretation

How the web server interprets requests and maps them to actual resources on the system is another area where these two servers differ.

Apache

Apache provides the ability to interpret a request as a physical resource on the filesystem or as a URI location that may need a more abstract evaluation. In general, for the former Apache uses <Directory> or <Files> blocks, while it utilizes <Location> blocks for more abstract resources.

Because Apache was designed from the ground up as a web server, the default is usually to interpret requests as filesystem resources. It begins by taking the document root and appending the portion of the request following the host and port number to try to find an actual file. Basically, the filesystem hierarchy is represented on the web as the available document tree.

Apache provides a number of alternatives for when the request does not match the underlying filesystem. For instance, an Alias directive can be used to map to an alternative location. Using <Location> blocks is a method of working with the URI itself instead of the filesystem. There are also regular expression variants which can be used to apply configuration more flexibly throughout the filesystem.

While Apache has the ability to operate on both the underlying filesystem and the webspace, it leans heavily towards filesystem methods. This can be seen in some of the design decisions, including the use of .htaccess files for per-directory configuration. The Apache docs themselves warn against using URI-based blocks to restrict access when the request mirrors the underlying filesystem.
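
A small sketch of the two styles side by side (the paths are made up for illustration):

# Filesystem-based: map a URL path to a directory outside the DocumentRoot
Alias /reports /srv/reports

# URI-based: apply configuration to a location in the webspace (requires mod_status)
<Location /status>
    SetHandler server-status
</Location>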

Nginx

Nginx was created to be both a web server and a proxy server. Due to the architecture required for these two roles, it works primarily with URIs, translating to the filesystem when necessary.

This can be seen in some of the ways that Nginx configuration files are constructed and interpreted. Nginx does not provide a mechanism for specifying configuration for a filesystem directory and instead parses the URI itself.

For instance, the primary configuration blocks for Nginx are server and location blocks. The server block interprets the host being requested, while the location blocks are responsible for matching portions of the URI that come after the host and port. At this point, the request is being interpreted as a URI, not as a location on the filesystem.

For static files, all requests eventually have to be mapped to a location on the filesystem. First, Nginx selects the server and location blocks that will handle the request and then combines the document root with the URI, adapting anything necessary according to the configuration specified.

This may seem similar, but parsing requests primarily as URIs instead of filesystem locations allows Nginx to more easily function in web, mail, and proxy server roles. Nginx is configured simply by laying out how to respond to different request patterns. Nginx does not check the filesystem until it is ready to serve the request, which explains why it does not implement a form of .htaccess files.
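
A stripped-down example of that URI-first structure (hostname and paths are assumptions):

server {
    listen 80;
    server_name example.com;
    root /var/www/example;                  # only consulted once a file must be served

    location / {
        try_files $uri $uri/ =404;          # map the URI onto the filesystem last
    }

    location /api/ {
        proxy_pass http://127.0.0.1:8080;   # purely URI-based; never touches the disk here
    }
}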

Modules

Both Nginx and Apache are extensible through module systems, but the way they work differs significantly.

Apache

Apache’s module system allows you to dynamically load or unload modules to satisfy your needs during the course of running the server. The Apache core is always present, while modules can be turned on or off, adding or removing additional functionality and hooking into the main server.

Apache uses this functionality for a large variety of tasks. Due to the maturity of the platform, there is an extensive library of modules available. These can be used to alter some of the core functionality of the server, such as mod_php, which embeds a PHP interpreter into each running worker.

Modules are not limited to processing dynamic content, however. Among other functions, they can be used for rewriting URLs, authenticating clients, hardening the server, logging, caching, compression, proxying, rate limiting, and encrypting. Dynamic modules can extend the core functionality considerably without much additional work.
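
To see what is loaded on a running Apache (CentOS-style module paths assumed):

# List statically compiled and dynamically loaded modules
httpd -M
# Modules are enabled with LoadModule lines, e.g. in /etc/httpd/conf.modules.d/:
#   LoadModule rewrite_module modules/mod_rewrite.so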

Nginx

Nginx also implements a module system, but it is quite different from the Apache system. In Nginx, modules are not dynamically loadable, so they must be selected and compiled into the core software.

For many users, this will make Nginx much less flexible. This is especially true for users who are not comfortable maintaining their own compiled software outside of their distribution’s conventional packaging system. While distributions’ packages tend to include the most commonly used modules, if you require a non-standard module, you will have to build the server from source yourself.
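
You can check which modules a given Nginx binary was built with by looking at its configure arguments:

# Shows the compile-time configuration, including --with-*/--without-* module flags
nginx -V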

Nginx modules are still very useful though, and they allow you to dictate what you want out of your server by only including the functionality you intend to use. Some users also may consider this more secure, as arbitrary components cannot be hooked into the server. However, if your server is ever put in a position where this is possible, it is likely compromised already.

Nginx modules allow many of the same capabilities as Apache modules. For instance, Nginx modules can provide proxying support, compression, rate limiting, logging, rewriting, geolocation, authentication, encryption, streaming, and mail functionality.

Support, Compatibility, Ecosystem, and Documentation

A major point to consider is what the actual process of getting up and running will be given the landscape of available help and support among other software.

Apache

Because Apache has been popular for so long, support for the server is fairly ubiquitous. There is a large library of first- and third-party documentation available for the core server and for task-based scenarios involving hooking Apache up with other software.

Along with documentation, many tools and web projects include the means to bootstrap themselves within an Apache environment. This may be included in the projects themselves, or in the packages maintained by your distribution's packaging team.

Apache, in general, will have more support from third-party projects simply because of its market share and the length of time it has been available. Administrators are also somewhat more likely to have experience working with Apache not only due to its prevalence, but also because many people start off in shared-hosting scenarios which almost exclusively rely on Apache due to the .htaccess distributed management capabilities.

Nginx

Nginx is experiencing increased support as more users adopt it for its performance profile, but it still has some catching up to do in some key areas.

In the past, it was difficult to find comprehensive English-language documentation regarding Nginx due to the fact that most of the early development and documentation were in Russian. As interest in the project grew, the documentation has been filled out and there are now plenty of administration resources on the Nginx site and through third parties.

In regards to third-party applications, support and documentation is becoming more readily available, and package maintainers are beginning, in some cases, to give choices between auto-configuring for Apache and Nginx. Even without support, configuring Nginx to work with alternative software is usually straight-forward so long as the project itself documents its requirements (permissions, headers, etc).

Using Apache and Nginx Together

After going over the benefits and limitations of both Apache and Nginx, you may have a better idea of which server is more suited to your needs. However, many users find that it is possible to leverage each server’s strengths by using them together.

The conventional configuration for this partnership is to place Nginx in front of Apache as a reverse proxy. This allows Nginx to handle all requests from clients. This takes advantage of Nginx's fast processing speed and ability to handle large numbers of connections concurrently.

For static content, which Nginx excels at, the files will be served quickly and directly to the client. For dynamic content, for instance PHP files, Nginx will proxy the request to Apache, which can then process the results and return the rendered page. Nginx can then pass the content back to the client.

This setup works well for many people because it allows Nginx to function as a sorting machine. It will handle all requests it can and pass on the ones that it has no native ability to serve. By cutting down on the requests the Apache server is asked to handle, we can alleviate some of the blocking that occurs when an Apache process or thread is occupied.

This configuration also allows you to scale out by adding additional backend servers as necessary. Nginx can be configured to pass to a pool of servers easily, increasing this configuration’s resilience to failure and performance.
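
A bare-bones sketch of that setup, assuming Apache has been moved to 127.0.0.1:8080 (every name and path here is illustrative):

upstream apache_backend {
    server 127.0.0.1:8080;                  # add more backend servers here to scale out
}

server {
    listen 80;
    server_name example.com;
    root /var/www/example;

    location / {
        try_files $uri @apache;             # serve static files directly when they exist
    }

    location @apache {
        proxy_pass http://apache_backend;   # everything else goes to Apache
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}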

Conclusion

As you can see, both Apache and Nginx are powerful, flexible, and capable. Deciding which server is best for you is largely a function of evaluating your specific requirements and testing with the patterns that you expect to see.

There are differences between these projects that have a very real impact on the raw performance, capabilities, and the implementation time necessary to get each solution up and running. However, these usually are the result of a series of trade offs that should not be casually dismissed. In the end, there is no one-size-fits-all web server, so use the solution that best aligns with your objectives.

Apache Httpd web page authentication

This command creates the user and password access file:
htpasswd -c /etc/httpd/conf/.htpasswd xxxx

chown root:apache /etc/httpd/conf/.htpasswd
chmod 640 /etc/httpd/conf/.htpasswd
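
To add further users later, run htpasswd without -c (the -c flag would recreate the file and wipe existing entries):

htpasswd /etc/httpd/conf/.htpasswd anotheruser    # "anotheruser" is just an example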

In my vhost configuration file:

<VirtualHost *:80>
    ServerName sm.saic.it
    RewriteEngine On
    DocumentRoot /usr/local/sendmailanalyzer/www

    <Directory /usr/local/sendmailanalyzer/www>
        Options ExecCGI
        AddHandler cgi-script .cgi
        DirectoryIndex sa_report.cgi

        AuthType Basic
        AuthName "Restricted Content"
        AuthUserFile /etc/httpd/conf/.htpasswd
        Require valid-user

        # Apache 2.4
        # Require all granted
        # Require host example.com

        # Apache 2.2
        Order deny,allow
        # Allow from all
        # Allow from 127.0.0.1
        # Allow from ::1
        # Allow from .example.com
    </Directory>
</VirtualHost>
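
After editing the vhost, a quick way to validate and apply it (CentOS 7 service name assumed):

apachectl configtest
systemctl restart httpd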


How to test if an email address exists

Source Link

To check if a user-entered email address (mailbox.does.not.exist@webdigiapps.com) really exists, go through the following in Command Prompt on Windows or Terminal on Mac. The lines you type are marked COMMAND: and the server replies are marked RESPONSE:. Please refer to the Mac and Windows screenshots towards the end of this post.

Step 1 – Find mail exchanger or mail server of webdigiapps.com

COMMAND:
nslookup -q=mx webdigiapps.com
RESPONSE:
Non-authoritative answer:
webdigiapps.com mail exchanger = 0 mx2.sub3.homie.mail.dreamhost.com.
webdigiapps.com mail exchanger = 0 mx1.sub3.homie.mail.dreamhost.com.

Step 2 – Now we know the mail server address so let us connect to it. You can connect to one of the exchanger addresses in the response from Step 1.

COMMAND:
telnet mx2.sub3.homie.mail.dreamhost.com 25
RESPONSE:
Connected to mx2.sub3.homie.mail.dreamhost.com.
Escape character is '^]'.
220 homiemail-mx7.g.dreamhost.com ESMTP

COMMAND:
helo hi
RESPONSE:
250 homiemail-mx8.g.dreamhost.com

COMMAND:
mail from: <youremail@gmail.com>
RESPONSE:
250 2.1.0 Ok

COMMAND:
rcpt to: <mailbox.does.not.exist@webdigiapps.com>
RESPONSE:
550 5.1.1 <mailbox.does.not.exist@webdigiapps.com>: Recipient address rejected: User unknown in virtual alias table

COMMAND:
quit
RESPONSE:
221 2.0.0 Bye

Screenshots – MAC Terminal & Windows

MAC email verification
Windows email verification

NOTES:

1) The 550 response indicates that the email address is not valid: you have caught a well-formed but non-existent address. This check can run on the server and be called via AJAX when the user tabs out of the email field. The entire check takes less than 2 seconds, so you can make sure that the email is correct.
2) If the email address exists, the server will respond with 250 instead of 550.
3) There are certain servers with a CATCH-ALL address, which means all email addresses are accepted as valid on those servers (rare, but some servers do have this setting).
4) Please do not use this method continuously to check for the availability of gmail / yahoo / msn accounts etc., as this may cause your IP to be added to a blacklist.
5) This is meant to supplement the standard email address JavaScript validation.

IPTables – Load Balance Incoming Web Traffic

First of ALL

The important thing to remember as we go forward is that ORDER MATTERS! Rules are executed from top to bottom.

Note that Rules are applied in order of appearance, and the inspection ends immediately when there is a match. Therefore, for example, if a Rule rejecting ssh connections is created, and afterward another Rule is specified allowing ssh, the Rule to reject is applied and the later Rule to accept the ssh connection is not.

At the top of /etc/sysconfig/iptables (CentOS 7), the rules matter most, because they are matched first!

 

Instead of using the default policy, I normally recommend making an explicit DROP/REJECT rule at the bottom of your chain that matches everything. You can leave your default policy set to ACCEPT and this should reduce the chance of blocking all access to the server.

Load Balance Incoming Web Traffic

This uses the iptables nth extension. The following example load balances HTTPS traffic across three different IP addresses: every third packet is sent to the appropriate server (using counter 0).

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m state --state NEW -m nth --counter 0 --every 3 --packet 0 -j DNAT --to-destination 192.168.1.101:443
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m state --state NEW -m nth --counter 0 --every 3 --packet 1 -j DNAT --to-destination 192.168.1.102:443
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m state --state NEW -m nth --counter 0 --every 3 --packet 2 -j DNAT --to-destination 192.168.1.103:443
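
Note that the nth match comes from the old patch-o-matic extensions and is not in stock kernels; on CentOS 7 the same round-robin effect is usually obtained with the statistic module, along these lines (only the first rule shown):

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m state --state NEW -m statistic --mode nth --every 3 --packet 0 -j DNAT --to-destination 192.168.1.101:443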

Allow Loopback Access

You should allow full loopback access on your servers, i.e. access via 127.0.0.1.

iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

Allow Internal Network to External network.

On a firewall server where one ethernet card is connected to the external network and another ethernet card is connected to the internal servers, use the following rule to allow the internal network to talk to the external network.

In this example, eth1 is connected to external network (internet), and eth0 is connected to internal network (For example: 192.168.1.x).

iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
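
For the forwarding to work end to end, two extras are normally needed as well (shown here as a sketch): IP forwarding in the kernel, and NAT plus return traffic on the external interface.

# Enable packet forwarding in the kernel (persist it in /etc/sysctl.conf)
sysctl -w net.ipv4.ip_forward=1
# Masquerade outgoing traffic on the external interface and allow replies back in
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT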

Allow outbound DNS

The following rules allow outgoing DNS connections.

iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT
iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT

Prevent DoS Attack

The following iptables rule will help you prevent the Denial of Service (DoS) attack on your webserver.

iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT

In the above example:

  • -m limit: This uses the limit iptables extension.
  • --limit 25/minute: This allows a maximum of 25 new connections per minute. Change this value based on your specific requirements.
  • --limit-burst 100: This value indicates that the limit/minute will be enforced only after the total number of connections has reached the limit-burst level.
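
On its own the ACCEPT rule does nothing while the default policy is ACCEPT; the usual companion rule (an assumption, matching the setup described above) drops whatever exceeds the limit:

# Packets above the rate limit fall through to this rule and are dropped
iptables -A INPUT -p tcp --dport 80 -j DROP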

Port Forwarding

The following example routes all traffic that comes to port 422 to port 22. This means that incoming ssh connections can use both port 22 and port 422.

iptables -t nat -A PREROUTING -p tcp -d 192.168.102.37 --dport 422 -j DNAT --to 192.168.102.37:22

If you do the above, you also need to explicitly allow incoming connection on the port 422.

iptables -A INPUT -i eth0 -p tcp --dport 422 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 422 -m state --state ESTABLISHED -j ACCEPT

Log Dropped Packets

You might also want to log all the dropped packets. These rules should be at the bottom.

First, create a new chain called LOGGING.

iptables -N LOGGING

Next, make sure all the remaining incoming connections jump to the LOGGING chain as shown below.

iptables -A INPUT -j LOGGING

Next, log these packets by specifying a custom “log-prefix”.

iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7

Finally, drop these packets.

iptables -A LOGGING -j DROP

(most of the rules above are from here)

Example:

iptables -A INPUT --jump ACCEPT --protocol all   --source 127.0.0.1
iptables -A INPUT --jump ACCEPT --protocol tcp   --dport 22
iptables -A INPUT --jump ACCEPT --protocol icmp
iptables -A INPUT --jump ACCEPT --match state    --state ESTABLISHED,RELATED
iptables -A INPUT --jump REJECT --protocol all

 

 

How to create a daemon service for a Java program

Here is the link I used to create the service script:

vim service.name.sh

insert this:

#!/bin/sh
SERVICE_NAME=MyService
PATH_TO_JAR=/usr/local/MyProject/MyJar.jar
PID_PATH_NAME=/tmp/MyService-pid
case $1 in
    start)
        echo "Starting $SERVICE_NAME ..."
        if [ ! -f $PID_PATH_NAME ]; then
            nohup java -jar $PATH_TO_JAR /tmp 2>> /dev/null >> /dev/null &
            echo $! > $PID_PATH_NAME
            echo "$SERVICE_NAME started ..."
        else
            echo "$SERVICE_NAME is already running ..."
        fi
    ;;
    stop)
        if [ -f $PID_PATH_NAME ]; then
            PID=$(cat $PID_PATH_NAME)
            echo "$SERVICE_NAME stopping ..."
            kill $PID
            echo "$SERVICE_NAME stopped ..."
            rm $PID_PATH_NAME
        else
            echo "$SERVICE_NAME is not running ..."
        fi
    ;;
    restart)
        if [ -f $PID_PATH_NAME ]; then
            PID=$(cat $PID_PATH_NAME)
            echo "$SERVICE_NAME stopping ..."
            kill $PID
            echo "$SERVICE_NAME stopped ..."
            rm $PID_PATH_NAME
            echo "$SERVICE_NAME starting ..."
            nohup java -jar $PATH_TO_JAR /tmp 2>> /dev/null >> /dev/null &
            echo $! > $PID_PATH_NAME
            echo "$SERVICE_NAME started ..."
        else
            echo "$SERVICE_NAME is not running ..."
        fi
    ;;
esac

Modify the environment variables:

SERVICE_NAME=MyService
PATH_TO_JAR=/usr/local/MyProject/MyJar.jar
PID_PATH_NAME=/tmp/MyService-pid
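
Then make the script executable and drive it by hand (or hook it into your init system); a rough usage sketch:

chmod +x service.name.sh
./service.name.sh start
./service.name.sh stop
./service.name.sh restart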

 

 

Install nmap and check which ports are open (CentOS 7)

yum install nmap

Now scan the ports with:

nmap -sT -O localhost

result:

Nmap scan report for localhost (127.0.0.1)

Host is up (0.000083s latency).

rDNS record for 127.0.0.1: localhost.localdomain

Not shown: 972 closed ports

PORT      STATE SERVICE

21/tcp    open  ftp

22/tcp    open  ssh

25/tcp    open  smtp

53/tcp    open  domain

80/tcp    open  http

110/tcp   open  pop3

111/tcp   open  rpcbind

143/tcp   open  imap

443/tcp   open  https

783/tcp   open  spamassassin

993/tcp   open  imaps

995/tcp   open  pop3s

1080/tcp  open  socks

1081/tcp  open  pvuniwien

2005/tcp  open  deslogin

2009/tcp  open  news

3005/tcp  open  deslogin

3306/tcp  open  mysql

5432/tcp  open  postgresql

8009/tcp  open  ajp13

8080/tcp  open  http-proxy

8081/tcp  open  blackice-icecap

9009/tcp  open  pichat

9080/tcp  open  glrpc

9090/tcp  open  zeus-admin

9100/tcp  open  jetdirect

10024/tcp open  unknown

10025/tcp open  unknown

No exact OS matches for host (If you know what OS is running on it, see http://nmap.org/submit/ ).

TCP/IP fingerprint:

OS:SCAN(V=6.40%E=4%D=7/23%OT=21%CT=1%CU=41542%PV=N%DS=0%DC=L%G=Y%TM=59744F1

OS:C%P=x86_64-redhat-linux-gnu)SEQ(SP=101%GCD=1%ISR=105%TI=Z%TS=A)SEQ(SP=10

OS:1%GCD=1%ISR=106%TI=Z%II=I%TS=A)OPS(O1=MFFD7ST11NW7%O2=MFFD7ST11NW7%O3=MF

OS:FD7NNT11NW7%O4=MFFD7ST11NW7%O5=MFFD7ST11NW7%O6=MFFD7ST11)WIN(W1=AAAA%W2=

OS:AAAA%W3=AAAA%W4=AAAA%W5=AAAA%W6=AAAA)ECN(R=Y%DF=Y%T=40%W=AAAA%O=MFFD7NNS

OS:NW7%CC=Y%Q=)T1(R=Y%DF=Y%T=40%S=O%A=S+%F=AS%RD=0%Q=)T2(R=N)T3(R=N)T4(R=Y%

OS:DF=Y%T=40%W=0%S=A%A=Z%F=R%O=%RD=0%Q=)T5(R=Y%DF=Y%T=40%W=0%S=Z%A=S+%F=AR%

OS:O=%RD=0%Q=)T6(R=N)T7(R=Y%DF=Y%T=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=)U1(R=Y%D

OS:F=N%T=40%IPL=164%UN=0%RIPL=G%RID=G%RIPCK=G%RUCK=G%RUD=G)IE(R=Y%DFI=N%T=4

OS:0%CD=S)

Network Distance: 0 hops

OS detection performed. Please report any incorrect results at http://nmap.org/submit/ .

Nmap done: 1 IP address (1 host up) scanned in 12.22 seconds

Now scan from an external host:

nmap -sT -O <ip>

result:

Starting Nmap 7.50 ( https://nmap.org ) at 2017-07-23 09:30 CEST

Nmap scan report for web.site (<ip>)

Host is up (0.035s latency).

rDNS record for <ip>: mail. web.site

Not shown: 978 closed ports

PORT     STATE    SERVICE

21/tcp   open     ftp

22/tcp   open     ssh

25/tcp   open     smtp

53/tcp   open     domain

80/tcp   open     http

110/tcp  open     pop3

111/tcp  open     rpcbind

135/tcp  filtered msrpc

139/tcp  filtered netbios-ssn

143/tcp  open     imap

443/tcp  open     https

445/tcp  filtered microsoft-ds

993/tcp  open     imaps

995/tcp  open     pop3s

1080/tcp open     socks

1081/tcp open     pvuniwien

2009/tcp open     news

3306/tcp filtered mysql

8009/tcp open     ajp13

8081/tcp open     blackice-icecap

9009/tcp open     pichat

9080/tcp open     glrpc

Device type: general purpose|media device|WAP|storage-misc

Running (JUST GUESSING): Linux 3.X|4.X|2.6.X (89%), Asus embedded (86%), Synology DiskStation Manager 5.X (86%)

OS CPE: cpe:/o:linux:linux_kernel:3 cpe:/o:linux:linux_kernel:4 cpe:/o:linux:linux_kernel:3.x cpe:/h:asus:rt-n56u cpe:/o:linux:linux_kernel:3.4 cpe:/o:linux:linux_kernel:3.10 cpe:/a:synology:diskstation_manager:5.2 cpe:/o:linux:linux_kernel:2.6.32

Aggressive OS guesses: Linux 3.2 – 4.8 (89%), Linux 3.18 (88%), Linux 3.16 (87%), Linux 3.13 or 4.2 (87%), XBMCbuntu Frodo v12.2 (Linux 3.X) (87%), ASUS RT-N56U WAP (Linux 3.4) (86%), Linux 3.13 (86%), Linux 3.12 (86%), Linux 3.8 – 3.11 (86%), Linux 4.10 (86%)

No exact OS matches for host (test conditions non-ideal).

Network Distance: 6 hops

OS detection performed. Please report any incorrect results at https://nmap.org/submit/ .

Nmap done: 1 IP address (1 host up) scanned in 15.47 seconds

 

Now check for LISTENING ports:

Next, check for information about the port using netstat or lsof. To check for port 834 using netstat, use the following command:

netstat -anp | grep 834

result :

tcp        0      0 127.0.0.1:9168          127.0.0.1:47834         TIME_WAIT                      

unix  2      [ ACC ]     STREAM     LISTENING     397083455 343/amavisd (ch1-av  /var/spool/amavisd/amavisd.sock

unix  2      [ ]         STREAM     CONNECTED     481728342 25062/ruby           

unix  3      [ ]         STREAM     CONNECTED     407881834 4920/dovecot         

unix  2      [ ]         STREAM     CONNECTED     481808349 25062/ruby   

The lsof command reveals similar information since it is also capable of linking open ports to services:

lsof -i | grep 834

To check if the port is associated with the official list of known services, type:

cat /etc/services

 

To check which users are logged in, use the command: who

Problem Restarting Apache

[Fri Mar 31 04:44:20.842898 2017] [:error] [pid 25467] (28)No space left on device: mod_python: Failed to create global mutex 2 of 8 (/tmp/mpmtx254672).

[Fri Mar 31 04:44:20.842912 2017] [:error] [pid 25467] mod_python: We can probably continue, but with diminished ability to process session locks.

[Fri Mar 31 04:44:20.842914 2017] [:error] [pid 25467] mod_python: Hint: On Linux, the problem may be the number of available semaphores, check ‘sysctl kernel.sem’

[Fri Mar 31 04:44:20.868130 2017] [mpm_prefork:notice] [pid 25467] AH00163: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips mod_fcgid/2.3.9 PHP/5.6.30 mod_python/3.5.0- Python/2.7.5 mod_jk/1.2.42 configured — resuming normal operations

To solve:

This error completely stumped me a couple of weeks ago. Apparently someone was adjusting the Apache configuration, then they checked their syntax and attempted to restart Apache. It went down without a problem, but it refused to start properly, and didn’t bind to any ports.

Within the Apache error logs, the message shown above appeared over and over.

Apache is basically saying “I want to start, but I need to write some things down before I can start, and I have nowhere to write them!” If this happens to you, check these items in order:

1. Check your disk space
This comes first because it’s the easiest to check, and sometimes the quickest to fix. If you’re out of disk space, then you need to fix that problem. 🙂

2. Review filesystem quotas
If your filesystem uses quotas, you might be reaching a quota limit rather than a disk space limit. Use repquota / to review your quotas on the root partition. If you’re at the limit, raise your quota or clear up some disk space. Apache logs are usually the culprit in these situations.

3. Clear out your active semaphores
Semaphores? What the heck is a semaphore? Well, it’s actually an apparatus for conveying information by means of visual signals. But, when it comes to programming, semaphores are used for communicating between the active processes of a certain application. In the case of Apache, they’re used to communicate between the parent and child processes. If Apache can’t write these things down, then it can’t communicate properly with all of the processes it starts.

I’d assume if you’re reading this article, Apache has stopped running. As root, list the active semaphore arrays owned by the Apache user; a command along these lines works:
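
# Assumes the Apache processes run as the "apache" user (the CentOS default)
ipcs -s | grep apache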

If you see a list of semaphores, Apache has not cleaned up after itself, and some semaphores are stuck. Clear them out with something like this:
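
# Remove each stuck semaphore array owned by the apache user (illustrative loop)
for semid in $(ipcs -s | grep apache | awk '{print $2}'); do ipcrm -s "$semid"; done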

Now, in almost all cases, Apache should start properly. If it doesn’t, you may just be completely out of available semaphores. You may want to increase your available semaphores, and you’ll need to tickle your kernel to do so. Add something like the following to /etc/sysctl.conf (the values below are only illustrative; check your current limits first with sysctl kernel.sem):
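
# Fields: SEMMSL SEMMNS SEMOPM SEMMNI -- illustrative values only
kernel.sem = 250 32000 32 256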

And then run sysctl -p to pick up the new changes.

Restart Apache: service httpd restart