How to test if the email address exists

Source Link

To check whether a user-entered email address such as mailbox.does.not.exist@webdigiapps.com really exists, go through the following in Command Prompt on Windows or Terminal on Mac. The commands you type are shown after COMMAND and the server's reply after RESPONSE. Please refer to the Mac and PC screenshots towards the end of this post.

Step 1 – Find the mail exchanger (mail server) of webdigiapps.com

COMMAND:
nslookup -q=mx webdigiapps.com
RESPONSE:
Non-authoritative answer:
webdigiapps.com mail exchanger = 0 mx2.sub3.homie.mail.dreamhost.com.
webdigiapps.com mail exchanger = 0 mx1.sub3.homie.mail.dreamhost.com.

Step 2 – Now that we know the mail server address, let us connect to it. You can connect to either of the exchanger addresses returned in Step 1.

COMMAND:
telnet mx2.sub3.homie.mail.dreamhost.com 25
RESPONSE:
Connected to mx2.sub3.homie.mail.dreamhost.com.
Escape character is ‘^]’.
220 homiemail-mx7.g.dreamhost.com ESMTP

COMMAND:
helo hi
RESPONSE:
250 homiemail-mx8.g.dreamhost.com

COMMAND:
mail from: <youremail@gmail.com>
RESPONSE:
250 2.1.0 Ok

COMMAND:
rcpt to: <mailbox.does.not.exist@webdigiapps.com>
RESPONSE:
550 5.1.1 <mailbox.does.not.exist@webdigiapps.com>: Recipient address rejected: User unknown in virtual alias table

COMMAND:
quit
RESPONSE:
221 2.0.0 Bye

Screenshots – MAC Terminal & Windows

MAC email verification
Windows email verification

NOTES:

1) The 550 response indicates that the email address is not valid: you have caught a well-formed but non-existent address. This check can run on your server and be called via AJAX when the user tabs out of the email field. The entire check takes less than 2 seconds, so you can make sure the email is correct before accepting it.
2) If the mailbox existed, the server would respond with a 250 instead of a 550.
3) Certain servers have a CATCH ALL mailbox, which means every address on the domain is accepted as valid (RARE, but some servers do have this setting).
4) Please do not use this method continuously to check for the availability of gmail / yahoo / msn accounts etc., as this may cause your IP to be added to a blacklist.
5) This is meant to supplement standard client-side (JavaScript) email address validation, not replace it.
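The manual telnet session above can be automated. Below is a minimal Python sketch: the SMTP conversation is wrapped in a function (not run here, since it needs network access and a live MX host), and a small helper interprets the RCPT TO reply code. The function and helper names are my own, and the host and addresses are just the examples from this post.

```python
import smtplib

def interpret_rcpt_code(code: int) -> str:
    """Map the SMTP reply code from RCPT TO onto a verdict."""
    if code == 250:
        return "exists"          # mailbox accepted
    if code == 550:
        return "does not exist"  # recipient rejected
    return "unknown"             # greylisting, catch-all, temporary failure...

def check_mailbox(mx_host: str, address: str,
                  sender: str = "youremail@gmail.com") -> str:
    """Replay the manual session: HELO, MAIL FROM, RCPT TO, QUIT.

    Requires network access; mx_host is an exchanger found via
    `nslookup -q=mx <domain>` (e.g. mx2.sub3.homie.mail.dreamhost.com).
    """
    with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
        smtp.helo("hi")
        smtp.mail(sender)
        code, _reply = smtp.rcpt(address)
        return interpret_rcpt_code(code)

# Example (needs network):
# check_mailbox("mx2.sub3.homie.mail.dreamhost.com",
#               "mailbox.does.not.exist@webdigiapps.com")
```

Note 3) above still applies: on a catch-all server the code will report "exists" for any address, so treat the result as advisory.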

IPTables – Load Balance Incoming Web Traffic

First of All

The important thing to remember as we go forward is that ORDER MATTERS! Rules are executed from top to bottom.

Rules are applied in order of appearance, and inspection ends immediately on the first match. For example, if a rule rejecting ssh connections is created, and afterwards another rule is specified allowing ssh, the rule to reject is applied and the later rule to accept the ssh connection never gets a chance.

In other words, the closer a rule sits to the top of /etc/sysconfig/iptables (CentOS 7), the higher its priority.

 

Instead of using the default policy, I normally recommend making an explicit DROP/REJECT rule at the bottom of your chain that matches everything. You can leave your default policy set to ACCEPT and this should reduce the chance of blocking all access to the server.

Load Balance Incoming Web Traffic

This uses the iptables nth extension (a patch-o-matic add-on; on modern kernels the mainline statistic match with --mode nth provides the same facility). The following example load-balances HTTPS traffic across three IP addresses: every third new connection goes to each server in turn, with all three rules sharing counter 0. Note that DNAT rules belong in the nat table's PREROUTING chain.

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m state --state NEW -m nth --counter 0 --every 3 --packet 0 -j DNAT --to-destination 192.168.1.101:443
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m state --state NEW -m nth --counter 0 --every 3 --packet 1 -j DNAT --to-destination 192.168.1.102:443
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m state --state NEW -m nth --counter 0 --every 3 --packet 2 -j DNAT --to-destination 192.168.1.103:443
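If your iptables build lacks the nth patch, a sketch of the same round-robin with the mainline statistic match would look like this (same example interface and addresses assumed). Because statistic counters are per-rule, the idiom is "1 of every 3, then 1 of the remaining 2, then the rest":

```shell
# Round-robin DNAT using the mainline "statistic" match instead of "nth".
# Rule 1 takes 1/3 of new connections; rule 2 takes half of what is left
# (another 1/3 overall); rule 3 catches the remaining 1/3.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m state --state NEW \
  -m statistic --mode nth --every 3 --packet 0 \
  -j DNAT --to-destination 192.168.1.101:443
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m state --state NEW \
  -m statistic --mode nth --every 2 --packet 0 \
  -j DNAT --to-destination 192.168.1.102:443
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m state --state NEW \
  -j DNAT --to-destination 192.168.1.103:443
```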

Allow Loopback Access

You should allow full loopback access on your servers, i.e. access using 127.0.0.1.

iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

Allow Internal Network to External Network

On a firewall server where one ethernet card is connected to the external network and another is connected to the internal servers, use the following rule to allow the internal network to talk to the external network.

In this example, eth1 is connected to external network (internet), and eth0 is connected to internal network (For example: 192.168.1.x).

iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

Allow outbound DNS

The following rules allow outgoing DNS connections.

iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT
iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT
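DNS mostly runs over UDP, but resolvers fall back to TCP for large responses and zone transfers. If you need that too, a matching pair of TCP rules (assuming the same eth0 interface as above) would be:

```shell
# Allow outgoing DNS over TCP as well (large responses, zone transfers)
iptables -A OUTPUT -p tcp -o eth0 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT  -p tcp -i eth0 --sport 53 -m state --state ESTABLISHED -j ACCEPT
```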

Prevent DoS Attack

The following iptables rule can help you mitigate a Denial of Service (DoS) attack on your web server.

iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT

In the above example:

  • -m limit: This uses the limit iptables extension
  • --limit 25/minute: This allows a maximum of 25 new connections per minute. Change this value based on your specific requirement.
  • --limit-burst 100: This is the initial burst size: the first 100 connections are accepted immediately, and the 25/minute rate is enforced only after that burst level has been reached.

Port Forwarding

The following example routes all traffic that comes to port 422 to port 22. This means that incoming ssh connections can come in on both port 22 and port 422.

iptables -t nat -A PREROUTING -p tcp -d 192.168.102.37 --dport 422 -j DNAT --to 192.168.102.37:22

If you do the above, you also need to explicitly allow incoming connection on the port 422.

iptables -A INPUT -i eth0 -p tcp --dport 422 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 422 -m state --state ESTABLISHED -j ACCEPT

Log Dropped Packets

You might also want to log all the dropped packets. These rules should be at the bottom.

First, create a new chain called LOGGING.

iptables -N LOGGING

Next, make sure all the remaining incoming connections jump to the LOGGING chain as shown below.

iptables -A INPUT -j LOGGING

Next, log these packets by specifying a custom “log-prefix”.

iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7

Finally, drop these packets.

iptables -A LOGGING -j DROP

(most of the rules from here)

Example:

iptables -A INPUT --jump ACCEPT --protocol all   --source 127.0.0.1
iptables -A INPUT --jump ACCEPT --protocol tcp   --dport 22
iptables -A INPUT --jump ACCEPT --protocol icmp
iptables -A INPUT --jump ACCEPT --match state    --state ESTABLISHED,RELATED
iptables -A INPUT --jump REJECT --protocol all

 

 

Angular 2 data binding

Original Link

Angular 2 has support for these data-bindings:

  • Interpolation: One-way data binding from the component to the template. Lets you display information from the component in the template. Example: {{person.name}}.
  • Property bindings: One-way data bindings from the component to the template. Let you bind data from the component into the template. You typically use them to bind component data to DOM element properties (for instance the src property of an image). Example: <img [src]="person.imageUrl">.
  • Event bindings: One-way data bindings from the template to the component. Let you bind template events to the component. You typically use them to execute arbitrary code in response to an interaction between the user and the user interface of your application. Example: (click)="selectPerson(person)".
  • [(ngModel)]: Two-way data binding between the component and the template. It syncs data both ways, from the component to the template and back. You typically use it within form inputs. Example: [(ngModel)]="person.name".

 

UPDATE (18th May 2017): This tutorial has been updated to the latest version of Angular (Angular 4). Note that the Angular team has re-branded Angular 2 to Angular and Angular 1.x to AngularJS, which are now the official names of these frameworks.

This is the fifth article in the Getting Started with Angular 2 Step by Step series. If you missed the previous articles, go and check them out!

Good morning! Time for some Angular 2 goodness! Yesterday you learned about Angular 2 Routing, and today you are going to learn how to create forms and validate them in Angular 2.

Angular 2 Getting Started

The Code Samples

You can download all the code samples for this article from this GitHub repository.

Our Application Up Until Now

At this point we have developed a tiny Angular 2 application to learn more about people from the Star Wars universe. We have two components: the PeopleListComponent, which displays a list of people, and the PersonDetailsComponent, which displays more information about the character you have selected.

We use Angular 2 routing to navigate between these two views, the list of people being our default view and the one that is rendered as soon as the application starts.

We Want to Save Our Own Data!

Up until now we’ve just been reading information and we are tired of that! We want to be able to write and save our own data.

In order to do that we are going to transform our PersonDetailsComponent into a form and add some validation on top to ensure that the information that we save is correct.

At the end of the exercise, our details form should look something like this:

Angular 2 Forms and Validation

First Things First! We Need to Add The Forms Module to Our App!

From RC5 onwards we need to declare our module dependencies in NgModule, and since we want to use the new forms API we need to import it in our app.module.ts:

import { FormsModule } from '@angular/forms';

And specify it as a dependency via the imports property:

// imports

@NgModule({
  declarations: [
    AppComponent,
    PeopleListComponent,
    PersonDetailsComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,        // <===== HERE! :)
    HttpModule,
    AppRoutingModule,
  ],
  providers: [PeopleService],
  bootstrap: [AppComponent]
})
export class AppModule { }

Happily for us, because we bootstrapped our project with the Angular CLI, this has already been handled for us. The whole app.module.ts should already look like this:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';

import { AppComponent } from './app.component';
import { PeopleListComponent } from './people-list/people-list.component';
import { PeopleService } from './people.service';
import { PersonDetailsComponent } from './person-details/person-details.component';

import { AppRoutingModule } from "./app-routing.module";

@NgModule({
  declarations: [
    AppComponent,
    PeopleListComponent,
    PersonDetailsComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    AppRoutingModule,
  ],
  providers: [PeopleService],
  bootstrap: [AppComponent]
})
export class AppModule { }

Ok! Now we can proceed!

Using a File Template Instead of an Inline Template

The first thing we're going to do, now that things are getting serious, is to extract our PersonDetailsComponent template into its very own file. Yes, you can do that!

As a starting point it was nice for you to see your template bindings beside your component API, but now that we are going to be writing more HTML it is nicer to have it separated in its own file, where you get syntax highlighting and better text editor support (although, depending on the editor you're using, you may get great HTML support in your TypeScript via this or this in the future).

How can we use a file template instead of an inline one?

The @Component decorator has a templateUrl property in addition to the template property that we've been using thus far. Taking advantage of that property we can tell Angular 2 which HTML template to use with a particular component. Let's extract our PersonDetailsComponent template into a person-details.component.html file:

  <section *ngIf="person">
    <h2>You selected: {{person.name}}</h2>
    <h3>Description</h3>
    <p>
      {{person.name}} weights {{person.weight}} and is {{person.height}} tall.
    </p>
  </section>
  <button (click)="gotoPeoplesList()">Back to peoples list</button>

And update our PersonDetailsComponent metadata:

// some imports

@Component({
  selector: 'app-person-details',
  templateUrl: './person-details.component.html',  // <=== HERE!
})
export class PersonDetailsComponent implements OnInit {
    // some code...
}

A Basic Form in Angular 2

Ok, now that we got that covered, let’s add an HTML form to our view. Angular 2 works perfectly with HTML5 standards so we are just going to create a vanilla HTML form to edit the properties that we have in our Person interface.

We update our template to look like this:

<!-- new syntax for ng-if -->
<section *ngIf="person">
  <section>
    <h2>You selected: {{person.name}}</h2>
    <h3>Description</h3>
    <p>
      {{person.name}} weights {{person.weight}} and is {{person.height}} tall.
    </p>
  </section>
  <section>
    <form>
      <div>
        <label for="name">Name: </label>
        <input type="text" name="name">
      </div>
      <div>
        <label for="weight">Weight: </label>
        <input type="number" name="weight">
      </div>
      <div>
        <label for="height">Height: </label>
        <input type="number" name="height">
      </div>
    </form>
  </section>

  <button (click)="gotoPeoplesList()">Back to peoples list</button>
</section>

Ok, now we have a basic form but we are not displaying anything. Using the stuff that we’ve learned thus far we could tie our form to our component’s person by using event and property bindings.

So, for instance, we could do the following:

<div>
    <label for="name">Name: </label>
    <input type="text"
           name="name"
           [value]="person.name"
           (change)="person.name = name.value"
           #name>
</div>

Where we use a [value] property binding to bind the component’s person.name property to the input element, and we use the (change) event binding to update our person’s name.

There’s something weird though in this particular example, and that’s the #name right there inside our input element. That’s what Angular 2 calls a template local variable and it’s often used to keep DOM element related references and logic out of our component code.

In this case, the #name variable refers to the input element itself. That’s why when Angular 2 evaluates the person.name = name.value expression our person model gets updated.

This feels like way too much work to bind a property from our component to our template and back, doesn’t it?… If only there was a better way… 🙂

ngModel and Angular 2 Two-Way Data Binding

Well there is! The ngModel data binding, very similar to AngularJS's ng-model, lets us establish a two-way data binding for a given property between the component and the template. A two-way data binding effectively syncs the value of a property between template and component, forwarding changes in both directions.

The syntax is a little bit special and takes a little bit to get accustomed to:

<div>
    <label for="name">Name: </label>
    <input type="text" name="name" [(ngModel)]="person.name">
</div>

Hell yeah! You read it right: [(ngModel)]="person.name". Before you start cursing let’s spend a minute longer contemplating this piece of code…

We’ve learned that event bindings are one-directional bindings that go from the template to the underlying component and are written in parentheses: (click). We’ve also learned that property bindings are one-directional bindings that go from the component to the template and are written in square brackets: [src]. Therefore, if we want a two-way binding, we need something equivalent to an event binding plus a property binding; merge both syntaxes and you get the “banana-in-a-box” syntax of [(ngModel)]. Makes sense, right?

The [(ngModel)] two-way data binding can indeed be decomposed into an event/property binding combo like this:

<input type="text" name="name"
  [ngModel]="model.name"
  (ngModelChange)="model.name = $event" >

In the case of normal DOM events the $event variable usually gives us access to the DOM event itself. In this particular case though, for the special (ngModelChange) custom event, it takes the value of the input being changed. If you want to learn more about ngModel and when it’s useful to decompose it take a look at the Angular 2 docs.

Ok! So now that we know that [(ngModel)] exists, we can use it in our form to create a two-way data binding for each of the person properties (except for the id; you don't want to change that):

  <!-- new syntax for ng-if -->
<section *ngIf="person">
  <section>
    <h2>You selected: {{person.name}}</h2>
    <h3>Description</h3>
    <p>
      {{person.name}} weights {{person.weight}} and is {{person.height}} tall.
    </p>
  </section>
  <section>
    <form>
      <div>
        <label for="name">Name: </label>
        <input type="text" name="name" [(ngModel)]="person.name">
      </div>
      <div>
        <label for="weight">Weight: </label>
        <input type="number" name="weight" [(ngModel)]="person.weight">
      </div>
      <div>
        <label for="height">Height: </label>
        <input type="number" name="height" [(ngModel)]="person.height">
      </div>
    </form>
  </section>

  <button (click)="gotoPeoplesList()">Back to peoples list</button>
</section>

If you run the application in your browser (remember ng serve --open or ng s -o if you’re cool) you’ll be able to see how whenever you change the value in these inputs the changes are reflected instantly in the description.

A Review of Angular 2 Data Bindings

With the [(ngModel)] binding we have now covered all data bindings available to you in Angular 2. Let’s make a quick recap of them before we continue with forms and validation.

Angular 2 has support for these data-bindings:

  • Interpolation: One-way data binding from the component to the template. Lets you display information from the component in the template. Example: {{person.name}}.
  • Property bindings: One-way data bindings from the component to the template. Let you bind data from the component into the template. You typically use them to bind component data to DOM element properties (for instance the src property of an image). Example: <img [src]="person.imageUrl">.
  • Event bindings: One-way data bindings from the template to the component. Let you bind template events to the component. You typically use them to execute arbitrary code in response to an interaction between the user and the user interface of your application. Example: (click)="selectPerson(person)".
  • [(ngModel)]: Two-way data binding between the component and the template. It syncs data both ways, from the component to the template and back. You typically use it within form inputs. Example: [(ngModel)]="person.name".

Adding Validation to Our Form

Now that we have a way to update a person's details, let's add some validation to ensure that the data we're introducing is legit before we save it. We are going to do the following:

  1. Make the name field required
  2. Display a super helpful validation error message to the user whenever the field is empty
  3. Enable/disable the form submit button based on the validity of the inputs within the form

The way we track changes and the validity of an input in Angular 2 is through the same ngModel directive that we use for two-way data binding. By using this directive on an input we can find out whether or not the user has interacted with the input, whether or not its value has changed, and even whether it is invalid.

Let’s add the required attribute to the name input to mark it as a required bit of information (One can’t live without a name):

<label for="name">Name: </label>
<input type="text" name="name" required [(ngModel)]="person.name">

Angular 2 uses the name attribute (in this case name="name") to identify this particular input and keep track of its changes and validity.

The easiest way to visualize how Angular 2 tracks changes in our input is to see how it adds/removes css classes on the input based on its state.

If we update the input with the following snippet of code, which displays the DOM className property after the input, and go to the browser, you'll be able to see how different classes are added as you interact with the input:

<label for="name">Name: </label>

<!-- 1. Added #name local template variable in the input element -->
<input type="text" name="name" required [(ngModel)]="person.name" #name>

<!-- 2. Added text to show the input element className -->
<p>
 input "name" class is: {{ name.className }}
</p>

So if you:

  • leave the input untouched it will have the: ng-untouched ng-pristine ng-valid classes
  • click inside then outside the input and it will be marked as visited and given the ng-touched class
  • type something and it will be marked as dirty and given the ng-dirty class
  • remove its whole content and it will be marked as invalid and given the ng-invalid class

Test it yourself!

We can take advantage of this feature to add some css styling to our inputs when they are valid and invalid. You can update the styles.css file to include the following styles:

/* These styles are HEAVILY, HEAVILY inspired (STOLEN!!)
   from the Angular 2 docs */

/* valid and required show green */
input.ng-valid[required] {
  border-left: 5px solid #42A948; /* green */
}

/* invalid shows red */
input.ng-invalid {
  border-left: 5px solid #a94442; /* red */
}

This css file is linked from index.html. As such it holds the global styles for our application, which are applied regardless of which component you are in.

In the future you’ll learn that you can define component-level styles that are only applied within a component. Yey!

Back in the PersonDetailsComponent template, we now want to display an error message whenever the user doesn't type the required name. We can do that by creating a local template variable name that gives us access to ngModel. How can we do that? Although unintuitive, you do it by assigning ngModel to the local template variable like this. Update the previous snippet to the following:

<label for="name">Name: </label>
<input type="text" name="name" required [(ngModel)]="person.name" #name="ngModel">

You can think of it as a way to gain access to the directive that tracks the changes and validity of the input. Now that name holds the value of the ngModel directive, we can access its properties and find whether or not the input is in a valid state. We can use that information to toggle the visibility of an error message:

<label for="name">Name: </label>
<input type="text" name="name" required [(ngModel)]="person.name" #name="ngModel">
<div [hidden]="name.valid || name.pristine" class="error">
    Name is required my good sir/lady!
</div>

That we will style with this magic css:

.error{
  padding: 12px;
  background-color: rgba(255, 0, 0, 0.2);
  color: red;
}

Notice how using the property binding syntax we can bind any expression to the [hidden] DOM property. In this case we only hide the message when ngModel tells us that the input is valid or pristine (no reason to show the error if the user hasn’t even started editing the input).

Custom Validation FTW!

Let's do another example of validation, this time for the input element tied to a person's weight, using the min and max HTML5 input attributes. Let's say that our Star Wars figures shouldn't weigh less than a feather nor more than a Rancor. We'll update our template as follows:

<label for="weight">Weight: </label>
<input type="number" name="weight" [(ngModel)]="person.weight" min=0 max=350 #weight="ngModel">

And now we add the validation messages for both constraints:

<div *ngIf="weight.errors && (weight.dirty || weight.touched)"
    class="error">
    <p [hidden]="!weight.errors.min">
      Weight must be higher than a feather's. {{weight.value}} is way too low.
    </p>
    <p [hidden]="!weight.errors.max">
      Weight can't be higher than a Rancor's. {{weight.value}} is too high
    </p>
</div>

If you run the example again (remember ng serve --open) you’ll be able to verify that this new validation doesn’t work! Oh no! What do we do?

Angular 2 lets you extend the built-in validation system with new validators of your own. Let's create two new validators for the min and max attributes. The validators will be represented as directives, which we can create with the aid of our mighty companion: the Angular CLI. You can think of a directive as a way to add custom behavior to DOM elements matching a specific selector.

Type the following:

PS> ng generate directive min-validator
# ng g d min-validator

This will create a new directive for us and register the directive into our AppModule. The new directive will look like this:

import { Directive } from '@angular/core';

@Directive({
  selector: '[appMinValidator]'
})
export class MinValidatorDirective {

  constructor() { }

}

We can change the selector to reflect the attribute that we want to tie to the validation logic (min):

import { Directive } from '@angular/core';

@Directive({
  // CSS selector for attributes
  selector: '[min]'
})
export class MinValidatorDirective {

  constructor() { }
}

In order to turn this new directive into a validator we need to implement the Validator interface, which contains the validate method. This method will contain the validation logic that compares the current value of the input with our min attribute value.

But before we can implement that logic we need to get hold of the min value. We'll do that by taking advantage of the @Input decorator (if you missed previous articles in the series: you can use inputs to pass data into a component or directive):

// remember to also import Input:
// import { Directive, Input } from '@angular/core';

export class MinValidatorDirective {
  // new @Input here
  // it will get the min number from the attribute
  // For example 5 for <input min=5 ...
  @Input() min: number;

  constructor() { }
}

And now we implement the Validator interface with the validate method as follows:

import { Directive, Input } from '@angular/core';
import { Validator, AbstractControl, ValidationErrors } from '@angular/forms';

@Directive({...})
export class MinValidatorDirective implements Validator {
  // new @Input here
  // it will get the min number from the attribute
  // For example 5 for <input min=5 ...
  @Input() min: number;

  constructor() { }

  // Define validation logic
  validate(control: AbstractControl): ValidationErrors | null {
    const currentValue = control.value;
    const isValid = currentValue >= this.min;

    // return errors as an object
    return isValid ? null : {
      min: {
        valid: false
      }
    };
  }
}
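The heart of the directive is the validator contract: validate returns null when the control is valid and an errors object otherwise. Stripped of the Angular plumbing, the same logic can be sketched as a plain TypeScript function (the minValidate helper name is mine, for illustration only):

```typescript
// Framework-free sketch of the validator contract used above:
// null means "valid"; an errors object describes what failed.
type MinErrors = { min: { valid: boolean } } | null;

function minValidate(value: number, min: number): MinErrors {
  const isValid = value >= min;
  return isValid ? null : { min: { valid: false } };
}

// minValidate(10, 0)   -> null (valid)
// minValidate(-100, 0) -> { min: { valid: false } }
```

This null-or-errors shape is exactly what the template checks with expressions like weight.errors.min.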

Ok! So now we have a custom validator directive for the min attribute ready to go. We have created the directive, tied it to the min attribute via the [min] selector, implemented the validation logic via the Validator interface, and registered the directive in our AppModule so we can use it within our application. But there's one little thing that remains: we need to make Angular's validation system aware of this validator. We do this by registering the validator as a provider for the NG_VALIDATORS token inside the @Directive decorator metadata:

import { Directive, Input } from '@angular/core';
import { Validator, AbstractControl, ValidationErrors, NG_VALIDATORS } from '@angular/forms';

@Directive({
  selector: '[min]',
  // register validator in DI
  providers: [{provide: NG_VALIDATORS, useExisting: MinValidatorDirective, multi: true}]
})
export class MinValidatorDirective implements Validator {
    // code...
}

Now, when the Angular validation system uses dependency injection to get hold of all the available validators (represented by NG_VALIDATORS), it will have access to this new custom validator in addition to the built-in ones.

You can now verify that the validation logic we wrote earlier for our weight input is working. Try typing a weight of -100 and you'll be greeted by an error message. Yippi!

Now we can repeat the steps for the max validator. Let’s be cool and do the shorthand with the Angular CLI:

PS> ng g d max-validator

And write the validator logic:

import { Directive, Input } from '@angular/core';
import { Validator, AbstractControl, ValidationErrors, NG_VALIDATORS } from '@angular/forms';

@Directive({
  selector: '[max]',
  // register validator in DI
  providers: [{provide: NG_VALIDATORS, useExisting: MaxValidatorDirective, multi: true}]
})
export class MaxValidatorDirective implements Validator {
  @Input() max: number;

  validate(control: AbstractControl): ValidationErrors | null {
    const currentValue = control.value;
    const isValid = currentValue <= this.max;

    // return errors as an object
    return isValid ? null : {
      max: {
        valid: false
      }
    };

  }

  constructor() { }

}

Tada! Now we have our very own custom validators for min and max. Great job!

Submitting Our Form

Ok! Let's do a quick summary. Up to this point we have added validation to the name input within our form, so that if the user removes the name we show an error (you are not supposed to do that with a required field). We've also added min and max validation to our weight input, plus some basic styles.

The next step is to actually try to save these changes. To that end, we are going to add a submit button inside our form:

  <section>
    <form>
      <!--- form inputs here ...--->

      <!-- add button -->
      <button type="submit">Save</button>
    </form>
  </section>

And we need to disable it when the form is invalid, so that the user can't save corrupted/invalid data and cause chaos, mayhem and the destruction of our beloved servers (although you and I know that you must have this type of validation in your service layer as well as in your client).

In order to do that we create yet another local template variable, #personForm, to get access to the actual form via the ngForm directive. We then use that variable to disable the button when the form is invalid:

  <section>
    <form #personForm="ngForm">
      <div>
          <!-- inputs ... -->
      </div>
      <button type="submit" [disabled]="!personForm.form.valid">Save</button>
    </form>
  </section>

Finally we set a submit event binding in the form so that we can save our person details whenever we submit the form:

  <section>
    <form (ngSubmit)="savePersonDetails()" #personForm="ngForm">
      <div>
          <!-- inputs ... -->
      </div>
      <button type="submit" [disabled]="!personForm.form.valid">Save</button>
    </form>
  </section>

The whole template now looks like this:

<section *ngIf="person">
  <!-- description -->
  <section>
    <h2>You selected: {{person.name}}</h2>
    <h3>Description</h3>
    <p>
      {{person.name}} weights {{person.weight}} and is {{person.height}} tall.
    </p>
  </section>

  <!-- form -->
  <section>
    <form (ngSubmit)="savePersonDetails()" #personForm="ngForm">
      <div>
        <label for="name">Name: </label>
        <input type="text" name="name" [(ngModel)]="person.name" required #name="ngModel">
        <div [hidden]="name.valid || name.pristine" class="error">
            Name is required my good sir/lady!
        </div>
      </div>
      <div>
        <label for="weight">Weight: </label>
        <input type="number" name="weight" [(ngModel)]="person.weight" min=0 max=350 #weight="ngModel">
        <div *ngIf="weight.errors && (weight.dirty || weight.touched)"
            class="error">
            <p [hidden]="!weight.errors.min">
              Weight must be bigger than a feather's. {{weight.value}} is way too low.
            </p>
            <p [hidden]="!weight.errors.max">
              Weight can't be bigger than a Rancor's. {{weight.value}} is too high
            </p>
        </div>
      </div>
      <div>
        <label for="height">Height: </label>
        <input type="number" name="height" [(ngModel)]="person.height">
      </div>
      <div>
        <label for="profession">Profession:</label>
        <select name="profession" [(ngModel)]="person.profession">
          <option *ngFor="let profession of professions" [value]="profession">{{profession}}</option>
        </select>
      </div>


      <button type="submit" [disabled]="!personForm.form.valid">Save</button>
    </form>
  </section>

  <button (click)="gotoPeoplesList()">Back to people's list</button>
</section>

We just need to update our PersonDetailsComponent to be able to handle that submit event:

// imports 

@Component({
  selector: 'app-person-details',
  templateUrl: './person-details.component.html',
  styles: []
})
export class PersonDetailsComponent implements OnInit {
    // codes...

    savePersonDetails(){
        alert(`saved!!! ${JSON.stringify(this.person)}`);
    }
}

And now you can test everything you've just done in the browser: click on Luke Skywalker, change his name to Luke Vader (moahahaha), then save, and you should see your changes in an alert box.

Let’s update our component further so we can save this information with the help of our PeopleService.

Saving Information

This has nothing to do with forms and validation so I'll just run wild like the wind over it without much detail. We are going to add a save method to our service and then save the changes made to a person in memory within that service.

We start by updating the PersonDetailsComponent:

// etc
export class PersonDetailsComponent implements OnInit {
    savePersonDetails(){
      this.peopleService.save(this.person);
    }
}

And then we update our PeopleService with the new save method:

import { Injectable } from '@angular/core';
import { Person } from './person';

const PEOPLE : Person[] = [
      {id: 1, name: 'Luke Skywalker', height: 177, weight: 70, profession: ''},
      {id: 2, name: 'Darth Vader', height: 200, weight: 100, profession: ''},
      {id: 3, name: 'Han Solo', height: 185, weight: 85, profession: ''},
    ];

@Injectable()
export class PeopleService{

  getAll() : Person[] {
    return PEOPLE.map(p => this.clone(p));
  }
  get(id: number) : Person {
    return this.clone(PEOPLE.find(p => p.id === id));
  }
  save(person: Person){
    let originalPerson = PEOPLE.find(p => p.id === person.id);
    if (originalPerson) Object.assign(originalPerson, person);
    // saved moahahaha
  }

  private clone(object: any){
    // hack
    return JSON.parse(JSON.stringify(object));
  }
}

If you are a very observant person you’ll see that I have added the clone method to this service. The purpose of this mighty teeny tiny method is to avoid sharing the same object references between the different components in the app so that we can simulate “saving” in a way more faithful to reality.
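The problem the clone avoids is easy to demonstrate outside Angular. Here is a minimal sketch in plain TypeScript (the Person shape is trimmed down for the example):

```typescript
// Why the service clones: without a copy, edits in one component
// mutate the very object every other component (and the "database") holds.
interface Person { id: number; name: string; }

const stored: Person = { id: 1, name: 'Luke Skywalker' };

// Shared reference: "editing" also changes the stored original.
const sharedRef = stored;
sharedRef.name = 'Luke Vader';
console.log(stored.name); // 'Luke Vader' -- the stored record changed too

// Cloned copy: edits stay local until save() is explicitly called.
stored.name = 'Luke Skywalker';
const copy: Person = JSON.parse(JSON.stringify(stored));
copy.name = 'Luke Vader';
console.log(stored.name); // still 'Luke Skywalker'
```

With the clone in place, discarding the form without saving really discards the changes, which is much closer to how a real backend would behave.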

What’s With NgModel and NgForm?

If you are a little bit like me, you are probably slightly confused with the ngModel and ngForm directives. So let’s do some recap about them:

ngModel

  • ngModel lets you track the state and validity of your inputs
  • ngModel adds css classes to your inputs based on their state, whether they have been touched, changed or whether they are valid or not.
  • Using #name="ngModel" on an input element creates a local template variable called #name and assigns the ngModel directive to it. You can then use that variable to access the ngModel directive's properties like valid, pristine, touched, etc.
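As a quick illustration of the CSS-class point, the class pairs Angular toggles on an input are ng-untouched/ng-touched, ng-pristine/ng-dirty and ng-valid/ng-invalid, and you can style them directly. A minimal sketch (the styling choices are just an example):

```html
<!-- Angular toggles ng-untouched/ng-touched, ng-pristine/ng-dirty
     and ng-valid/ng-invalid on this input as its state changes -->
<input type="text" name="name" [(ngModel)]="person.name" required #name="ngModel">

<style>
  /* e.g. flag fields only after the user has interacted with them */
  input.ng-invalid.ng-touched { border-left: 4px solid red; }
  input.ng-valid.ng-touched   { border-left: 4px solid green; }
</style>
```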

ngForm

  • Angular 2 attaches an NgForm directive to every form element.
  • The ngForm directive exposes the form.valid property that lets you know if all controls within a given form are in a valid state.

ngModel and ngForm

  • Whenever you add the ngModel directive to an input, Angular 2 registers it, under the name that you provide (remember name="name"), with the NgForm directive that Angular 2 automagically attaches to any form element.
  • The ngForm directive contains a collection of the controls created using the ngModel directive.

Hope that makes it clearer! If you want to learn more about the ngForm directive, take a look at the Angular 2 docs.

Bonus Exercise. Adding a Select Input in Angular 2

As a bonus exercise let's try to add a select input with Angular 2 to select the profession of our Star Wars figures. An HTML5 select element, also known as a dropdown or listbox in UI circles, is a select element that wraps a collection of option elements.

We will start by adding a profession to the Person interface:

export interface Person {
  id: number;
  name: string;
  height: number;
  weight: number;

  // it is optional because I know it
  // doesn't exist in the API that we will
  // consume in the next exercise :)
  profession?: string;
}

Then we update our PersonDetailsComponent to include all the available professions:

export class PersonDetailsComponent implements OnInit {
    professions: string[] = ['jedi', 'bounty hunter', 'princess', 'sith lord'];
    // other code
}

And finally we update the PersonDetailsComponent template to include the select element:

<!-- description header ... -->
  <section>
    <form (ngSubmit)="savePersonDetails()" #personForm="ngForm">
      <!-- old inputs ... -->
      <!-- ...and the new select element -->
      <div>
        <label for="profession">Profession:</label>
        <select name="profession" [(ngModel)]="person.profession">
          <option *ngFor="let profession of professions" [value]="profession">{{profession}}</option>
        </select>
      </div>
      <button type="submit" [disabled]="!personForm.form.valid">Save</button>
    </form>
  </section>

And that’s it! You can now go back to the form, click on Luke Skywalker, make him a SITH LORD, save, and who knows what’s gonna happen in the next Star Wars movie.

Do You Want to Learn More About Forms?

If you would like to learn more about Angular 2 forms, check out any of these articles:

And by the by, if you haven’t checked it out yet, take a look at this awesome course on Angular 2 at Pluralsight: Angular 2: First Look by the one and only John Papa.

Concluding

Great! We’ve reached the end of yet another article. Now you can count forms and validation as one more of your Angular 2 ninja skills.

Up next! Consuming a real service using Angular 2 Http.

An Aside: Oh No! I Haven’t Had Time To Update To The New Forms API! Can I Update to RC5?

If you haven’t had time to update to the new forms API but still want to start using RC5, you can enable the old forms by importing the DeprecatedFormsModule and adding it to your app.module.ts:

import { DeprecatedFormsModule } from '@angular/common';

@NgModule({
  imports: [ BrowserModule, routing, DeprecatedFormsModule ],
  declarations: [ AppComponent, PeopleListComponent, PersonDetailsComponent],
  bootstrap: [ AppComponent ]
})
export class AppModule { }

Lambda Expression in Java

Java Lambda Example

package it;

public class TypeInterfaceExample {

    public static void main(String[] args) {

        // All three declarations are equivalent: the parameter type,
        // and then the parentheses, can simply be omitted.
        StringLambdaString myLambda1 = (String s) -> s.length();
        StringLambdaString myLambda2 = (s) -> s.length();
        StringLambdaString myLambda3 = s -> s.length();

        printLambda(myLambda3);

        // OR: pass the implementation of getLength directly inline
        printLambda(s -> s.length());
    }

    public static void printLambda(StringLambdaString l) {
        System.out.println(l.getLength("Hello Lambda"));
    }

    interface StringLambdaString {
        int getLength(String s);
    }
}

Thanks to Brains

Best Practice for WordPress security

From link

Roughly 30,000 websites are hacked every day. Could your website become one of them? In a perfect world, using a popular content management system like WordPress would end many security woes — but unfortunately, that’s not the case. By default, WordPress isn’t very well secured; it’s built to easily publish content, not necessarily to protect it. If you want to protect your content as a blogger, you’re going to need to take some extra steps.

But becoming a blogger shouldn’t mean that you have to be some sort of technical savant. You’re a content producer, not a hacker. Because of that, we’ve compiled a complete, all-in-one guide to “hardening” and protecting your WordPress blog. And it’s a little long — but most of the steps that you’re going to have to take are only going to have to be taken once.

By the end of this guide, you’ll know absolutely everything there is to know about WordPress safety and security, from better password habits to modifying the default WordPress configuration files. Whether you’re setting up a one man show or creating an immense magazine of content, you’ll be able to rest assured that your data, your site, and even your users are protected.

But first… let’s talk about the risks.

Why Do I Have to Secure My WordPress Account?

It’s a blog — not a bank account! Why would anyone try to hack your site? It’s easy to assume that your one blog isn’t going to become the target of a serious attack but, in truth, there are more reasons for a cybercriminal to target you than you might think. WordPress blogs are frequently hacked for the following reasons:

  • To collect your personal information — or the information of your users. Identity theft is a big reason why a cybercriminal might go after a well-trafficked blog. You don’t even need to collect a lot of information to make this viable: the criminals may only be seeking to collect email addresses. They can sell active email addresses to advertising companies or use them as their own spamming lists.
  • To post “black hat SEO” web pages. If your website is currently a highly ranked website (or even a moderately ranked one), a cybercriminal may want to take over your website domain so that they can post their own content on it. This is very similar to domain hijacking, and it’s designed to leverage the popularity of an existing website in order to sell goods and services, spread malicious programs, or point to affiliate advertising.
  • To steal your website and hold it for ransom. Yes, this happens. And it’s usually not obvious. No one jumps out at you from a digital alley and says “$30 or the website gets it!” Instead, they throw a splash page on your website that says that it’s been hacked, and then direct you to services that you can purchase that will restore your website… all under the guise of “helping” you from the evil cyber criminals. This works because many people don’t back up their websites, so they can’t restore their content themselves.
  • To embed malware and malvertising. Some people just want to watch the world burn. A cybercriminal can pull off a rather subtle attack by simply embedding malware and malvertising into your website. Your website will still be up — so you may not notice that it is currently distributing malicious programs to your users (likely including yourself). Eventually, however, search engines are going to notice and your website is going to be blacklisted.
  • To simply take your website down. DDoS attacks are one of the easiest ways that a cyber-attacker can take a website down. This can happen for a variety of reasons: the attacker may be a competitor, the attacker may disagree with your positions, or the attacker may be trying to use it to gain access to your website by exposing other vulnerabilities.

Apart from this, your website can also be targeted as part of a larger attack. Criminal attackers may simply be scanning for vulnerable WordPress accounts — because they already know about the vulnerabilities that exist in WordPress. They may simply attempt exploits on all of the websites they find, hoping to recover something of usefulness and interest.

So how do you avoid becoming a target? It all begins with the setup.


Chapter One: Setting Up and Configuring Your WordPress Installation

WordPress is in the business of making it easy for you to post your thoughts and experiences. It isn’t necessarily in the business of securing them. The default configuration of your WordPress installation makes it very easy for you to use, but it also makes it easier for others to access. Before you even begin fiddling with your first post, you need to change some settings.

Change Your Administrative Username

By default, WordPress sets your username to “admin.” This is a problem: in order to log in, someone only needs to guess your password. But you can defeat this by having a username that is different and that is not visible to the public.

In WordPress, you can’t directly change usernames; instead, you have to create a new username and delete the old one before you begin. That makes it a little more complicated, but this is as good a time as any to become familiar with the administrative settings dashboard.

How to Change Your Administrative Username

On the administrative dashboard, click on “Users.” You’ll retrieve a list of current users, which should be only a single user named “admin.”

Next to the Users heading, click on the “Add New” button.

Fill in your user information as directed and select the role of “Administrator.” Click on “Add New User.”

Hover on the old “admin” user name. Click on “Delete.”

Confirm deletion.

Click on your new administrator name.

Scroll down to change the nickname and select to “display name publicly as” this nickname.

Add Two-Factor Authentication

Two-factor authentication adds an additional layer of security upon a traditional username and password combination. Think of two-step authentication as a lock in which you have to turn two separate keys. One of these keys is your login credentials — your username and password. The other key can be either of the two following options:

  • “Something you are.” A fingerprint scan, eye scan, or other biometric service can be used to verify that a user is who they say they are. This is frequently used to lock phones, doors, and other physical devices.
  • “Something you have.” A smartphone or similar device can be used to verify a user’s identity. Frequently this means sending the user an SMS message with a PIN. The user then has to enter that PIN alongside their login credentials.

Two-factor authentication can only be set up natively on WordPress.com. Otherwise it requires the use of plug-ins such as MiniOrange 2FA, Google Authenticator, or Sucuri.

Installing Two-Factor Authentication With Google Authenticator

Go to Add Plug-Ins and select “Add New.”

Search for “MiniOrange Google.”

Click “Install” and then “Activate.”

MiniOrange will send you an email to verify your email.

Then, you can set up your account.

Select the 2FA tab to choose a type of two-factor authentication. The simplest and most secure method is Google Authenticator. MiniOrange also offers premium options, including SMS, as well as less secure delivery via email.

Click on the alert to configure security questions, which will ensure that you do not get locked out of your account.

(Optional) You can also set up which roles have two-factor authentication.

WARNING! Using WordPress’s optional Jetpack, it’s possible to connect your own WordPress website to your WordPress.com login. From then on, you can log into all of your WordPress sites through your WordPress.com credentials. This is not advised. If one of your sites is compromised, all of your sites will be compromised.

Install a CAPTCHA Solution

Everyone knows CAPTCHA. CAPTCHA prevents bots from performing actions on your site, such as trying to log in or trying to submit a form. A bot can be very persistent: not only can they eventually break through your security, but they could overly tax your website, resulting in denied traffic and slow connections.

Though some CAPTCHA systems can seem a bit “annoying” — such as the ones that are difficult to read — they can be essential for high volume blogs. The CAPTCHA WordPress Plugin lets you add CAPTCHA controls to login forms, registration forms, comments forms, contact forms, and more. Further, you can control the type of CAPTCHA code that’s displayed, so that it has limited impact on your legitimate users.

Installing a CAPTCHA Solution

Go to Add-Plugins and Select “Add New.”

Search for “Captcha by BestWebSoft.”

Click on “Install.”

Review Captcha Settings. Enable Captcha for login forms, registration forms, forgot password forms, and comment forms.

Save changes.

From now on, when you log in, you’ll be greeted with a captcha code.

Get Spam Protection for Your Comments

At first glance, spam protection seems more like a usability issue than a security issue. “Spam” comments generally come from bots who are seeking to boost the website rankings of other websites. Bots will generate “word salad” comments that have nothing to do with your posts but ultimately link to the site that they are promoting.

Where it becomes a security issue is twofold: spam comments can bog your blog down with excess traffic and they can contain potentially malicious links. WordPress does not have built-in spam protection, but it is provided for free through the Akismet WordPress plug-in. There are also some other options, such as the official WordPress security plug-in, and all-in-one systems like Sucuri.

Installing the Akismet WordPress Plug-In

Go to Add Plug-Ins and select “Add New.”

Search for “Akismet.”

Install and activate the Akismet plug-in.

Click on “Set Up Your Akismet Account.”

Click on any of the options.

Get an Akismet API Key for free.

Go to the Akismet Settings and enter in the Akismet API Key. Spam protection will begin instantly.

Remove Your WordPress Version Number

WordPress telegraphs the version number that you have installed for the world to see. While this might be interesting information, it can also be harmful. A malicious user could see that you’re using a version of WordPress that still has a certain vulnerability — and they can then target you. The easy solution? Just remove the number.

This takes a bit of editing, so remember to backup your website first. Once your website has been backed up:

Go to “Appearance” and then “Editor.”

Go to the right and click on “Theme Functions” (also labeled “functions.php”).

Note that some more advanced themes may have a custom functions file. Consult your theme documentation for more details.​

Type add_filter('the_generator', '__return_empty_string');

This line adds a filter to the hook WordPress uses to output your version string; the built-in __return_empty_string helper replaces it with an empty string, thereby preventing it from being displayed.

Click on “Update File.”

This will strip out your version number from your WordPress header and from your WordPress RSS feeds at the same time. Now you just have a few more adjustments to do.

Disable the WordPress API

WordPress offers a REST API for developers who want to integrate their own programs into WordPress. However, there are some issues with the REST API — most notably that the REST API can actually bypass WordPress’s authentication system, including two-factor authentication. Unless you are using it for a custom-built application, it’s a solid practice to simply disable the WordPress API entirely. This can be done through a plug-in, such as Disable REST API.

All you need to do is click “Install Now” and then “Activate.”

Disable XML-RPC

XML-RPC is a special WordPress feature that enables remote access and posting. This can be a security issue, as it creates another way that a malicious user could potentially access your site. If you’re interested in publishing posts remotely, you may need to leave XML-RPC enabled (it is enabled by default). If you are not publishing posts remotely, there’s no reason to leave this additional vulnerability open.

The easiest way to disable XML-RPC is to install the Disable XML-RPC plug-in. Though there are other ways, it would require modifying the code of a different plug-in.

Again, all you need to do is click on “Install Now” and then “Activate.”


Chapter Two: Passwords and Password Hygiene

So far many of the changes that we have made have been designed to counter security issues in the WordPress platform itself. But the platform only represents half of the risk. An equal amount of risk comes from the user — and, unfortunately, that’s you. There are many ways you could potentially (and accidentally) create your own security vulnerabilities. One of the major ways lies in passwords.

As of the most recent versions, WordPress Core actually requires “strong” passwords by default. That means that WordPress won’t let you set a password that its own algorithm deems too weak — and that’s a good thing. But there are still some things you should know about how passwords protect you, and how you can protect them.

Crafting a Strong and Memorable Password​

What makes a password good? A good password is both complex and easy to memorize. WordPress will make sure that your password is complex, but the passwords that it automatically generates are most definitely not easy to memorize — in fact, they’re generally impossible to remember. That can lead to people foregoing the automatically generated passwords altogether and attempting to make their own.

Complexity is important because the more complicated your password is, the less likely it is to be guessed by an intruder. But memorization is also important; if you can’t remember your password, you’re more likely to save it in an app, write it down in your notepad, or simply reset it the first time you forget what it is.​

Most people do not choose good passwords. To understand what makes a good password, let’s use an example:

  • “shells” – This is an obviously bad password. It’s a single dictionary word. It can easily be guessed, especially if there’s some reason for choosing the word shells. And you might think “what person is going to guess ‘shells’?” But people are rarely used for this process. Instead, automated scripts are used to go through an entire dictionary worth of words to eventually find the right one.
  • “sh311s” – This is often considered to be a good password, but it really isn’t. It’s not long enough, and the complexity is simply confusing — you’ll find yourself wondering whether you used an ‘e’ or a ‘3’. To a computer, “shells” and “sh311s” are functionally identical.
  • “#Sh@*zHQWoa*” – This is the type of password that’s usually provided through auto generation. In practice, it can be useless; it’s only helpful if saved in a password manager, which opens the door to other security issues entirely.
  • “She_sells_sea_shells.” – This is actually the best password on this list (well, assuming it wasn’t part of a very popular nursery rhyme). It is long, complex, and easy to remember.​

Complexity doesn’t mean that your p4ssw0rD has to look complex to you; this is a common misunderstanding. Instead, complexity goes up exponentially by length — and longer pass “phrases” are generally easier to remember and impossible to easily guess.
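To put rough numbers on that claim, here is a back-of-the-envelope sketch: the brute-force search space is simply the alphabet size raised to the password length, so length dominates.

```typescript
// Back-of-the-envelope search-space sizes: alphabetSize ^ length.
// Length wins: a long phrase over a small alphabet dwarfs a short
// "complex" password over a large one.
function searchSpace(alphabetSize: number, length: number): number {
  return Math.pow(alphabetSize, length);
}

console.log(searchSpace(26, 6));   // "shells": lowercase only, 6 chars -> ~3.1e8
console.log(searchSpace(36, 6));   // "sh311s": lowercase + digits    -> ~2.2e9
console.log(searchSpace(95, 12));  // 12 random printable ASCII chars -> ~5.4e23
console.log(searchSpace(95, 20));  // a 20-character passphrase       -> ~3.6e39
```

Substituting a ‘3’ for an ‘e’ barely moves the needle, while each extra character multiplies the search space again, which is exactly why long passphrases beat short leetspeak.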

Practicing Good Password Hygiene​

Every morning you probably brush your teeth, floss, and wash your face — though maybe not in that order. But just as you need to practice good physical hygiene, you also need to practice something called good password hygiene. In IT, good password hygiene means maintaining your passwords properly… and making sure they aren’t unnecessarily exposed to risk. Password hygiene is called hygiene because it requires the development of good habits.

  • Always memorize your passwords. In the prior section, we discussed why making passwords memorable is important. Even if you have to use some sort of mnemonic device, passwords should always be committed directly to your memory.
  • Never save your passwords in plain text. If your passwords are saved somewhere on your computer, such as in a notepad on your computer’s desktop, anyone will be able to view it and log into your WordPress account. This also goes for post-it notes on physical desk tops.
  • Don’t give out your passwords to others. Though you may trust someone, that doesn’t necessarily mean that their password hygiene is up to snuff. When you give out a password, you run the risk that someone else might lose that password.

Remember: passwords are the first line of defense you have when securing your WordPress account. Though they aren’t the only security you should rely upon, a well-crafted and well maintained password can do much of the heavy lifting in terms of your system security.

Making Sure Your Password Can’t Be Reset

…At least, not without your knowledge. One substantial security risk involving passwords is the ability to reset a password. Other user accounts can be particularly bad about this; a malicious user might be able to reset your password simply by knowing a little about you, such as your birth date. WordPress requires that you have access to your administrative email account to reset your password. And that also means that your security is only as good as your email security.

WARNING! Anyone who has access to your email account can easily find a way to access your WordPress account — and can lock you out of both. Just as it’s important not to share your WordPress login information, it’s also important not to let anyone use your email account.

Locking Out Multiple Sign On Attempts

WordPress does not have built-in functionality for locking out multiple sign-in attempts. And that means that a persistent individual can sit there virtually all day just trying different username and password combinations. A login limiting plug-in will limit a user to a certain number of tries during a certain amount of time — such as three tries every hour. It can also permanently lock down a system (until properly unlocked) if a certain number of incorrect attempts are made. This can be achieved through the installation of a single-use plug-in such as WP Limit Login or a more comprehensive security solution such as Sucuri.
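The logic such plug-ins implement can be sketched in a few lines. This is a hypothetical TypeScript sketch of the idea (N failures inside a time window trigger a temporary lockout), not WP Limit Login’s actual code:

```typescript
// Hypothetical sketch of login-attempt limiting: track failure
// timestamps, and lock the account when too many fall in one window.
class LoginLimiter {
  private failures: number[] = []; // timestamps (ms) of failed attempts
  private lockedUntil = 0;

  constructor(
    private maxAttempts = 3,
    private windowMs = 60 * 60 * 1000,   // e.g. three tries per hour
    private lockdownMs = 15 * 60 * 1000, // lockout duration
  ) {}

  recordFailure(now: number): void {
    if (this.isLocked(now)) return;
    // Old failures age out of the window before counting.
    this.failures = this.failures.filter(t => now - t < this.windowMs);
    this.failures.push(now);
    if (this.failures.length >= this.maxAttempts) {
      this.lockedUntil = now + this.lockdownMs;
      this.failures = [];
    }
  }

  isLocked(now: number): boolean {
    return now < this.lockedUntil;
  }
}
```

With the defaults above, a third failure within the hour locks the account for fifteen minutes, while older failures simply age out of the window.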

Installing WP Limit Login Attempts

Install and activate WP Limit Login Attempts, and then modify your settings:

  • Number of login attempts: the number of attempts allowed before lockdown initiates.
  • Lockdown time in minutes: the number of minutes the user will be locked out for.
  • Number of attempts for captcha: when a captcha will engage to prevent bot attempts.
  • Enable captcha: whether you want to add a captcha at all.

YOU SHOULD KNOW: At any time, you can go to your plug-ins in the administrative dashboard and select “deactivate.” If your blog appears to be acting strangely or loading slowly, you may want to deactivate plug-ins one by one to determine which plug-in might be the culprit. Incorrect settings could lead to performance issues later on.


Chapter Three: Adding an Internal Monitoring System

Up to now, you may have noticed that securing WordPress involves a lot of small changes, management, and maintenance. You can bolster the overall security of WordPress through the use of an internal security monitoring system, which will actually make many of these changes on your behalf. Wordfence and Sucuri are two of the most popular management systems; though WordPress offers an official security plug-in, its uses are fairly limited.

Monitoring Security with Sucuri

Offering “complete website security,” Sucuri is able to both clean previously hacked websites and protect websites from attacks.

Sucuri is the leading commercial option for all-in-one WordPress security. For SMBs and professionals, Sucuri is likely one of the better options — it comes with a wealth of robust features that both protect your website while also reducing the amount of time you need to spend on setup and administration. Some of the most prominent features of Sucuri include:

  • Site Cleaning. If you’ve already been hacked, Sucuri can restore your website and clean up any malicious infections. These features include the ability to reset the password of any user, reset existing plug-ins, and trace back potentially malicious activity.
  • Site Reputation. If your site has already been blacklisted by Google or disabled by its host, Sucuri can detect this and help you become reestablished.
  • Site Protection. If you want to protect yourself from being hacked, Sucuri offers DDoS and brute force protection, in addition to protection against many current security exploits and vulnerabilities.
  • SSL Certificates. Sucuri provides SSL certificates for their customers under their professional plans. SSL certificates make it possible to encrypt and protect your blog’s transmitted data.
  • Advanced Website Protection. Sucuri scans, detects, and mitigates attacks against websites through their Website Application Firewall, including DDoS attacks and brute force password attacks.
  • Scanning and Monitoring. Sucuri actively scans websites for signs that they may have been attacked, such as through malware or malvertising.
  • Site Hardening. Sucuri additionally makes many changes to improve WordPress’s overall security, such as: updating WordPress and PHP, removing the visible WordPress version, protecting the uploads directory, restricting access to internal directories, updating and using security keys, and checking for information leakage.

Sucuri is a comprehensive security plug-in that can be installed for free. To install Sucuri, download the “Sucuri WP Plugin.”

Click on “Sucuri” in your new administrative panel. Sucuri will first ensure that WordPress has not been modified in any way.

It will also make sure that the site is clean and it is not blacklisted.

Before going further, you will need to generate an API key. This will enable firewall protection. Simply provide your domain and email address to get started.

Once the API key is generated, you’re free to go through the Sucuri WP plug-in settings, which are comprehensive.

  • Scanner. This system looks for changes that have been made to your WordPress installation. If you are experiencing issues with WordPress, you can consult with the scanner to find out more.
  • Hardening. This feature goes over many of the changes that we have made and more, allowing you to automatically do things such as: verify your PHP version, delete the default administrative account, and block PHP files in the wp-includes directory.
  • Post-Hack. Secret security keys can be used to improve upon your security and authentication, and any user passwords can be reset, in addition to any installed plug-ins.
  • Alerts. Here you’ll be able to control where security alerts go – generally to your administrative email account.
  • API Service Communication. Your API key and its details are stored here – there shouldn’t be any changes that you need to make.
  • Website Info. This contains all of the credentials and other information related to your website.

Monitoring Security with Wordfence

Wordfence is the leading “freemium” plug-in for all-in-one WordPress security, with a large inventory of free features in addition to paid options.

Accessible and affordable, Wordfence presently has millions of users across the globe. Wordfence provides firewall, malware scanning, and login security services, all designed to build on top of WordPress Core. Even the free version of the plug-in is relatively feature complete. Some notable features include:

  • Web Application Firewall. The Wordfence Web Application Firewall detects attacks such as SQL injections, malicious file uploads, and DDoS attempts.
  • Website Scanning. Wordfence can provide hardening for your website by detecting problems in its public configuration, backups, posts, comments, and passwords.

There are also some premium features available:

  • Protection against spam. Wordfence can check comments against lists of known spammers, in order to better detect and remove spam. This feature takes the place of plug-ins such as Akismet.
  • Protection against blacklisting. Wordfence can additionally check to see if your website may be getting spammed to other sites. This is a commonly used tactic to get a website blacklisted; if Google sees your website being used in this fashion, it may remove you from search engine results.
  • Rate limiting. Wordfence can limit high volume traffic to a certain rate, so that users such as bots can still access the site, but without interfering with its responsiveness. This can be especially useful to limit crawlers — bots that look through websites to index them for search engines.

Wordfence can adversely impact the performance of high traffic sites — but caching and better performance optimization can also be used to address this. In recent iterations, Wordfence has addressed and reduced its usage of overhead.

Monitoring Security with WordPress Security

WordPress officially provides some advanced security features through its WordPress Security plug-in — but the features provided are fairly rudimentary and shouldn’t be relied upon to secure an entire site.

You can obtain some basic security features through the use of Jetpack Personal or Jetpack Business, both of which include the official WordPress Security Plug-In. WordPress Security includes spam filtering, technical support, daily off-site backups, and one-click restoration. But it is not designed to monitor and protect against advanced threats. WordPress Security is mostly designed to quickly deploy backups of your system in the event that something goes wrong. It can be very useful in the event that your website is hacked or that an employee makes a mistake that damages your site, but it is mostly responsive rather than preventative.


Chapter Four: Securing Your Web Hosting Account

Unless you are hosted directly on WordPress.com, your WordPress site is going to run on top of a hosting account. And that means that your hosting services are going to have to be just as secure as your WordPress installation. By gaining access to your web hosting account, an attacker can do anything they want — including deleting your website entirely.

Finding the Right Hosting Service

First things first — you usually want to work with a hosting service that is either experienced with WordPress or specifically targeted towards WordPress bloggers. Not only will their server environments be well-suited to the needs of WordPress, but they will also be able to provide better security tailored around the system.

There are thousands upon thousands of hosting services available, and though they may seem to be identical, some of them are far safer than others. When looking for a web host, you should consider the following:

  • Are they popular? Major web hosting services such as HostGator, DreamHost, and GoDaddy all have to have top-of-the-line security solutions because of the sheer number of clients that they have available. That doesn’t necessarily mean they are the best hosts (many of them have fairly limited resources), but they are more likely to be secure than other low cost services.
  • Is the account shared? Shared hosting packages may have additional security vulnerabilities, as multiple clients are in the same server environment. Most bloggers will not want to spend the money for a dedicated server, but they can still invest in a VPS (virtual private server) to reduce their risk.
  • Do they have built-in security features? A reputable hosting service will discuss the security features they offer, such as complimentary SSL certificates, automated backups, and firewalls.

As with many things, you don’t want to go with the most affordable hosting service. Look for a good blend of features and reputation; there are many very cost-effective options that aren’t necessarily bottom tier.

Adding External Monitoring Systems

Monitoring systems, firewalls, and scanners can all be used to protect your website from intrusion attempts. Popular options include Cloudflare and Sucuri, and some web hosts also provide their own utilities. These solutions are designed to detect, identify, and mitigate threats. They can recognize potentially suspicious traffic and deny it — while still keeping a website up and active.

External monitoring systems are particularly useful against DDoS attacks. A monitoring system will be able to identify a DDoS attack and will be able to deny all illegitimate requests while still allowing ordinary traffic to flow through. External monitoring systems can also be used to detect and reject potentially unsecured connections.

What’s a DDoS? In a distributed denial of service attack, a cyber-attacker uses multiple devices to continually create connections to a target. Eventually the target — in this case your WordPress site — becomes so inundated with requests that it can no longer respond, even to legitimate ones. This is one of the easiest and fastest ways to take a website down.​

Cloudflare is a particularly useful tool for WordPress bloggers. Not only does it protect against DDoS attempts and detect potentially malicious traffic, but it operates primarily as a Content Delivery Network. A CDN speeds up a website by caching its data; users will be able to access the website much faster and there will be less load distributed to the server. Cloudflare is also completely free and can manage multiple sites at once, additionally providing analytic data through which you can measure your website’s traffic and performance.

Set Up an SSL Certificate and Configure WordPress

SSL certificates can get a little technical — all you really need to know is that using an SSL certificate means that your data is going to be encrypted. And that means that people who are seeing your data being transmitted won’t be able to read it. Many websites you use probably use an SSL certificate. You can usually tell because there will be a “locked” icon by the URL and the URL will start with “https://” rather than “http://.”

Not all hosting accounts will come with an SSL certificate. You may need to purchase one through your web host as an add-on — or you may need to use a security plug-in that comes with one, such as Sucuri. Your web hosting service will be able to install the SSL certificate on your account but, either way, you’ll need to configure WordPress to use SSL.

How to Add SSL and HTTPS to WordPress

Click on your “General” settings in your administrative dashboard.

Change your WordPress and Site Address URLs to “https” rather than “http”.​

If you have already added content to your WordPress site, you may also need to include a redirect. For this, you will need to browse to the main directory of your web host. This is usually called “htdocs,” but may also be your website’s name. Here you will want to modify a file called “.htaccess” to include the following text:

RewriteEngine On

RewriteCond %{SERVER_PORT} 80

RewriteRule ^(.*)$ https://www.[blog].com/$1 [R,L]

In the above example, [blog] will be the domain of your blog. This will redirect any requests to “http” to “https” automatically.​

Update Your File Permissions

File permissions tell your web server who is allowed to view and access each of your website’s files. By default, WordPress is often installed with “777” permissions for its directories. Through FTP, you can select these directories, right click, and change these permissions to either “750” or “755.” Everyone will still be able to read these files, but modifying or deleting them will require owner permissions.

Your wp-config.php file should be set to “600,” and the files within your WordPress directories should be set to “640” or “644.” These permissions will still let you do anything you need to do; it will simply reduce the chances that someone else could alter or delete your files.
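The same permission changes can be applied from a shell instead of an FTP client. Here is a sketch using a scratch directory as a stand-in for a WordPress root — the directory layout is illustrative only; substitute the path of your actual installation:

```shell
# Scratch directory standing in for a WordPress root (hypothetical layout)
WP_ROOT=$(mktemp -d)
mkdir -p "$WP_ROOT/wp-includes"
touch "$WP_ROOT/wp-config.php" "$WP_ROOT/wp-includes/functions.php"

# Directories: 755 (owner can write; everyone else can only read/traverse)
find "$WP_ROOT" -type d -exec chmod 755 {} \;
# Regular files: 644 (owner can write; everyone else read-only)
find "$WP_ROOT" -type f -exec chmod 644 {} \;
# wp-config.php holds database credentials, so lock it to the owner: 600
chmod 600 "$WP_ROOT/wp-config.php"

stat -c '%a' "$WP_ROOT/wp-config.php"              # expected: 600
stat -c '%a' "$WP_ROOT/wp-includes/functions.php"  # expected: 644
```

Run on the live directory, the two `find` commands apply the policy recursively in one pass, which is much faster than clicking through directories over FTP.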

Turn Off PHP Error Reporting​

By default, many servers will send out an error message if PHP code fails — and WordPress is written in PHP. These errors are designed to help developers when they are debugging, but because they can expose parts of your website’s code, they can also be a substantial security risk. To address this, you need to turn off PHP error reporting. In the event that PHP does encounter an error, it will simply send a blank page.

This requires a modification of your wp-config.php file, which can be found via FTP (or a file browser) in the base directory of your WordPress installation. At the top of wp-config.php, below the first line, you should put:

error_reporting(0);

@ini_set('display_errors', 0);

Of course, this also means that you aren’t going to know what specifically failed in the event that your website does fail — and, in that situation, you might need to temporarily toggle errors back on.​


Chapter Five: Protecting Against Your Users

Bloggers often run in packs. If you’re running a blog that has multiple contributors, then your greatest threat might not be from the outside — it may actually be your own users. Users make mistakes; in fact, when businesses are hacked, the cause is frequently internal: an estimated 52% of cyber attacks occur due to system failures or human error.

The Importance of Restricting Permissions

In security, there are things that are called “best practices.” These are the things that we do in an ideal world to create the lowest risk environment. One of the most important security best practices is to restrict user permissions to only what they truly need to complete their day-to-day tasks. When you do not restrict permissions appropriately, you run the risk that:

  • A single user could cause substantial damage — either intentionally or accidentally. There is no reason for a contributor to be able to delete another contributor’s posts, but they might start to do so if they think those posts were inappropriately filed under “their account.”
  • A single user login breach could become more dangerous. If a malicious user gets into a contributor’s account, they are fairly limited in the amount of damage they can do. If a malicious user gets into an administrator’s account, there’s far more potential for damage. The fewer users there are with administrative powers, the better.

It’s also a good practice not to assign temporary permissions — i.e., not to make a user an administrator for a temporary amount of time to make some adjustments. Though this is commonly done to make a job simpler, it can easily be forgotten later on.

Setting Password Restrictions

Thanks to Chapter Two, you now know how to set a good password. But that doesn’t necessarily mean that your users do. When left to their own devices, users could set very simple passwords that will easily be cracked — and that compromises your entire system. To avoid this, you can set up restrictions regarding the passwords that your users can set.

The most important factor you want to look at is length, but you also want a decent variety of characters in addition to alphanumeric ones. You may want to request at least one number (0-9) and at least one special character (_;,/`~*). Keep in mind that very restrictive password rules actually tend to work against you rather than for you, as users will be more likely to create passwords that are difficult to remember. Difficult-to-remember passwords will need to be either written down or reset.

By default, WordPress core ensures that users have “strong” passwords and tests passwords for complexity. If you have a current version of WordPress, you may not need to worry about this. But if you need to add this functionality, you can use a plug-in such as Force Strong Passwords.
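As an illustration only — this is not the Force Strong Passwords plug-in, just a sketch of the kind of policy described above (minimum length, at least one digit, at least one special character) — the check can be expressed in a few lines of portable shell:

```shell
# Hypothetical policy: at least 12 characters, at least one digit,
# and at least one special character from the set _ ; , / ~ *
check_password() {
    pw="$1"
    [ "${#pw}" -ge 12 ] || return 1                    # long enough?
    case "$pw" in *[0-9]*) ;; *) return 1 ;; esac      # contains a digit?
    case "$pw" in *[_\;,/~*]*) ;; *) return 1 ;; esac  # contains a special?
    return 0
}

check_password 'horse9_battery' && echo "accepted"
check_password 'tooshort1_'     || echo "rejected: too short"
```

In practice you would enforce this server-side in WordPress (via a plug-in hook) rather than in shell; the point is that length plus a couple of character-class checks covers most of the policy.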

Managing User Sign-Ups

New users should always be restricted to a “contributor” status, and for the best security, they should have to be manually approved. Letting users create their own accounts can be dangerous otherwise!

Log Out Idle Users

Users sometimes forget that they’ve logged into their account. When they do this, they expose the blog to tremendous risk — anyone who is on the same computer and wants to tamper with your website can. To deal with this, you can install a plug-in that will automatically log users out after they’ve been idle for a certain amount of time.

The most popular way to do this is through the Idle User Logout plug-in. This plug-in lets you select which roles are subject to the timeout and how long a user can remain idle before being logged out. Users won’t lose their data; they’ll simply need to log in again before they can continue making adjustments.


Chapter Six: Protecting Against Third-Party Utilities and Services

There are two third-party threats that you need to be most conscientious of: third-party plug-ins and third-party advertising networks. Both of these can add content and programming to your website that could either damage your site or harm your users.

Validating Third-Party Plug-Ins​

Plug-ins for WordPress are generally guaranteed to be malware free; otherwise they would not be included within the WordPress repository. However, that is not the major concern — the major concern is that these plug-ins may not be as secure as they should be. Anyone can write and publish a plug-in, including an inexperienced developer who could potentially create a plug-in with security vulnerabilities. If part of your website is vulnerable, all of your website is vulnerable.

Before installing a third-party plug-in, you should ask yourself the following questions:

  • How many reviews does it have and how highly is it rated? You should avoid plug-ins that appear to have been barely used or that have just been published for the first time; they could have security issues that have not yet been discovered.
  • How polished is the plug-in and its documentation? The more documentation a plug-in has, the better — that means the developer is being conscientious and mindful of its design. Likewise, a plug-in that is visually polished will likely have been produced by someone who is detail-oriented.
  • How many other plug-ins has the developer released? The more experienced the developer is with WordPress, the more likely they are to produce solid, secured plug-ins for the platform. If they haven’t released any other plug-ins, they may not be aware of WordPress’s unique security environment.

Avoiding Malicious Third-Party Services

The most common type of malicious third-party service has to do with “malvertising.” Malvertising refers to advertisements that actually contain malicious code. Many bloggers fund their blogs through the use of third-party ads. Malvertising targets the users rather than the owner of the blog themselves, but it can also get a blog blacklisted if the malicious code is detected on their site. There are a few ways to avoid these products:

  • Only use popular services. Google Adsense and Bing Ads are two of the most popular networks, but that doesn’t mean they don’t ever contain malicious code — it just means they are less risky.​
  • Invest in a monitoring solution. As noted, even popular third-party networks can be infested, especially if the malicious attacker is using a previously unknown vulnerability. A monitoring solution will identify malicious code when it is run on your site, rather than trusting the service to detect it.
  • React quickly to potential threats. If you do suspect that malicious code is being run on your site, it’s important to address it immediately — even if that means taking down your advertising while you figure the situation out. Otherwise you can lose traffic and damage your website’s reputation.

Identifying Potentially Harmful Plug-Ins or Themes

The Internet is a vast and wide place, and sometimes when looking for plug-ins or themes you can be directed to individual websites or repositories that promise some of the most popular WordPress tools. But whenever you are promised something for free, it’s likely that there’s a catch. In the case of plug-ins or themes, the catch is often a virus.

When purchasing a premium plug-in or theme, it is important to go through the WordPress.org repository or a trusted corporate site. There are many websites that promise premium plug-ins or themes for free. These assets have been stolen — and even if they don’t include malicious code, it still won’t be legal to use them.​

Only Installing the Plug-Ins You Need

Though plug-ins can add some fantastic functionality, they may not always be strictly necessary for the operation of your blog. Think critically about each plug-in that you install; each one isn’t just a security risk, it also adds overhead to your website and ultimately slows it down.


Chapter Seven: Computers, Connections, and the Internet of Things

Consider an encrypted, password-protected hard drive, and a thief who wants the data that is held within it. It would take days or weeks for the thief to hack into the hard drive — and the thief only has a few minutes of time. What does the thief do?

The thief picks up the hard drive and walks away with it.

Protecting Your Blog Against Physical Intrusion​

Today we have smartphones, tablets, and laptops, all connected to the Internet and connected to your blog. Losing any one of those items could mean compromising your blog, unless you make sure that you’ve taken the appropriate steps to protect yourself. These are:

  • Always make sure that your devices are secured. All of your devices should be protected by either a PIN or a password — and, where applicable, you should use two-factor authentication such as a fingerprint reader or an ocular scanner. Your devices should automatically lock after a certain amount of time, so that they will password protect themselves when they are idled.
  • Don’t use public computers to access your blog. You never know what could be on a public computer and you can never be too cautious. If login information is stored on that computer, someone could use that computer to log in as you. Likewise, you shouldn’t log into your email account either — because it could contain information that could be used to access your blog.
  • Never access your blog through public WiFi. A public WiFi connection can be run by anyone… including people who are trying to look at your data or insert malicious code into your data transfers. SSL largely helps with this by encrypting your website’s traffic, but there can still be potential vulnerabilities related to a public WiFi connection.

Chapter Eight: Constructing Your Disaster Preparedness Plan

It’s the blogger’s worst nightmare: what happens when your site goes down? Do you know where your backups are? How quickly can you deploy them? And how current are they? In order to avoid downtime, you have to be able to answer these questions quickly and reliably.

What is a Disaster Preparedness Plan?​

A disaster preparedness plan outlines the steps that you need to take to get your website up and running again after it has been taken down. And your website could go down for any reason: your blog could be hacked, your hosting provider could go out of business, or you could even make a mistake leading to data loss.

At its most fundamental, a disaster preparedness plan usually involves backup solutions and how to re-deploy your blog’s data. But a disaster preparedness plan might also include failover services, such as the ability to redirect your traffic somewhere else while you are down, or the ability to notify your readers that there may be problems.

In general, it’s a good idea to:​

  1. Have a temporary page in place that will tell your readers that your website is down and that it is expected to be back up by a certain time.
  2. Know where to find your current backups and how to restore them as quickly as possible.

  3. Be able to start and restart services that your website depends upon, such as your web server or your database.

The Four Best Practices for Website Backups​

  1. Backups should be automatic. Don’t rely upon manual backups; there will come a time when you’ll forget. Schedule your backups to run during the lowest traffic hours of your website (as they do consume some system resources), and make sure that they are running as scheduled. Don’t forget to check on them frequently; they could fail if they run out of storage space.
  2. Backups should be incremental. You should always have monthly, weekly, and daily backups to fall back on. You never know when an intrusion could occur — or when data could be lost. It’s very possible that you might find yourself having to go back several days or even several weeks to completely restore your site.
  3. Backups should be redundant. Never store your backups only in one place. Cloud backup solutions are especially useful because they are naturally redundant… but what happens if you lose access to the service provider? Ideally, you should have backups both through your web host and through a secondary service.
  4. Backups should be elsewhere. Your backups shouldn’t only be stored on your host; that’s a recipe for disaster if your hosting account itself is hacked. Likewise, you don’t want your backups to only be on a local or external drive — what happens if that drive crashes?

Options for Backing Up Your WordPress Site

  • Your web hosting service. Most web hosting services offer their own backup system, which should be used as a secondary backup option. But don’t assume that your web host automatically does it. Notably, VPS systems (virtual private servers) usually leave it to you to install a backup solution manually.
  • A cloud-based backup solution. There are subscription-based backup solutions that are located on the cloud, which can take backups automatically from your system. WordPress offers cloud-based backups through its WordPress Security plug-in.
  • As a feature in comprehensive security plug-ins. Security plug-ins often include the ability to manage your backups, as this is a part of managing security and mitigating potential risks. Sucuri has a particularly comprehensive backup and restoration system.

An ideal backup solution will backup your website both on your website host and on a cloud solution. This gives you multiple options to recover your data and allows for almost immediate re-deployment of your site should data be lost or corrupted.


Chapter Nine: Managing and Monitoring Your WordPress Site

Your job isn’t over once you’ve configured your website and installed your tools. Your WordPress site will also need to be managed, monitored, and maintained over time. If you want to keep your website secure, you’ll need to update it regularly and defend against new and technologically-advanced threats.

Keeping Your WordPress Site Current

You may have noticed that WordPress updates itself quite frequently. These updates concern more than just functionality and improved workflow — they also address new and emerging security threats. Updating your WordPress site frequently is critical to maintaining a healthy security ecosystem.

Some security plug-ins, such as Sucuri, will routinely check to make sure that you are running the current version of WordPress. And though hiding your WordPress version can protect you from some threats, other more persistent cyber criminals may not be fooled.

Abandoning Out-of-Date Plug-Ins​

WordPress tracks which plug-ins have been frequently updated and which plug-ins have not been tested with current versions. Plug-ins that are not kept current should be replaced with plug-ins that are, even if the newer plug-ins might not offer the same functionality.

Older plug-ins will have the same issues as older WordPress installations; they could contain vulnerabilities that have been identified. Once a vulnerability has been identified in an older system, all a cyber-criminal has to do is look for a blog that’s still using that old system.

Keeping Your Site Clean​

Websites evolve. Over time you’ll add and remove content, install and uninstall plug-ins, and change themes. Keeping your site clean is a matter of deleting anything that you aren’t using right now: inactive plug-ins, old themes, and other unnecessary content.

Not only are these inactive items taking up space and other resources, but they could actually still represent a security risk even if they have already been deactivated. Plug-ins, in particular, need to be completely deleted in order to remove their risk. Otherwise they will still be on your server and their scripts can still be used.​


Conclusion

Though it may seem that securing WordPress is difficult, it’s really just a matter of being thorough and vigilant. “Hardening” WordPress does require that you go through certain configuration steps — and that you install security-related plug-ins. But once you have properly secured your WordPress installation, it should mostly be able to take care of itself. Moving forward, your blog will be able to protect itself… and you’ll know what to do if it ever cannot.

Security plug-ins such as Sucuri and Wordfence can take a substantial amount of burden off of you as the blog owner. Both Wordfence and Sucuri will commit many of the above mentioned configuration changes on their own — and will be able to monitor and manage your website 24/7. By automating parts of your WordPress security, you’ll both be able to improve upon its accuracy and reduce the amount of time you need to spend on site administration.

There are countless threats out there — and there are many reasons why a malicious attacker might target a WordPress site. With cyber criminals rapidly becoming more persistent and threatening, it becomes necessary for bloggers to be proactive about their security solutions. A proactive blogger will be able to protect their blog’s data against even some of the most advanced threats.

Through this eBook you will have hopefully learned all of the information that you needed to learn about hardening WordPress — but the world of security is also always changing. If you want to make sure that your site is secured into the future as well, you will need to remain current on modern security threats and solutions. The job of a blogger is never over as far as website maintenance and security is concerned.

But by properly securing your website, you’ll be able to build traffic faster, develop a solid reputation, and sidestep many of the costly issues associated with having a website taken down or otherwise compromised. Securing your website is one of the first steps towards developing a solid blog that will be able to steadily grow in popularity. A secured blog will have minimal downtime and will be able to serve its user base both better and more consistently.

That’s it. Now happy blogging!

WildFly(JBoss) vs. Tomcat

Introduction
Both WildFly and Tomcat are Java application servers, but they have different strengths. Making the wrong choice can result in more work than necessary.

The JBoss application server (aka JBoss AS) is a Java-based application server. It is open source software and is usable on any operating system that supports Java (because the server itself is Java based).

Apache Tomcat is a servlet container (meaning it is a Java class that operates under the strictures of the Java Servlet API – a protocol by which a Java class responds to an HTTP request). It is an open source server, providing a ‘pure Java’ HTTP web server environment in which code written in Java is capable of running.

The major difference between WildFly and Tomcat
Both JBoss and Tomcat are Java servlet application servers, but JBoss is a whole lot more. The substantial difference between the two is that JBoss provides a full Java Enterprise Edition (JEE) stack, including Enterprise JavaBeans and many other technologies that are useful for developers working on enterprise Java applications.

Tomcat is much more limited. One way to think of it is that JBoss is a JEE stack that includes a servlet container and web server, whereas Tomcat, for the most part, is a servlet container and web server.

On the other hand, Tomcat has a lighter memory footprint (~60-70 MB), while those Java EE servers weigh in at hundreds of megs. Tomcat is very popular for simple web applications, or applications using frameworks such as Spring that do not require a full Java EE server. Administration of a Tomcat server is arguably easier, as there are fewer moving parts.

When WildFly
JBoss is the best choice for applications where developers need full access to the functionality that the Java Enterprise Edition provides and are happy with the default implementations of that functionality that ship with it. If you don’t need the full range of JEE features, then choosing JBoss will add a lot of complexity to deployment and resource overhead that will go unused. For example, the JBoss installation files are around an order of magnitude larger than Tomcat’s.

When Tomcat
Tomcat is a Java servlet container and web server, and, because it doesn’t come with an implementation of the full JEE stack, it is significantly lighter weight out of the box. For developers who don’t need the full JEE stack, this has two main advantages:

  • Significantly less complexity and resource use.
  • Modularity.

For those who need add-ons that work with Tomcat, there are lightweight alternatives to JEE technologies such as EJB.

Summary
WildFly is an application server with access to the whole JEE stack, while Tomcat is a servlet container and web server.

Developers of complex Java enterprise applications should choose JBoss (or GlassFish), while those who don’t need the full JEE stack are better off with Tomcat plus any extensions they need.

JBoss implements the full Java EE specification; Tomcat implements only the Sun Microsystems servlet and JSP specifications.

References
http://stackoverflow.com/questions/3821640/what-is-difference-between-tomcat-and-jboss-and-glassfish

JBoss vs. Tomcat: Choosing A Java Application Server

Difference Between JBoss and Tomcat

How to create a daemon service for a Java program – CentOS 6

Here is how to create the service script:

vim service.name.sh

insert this:

#!/bin/sh
SERVICE_NAME=MyService
PATH_TO_JAR=/usr/local/MyProject/MyJar.jar
PID_PATH_NAME=/tmp/MyService-pid
case "$1" in
    start)
        echo "Starting $SERVICE_NAME ..."
        if [ ! -f "$PID_PATH_NAME" ]; then
            nohup java -jar "$PATH_TO_JAR" /tmp >/dev/null 2>&1 &
            echo $! > "$PID_PATH_NAME"
            echo "$SERVICE_NAME started ..."
        else
            echo "$SERVICE_NAME is already running ..."
        fi
    ;;
    stop)
        if [ -f "$PID_PATH_NAME" ]; then
            PID=$(cat "$PID_PATH_NAME")
            echo "$SERVICE_NAME stopping ..."
            kill "$PID"
            echo "$SERVICE_NAME stopped ..."
            rm "$PID_PATH_NAME"
        else
            echo "$SERVICE_NAME is not running ..."
        fi
    ;;
    restart)
        if [ -f "$PID_PATH_NAME" ]; then
            PID=$(cat "$PID_PATH_NAME")
            echo "$SERVICE_NAME stopping ..."
            kill "$PID"
            echo "$SERVICE_NAME stopped ..."
            rm "$PID_PATH_NAME"
            echo "$SERVICE_NAME starting ..."
            nohup java -jar "$PATH_TO_JAR" /tmp >/dev/null 2>&1 &
            echo $! > "$PID_PATH_NAME"
            echo "$SERVICE_NAME started ..."
        else
            echo "$SERVICE_NAME is not running ..."
        fi
    ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
    ;;
esac

Then modify these environment variables to match your setup:

SERVICE_NAME=MyService
PATH_TO_JAR=/usr/local/MyProject/MyJar.jar
PID_PATH_NAME=/tmp/MyService-pid
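To register the script as a CentOS 6 (SysV init) service you would copy it into /etc/init.d and enable it with chkconfig; those commands need root, so they are shown as comments. The PID-file start/stop pattern the script relies on is demonstrated below with `sleep` standing in for the Java process ("myservice" is a hypothetical name):

```shell
# Installation on CentOS 6 (run as root; "myservice" is a hypothetical name):
#   cp service.name.sh /etc/init.d/myservice
#   chmod +x /etc/init.d/myservice
#   chkconfig --add myservice
#   chkconfig myservice on

# The script's start/stop logic is just a PID file. Demo with `sleep`
# standing in for `java -jar $PATH_TO_JAR`:
PID_PATH_NAME=$(mktemp -u)

# "start": launch in the background and record the PID
nohup sleep 60 >/dev/null 2>&1 &
echo $! > "$PID_PATH_NAME"

# "stop": read the PID back, kill the process, remove the PID file
kill "$(cat "$PID_PATH_NAME")"
rm -f "$PID_PATH_NAME"
echo "stopped"
```

One caveat of this pattern: if the Java process dies on its own, the stale PID file makes the script think the service is still running, so `start` will refuse to relaunch it until the file is removed.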


Mongo Configuration

Create admin user:

Connect with the mongo command, then type use admin.

Then run db.createUser({user: "mongo", pwd: "mongo", roles: ["userAdminAnyDatabase"]});

This creates a user named mongo with password mongo, but you can choose whatever credentials you want.

Useful commands:

show collections
use admin   ->   db.system.namespaces.find()
show dbs
db.collectionname.find()

Back up all databases:

mongodump

Back up a single database:

mongodump --host 127.0.0.1 --port 27017 --db db --out ./db.dump

Restore a database:

mongorestore --db farad --dir farad.dump/farad/

Start the service:
service mongod start

Set up the service to start automatically at boot:
systemctl enable mongod

Quantitative Trading Summary

This summary is an attempt to shed some light on modern quantitative trading since there is limited information available for people who are not already in the industry. Hopefully this is useful for students and candidates coming from outside the industry who are looking to understand what it’s like working for a quantitative trading firm. Job titles like “researcher” or “developer” don’t give a clear picture of the day-to-day roles and responsibilities at a trading firm. Most quantitative trading firms have converged on roughly the same basic organizational framework so this is a reasonably accurate description of the roles at any established quantitative trading firm.

The product of a quantitative trading company is an automated software program that buys and sells electronically traded securities to make a profit. This software program is supported by many systems designed to maintain and optimize it. Most companies are roughly divided into 3 main groups: strategy research, core development, and operations. Employees generally start and stay in one of these groups throughout a career. This guide focuses on strategy research and core development roles.

Primary job requirements:

  • Strategy research (‘research’): Programming, statistics, trading intuition, and the ability to understand market data
  • Core development (‘dev’): Low-level software engineering, networking, and system architecture

The software components of a quantitative trading system are built by one of these two teams. The majority of the components are built in-house at most major trading firms, so below is a list of the programs you could expect to build or maintain if you were on the research or dev teams. Each of these programs can be a separate process, although we’ll discuss some variants later.

Programs for live production trading:

  1. Market data parser: Dev. Normalizes each exchange’s protocol (including different versions over time) into the same internal format.
  2. Trading strategy: Research/Dev. Receives normalized data, decides whether to buy or sell.
  3. Order gateway: Dev. Converts from internal order format to each exchange’s order entry protocol (different than the market data protocol).

Programs to support live production trading:

  1. Monitoring GUI: Dev. GUIs used to be important for click traders but are now mainly used to monitor that the trading system is performing appropriately. They are still occasionally used to manually adjust a few parameters, such as overall risk tolerance.
  2. Drop copy: Dev. Secondary order confirmation to make sure you have the trading positions you think you do.
  3. Market data capture: Dev. Record the market data in parallel to what’s going into the strategy to verify later that the strategy behaved as intended and to run statistical tests on historical data (live capture is more reliable than purchasing from a vendor so most major firms avoid buying data from a vendor).
  4. Startup scripts: Dev/Operations. Launch all these different software programs in the right order and at the right time of day each time they need to be restarted (typically daily or weekly), and alert or recover from startup problems.

Programs to optimize and analyze the trading strategy:

  1. Parameter optimization: Research. Regressions or other metrics to help people compare one trading strategy parameter setting to another to find the best.
  2. Production reconciliation: Research. Metrics to confirm that the state in the trading strategy’s internal algorithm matched calculations using captured market data.
  3. Back testing simulator: Research. Shows estimated trading strategy profit or loss on historical data.
  4. Graphing: Dev/Research. Display profit or loss, volume, price and other statistics over time.

In a ‘typical’ established quantitative trading company the department breakdown would be:

  • Research
  • Dev
  • Back office and operations
    • operations/monitoring
    • telco/networking/hardware
    • accounting/HR
    • management/business development
    • legal/compliance

Since we won’t focus on them later, here’s a brief description of the latter groups:

  • Operations/monitoring: Monitor strategies and risk intraday and overnight to ensure there are no problems (like Knight Capital’s $400m+ loss).
  • Telco/networking/hardware: Purchase and rack servers, configure switch firmware, operating system settings, and network interface cards or FPGAs, connect co-located datacenters (possibly in different countries), etc.
  • Accounting/HR: Like any business there is tax, accounting, and human resources work.
  • Management/business development: There’s a lot of legwork to trading multiple exchanges around the world, such as finding contacts in other countries, negotiating fees, licensing telecom networks, and keeping ahead of new updates.
  • Legal/compliance: Trading is one of the most regulated industries. There are US and international regulators (SEC, CFTC, FCA, etc), huge and diverse rulesets such as MIFID, industry regulating agencies like FINRA, and exchange-level self-regulatory regimes (CME, NYSE, etc) that each have their own rules. Ensuring and documenting compliance with each set of rules takes a lot of work.

Some of the key differentiating factors between quantitative trading companies are:

  1. How they divide up research teams – internal collaboration vs competition between siloed research/trading teams.
  2. Which exchanges and products they focus on.
  3. What type of trading strategy is used and how it’s optimized.

Although we are unable to explain how each specific company divides up their research/trading teams, the overall structure and organization of employees and software at most major quantitative trading firms follows a similar general pattern to what was described above.

Trading strategies

Now that you have a high-level understanding of what a typical quantitative trading company does and the different roles that exist, let’s go into more detail about trading strategies.

The industry has generally settled on three main types of strategies that are sustainable because they provide real economic value to the market:

  • Arbitrage: Arbitrage and its economic benefits have been well understood for quite some time and documented by academia. The companies that are still competitive in arbitrage have one of 3 advantages:
    • Scale: To determine that some complex option or futures spread products are mispriced relative to a set of others, nontrivial calculations must be performed, including the fee per leg, and then the hedged position has to be held and margined until expiry. Being able to manage this and have low fees requires scale.
    • Speed: Speed comes either from having faster telco or from being able to hedge. For example, triangular arbitrage opportunities on FX products traded in London, NY, and Japan were a major impetus for the Go West and Hibernia microwave telecom projects. Arbitrageurs rely on the speed of their order gateway connections so they can hedge on related markets if they are overfilled.
    • Queue position: Being able to enter one leg of an arbitrage by passively buying on the bid or selling on the offer reduces costs by not having to cross the spread on that leg, so being able to achieve good queue position can give an edge in arb trades.
  • Market Taking: Placing a marketable buy or sell order to profit from a predicted price change. The economic value market takers are being paid for is either:
    • properly pricing the relative value of related securities
    • trading and thereby contributing to price discovery in products after observed changes in supply and demand

    Like a real estate negotiation that can change a deal’s value minute-by-minute when negotiators come to the table face-to-face and discover each other’s positions, even though the fundamentals of the deal or real estate market certainly don’t fluctuate by the minute, market takers are the high-stakes-mediators of the trading world. Market taking requires predictive signals and relatively low-latency because you pay to cross the spread. A common low-latency market taking strategy would be to attempt to buy the remaining liquidity at a price after a large buy trade. Some firms have FPGAs configured to send orders as soon as they see a trade message matching the right conditions (more on this later).

  • Market Making: Posting passive non-marketable buy and sell orders with the goal to profit from the spread. The economic value market makers are being paid for is connecting buyers and sellers who don’t arrive at the market at the same time. Market makers are compensated for the risk that there may be more buyers than sellers or vice versa for an extended time, such as during times of market stress.

Basic trading system design

A quantitative trading system’s input is market data and its output is orders. In between is the strategy algorithm.

Input

The input to a trading system is tick-by-tick market data. The input is handled in an event loop. The events are the packets sent by the exchange that are read off the network and normalized by the market data parser. Each packet gives information about the current supply and demand for a security and the current price. A packet can tell you one of three things:

  • A limit order was added to the book. Primary fields: {price, side, order id, quantity}
  • A limit order was canceled. Primary fields: {order id}
  • A trade occurred. Primary fields: {price, aggressor side, quantity}

For example, a few packets look like this (for a more detailed, real example see here):

AddOrder { end_of_packet: 1; seq_number: 103901; symbol_id: 81376629; receive_time: 13:03:46.304606537089; source: 1; side: S; qty: 1; order_id: 210048618; price: 99.25; }

CancelOrder { end_of_packet: 1; seq_number: 103900; symbol_id: 81376629; receive_time: 13:03:41.863834923132; source: 1; qty: 0; order_id: 210048542; price: 99.00; side: S; }

Trade { end_of_packet: 1; seq_number: 103902; symbol_id: 81376629; receive_time: 13:03:46.304606537835; source: 1; aggressor_side: B; qty: 1; order_id: 210048321; price: 99.00; match_id: 20154940; }

If the trading system adds up all the AddOrder packets and subtracts CancelOrder and Trade packets, it can see what the order book currently looks like. The order book shows the aggregate visible supply and demand currently available at each price. The order book is an industry-standard normalization layer.

When you add up all the orders, the order book could look like this:

Sell 10 for $99.25

Sell 5 for $99.00 (best offer)

Buy 10 for $98.75 (best bid)

Buy 10 for $98.50

This is the main view of the market data input used by the strategy algorithm.
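To make that aggregation concrete, here is a minimal Python sketch of a book builder that I'm adding for illustration (a toy model: real feeds also carry order-modify events, sequence-gap recovery, and per-exchange quirks):

```python
from collections import defaultdict

class OrderBook:
    """Toy limit order book built from Add/Cancel/Trade events."""
    def __init__(self):
        self.orders = {}               # order_id -> (price, side, qty)
        self.depth = defaultdict(int)  # (side, price) -> aggregate visible qty

    def add_order(self, order_id, price, side, qty):
        self.orders[order_id] = (price, side, qty)
        self.depth[(side, price)] += qty

    def cancel_order(self, order_id):
        price, side, qty = self.orders.pop(order_id)
        self.depth[(side, price)] -= qty

    def trade(self, order_id, qty):
        # A trade removes quantity from the resting order it matched against.
        price, side, rest = self.orders[order_id]
        self.depth[(side, price)] -= qty
        if rest - qty > 0:
            self.orders[order_id] = (price, side, rest - qty)
        else:
            del self.orders[order_id]

# Rebuild the example book from the text (order ids are made up)
book = OrderBook()
book.add_order(1, 99.25, 'S', 10)
book.add_order(2, 99.00, 'S', 5)
book.add_order(3, 98.75, 'B', 10)
book.add_order(4, 98.50, 'B', 10)
```

A real parser would key the depth map per symbol and handle out-of-order packets, but the add/cancel/trade bookkeeping is the core of it.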

Strategy algorithm

To put into practice what we discussed above, let’s outline a market taking strategy utilizing what is often referred to as market micro-structure signals that may have made money back before quantitative trading became very competitive. Some companies have each member of their intern classes program a strategy like this as a teaching project during a summer. This strategy calculates some signals using the order book as input, and buys or sells when the aggregate signals are strong enough.

Market microstructure signals

A signal is an algorithm that takes market data as the input and outputs a theoretical price for a security. Market micro-structure signals generally rely on price, size, and trade data coming directly from data feeds. Please reference the order book state provided previously as we walk through the following signal examples.

  • A basic signal, likely used in some form by most firms, is ‘book pressure’. In this case book pressure is simply (99.00*10 + 98.75*5)/(10+5) = 98.9167. Because there is more demand on the bid, the theoretical price is closer to the offer than the bid. Another way of understanding why this is a valid predictor is that if buy and sell trades randomly arrive in the market on the bid and offer, then there’s a 2/3 chance of them filling the entire offer before the entire bid, because the bid is twice as big, so the expected future price is slightly closer to the offer than the bid.
  • A second basic signal that many quantitative trading firms use is ‘trade impulse’. A common form is to plug trade quantity into something like the book pressure formula, but with the average bid and offer quantity in the denominator instead of the current quantity (let’s say the average is 15). So if there is a sell trade for 9 on this book, the trade impulse would be -0.25*9/15 = -0.15. This example signal would only be valid for the span of 1 packet. Another way of understanding why this is a valid predictor is that sometimes buy and sell trade quantity is autocorrelated over very short intervals, because there are often multiple orders in flight sent in reaction to the same trigger by different people (this is easily measured), so if you see one sell trade, then typically the next order will also be a sell.
  • A third common basic signal is ‘related trade’. Basically, you could just take the same signal as (2), but translate it over from a different security that is highly correlated, by multiplying it by the correlation between them.

The book pressure and trade impulse signal are enough to create a market taking strategy. After the sell trade for 9, the remaining quantity on the book is:

Sell 10 for $99.25

Sell 5 for $99.00 (best offer)

Buy (10-9 = 1) for $98.75 (best bid)

Buy 10 for $98.50

But our theoretical price = book pressure + trade impulse = (99.00*1 + 98.75*5)/(1+5) + -0.25*9/15 = 98.79167 – 0.15 = $98.64167! Since our theoretical price is below the best bid, we will send an order to sell the last remaining quantity of 1 at $98.75, for a theoretical profit of $0.10833.
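The arithmetic above can be checked with a few lines of Python (the function names are mine; the values come from the example book, including the assumed average quantity of 15):

```python
def book_pressure(bid_px, bid_qty, ask_px, ask_qty):
    # Quantity-weighted midpoint: more size on the bid pulls the
    # theoretical price toward the offer, and vice versa.
    return (ask_px * bid_qty + bid_px * ask_qty) / (bid_qty + ask_qty)

def trade_impulse(spread, trade_qty, avg_qty, aggressor_sign):
    # Signed spread fraction, scaled by trade size relative to the
    # average quantity on the book.
    return aggressor_sign * spread * trade_qty / avg_qty

# Before the trade: bid 10 @ 98.75, offer 5 @ 99.00
print(round(book_pressure(98.75, 10, 99.00, 5), 4))   # 98.9167

# After a sell trade of 9: bid 1 @ 98.75, offer 5 @ 99.00
theo = book_pressure(98.75, 1, 99.00, 5) + trade_impulse(0.25, 9, 15, -1)
print(round(theo, 5))  # 98.64167
```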

That is a high-level overview of a simple quantitative strategy, and provides a basic understanding of the flow from the input (market data) to the output (orders).

Digression: Trade signal on an FPGA

If you ran the market taking strategy from the previous section live in a real trading system, you would likely find that your orders rarely get filled. You want to trade when your theoretical price implies there’s a profitable opportunity, but other trading systems are faster than yours so their orders reach the market first and there’s nothing left for you.

State of the art latency, as of 2017, can be achieved by putting the trading logic on an FPGA. A basic trading system architecture with an FPGA is to have the FPGA connected directly to the exchange and also to the old trading system. The old trading system is now only responsible for calculating hypothetical scenarios. Instead of sending the order, it notifies the FPGA what hypothetical condition needs to be met to send the order. Using the same case as before, it could hypothetically evaluate the signal for a range of trade quantities:

  • Sell trade, quantity = 1…
  • Sell trade, quantity = 2…
  • Sell trade, quantity = 3…
  • Sell trade, quantity = 4…
  • Sell trade, quantity = 5…
  • Sell trade, quantity = 6: (99.00*4 + 98.75*5)/(4+5) + -0.25*6/15 = 98.7611
  • Sell trade, quantity = 7: (99.00*3 + 98.75*5)/(3+5) + -0.25*7/15 = 98.7271
  • Sell trade, quantity = 8…

With any sell trade of quantity 7 or more, the theoretical price would cross below the threshold of the best bid (98.75), indicating a profitable opportunity to trade, so we’d want to send an order to sell the remaining bid. With a trade quantity of 6 or less we wouldn’t want to do anything.
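As an illustration, the hypothetical-scenario sweep the slow system performs might look like the following sketch (the function and parameter names are mine; the numbers are from the running example):

```python
def theo_after_sell(trade_qty, bid_px=98.75, bid_qty=10, ask_px=99.00,
                    ask_qty=5, spread=0.25, avg_qty=15):
    """Theoretical price after a hypothetical sell trade of trade_qty:
    book pressure on the remaining bid plus the trade impulse."""
    rem = bid_qty - trade_qty
    pressure = (ask_px * rem + bid_px * ask_qty) / (rem + ask_qty)
    return pressure - spread * trade_qty / avg_qty

# Smallest sell trade that pushes theo below the best bid (98.75) --
# this is the trigger condition handed off to the FPGA:
threshold = next(q for q in range(1, 10) if theo_after_sell(q) < 98.75)
print(threshold)  # 7
```

The FPGA then only needs to compare the quantity field of an incoming sell-trade message against this precomputed threshold.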

The FPGA is pre-programmed to know the byte layout of the exchange’s trade message, so all it has to do now is wait for the market data, and then check a few bits and send the order. This doesn’t require advanced Verilog. For example, the message from the exchange could look like the following struct:

struct Trade {
    uint64_t time;
    uint32_t security_id;
    uint8_t side;
    uint64_t price;
    uint32_t quantity;
} __attribute__((packed));
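For illustration, assuming a little-endian wire format, this packed layout is 25 bytes and can be decoded with Python's struct module (the field values below are made up):

```python
import struct

# Mirrors the packed C struct above, assuming little-endian byte order:
# uint64 time, uint32 security_id, uint8 side, uint64 price, uint32 quantity
TRADE_FMT = struct.Struct('<QIBQI')
print(TRADE_FMT.size)  # 25

# Round-trip a hypothetical trade message
raw = TRADE_FMT.pack(1234567890, 81376629, ord('B'), 9900, 6)
time_, sec_id, side, price, qty = TRADE_FMT.unpack(raw)
print(sec_id, chr(side), qty)  # 81376629 B 6
```

The FPGA version of this check is just a comparison on the bytes at the fixed offsets of the side and quantity fields.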

Because of the relative ease of this setup, it has become a very competitive trade – some trading firms can make these types of trade decisions in less than one microsecond. Also, because the FPGA connects directly to the exchange, an additional connection must be purchased for each FPGA, which can get expensive. Unfortunately, if you only have one shared connection, and broadcast data internally with a switch, the switch might introduce too much latency to be competitive. Many companies will now pay for multiple connections which raises their costs significantly.

Digression: ‘Minimum viable trading system’

As I mentioned above, the simple 3-signal trading strategy could have made money several years ago. Even a few years ago, the ‘minimum viable trading system’ that could cover trading fees was simple enough that an individual could build a successful one. Here’s a good article by someone who created their own trading system in 2009, which could be another starting point for understanding the basics of automated trading if all of this has gone over your head: http://jspauld.com/post/35126549635/how-i-made-500k-with-machine-learning-and-hft

This guide only covers, at a high level, trading and work being done by professionals in established quantitative trading firms, so things like co-location, direct connection to the exchange without going through an API, using a high-performance language like C++ for production (never Python, R, Matlab, etc), Linux configuration (processor affinity, NUMA, etc), clock synchronization, etc are taken for granted. These are large and interesting topics which are now well understood inside and outside the industry.

Other strategies besides market micro-structure signals

Market micro-structure signal based strategies, as described above before the two digressions, are just one type of strategy. Here are some other example trading strategy algorithm components used by many major quantitative trading companies:

  • Model based
  • Rule based
    • Only buy or sell during a certain time range
    • Don’t trade against an iceberg order
    • Cancel a resting order if the queue position is worse than 50%
    • Don’t trade if the last 10 trades lost money
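Rules like these compose into a simple gate in front of the signal logic. Here is a hypothetical Python sketch combining a few of the examples above (all names and thresholds are mine, not any real firm's logic):

```python
import datetime

def allowed_to_trade(now, queue_frac, recent_pnls,
                     start=datetime.time(9, 30), end=datetime.time(16, 0)):
    """Hypothetical rule-based gate: returns False if any rule vetoes."""
    if not (start <= now.time() <= end):  # only trade in a time range
        return False
    if queue_frac > 0.5:                  # queue position worse than 50%
        return False
    if len(recent_pnls) >= 10 and all(p < 0 for p in recent_pnls[-10:]):
        return False                      # last 10 trades lost money
    return True

now = datetime.datetime(2017, 6, 1, 10, 0)
print(allowed_to_trade(now, queue_frac=0.2, recent_pnls=[1.0, -0.5]))  # True
```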

Supporting research infrastructure

Now that you have a brief high-level overview of the production trading system, let’s dive deeper into research. The job of a researcher is to optimize the settings of the trading system and to ensure it is behaving properly. Working for an established company, this whole software system will likely already be in place, and your job would be to make it better.

With that in mind, here are some more details about 4 other main software components I listed above that are programmed and used by the research team to optimize and analyze the trading strategy:

  1. Parameter optimization: Most major quantitative trading firms have a combination of signals, model-based pricing, and rule-based logic. Each of these also have parameters. Parameters enable you to tailor a generic strategy to make more money on a specific product, or adapt it over time. For one product, you might want to weight a certain signal higher than another, or you might want to down-weight it as the signal decays. You quickly run into the curse of dimensionality as parameter permutations multiply. One of the main jobs for a researcher is to figure out the optimal settings for everything, or to figure out automated ways of optimizing them. Some approaches include:
    • Manual selection based on intuition
    • Regression for signal weights or hedge ratios
    • Live tweaking or AB testing in production
    • Backtesting different settings and picking the best
  2. Production reconciliation: Sophisticated strategies have many internal components that need to be continually verified in live production trading. Measuring these, monitoring them, and alerting on discrepancies is how researchers make sure things are working as they expected. If the algorithm performs differently in production than it did on historical data, then it may lose money when it was supposed to be profitable.
  3. Backtesting simulator: Plenty of information is available publicly about backtesting, such as the tools available from Quantopian or TradeStation. Simulating a low latency strategy using tick data is challenging. The volume of data to simulate a single day reaches into the 100s of GBs so storing and replaying data requires carefully designed systems.
  4. Graphing: The trading strategy is a mathematical formula in a computer, so debugging it and adding new features can be difficult. Utilizing a Python or JavaScript plotting library to publish custom data and statistics can be helpful. Additionally, it is essential to understand positions and profits or losses during and after the trading day. Graphical representations of different types of data sets makes many tasks easier.
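As a toy example of the regression approach to parameter optimization, a single signal weight can be fit by least squares against subsequent returns (the data and function name here are made up for illustration):

```python
def fit_signal_weight(signal, future_returns):
    """Least-squares weight for one signal vs. subsequent returns,
    no intercept: w = sum(s*r) / sum(s*s)."""
    num = sum(s * r for s, r in zip(signal, future_returns))
    den = sum(s * s for s in signal)
    return num / den

# Hypothetical samples: returns are roughly half the signal plus noise
signal = [0.10, -0.20, 0.05, 0.30, -0.15]
rets   = [0.06, -0.09, 0.02, 0.16, -0.08]
w = fit_signal_weight(signal, rets)
print(round(w, 3))  # 0.515
```

In practice this generalizes to a multivariate regression across many signals, with all the usual concerns about overfitting and signal decay.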

Conclusion

Most people who are new to the industry think that researchers primarily work on new signal development, and developers primarily optimize latency. Hopefully now it’s obvious that the system has so many components that those two jobs are just a few parts of a much wider set of roles and responsibilities. The most important skills for success are actually very close attention to detail, hard work, and trading intuition. On top of that it should be clear that having strong programming skills is essential. All of these systems are tailor-made in-house and have to be constantly tweaked and improved by the users themselves – you.


If you’re interested in joining our team at Headlands, please see our careers page and send your resume to careers@headlandstech.com


Note:

The information above is a collection of some helpful information to shed some light on what a quantitative trading firm does and what you could be doing if you worked at one.  The information, although intended to be helpful to you, should not be relied on and is not represented to be accurate or current. Please note this is by no means an exhaustive description of what goes on at a quantitative trading firm.  Nor should this be taken as covering industry best practices or everything you need to know to start trading quantitatively.  This is simply a very high overview of information I think those considering joining a quantitative trading firm may find useful as they navigate the interview process. 

Appendix: Latency and the timing of events

Similar to the breakdowns by Grace Hopper (https://youtu.be/ZR0ujwlvbkQ?t=45m08s) and Peter Norvig (http://norvig.com/21-days.html#answers), here’s a table of approximately how long things take:

  • Time to receive market data and send an order via an FPGA: ~300 nanoseconds
  • Time to receive market data and send an order via a ‘slow’ software trading system: ~30 microseconds
  • Minimum time between two packets from the exchange: ~10-1000 microseconds
  • Microwave between BATS and INET stock exchanges: ~100 microseconds
  • Fiber between BATS and INET stock exchanges: ~150 microseconds
  • Time for an exchange to match an order and send a response: ~100 microseconds – ~5 milliseconds
  • Microwave between NY and Chicago: ~4 milliseconds
  • Fiber between NY and Chicago: ~7 milliseconds
  • Fiber between NY and European exchanges: ~35 milliseconds

Appendix: Exchange idiosyncrasies

Exchanges almost all use different technology, some which dates back 10+ years. Different technology decisions and antiquated infrastructure have resulted in trading idiosyncrasies. There are many publicly available discussions of the effects of these idiosyncrasies. Here are a few interesting items:

https://www.eurexchange.com/blob/238346/40b5f1d684271727ef8c9c8cb9cdd09e/data/presentation_insights-into-trading-system-dynamics_en.pdf

https://www.bloomberg.com/news/articles/2017-03-17/currency-traders-race-to-reform-last-look-after-bank-scandals

https://www.nyse.com/publicdocs/nyse/markets/nyse/NYSE-Order-Type-Usage.pdf

http://www.cmegroup.com/notices/disciplinary/2016/11/NOTICE-OF-EMERGENCY-ACTION/NYMEX-16-0600-ELDORADO-TRADING-GROUP-LLC.html

https://brillianteyes.wordpress.com/2010/08/28/espeed-trading-procotol/

http://cdn.batstrading.com/resources/membership/CBOE_FUTURES_EXCHANGE_PLATFORM_CHANGE_MATRIX.pdf

http://quantitativebrokers.com/wp-content/uploads/2017/05/match-20130603.pdf

www.wsj.com/articles/SB10001424127887323798104578455032466082920

https://www.wsj.com/articles/SB10001424127887324766604578456783718395580