Western Mass Hosting - Feed
Shared and Managed VPS Web Hosting in Feeding Hills, MA
https://www.westernmasshosting.com

RunCloud Backup and Restore
Kevin P, Fri, 30 Aug 2019 12:23:53 +0000 (https://www.westernmasshosting.com/runcloud-backup-and-restore/)

We recently found ourselves needing a new incremental file backup system: something we could use to store remotely, something fast, something secure, and ultimately something reliable.

In steps Duplicity.

Duplicity allows us to sync incremental file backups to our cloud storage flawlessly; it has helped reduce backup file size, allowed us to encrypt the backups, and reduced the amount of bandwidth needed for transferring both backups and restores.  Overall, it gives our developers a much-needed break from their manual backups.

The script in our repo contains an installer that will set up everything you need on your servers, though please note that the restore is incomplete.  While we have verified manual account and app restores, we have not been able to perfect account-based restores.  What this means is that if an account on your server has more than one application in it, you will not be able to restore all of the applications in one sitting; you will have to restore each app manually.

This will come in future versions of this backup/restore system, for now, enjoy:


To install the backup/restore system, simply shell into your server and clone the repo.  Once it is finished, run sudo bash install.sh

If you already have it installed but need to update, simply do a git pull and run the installer as above.

Please keep an eye on the install; you will be prompted to configure certain necessities like your AWS S3 API credentials, the name of the bucket you will use to store the backups, and the backup retention period.

Once installed, usage is fairly simple.

Usage – BACKUP

Backup Databases

  • sudo s3_backup_database ALL
    • will loop through all MySQL databases (except system databases) and back them up

Backup a single Database

  • sudo s3_backup_database DATABASE_NAME
    • will backup a single database

Backup Everything

  • sudo s3_backup ALL
    • will loop through the filesystem backing up every account and app

Backup A Single Account

  • sudo s3_backup ACCOUNT_NAME
    • will loop through the filesystem backing up every app of a single account

Backup A Single App

  • sudo s3_backup ACCOUNT_NAME APP_NAME
    • will backup a single app in a single account


Restore A Single App

  • sudo s3_restore ACCOUNT_NAME APP_NAME
    • this will restore your account's app to a state you select during the process
    • It will also prompt you to confirm the overwrite

Restore A Database

  • sudo s3_restore_db DATABASE_NAME
    • this will restore your database to a state you select during the process
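These commands lend themselves to scheduling.  A hypothetical crontab sketch (the schedule, and running ALL nightly, are my assumptions rather than something the repo prescribes; cron may also need the full paths to the scripts):

```
# Hypothetical schedule: full file backup nightly at 1:30, databases at 2:00
30 1 * * * s3_backup ALL
0 2 * * * s3_backup_database ALL
```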

Keep an eye on the repo for updates.

Proper Website Content Security nGinx Configuration
Kevin P, Thu, 07 Mar 2019 16:25:28 +0000 (https://www.westernmasshosting.com/proper-website-content-security-nginx-configuration/)

Wow!  It’s been a little while since I have had the time to post another article.   Well, here I am again, back at it.

This time, I will show you an optimal way to keep your site secure utilizing a bit of nginx configuration.  You will need to do some work before implementing this, so please do not attempt to simply copy/paste this and expect it to work out of the box.

First things first, you need to browse through your site and note every single external call.  By external call I mean everything that is not requested directly from your site's domain.  Items like Google Fonts, Google Analytics, etc. all pull their resources from their respective domains.  Your best bet is going to be to note what the domain is and what type of resource it is: an image, a font, CSS, JavaScript, etc.

Once you have your list, get your site an SSL certificate and have it applied.  Then you will need to add the following configuration to your nginx config inside your site's “server{}” block, although placing it in your site's “location / {}” block will also work.

# Default security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"; # enable, cache, and preload subdomains
add_header X-Frame-Options "SAMEORIGIN" always; # generally only allow SAMEORIGIN frame sources
add_header X-Xss-Protection "1; mode=block"; # protect against Cross-Site Scripting
add_header X-Content-Type-Options "nosniff" always; # no sniffing allowed!
add_header Referrer-Policy "strict-origin"; # protect against cross-linking
add_header X-Download-Options "noopen"; # force the download, and do not allow direct opening
add_header X-Permitted-Cross-Domain-Policies "none"; # protect against cross-linking
add_header X-Robots-Tag none; # only allow robots.txt

# Content Security Policy
set $CSP_image         "img-src 'self' 'unsafe-inline' 'unsafe-eval' data:;"; # allowable external image domains
set $CSP_script        "script-src 'self' 'unsafe-inline' 'unsafe-eval';"; # allowable external javascript domains
set $CSP_style         "style-src 'self' 'unsafe-inline';"; # allowable external CSS domains
set $CSP_font          "font-src 'self' data:;"; # allowable external font domains
set $CSP_frame         "frame-src 'self';"; # allowable external frames/iframes domains
set $CSP_object        "object-src 'self';"; # allowable external object domains
set $CSP_connect       "connect-src 'self';"; # allowable external connect domains
set $CSP_media         "media-src 'self';"; # allowable external media domains
set $CSP_form          "form-action 'self';"; # allowable external form domains
set $CSP_frame_anc     "frame-ancestors 'self';"; # allowable external frame ancestor domains
set $CSP               "default-src 'self'; ${CSP_image} ${CSP_script} ${CSP_style} ${CSP_font} ${CSP_frame} ${CSP_object} ${CSP_connect} ${CSP_media} ${CSP_form} ${CSP_frame_anc}";
add_header Content-Security-Policy $CSP always;
add_header X-Content-Security-Policy $CSP always;

Please see the comments in the configuration above.  You will need to use the FQDN, and not the full URL, for each item.  If you do not have the domains for the external resources, or there simply are none, leave well enough alone and block everything that is not explicitly allowed 🙂
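To make the variable assembly above concrete, here is a plain-shell mirror of how nginx interpolates the pieces into one header value (the directive values are trimmed examples, not the full policy):

```shell
# Mirror of the nginx `set` assembly: each directive string is concatenated
# into one Content-Security-Policy value, exactly as ${...} expansion does.
CSP_image="img-src 'self' data:;"
CSP_script="script-src 'self';"
CSP="default-src 'self'; ${CSP_image} ${CSP_script}"
echo "$CSP"
# prints: default-src 'self'; img-src 'self' data:; script-src 'self';
```

In the real config, each external FQDN from your list gets appended inside the matching directive, e.g. a fonts host inside font-src.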

Whitelabel Nameservers on Route53
Kevin P, Fri, 03 Aug 2018 14:16:54 +0000 (https://www.westernmasshosting.com/whitelabel-nameservers-on-route53/)

This is a quick(ish) how-to for utilizing one of your registered domains as whitelabel nameservers on Amazon's Route53 service.  What you will need:


  • A registered domain
  • An account with Amazon's AWS, in particular their Route53 service
  • An IAM account with API access allowing the creation, reading, and updating of Route53 domain records
  • The AWS CLI installed on a Linux distro that you have shell access to
  • A bit of patience
  • Remembering what DNS propagation is like…

How to do it:

First things first, drop into shell on your Linux machine and run the following command.  You will need to copy/paste its output, so have your favorite text editor handy.

aws route53 create-reusable-delegation-set --caller-reference $(date +%s%N)

The output of this command should look similar to the following:

{
    "Location": "https://route53.amazonaws.com/2013-04-01/delegationset/N244H6F5LUSLJ8",
    "DelegationSet": {
        "NameServers": [
            "ns-39.awsdns-04.com",
            …
        ],
        "CallerReference": "1512169214076311809",
        "Id": "/delegationset/N244H6F5LUSLJ8"
    }
}
Once your delegation set is created, you will need to run a few more commands and capture their output so we can get the IPv4 and IPv6 addresses, so stay in shell for now.

From the output, copy and paste the “Id” and the “NameServers” to your text editor, and save it.

Now, in shell, run this for each of the nameservers in the “NameServers” block, and copy and paste the output from each:

host ns-39.awsdns-04.com

This will return you the IPv4 and IPv6 addresses, which we will need soon.
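If you'd rather capture just the addresses, you can filter the host output.  A sketch (the sample lines mimic host's output format; both addresses shown are illustrative values, so use the ones you actually get back):

```shell
# Pull the A and AAAA values out of host(1)-style output; feed it the real
# output of `host ns-39.awsdns-04.com` instead of the sample lines below.
parse_ips() {
  awk '/has IPv6 address/ {print "AAAA " $NF; next}
       /has address/      {print "A " $NF}'
}
printf '%s\n' \
  'ns-39.awsdns-04.com has address 205.251.192.39' \
  'ns-39.awsdns-04.com has IPv6 address 2600:9000:5300:2700::1' | parse_ips
# prints:
# A 205.251.192.39
# AAAA 2600:9000:5300:2700::1
```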

Now you can create the hosted zone at Route53 using the following command.  It specifies the delegation set you created above, so replace the appropriate placeholder with the Id you already copied and pasted, and also replace your domain name.  If you already use Route53 for your domain, you will need to export your zone file, delete all records, and finally delete the zone itself, importing the records back again after you run the following command.  It only takes 15-20 seconds, and typically your TTL will be much greater than that.

aws route53 create-hosted-zone --caller-reference $(date +%s%N) --delegation-set-id /delegationset/THE_DELEGATION_ID --name YOURDOMAINNAME.EXT

You will need the returned ID… so copy and paste it somewhere…

Now, pop over to AWS Route53; we need to create A and AAAA records for each of the nameservers you need for your whitelabel.  Create one of each record type per nameserver, using the appropriate IPv4 address for the A record and IPv6 address for the AAAA record.  Do not forget to name them; typically, they are named ns#, where # is a number.

Now, back to shell; here we're going to force the domain's NS and SOA records.

# Force the Nameservers Upon Us
aws route53 change-resource-record-sets --hosted-zone-id /hostedzone/YOUR_HOSTED_ZONE_ID --change-batch '{
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "YOURDOMAINNAME.EXT",
            "Type": "NS",
            "ResourceRecords": [
                {"Value": "ns1.YOURDOMAINNAME.EXT."},
                {"Value": "ns2.YOURDOMAINNAME.EXT."},
                {"Value": "ns3.YOURDOMAINNAME.EXT."},
                {"Value": "ns4.YOURDOMAINNAME.EXT."}
            ],
            "TTL": 60
        }
    }]
}'

Make sure to change the ns1-4 to whatever it was you decided to use when you named them above.

# Force the SOA Upon Us
aws route53 change-resource-record-sets --hosted-zone-id /hostedzone/YOUR_HOSTED_ZONE_ID --change-batch '{
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "YOURDOMAINNAME.EXT",
            "Type": "SOA",
            "ResourceRecords": [
                {"Value": "ns1.YOURDOMAINNAME.EXT. hostmaster.YOURDOMAINNAME.EXT. 1 7200 900 1209600 60"}
            ],
            "TTL": 60
        }
    }]
}'

Change that ns1 above to whatever you decided to use for your “primary” nameserver record.

Now you need to “glue” it all together 😉

Run this command, replacing your domain and the IPs you retrieved and set above:

aws route53domains --region us-east-1 update-domain-nameservers --domain-name YOURDOMAINNAME.EXT --nameservers Name=ns1.YOURDOMAINNAME.EXT,GlueIps=THE_IPV4,THE_IPV6 Name=ns2.YOURDOMAINNAME.EXT,GlueIps=THE_IPV4,THE_IPV6 Name=ns3.YOURDOMAINNAME.EXT,GlueIps=THE_IPV4,THE_IPV6 Name=ns4.YOURDOMAINNAME.EXT,GlueIps=THE_IPV4,THE_IPV6

Congratulations, you have now whitelabeled your nameservers to a domain of your choosing.  You can move forward with updating the rest of your domains' nameservers if you wish to utilize these new nameservers.  If they are managed at Route53, you can use the following to utilize the delegation set you created earlier.

# Force the nameservers upon us
aws route53 change-resource-record-sets --hosted-zone-id /hostedzone/YOUR_HOSTED_ZONE_ID --change-batch '{
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "YOUR_OTHER_DOMAINNAME.EXT",
            "Type": "NS",
            "ResourceRecords": [
                {"Value": "ns1.YOURDOMAINNAME.EXT."},
                {"Value": "ns2.YOURDOMAINNAME.EXT."},
                {"Value": "ns3.YOURDOMAINNAME.EXT."},
                {"Value": "ns4.YOURDOMAINNAME.EXT."}
            ],
            "TTL": 7200
        }
    }]
}'

# Force the SOA upon us
aws route53 change-resource-record-sets --hosted-zone-id /hostedzone/YOUR_HOSTED_ZONE_ID --change-batch '{
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "YOUR_OTHER_DOMAINNAME.EXT",
            "Type": "SOA",
            "ResourceRecords": [
                {"Value": "ns1.YOURDOMAINNAME.EXT. hostmaster.YOURDOMAINNAME.EXT. 2018080301 7200 900 1209600 8600"}
            ],
            "TTL": 7200
        }
    }]
}'

# Set the "no glue needed" nameserver records
aws route53domains --region us-east-1 update-domain-nameservers --domain-name YOUR_OTHER_DOMAINNAME.EXT --nameservers Name=ns1.YOURDOMAINNAME.EXT Name=ns2.YOURDOMAINNAME.EXT Name=ns3.YOURDOMAINNAME.EXT Name=ns4.YOURDOMAINNAME.EXT
DNS Propagation
Kevin P, Fri, 06 Jul 2018 14:15:45 +0000 (https://www.westernmasshosting.com/dns-propagation/)

Have you ever updated your domain’s A record and noticed that, for at least several hours, your new domain displayed the new site on one device (such as your smartphone), but the old site on another device, such as your home computer? Have you ever updated your domain’s MX records and found that, for at least several hours, not all new emails were delivered to the new email server you specified?

I cannot count the number of times I have seen these sorts of situations cause website owners to panic, pull their hair out, or get frustrated with their hosting provider. So what exactly is going on, and what can you do about it?

What is happening is that the change you made to your domain’s DNS is propagating throughout the internet. In what follows, I will explain what DNS propagation is, and ways that you can reduce propagation times so that your changes update faster.

What is DNS Propagation?

“Propagation” is a term with several related meanings, but here it simply means the spreading of something from one thing to another. DNS was devised to be decentralized, so that there is no single, massive file that everyone needs to continuously download in order to have up-to-date records of which domain resolves to which IP. A natural consequence of this decentralized system is that any DNS changes would need to propagate or spread, to other systems in order for the rest of the internet to see those changes. This is a process that requires time. Fortunately, you do have control over some of that time.

One of the steps of the DNS resolution process is when your ISP (Internet Service Provider) caches, or stores, the looked-up record for a certain period of time. This is done so that the next time that record is requested it can be given automatically, which speeds things up on your end and reduces traffic on the ISP's end. When you've made a change to your domain's DNS, any nameservers (such as those belonging to your ISP) that have already stored that record in their caches will continue serving it until the record has expired and they have to request an update. That is why on certain networks it can take hours or even days for a DNS change to be seen, while on others it is immediate: one network has a cached result, and one does not.

Fortunately, the length of time that caches are stored before being updated can be determined by you, provided that you have access to edit the TTL, or Time to Live, field of a given DNS record. Doing so is quite straightforward.

How Long Will it Take?

You will notice that each record has a TTL field containing a large number. This number is simply time in seconds. A TTL of 14400 means that any nameservers caching results for that record will do so for 14400 seconds, or 4 hours. After 4 hours, the cached record will expire and those nameservers will request an update from your DNS zone.

In general, a TTL value of 14400 is perfectly adequate for anyone's needs. Lowering that value will only increase the burden on your website's nameservers by causing them to respond more frequently to any other nameservers that are caching your domain's records.

But if you are, for example, migrating your website, or you want to change a DNS record for some other reason, then temporarily lowering the TTL value of certain records not only makes sense but can be beneficial to you.

The one caveat that you have to keep in mind before doing so is that you need to plan ahead. So, let’s suppose that I want to change an A record for blog.example.org to some other IP, and I want that record change to propagate as quickly as possible, minimizing the effects of longer record caching. Because that A record’s current TTL is 14400, or 4 hours, I first need to lower it to, say, 300, or 5 minutes, and then wait for at least 4 hours. This is to give any caching nameservers enough time to expire my record and request a new one with its new TTL value. Once I have done that, I can change the A record to a new IP, and after 5 minutes that change should have propagated to every nameserver caching my DNS records.
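The timing in that plan is simple arithmetic.  A small sketch using the TTL values from the example:

```shell
# Cutover timing: after lowering the TTL you must wait out the OLD TTL,
# since caches may hold the record (stamped with the old TTL) until then.
old_ttl=14400   # original TTL: 4 hours
new_ttl=300     # temporary TTL: 5 minutes
echo "After lowering the TTL, wait at least $((old_ttl / 3600)) hours."
echo "After the record change, caches refresh within $((new_ttl / 60)) minutes."
```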

SSL and Your Wordpress Site
Kevin P, Wed, 27 Sep 2017 14:13:45 +0000 (https://www.westernmasshosting.com/ssl-and-your-wordpress-site/)

Many people think securing a website with SSL (SSL encryption) is necessary only if they’re selling products or services via their website and collecting credit card or payment information. What many website owners do not realize is that SSL encryption has other very important benefits for small business owners.

To understand the benefits of having an SSL certificate installed for your website, it helps to understand what SSL is and actually does.

What exactly is SSL encryption?

SSL, which stands for Secure Sockets Layer, is an encryption technology that creates a secure connection between your website’s server and your website visitor’s web browser. This allows for information to be protected during transmission between the two.

Without SSL encryption any computer could intercept the transmission from your browser to the server.

This includes the transmission of credit card numbers, usernames and passwords, and other sensitive information.

When your site is secured you’ll see that little green lock in the left corner of your browser’s location bar, followed by the website URL beginning with HTTPS. Data that is sent using HTTPS provides three key layers of protection:

Authentication: Confirms visitors are on the right website, yours, and builds trust.

Data integrity: Customer data cannot be corrupted or modified.

Encryption: Visitor activity cannot be intercepted while browsing your website.

What are my next steps once I get an SSL certificate for my WordPress site?

Something I see on a daily basis is developers, hosts, SEOs, etc. forgetting one major piece after they apply the SSL certificate to a site.

They forget to change the links.  You see, WordPress stores every link, whether it's a URL to another page, an image on a page, or a JavaScript theme file; and it does not change them just because you decide you want everything to be HTTPS.

After applying the SSL certificate, you will need to find every last reference to your site containing http:// and change it to https://

On top of this, if you link in anything from a third party, like an RSS feed or, *gasp*, an iframe from Google, you will also need to update those links; otherwise their content will not show for you, or it will display but your site will be flagged as non-secure for displaying mixed content.
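A quick way to hunt down stragglers is to grep anything you can dump to text, such as a database export or your theme directory, for plain http:// references.  A tiny sketch with inline sample markup:

```shell
# Count hard-coded http:// references; note that https:// lines do not
# match, since the pattern requires "http://" exactly.
printf '%s\n' \
  '<img src="http://example.com/logo.png">' \
  '<a href="https://example.com/about/">About</a>' \
  | grep -c 'http://'
# prints: 1
```

Run the same pattern with grep -r against your theme folder or a mysqldump file to list every link still needing the switch.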


LEMP Commander - Shell Based LEMP Control Panel
Kevin P, Sun, 09 Oct 2016 14:12:19 +0000 (https://www.westernmasshosting.com/lemp-commander-shell-based-lemp-control-panel/)

So… in my quest to create the perfect web server, I stumbled into an issue.

An easy(ish) way to manage it.

There aren’t too many control panels for nGinx that set up the server the way I need it to in order to get the performance and scalability needed for the sites I run.

I initially thought about developing a web-based control panel, and still may eventually; however, due to security concerns with the methods needed to create and manage these sites, I figured it was best left to shell.

So… without any further ado, I will explain what I did and how I did it.   Please keep in mind this is an ongoing w.i.p.

Server Install – Ubuntu 16.04 LTS

First and foremost, we need our OS.  For me, I find Ubuntu extremely stable, so I would highly recommend using it.  I chose Ubuntu Server 16.04 LTS which you can pick up the ISO for over at Ubuntu.

Once you download the ISO, burn it off to a DVD, or use something like unetbootin and create a bootable USB stick.

Pop your device or disk into your PC and boot from it to start the installation.

You can set things up how you wish, just make sure to install only the minimal packages needed for it to run.  I happened to choose OpenSSH and Samba since I am local to my server and need to be able to access everything.  At the very least, you should install OpenSSH so you can shell into the server to manage it.

During the partitioning phase I set up LVM with the following partition scheme.  I would recommend utilizing LVM, if for nothing more than the ability to add storage on the fly.  I have two 256GB SSDs set up in RAID for mirroring, partitioned as follows, with a single partition set aside for boot.

  • /boot – 500MB – Bootable, discard, noatime
  • / – LVM System – 15GB, discard, noatime
  • /home – LVM Home – 208GB, discard, noatime
  • swap – LVM Swap – The amount of ram I have (in this case it is 32GB)

Once the install finishes, reboot the machine, fire up a shell session, and run/configure the following:

  • Set Shell to Bash:
    dpkg-reconfigure dash


  • Turn off Apparmor:
    service apparmor stop && update-rc.d -f apparmor remove && apt-get -y remove apparmor apparmor-utils


  • Configure UFW: 
    ufw allow http
    ufw allow https
    ufw allow ftp
    ufw allow 30000:50000/tcp
    ufw allow 30000:50000/udp
    ufw allow ssh
    ufw enable

Our server is now ready to setup LEMP Commander.

LEMP Commander

This setup step is pretty easy to do, but does require some user intervention through the process.  We’ll need to pay attention 😉 and configure the way we’ll use exim, how we’ll secure MySQL, and how we’ll configure phpMyAdmin… so pay attention! 😉

In shell, make sure you are logged in as a sudo user, via running: sudo -s

Next, make sure you are in your “home” directory, and run:

git clone https://github.com/kpirnie/LEMP-Command.git && cd LEMP-Command

This will download our repository and allow you to keep it up to date with the latest code I will release to it 🙂

Once it is finished downloading you will be in its main directory, so to install it, simply run:


and go grab a coffee or 2.

The installer will first update and upgrade your server.  I have found that this definitely takes the longest and, unfortunately, there is very little that can be done to make it any quicker (other than upgrading your ISP).

Check back every once in a while so you can secure your MySQL install, configure exim, and configure phpMyAdmin as I stated earlier.   Securing MySQL is a simple process: just select Y, put in a username and password combo, and done.   For exim, I run this configuration due to my ISP's restrictions, and for phpMyAdmin I select no webserver, yes to dbconfig-common, and a random password.    Set these up how you see fit.

Once the installer is complete, you will probably see a message that you will need to reboot your machine.  Go ahead and do that now.

Once the machine is restarted your server is officially setup as a highly scalable, highly performant web server.

LEMP Commander Usage

Now that your server is set up, we can let the real fun begin.   It's time to set up a couple of administrative tasks that will help keep your server up to date, malware/virus free, backed up, and running in tip-top shape.

For this step we’ll need to be back in sudo mode, and run

crontab -e

to set the following:

  • 15 0 * * * scanner

    # Performs a nightly virus/malware scan

  • 30 0 * * * nbl-updater

    # Nightly updates the nGinx Ban list according to: http://stopforumspam.com/

  • 30 1 * * * backup

    # Backs up all sites and databases you may have on your server.  As of now, I have it built to auto-remove backups older than 30 days as well

  • */2 * * * * service-up

      # Just a quick check to make sure everything is still running.  If anything is stopped, it will restart it

Please change the times here how you see fit.
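Put together, the crontab from the entries above would read:

```
# nightly virus/malware scan
15 0 * * * scanner
# nightly nGinx ban list update per stopforumspam.com
30 0 * * * nbl-updater
# nightly backup of all sites and databases
30 1 * * * backup
# every 2 minutes, restart any stopped services
*/2 * * * * service-up
```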

  • Create a New Site
    • new-site
    • Follow all prompts
  • Manually Run a Site Backup
    • backup
      • Will backup all sites and databases
      • The backups are placed in the following directory structure:
        • Site: /home/USER/backups/site
        • Database: /home/USER/backups/database
    • backup USER
      • Backs up the specified user's site and databases
      • The backups are placed in the following directory structure:
        • Site: /home/USER/backups/site
        • Database: /home/USER/backups/database
  • Restore a Site Backup
    • restore USER YYYY-MM-DD
      • Restores the specified user's site and databases from the specified date
  • Manually Run an Account Backup
    • account-backup
      • Backs up all users' accounts, including their sites and databases
      • The backups will be placed in the /home directory
    • account-backup USER
      • Backs up the specified user's account, including their site and databases
      • The backup will be placed in the /home directory
  • Terminate an Account
    • terminate USER
      • Runs an ‘account-backup’ for the specified user, then removes the user and all the user's files from the server
  • Manually Scan the Server for Malware/Virii
    • scanner
      • Scans the server for virii or malware
  • Restart Services
    • restart-commander
      • Restarts the following services: memcached, php-fpm, nginx, mysql, exim, & pure-ftpd
  • WP-CLI
    • wp COMMAND
      • Too much to cover here, so head over to: http://wp-cli.org/ to see what it can do for you

In The Works – a.k.a.  COMING SOON

  • Restore Full Account
  • Account Password reset
  • MySQL Master Admin Password Reset
  • Account Suspension
  • Sub-Domain & Parked Domain Support
  • SSL Post-Install Support, and regen
  • Extra Database Creation

Other Scripts I use on My Server


That’s it for now folks, I will update this post as more gets created/fixed for this.  I will leave you with two pieces of advice.

  1. Always keep your servers up to date.  As a rule of thumb, I shell into mine and do this at least once a week.
  2. If you are going to be running a site with any kind of user input, make sure it is up to date and protected against attacks.   As a rule of thumb, the WordPress sites I host get updated nightly.  Since WordPress powers 25% of the websites on the net, it is a lot more susceptible to attacks than most.
High Performance Wordpress and Server - Part II - Theme Development
Kevin P, Fri, 01 Jul 2016 14:10:54 +0000 (https://www.westernmasshosting.com/high-performance-wordpress-and-server-part-ii-theme-development/)

In part I, I walked you through my server setup to achieve a 1 second load time for my site.  It is a WordPress site, with a custom theme I developed.

I looked at the possibility of bypassing WordPress's front-end engine; however, I found myself needing some of the built-in functionality WordPress offers.  Items like custom posts, pages, and even posts are simple SQL queries; however, widgets, shortcodes, and most plugins then become unavailable.

So, I delved into the realm of research and found that WordPress core offered the functionality I required with very little performance hit, so I decided to simply extend it with some Memcached functionality when pulling my pages/posts/widgets/etc.

The only thing I lost was development time; in the end I drastically improved the load time of my site, as well as drastically increasing the number of concurrent connections it could handle.

I managed to keep this 1 second load time with 250 concurrent users per minute.  Of course, load time increased as the numbers grew, and in the end I found that my server/site setup was able to effectively handle 1238 concurrent connections per minute before I hit the 7 second load time mark.    At this point I called it a viable project, implemented my code… here we are 🙂

Now I won’t get into the details of theme design and development here, however, have a look over at the codices for the how-to’s you will need to read through to go further here.

Once you get the basics down pat and your site is running how you like, add a little more code to force your templates to utilize memcached and cache your theme pages in server memory for lightning fast transfer and rendering.

At the top of each of my theme files, I have put the following code:

// Fire up an instance of Memcached
$mc = new Memcached();
// If it's not already done, set the server (127.0.0.1 is an assumption; use your Memcached host)
$mc->addServer('127.0.0.1', 11211);
// Start our output buffer
ob_start();
// Load up our sidebar
get_sidebar();
// Grab the content from it
$sidebar = ob_get_contents();
// End our buffer so we can process the rest of the page
ob_end_clean();

As you can see above, this is far from complete.  You will want to add in anything that pre-fires/pre-loads after the start of our output buffer.  This way we can catch it and cache it before it renders.

The rest is basically up to you, as the rest of the code is pretty basic… check for the cached object, if it’s there, present it, if not present the page and cache the object.

For instance:

$ret = '';
$m = $mc->get('YOURKEY');
if ($m) {
	echo $m;
} else {
	$ret = 'Your stuff here...';
	$mc->set('YOURKEY', $ret);
	echo $ret;
}

You will want to name your KEY accordingly, for instance, for my posts, the key name is set to “post_the-post-title”.   This way Memcached knows what to present, and there will be no collisions.
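As a sketch of that naming scheme (this mirrors the “post_the-post-title” example; the transform itself is my illustration, not code from the theme):

```shell
# Derive a Memcached key like "post_the-post-title" from a post title.
title='The Post Title'
key="post_$(printf '%s' "$title" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')"
echo "$key"
# prints: post_the-post-title
```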

Of course, if you do not follow the basics of web design, this may be a moot point.  Make sure you are concatenating and minifying where you can, optimizing all your images for the best results, and setting proper browser caching/gzipping on all static resources.

Now, as much as I hate to admit it, there are a couple of plugins I am going to go out on a limb and recommend to you all.  All are caching related, and all are the ones I have used extensively with great success.


Some of the configuration options will depend on your setup, however, if you followed along with Part 1, and set your server up like that, you should be good 😉

  • Nginx Cache
    • Plugin Homepage: https://wordpress.org/plugins/nginx-cache/
    • Settings:  Change the cache path to your location, and make sure the “Automatic flush” is checked off
  • WP Super Cache
    • Plugin Homepage: https://wordpress.org/plugins/wp-super-cache/
    • Settings:
      • Easy Tab: Turn on Caching
      • Rest of settings defaults are fine, though I did up the timeout to 3600 for everything
  • WP-FFPC
    • Plugin Homepage: https://wordpress.org/plugins/wp-ffpc/
    • Settings:
      • Cache Type Tab:
        • Select backend: PHP Memcached
        • Timeouts: 3600
      • Backend Settings Tab:
        • Hosts: add your memcached server IP address and port; ie…
          • accepts a comma-delimited list of servers…
  • Notable Mentions 🙂
    • WP Smush – helps reduce image file sizes
    • WP Clean Up – keeps your database clean and optimized
    • Lazy Load – hooks into your images and only loads them into the page when they enter the viewport
    • WP Performance Score Booster – removes the versioning querystring on static resources
High Performance Wordpress and Server - Part II - Theme Development
<![CDATA[High Performance Wordpress and Server - Part I - Server Setup]]> https://www.westernmasshosting.com/high-performance-wordpress-and-server-part-i-server-setup/ Fri, 10 Jun 2016 14:08:49 +0000 Kevin P http://www.westernmasshosting.com/?p=2990 I have successfully managed to get under a 1 second load time…

I have successfully managed to get under a 1-second load time on my WordPress site while sustaining 250 concurrent users over a 1-minute test period.  (Source: https://gtmetrix.com/reports/www.westernmasshosting.com/I858GlQs & https://loader.io/tests/f3cb1673bbecf7176954d39be612f838)

This was done with a combination of items, stemming from the server install up to WordPress theme development.  Here is how I did it, so maybe you can too.

Server Setup

Here we will start from the ground up.  Items you will need: VirtualBox, the Ubuntu 16.04 64-bit Server ISO, and time.

My virtual machine is set up with 4G of RAM, 2 CPUs, an 80G SSD, and a bridged networking adapter.

Boot to the ISO and start the installation process.  Everything can be set up how you wish; however, I custom partitioned, and only installed the “standard system utilities” and OpenSSH during the install process.

During partitioning (with the size above), make sure to select Manual, and set up the 4 partitions I lay out below.

Since we are creating tmp, cache, and swap partitions, make sure to reserve at least three times the amount of RAM you have: the tmp and cache partitions each match RAM, and swap needs at least as much again.  With my 4G of RAM I need to reserve at least 12G of disk, but I am going to reserve 16G because I want my swap partition to be twice the amount of RAM.
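The sizing rule works out as follows (a quick sketch using this guide's 4G of RAM):

```shell
# Sketch of the sizing rule: tmp and cache match RAM, swap is twice RAM
ram_gb=4
tmp_gb=$ram_gb
cache_gb=$ram_gb
swap_gb=$((ram_gb * 2))
echo "reserve at least $((tmp_gb + cache_gb + swap_gb))G beyond the root partition"
```

Plug in your own RAM size to get your reservation before you start partitioning.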


  • 1st Partition:
    • Mount Point: /
    • 69.9G (officially my drive was 85.9G)
    • Primary – Beginning
    • Mount Options: discard
    • Reserved blocks: 1%
    • Typical Usage: news
    • Bootable: on
  • 2nd Partition:
    • Mount Point: /tmp
    • 4G
    • Primary – Beginning
    • Mount Options: discard, noatime, nodiratime
    • Reserved blocks: 1%
    • Typical Usage: news
    • Bootable: on
  • 3rd Partition:
    • Mount Point: /cache (probably will have to create this manually)
    • 4G
    • Primary – Beginning
    • Mount Options: discard, noatime, nodiratime
    • Reserved blocks: 1%
    • Typical Usage: news
    • Bootable: on
  • 4th Partition:
    • Use as: swap
    • 8G


Now finish up your install process and let the machine reboot.  Once it boots, log in to the machine, drop into a sudo session using sudo -s, and let the “fun” begin 🙂

We are going to configure our server to use bash only, set up the default system controls, install our software, and configure it… so be prepared to have your time sucked up 😉

Use bash only: run dpkg-reconfigure dash and answer “No” so that /bin/sh points to bash instead of dash

Now, we’ll remove apparmor since we’ll be using ufw as our firewall

service apparmor stop
update-rc.d -f apparmor remove
apt-get remove apparmor apparmor-utils

Speaking of firewall, we can set that up now too

ufw allow http
ufw allow https
ufw allow ssh
ufw enable

This will allow only web and SSH connections to the server.  Feel free to allow anything else you deem necessary.

Now we’ll modify the kernel’s sysctl settings to allow a ton of connections, allow a ton of files to be open, and tune our networking and swap configuration.

First run rm -f /etc/sysctl.conf  then nano /etc/sysctl.conf  and paste in the following:


# for /etc/sysctl.conf
# Protection from SYN flood attack.
net.ipv4.tcp_syncookies = 1
# See evil packets in your logs.
net.ipv4.conf.all.log_martians = 1
# Discourage Linux from swapping idle server processes to disk (default = 60)
vm.swappiness = 45
# Increase number of incoming connections that can queue up before dropping
net.core.somaxconn = 50000
# Handle SYN floods and large numbers of valid HTTPS connections
net.ipv4.tcp_max_syn_backlog = 30000
# Increase the length of the network device input queue
net.core.netdev_max_backlog = 5000
# Increase system file descriptor limit so we will (probably) never run out under lots of concurrent requests. (Per-process limit is set in /etc/security/limits.conf)
fs.file-max = 100000
# Widen the port range used for outgoing connections
net.ipv4.ip_local_port_range = 10000 65000
# If your servers talk UDP, also up these limits
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192
# Disable source routing and redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
# Disable packet forwarding.
net.ipv4.ip_forward = 0
net.ipv6.conf.all.forwarding = 0
# Disable TCP slow start on idle connections
net.ipv4.tcp_slow_start_after_idle = 0
# Increase Linux autotuning TCP buffer limits.
# Set max to 16MB for 1GE, and 32M (33554432) or 54M (56623104) for 10GE.
# Don't set tcp_mem itself! Let the kernel scale it based on RAM.
net.core.rmem_max = 56623104
net.core.wmem_max = 56623104
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Disconnect dead TCP connections after 1 minute
net.ipv4.tcp_keepalive_time = 60
# Wait a maximum of 5 * 2 = 10 seconds in the TIME_WAIT state after a FIN, to handle any remaining packets in the network.
#net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 5
# Allow a high number of timewait sockets
net.ipv4.tcp_max_tw_buckets = 2000000
# Timeout broken connections faster (amount of time to wait for FIN)
net.ipv4.tcp_fin_timeout = 10
# Let the networking stack reuse TIME_WAIT connections when it thinks it's safe to do so
net.ipv4.tcp_tw_reuse = 1
# Determines the wait time between isAlive interval probes (reduce from 75 sec to 15)
net.ipv4.tcp_keepalive_intvl = 15
# Determines the number of probes before timing out (reduce from 9 to 5)
net.ipv4.tcp_keepalive_probes = 5


After you save the file, run the following to load in the configuration: sysctl -p
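You can spot-check that the values took effect by reading them back from /proc (a quick sketch; readable without root, and the values will only match your sysctl.conf after sysctl -p has run):

```shell
# Spot-check a few sysctl values by reading /proc/sys directly
for key in vm/swappiness net/core/somaxconn net/ipv4/tcp_fin_timeout; do
    printf '%s = %s\n' "$key" "$(cat /proc/sys/$key)"
done
```

If a value did not change, look for a typo on that line in /etc/sysctl.conf.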

Now add the following lines to your limits file: nano /etc/security/limits.conf

# add to bottom
* hard nofile 500000
* soft nofile 500000
root hard nofile 500000
root soft nofile 500000
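These limits apply per login session, so after logging back in, a quick check (sketch) confirms they took:

```shell
# Show the current per-process open-file limits for this shell
echo "soft nofile: $(ulimit -Sn)"
echo "hard nofile: $(ulimit -Hn)"
```

If the numbers did not change, make sure pam_limits is enabled for your login path.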

Now we’ll update the server: apt-get update && apt-get -y upgrade && apt-get -y dist-upgrade && apt-get autoclean && apt-get -y autoremove

Once again, you will need to reboot.

Once you have rebooted, let’s forge ahead and install our web/database software:

Nginx & Memcached (and a couple of extra helpers)

apt-get -y install nginx-full memcached zip lzop

PHP 7.0 FPM and extensions

apt-get -y install php7.0-fpm php7.0-curl php7.0-gd php7.0-intl php7.0-mysql php7.0-json php7.0-sqlite3 php7.0-opcache php-memcached php-pear php7.0-mbstring php7.0-cli

MySQL server

apt-get -y install mysql-server

We are done installing! 😀   Now the real fun begins… configuration.  Since I host a multitude of sites on my server, I set up a directory structure like the following for both site configurations and files.  But you can do what you want… just remember to change your paths in the config files I post here, otherwise it will not work for you.

  • /hosting
    • /hosting/DOMAINS
      • /hosting/DOMAINS/the.domain.com
        • /hosting/DOMAINS/the.domain.com/www
        • /hosting/DOMAINS/the.domain.com/fpm-pools
    • /hosting/nginx-config
    • /hosting/site-config

I remove the default nginx config, opting for my own, so do a: rm -f /etc/nginx/nginx.conf && echo "include /hosting/nginx-config/nginx.conf;" > /etc/nginx/nginx.conf

Now download the following and unzip it to your /hosting/nginx-config/ directory: DOWNLOAD HERE

You are now ready to set up your first site.  Run this, changing the domain to the domain of your need: mkdir -p /hosting/DOMAINS/example.com/www && echo "<h1>Hello World</h1>" > /hosting/DOMAINS/example.com/www/index.php

We also need to configure the running of your site through FPM and nginx.  Copy and paste the following to /hosting/site-config/example.com, making sure to change the paths to fit your needs, as well as the domain.

upstream your-fpm-lb {
	# PHP-FPM - make sure the ports you decide to use are open.
	# You should have at least 2 pools available to you, but really no more than 4 is necessary.
	# The ports below are examples; match them to the listen lines in your FPM pool files.
	server 127.0.0.1:9001;
	server 127.0.0.1:9002;
	server 127.0.0.1:9003;
	server 127.0.0.1:9004;
}

# Redirects
server {
	# what we want to redirect
	server_name www.example.com;
	# where we want to redirect to
	return 301 http://example.com$request_uri;
}

server {
	# the document root of your site
	root /hosting/DOMAINS/example.com/www;
	# the default page for your site
	index index.php;
	# the main fqdn of the site
	server_name example.com;
	# your access log location.  I leave this commented for performance
	#access_log /logs/example.com-access.log;
	# your error log location, I only enable critical errors to log - performance
	error_log /logs/example.com-error.log crit;
	# let’s set up the PHP-FPM processor
	location ~ [^/]\.php(/|$) {
		# let’s turn on the keep alive
		fastcgi_keep_conn on;
		# include our default fastcgi configuration: see file for details
		include /hosting/nginx-config/site-fastcgi-common.conf;
		# set to your fpm upstream above
		fastcgi_pass your-fpm-lb;
	}
	# Configure memcached to be usable. See file for details
	include /hosting/nginx-config/memcache-enabled.conf;
	# Configure caching. See file for details
	include /hosting/nginx-config/yes-cache.conf;
	# Configure no caching. See file for details
	#include /hosting/nginx-config/no-cache.conf;
	# Configure default site settings, required.  See file for details
	include /hosting/nginx-config/all-sites.conf;
	# Configure gzipping of static resources.  See file for details
	include /hosting/nginx-config/gzip.conf;
	# Configure some extra security for WordPress sites.  See file for details.
	include /hosting/nginx-config/wp-security.conf;
}

One last bit of configuration.  We need to setup our fpm pools.  Since we configured 4 upstream connections in our site config, we need to configure 4 pools

Create a new user for these pools to run under

adduser example-user

Copy and paste the following into 4 files located in /hosting/DOMAINS/example.com/fpm-pools

; Start a new pool
; make sure to update the number in each of the 4 files (e.g. [pool1] through [pool4])
[pool1]
; what user should this pool run as
user = example-user
; keep this www-data so nginx can serve the site
group = www-data
; change this to reflect one of the ports in the upstream block of your site config
listen =
; We don’t need to have too high of a task priority
process.priority = 0

; fpm process management
pm = dynamic
pm.max_children = 200
pm.start_servers = 20
pm.min_spare_servers = 20
pm.max_spare_servers = 60
pm.max_requests = 500
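Since the 4 pool files differ only in the pool name and listen port, you can stamp them out with a loop; in this sketch the names pool1-pool4 and ports 9001-9004 are placeholder choices that must match the ports you put in your upstream block:

```shell
# Sketch: generate 4 FPM pool files from the template; names/ports are placeholders
pooldir=$(mktemp -d)   # in the article this would be /hosting/DOMAINS/example.com/fpm-pools
for i in 1 2 3 4; do
    cat > "$pooldir/pool$i.conf" <<EOF
[pool$i]
user = example-user
group = www-data
listen = 127.0.0.1:$((9000 + i))
process.priority = 0
pm = dynamic
pm.max_children = 200
pm.start_servers = 20
pm.min_spare_servers = 20
pm.max_spare_servers = 60
pm.max_requests = 500
EOF
done
ls "$pooldir"
grep -h '^listen' "$pooldir"/pool*.conf
```

Each generated file is a complete pool definition, so editing one value later only touches one file.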

We can now start our engines 🙂   Run nginx -t to make sure you haven’t messed up anything 😉 then run the following to restart all of your services and start hosting your site.

/etc/init.d/memcached restart
/etc/init.d/php7.0-fpm restart
/etc/init.d/nginx restart

Now that we have the “basics” out of the way, let’s head into configuring MySQL to handle the loads we are going to place on it.  Copy/paste the following into your mysqld.cnf (usually located in /etc/mysql/mysql.conf.d/)


# The MySQL database server configuration file.
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# especially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.

# Here are entries for some specific programs
# The following values assume you have at least 32M ram

socket		= /var/run/mysqld/mysqld.sock
nice		= 0

# * Basic Settings
user		= mysql
pid-file	= /var/run/mysqld/mysqld.pid
socket		= /var/run/mysqld/mysqld.sock
port		= 3306
basedir		= /usr
datadir		= /var/lib/mysql
tmpdir		= /tmp
lc-messages-dir	= /usr/share/mysql
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address		=
# * Fine Tuning
key_buffer_size = 128M 
max_allowed_packet = 16M 
thread_stack = 128K 
thread_cache_size = 8 
table_open_cache = 8192 
max_heap_table_size = 256M 
innodb_buffer_pool_size = 4G
myisam-recover-options = BACKUP 
innodb_log_file_size = 512M

#table_cache            = 64
#thread_concurrency     = 10
# * Query Cache Configuration
query_cache_limit	= 4M
query_cache_size        = 1024M
# * Logging and Replication
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1
# Error log - should be very few entries.
log_error = /var/log/mysql/error.log
# Here you can see queries with especially long duration
#log_slow_queries	= /var/log/mysql/mysql-slow.log
#long_query_time = 2
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
#server-id		= 1
#log_bin			= /var/log/mysql/mysql-bin.log
expire_logs_days	= 10
max_binlog_size   = 100M
#binlog_do_db		= include_database_name
#binlog_ignore_db	= include_database_name
# * InnoDB
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
# * Security Features
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem


For the most part we are now done and ready to start serving up scalable, efficient, and fast-loading WordPress websites.  The config above alone is not enough to ensure high availability, though.  There is a lot more work to be done, mostly with theme development.

Just like any other web application, proper development goes a long way.  Design does as well: if your app’s design isn’t optimized, you will still create an unnecessary load on the server.  Always optimize your images, and always set their dimensions when you call them.  Make sure you concatenate and minify your CSS and JavaScript where you can, and load them appropriately in your HTML (CSS in the head; keep as much JavaScript as possible at the bottom of your documents).

That’s it for now, stay tuned for the rest in this series.  And as always…   Happy Coding 🙂


<![CDATA[Android/Kernel Tweaking ~ Team-DomPop Style]]> https://www.westernmasshosting.com/android-kernel-tweaking-team-dompop-style/ Mon, 07 Sep 2015 14:07:09 +0000 Kevin P http://www.westernmasshosting.com/?p=2988 I wanted to make sure I got this here, before I forgot…

I wanted to make sure I got this here before I forgot what I did to make these awesome tweaks.  They should be pretty universal, so long as you can make the edits to the kernel’s ramdisk (see your kernel provider for permission and details).  I’m going to break this up into sections to make it easier to read.


My goals for this were pretty simple.

  1. Low Level Configuration
  2. Faster Boot Time
  3. Device Performance Enhancement
  4. Boot Process Specific Firing of Scripts



Once you have your kernel torn apart, open the file default.prop and add these tweaks in.  Normally you would put these in your device’s build.prop file; however, I want things my way, so…

# kevp75 Default Tweaks
# Rendering Tweaks

# Enable GPU Acceleration

# Saves some battery without reducing performances

# Battery save 

# Misc Tweaks (enables ADB service)

# Sensor Sleep Control

# Device will boot faster

# Reduce dial-out time

# Allow purge of assets to free more ram

# More free ram and apps load faster

# More RAM

# Increase general Performance

# Increase some Performance

# Reduce the black screen time of the proximity sensor


# Better Scrolling responsiveness and speed

# Smoothens UI

# Increase camera's photo and video recording quality

# Better Flashlight intensity and camera-flash quality

# Increase the volume steps in-call

# Better call voice quality.

# Force launcher into memory

# Disable Bytecode Verification

# Improves Camera & Video Results

# Increase jpg quality to 100%

# Disable Error reporting and logs

# Disable Sending Usage Data

# Fix some application issues (FC)

# MMS APN retry timer set to 2 sec( if SMS/MMS couldn`t be sent, it retries after 2 instead of 5 seconds)
ro.gsm.2nd_data_retry_config=max_retries=3, 2000, 2000, 2000

# Miscellaneous Tweaks for performance

# Better internet browsing & download speed
net.tcp.buffersize.default=4096,87380,256960,4096,16384,256960
net.tcp.buffersize.wifi=4096,87380,256960,4096,16384,256960
net.tcp.buffersize.umts=4096,87380,256960,4096,16384,256960
net.tcp.buffersize.gprs=4096,87380,256960,4096,16384,256960
net.tcp.buffersize.edge=4096,87380,256960,4096,16384,256960
net.tcp.buffersize.hspa=6144,87380,524288,6144,16384,262144
net.tcp.buffersize.lte=524288,1048576,2097152,524288,1048576,2097152
net.tcp.buffersize.hsdpa=6144,87380,1048576,6144,87380,1048576
net.tcp.buffersize.evdo_b=6144,87380,1048576,6144,87380,1048576

# Smoother video streaming and tweak media

# 3G signal and speed tweaks



# Support For IPV4 and IPV6


# Wireless Tweaks
net.ipv4.tcp_mem=187000 187000 187000
net.ipv4.tcp_rmem=4096 39000 187000
net.ipv4.tcp_wmem=4096 39000 187000

# Video Acceleration Enabled And HW debugging
# Measure rendering time in adb shell dumpsys gfxinfo
debug.egl.profiler=1
# Disable hardware overlays and use GPU for screen compositing
debug.composition.type=gpu

# Disable logcat

# Better image quality, lower performance.


# Flag Tuner

# MultiTasking Tweaks

#Disable Scrolling Cache For Faster Scrolling

### ViPER4Android



Here comes the fun stuff.  This is where things can get really screwy, so be very careful what you do in here.  Open the file init.rc and add in the sections I specify below, exactly where I specify them…

import /init.dp.rc # Add this line at the top of the file, right under the last existing import
on early-init
    # Overclock just a tad during boot; also set the min scaling frequency and governor
    # I use Lean Kernel as my base, which allows the overclocking
    write /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq 2726400
    write /sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq 2726400
    write /sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq 2726400
    write /sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq 2726400
    write /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq 2496000
    write /sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq 2496000
    write /sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq 2496000
    write /sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq 2496000
    write /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor "performance"
    write /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor "performance"
    write /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor "performance"
    write /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor "performance"

    # At bottom of this put
    setprop dp.early_init.done 1
on init

    # At bottom put this:
    setprop dp.init.done 1
on late-init

    # At the bottom put this line
    setprop dp.late_init.done 1
on post-fs

    # At bottom
    setprop dp.post_fs.done 1
on post-fs-data

    # At bottom
    setprop dp.post_fs_data.done 1
on boot

    # Above the line class_start core put
    setprop dp.boot.done 1

What do these all do?  Well, right now all they do is create a new property at each stage of the kernel’s boot process.  Please google the Android boot process if you need more information.


This is a new file, init.dp.rc (which we imported in init.rc above).  Create it now, and put in the following code:

# Check Properties
on property:dp.early_init.done=1
	start dp_early_init
	start dpu_early_init

on property:dp.init.done=1
	start dp_init
	start dpu_init

on property:dp.late_init.done=1
	start dp_late_init
	start dpu_late_init

on property:dp.post_fs.done=1
	start dp_post_fs
	start dpu_post_fs

on property:dp.post_fs_data.done=1
	start dp_post_fs_data
	start dpu_post_fs_data

on property:dp.boot.done=1
	start dp_boot
	start dpu_boot

on property:sys.boot_completed=1
    start dp_post_boot
    start dpu_post_boot

# DP User Services
service dpu_early_init /sbin/bash /data/dp_scripts/onearlyinit.sh
	user root

service dpu_init /sbin/bash /data/dp_scripts/oninit.sh
	user root

service dpu_late_init /sbin/bash /data/dp_scripts/onlateinit.sh
	user root

service dpu_post_fs /sbin/bash /data/dp_scripts/onpostfs.sh
	user root
service dpu_post_fs_data /sbin/bash /data/dp_scripts/onpostfsdata.sh
	user root
service dpu_boot /sbin/bash /data/dp_scripts/onboot.sh
	user root
service dpu_post_boot /sbin/bash /data/dp_scripts/onpostboot.sh
	class late_start
	user root

# DP Services
service dp_early_init /sbin/bash /sbin/0/onearlyinit.sh
	user root

service dp_init /sbin/bash /sbin/0/oninit.sh
	user root

service dp_late_init /sbin/bash /sbin/0/onlateinit.sh
	user root

service dp_post_fs /sbin/bash /sbin/0/onpostfs.sh
	user root
service dp_post_fs_data /sbin/bash /sbin/0/onpostfsdata.sh
	user root
service dp_boot /sbin/bash /sbin/0/onboot.sh
	user root
service dp_post_boot /sbin/bash /sbin/0/onpostboot.sh
	class late_start
	user root

This will check for each property set during the boot process and start the services specified.  As you can see, I have 2 sets of services.  One set sits in the /sbin/0/ folder of the ramdisk; those scripts are the ones that will configure our kernel.  The second set I added to give users a level of tweaking outside the normal init.d scripts.  These all fire at the specified points in the kernel boot process.  Rather than posting the files, grab the attached zip file and see for yourself what I have done 🙂   I have also included the files above for those that cannot follow directions 😛
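The trigger stanzas in that file follow one fixed pattern per boot stage, so they can be generated rather than typed; a sketch (the stage list matches the setprop names added to init.rc, and the final sys.boot_completed trigger is written separately):

```shell
# Sketch: generate the per-stage trigger blocks for init.dp.rc
for stage in early_init init late_init post_fs post_fs_data boot; do
    printf 'on property:dp.%s.done=1\n\tstart dp_%s\n\tstart dpu_%s\n\n' \
        "$stage" "$stage" "$stage"
done
```

Generating the blocks keeps the stage names in the triggers and the service definitions from drifting apart.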

MEGA: Download Now

~ Happy Tweaking!

<![CDATA[Cpanel Nginx Install and Configuration]]> https://www.westernmasshosting.com/cpanel-nginx-install-and-configuration/ Thu, 04 Jun 2015 14:05:51 +0000 Kevin P http://www.westernmasshosting.com/?p=2986 Install & Configure Nginx on Existing Cpanel Servers cd /usr/local/src wget http://nginxcp.com/latest/nginxadmin.tar…

Install & Configure Nginx on Existing Cpanel Servers
cd /usr/local/src
wget http://nginxcp.com/latest/nginxadmin.tar
tar xf nginxadmin.tar
cd publicnginx
./nginxinstaller install
  • Once installation completes, log in to WHM for that server
  • Scroll past ConfigServer Security & Firewall to see Nginx Admin and click it
  • Add 0 */1 * * * /usr/sbin/tmpwatch -am 1 /tmp/nginx_client to the root crontab (crontab -e) on the server
  • Click ‘Configuration Editor’
  • Copy/Paste into the field
user nobody;
# no need for more workers in the proxy mode
worker_processes 4;
error_log /var/log/nginx/error.log warn;
worker_rlimit_nofile 20480;
events {
worker_connections 5120; # increase for busier servers
use epoll; # you should use epoll here for Linux kernels 2.6.x
}
http {
server_name_in_redirect off;
server_names_hash_max_size 10240;
server_names_hash_bucket_size 1024;
include mime.types;
default_type application/octet-stream;
server_tokens off;
# remove/comment out disable_symlinks if_not_owner; if you get a Permission denied error
# disable_symlinks if_not_owner;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 5;
gzip on;
gzip_vary on;
gzip_disable "MSIE [1-6].";
gzip_proxied any;
gzip_http_version 1.0;
gzip_min_length 1000;
gzip_comp_level 6;
gzip_buffers 16 8k;
# You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU
gzip_types text/plain text/xml text/css application/x-javascript application/xml application/javascript application/xml+rss text/javascript application/atom+xml;
ignore_invalid_headers on;
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
reset_timedout_connection on;
connection_pool_size 256;
client_header_buffer_size 256k;
large_client_header_buffers 4 256k;
client_max_body_size 200M; 
client_body_buffer_size 128k;
request_pool_size 32k;
output_buffers 4 32k;
postpone_output 1460;
proxy_temp_path /tmp/nginx_proxy/;
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:15m inactive=24h max_size=500m;
client_body_in_file_only on;
log_format bytes_log "$msec $bytes_sent .";
log_format custom_microcache '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" nocache:$no_cache';
include "/etc/nginx/vhosts/*";
}
  • Click ‘Rebuild Vhosts’
  • Click ‘Restart Nginx’
  • Click ‘Nginx Admin’ and you should now see it reporting as UP

Every new account on the server will need to have the vhosts rebuilt and Nginx restarted (to be safe).
