AWS Scripts: [Python] Removing Invalid Characters in Object Names in Your S3 Bucket

Hey All!

This is my first post on this website, so I thought I should make it a good one. I was working with my S3 bucket the other day from my Mac laptop and uploaded a few files without giving it any thought. When I got home and jumped on my PC to download these files, it turned out I couldn't, even though my permissions were correct.

The error presented was "IOError: [Errno 22] invalid mode ('wb') or filename: 'C:\\Users\\Dustin\Documents\Logs-12:22:13PM.txt'"

Naturally, I googled why Errno 22 would be returned, and it turns out the colon character is not allowed in filenames on a Windows machine. Who would have thought?!

Even though I only had a few files with colons in the filename, I thought it might be helpful to create a Python script using the Boto3 SDK to remove the colons so that I could actually work with the files on my PC.

Below I’ve added the file with the appropriate comments explaining what it does.  I hope you get as much, if not more, use out of it as I have.

“rename-bad-objects-windows.py”
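Since the embedded file doesn't render here, below is a rough sketch of what the script does rather than the exact file (which lives in the repository linked below): list the objects, copy any key containing a colon to a sanitized key, then delete the original, since S3 has no rename. The bucket name and replacement character are placeholders you would change.

# Sketch of rename-bad-objects-windows.py -- the real file is in the repository linked below.
import boto3

BUCKET = "my-bucket"   # placeholder bucket name
BAD_CHAR = ":"         # Windows will not accept this character in a filename
REPLACEMENT = "-"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if BAD_CHAR not in key:
            continue
        new_key = key.replace(BAD_CHAR, REPLACEMENT)
        # S3 has no rename: copy to the sanitized key, then delete the original object.
        s3.copy_object(
            Bucket=BUCKET,
            CopySource={"Bucket": BUCKET, "Key": key},
            Key=new_key,
        )
        s3.delete_object(Bucket=BUCKET, Key=key)
        print("renamed {} -> {}".format(key, new_key))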

 

Detayl’s Repository 

Enjoy!

Rename Bad Objects in S3 for Future Download.

AWS Guides: Speed Up or Optimize WordPress in the Cloud

Hey guys, below is a tutorial on how to speed up your WordPress installation in the cloud. This tutorial will cover an installation of WordPress on a Linux LAMP stack on an EC2 instance in the AWS cloud.

First things first: depending on your instance size and specs, you will need to run some calculations on how much load your server can actually take. Most users miss an important step with the default Apache installation on their server or instance in the cloud. The biggest misunderstanding is that Apache will work perfectly out of the box with any server size. This is wrong. The default Apache settings require quite a bit of server resources, which you likely will not have. The fix is a calculation that uses your available RAM to work out Apache's prefork MPM settings. The calculation comes out something like this: (Total Memory – Critical Services Memory) / Size Per Apache Process.

The idea of making this calculation is to ensure that Apache leaves enough memory for your system to use and not starve other processes such as MySQL.

From httpd.conf:

prefork MPM

  • StartServers: number of server processes to start
  • MinSpareServers: minimum number of server processes which are kept spare
  • MaxSpareServers: maximum number of server processes which are kept spare
  • ServerLimit: maximum value for MaxClients for the lifetime of the server
  • MaxClients: maximum number of server processes allowed to start
  • MaxRequestsPerChild: maximum number of requests a server process serves

Now the best way to tweak this is to understand how much memory each Apache process is currently using and take that number into consideration. For example, say you have the following configuration:

<IfModule prefork.c>
 StartServers         10
 MinSpareServers      10
 MaxSpareServers      10
 MaxClients           10
 MaxRequestsPerChild  4000
</IfModule>

The above configuration starts 10 processes (StartServers) and allows a maximum of 10 to serve your clients (MaxClients), so in this case MinSpareServers and MaxSpareServers should never really come into play. After a single process serves 4000 requests it is terminated and another is spawned to replace it, per the MaxRequestsPerChild directive.

So assume you have 1.7GB of memory to work with, which is in line with an m1.small instance on EC2, and say the average httpd process uses 60MB. Allowing 200MB for the O/S and other services (more if running MySQL), you have 1.5GB; leave about 10% of that spare and you get about 23 processes max (the arithmetic is sketched in the short snippet after the config below). So let's try:

<IfModule prefork.c>
 StartServers         5
 MinSpareServers      2
 MaxSpareServers      5
 MaxClients           23
 MaxRequestsPerChild  2000
</IfModule>
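If you want to redo this arithmetic for your own instance, here is a minimal Python sketch of the calculation above; the numbers are the example values from this post, so swap in your own measurements for total memory, reserved memory, and average httpd process size.

# Sketch of the MaxClients calculation:
# (Total Memory - Critical Services Memory) / Size Per Apache Process, with ~10% headroom.
# All numbers below are the example values from this post.
total_mb = 1700        # m1.small has roughly 1.7GB of memory
reserved_mb = 200      # O/S and other services (more if MySQL runs locally)
per_process_mb = 60    # average resident size of one httpd process
headroom = 0.10        # leave about 10% spare

usable_mb = (total_mb - reserved_mb) * (1 - headroom)
max_clients = int(usable_mb // per_process_mb)
print(max_clients)     # ~22-23 with these numbers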

These optimizations are necessary if you want a smooth-running server that can endure a large traffic spike. I also suggest taking a look at MySQL optimization if you are running a local SQL server rather than a remote one like RDS. I do not have specific steps detailed here, but I have provided a resource to take a look at.


Next I would like to look at empowering WordPress to use a CDN in the cloud, such as AWS CloudFront or any other service you choose. For the purposes of this tutorial I used CloudFront. We can do this with a WordPress plugin called W3 Total Cache. Go ahead and install this plugin on your WordPress installation and take a look at the CDN options. To set up W3 Total Cache to use CloudFront, select "Performance," then "General Settings," scroll down to the CDN section, check the enable box, and switch the drop-down menu to Amazon CloudFront under Origin Pull, then press save. Once saved, go to "Performance," then "CDN," and add your AWS Access Key and Secret Key, which can be found here.

[Screenshot: W3 Total Cache CDN settings]

 

You can either press "Create Distribution" from the plugin to create your CloudFront distribution or do it manually via the AWS Console. Please be patient; your CloudFront distribution will take anywhere from 15-30 minutes to create on AWS. Once finished, you can save and test your settings. You should then see the page source change in any browser to include your custom CNAME or the distribution ID.

I do recommend that you use CNAMEs, and at least 2-3 of them, because giving your browser different CNAMEs to point at opens up parallel download connections and reduces overall page rendering time.

Using a CDN is only the first step; there are many other options in the plugin to increase speed. You should also enable the page cache, database caching, and minify options. Please read about each option in the support tab of W3 Total Cache at "domain.com/wp-admin/admin.php?page=w3tc_faq." I may add to this tutorial in the future, but this should give you a head start. Also remember to test your performance with Pingdom afterwards.
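If you want a quick before-and-after number without leaving the terminal, a tiny Python timing check like the one below works too; it only measures the HTML document itself, so Pingdom or your browser's dev tools will give a much fuller picture. The URL is a placeholder.

# Quick timing check of the front page -- run before and after enabling the CDN.
import time
import urllib.request

URL = "https://yourdomain.com/"   # placeholder: your WordPress site

start = time.time()
with urllib.request.urlopen(URL) as resp:
    body = resp.read()
elapsed = time.time() - start

print("{}: {} bytes in {:.2f}s".format(URL, len(body), elapsed))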

AWS Guides: How to increase your EC2 Linux root volume size

This guide applies to increasing the root volume size of an EBS-backed EC2 Linux instance on AWS. By default most Linux instances come with an 8gb root volume unless you changed it at first launch. If you are one of the people who forgot to do this, or you simply need to extend the volume, take a look at this guide. Be sure to also check out my other guide on how to increase the size of a Windows EBS volume.

I started out with an Amazon Linux instance and an 8gb volume. First, navigate to your AWS Console, click EC2, and then select Volumes in the left panel. Find the volume that your instance is attached to, right-click, and select Create Snapshot.

[Screenshot: Create Snapshot]

A new window will pop up and you can fill in a name and description and then select ‘Yes, Create’.

[Screenshot: snapshot name and description dialog]

Once your snapshot starts creating, navigate over to the Snapshots section of the EC2 Console in the left side panel and look for the snapshot you just created, under the name you gave it. It may take a while for the snapshot process to complete.

[Screenshots: snapshot pending and snapshot completed]

Once the snapshot is complete, right-click on it and select 'Create Volume'. Pay attention here, because this is where you specify the new, larger volume size; for this example I chose 100gb. Please also note that you need to create the volume in the same Availability Zone as your instance (mine happens to be in us-west-2a), and you must choose either a standard volume or Provisioned IOPS. Once done, press 'Yes, Create'.

[Screenshot: Create Volume dialog]

Once the volume is created, navigate over to the EC2 Instances section and stop your instance. Once stopped, detach the original root volume from the Volumes section of the EC2 Console: find the volume attached to your instance, right-click, and select Detach.

[Screenshot: Detach Volume]

Once the original volume is detached, attach the new 100gb volume to the instance: select it, right-click, choose Attach Volume, and specify the device as /dev/sda1.

[Screenshot: Attach Volume]

You may now start your instance again. Once your instance is back up and running, SSH into it (Note: your IP address may have changed, or you may need to re-associate your Elastic IP address). You may also need to switch to root if logged in as ec2-user; use 'sudo -s' to accomplish this. The attached volume will still appear as 8gb until you extend the filesystem with 'resize2fs /dev/xvda1', as seen in the output below. Your mount points may vary; you can check them with either 'mount' or 'fdisk -l'.

[root@ip-10-254-59-62 ec2-user]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            7.9G  957M  6.9G  12% /
tmpfs                 829M     0  829M   0% /dev/shm
[root@ip-10-254-59-62 ec2-user]# resize2fs /dev/xvda1
resize2fs 1.42.3 (14-May-2012)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 7
The filesystem on /dev/xvda1 is now 26214400 blocks long.

[root@ip-10-254-59-62 ec2-user]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1             99G  969M   98G   1% /
tmpfs                 829M     0  829M   0% /dev/shm
[root@ip-10-254-59-62 ec2-user]#
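If you end up doing this resize often, the console steps above can also be scripted. Here is a rough boto3 sketch of the same flow under a few assumptions: the instance and volume IDs, region, and Availability Zone below are placeholders, and you still have to run resize2fs inside the instance afterwards, just like above.

# Rough boto3 sketch of the console steps above: snapshot -> larger volume -> swap onto the instance.
# IDs, region, and AZ are placeholders; resize2fs inside the instance is still required afterwards.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

INSTANCE_ID = "i-0123456789abcdef0"       # placeholder instance ID
OLD_VOLUME_ID = "vol-0123456789abcdef0"   # placeholder current root volume ID
NEW_SIZE_GB = 100

# 1. Snapshot the existing root volume and wait for it to finish.
snap = ec2.create_snapshot(VolumeId=OLD_VOLUME_ID, Description="pre-resize snapshot")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Create a larger volume from the snapshot in the instance's Availability Zone.
new_vol = ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-west-2a",
    Size=NEW_SIZE_GB,
)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

# 3. Stop the instance, swap the root volume, and start it again.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

ec2.detach_volume(VolumeId=OLD_VOLUME_ID, InstanceId=INSTANCE_ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[OLD_VOLUME_ID])

ec2.attach_volume(VolumeId=new_vol["VolumeId"], InstanceId=INSTANCE_ID, Device="/dev/sda1")
ec2.start_instances(InstanceIds=[INSTANCE_ID])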

If you have made it this far, congrats on your expanded volume. Let me know if you have any questions.

AWS Guides: How to use Amazon SES with Postfix

If you have ever wondered how to use the Amazon SES SMTP endpoint with Postfix, this is the guide for you. It is going to be very close to what is in the documentation on the AWS website, but I will cover some pain points that I have seen and run into while trying to implement this.

Below we will cover integration to SES with both STARTTLS and Secure Tunnel (STUNNEL).

To configure integration using STARTTLS

1. On your mail server, open the main.cf file. Depending on your OS, this file resides in the /etc/postfix folder.
2. Add the following lines to the main.cf file, modifying them to reflect your particular situation, and then save the file.

relayhost = email-smtp.us-east-1.amazonaws.com:25
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_use_tls = yes
smtp_tls_security_level = encrypt
smtp_tls_note_starttls_offer = yes

3. Edit the /etc/postfix/sasl_passwd file. If the file does not exist, create it. Add the following lines to the file, replacing USERNAME and PASSWORD with your SMTP user name and password. Now, this is where it gets confusing: you will want to create an SMTP user from the SES Console at https://console.aws.amazon.com/ses/home?#smtp-settings, not from the IAM Console, because SES SMTP credentials are different from your regular AWS credentials. When you create the user there you will be presented with the following window:

[Screenshot: SES SMTP credentials window]

 

Please NOTE: These credentials are an example and are now invalid, please do not use them. 

email-smtp.us-east-1.amazonaws.com:25 USERNAME:PASSWORD
ses-smtp-prod-335357831.us-east-1.elb.amazonaws.com:25 USERNAME:PASSWORD

So it would be something like:

email-smtp.us-east-1.amazonaws.com:25 AKIAICGIRMNGVGXWNKA:Aq+M1pekvR3yibnqFfYe1MAJGZ1NJ4yduxP0svMwRO5
ses-smtp-prod-335357831.us-east-1.elb.amazonaws.com:25 AKIAICGIRMNGVGXWNKA:Aq+M1pekvR3yibnqFfYe1MAJGZ1NJ4yduxP0svMwRO5

4. Save the sasl_passwd file.

5. At a command prompt, issue the following command to create an encrypted file containing your SMTP credentials:

sudo postmap hash:/etc/postfix/sasl_passwd

6. Remove the /etc/postfix/sasl_passwd file.

7. Tell Postfix where to find the CA certificate (needed to verify the SES server certificate). If running on the Amazon Linux AMI:

sudo postconf -e 'smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt'

If running on Ubuntu Linux:

sudo postconf -e 'smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt'

To configure integration using a secure tunnel

To begin, you will need to set up a secure tunnel as described in the Secure Tunnel documentation. In the following procedure, we use port 2525 as your stunnel port. If you are using a different port, adjust the settings accordingly.

1. On your mail server, open the main.cf file. On many systems, this file resides in the /etc/postfix folder.

2. Add the following lines to the main.cf file, modifying them to reflect your particular situation, and then save the file.

relayhost = 127.0.0.1:2525
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = may
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

3. Edit the /etc/postfix/sasl_passwd file. If the file does not exist, create it. Add the following line to the file, replacing USERNAME and PASSWORD with your SMTP user name and password.

127.0.0.1:2525 USERNAME:PASSWORD

And another example of what it should look like:

127.0.0.1:2525 AKIAICGIRMNGVGXWNKA:Aq+M1pekvR3yibnqFfYe1MAJGZ1NJ4yduxP0svMwRO5

4. Save the sasl_passwd file.
5. At a command prompt, issue the following command to create an encrypted file containing your SMTP credentials.

sudo postmap hash:/etc/postfix/sasl_passwd

6. Remove the /etc/postfix/sasl_passwd file.
7. When you have finished updating the configuration, restart Postfix. At the command line, type the following command and press ENTER.

sudo /etc/init.d/postfix restart

Testing the implementation
You can test functionality with “mail -s test email@domain.com < mail.txt” with mail.txt containing:

Date: Thu Jan 11 08:41:54 2013
To: email@domain.com
Subject: The subject of the message
From: sender@email.com

Body of message goes here

Now you also need to make sure that you correctly set the From address and set up your mail server with a verified domain, otherwise you will get the error "Email Address not verified." Also, if you do not get the credentials above right, you will end up with the following error: "Apr 16 05:26:33 domU-12-31-39-16-38-A6 postfix/smtp[1101]: CE19B421CD: SASL authentication failed; server email-smtp.us-east-1.amazonaws.com[50.19.243.215] said: 535 Authentication Credentials Invalid"
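If you run into that 535 error, one quick way to narrow it down is to test the SES SMTP credentials directly, independent of Postfix. Here is a small Python sketch that does that; the endpoint, port, addresses, and credentials are placeholders, and the From address must be verified in SES.

# Test SES SMTP credentials directly, bypassing Postfix.
# Endpoint, port, addresses, and credentials below are placeholders.
import smtplib
from email.mime.text import MIMEText

HOST = "email-smtp.us-east-1.amazonaws.com"
PORT = 587   # SES also offers STARTTLS on ports 25 and 2587
USERNAME = "YOUR_SES_SMTP_USERNAME"   # from the SES console, not IAM
PASSWORD = "YOUR_SES_SMTP_PASSWORD"

msg = MIMEText("Body of message goes here")
msg["Subject"] = "SES SMTP test"
msg["From"] = "sender@email.com"   # must be a verified address/domain in SES
msg["To"] = "email@domain.com"

with smtplib.SMTP(HOST, PORT) as server:
    server.starttls()
    server.login(USERNAME, PASSWORD)   # raises SMTPAuthenticationError on bad credentials
    server.send_message(msg)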

 

If you've gotten this far without errors then I believe you are set! Let me know if you have any trouble with this guide and I will try to make any section clearer.

AWS Guides: How to resize a EC2 Windows EBS Volume

The following guide will help you resize an EBS volume on a Windows instance inside EC2 on AWS. The task can seem a little daunting but is quite easy once you have done it the first time. Remember, this guide only covers increasing the size of an EBS volume, not decreasing it. Please be sure to also check out my guide on how to increase the root volume size of an EC2 Linux volume.

Step 1: Stop the instance that you are going to perform the volume resize on. (Note: This step is recommended but not required).

Step 2: Create a snapshot of the volume attached to the instance. This can be accomplished by navigating to the EC2 section of the AWS Console and selecting volumes, the ‘attachment information’ column will show which instance the volume is attached to. Once you find the volume, right click and select create snapshot. The details of the snapshot can be whatever you want.

Step 3: Navigate to the Snapshots section of the EC2 console on the left side and wait for the snapshot creation to complete (Note: this can take minutes to hours depending on the size of the volume). Right-click on the snapshot that you just created and select 'Create Volume'. Make sure you create the volume in the same availability zone as the instance. On the next screen you can specify a larger volume size than before; for example, if the original volume was 30gb you can specify 100gb now (Note: you cannot specify a volume size smaller than the original snapshot size).

 

Step 4: Once the new volume is created, detach the original volume from the instance and attach the newly created volume. (Note: make sure you attach the new volume on the same device as the original, noted by 'Device' in the picture below; the root volume is usually /dev/sda1.)

Step 5: Start the instance.

Step 6: The new volume size will not be reflected immediately inside the Windows instance, so you will have to do one more thing. Connect to your Windows instance with Remote Desktop and open the Start menu. In the Run box, or at the bottom of the Start menu on newer versions of Windows, type 'diskmgmt.msc' and press Enter. In the Disk Management window, find the drive you want to resize; this is usually the C: drive for the root volume. Right-click on the drive, select Extend Volume, follow the wizard, and voila, you have resized your volume on EC2.

If you enjoyed the guide please like it and share with your friends, please also share your experiences in the comment section below.

AWS Guides is a series where I share my experiences with hosting on Amazon Web Services.