Archive for the Tutorials Category

Custom A* Pathfinding (Part 1: The Algorithm)

Posted in Tutorials on August 16, 2014 by chrispikul510

So, I’m revamping this blog as a kind of intermediate home for my articles before I start my own site. And today I’m going to talk about Unity, and more specifically the A* pathfinding algorithm (pronounced “A Star”). In case you didn’t know, pathfinding is a system that helps determine the “fastest” way to a target destination without running into obstacles.

Back story

So there are plenty of good pathfinding options available for Unity, including the very nice built-in NavMesh system. You could also go with Aron Granberg’s excellent A* Pathfinding Project, which offers many different ways to implement pathfinding. But, alas, I needed a custom solution. Why? Because in the game I’m working on, Guns, Guts, & Glory, the level is a more-or-less dynamic system. And because I have NO capital. What do I mean by “dynamic”? In my game not every room is open, and not every level is the same. With Unity’s own NavMesh system this instantly becomes a problem, because I need to treat each room as its own pathfinding graph, where the NavMesh system limits you to one. (“Graph” being a fancy word for the way-point and obstacle data.) And Aron Granberg’s solution doesn’t help much either without purchasing the pro version. So to illustrate this, here’s a top-down example scene:

Scene Setup


The enemy (red) needs to get to the player (green). The player will move, of course. But more importantly, what do we do when the player opens a door (black)? Seems simple, but it’s actually a fairly complicated process if you’re using a standard implementation. To explain a bit better: the Unity NavMesh system can be very quick and lets you use a mesh to define the walkable areas, BUT it only allows one single registered NavMesh. So if your level geometry is static and unchanging this is a perfect system, but as soon as you need to connect meshes you’re stuck. Aron Granberg’s system allows a grid-based graph that will scan your level for you! Very neat if you have an open terrain style. He also offers a NavMesh solution that seems like it might work, but I don’t need that much; I just need to get the enemies through rooms. Plus the whole turning-NavMeshes-on-or-off thing poses a problem: we can’t afford a graph recalculation just because a door was opened. There is also a Point Graph, which still has the recalculation and room-toggling problems, but it does have a low footprint, so I like the concept. So, to sum up our requirements!

  • Point style graph so the footprint is low.
  • Toggle style way-points so things can be turned on/off.
  • No re-scanning when something changes (if we can help it).

The Pathfinding Algorithm

Now that I’ve explained what we need and why we need to make it, I’m going to take a moment to explain the A* pathfinding algorithm. Its formal name is the A* Search Algorithm. It uses what’s called a best-first approach, meaning it builds the path as it goes, always expanding the most promising option first. In contrast, you could iterate through ALL possible paths and pick the best one that way, but just as it sounds, that gets expensive. To decide which “node” it should pick next it uses a cost (commonly referred to as “weight”) formula. This is simply:

f = d + h

Hopefully that didn’t scare you too much. In this formula, “d” is distance traveled; in other words, how far we have come so far. In common 2D grid-based systems this just increments 1 by 1 as the path is built. The “h” variable is a bit more complicated. It is the return value of a “heuristic” function. That’s a scary word for a function that guesses how far we still need to go. Once again, in common 2D grid-based systems this is a simple Manhattan distance or Euclidean distance formula. And if that just sounds even more complicated, in reality it is just: how far away is the target if we were to draw a straight line to it (or, for Manhattan distance, if we could only move along the grid axes). If you have ever calculated how far away an object is, you have used this formula.
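To pin those two distance formulas down, here’s what they look like in Python (the function names are mine, just for illustration):

```python
from math import sqrt

def manhattan(a, b):
    # Grid distance: moves counted along the axes only, no diagonals
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    # "As the crow flies": straight-line distance between the two points
    return sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

print(manhattan((2, 4), (14, 5)))   # 13
print(euclidean((2, 4), (14, 5)))   # ~12.04
```

On a grid where you can’t move diagonally, Manhattan is the tighter guess; Euclidean never overestimates either way, which is what makes it a safe heuristic.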

Now, I’m going to explain this in a 2D aspect for now, just to simplify the calculations. Adding another dimension is easy. Here is our example 2D scene.


Now let’s get our green circle guy to the red X. Or at least find a way to. We are going to treat each grid square as a node. In this case our green circle is located at 2,4 and our red X is at 14,5. The algorithm starts by creating an Open List and a Closed List. The closed list represents the places we have been, and the open list is places we can consider. So the starting point instantly goes in the closed list, because, well, we have been there. Once we have this point, we look at all the nodes around it and add each one to the open list for consideration. For each one we calculate its cost/weight. If you remember the formula from earlier, it now becomes:

cost = 1 + Heuristic(a, b)
Heuristic(Point A, Point B) = Sqrt( (A.x - B.x)^2 + (A.y - B.y)^2 )

Now this formula is starting to take shape. Here the “1” is just me plugging in the distance traveled so far; since this is the first step out from the start, that equals 1. The heuristic function takes two parameters and calculates the distance between them. In this example, “a” should be the current node’s location, and “b” should be the destination. Once we run this calculation on each node that surrounds the current one, the algorithm picks the one with the lowest cost. This new low-cost node gets removed from the open list and added to the closed list. If this makes sense, you should be able to see where I’m going with this. Using the chosen node as the new current node, we repeat the process until we have arrived at our destination. Here’s a lovely picture showing this:



You can see from this that it should choose the bottom-right corner as the next step. In case you were wondering, if two squares have the same cost, the choice between them is arbitrary. And if there is an obstacle in the way, just don’t add that square as a node. Repeating this process will leave you with a path saved in the closed list. I am not going to solve this entire example since it isn’t what we are going to use anyway. We will be creating a modified version of this that should hopefully have some speed tweaks and other nifty features. Join me next time for the 3D portions!
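To make the whole loop concrete, here’s a minimal Python sketch of the grid version (Python just to keep it short; the Unity version is what we’ll actually build later). One honest tweak versus the walkthrough above: instead of only comparing the current node’s neighbors, it always expands the cheapest node anywhere on the open list, which is the standard way to keep the search from walking into a dead end. Names like find_path are mine, not from any library.

```python
import heapq
from math import sqrt

def heuristic(a, b):
    # Euclidean "straight line" guess of how far we still have to go
    return sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

def find_path(start, goal, walls, width, height):
    """A* over a width x height grid; walls is a set of blocked (x, y) cells."""
    # Open list entries are (f, d, node, parent) where f = d + h
    open_list = [(heuristic(start, goal), 0, start, None)]
    closed = {}  # node -> parent; these are the places we have been
    while open_list:
        f, d, node, parent = heapq.heappop(open_list)
        if node in closed:
            continue  # already expanded via a cheaper route
        closed[node] = parent
        if node == goal:
            # Walk the parent links back to the start to recover the path
            path = [node]
            while closed[path[-1]] is not None:
                path.append(closed[path[-1]])
            return path[::-1]
        x, y = node
        for neighbor in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = neighbor
            # Obstacles simply never get added as nodes
            if 0 <= nx < width and 0 <= ny < height \
                    and neighbor not in walls and neighbor not in closed:
                heapq.heappush(open_list,
                               (d + 1 + heuristic(neighbor, goal), d + 1,
                                neighbor, node))
    return None  # no route exists

path = find_path((2, 4), (14, 5), walls=set(), width=16, height=10)
print(len(path) - 1)  # 13 grid moves from (2,4) to (14,5) on an empty grid
```

With an empty grid the answer matches the Manhattan distance (12 across plus 1 down), which is a nice sanity check that the heuristic isn’t leading the search astray.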




Starting a LAMP Server in Ubuntu 13.04 (AWS) Part 3: Configuring

Posted in Tutorials on July 17, 2013 by chrispikul510

So where I left off, we had a LAMP stack installed and running. We confirmed this by going to our browser and typing in the IP address we set up through AWS, which showed us an “It Works!” message. So now what? Well, on first install Apache2 sets the document root to /var/www, which is on the root drive. Generally that would be fine, but we set up an external volume and we would like to use that instead. So first we need a place for all the pages/scripts/etc. to go.

cd /vol

mkdir web

mkdir web/public

What this does is create a folder called web on the new volume, then a folder named public inside that. If you run into any permission problems, just prefix the commands with “sudo”. So our deepest directory is now /vol/web/public. I chose this layout because I will be putting all public-facing documents in the “public” folder, and all other dependencies like PHP libraries, development journals, admin tools, etc. will go in other folders in the “web” directory. This way, Apache treats /vol/web/public as the root folder, above which no visitor can traverse. But PHP can, and will, so including files from other folders in “web” from PHP will still work. Now we need to take a side step. Since we just created these folders, they are owned by ubuntu/root. Apache2’s processes run as www-data by default, so Apache2 won’t yet have permission to use the files we put here. We’re going to fix that by creating a new group for both us and Apache, and adjusting some of the permissions.

sudo groupadd webdev

sudo usermod -a -G webdev ubuntu

sudo usermod -a -G webdev www-data

sudo chown ubuntu:webdev /vol/web/public

sudo chmod 775 /vol/web/public

sudo chmod g+s /vol/web/public

Blamo! That should work. To run through it: groupadd adds a new system group that I called “webdev”. I then added users to the group with the usermod commands (make sure that -G option is uppercase). Next is a chown command that sets the owner and group of the public folder. Be very careful with the chown command; accidentally adding a space, or using the -R (recursive) option improperly, can and will wreck your entire system. Next comes a chmod command to adjust the permissions; the same warning applies. And the last chmod command sets the folder’s setgid bit so that all new files created here will be attached to the same group as the parent folder (aka webdev). So now if we create a new file in “public” it should be entirely readable by Apache. Good! But Apache2 doesn’t know to look here yet. Let’s fix that.
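If you want to see that setgid behavior in isolation, here’s a harmless scratch-folder demo (it has nothing to do with /vol, it’s just an illustration you can run anywhere):

```shell
# Make a throwaway folder and give it the same bits as above in one go:
# 2775 = rwxrwxr-x plus the setgid bit (chmod 775 + chmod g+s combined)
dir=$(mktemp -d)
chmod 2775 "$dir"
touch "$dir/newfile"

ls -ld "$dir"          # note the 's' in the group slot: drwxrwsr-x
ls -l "$dir/newfile"   # the new file's group matches the folder's group

rm -r "$dir"
```

The point is that once the bit is set, every file dropped into the folder inherits the folder’s group automatically, which is exactly why Apache (via the webdev group) can read whatever you upload later.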

Configure Apache2

First off: by default the Apache2 configuration files are located at /etc/apache2. You can use the “cd” command to move yourself there if you want; I’ll keep all locations absolute though. Let’s copy the stock template so we have a starting point.

cp /etc/apache2/sites-available/default /etc/apache2/sites-available/tutorial

Feel free to change that name “tutorial” to whatever you want. Just remember it. Next go ahead and pull out nano and change the lines that I list (Don’t delete the others!)…

sudo nano /etc/apache2/sites-available/tutorial

ServerAdmin [enter your email address]

DocumentRoot /vol/web/public

<Directory /vol/web/public>

Options -Indexes FollowSymLinks MultiViews

AllowOverride All

Leave everything else alone for now. Make sure to note the “-” before Indexes. This makes it so that if you have subdirectories without index.html files, Apache won’t automatically show the directory listing for that folder (which could be a big security risk). The AllowOverride setting has potential security risks, but it is the easiest way to enable the use of .htaccess files, which are useful for setting specific configurations per directory. Some people will suggest removing all the cgi-bin stuff, which can help with security, but I don’t really see the use in that right now since it isn’t doing anything. Anywho! Go ahead and save that file by pressing CONTROL+X, then Y, then ENTER. Next up, enter these commands…
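Pieced together, the edited virtual host file ends up looking roughly like this on an Apache 2.2-era Ubuntu (trimmed down; your real copy keeps the logging and cgi-bin lines from the template, and the email address is obviously a placeholder):

```apacheconf
<VirtualHost *:80>
    ServerAdmin you@example.com
    DocumentRoot /vol/web/public
    <Directory /vol/web/public>
        # -Indexes stops Apache from listing folders that lack an index file
        Options -Indexes FollowSymLinks MultiViews
        # AllowOverride All enables per-directory .htaccess files
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

The Order/Allow pair comes from the stock template (that’s Apache 2.2 syntax; Apache 2.4 replaced it with Require), so leave whatever your copied file already has.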

sudo a2dissite default

sudo a2ensite tutorial

sudo service apache2 reload

That should make all the changes you just made stick. If you got any warnings or errors, it’s because something went wrong during the configuration part; check your directories and such. Now if you refresh your browser you should get a 404 or some other error. If you still get the “It Works!” page, check that you properly disabled the default settings with that “sudo a2dissite default” command. The error page right now tells us that Apache is looking for a file in the proper folder but can’t find one. So go ahead, just to test and make sure, and enter these commands.

sudo nano /vol/web/public/index.php

Then drop in the classic one-line PHP test page (nothing else needs to go in the file):

<?php phpinfo(); ?>

Save it with CONTROL+X, then Y, then ENTER.
Now refresh your browser. You should get a tasty PHP info page telling you all sorts of stuff about the system. If you see it (and you will know if you do), congrats! Apache2 is set up for the new directory, and PHP is running with it! At this point your LAMP is mostly working. You could start coding in PHP right now, or building your site as you wish. But you’ll probably want to add a bit more functionality and fine tuning, such as security fixes and setting up that database better!

Some helpful tools/apps

So I could teach you how to edit the database through the command line, but that’s boring! So I’m gonna introduce you to phpMyAdmin. Install it as such.

sudo apt-get install phpmyadmin

Press Y to install. Then at the first big pink screen go ahead and press ENTER. When it asks you about the database configuration, say yes, then type in that root password you set up for MySQL. You can then add some more passwords after that. Once it’s all done you *should* be able to just go to http://your-server-ip/phpmyadmin and get the page, but unfortunately this didn’t work for me. Maybe they need to update the package. Anyways, here’s how to fix that if you get a 404.

sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin

What that does is create a symbolic link from the correct configuration file into the Apache configuration directory. Think of it like a shortcut. Now, if you want, you can change the address you enter for phpmyadmin. I suggest changing it so that script-kiddies don’t come by and try messing with your stuff. To do so, run this command and change the appropriate line….

sudo nano /etc/apache2/conf.d/phpmyadmin

Alias /[WHATEVER URI YOU WANT] /usr/share/phpmyadmin

Save that down. Then for good measure run the “sudo service apache2 restart” command. Then check out your new tool with the adjusted URI. You should be greeted with a login form.

Skipping to the database!

Well, before I go into the rest of those fine tunings, I’m gonna jump to the database. This is because I want to steamroll all the performance and security tips into their own post. So go ahead and get to the phpMyAdmin tool in your browser. Use your root MySQL information here and log in. Once inside you’ll notice some databases on the left menu bar, and tons of info and clicky objects on the rest of it. Navigate your way to the “test” database listed on the left menu. After clicking on that, click on “Operations” on the top toolbar; it’s right in between Import and Privileges. Next go ahead and select the big red link in the middle labeled “DROP DATABASE”. It’ll then warn you that you’re deleting a database. Go ahead and say yes. The “test” database is good to remove since it essentially has wide-open permissions to the world, and hackers love to look for those.

Go ahead and click the “Home” icon at the very top. Now’s a good time to click through the toolbar icons and familiarize yourself with the user interface. Check out what’s under Variables, or Status. And when you’re ready, we should go ahead and adjust some user permissions.

So go ahead and click on “Users” at the top (if you don’t see it, click the Home icon first). Now, at first glance there’s a huge issue. In fact, they highlighted it bright red: the default installation allows any user to log in and use this database. NOT GOOD. Check the boxes next to them and press the “Go” button in the “Remove selected users” section. Good, they’re gone. At this point others would recommend removing, or at least replacing, the root user as well. This is so that anyone trying to brute-force your database isn’t gonna be lucky enough to start by guessing the user “root” right off the bat. Good idea! Let’s do that! Click on “Add user” and fill that sucker out. For Host I used “Any host”, because I never know where I might be when I have to log in to the database. I didn’t create a database for this user (it’s a web app, not a personal/employee database). And make sure you scroll down to “Global privileges”. Go ahead and guess what that does. If you didn’t guess: it sets what this user can and can’t do. Since I’m replacing my root account with this, I made sure to just hit that “Check All” link. At the bottom is a Resource limits box. You could set this if you wanted, but I don’t see the need right now. So go ahead and press “Add user” when you’re ready.

Once that’s done, go ahead and log out, and then back in using your new info. Go back to the “Users” pane and delete the root account if you want. IF YOU WANT. I stress that because, depending on what other apps or services you will use, if they have self-installers that use the root account, they generally don’t ask for a user name. So deleting it will require you to manually configure your database stuff from then on. Personally, I left it there. I just made sure that the Host was set specifically to localhost, or even the IP of the box I’m logged into. This way, root can only be accessed from within the SSH terminal, or from the server box itself. Now, if you thought you were done, you’re not. We need to create one more user. So go ahead and get to the Add User box and fill out some info. I chose a username of “webdev”. I set the Host to “localhost”. I pressed the Generate Password button and made sure to write it down. Then, moving to the Global privileges box, I chose none. That’s right, none. That’s because I’m going to use this user in my PHP scripts, so I’m going to set its privileges per database. This way, if someone screws around with my PHP scripts by way of SQL injection or something, the permissions won’t get them any root access. So go ahead and add that one.

Closing Remarks

Now you can go ahead and start making databases and tables. And if you’ve got the know-how, you should be able to connect to it through PHP as well. You can create files in the new web directory and see them work in your browser. So essentially you have a functioning LAMP stack. What should you do with it now? Well, a System Admin’s job is never done. So in the next series I’ll be going over security and conditioning, as well as how to make life a bit easier when it comes to developing these pages. We might even touch on subjects such as IP logging, GIT repositories, external database applications and code IDEs. Thanks for reading this far, and feel free to share your comments.

Starting a LAMP Server in Ubuntu 13.04 (AWS) Part 2: Mounting & Apache

Posted in Tutorials on July 11, 2013 by chrispikul510

Continuing from Part 1: we now have a working Ubuntu EC2 instance from Amazon’s AWS. We made a secondary volume and attached it. We gave it an IP address to reference it by. And we got our SSH up and running so we can easily log into it from home. So what’s our next step? Well, let’s make this thing functional!

To start the process: even though we attached a volume from AWS’s console, it doesn’t actually format or properly mount the volume in Ubuntu. In other words, the wires are connected, but Ubuntu doesn’t know what to do with it. Fixing this is easy. Log into your instance through SSH. If you’re on a Mac, type “terminal” into Spotlight, launch the Terminal app, and type the ssh command you learned in Part 1. If you’re on Windows, you’ll need to start up PuTTY (a Windows SSH client) and log in with the settings you created for your server. Once you’re in, we can begin. A quick note on using Ubuntu with EC2 instances: Amazon always sets up the default SSH to require a private/public key pair. It will not use passwords, and SHOULDN’T. So even if you feel like changing that, don’t. Also, they disable logging in directly as root. This is also for your benefit. The default user “ubuntu” is still a sudoer, though, and can do everything root can, including becoming root. It’s best that you stay as ubuntu and get used to just prefixing certain commands with “sudo”. This is to prevent you from accidentally mucking things up. For those unfamiliar with Linux commands, sudo basically means: do this as if I were root. Not every user has this power, so if you create new users they will not be able to run sudo commands unless you add them to the sudoer list. But that’s getting a bit complicated for us.

Formatting & Mounting an EBS Volume

So we have an EBS volume all hooked up, but we need to tell Ubuntu how to use it. Easy peasy. First step: even though AWS says it attached to /dev/sdb, it didn’t, because newer Ubuntu installations don’t call it sdb anymore; they call it “xvdb”. The last letter still matches the device name you chose, if you think about it. So if you attached to “/dev/sdf” for some reason, you’ll actually be looking for “/dev/xvdf”. Not too complicated. If you’re unsure where it is, check the AWS dashboard under Volumes: select the extra volume and the details will tell you the attachment point. Or, in the terminal you can just use this command to show the attached devices. xvda1 is the root drive; don’t mess with it.

ls /dev | grep xvd

So now that we know the device name (/dev/xvdb) we need to format it to a filesystem that Ubuntu uses. In this case we will be using ext4. The command for that is as follows….

sudo mkfs.ext4 /dev/xvdb

After pressing enter, your terminal should start populating with commentary on how it’s writing blocks and journals and such. When it’s finished and back at the prompt, we can make the mount point where the drive will be attached. Simple enough. You can name it whatever you want, but I like “vol” since it’s easy to recognize and short enough to be easy to type.

sudo mkdir -m 000 /vol

Now to attach the drive to that new folder. This is a pretty simple process.

sudo mount /dev/xvdb /vol

Who knew it was that easy? Well, there’s one issue. While this is in fact mounted, if you restart or reboot the system it will not be re-attached. Now, some people will tell you to add an “fstab” entry to attach it. Fstab is basically the list of mounts processed while the system is booting up, before the users and other services are started. While this is perfectly correct in most circumstances, it is NOT in Amazon AWS. In fact, if you add the fstab entry you can brick your instance as soon as it reboots. This is because AWS instances are VPSs; they are only virtual software emulating a box, and the point in the boot process at which EBS volumes get allocated and attached comes after fstab execution. So the fstab entry will be looking for something that does not exist yet and will stop the boot from continuing, meaning your system will hang and none of the SSH services will be available for you to fix it. So what I do is add it to the “rc.local” script. This script is executed when the users are loaded up, which is sufficient time for AWS to attach the volume. To edit it, type in the command “sudo nano /etc/rc.local”. You should be greeted with some commented text (lines that start with #) telling you about its purpose. Directly after the comments and before the “exit 0” line, enter the mount command from above. Then press CONTROL+X, then Y, then ENTER. Now the system will automatically mount the drive when rebooted.
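For reference, a trimmed-down /etc/rc.local ends up looking something like this. The mount line is the only addition, and it must sit before the exit 0. (rc.local already runs as root at boot, so the sudo prefix is optional here.)

```shell
#!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel.

# By this point AWS has attached the EBS volume, so the mount will succeed
mount /dev/xvdb /vol

exit 0
```

If you ever swap the volume’s device name, remember to update it here too, or the boot script will quietly fail.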

Installing the LAMP Web Server

We are now ready to start installing some packages. The first one up is the Apache2 web server. I will take this time to tell you that there are other web server options such as lighttpd and nginx. They are both better at handling lots of concurrent requests, as well as proxying, but I feel their PHP performance suffers. If you’re going to be making a proxy or maybe a simple content delivery network, I would suggest looking into nginx. But that’s a different tutorial. Apache2 is tried and true; I doubt I would be a liar if I told you that the majority of websites on the internet run on Apache. Plus I find it a bit easier to configure. So how do we install it? With the apt package manager.

sudo apt-get install apache2

Now the wheels should start spinning and it will ask you if you want to install the specified packages. To which I say, duh! So let it go and install. The packages have gotten much smarter, so when it’s finished installing it should really just be working. You can test it by firing up your browser and typing in the Elastic IP address that you gave it earlier. If you get a page back that says “It works!” then, well, it works! Anything else and we’ve got problems. Leave me some comments if you run into issues here, because it could be a multitude of things. Oh, and if you’re getting all excited about using domain names and such, calm down; IPs will be good enough for now. Next up, PHP5.

sudo apt-get install php5 libapache2-mod-php5

Once again, it will confirm that you want to install the specified packages. Enter a nice “Y” and press ENTER. It will automatically detect Apache and install the necessary modules, as well as restart the Apache web server. So basically, when it’s done loading it’s good to go. Next up is the MySQL database. Here’s the install command.

sudo apt-get install mysql-server libapache2-mod-auth-mysql php5-mysql

It will show a big pink screen asking for a password. Enter something strong, but that you can remember. In fact, write it down on paper. Seriously. This is the database’s root administrative password, so slap yourself if you entered “1234”. Anyways, let it finish installing. And just like PHP, it will automatically restart the web server.

Until Next Time

So at this point, we have the volume set up. We also installed the bare packages for Apache2, PHP, and MySQL. But now we have to configure them. So join me in the next installment and we will dive deep into configuring and customizing our server to be both secure and stable.

Starting a LAMP Server in Ubuntu 13.04 (AWS) Part 1: Amazon AWS

Posted in Tutorials on July 11, 2013 by chrispikul510

So part of the way through creating this newest LAMP stack server in Amazon’s AWS, I figured I should document it, since it generally takes people (at least myself) a lot of googling and cross-checking of values between Linux distros and versions, etc. So hopefully this will be a complete tutorial on the process I used to get a functioning LAMP server working.

Now some prerequisite understandings. I’m using a Mac (irrelevant, really) and Amazon’s AWS for my virtual private servers. I do (mostly) everything through Terminal and SSH. The target requirements for our server are…

  • Ubuntu 13.04 (But really, 12.X will work too)
  • Apache2 Web Server
  • MySQL Client/Server
  • PHP5
  • Secondary EBS (Drive) Volume
  • Secure SSH Access
  • Protection from scanners, brute-forcers, and other ne’er-do-wells.

So. These are our requirements; where to start? Well, in your Amazon AWS EC2 dashboard we need to create a new instance. So get to EC2->Instances->Launch Instance. It should bring up the Create New Instance window and give you 3 options. Choose the Classic Wizard and click Continue. Under the Quick Start tab select Ubuntu Server 13.04. I left the platform radios defaulted to 64-bit, because duh, we want 64-bit. Then click Select.

In the next window you really don’t need to do anything, unless you want a specific instance type (more power = higher cost) or an availability zone. If you have no specific needs for these, click “Continue”. Next up is the Advanced Instance Options portion. There’s really only three things to consider here. 1) Do we want (to pay for) CloudWatch monitoring? It allows you to view fun graphs such as CPU usage and volume read/write bandwidths. 2) Termination Protection. This basically makes it so you have to take a couple extra steps to fully delete this instance from AWS. 3) Shutdown Behavior. When we tell AWS to shut this down, do we want it deleted as well? That’s risky if you or anyone on this AWS account has an itchy trigger finger.

After those have been considered, the defaults are fine. Most of the other options relate to more advanced instances than the Free Tier t1.micro. Next up is Storage Devices. You’ll see the root volume, which should be 8 GiB. This is generally fine for a root volume; unless you’re going to have TONS of data on the root drive, I wouldn’t change it. But click Edit anyways. In the new section that drops down, select EBS Volumes, then fill out the volume size. The bigger the drive, the more it costs. So even though you could create a terabyte drive here, ask yourself if you need it. I went with another 8 GiB, because this is a tutorial and I’m gonna delete it anyways. Now, under the Device portion it should say /dev/ then a drop-down box. Set that box to sdb. This sets which device name Ubuntu will see this drive attached to. Remember this option, as it’s very important later. (Which one you choose isn’t, really, but I like sdb since it’s next in line.) Then either leave checked or uncheck the Delete on Termination option. I leave it unchecked; this way, if I destroy this instance, the volume can be re-attached to another instance. When done with that, click Add, then Continue.

The next page isn’t crucial. It’s basically where you add your own labels to the instance so you can tell what it is in the AWS dashboard. At least fill out a name, though, if you have multiple instances.

Next up is the Create Key Pair page. This is IMPORTANT. If you have already made key pairs and know what they are, go ahead and select “Choose from your existing Key Pairs” and pick yours. If this is your first instance, select “Create a new Key Pair”. Enter a name for it, such as UbuntuServer, and click “Create & Download your Key Pair”. And just as the message below that link says, save it somewhere easy for you to remember. I saved it directly into Documents on my Mac. On my Windows PC I might create an SSH folder directly on my drive and save it there. Do not lose this. And do not try to change it. When you’ve got that figured out, continue to the Configure Firewall section.

On this page I generally recommend you always “Create a new Security Group”, because some of your instances may have different firewall needs than others. For instance, a web server needs the general HTTP and HTTPS ports and that’s about it, whereas a full mail server will need HTTP, HTTPS, SMTP, SMTPS, POP3, IMAP, etc. The less you allow here the better; only open what you absolutely need. So give this a Name and Description and start adding rules. If you’re going to enter port ranges (bad practice in my book) leave the “create a new rule” box on Custom TCP rule. If it’s a standard or common port, it’s probably listed in that drop-down box. The first one you must have is SSH, so click the box and select SSH. You’ll notice you now need an address. Leaving this at the default (0.0.0.0/0) will mean any IP address can access this port, and when it comes to SSH, that’s a NO-NO. So enter an address. If you’re on a trusted network you can enter your network’s CIDR mask. If you don’t know what a CIDR mask is, you’ll have to google it, but what I can tell you is: use nothing bigger than /24 unless you know what you’re doing. For a single absolute address use /32. That last bit (the /32) basically says how big a range is: /32 will match exactly 1 address, while /24 would match everything in 127.127.127.*. So I recommend entering your absolute address with the CIDR mask as /32. When you’ve done that, click Add Rule. Now add the following rules: HTTP, HTTPS, SMTP, and MYSQL (if you want remote db access; I didn’t, since I just use the terminal). Once you’re happy with these options, go ahead and click Next. You can always alter these options later. In fact, you will. After that, review that everything is right and click Launch.
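If the /32-versus-/24 thing is still fuzzy, Python’s standard ipaddress module makes a quick sanity checker (the 203.0.113.x addresses below are just documentation-range stand-ins for your real IP):

```python
import ipaddress

# A /32 matches exactly one address: perfect for "only my machine may SSH in"
single = ipaddress.ip_network("203.0.113.7/32")
print(single.num_addresses)  # 1

# A /24 matches every address sharing the first three octets
office = ipaddress.ip_network("203.0.113.0/24")
print(office.num_addresses)  # 256
print(ipaddress.ip_address("203.0.113.200") in office)  # True
```

Smaller suffix number = bigger range, which is why a /16 or /8 in an SSH rule should make you nervous.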

At this point the wheels should be spinning and AWS is launching your brand new bare-bones Ubuntu server. Once you’re back at the dashboard looking at your new instance being started, now’s a good time to click the Elastic IPs option under “Network & Security” on the left toolbar. Once there, click the “Allocate New Address” button at the top. EC2 should be selected; click “Yes, Allocate”. Now a new address should pop up on the dashboard. Select it, and click Associate Address from the top bar. Your new instance should already be selected in the drop-down. If it’s not, find it and select it. AWS uses Instance IDs instead of your Key/Values, so you may have to go back to the Instances portion, find the new instance, and write down the Instance ID. Once you’ve got that down, go ahead and associate it. What this does is give your instance a static IP address that you can use to access it.

OK. That should be it for Amazon AWS’s portion for now. Next up we need to actually log in to it and start configuring our new server. Once again, I’m on a Mac, so I will write down the instructions for Mac/Linux boxes. If you’re a Windows user, sorry, I will write down those instructions later.

Mac Instructions:

Open up the Terminal app. You can do this easily by pressing Command+Space and typing “terminal” into the Spotlight box. Once that’s open we need to adjust and move some stuff around. Type in this command to make sure there’s an SSH folder for this user.

ls ~/.ssh

You should get a read out of some file contents. If not, use this command to create it.

mkdir ~/.ssh

Now let’s copy in our key file. Remember, it’s the one you saved earlier in the Create Key Pair step. Use this command.

cp ~/Documents/KeyFile.pem ~/.ssh

Good. Now to make sure this has worked and that we can log in, we need to use ssh and the key file. Here’s the general command syntax (all one line).

ssh -i ~/.ssh/KeyFile.pem ubuntu@YOUR.INSTANCE.IP.ADDRESS

If all works, it should ask you if you want to add the fingerprint to the known_hosts file. Type in yes and press enter. At this point it SHOULD log you in and you’ll be greeted with a bunch of text about the Ubuntu 13.04 release and some system specs, and finally a command prompt that should look similar to “ubuntu@ip-X-X-X-X:~$”. If you see this we are all good and can start configuring. If not, and you receive a message about the key file being rejected, then it’s time to hit Google. In my travels I have found that 9/10 times it’s because the file permissions got changed, and it can generally be solved by setting the key file’s permissions like this.

chmod 600 ~/.ssh/KeyFile.pem

Now, if you want an easy way to log in without having to type that heavy command, use this command (exactly).

nano ~/.ssh/config

Then copy these lines in, with your relative data changed. The Host line sets the shortcut name you’ll type, and note that SSH’s port is 22 (25 is SMTP, a common mix-up).

Host [ALIAS]
HostName [YOUR.INSTANCE.IP.ADDRESS]
User ubuntu
Port 22
IdentityFile ~/.ssh/[YOUR KEY FILE]

Then save it with CONTROL+X, then press Y and ENTER. Now all you have to enter is…

ssh [ALIAS]
And press enter. So now you have a server (well, sort of) and a way to log into it. Check in later for the next part in the series. As a final touch, here’s what we have accomplished so far:

  • Started a new Ubuntu Instance
  • Created our secure KeyPair file
  • Adjusted our Firewall to be nice and secure
  • Gave it a static IP address
  • Set up our computer to log in using SSH.