HOW TO – Set up your first Go workspace and run your first `Hello World` on your Linux box

This article briefly covers how to install and set up the tools you need to write & run your first Go program on your Linux machine.

Downloading & Installing Go

  • Browse to Go’s download page and click the download link for the Linux binary archive
Go Lang’s download page
  • Once the download is complete, open a terminal window and go to the directory containing the downloaded Go binary archive (e.g. ~/Downloads)


  • Run the tar -xvf command to extract the content of the Go binary archive.
Running tar -xvf command for extracting go lang binary file.
  • Move the extracted go folder into the /opt directory.


  • Go to your home directory and edit the .profile file there (create it if it does not exist yet). Open the file in a text editor such as vim and add a line declaring a new environment variable, GOROOT, which points to the location of the moved Go binaries, the /opt/go directory. Then add a line that exports PATH with the $GOROOT/bin path appended to it, so that we can run go from any directory.


  • Save the changes to the .profile file, then run the source .profile command in the terminal so the changes take effect in your environment immediately. The next time you log in or boot into your Linux box with the same account, you will be able to run the go command from any directory.
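Assuming the extracted files were moved to /opt/go as described, the two lines appended to ~/.profile might look like this:

```shell
# Point GOROOT at the Go installation we moved into /opt
export GOROOT=/opt/go

# Append Go's bin directory to PATH so `go` can be run from any directory
export PATH=$PATH:$GOROOT/bin
```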


Set up your 1st Go workspace

A workspace in Go is a directory that contains the source code of our Go application & library projects, 3rd party Go dependencies, and the binaries of our compiled Go projects. Below are the steps to create one:

  • Create a new directory somewhere in your home directory (e.g. ~/Documents/projects/golang). This directory will be the root of our Go workspace. Within it, create 3 new subdirectories with the following names: bin, pkg, and src


  • Going into the src folder, we’ll create a subdirectory named after our source control provider (e.g. github.com). Then, going into that subdirectory, we’ll create another subdirectory named after our source control provider account.


  • Go back to your home directory and edit the .profile file again in a text editor. Add a new entry that exports the GOPATH environment variable. Ensure that GOPATH points to the path of our workspace directory (e.g. ~/Documents/projects/golang). Save the changes and re-run the source .profile command to force the changes to take effect in your environment immediately.
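Put together, the workspace steps above can be sketched like this (the ~/Documents/projects/golang path is just the example location from the article):

```shell
# Create the workspace root and its three standard subdirectories
mkdir -p "$HOME/Documents/projects/golang/bin" \
         "$HOME/Documents/projects/golang/pkg" \
         "$HOME/Documents/projects/golang/src"

# Append these lines to ~/.profile so Go picks up the workspace,
# and so binaries compiled into $GOPATH/bin can be run from anywhere
export GOPATH="$HOME/Documents/projects/golang"
export PATH=$PATH:$GOPATH/bin
```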



Creating your 1st “Hello World” Go project

At this point, we have created our initial Go workspace. Now we are ready to create our 1st “Hello World” Go project.

  • Go to the workspace’s source code directory, then create a new project directory (e.g. ~/Documents/projects/golang/src/ )
  • Create a new .go file (e.g. hello.go ). Open the file in a code editor such as Visual Studio Code.
  • Inside the blank .go file, we’ll write our 1st hello world program in Go as follows:


  • Go back to the terminal and run the go install command to compile the program. The compiled binary will be placed under the $GOPATH/bin directory.


  • Since we have exported $GOPATH/bin as part of the $PATH variable, we should be able to run the program by running the hello-golang command (the name of your compiled Go program).
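For reference, the hello.go program from the steps above can be as small as the following (the greeting is factored into its own function purely to keep it easy to test; a plain fmt.Println in main works just as well):

```go
package main

import "fmt"

// greeting returns the message our first program prints.
func greeting() string {
	return "Hello, World!"
}

func main() {
	fmt.Println(greeting())
}
```

After saving it, running go install from the project directory compiles it into $GOPATH/bin.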




At this point, we have set up a Go workspace in our Linux box. The workspace is a single location where we will put the files & directories of our current & future Go source code projects, dependencies & compiled projects.
We have also set up the GOPATH & GOROOT environment variables and integrated them with the PATH variable. This makes it easier to go into the workspace’s location, execute compiled binaries or just run the go command from any directory.


Serverless AWS Lambda – Part 2: Retrieve data from AWS DynamoDB

In the past article, we learned how to create our 1st AWS Lambda service using the Serverless framework. The current function in the project exposes the GET HTTP verb and, when invoked, returns a list of hardcoded blog objects. In this article, we are going to extend its capability by refactoring that part so that, instead of returning a hardcoded array, it returns an array of objects stored in an AWS data storage service.

DynamoDB vs SimpleDB vs RDS (Relational Database Service)

AWS offers 3 database services to customers: DynamoDB (a NoSQL database solution), RDS (a relational database service, hosted & managed by AWS) and SimpleDB (a NoSQL database similar to DynamoDB, which AWS seems to `hide` from customers, yet it is still accessible). Among these 3 options, I rule out RDS, because its pricing is the most expensive compared to the others.

So it is SimpleDB vs DynamoDB now. DynamoDB is popular and widely used. However, in terms of query speed & price, SimpleDB is more attractive than DynamoDB in certain cases. I am tempted to choose SimpleDB over DynamoDB, but since in this sample we are going to build the backend API for a kind of blog application, DynamoDB is the more suitable choice for this case. In future articles, I will cover the SimpleDB version of this sample, because it is still interesting to me.


Initialise blog table on AWS DynamoDB

  • Open Blog project’s serverless.yml file and then add these resources entries.
Add resources block in serveless.yml

The resources section we added to the serverless.yml file tells Serverless to create a new DynamoDB table on AWS. In this section we define the table’s name, a String typed attribute that we define as the primary key of the table, and the table’s initial Read & Write capacity units. As for the other attributes, we will add them when we create records later.
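A minimal sketch of such a resources section, assuming the table is named Blogs and its String typed primary key attribute is id (the exact names live in the article’s screenshot, so adjust yours to match):

```yaml
resources:
  Resources:
    BlogsDynamoDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Blogs
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S      # S = String typed attribute
        KeySchema:
          - AttributeName: id
            KeyType: HASH         # id is the table's primary key
        ProvisionedThroughput:
          ReadCapacityUnits: 1    # initial read capacity units
          WriteCapacityUnits: 1   # initial write capacity units
```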

  • Once we have saved the changes in the serverless.yml file, let’s go back to the terminal and invoke the serverless deploy command to deploy the service. You may want to remove the previously deployed service by running the serverless remove command first.
Running serverless deploy command
  • Now we are going to check the created table and add records to it. Let’s log in to the AWS Web Console with your account and look for the DynamoDB home page.
Accessing DynamoDB’s Home Page
  • Go to the DynamoDB Tables page. Confirm that the Blogs table appears on screen. Select it and then click the Items tab.


  • On the Items tab view, click Create item button and confirm that a modal dialog as in this following screenshot appears.
Access DynamoDB Create Item page
DynamoDB Create Item Modal
  • Through the popup menu, add 3 more attributes (columns) and fill them with strings as their values.
Add more attributes with values
  • As for the id, we are going to assign it a UUID. To generate the UUID, we can use an available tool such as the one on this site.
Assign UUID on id attribute
  • Repeat the prior steps to add as many records as you want. You can go to the Actions menu -> click the Duplicate button to do this.
Created Blog records
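As an alternative to the online generator mentioned in the UUID step above, most Linux boxes can produce a random UUID locally (assuming either the uuidgen tool or the kernel’s /proc interface is present):

```shell
# Print a random (version 4) UUID to paste into the id attribute
uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid
```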

Setup AWS SDK for Node.JS

AWS provides SDKs for developers that contain various APIs for accessing their services, including DynamoDB. We need the SDK for Node.JS in our Lambda function to access the DynamoDB table we created previously.

  • In the lambda project, ensure that a package.json file exists. Otherwise, create it by running the npm init command.
package.json file
running npm init command for creating package.json file, in proper way
  • Once we have finished the prior step, we’ll install the AWS SDK in our project by running the npm install aws-sdk --save command. The extra --save argument adds an entry to the package.json file, ensuring that when our Lambda package is built and npm install is run, npm installs the AWS SDK library into the project.
Installing AWS SDK
AWS SDK entry in dependencies section of package.json file
  • Ensure that you have set up your IAM account’s Access & Secret keys. If you are not sure about this, open the ~/.aws/credentials file and check that there are lines defining these key entries. If not, follow the guide on this site.
content of ~/.aws/credential file

Refactor Blogs resource’s GET verb – Phase 1

In the prior article, we created a retrieveBlogs helper method which returns a list of hardcoded blog objects. We are going to create a new method to replace this retrieveBlogs method. Here are the steps for how we are going to do this.

  • Remove the retrieveBlogs method from the handler JS file. Move the removed method’s hardcoded lines into a class, which we name DynamoDbDataService. Change the handler JS file to instantiate the new class and call its getAll method.
Moving retrieveBlogs method into the new DynamoDbDataService class
  • Then, in the handler JS file, we change the code to import our new class, instantiate & initialise it, and call its getAll method to retrieve the Blog items from AWS DynamoDB.
Changed Handler JS file to use our new DynamoDbDataService class
  • Before we move to the next refactoring phase, we’ll invoke the serverless invoke local command first to ensure that there are no errors in our new code.
Test our refactored lambda function in our local machine
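The first refactoring phase might look like the following sketch. The class name follows the article; the blog objects’ fields are placeholders, since the actual hardcoded data only appears in the screenshots:

```javascript
'use strict';

// DynamoDbDataService.js, phase 1: the data is still hardcoded, but the
// retrieval logic now lives in its own class instead of the handler file.
class DynamoDbDataService {
  getAll() {
    // Returns a Promise so the later, real DynamoDB call stays asynchronous
    return Promise.resolve([
      { id: '1', title: 'First blog post' },   // placeholder fields
      { id: '2', title: 'Second blog post' }
    ]);
  }
}

module.exports = DynamoDbDataService;
```

The handler file then requires this module and calls `new DynamoDbDataService().getAll()` in place of the old retrieveBlogs call.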

Implement calls to AWS DynamoDB using AWS-SDK

  • Let’s go back to the DynamoDbDataService class. In the early lines (below the ‘use strict’ line), we’ll put a statement to import the AWS-SDK library.
Import AWS SDK
  • Moving to the constructor, we write lines to initialise the AWS configuration property. We want to tell the AWS SDK which AWS Region our Lambda service is deployed to. We could hardcode it to a specific region such as us-east-1, ap-southeast-1, etc. But we won’t do it that way, because we don’t want to change this line in the future if we deploy the service to a different region.
    In the cloud environment, AWS provides an environment variable, AWS_DEFAULT_REGION, filled with the AWS Region our Lambda service is deployed to. Therefore, we are going to get the AWS Region from the AWS_DEFAULT_REGION environment variable instead of hardcoding it.
Initialise AWS SDK & Region inside constructor
  • Remove all the hardcoded lines inside getAll’s returned Promise. Then we start by instantiating the AWS.DynamoDB.DocumentClient type. This type exposes several methods, one of which can be used to pull data from a DynamoDB table: the scan method. scan takes TableName as a required parameter. We build the scan parameters, which consist of TableName & Limit (defining the maximum number of returned records), assigning the tableName & numberOfItems property values to them. Next, we call the documentClient instance’s scan method and pass the params as its argument. When the call finishes, the callback in the method’s 2nd argument is triggered. Inside the callback, we check whether the call ended with an error or a result. Should it end with an error (err is not null), we call the promise’s reject method with the err object as its argument. Otherwise, we call the resolve method with the result (data) as its argument.
Re-implement the getAll method
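A sketch of the reimplemented method is below. One deliberate deviation from the article for the sake of testability: the document client is passed in through the constructor, so the class can be exercised with a stub; on AWS you would pass in `new AWS.DynamoDB.DocumentClient()` after requiring aws-sdk:

```javascript
'use strict';

// DynamoDbDataService.js, phase 2: pull records from DynamoDB with scan.
class DynamoDbDataService {
  // documentClient: an AWS.DynamoDB.DocumentClient (or a stub in tests)
  constructor(tableName, numberOfItems, documentClient) {
    this.tableName = tableName;
    this.numberOfItems = numberOfItems;
    this.documentClient = documentClient;
  }

  getAll() {
    // TableName is required by scan; Limit caps the returned record count
    const params = { TableName: this.tableName, Limit: this.numberOfItems };
    return new Promise((resolve, reject) => {
      this.documentClient.scan(params, (err, data) => {
        if (err) {
          reject(err);     // the call ended with an error
        } else {
          resolve(data);   // data.Items holds the retrieved records
        }
      });
    });
  }
}

module.exports = DynamoDbDataService;
```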

Testing the changes in local development machine

  • At this point, we should be ready to test our changes. Before we deploy our code to AWS, it’s a good idea to test it first on our local machine. As usual, we invoke the service with this command: AWS_DEFAULT_REGION=<AWS Region of where your lambda sits on> serverless invoke local -f <lambda function’s name>. In our case, we invoke it as AWS_DEFAULT_REGION=us-east-1 serverless invoke local -f blogsFetch
  • Invoke the command and confirm that we get a result with status code 200 and a body containing the stringified data retrieved from our AWS DynamoDB table.
Invoke changed Lambda function in local against AWS DynamoDB

Deploy to AWS and testing it on API Gateway test page

  • All should still be fine. It’s time to upload our changed Lambda function to AWS. This time, instead of running the serverless deploy -s dev -r <aws-region> command, we call this command to deploy only the Lambda code (the node.js code): serverless deploy -f <function’s name> -s <stage> -r <region>. We use this command because we don’t want to rebuild other resources, such as the DynamoDB table, when deploying our updated Lambda code.
Deploy the updated lambda using -f argument (deploy per function)
  • Although it worked fine when we tested our lambda in the local environment, we still need to test the deployed Lambda function. This time, instead of using Postman to test our API, we will do it in a different way: through API Gateway’s test page. Go back to our AWS Console, then go to the AWS API Gateway service page. On the page, click the Resources link in the left menu. Then, in the Resources pane, click the GET verb. In the right pane, click the blinking dot labeled TEST. Clicking it brings up the API test page.
Accessing API Test Page
  • Confirm that the right pane is refreshed and displays the test page. On the page, there is a blue button with a lightning icon: the test button. Click this button to invoke our blogs/fetch API.


  • When the call finishes, we receive an “Internal server error”. Next, we will cover how to fix this issue.
“Internal server error” when testing the API GET verb
  • In the displayed Logs, we cannot find any useful information explaining why the error happened. To find out what is going on and the cause of this error, we can look at the CloudWatch Logs window. Open the CloudWatch page, then click the Logs item in the left menu. Confirm that the right pane refreshes and displays a list of Log Group items. Click the item whose name matches our blogs lambda service.
CloudWatch page with displayed Log Groups list
  • When we click one of the displayed Log Group items, the right pane refreshes again and displays a list of Log Streams. Click the one with the latest Last Event Time.
CloudWatch page with displayed Log Streams list
  • On the next page, expand the entry that appears to explain this error. Notice the errorMessage & errorType; it seems that we have not authorised the Lambda function to perform the Scan operation against the designated DynamoDB table.


Fixing the unauthorised access error

  • To fix the previous error, we need to give our Lambda function authorisation to perform the scan operation against the DynamoDB table. The way to do this is by adding a DynamoDBIamPolicy entry in the serverless.yml file, under the Resources entry, as follows:
Updated serverless.yml with DynamoDBIamPolicy entry
  • Save the changed serverless.yml file and then re-run the serverless deploy -f <function’s name> -s <stage> -r <region> command.
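The DynamoDBIamPolicy entry described above might look like the following sketch; the policy name, table name and the IamRoleLambdaExecution role reference are assumptions, so match them against your own serverless.yml and the role Serverless generated:

```yaml
resources:
  Resources:
    DynamoDBIamPolicy:
      Type: AWS::IAM::Policy
      Properties:
        PolicyName: lambda-dynamodb-scan
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:Scan              # allow only the Scan operation
              Resource: arn:aws:dynamodb:*:*:table/Blogs
        Roles:
          - Ref: IamRoleLambdaExecution      # the role Serverless created for the function
```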

Retesting the Lambda on API Gateway test page

  • Once we have redeployed our lambda function, go back to the AWS Web Console’s API Gateway test page for our deployed Lambda function. Then press the Test button. Notice that we no longer receive an error. Instead, we should see records from the DynamoDB Blogs table retrieved and displayed as follows.
Returned response from testing the GET API


In this article, we have learned a number of key things to get our Lambda function to pull data from AWS DynamoDB. First, we define the DynamoDB table we want to create by adding a Resources section to the serverless.yml file. Once we redeploy our Lambda to AWS, AWS CloudFormation creates the DynamoDB table, besides our Lambda function & its API Gateway endpoint. Then, we filled the created table with several items through the AWS DynamoDB Web Console.

In the Lambda function’s handler code, we structured our code by moving the retrieveBlogs method into an ES6 class (the DynamoDbDataService class) and wrapping the method’s body in an ES6 Promise (because we want record retrieval to be an asynchronous process). Then, we replaced the hardcoded lines with logic that initialises the AWS SDK’s DocumentClient class and calls its scan method to retrieve records from the Blogs table.

Aside from these, we learned how to get the detailed error log, by looking at the AWS CloudWatch web page, in case a request to our deployed Lambda’s endpoints returns “Internal Server Error”. We also learned that the API Gateway page has a section that allows us to test our Lambda’s HTTP endpoint. The source code of this article can be found at this link.

In the next article, we will add more verbs to the Lambda function so that it provides complete CRUD endpoints.

Serverless AWS Lambda – Part 1: A Quickstart for Beginners

This article covers the steps for creating your first AWS Lambda service & functions using the Serverless framework. Before following these steps, ensure that you already have an AWS account.


  • Ensure that you have installed the latest LTS version of Node.JS. Visit this link to find out how to install it on your machine:
  • Ensure that you have installed Serverless framework. If not, run this command for installing it:
    sudo npm install serverless -g

    Then, run this command to check whether serverless has been installed successfully or not:

    serverless -v
  • Create a new IAM user account or reuse an existing one. Ensure that you have given AdministratorAccess to the user account. If you are not sure how to do this, follow the guidance in this document
  • Upon creating a new IAM user account, take note of the displayed API Key & Secret Key of the new IAM account.
  • Follow the guidance in this document to configure the aws credentials (API Key & Secret Key) that you noted in the prior step. The Serverless framework needs this information so that it can deploy on your behalf.

Create your first service:

  • Once we have completed all of the required prerequisites, create a new folder, go into it, then run this command to begin creating your 1st service:
    serverless create --template <template-name> --path <service-path>

    Example:

    serverless create --template aws-nodejs --path blog

    Creating 1st Service
  • When the service has been created, we will see the file structure in the project folder, as shown in the following screenshot:
    • serverless.yml – a YAML file where we will define configurations for our service, such as AWS Resources (S3, DynamoDB, etc), Region, Nodejs Runtime, we want to use and also our service’s functions configurations.
    • handler.js – the initial Javascript file, created by serverless, that is supposed to be the place where we write our function’s logic. Rename the file after the entity that our function interacts with (e.g. blog, product, task, etc).
Initial Project’s Structure
  • Open the serverless.yml file and edit these Configuration sections: lambda function’s name, handler method’s name, associated HTTP path & verb.


serverless.yml – Configure service’s function
  • Open the handler javascript file. Let’s write code inside the exported function whose logic is simple: just returning an array of JSON objects
handler node.js – It retrieves the data, wrap the result in response’s body and return
handler node.js code – helper method that is supposed for retrieving the data from storage
  • Before we deploy the lambda function, let’s invoke it on our local machine by executing this command:
    serverless invoke local --function <function’s name>
     e.g. serverless invoke local --function blogs

    Ensure that no error happens and that the correct result is printed in the terminal.


Deploy the service to AWS Lambda

We have implemented simple logic inside the Lambda service’s handler and invoked it locally using the serverless invoke command. Now, we need to deploy our lambda by running this command in the terminal:

serverless deploy --stage <dev, uat, production> --region <aws-region>

Example:

serverless deploy --stage dev --region ap-southeast-1


Deploying Lambda to AWS through calling serverless deploy command
When the deployment is successful, we should get the URL endpoint of our deployed service and there should be no error message. If we don’t see the URL endpoint, there may be a typo inside serverless.yml (check the events section; if you typed it as event, this issue occurs).

This is the result you should get when you browse the endpoint or invoke it using Postman

Invoke the Lambda in Postman

How the Serverless deploys our Lambda function to AWS

When we invoked the serverless deploy command, serverless zipped our function file(s) and also created a file configuring the AWS CloudFormation stack (a CloudFormation template). Serverless also created a new AWS S3 bucket using our AWS API Key & Secret Key, then uploaded the zip file & CloudFormation template file into the created AWS S3 bucket.

Lambda files uploaded by Serverless in AWS S3 Bucket
Once the file upload is done, Serverless manages the creation of our Lambda function & its API Gateway endpoint through the AWS CloudFormation service. This is done based on the uploaded CloudFormation template file. Once the deployment finishes, you can see the created AWS resources that make up your deployed service by browsing the Lambda, CloudFormation and API Gateway pages in your AWS web console.

Removing Deployed Service

In case you need to destroy your deployed lambda service, the common way to do this is by destroying the resources that make up your service through the AWS Web console, with these procedures: open the S3 page and destroy the bucket that backs the service, destroy the CloudFormation stack, then destroy the Lambda & the related API Gateway. Serverless provides a much quicker way to destroy our service along with its AWS resources: invoking the serverless remove command inside the serverless project folder.

Removing Lambda function and its claimed AWS Resource


Serverless has simplified the effort of writing an AWS Lambda function, deploying it & hosting the function as an API Gateway resource endpoint. By hosting our node.js-based API on AWS Lambda, we do not need to set up an EC2 instance or another kind of virtual private server just to host our code. By using Serverless & AWS Lambda, we shift this responsibility to AWS, freeing us from the responsibility of setting up our own server. In a future article, we will cover the steps to integrate our Lambda service with AWS data storage services such as SimpleDB or DynamoDB.

Host an Ionic app into AWS S3 Bucket using Serverless framework

When I was working on a mobile app front end development project, our customer asked us to deploy the Ionic app to a public server (as a web SPA), so that they could access & play around with it immediately in their browsers. The Ionic framework comes with a built-in web server which allows you to run the app as a web application (by invoking the `ionic serve` command), so that part was not a problem for us. However, we still had to figure out where & how we were going to deploy the Ionic app. At that time, we suggested that our customer deploy the Ionic app to our AWS EC2 Instance/Elastic Beanstalk or to their own on-premise server.
Fast forward to today: we have more options, which allow us to deploy an Ionic app (and other web SPAs such as Angular JS or React) into an AWS S3 bucket and host the app there. This is possible since an AWS S3 bucket can host a static web page. There are several tools to help us do this: the AWS S3 Web Console, the AWS CLI tool or the Serverless framework with the S3 client plugin.
At the time of writing this article, I am working with a backend team which uses the Serverless framework in our project. Based on this, I became curious about this tool’s ability to deploy a front end web Single Page Application (SPA), like Ionic, into an S3 bucket. The next sections cover the steps to do this the Serverless framework’s way.



  • Ensure that you have AWS account.
  • Ensure that you have installed Node.js in your machine. Refer to this link if you have not installed it yet.
  • Ensure that you have installed the Ionic framework, the Android SDK and/or XCode on your machine. Refer to this link if you have not installed Ionic yet.
  • Ensure that you have installed stable version of Serverless framework (version 0.5.6). If not, run this command to install it: npm install -g serverless@^0.5.6
  • Login into your AWS Management Console, create a new IAM account and attach AdministratorAccess policy into the created IAM Account.


  • Ensure that you have written down the created IAM account’s Access Key ID & Secret Access Key. We are going to use these in later steps.


Prepare a Serverless project

  • In a terminal box, run the sls project create command. Notice that running this command creates a new CloudFormation template in your AWS account for the specified region (in this sample, we choose the ap-northeast-1 AWS region) and also creates a new folder in your current directory.

  • Change directory to the folder created in the prior step, then run this command: npm install serverless-client-s3 --save to install the serverless-client-s3 plugin
  • Create folder client/dist. In mac/linux command shell, we can do this by running this command mkdir -p client/dist.
  • Copy all files inside your Ionic project’s /www folder into the client/dist folder created in the prior step.

  • In the serverless project’s root directory, edit & update the s-project.json file by adding the following changes:
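The changes might look like the fragment of s-project.json below. I am going from the serverless-client-s3 plugin’s documented settings here, and the project name and bucketName values are placeholders, so double-check both against the plugin’s README:

```json
{
  "name": "myIonicApp",
  "plugins": [
    "serverless-client-s3"
  ],
  "custom": {
    "client": {
      "bucketName": "my-ionic-app-bucket"
    }
  }
}
```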


Deploy the Ionic App into AWS S3 Bucket

  • In the serverless project’s root directory, run sls client deploy -s stageName -r regionName command to deploy the Ionic app into AWS S3 bucket for specific stage & region. Example: sls client deploy -s dev -r ap-northeast-1 to deploy the app into Tokyo data center and labeled as dev stage. This might take a while to finish.

  • Once it’s done, check your AWS S3 console. There is a new bucket in the list whose name matches the bucketName setting we defined in our s-project.json file. View its content by clicking the folder’s link. Notice that the bucket contains all of the files of our Ionic app that we copied into the client/dist folder in the prior step.

  • Click the Properties button on the page, and click the ‘Enable Web Hosting’ accordion button to display the URL of the deployed web application.
  • Click the url link to display our Ionic app. Ensure that no error happens and our Ionic app’s landing page is displayed successfully.


The Flaw

The latest version of the serverless-client-s3 plugin (version 2.0) deploys the web app to the s3 bucket with no access restriction policy, which means it is accessible to anyone / the public. While this is not a problem for us (the developers), for our customers this is mostly undesired, because they might not want people outside the developer team and themselves to look at our current work on the deployed Ionic app.

At the moment, restricting access to the S3 bucket requires us to add an access policy to it manually through the S3 web console. A couple of contributors have created pull requests on the serverless-client-s3 plugin’s github page that allow us to define the S3 bucket policy inside the s-project.json file. Once the serverless maintainers review & merge these pull requests, I will update this article with steps to add the s3 bucket’s access restriction policy.


Deploying Sails.js Web Application on AWS EC2 Instance

Sails.js is a web MVC framework, built on top of the Node.js platform, that is interesting to me. Besides leveraging the MVC pattern and offering blazing fast runtime performance (thanks to Node.js), it also comes with a built-in HTTP server. When I want to run my web application, I do not need to compile, package & deploy my code into a separate web server. Instead, it just requires me to invoke a single command inside my app’s directory to get my web application up & running:

 sails lift 

Also, it just needs me to press CTRL+C in the terminal to stop the running web application. A similar feature exists in the Play framework.

In the meantime, I was thinking about what would happen if I deployed & ran a sails web application in a cloud environment, let’s say Amazon Web Services. So I made a first attempt by deploying a simple sails web app on AWS through Elastic Beanstalk (EB). The result: my web app’s homepage did not show up in my browser. Instead, it displayed the EB app’s default homepage.

Then, I took another route. I created an EC2 instance (a virtual private server) using an AMI (Amazon Machine Image) with a pre-installed Ubuntu 14.04 OS. I installed the required software on the created EC2 instance and then lifted my sails web app on it. Checking it in my browser confirmed that my sails web app was up & running.

In this article, I would like to share with you the steps that I took to get my sails app running on an Ubuntu AWS EC2 instance. Here they are:

Create a new AWS Instance using Ubuntu 64 AMI

  1. Browse to the AWS console & click the EC2 link.aws_console
  2. On the EC2 Dashboard, click IMAGES->AMIs menu link.
  3. On the Filter menu bar, modify the options to Public images | 64-bit images | Ubuntu.
  4. On the returned filtered result, tick a desired AMI (e.g. ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server).ubuntu_ami
  5. Click [Launch] button.
  6. On the “Step 2: Choose an Instance Type” page, tick a desired EC2 Instance Type (e.g. Micro Instances) then click [Next: Configure Instance Details] button.choose_instance_type
  7. On the “Step 3: Configure Instance Details” page, leave the default settings and click [Next: Add Storage] button.
  8. On the “Step 4: Add Storage” page, adjust the Root’s storage size or just leave the default settings then click [Next: Tag Instance] button.add_storage
  9. On the “Step 5: Tag Instance” page, leave the default settings and click [Next: Configure Security Group] button.
  10. On the “Step 6: Configure Security Group” page:
    • Select “Create a new security group” option.
    • Enter name & description for the new security group.
    • Click [Add Rule] button & on the new entry, enter: Type = Custom TCP Rule, Protocol = TCP, Port Range = 1337, Source = Anywhere.
    • Click [Review and Launch] button.configure_security_group
  11. On the “Step 7: Review Instance Launch”, click [Launch] button.
  12. On the displayed “Select an existing key pair or create a new key pair” dialog, select “Create a new key pair” in the combo field, enter a key pair name, then click the [Download Key Pair] & [Launch Instances] buttons.
  13. Save the downloaded .pem file into somewhere within your home directory & restrict its access by running this command in terminal:
    chmod 400 yourdownloadedkeypair.pem
  14. Go back to the browser and click the [View Instance] button. Notice that the browser redirects to the Instances dashboard page and the new AMI instance is shown in the Instances list. Give the new instance a name if you like (by clicking the new instance’s empty Name cell and typing the name in it). Make notes of the new instance’s Public IP or Public DNS fields.instances_dashboard

Connecting to the created AMI Instance using SSH

  1. On the Instances Dashboard page, click the [Connect] button. A dialog appears, showing 2 options for connecting to the instance. Select “A standalone SSH client”, then select and copy the command written under the ‘Example’ section.connect_to_instance
  2. Open the command line Terminal box, move to the directory that has the downloaded .pem file and run the command written in the earlier instructions dialog.
    ssh -i downloaded_keypair.pem ubuntu@new_instance_public_ip_or_dns


  3. Confirm that you have logged in successfully.ssh_connected
  4. Set the root’s password by running these commands:
    sudo su
    passwd


Setup required software on the created AMI Instance (as root)

  1. In the SSH terminal connected to the new instance, run the following commands to refresh the package lists and upgrade the instance’s installed packages:
    apt-get update && apt-get upgrade && apt-get dist-upgrade && apt-get autoclean


  2. Run this command to set up the build tools & git client:
    apt-get install build-essential git



  3. Create a new directory & pull the latest Node.js source code into it by running this command (using the git client):
    git clone


  4. Change directory into the cloned Node.js source directory, then run these commands to compile & install Node.js:
    ./configure && make && make install
  5. Confirm that the compilation process finished successfully by checking the installed Node.js version.
  6. Install the sails.js web MVC framework by running this command:
    npm -g install sails
  7. Confirm that the installation finished successfully by checking the installed sails version.
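The version checks in steps 5 and 7 can be sketched as one guarded loop over the whole toolchain set up above, so it degrades gracefully on machines where some tools are absent:

```shell
# Print the version of each expected tool, or note that it is missing.
for tool in git node npm sails; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: %s\n' "$tool" "$("$tool" --version 2>&1 | head -n 1)"
  else
    printf '%s: not installed\n' "$tool"
  fi
done
```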

Deploy a sails.js app into the created Instance

  1. Ensure that you have pushed the sails.js app’s source code to your git hosting account (e.g. GitHub, Bitbucket, etc.).
  2. On the created instance, create a new directory and git clone the sails.js app’s source code into it.
  3. Change directory into the cloned source code’s directory and run this command to install the node module dependencies referenced by your sails.js app:
    npm install


  4. Run this command to lift the sails.js app online on the created instance:
    sails lift
  5. Confirm that the sails.js app lifted successfully.
  6. Go back to your internet browser and browse to your instance’s public IP address on port 1337.
  7. Confirm that the lifted sails.js app’s home page is displayed.

It’s alive now! But, wait..

When I closed the SSH session connected to the EC2 instance & refreshed the sails app’s page in my browser, I noticed it returned a 404 error. My sails web app was offline. Apparently, every process started during an SSH session on the EC2 instance is shut down when the SSH connection closes. Somehow, I needed to prevent the running sails app from being stopped when the SSH session ends.

Fortunately, a solution is already suggested in the sails.js documentation: install and start a sails.js app using forever. Forever keeps running scripts from being stopped at the end of an SSH session by running them as a daemon (a *nix service). I tried the solution and it worked well. I’ll explain the steps for running my sails app under forever on EC2 in the next section.
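The behaviour described above can be seen even without forever: a process started with nohup and put in the background is detached from the terminal and survives the end of the login session, which is essentially what forever automates (plus restarts and logging) for Node apps. A minimal sketch:

```shell
# Start a long-running placeholder process, immune to the terminal hangup signal.
nohup sleep 30 >/dev/null 2>&1 &
pid=$!
# The process keeps running even though it is detached from this shell's terminal.
ps -o pid= -p "$pid" && echo "still running: $pid"
# Clean up the placeholder process.
kill "$pid"
```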

Run the deployed app as a daemon on the EC2 Instance

  1. Install forever globally:
    npm -g install forever
  2. In the terminal connected to the EC2 instance, change directory into the sails.js app’s root folder, then run this command:
    forever start -ae errors.log app.js --dev --port 1337

    OR, run this command if you wish to run the production version:

    forever start -ae errors.log app.js --prod --port 80
  3. If you write your controllers as CoffeeScript files, open the errors.log file. Notice that there is an error message written in it (this means the sails.js app failed to lift with the prior command). This is a known issue in sails.js version 0.9.16; it has been raised with Balderdash and can be seen in this link, along with a temporary workaround.
  4. Log out or disconnect from the EC2 instance’s SSH session, then browse to your lifted sails.js app’s URL. Confirm that your lifted app is still up & running.
The previous section marks the end of this article. I hope this how-to guide helps you deploy your sails.js web app on your AWS account. Happy sailing in the AWS cloud.

Creating a Phonegap-Android Application Development Project on Intellij IDEA 12

Creating an Android mobile application can be tedious when you need to build rich UI elements. This becomes a real problem if your UI designers have good HTML & CSS skillsets but little to no knowledge of Android XML layouts. Another problem arises when there is a requirement to ship your mobile app to other mobile platforms such as iPhone, WinRT, or BlackBerry besides Android. You would need to spend more time, resources & effort designing, developing, testing & shipping your app across multiple mobile platforms, which could hurt your budget.

There is a workaround for this. Thanks to the people involved in the Apache Cordova project, a library named “Phonegap” was born to the rescue. Phonegap is a Java library that enables the Android runtime to load & display HTML pages (along with their CSS styles) and to execute JavaScript files within an Android application. Thanks to this, UI designers are freed from working with tedious Android XML layouts and can use their existing HTML+CSS+JS skillsets to develop the app’s UI elements, much like pages in a web application. Since the app is mainly built on top of HTML5+CSS+JS, a mobile application built on Phonegap is also runnable on other mobile platforms, such as iPhone, with minimal to no modifications to the original code.

So what would it look like in a simple Hello World application? In this article I’ll show you the steps in my favourite Java IDE, Intellij IDEA 12.


  • You have set up the latest updates of JDK 6 or 7 on your machine. If you are an Ubuntu Linux user and have not done so yet, this article might be useful.
  • Ensure that you have set up the Android SDK properly on your machine.
  • If you use Intellij IDEA like me, ensure that you have configured the JDK & Android SDK settings in your IDEA.
  • Ensure that you have downloaded the latest Phonegap library (at the time of writing, the latest version is 2.7.0).

Enter the Steps:

    • Create a new Empty Project in the Intellij IDEA 12.
    • Add a new Android Application Module.
    • In the Android module structure, create a new folder under the ‘assets’ folder & name it ‘www’.
    • Copy Phonegap’s ‘cordova-x.x.x.js’ file into the ‘www’ folder.
    • Copy ‘cordova-x.x.x.jar’ into ‘libs’ folder.

    • Right click the ‘cordova-x.x.x.jar’ node & click the ‘Add as library …’ option.
    • On the Create Library dialog, ensure that Name = cordova-x.x.x, Level = Project Library, Add to Module = the Android application module, then click [Ok].

    • Copy Phonegap’s ‘xml’ folder, located in <your phonegap root folder>\lib\android, into the ‘res’ folder.
    • Add a new HTML5 file into the ‘www’ folder. This is the HTML file designated as the application’s main page.
    • Add a script tag in the head section referring to Phonegap’s JavaScript file:
      <!DOCTYPE html>
      <html>
      <head><title>Demo Phonegap</title>
      <script type="text/javascript" charset="utf-8" src="cordova-2.7.0.js"></script></head>
      <body><h2>Hello Android</h2></body>
      </html>
    • Initially, our HomeActivity class extends Android’s Activity class. We need to modify it so that HomeActivity can load & display our index.html page as the app’s main page. First, we change the class so that it inherits from Phonegap’s DroidGap class. Second, we get rid of the second line inside the onCreate method and replace it with a call to DroidGap’s loadUrl method. This is the code that does the magic:
      package Demo.Phonegap.Application;
      import android.os.Bundle;
      import org.apache.cordova.DroidGap;

      public class HomeActivity extends DroidGap {
          /** Called when the activity is first created. */
          @Override
          public void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              loadUrl("file:///android_asset/www/index.html");
          }
      }
    • Finally, we’ll modify the ‘AndroidManifest.xml’ file as in the following code snippet. Be advised, you may need to adjust the android:minSdkVersion value to match the Android version used by your AVD or physical Android device.
      <?xml version="1.0" encoding="utf-8"?>
      <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                package="Demo.Phonegap.Application">
       <uses-sdk android:minSdkVersion="16"/>
       <supports-screens android:largeScreens="true" android:normalScreens="true"
                         android:smallScreens="true" android:resizeable="true"
                         android:anyDensity="true" />
       <uses-permission android:name="android.permission.VIBRATE" />
       <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
       <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
       <uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
       <uses-permission android:name="android.permission.READ_PHONE_STATE" />
       <uses-permission android:name="android.permission.INTERNET" />
       <uses-permission android:name="android.permission.RECEIVE_SMS" />
       <uses-permission android:name="android.permission.RECORD_AUDIO" />
       <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
       <uses-permission android:name="android.permission.READ_CONTACTS" />
       <uses-permission android:name="android.permission.WRITE_CONTACTS" />
       <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
       <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
       <uses-permission android:name="android.permission.GET_ACCOUNTS" />
       <uses-permission android:name="android.permission.BROADCAST_STICKY" />
       <application android:label="@string/app_name" android:icon="@drawable/ic_launcher">
        <activity android:name="HomeActivity" android:label="@string/app_name">
         <intent-filter>
          <action android:name="android.intent.action.MAIN"/>
          <category android:name="android.intent.category.LAUNCHER"/>
         </intent-filter>
        </activity>
       </application>
      </manifest>
    • Fire up your Android Virtual Device or connect your Android device to your development machine, then build & run the project. If everything is alright, you will see the ‘Hello Android’ page.
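The copy steps above boil down to a small directory layout inside the Android module. A sketch of the expected structure (folder and file names follow the steps above, but the module name is a placeholder):

```shell
# Recreate the layout the steps above produce inside the Android module (placeholder module name).
mkdir -p DemoPhonegap/assets/www DemoPhonegap/libs DemoPhonegap/res/xml
touch DemoPhonegap/assets/www/cordova-2.7.0.js DemoPhonegap/libs/cordova-2.7.0.jar
# List the files to confirm the layout.
find DemoPhonegap -type f | sort
```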

Where do we go next from here?

Of course, you can tell your web designers to get back to working on the rich UI elements you want in your super-awesome mobile app, immediately 😀 However, there is a drawback for you as a Java developer if you go this way. You will need a lot of JS when coding the UI logic. Your Java skills shift from developing the UI logic (activities, events) to your app’s backend development (this is true if you go n-tier with your mobile app and still want to use Java for the backend services).
Another situation to consider is when you have no UI designers on your team due to, say, a limited budget. This should not stop you from using Phonegap. Third-party vendors like Telerik and the community have created wonderful UI frameworks that sit nicely with Phonegap, such as Telerik’s Kendo UI or jQuery Mobile. In upcoming articles, I’ll show how to integrate these UI frameworks with Android & Phonegap using a slightly more advanced sample than a simple ‘Hello World’ app, or extend them further by integrating a cool JS framework like Durandal.js 😀 So stay tuned, my friends 🙂

Setup Latest Oracle JDK on Ubuntu Linux

Most people find installing the Oracle JDK & JRE on a Windows machine as easy as eating peanuts (well, they might still need to set the JAVA_HOME variable & update their PATH variable). It can turn out to be a problem when, for some reason, they need to move from Windows to a Linux development environment such as Ubuntu. In this HOW TO article, I’ll cover the steps for setting up the JDK on Ubuntu Linux. I hope it is useful for anyone still having trouble with it.

  • Open a terminal window (press CTRL+ALT+T) and run these commands in order (we’ll assume we are installing JDK version 7):
    1. sudo add-apt-repository ppa:webupd8team/java
    2. sudo apt-get update
    3. sudo apt-get install oracle-java7-installer
  • If you wish to remove the JDK from your machine, run this command in the terminal:
    sudo apt-get remove oracle-java7-installer
  • Should you need to upgrade your current JDK to the latest version, try running this command and see if it works:
    sudo update-java-alternatives -s java-7-oracle
  • If the prior command does not work, you can, as a last resort, remove your current JDK and re-install the latest one using the prior steps.
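After any of the steps above, you can confirm which JDK the system resolves. A guarded sketch (it degrades gracefully if java is not on the PATH):

```shell
# Show the version of the java binary currently first on the PATH.
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "java not found on PATH"
fi
```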

Those are the steps I always follow when setting up the JDK on my Ubuntu machine. Up to the time I wrote this post, I have found no problems using it with Intellij IDEA 12.1, working on either Android application development or Play 2 based web application development.

HOW TO – Enable Play 2.0 Support in Intellij IDEA 12

The new version of Intellij IDEA has been released by Jetbrains. At the time I write this article, the version of IDEA I use is 12.0.1. As a Play 2.1 developer, I was excited when Jetbrains announced that the latest IDEA would support Play 2.0. However, the excitement waned into a bit of disappointment when I found the feature came ‘half-baked’. Extra effort is needed to get the feature working in the new IDEA. I’m not sure whether most Play 2.0 developers know about this issue and were able to resolve it after taking some time to figure it out. But in case you are a Player, an owner of an Intellij IDEA 12.x copy, and still struggling to get the feature working properly, I hope this ‘HOW TO’ helps you resolve the issue. Please enjoy, Players.

General steps to install Play 2.0 support in IDEA 12:
1. Install & Enable Scala plugin.
2. Install & Enable Play 2.0 support plugins.
3. Restart the IDEA so the new plugins take effect.
4. Create Play 2 app configuration.
5. Make & run the Play 2 project using the created Play 2 app configuration.

HOW TO – Install the Scala plugin & Play 2.0 support plugins:

1. Click [File] then [Settings] menu.
2. On the “Settings” dialog, select ‘Plugins’ under the ‘– IDE Settings –’ section.
3. Click [Browse Repositories…] at the bottom of the ‘Plugins’ right-side panel.


4. On the Browse Repositories dialog, type ‘scala’ in the small search text box at the top-right corner of the dialog.
5. The result grid should display an item named ‘Scala Custom Languages’; double click this item to install the plugin. The plugin is 23 MB, so it may take a while for IDEA to download & install it.



6. To install the Play 2.0 plugins, type ‘Play 2.0 support’ in the search text box and double click the result.


HOW TO – Enable Play framework support in the Settings menu:

1. Click [File] then [Settings] menu.
2. Select ‘Play Configuration’ under the ‘– Project Settings [your project’s name] –’ section.
3. In the Play Configuration panel, enter the root folder of your Play 2.x binaries (e.g. D:\Java\Play-2.1rc), enter the working directory of your Play 2 project (e.g. E:\PROJECTS\JAVA\Demo.Play.SocMed), and tick the ‘Show on console run’ option.
4. Click [Apply] and [Ok].


5. Restart your Intellij IDEA & then re-open your Play project in the restarted IDEA.

HOW TO – Create Play 2 App Run/Debug configuration:
1. Click [Run] then [Edit configurations…] menu.
2. On the Run/Debug Configurations dialog, click the green ‘+’ icon at the top-left corner of the dialog.

3. Click the ‘Play 2 app’ option in the drop-down menu. Confirm that a new entry ‘Unnamed’ is created under the ‘Play 2 App’ section in the left panel.
4. Rename the Play 2 app config, then click [Apply] and [Ok].


5. Press [CTRL+F5] to run the Play project or [ALT+F5] to debug it.




CUDA Programming Tutorial – Initialization

In this article, I will cover one phase of parallel programming with the CUDA API: CUDA device initialization. An application generally runs this phase early on, before executing any CUDA kernels (e.g. in the first lines of the main method’s implementation). The goal is to detect whether a CUDA-capable device is present (an Nvidia GeForce 8xxx-series or later VGA/GPU) and to select the CUDA device that the CUDA API will use. The CUDA API comes in 2 flavours: the Driver API & CUDA C. Given the complexity (low level) of the Driver API & my motivation to leverage my C knowledge, in this tutorial (& the coming ones) I chose CUDA C as the API for the tutorial code.


  • Basic C programming knowledge (forward declarations, methods, pointers, passing method parameters by reference, header & source files, the #include directive, etc.)
  • Visual Studio 2008 SP1 (and/or Visual Studio 2010) with Visual C++ installed.
  • CUDA Toolkit version 3.2 or higher. An installation tutorial can be found in this article.

Let’s get started

    • The InitializeCudaDevice method

We will define this code as a global C method named InitializeCudaDevice. Here is the method’s declaration:

// Forward declarations
bool InitializeCudaDevice( int* numberDevices );

Write the code above in a new header file (.h) with any name (e.g. CudaUtility.h). This method returns the boolean value true if CUDA device initialization succeeds, selecting the first device as the device to be used by the CUDA API. The pass-by-reference parameter (integer pointer) numberDevices is filled with the number of detected CUDA devices if initialization succeeds (e.g. *numberDevices = 2 if 2 installed GeForce GPUs are detected).

Here is the pseudocode of this method’s implementation:

1. Get the number of CUDA devices; if no CUDA device is installed or the call fails, return false.

2. Set the first CUDA device to be used by the CUDA kernels; if this fails, return false.

3. Return true; at this point we can state that CUDA device initialization has succeeded.

First, add a new C++ source file (.cpp) with any name (e.g. CudaUtility.cpp). Write the declarations to include our header file above and the CUDA runtime API (<cuda_runtime_api.h>), along with the skeleton of the InitializeCudaDevice method:

#include <cuda_runtime_api.h>
#include "CudaUtility.h"

/// Initialize CUDA
/// Returns TRUE if initialization is successful, otherwise FALSE.
bool InitializeCudaDevice( int* numberDevices ){
	// Get the number of CUDA-supported devices

	// Set device 0 to be used for the current GPU's execution

	// Initialization succeeded.
	return true;
}

Inside this method, write code to get the number of installed CUDA devices by calling the CUDA API method cudaError_t cudaGetDeviceCount(int* numberDevices). This method returns the enumeration value cudaSuccess on success, and its pass-by-reference parameter numberDevices is filled with the number of recognized CUDA devices. Here, we check whether the cudaGetDeviceCount call succeeded and whether the number of recognized CUDA devices is greater than 0. If either condition is not met, we declare initialization failed.

	// Get the number of CUDA-supported devices
	if ( ( cudaGetDeviceCount(numberDevices) != cudaSuccess ) || ( *numberDevices < 1 ) ){
		// If there is no CUDA-supported device present, return FALSE from here.
		return false;
	}

The next step is to call the CUDA API (cudaError_t cudaSetDevice( int deviceId )) to select the first CUDA device (device 0) as the device to be used by the CUDA API and the CUDA kernels in our application later on. If this call returns anything other than cudaSuccess, we declare initialization failed.

	// Set device 0 to be used for the current GPU's execution
	if ( cudaSetDevice(0) != cudaSuccess ){
		// Unable to use the 1st CUDA device
		return false;
	}

Here is the complete implementation of the CUDA device initialization method:

#include <cuda_runtime_api.h>
#include "CudaUtility.h"

/// Initialize CUDA
/// Returns TRUE if initialization is successful, otherwise FALSE.
bool InitializeCudaDevice( int* numberDevices ){
	// Get the number of CUDA-supported devices
	if ( ( cudaGetDeviceCount(numberDevices) != cudaSuccess ) || ( *numberDevices < 1 ) ){
		// If there is no CUDA-supported device present, return FALSE from here.
		return false;
	}

	// Set device 0 to be used for the current GPU's execution
	if ( cudaSetDevice(0) != cudaSuccess ){
		// Unable to use the 1st CUDA device
		return false;
	}

	// Initialization succeeded.
	return true;
}
    • Usage example – a .NET command line application

Create a new project in Visual Studio .NET, choosing Visual C++ -> CLR -> Console Application in the new project dialog. Set the CUDA build customization, runtime library and other settings for this project (covered in this article). Create new .h & .cpp files and write the implementation above into this project. Here is sample usage code in the main.cpp file:

#include "stdafx.h"
#include "CudaUtility.h"

using namespace System;
using namespace System::Text;

int main(array<String ^> ^args)
{
	// Initialize CUDA
	int numberOfDevices = 0;
	StringBuilder^ stringBuilder = gcnew StringBuilder();
	if ( InitializeCudaDevice(&numberOfDevices) ){
		// If successful, display the number of initialized devices and a success message
		stringBuilder->Append( L"CUDA Devices initialization is SUCCESS\n" );
		stringBuilder->Append( String::Format( L"Number of detected devices : {0}", numberOfDevices ) );
	} else {
		// otherwise display a failed-initialization message
		stringBuilder->Append( L"Initialization is FAILED" );
	}
	stringBuilder->Append( L"\n\nPress enter to exit..." );
	Console::Write( stringBuilder->ToString() );
	Console::ReadLine();
	return 0;
}

The complete code covered in this article can be obtained from the following Subversion URL: Good luck and happy experimenting 🙂

Tutorial – Installing the Nvidia CUDA Toolkit for Visual Studio 2010

In this tutorial article, I will cover the steps for installing CUDA Toolkit version 3.2 for Visual Studio 2010. Readers should know that CUDA Toolkit 3.2 is built with Visual C++ runtime version 9.0 (shipped with VS .NET 2008). By default, the toolkit’s C++ compiler (nvcc) cannot be used when we try to build our project in Visual C++ 2010 (which uses C++ runtime version 10.0). This obstacle can be overcome with the installation method covered below.


Download the following installation files:

Download the installer that matches your machine’s OS & GPU (32-bit/64-bit OS; GeForce/Quadro GPU). For Parallel Nsight, there are 2 installers to download: the Parallel Nsight Host and the Parallel Nsight Monitor.

Nsight download link

Likewise for the CUDA Toolkit: download the installer that matches your machine’s OS (32-bit/64-bit).

CUDA Toolkit download page


Before installing the downloaded files, make sure you have installed Visual Studio 2008 SP1 on your machine. Install in this order: Nsight Host -> Nsight Monitor -> CUDA Toolkit 3.2. Also close any running Visual Studio instances before the installation starts. In the installation dialogs, simply press the Next button; fill in your name & email in the registration window and choose the Typical or Complete installation type.

Installation – user registration window

Registering the .cu (CUDA kernel) file extension in VS .NET

On the VS .NET menu bar, click Tools -> Options to open the Options window. In this window, expand the Text Editor treeview node and select File Extension. In the textbox labeled Extension, type .cu. In the Editor combo box, select the Microsoft Visual C++ entry. Click the Apply button, then Ok.

Registering the CUDA (.cu) file extension in VS .NET

Configuring & compiling a CUDA project

  • Create a new empty solution. Add a new project to it, choosing the CLR -> Class Library project type to add a .NET DLL project to the solution. I chose a .NET DLL project for this tutorial because I wanted to test whether this CUDA solution can be integrated with a .NET application.
    Creating the .NET DLL project inside the solution
  • Enabling CUDA’s build customization – On the VS .NET menu bar, click Project -> Build Customizations… to bring up the Visual C++ Build Customization Files window. Tick the CUDA 3.2 (.targets, .props) option, then click Ok. This entry only appears if the Nsight & CUDA Tools installation succeeded following the previous steps.
    Enabling the CUDA build customization for the selected project
  • Open the .NET C++ project’s property pages, select Configuration Properties -> General, and in the right panel change the Platform Toolset entry from v100 to v90. Click the Apply button.
    Changing the Platform Toolset
  • Still in the Property Pages window, select Configuration Properties -> Linker -> Input, and in the right panel add cudart.lib; to the Additional Dependencies entry. Click Ok on this window.
    Adding a reference to the CUDA runtime library
  • Change the project’s target .NET Framework to version 3.5 – First unload the project by right-clicking its node and clicking Unload Project in the pop-up menu. Right-click the unloaded project node again and choose Edit .vcxproj in the pop-up menu to edit the project file manually. Change the value inside the TargetFrameworkVersion XML tag from 4.0 to 3.5. Save the file, then reload the project.
    Changing the .NET Framework version of the .NET DLL project
  • Add a new file with the .cu extension to this .NET DLL project, with any name. Leave the .cu file empty for now. Save all changes, then build the project. Below is the build output panel’s content when compilation succeeds. Good luck 🙂

    1>------ Rebuild All started: Project: Excercise.Cuda.ManagedKernel, Configuration: Debug Win32 ------
    1> E:\PROJECTS\Excercise.Cuda\Excercise.Cuda.ManagedKernel>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v3.2\\bin\nvcc.exe" -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v3.2\\include" -G0 --keep-dir "Debug\\" -maxrregcount=32 --machine 32 --compile -D_NEXUS_DEBUG -g -Xcompiler "/EHsc /nologo /Od /Zi /MDd " -o "Debug\MathKernel.obj" "E:\PROJECTS\Excercise.Cuda\Excercise.Cuda.ManagedKernel\Kernels\" -clean
    1> Compiling CUDA source file Kernels\
    1> E:\PROJECTS\Excercise.Cuda\Excercise.Cuda.ManagedKernel>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v3.2\\bin\nvcc.exe" -gencode=arch=compute_10,code=\"sm_10,compute_10\" --use-local-env --cl-version 2008 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v3.2\\include" -G0 --keep-dir "Debug\\" -maxrregcount=32 --machine 32 --compile -D_NEXUS_DEBUG -g -Xcompiler "/EHsc /nologo /Od /Zi /MDd " -o "Debug\MathKernel.obj" "E:\PROJECTS\Excercise.Cuda\Excercise.Cuda.ManagedKernel\Kernels\"
    1> tmpxft_000010ac_00000000-11_MathKernel.ii
    1> Stdafx.cpp
    1> AssemblyInfo.cpp
    1> Generating Code...
    1> Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1
    1> Copyright (C) Microsoft Corporation. All rights reserved.
    1> Excercise.Cuda.ManagedKernel.vcxproj -> E:\PROJECTS\Excercise.Cuda\Debug\Excercise.Cuda.ManagedKernel.dll
    ========== Rebuild All: 1 succeeded, 0 failed, 0 skipped ==========