An Introduction to Rails API


API stands for Application Programming Interface. It allows one application to interact with any number of other applications, written in the same or a different language, to access their data or functionality.
Building an API application makes a web application more scalable and also makes it easier to integrate with cross-domain applications and clients, such as:
• iOS apps
• Android apps
• Node.js applications
• AngularJS applications

There are two ways to achieve this in Rails.

1. We can easily create a new API application using the rails-api gem, which makes controllers inherit from ActionController::API instead of ActionController::Base and skips view generation. It also configures a slimmer middleware stack.

2. If the application is already created, we have to make the base controller inherit from ActionController::API manually, as sketched below.
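A minimal sketch of that change (for a new application, the rails-api generator produces an equivalent base controller automatically):

# app/controllers/application_controller.rb
# API-only base class: no view rendering, cookies or flash by default.
class ApplicationController < ActionController::API
  # Re-include only the modules the API actually needs, for example:
  # include ActionController::Cookies
end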

Basic Flow

(Figure: basic request/response flow of a Rails API application)

Versioning APIs
Once the application is set up, we can create the controllers under an app/controllers/v1 folder, which helps with easy maintenance of existing versions and with releasing new versions of the API.
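A minimal routing sketch for such a versioned API (the articles resource is only an illustrative placeholder):

# config/routes.rb
Rails.application.routes.draw do
  # Every v1 endpoint lives under /v1/... and maps to a controller in app/controllers/v1/.
  namespace :v1, defaults: { format: :json } do
    resources :articles, only: [:index, :show, :create, :update, :destroy]
  end

  # A later release can add a v2 namespace here without touching v1.
end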

In these controllers we can write CRUD actions or other functionality that can be called with curl or as API requests from a front-end application. GET, POST, PATCH and DELETE requests return responses in JSON or XML format, which is human readable; the front-end application can then parse and display this JSON data.
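A sketch of such a controller, assuming a hypothetical Article model (the model, its fields and the strong-parameter names are placeholders):

# app/controllers/v1/articles_controller.rb
module V1
  class ArticlesController < ApplicationController
    # GET /v1/articles
    def index
      render json: Article.all
    end

    # GET /v1/articles/:id
    def show
      render json: Article.find(params[:id])
    end

    # POST /v1/articles
    def create
      article = Article.new(article_params)
      if article.save
        render json: article, status: :created
      else
        render json: { errors: article.errors }, status: :unprocessable_entity
      end
    end

    # PATCH /v1/articles/:id
    def update
      article = Article.find(params[:id])
      if article.update(article_params)
        render json: article
      else
        render json: { errors: article.errors }, status: :unprocessable_entity
      end
    end

    # DELETE /v1/articles/:id
    def destroy
      Article.find(params[:id]).destroy
      head :no_content
    end

    private

    def article_params
      params.require(:article).permit(:title, :body)
    end
  end
end

From the command line, a request such as curl -H "Content-Type: application/json" -d '{"article":{"title":"Hello"}}' http://localhost:3000/v1/articles would exercise the create action and receive a JSON response.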

Security
We can secure the API by passing a token, generated for each user, along with the user's email through the API request headers. This ensures that only authenticated users can access and modify data through the API.

Using these headers we can authenticate the user and secure the application. Depending on whether the data sent matches a record in the application, we send the appropriate response back to the front-end application.
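One way this can be wired up is with a before_action in a shared base controller; this is a sketch that assumes a User model with email and auth_token columns, and the header names are illustrative:

# app/controllers/v1/base_controller.rb
module V1
  class BaseController < ApplicationController
    before_action :authenticate_user!

    private

    # Expects the client to send X-User-Email and X-Auth-Token headers
    # with every request; header names and the lookup are illustrative.
    def authenticate_user!
      email = request.headers['X-User-Email']
      token = request.headers['X-Auth-Token']
      @current_user = User.find_by(email: email, auth_token: token)
      render json: { error: 'Not authorized' }, status: :unauthorized unless @current_user
    end
  end
end

Versioned controllers (such as the articles controller above) would then inherit from V1::BaseController so that every endpoint is protected by default.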

These are a few of the basic aspects that can be implemented using Rails to create a robust API architecture.


Components of Hadoop


In our previous blog we learned that Hadoop is the platform that processes and organizes Big Data. Here we will learn more about Hadoop, a core platform for structuring Big Data that solves the problem of utilizing it for analytic purposes. It is an open-source software framework for distributed storage and distributed processing of Big Data on clusters of commodity hardware.

Main characteristics of Hadoop:

  • Highly scalable (scaled out)
  • Commodity hardware based
  • Open Source, low acquisition and storage costs

Hadoop is basically divided into two parts: HDFS (the Hadoop Distributed File System) and the MapReduce framework. A Hadoop cluster is specially designed for storing and analyzing huge amounts of unstructured data; the workload is distributed across multiple cluster nodes that process the data in parallel.
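To make the parallel-processing idea concrete, here is a minimal word-count sketch written as Hadoop Streaming scripts in Ruby (the file names, input/output paths and streaming-jar location are illustrative). Hadoop runs copies of the same scripts on many nodes, each reading the HDFS blocks stored locally:

#!/usr/bin/env ruby
# mapper.rb - emits "word<TAB>1" for every word read from standard input.
STDIN.each_line do |line|
  line.split.each { |word| puts "#{word.downcase}\t1" }
end

#!/usr/bin/env ruby
# reducer.rb - sums the counts per word (Hadoop delivers the mapper output sorted by key).
counts = Hash.new(0)
STDIN.each_line do |line|
  word, count = line.chomp.split("\t")
  counts[word] += count.to_i
end
counts.each { |word, total| puts "#{word}\t#{total}" }

Submitted with the Hadoop Streaming jar (roughly: hadoop jar hadoop-streaming.jar -input /data/in -output /data/out -mapper mapper.rb -reducer reducer.rb -file mapper.rb -file reducer.rb), HDFS splits the input across the cluster, each node runs the mapper on its local data, and the reducers aggregate the per-word totals.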


History of Hadoop

Doug Cutting is the brains behind Hadoop, which has its origins in Apache Nutch, an open-source web search engine started in 2002. Google then published the paper that introduced MapReduce to the world, and by early 2005 the Nutch developers had a working MapReduce implementation in Nutch.

In February 2006, Hadoop was spun out of Nutch as an independent project. In January 2008, Hadoop became a top-level project at Apache, and by this time major companies like Yahoo and Facebook had started using Hadoop.

HDFS is the storage aspect of Hadoop and MapReduce is the processing aspect. HDFS has an architecture that helps it organize the data and make it available for processing.

To get into the details of HDFS, its architecture, its functioning and several other concepts, keep an eye on the blogs that will be published in the coming days.

Source: RailsCarma
