Sunday, June 28, 2015

The Difference Between URLs and URIs

There are many classic tech debates, and the question of what to formally call web addresses is one of the most nuanced. The way this normally manifests is someone asks for the “URL” to put into his or her browser, and someone perks up with,
Actually, that’s called a URI, not a URL…
The response to this correction can range from quietly thinking this person needs to get out more, to agreeing indifferently via shoulder shrug, to removing the safety clasp on a Katana. This page hopes to serve as a simple, one page summary for navigating the subtleties of this debate.

URI, URL, URN

As the image above indicates, there are three distinct components at play here. It's usually best to go to the source when discussing matters like these, so here's an excerpt from Tim Berners-Lee et al. in RFC 3986: Uniform Resource Identifier (URI): Generic Syntax:
A Uniform Resource Identifier (URI) is a compact sequence of characters that identifies an abstract or physical resource.
A URI can be further classified as a locator, a name, or both. The term “Uniform Resource Locator” (URL) refers to the subset of URIs that, in addition to identifying a resource, provide a means of locating the resource by describing its primary access mechanism (e.g., its network “location”).
Wikipedia captures this well with the following simplification:
One can classify URIs as locators (URLs), or as names (URNs), or as both. A Uniform Resource Name (URN) functions like a person’s name, while a Uniform Resource Locator (URL) resembles that person’s street address. In other words: the URN defines an item’s identity, while the URL provides a method for finding it.
So we get a few things from these descriptions:
  1. First of all (as we see in the diagram as well) a URL is a type of URI. So if someone tells you that a URL is not a URI, he’s wrong. But that doesn’t mean all URIs are URLs. All butterflies fly, but not everything that flies is a butterfly.
  2. The part that makes a URI a URL is the inclusion of the “access mechanism”, or “network location”, e.g. http:// or ftp://.
  3. The URN is the “globally unique” part of the identification; it’s a unique name.
So let’s look at some examples of URIs–again from the RFC:
  • ftp://ftp.is.co.za/rfc/rfc1808.txt (also a URL because of the protocol)
  • http://www.ietf.org/rfc/rfc2396.txt (also a URL because of the protocol)
  • ldap://[2001:db8::7]/c=GB?objectClass?one (also a URL because of the protocol)
  • mailto:John.Doe@example.com (also a URL because of the protocol)
  • news:comp.infosystems.www.servers.unix (also a URL because of the protocol)
  • tel:+1-816-555-1212
  • telnet://192.0.2.16:80/ (also a URL because of the protocol)
  • urn:oasis:names:specification:docbook:dtd:xml:4.1.2
Those are all URIs, and some of them are URLs. Which are URLs? The ones that show you how to get to them. Again, the name vs. address analogy serves well.
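As a quick illustration (a sketch using the WHATWG URL API available in modern browsers and Node.js), the scheme is the piece that turns an identifier into something you can actually dereference:

    // The scheme (protocol) is the "access mechanism" that makes this URI a URL.
    const url = new URL('http://www.ietf.org/rfc/rfc2396.txt');

    console.log(url.protocol); // "http:"  <- how to reach the resource
    console.log(url.host);     // "www.ietf.org"
    console.log(url.pathname); // "/rfc/rfc2396.txt"

    // A URN such as urn:oasis:names:specification:docbook:dtd:xml:4.1.2
    // names a resource but says nothing about how to fetch it.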

Summary

So this brings us to the question that brings many readers here:
Which is the more proper term when referring to web addresses?
Based on the dozen or so articles and RFCs I read while researching this article, I’d say that URI is probably the better term to use.
Why?
Well, because we often use URIs in forms that don’t technically qualify as a URL. For example, you might be told that a file you need is located at files.hp.com. That’s a URI, not a URL—and that system might very well respond to many protocols over many ports.
If you go to http://files.hp.com you could conceivably get completely different content than if you go to ftp://files.hp.com. And this type of thing is only getting more common. Think of all the different services that live on the various Google domains.
So, if you use URI you’ll always be technically correct, and if you use URL you might not be. Finally, there is significant chatter around the term “URL” being—or becoming—deprecated. So URI is a fairly safe choice in terms of accuracy.
That being said, Dafydd Stuttard has a different view, which is that the terms are near enough the same so as to make it pure pedantry to differentiate. In The Web Application Hacker’s Handbook he states:
The correct technical term for a URL is actually URI (or uniform resource identifier), but this term is really only used in formal specifications and by those who wish to exhibit their pedantry.
Indeed.
[ NOTE: If someone actually gives a full URL then the more correct technical term is still URL, but I think I know what he meant. ]

Final word

If you don’t mind being “that guy”, URI is probably the more accurate term to use. But if you are in the linguist / “use what’s understood” camp, feel free to go with URL.

Saturday, June 27, 2015

What is a cloud service?

A cloud service is any resource that is provided over the Internet. The most common cloud service resources are Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).

SaaS is a software distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network, typically the Internet. PaaS refers to the delivery of operating systems and associated services over the Internet without downloads or installation. IaaS involves outsourcing the equipment used to support operations, including storage, hardware, servers and networking components, all of which are made accessible over a network.  SaaS, PaaS and IaaS are sometimes referred to collectively as the SPI model.
Cloud services are essentially the same thing as Web services; however, the term cloud services has become more common as cloud computing has grown more pervasive.

Web Application Design Patterns

Web Application Architectures
We have already seen that modern web applications involve a significant amount of complexity, particularly on the server side.
A typical web application involves numerous protocols, programming languages and technologies spread throughout the web stack.
Developing, maintaining and extending a complex web application is extremely difficult – but building it on a foundation of solid design principles can simplify each of these tasks.
Software engineers use abstraction to deal with this type of complexity. Design patterns provide useful design abstractions for object-oriented systems.

Design Patterns
Definition (Design Pattern)
A design pattern is a description of interacting objects and classes that are customized to solve a general design problem within a particular context.
A design pattern is an abstract template that can be applied over and over again.
The idea is to apply abstract design patterns in order to solve specific design problems that occur while building real systems.
Design patterns provide a way to communicate the parts of a design, i.e., they form the vernacular software engineers use to talk about designs.

Client-Server Model
The n-tier architecture is a highly useful design pattern that maps to the client-server model.
This design pattern is based on the concept of breaking a system into different pieces or tiers that can be physically separated:
– Each tier is responsible for providing a specific functionality.
– A tier only interacts with the tiers adjacent to it through a well-defined interface.
Ex.
Print server – 2-tier architectural pattern.
Early web applications – 2-tier client-server architecture:
- User interface (browser) functionality resided on the (thin) client.
- Server provided static web pages (HTML).
- Interface between the two via the hypertext transfer protocol (HTTP).

n-Tier Architecture
Additional tiers show up when the application functionality is further partitioned.
What are the advantages of such a design?
The abstraction provides a means for managing the complexity of the design.
Tiers can be upgraded or replaced independently as requirements or technology change — the new tier just needs to use the same interfaces as the old one.
It provides a balance between innovation and standardization.
Tiered systems are much easier to build, maintain, scale and upgrade.

3-Tier Architecture
One of the most common n-tier designs is the 3-tier architecture:
- Presentation tier – The user interface.
- Application (logic) tier – Retrieves, modifies and/or deletes data in the data tier, and sends the results to the presentation tier. Also responsible for processing the data itself.
- Data tier – The source of the data associated with the application.
A modern web application is often deployed over the Internet as a 3-tier architecture:
- Presentation tier – User’s web browser.
- Application (logic) tier – The web server and logic associated with generating dynamic web content, e.g., collecting and formatting the results of a search.
- Data tier – A database.
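To make the mapping concrete, here is a minimal sketch of the three tiers in code, assuming Node.js with Express; the db module and its query method are purely illustrative stand-ins for a real data tier.

    const express = require('express');
    const db = require('./db'); // hypothetical data-tier client (SQL, NoSQL, etc.)

    const app = express();

    // Application (logic) tier: retrieve data from the data tier and format it.
    async function searchProducts(term) {
      const rows = await db.query('SELECT * FROM products WHERE name LIKE ?', ['%' + term + '%']);
      return rows.map(r => ({ id: r.id, name: r.name }));
    }

    // The route hands the formatted result to the presentation tier (the browser).
    app.get('/search', async (req, res) => {
      res.json(await searchProducts(req.query.q || ''));
    });

    app.listen(3000);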

6-Tier Web Application Architecture
The Application tier is often subdivided into two tiers:
- Business logic tier – Models the business objects associated with the application, e.g., accounts, inventories, etc., and captures the business rules and workflows that govern how these objects can be processed and manipulated.
- Data access tier – Responsible for accessing data, and passing it to the business logic tier, e.g., account balances, transactions, etc.
The Presentation tier is often subdivided into two tiers:
- Client tier – client-side user interface components.
- Presentation logic tier – server-side scripts for generating web pages.
Finally, the web server is often separated out into its own Web tier.




Travelling from Web 1.0 to 3.0

You can view the PDF here:
https://drive.google.com/file/d/0B2qKwiFBZXH3LWFGVzg1NGdxNG8/view?usp=sharing


Web Applications - Introduction and the Client-Server Model

The client-server architecture is the most basic model for describing the relationship between the cooperating programs in a web application.
The two parts of a client-server architecture are:
  • Server component – “listens” for requests, and provides services and/or resources accordingly.
  • Client component – establishes a connection to the server, and requests services and/or resources from it.
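As a minimal illustration of the two roles (a sketch using Node.js's built-in http module; the port and message are arbitrary):

    const http = require('http');

    // Server component: listens for requests and provides a resource.
    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from the server\n');
    });
    server.listen(8080);

    // Client component: connects to the server and requests the resource.
    http.get('http://localhost:8080/', (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => console.log(res.statusCode, body));
    });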


Definition (Web Application)
A web application is accessed by users over a network, uses a browser as the client, and consists of a collection of client- and server-side scripts, HTML pages, and other resources that may be spread across multiple servers. The application itself is accessed by users via a specific path within a web server, e.g., www.amazon.com.
Ex. Webmail, online retail stores, online banks, online auctions, wikis, blogs, document storage, etc.

There’s a bit more to it:
Network –
The Internet, a global system of interconnected computer networks. Uses the standard Internet protocol suite (TCP/IP).
Web (World Wide Web) –
A system of interlinked documents (web pages) accessed via the Internet using HTTP.
Web pages contain hypermedia: text, graphics, images, video and other multimedia, along with hyperlinks to other web pages.
Hyperlinks give the Web its structure.
The structure of the Web is what makes it useful and gives it value.

Advantages —
Ubiquity and convenience of using a web browser as a client.
Inherent cross-platform compatibility.
Ability to update and maintain web applications without distributing and installing software on potentially thousands of client computers.
Reduction in IT costs.
Disadvantages —
User experience not as good as standalone (workstation/PC) applications — increasingly not the case.
Privacy and security issues associated with your data.
From a developer’s perspective, difficult to develop and debug — there are a lot of moving parts!

Historical timeline of the web and Web 1.0, 2.0 and 3.0

Here is the historical timeline showing how we progressed from Web 1.0 to Web 3.0.
Here is the download link: https://drive.google.com/file/d/0B2qKwiFBZXH3bGNUYl8zRWNwaEU/view?usp=sharing


HTTP Tutorial

HTTP stands for Hypertext Transfer Protocol. It's a stateless, application-layer protocol for communicating between distributed systems, and is the foundation of the modern web. As web developers, we all must have a strong understanding of this protocol.
Let's review this powerful protocol through the lens of a web developer. We'll tackle the topic in two parts. In this first entry, we'll cover the basics and outline the various request and response headers. In the follow-up article, we'll review specific pieces of HTTP - namely caching, connection handling and authentication.
Although I'll mention some details related to headers, it's best to instead consult the RFC (RFC 2616) for in-depth coverage. I will be pointing to specific parts of the RFC throughout the article.
HTTP follows the client-server model discussed earlier. It allows for communication between a variety of hosts and clients, and supports a mixture of network configurations.
To make this possible, it assumes very little about a particular system, and does not keep state between different message exchanges.
This makes HTTP a stateless protocol. The communication usually takes place over TCP/IP, but any reliable transport can be used. The default port is 80, but other ports can also be used.



Why HTTP?

  • The stateless design simplifies the server design because there is no need to dynamically allocate storage to deal with conversations in progress. 
  • All the existing infrastructure (browsers, servers, proxies and caches) can be reused on any platform.

 

Request and Response

Communication between a host and a client occurs via a request/response pair. The client initiates an HTTP request message, which is serviced through an HTTP response message in return. We will look at this fundamental message pair in the next section.
The current version of the protocol is HTTP/1.1, which adds a few extra features to the previous 1.0 version. The most important of these, in my opinion, include persistent connections, chunked transfer-coding and fine-grained caching headers. We'll briefly touch upon these features in this article; in-depth coverage will be provided in part two.
At the heart of web communications is the request message, which is addressed using a Uniform Resource Locator (URL). I'm sure you are already familiar with URLs, but for completeness' sake, I'll include a quick breakdown here. URLs have a simple structure that consists of the following components:
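The diagram that originally appeared here is not reproduced, but the structure it described is roughly:

    scheme://host:port/path?query#fragment

  • scheme – the protocol, e.g. http, https or ftp
  • host – the domain name or IP address of the server
  • port – optional; defaults to 80 for http and 443 for https
  • path – the location of the resource on the server
  • query – optional key=value parameters, introduced by "?"
  • fragment – optional reference into the resource, introduced by "#"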
URLs reveal the identity of the particular host with which we want to communicate, but the action that should be performed on the host is specified via HTTP verbs. Of course, there are several actions that a client might want the host to perform. HTTP has formalized a small set that captures the essentials and is universally applicable to all kinds of applications.
These request verbs are:
  • GET: fetch an existing resource. The URL contains all the necessary information the server needs to locate and return the resource.
  • POST: create a new resource. POST requests usually carry a payload that specifies the data for the new resource.
  • PUT: update an existing resource. The payload may contain the updated data for the resource.
  • DELETE: delete an existing resource.
The above four verbs are the most popular, and most tools and frameworks explicitly expose these request verbs. PUT and DELETE are sometimes considered specialized versions of the POST verb, and they may be packaged as POST requests with the payload containing the exact action: create, update or delete.
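As a rough sketch of how these verbs look in practice (the /articles resource is hypothetical; fetch() is available in modern browsers and recent Node.js, and the calls below assume an async context):

    await fetch('/articles/42');                       // GET: fetch an existing resource

    await fetch('/articles', {                         // POST: create a new resource
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ title: 'Hello HTTP' })
    });

    await fetch('/articles/42', {                      // PUT: update an existing resource
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ title: 'Hello again' })
    });

    await fetch('/articles/42', { method: 'DELETE' }); // DELETE: delete an existing resource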
There are some lesser used verbs that HTTP also supports:
  • HEAD: this is similar to GET, but without the message body. It's used to retrieve the server headers for a particular resource, generally to check if the resource has changed, via timestamps.
  • TRACE: used to retrieve the hops that a request takes to round trip from the server. Each intermediate proxy or gateway would inject its IP or DNS name into the Via header field. This can be used for diagnostic purposes.
  • OPTIONS: used to retrieve the server capabilities. On the client-side, it can be used to modify the request based on what the server can support.
With URLs and verbs, the client can initiate requests to the server. In return, the server responds with status codes and message payloads. The status code is important and tells the client how to interpret the server response. The HTTP spec defines certain number ranges for specific types of responses:
1xx (Informational): This class of codes was introduced in HTTP/1.1 and is purely provisional. For example, a client can send an Expect: 100-continue header, and the server replies with 100 Continue to tell the client to go ahead and send the remainder of the request (or the client simply ignores this interim response if it has already sent the body). HTTP/1.0 clients are supposed to ignore these provisional responses.
2xx (Successful): This tells the client that the request was successfully processed. The most common code is 200 OK. For a GET request, the server sends the resource in the message body. There are other, less frequently used codes:
  • 202 Accepted: the request was accepted but may not include the resource in the response. This is useful for async processing on the server side. The server may choose to send information for monitoring.
  • 204 No Content: there is no message body in the response.
  • 205 Reset Content: indicates to the client to reset its document view.
  • 206 Partial Content: indicates that the response only contains partial content. Additional headers indicate the exact range and content expiration information.
3xx (Redirection): This requires the client to take additional action. The most common use-case is to jump to a different URL in order to fetch the resource.
  • 301 Moved Permanently: the resource is now located at a new URL.
  • 303 See Other: the response to the request can be found at a different URL, which should be retrieved with a GET. The Location response header contains that URL.
  • 304 Not Modified: the server has determined that the resource has not changed and the client should use its cached copy. This relies on the fact that the client is sending ETag (Entity Tag) information, a hash of the content. The server compares this with its own computed ETag to check for modifications.
4xx (Client Error): These codes are used when the server thinks that the client is at fault, either by requesting an invalid resource or making a bad request. The most popular code in this class is 404 Not Found, which I think everyone will identify with. 404 indicates that the resource does not exist on the server. The other codes in this class include:
  • 400 Bad Request: the request was malformed.
  • 401 Unauthorized: request requires authentication. The client can repeat the request with the Authorization header. If the client already included the Authorization header, then the credentials were wrong.
  • 403 Forbidden: server has denied access to the resource.
  • 405 Method Not Allowed: invalid HTTP verb used in the request line, or the server does not support that verb.
  • 409 Conflict: the server could not complete the request because the client is trying to modify a resource that is newer than the client's timestamp. Conflicts arise mostly for PUT requests during collaborative edits on a resource.
5xx (Server Error): This class of codes is used to indicate a server failure while processing the request. The most commonly used error code is 500 Internal Server Error. The others in this class are:
  • 501 Not Implemented: the server does not yet support the requested functionality.
  • 503 Service Unavailable: this could happen if an internal system on the server has failed or the server is overloaded. Typically, the server won't even respond and the request will time out.
So far, we've seen that URLs, verbs and status codes make up the fundamental pieces of an HTTP request/response pair.





Let's now look at the content of these messages. The HTTP specification states that a request or response message has the following generic structure:
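Paraphrasing the grammar from RFC 2616, that generic structure is:

    generic-message = start-line
                      *( message-header CRLF )
                      CRLF
                      [ message-body ]

    start-line      = Request-Line | Status-Line
    message-header  = field-name ":" [ field-value ]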
It's mandatory to place an empty line between the message headers and the body. The message can contain one or more headers, which are broadly classified into general headers, request- or response-specific headers, and entity headers.
The message body may contain the complete entity data, or it may be piecemeal if the chunked encoding (Transfer-Encoding: chunked) is used. All HTTP/1.1 clients are required to accept the Transfer-Encoding header.
There are a few headers (general headers) that are shared by both request and response messages:
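For reference, the general headers defined in RFC 2616 (section 4.5) are:

    Cache-Control, Connection, Date, Pragma, Trailer, Transfer-Encoding,
    Upgrade, Via, Warning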
We have already seen some of these headers, specifically Via and Transfer-Encoding. We will cover Cache-Control and Connection in part two.
  • The Via header is used in a TRACE message and is updated by all intermediate proxies and gateways.
  • Pragma is considered a custom header and may be used to include implementation-specific headers. The most commonly used pragma-directive is Pragma: no-cache, which really is Cache-Control: no-cache under HTTP/1.1. This will be covered in Part 2 of the article.
  • The Date header field is used to timestamp the request/response message
  • Upgrade is used to switch protocols and allow a smooth transition to a newer protocol.
  • Transfer-Encoding is generally used to break the response into smaller parts with the Transfer-Encoding: chunked value. This is a new header in HTTP/1.1 and allows for streaming of response to the client instead of one big payload.
Request and Response messages may also include entity headers to provide meta-information about the content (aka the Message Body or Entity). These headers include:
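For reference, the entity headers defined in RFC 2616 (section 7.1) are:

    Allow, Content-Encoding, Content-Language, Content-Length, Content-Location,
    Content-MD5, Content-Range, Content-Type, Expires, Last-Modified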
All of the Content- prefixed headers provide information about the structure, encoding and size of the message body. Some of these headers need to be present if the entity is part of the message.
The Expires header indicates a timestamp of when the entity expires. Interestingly, a "never expires" entity is sent with a timestamp of one year into the future. The Last-Modified header indicates the last modification timestamp for the entity.
Custom headers can also be created and sent by the client; they will be treated as entity headers by the HTTP protocol.
This is really an extension mechanism, and some client-server implementations may choose to communicate specifically over these extension headers. Although HTTP supports custom headers, what it really looks for are the request and response headers, which we cover next.
The request message has the same generic structure as above, except for the request line which looks like:
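Per RFC 2616, the request line has the form:

    Request-Line = Method SP Request-URI SP HTTP-Version CRLF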
SP is the space separator between the tokens. HTTP-Version is specified as "HTTP/1.1" and then followed by a new line. Thus, a typical request message might look like:
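Here is a reconstructed example (the host and path are placeholders):

    GET /articles/http-basics HTTP/1.1
    Host: www.example.org
    Connection: keep-alive
    Cache-Control: no-cache
    Pragma: no-cache
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8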
Note the request line followed by many request headers. The Host header is mandatory for HTTP/1.1 clients. GET requests do not have a message body, but POST requests can contain the post data in the body.
The request headers act as modifiers of the request message. The complete list of known request headers is not too long, and is provided below. Unknown headers are treated as entity-header fields.
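For reference, the request headers defined in RFC 2616 (section 5.3) are:

    Accept, Accept-Charset, Accept-Encoding, Accept-Language, Authorization,
    Expect, From, Host, If-Match, If-Modified-Since, If-None-Match, If-Range,
    If-Unmodified-Since, Max-Forwards, Proxy-Authorization, Range, Referer,
    TE, User-Agent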
The Accept prefixed headers indicate the acceptable media-types, languages and character sets on the client. From, Host, Referer and User-Agent identify details about the client that initiated the request. The If- prefixed headers are used to make a request more conditional, and the server returns the resource only if the condition matches. Otherwise, it returns a 304 Not Modified. The condition can be based on a timestamp or an ETag (a hash of the entity).
The response format is similar to the request message, except for the status line and headers. The status line has the following structure:
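Per RFC 2616, that structure is:

    Status-Line = HTTP-Version SP Status-Code SP Reason-Phrase CRLF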
  • HTTP-Version is sent as HTTP/1.1
  • The Status-Code is one of the many statuses discussed earlier.
  • The Reason-Phrase is a human-readable version of the status code.
A typical status line for a successful response might look like so:
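For example, a successful response begins with:

    HTTP/1.1 200 OK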
The response headers are also fairly limited, and the full set is given below:
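For reference, the response headers defined in RFC 2616 (section 6.2) are:

    Accept-Ranges, Age, ETag, Location, Proxy-Authenticate, Retry-After,
    Server, Vary, WWW-Authenticate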
  • Age is the time in seconds since the message was generated on the server.
  • ETag is an opaque identifier for the entity (typically a hash of its content) and is used to check for modifications.
  • Location is used when sending a redirection and contains the new URL.
  • Server identifies the server generating the message.
It's been a lot of theory up to this point, so I won't blame you for drowsy eyes. In the next sections, we will get more practical and take a survey of the tools, frameworks and libraries.
There are a number of tools available to monitor HTTP communication. Here, we list some of the more popular tools.
Undoubtedly, the Chrome/WebKit inspector is a favorite amongst web developers.





There are also web debugging proxies, like Fiddler on Windows and Charles Proxy for OS X. My colleague, Rey Bango, wrote an excellent article on this topic.











For the command line, we have utilities like curl, tcpdump and tshark for monitoring HTTP traffic.
Now that we have looked at the request/response messages, it's time that we learn how libraries and frameworks expose it in the form of an API. We'll use ExpressJS for Node, Ruby on Rails, and jQuery Ajax as our examples.
If you are building web servers in NodeJS, chances are high that you've considered ExpressJS. ExpressJS was originally inspired by a Ruby Web framework, called Sinatra. As expected, the API is also equally influenced.
Because we are dealing with a server-side framework, there are two primary tasks when dealing with HTTP messages:
  • Read URL fragments and request headers.
  • Write response headers and body.
ExpressJS provides a simple API for doing just that. We won't cover the details of the API. Instead, we will provide links to the detailed documentation on ExpressJS guides. The methods in the API are self-explanatory in most cases. A sampling of the request-related API is below:
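A few of the commonly used request properties and methods in Express 4.x (see the Express guides for the authoritative list) are:
  • req.params: named route parameters, e.g. the :id in /user/:id.
  • req.query: the parsed query-string parameters.
  • req.body: the parsed request body (requires body-parsing middleware).
  • req.get(header): read a specific request header.
  • req.path and req.ip: the request path and the client's IP address.
  • req.cookies: cookies sent by the client (requires cookie-parsing middleware).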
On the way out to the client, ExpressJS provides the following response API:
  • res.status: set an explicit status code.
  • res.set: set a specific response header.
  • res.send: send HTML, JSON or an octet-stream.
  • res.sendFile: transfer a file to the client.
  • res.render: render an express view template.
  • res.redirect: redirect to a different route. Express automatically adds the default redirection code of 302.
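A small sketch tying the request and response sides together (the route path and header values are arbitrary):

    const express = require('express');
    const app = express();

    app.get('/greeting', (req, res) => {
      // Read a request header...
      const lang = req.get('Accept-Language') || 'en';
      // ...then set the status, a response header and a JSON body.
      res.status(200)
         .set('Cache-Control', 'no-cache')
         .json({ lang: lang, message: 'hello' });
    });

    app.listen(3000);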
In Rails, the ActionController and ActionDispatch modules provide the API for handling request and response messages.
ActionController provides a high level API to read the request URL, render output and redirect to a different end-point. An end-point (aka route) is handled as an action method. Most of the necessary context information inside an action-method is provided via the request, response and params objects.
  • params: gives access to the URL parameters and POST data.
  • request: contains information about the client, headers and URL.
  • response: used to set headers and status codes.
  • render: render views by expanding templates.
  • redirect_to: redirect to a different action-method or URL.
ActionDispatch provides fine-grained access to the request/response messages, via the ActionDispatch::Request and ActionDispatch::Response classes. It exposes a set of query methods to check the type of request (get?(), post?(), head?(), local?()). Request headers can be directly accessed via the request.headers() method.
On the response side, it provides methods to set cookies(), location=() and status=(). If you feel adventurous, you can also set the body=() and bypass the Rails rendering system.
Because jQuery is primarily a client-side library, its Ajax API covers the opposite side of the exchange from a server-side framework. In other words, it allows you to read response messages and modify request messages. jQuery exposes a simple API via jQuery.ajax(settings):
By passing a settings object with the beforeSend callback, we can modify the request headers. The callback receives the jqXHR (jQuery XMLHttpRequest) object that exposes a method, called setRequestHeader() to set headers.
  • The jqXHR object can also be used to read the response headers via jqXHR.getResponseHeader().
  • If you want to take specific actions for various status codes, you can use the statusCode callback:
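Here is a combined sketch (the URL and header values are placeholders):

    $.ajax({
      url: '/articles/42',
      // Modify the request headers before the request goes out.
      beforeSend: function (jqXHR) {
        jqXHR.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
      },
      // Take specific actions for particular status codes.
      statusCode: {
        404: function () { console.log('Resource not found'); },
        500: function () { console.log('Server error'); }
      },
      success: function (data, textStatus, jqXHR) {
        // Read a response header from the jqXHR object.
        console.log(jqXHR.getResponseHeader('Content-Type'));
      }
    });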
So that sums up our quick tour of the HTTP protocol.
We reviewed URL structure, verbs and status codes: the three pillars of HTTP communication.
The request and response messages are mostly the same, except for the first line and message headers. Finally, we reviewed how you can modify the request and response headers in web frameworks and libraries.
Understanding HTTP is crucial for having a clean, simple, and RESTful interface between two endpoints. On a larger scale, it also helps when designing your network infrastructure and providing a great experience to your end users.
In part two, we'll review connection handling, authentication and caching! See you then.