Friday 29 April 2011

Apple iPhone 6 to get Sharp LCDs

Next-generation thinner and lighter display panels to be used







Rumors of the iPhone 5 continue to be rife, and images of purported prototypes are already circulating on the web. Amidst that craziness, the Japanese business daily Nikkei has reported that Apple will partner with Sharp to create next-generation low-temperature poly-silicon LCD displays for the iPhone 6. That means the iPhone 6 would have a thinner and lighter display than the iPhone 4, and would also offer more durability, a higher aperture ratio and fewer connection pins.

In the past, Apple employed Retina Display technology in the iPhone 4, providing crisper images, better brightness and an improved contrast ratio. However, Apple seems to be trying out another option for the iPhone 6, which is scheduled to hit production floors in mid-2012.

For the iPhone 6, Apple will use Sharp's LCD displays with thin-film transistors made from polycrystalline silicon (p-Si). These p-Si LCD displays are thinner, lighter and consume less power than traditional LCD screens. Apart from that, p-Si LCD displays have a higher aperture ratio, greater durability and fewer connection pins.

Apple is expected to make use of these p-Si LCD displays because the technology allows optical sensors and other components to be placed on the glass substrate, so there is no need to add an additional 'touch panel' layer to the device. In short, it will be part of a more compact and slimmer iPhone, which we are currently referring to as the iPhone 6.

According to the images of iPhone parts circulating online, plenty of assumptions can be made about the form factor, size and display technology to be used in the iPhone 6. However, nothing can be confirmed beforehand, since Apple may simply change its plans or designs for the iPhone 6. Take it with a pinch of salt.

How Tech Company Logos Evolved: Part II

Logos that changed to suit the changing trends








A company's corporate logo becomes a symbol of the company, and the company comes to be identified by that logo. It is this power of identity that makes us instantly recognize a brand by just glancing at its logo. The corporate logos of almost every technology company have changed over the years, and that is what we are going to look at today in part 2 of this series.

LG


The Korean company we know today as LG had its humble beginnings as a merger of two different companies owned by Koo In-Hwoi - Lucky (Lak-Hui), a chemicals and cosmetics company, and Goldstar, a radio manufacturing plant. Known as Lucky Goldstar since then, the company was renamed LG Electronics in 1995.

The LG logo in use today is of symbolic importance to the company. According to LG, the letters "L" and "G" in a circle symbolize the world, the future, youth, humanity and technology. The red color represents friendliness and conveys a strong impression of the company's commitment to the best.

The circle symbolizes the globe, the stylized image of a smiling face conveys friendliness and approachability, and the single eye conveys a goal-oriented, focused and confident character. The LG Grey represents technology and reliability. The logo is deliberately asymmetrical, representing the company's creativity and adaptability to change.

While it is well known that LG stands for Lucky Goldstar, the company now denies it and instead states that it stands for "Life is Good", or simply LG.

Microsoft



When Microsoft was founded in 1975 by Bill Gates and Paul Allen, the company's first product was the computer language BASIC. Since it was a software company whose software ran on microprocessor-based hardware, the founders combined the first few letters of the words microprocessor and software to create the name Micro-Soft (with a hyphen separating the two words). The company logo was also designed to reflect the name of the company at that time.

They decided to drop the hyphen in the same year and rename the company Microsoft, and in the process also changed the logo accordingly. The new green Microsoft logo with an artistic O (nicknamed "Blibbet" by company employees) was used as the company logo until 1987.

Microsoft introduced a new corporate logo in 1987 and did away with the "Blibbet" (causing Dave Norris to run an internal joke campaign called "Save the Blibbet"). The company nevertheless went ahead and had the new logo designed by Scott Baker using an italic Helvetica typeface; the slant was meant to suggest motion and speed. It came to be known as the "Pacman logo" because the O, with its distinctive cut subtly separating the Micro and Soft parts, looked like the video game character Pacman.

Microsoft added the tagline "Where do you want to go today?" below the logo in 1994. It retained the logo design but changed the tagline in 2006 to "Your potential. Our passion.", also set in italics. In 2011, Microsoft changed its tagline yet again, to "Be What's Next.", this time in a regular typeface.

Mozilla Firefox


When Dave Hyatt and Blake Ross first created the open source browser in 2001, they named it Phoenix, and this is visible in its first logo. Trademark issues forced them to change the browser's name to Firebird, which allowed them to retain the original logo, since the two names mean the same thing.

When it was later found that even the name Firebird was already trademarked, they had to quickly find an alternative. They settled on Firefox in 2003, and that name stuck. Accordingly, they had the logo redesigned by professional interface designer Jon Hicks to show a firefox engulfing a blue globe representing the world. Was it a play on the Microsoft Internet Explorer logo being engulfed by this promising new browser? We do not know.

Nevertheless, this has remained the logo of the Firefox browser ever since and has changed very little, save for the continents being rendered in a lighter shade of blue to differentiate the land mass from the oceans.


Nokia


In Finnish, the word "Nokia" refers to a dark, furry member of the weasel family, the pine marten. The company took its name from the town of Nokia, where Knut Fredrik Idestam set up a second groundwood pulp mill on the banks of the Nokianvirta river in 1868, expanding the business he had established in 1865. Nokia has therefore been in the communication business right from the start, as the company manufactured paper, which was a major means of communication back then.

It is not known for certain, but it is generally believed that, since the company was situated on the banks of the Nokianvirta river, the first Nokia company logo depicted a diving salmon, possibly from that river.

Finnish Rubber Works, founded in 1898, would eventually become part of Nokia and, along with Nokia Ab and Finnish Cable Works, would form Nokia Corporation in 1967.

The current slogan of Nokia - "Connecting People", which is part of its present logo, was invented by Ove Strandberg.

Xerox



At its inception in 1906, the company we now know as Xerox Corporation was called The Haloid Photographic Company and manufactured photographic paper and equipment. In 1938, Chester Carlson invented a technique called xerography, now known as photocopying. Despite his persistent efforts, he could not find a backer for his invention. After being turned down by giants such as IBM, GE and RCA, Carlson turned to The Haloid Photographic Company, which decided to back his invention. The company went on to make the first successful plain-paper photocopier, the Xerox 914.

The company's name was changed to Haloid Xerox in 1958. You will notice in the company logos that the word Haloid, which was prominent in the earliest logos, gradually lost prominence to the word Xerox, beginning with the 1948 and 1949 logos. Haloid Xerox made a brief appearance in the 1958 logo but was dropped three years later, completely replaced by Xerox in 1961. This was due to the unprecedented success enjoyed by the photocopying machines bearing that name.

Xerox retained this logo from 1961 to 2004. In 2004, Xerox decided it did not want to be associated only with photocopy machines, as had happened over the years; it wanted to diversify. The logo was changed just a little, by removing the words "The Document Company" while retaining the rest of the design.

Since they had already decided to move away from the complete association with photocopying machines, in 2008 they changed the font of the logo and added a ball with a stylized X.




Apple launches iPad 2 in India

In a move that has taken everyone by surprise, Apple has launched the iPad 2 in India. Compared to the first-generation iPad, which arrived nine months after its US launch, the iPad 2 lands in India just over a month after the US launch. India isn't the only country getting it: the iPad 2 is being launched in 12 countries. Japan gets it first on April 28, while India, Hong Kong, Israel, Korea, Macau, Malaysia, Philippines, Singapore, South Africa, Turkey and UAE get it on April 29. China is also expected to get the iPad 2, but it will have to wait till May 6.

 




Now you must be waiting for the price. Unlike its US counterpart, the iPad 2 in India is slightly more expensive than the model it replaces. The original iPad was launched at Rs. 27,900, and its price was later dropped to Rs. 24,500. The iPad 2 starts at Rs. 29,500 and goes all the way up to Rs. 46,900.

The following are the maximum retail prices (MRP) of all the models, inclusive of VAT.

16GB Wi-Fi: Rs. 29,500
32GB Wi-Fi: Rs. 34,500
64GB Wi-Fi: Rs. 39,500
16GB 3G + Wi-Fi: Rs. 36,900
32GB 3G + Wi-Fi: Rs. 41,900
64GB 3G + Wi-Fi: Rs. 46,900 

Thursday 28 April 2011

MySpace worth $580 million in 2005 ready for sale at $100 million

A handful of venture capital firms and other companies are expected to make News Corp. offers for one of its most disappointing properties, MySpace.
News Corp. declared it was ready to sell MySpace in an earnings call in February. Now, The Wall Street Journal, which shares a parent company with the faltering social network, is reporting that News Corp. is attempting to get at least $100 million out of the sale. It names Redscout Ventures, Thomas H. Lee Partners, and Criterion Capital Partners LLC, which also owns Bebo, as potential buyers.

News Corp. purchased MySpace in 2005 for $580 million. At that time, the year-and-a-half-old Facebook hadn’t even acquired the Facebook.com URL and recorded a net loss of $3.63 million for the year. Even as late as 2007, Facebook’s traffic was disappointing when compared to traffic on MySpace.

But all that changed quickly. MySpace users began abandoning ship for Facebook and, in late 2009, site traffic took a dive from which it never really recovered. By 2010, even relative upstart Twitter was getting more traffic than MySpace.

Even though the network has pivoted to become an entertainment destination (in a nod to the bands and filmmakers that have clung to the platform out of habit or necessity), MySpace is still losing ground in these creative industries.

How to run trial software forever



"CrackLock" is a program that allows us to extend the TRIAL period of Shareware programs that have a time limit.

It works by making the program believe that the date never changes.
There is no need for cracks, keygens or serials.

CrackLock 3.9.44
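
CrackLock itself hooks the Windows date and time functions that the protected program calls. As a rough, hypothetical illustration of the same frozen-clock idea, the Python sketch below replaces time.time() within a single process so that a naive trial-day counter never advances; the trial check and all names here are invented for the example and are not taken from CrackLock.

    import time

    REAL_TIME = time.time
    FROZEN_AT = REAL_TIME()  # remember the instant at which we "freeze" the clock

    def frozen_time():
        # Always report the same instant, no matter how much real time passes.
        return FROZEN_AT

    time.time = frozen_time  # code in this process that calls time.time() now sees a constant clock

    def days_since_install(install_ts):
        # A naive trial check of the kind shareware might use (hypothetical).
        return int((time.time() - install_ts) // 86400)

    install_ts = FROZEN_AT - 5 * 86400       # pretend the trial started five days ago
    print(days_since_install(install_ts))    # prints 5 now...
    time.sleep(2)
    print(days_since_install(install_ts))    # ...and keeps printing 5, because the clock never moves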

Which Antivirus Solutions Are Considered Government Grade Protection?

Even though every organization has security gaps, the government, for the most part, takes its security very seriously.

They will run all kinds of tests to make sure that no one is able to get into their systems. Of course, not all of those tests will be accurate and some people will still be able to get in, but, for the most part, you will be safe.

So how can you bring that government style of protection to your computer and network? This is an especially good question when it comes to antivirus protection.

Which antivirus can be used on a government system and remain effective? I will take a good look at this question to help you decide which one you should be running.

Which Antivirus Does The Government Use?
For the most part, the government uses the same antivirus as you do.
If you go into any government office you will see plenty of computers running the likes of Norton and McAfee, but those are their non-sensitive computers.

For systems that are very sensitive, they may use a custom set of protections, as the job may be too important to leave to normal everyday virus filters.

These are computers that are attacked hundreds, maybe even thousands, of times a day.
Everyone wants to be able to get into a US government computer.

To fight off all of those attacks you would need several servers dedicated to stopping them.
That is not something you can put in your home, so you need to look for a solution that the government trusts but that is not used at its highest levels of security.

As I said before, using the normal antivirus software package that you see is a good start.
Norton and companies like it offer different levels of protection.

Instead of purchasing their consumer-level product, you can purchase the enterprise version.
This will give you better protection than you would normally have. Usually, to get this type of protection, you are going to have to pay for it.

While at the consumer level you can get quality free protection for your computer, when it comes to enterprise level, you are going to have to pay some money.

There is no easy way to get around that.

Do You Need That Much Protection?
Sadly, you might not be as interesting as you think you are.

While it is true that there are some people in the private sector that need to have very well protected computers, not everyone does.

While the data on your computer might be important to you, not everyone is going to try to attack and get that data.
Most hackers aim at low-hanging fruit, and if your system has a good retail antivirus program on it, then that should be enough.

There is no need to get fanatical about the level of security that your computer has.
When I talk about government-level security for your system, the term can mean many different things.
The government uses several different layers to secure its systems.

You most likely will not need the very top layer but you can make sure that the tool that you use is government approved.

How To Hide Text In Images (Steganography)

It is widely claimed that the trick of hiding text in images (steganography) was used by Al Qaeda to communicate while planning the 9/11 attacks. If you also want that kind of strong security in your communication, so that nobody except the intended person can interpret your messages, we have a free solution for you.
 

Here is a free tutorial on how to hide text in images (steganography).

(1).    First, download the ImageHide software. It's a freeware utility for hiding text in an image. Click here to download ImageHide.
(2).    Once it is installed, all you have to do is load the image in which you want to hide your text, type the text and hit the Write data option. You can also password-protect the text so that it can only be opened with the help of a password.

Once you have completed the above steps, you can send the image to your friend, who can only open it with the ImageHide software and the password that you used. Once opened with ImageHide, the text can be read by your friend. So use steganography for more secure communication.

ImageHide is freeware and supports all versions of Windows, such as Windows 7, Windows Vista and Windows XP.
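
For the curious, the general technique that tools like ImageHide rely on can be sketched in a few lines of Python using the Pillow imaging library: the message bits are hidden in the least-significant bits of the pixel values, where the change is invisible to the eye. This is only an illustration of the idea, not of ImageHide's actual file format, and the file names below are placeholders.

    from PIL import Image  # pip install Pillow

    def hide_text(in_path, out_path, message):
        # Hide the message in the least-significant bit of each pixel's red channel.
        img = Image.open(in_path).convert("RGB")
        data = message.encode("utf-8") + b"\x00"          # a NUL byte marks the end of the message
        bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
        pixels = list(img.getdata())
        if len(bits) > len(pixels):
            raise ValueError("message too long for this image")
        new_pixels = []
        for i, (r, g, b) in enumerate(pixels):
            if i < len(bits):
                r = (r & ~1) | bits[i]                    # overwrite the lowest red bit
            new_pixels.append((r, g, b))
        img.putdata(new_pixels)
        img.save(out_path, "PNG")                         # PNG is lossless, so the hidden bits survive

    def reveal_text(path):
        # Read the low red bits back and stop at the NUL terminator.
        img = Image.open(path).convert("RGB")
        bits = [r & 1 for (r, g, b) in img.getdata()]
        out = bytearray()
        for i in range(0, len(bits) - 7, 8):
            byte = 0
            for bit in bits[i:i + 8]:
                byte = (byte << 1) | bit
            if byte == 0:
                break
            out.append(byte)
        return out.decode("utf-8")

    hide_text("cover.png", "secret.png", "meet at noon")  # "cover.png" is any image you supply
    print(reveal_text("secret.png"))                      # prints: meet at noon

ImageHide layers password protection on top of this basic idea, as described in the steps above.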

Viruses to Build More Efficient Solar Panels says MIT


Teams of viruses can help build better solar panels by ensuring nanoscale components behave properly, according to a new study. MIT researchers say their virus-assisted breakthrough could improve solar panels’ energy conversion efficiency by one-third. 
Scientists already knew that carbon nanotubes, rolled-up sheets of graphene, could improve the efficiency of photovoltaic cells. Ideally, the nanotubes would gather more electrons that are kicked up from the surface of a PV cell, allowing a greater number of electrons to be used to produce a current. 

But there are complications — carbon nanotubes come in two varieties, functioning either as semiconductors or wires, and they each behave differently. They also tend to clump together, which makes them less effective at gathering up their own electrons.

Graduate students Xiangnan Dang and Hyunjung Yi, MIT professor Angela Belcher and colleagues tested this method with Grätzel cells, but they say the technique could be used to build other virus-augmented solar cells, including quantum-dot and organic solar cells. 

They also learned that the two flavors of nanotubes have different effects on solar cell efficiency — something that had not been demonstrated before. Semiconducting nanotubes can enhance solar cells’ performance, but the continuously conducting wires had the opposite effect. This knowledge could be useful for designing more efficient nanoscale batteries, piezoelectrics or other power-related materials. 

The virus-built structures enhanced the solar cells’ power conversion efficiency to 10.6 percent from 8 percent, according to MIT News. That’s about a one-third improvement, using a viral system that makes up just 0.1 percent of the cells’ weight. 

“A little biology goes a long way,” Belcher said in an MIT News article. The researchers think with further research, they can improve the efficiency even more.

Android Beats iPhone in Consumer Desirability

Android is now the most desired smartphone operating system, reports market research company Nielsen. According to Nielsen's figures from January to March, 31 percent of consumers named Android as their preferred operating system; Apple's iOS had a 30 percent share.
Android did overtake iPhone quite quickly. Nielsen did the same study in July-September 2010. During that time, 33 percent of consumers wanted an iPhone and 26 percent wanted a device with Android operating system.
Desirability can also be seen in consumer behavior. Half of all smartphones purchased in the last six months are using Google’s Android operating system. 

iPhone’s share among the recent acquirers was 25 percent, RIM’s 15 percent and Windows Phone 7 had a 7 percent share. The total installed base of smartphones is also dominated by Android — 37 percent of all smartphones in the U.S. are Android phones. Apple’s market share is 27 percent and RIM BlackBerry’s 22 percent. 

One thing that could explain Android’s popularity is the variety of available Android smartphones. There are Android phones in multiple price categories and features vary from basic entry-level phones to the latest state-of-the-art smartphones. 

Apple offers only high-end iPhones, which possibly explains the difference in desirability and new-phone sales in Nielsen's data. The iPhone is desired by many, but there is an overwhelming number of Android phones that are more affordable, as well as some that offer more features.

How to "Delete administrator Password" without any software



Method 1
Boot up with DOS and delete the SAM and SAM.LOG files from Windows\system32\config on your hard drive. Now when you boot up into Windows NT, the password on the built-in administrator account will be blank (i.e. no password). This solution works only if your hard drive uses the FAT file system.

Method 2
Step 1. Put the hard disk from your computer into any other PC.
Step 2. Boot that computer with your hard disk attached as a secondary disk (don't boot from it as the primary disk).
Step 3. Then open the drive on which the victim's Windows (or your Windows) is installed.
Step 4. Go to windows->system32->config
Step 5. Delete the SAM and SAM.LOG files.
Step 6. Now remove the hard disk and put it back in your computer.
Step 7. Boot your computer.

Tuesday 26 April 2011

Top 5 Most Serious Internet Security Holes




Businesses can leave themselves vulnerable to data theft and other online threats, particularly as security and IT budgets come under pressure while businesses look to save money. Although money is tight, it is important that companies stay protected online: on average, the total cost of security breaches in the UK last year, including lost business, was $2,565,702 (US dollars).

Data theft and other online threats presently represent a significant danger for businesses in the UK. Compounding this problem is the economic downturn, which is leading many executives to cancel, defer or downsize security budgets. 

To highlight the risks facing companies today, Astaro has compiled the following list detailing the five most serious internet security holes.

1. Browser vulnerabilities

No provider is immune to the security holes that keep appearing in web browsers. A recent example is the CSS bug that affected Internet Explorer versions 6, 7 and 8 (CVE-2010-3962). This bug targets computers in a two-stage attack: first, the user follows an e-mail link to a web page containing malicious code. This code is then run without the user realizing it and automatically installs a trojan on the computer. The user does not need to click the mouse; simply visiting the website is enough. The only way companies can protect themselves fully from this is to refrain from using any browser with currently known security holes for as long as they remain unpatched.

2. Vulnerabilities in Adobe PDF Reader, Flash, Java

The ubiquity of tools and programs such as Adobe PDF Reader, Flash and Java makes them highly vulnerable to attack. Although they frequently show security holes, most providers are quick to provide patches. However, companies then have to make sure these patches are installed on all computers - which is where they often fall down: either the IT departments are not aware of the patches, are unable to install them, or find that the update failed. In that case, if an employee visits a page with embedded Flash videos that launch automatically, malicious code can run in the background and, with the user completely unaware, a trojan will infiltrate the computer, making it part of a botnet.

While there are only a few Windows exploits, for instance, there is a vast number in Adobe, Java, and Flash. Flash and Java, in particular, have become veritable malware disseminators over the past few months, providing the perfect access point for trojans lurking in the background of colorful websites, which then bypass all virus scanners to become permanently ensconced on the computer. Private users should therefore never use these programs and companies should employ standard procedures or policies prohibiting their use. To prevent attacks via Flash, companies can use Flash blockers (a browser plug-in) to prevent videos from being played automatically.

3. Vulnerabilities in Web 2.0 applications

The latest web-based security holes of note tend to be new methods of attack, such as Cross-Site Scripting (XSS) or SQL injection. The cause of the vulnerability in these cases is generally an inaccurate or incorrect implementation of AJAX, a method for exchanging data asynchronously between server and browser. This type of vulnerability was exploited, for example, by the MySpace worm created by the hacker known as Samy. It was published around a year ago and allowed the hacker to swiftly obtain and access the profiles of millions of MySpace contacts. Another, more recent attack was the "onmouseover" attack on Twitter. This attack was particularly sophisticated because its authors were able to embed malicious code that disseminated itself and directed users to websites containing malware in just 140 characters, without any clicking required; all the user had to do was move the cursor over the tweet. There is very little users of such applications can do to protect themselves other than to stop using the service as soon as a security problem is made public. It is therefore the manufacturers' responsibility to ensure that their applications are well and securely programmed – or to take the precautionary measure of protecting their users' data with a Web Application Firewall.
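
To make the SQL injection half of this concrete, here is a small, self-contained Python/sqlite3 sketch (the table and the input string are invented for illustration). Pasting user input straight into the SQL string lets a crafted value rewrite the query and return every row, while a parameterized query treats the same input as plain data - the kind of careful implementation the paragraph above is calling for.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "' OR '1'='1"  # a classic injection payload typed into a login form

    # Vulnerable: the input is pasted into the SQL string, so the payload
    # rewrites the WHERE clause and matches every row in the table.
    unsafe = "SELECT * FROM users WHERE name = '" + user_input + "'"
    print("string-built query returns:", conn.execute(unsafe).fetchall())

    # Safer: a parameterized query passes the input as data, not as SQL,
    # so the payload matches nothing.
    safe = "SELECT * FROM users WHERE name = ?"
    print("parameterized query returns:", conn.execute(safe, (user_input,)).fetchall())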

4. Cell phone and smartphone data security holes

In the UK alone, there are currently more mobile phones than people. This very fact means that new data security risks are being discovered in this arena on a daily basis. For instance, there is a new generation of worms specifically targeted at smartphones (let's call them "iWorms"). In September, it was discovered that the ZeuS botnet was specifically attacking cell phones. Using infected HTML forms on the victim's browser, it would obtain their cell number and then send a text message containing the new malware SymbOS/Zitmo.A!tr (for "Zeus In The Mobile") to this number. The malware, which was designed to intercept and divert banking transactions, would then install itself in the background.

Many Apple users wishing to circumvent SIM card restrictions to a specific network provider or to use applications that are unavailable through the Apple store perform a process known as jailbreaking to remove the usage and access limitations imposed by Apple. This process allows users to gain root access to the command line of their device's operating system. The risk inherent with jailbreaking is that it makes many of the devices more vulnerable to attack; for instance, the majority of users do not change the SSH password after performing a jailbreak – this is a serious failing because Apple's default root password "alpine" is now widely known. If the password is not changed, the device is susceptible to unauthorized third-party access.

5. Zero-day exploits in operating systems

A zero-day attack is the term given to a threat that exploits vulnerabilities that are not yet publicly known and for which there is no patch. In other words, the manufacturer of a system first becomes aware of the vulnerability on the actual day of the attack - or even later. This gives hackers the perfect opportunity to exploit the holes. This type of operating system attack is particularly dangerous because the cyber criminals have direct remote access to the affected systems. They require no additional tools such as browsers or Java; the only requirement is that the target computer is online. There is no way to protect against zero-day exploits, because patches and first-aid measures can only be published after the fact. It is not only Windows computers that are affected by this problem; the growing prevalence of Macs means that they are also becoming a target for zero-day attacks.

Cloud Computing - What is it? How does it work?

Cloud computing refers to the provision of computational resources on demand via a computer network.
Cloud computing can be compared to the supply of electricity and gas, or the provision of telephone, television and postal services. All of these services are presented to the users in a simple way that is easy to understand without the users needing to know how the services are provided. This simplified view is called an abstraction. Similarly, cloud computing offers computer application developers and users an abstract view of services that simplifies and ignores much of the details and inner workings. A provider's offering of abstracted Internet services is often called "The Cloud".

How it works
When a user accesses the cloud for a popular website, many things can happen. The user's IP address, for example, can be used to establish where the user is located (geolocation). DNS services can then direct the user to a cluster of servers close to the user, so the site can be accessed rapidly and in the local language. The user doesn't log in to a server; they log in to the service they are using by obtaining a session ID and/or a cookie, which is stored in their browser.
What the user sees in the browser will usually come from a cluster of web servers. The web servers run software which presents the user with an interface used to collect commands or instructions from the user (clicks, typing, uploads, etc.). These commands are then interpreted by the web servers or processed by application servers. Information is then stored on, or retrieved from, database servers or file servers, and the user is presented with an updated page. The data across the multiple servers is synchronised around the world for rapid global access.
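
As a rough illustration of the session-ID-and-cookie step described above, the following minimal Python sketch (standard library only; the handler and the in-memory session store are invented for the example) hands the browser a random session ID in a cookie on its first visit and recognizes it on later requests. Real cloud services do the same thing at far larger scale, with the session state shared across many servers.

    import http.server
    import secrets

    sessions = {}  # session_id -> user data; in-memory only, for illustration

    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Look for an existing session_id cookie sent by the browser.
            session_id = None
            for part in self.headers.get("Cookie", "").split(";"):
                name, _, value = part.strip().partition("=")
                if name == "session_id":
                    session_id = value

            self.send_response(200)
            if session_id in sessions:
                body = ("Welcome back, " + sessions[session_id]).encode()
            else:
                # First visit: create a session and hand the browser a cookie.
                session_id = secrets.token_hex(16)
                sessions[session_id] = "guest"
                body = b"New session created"
                self.send_header("Set-Cookie", "session_id=" + session_id + "; HttpOnly")
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        http.server.HTTPServer(("localhost", 8000), Handler).serve_forever()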

Technical Description
Cloud computing is computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Parallels to this concept can be drawn with the electricity grid where end-users consume power resources without any necessary understanding of the component devices in the grid required to provide the service.

Cloud computing describes a new supplement, consumption, and delivery model for IT services based on Internet protocols, and it typically involves provisioning of dynamically scalable and often virtualized resources. It is a byproduct and consequence of the ease-of-access to remote computing sites provided by the Internet. This frequently takes the form of web-based tools or applications that users can access and use through a web browser as if they were programs installed locally on their own computers. The National Institute of Standards and Technology (NIST) provides a somewhat more objective and specific definition:

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Typical cloud computing providers deliver common business applications online that are accessed from another Web service or software like a Web browser, while the software and data are stored on servers.

Most cloud computing infrastructures consist of services delivered through common centers and built on servers. Clouds often appear as single points of access for consumers' computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers, and typically include service level agreements (SLAs).

Characteristics
The key characteristic of cloud computing is that the computing is "in the cloud" i.e. the processing (and the related data) is not in a specified, known or static place(s). This is in contrast to a model in which the processing takes place in one or more specific servers that are known. All the other concepts mentioned are supplementary or complementary to this concept.

Architecture
Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services and 3-tier architecture. This resembles the Unix philosophy of having multiple programs each doing one thing well and working together over universal interfaces. Complexity is controlled and the resulting systems are more manageable than their monolithic counterparts.

The two most significant components of cloud computing architecture are known as the front end and the back end. The front end is the part seen by the client, i.e. the computer user. This includes the client’s network (or computer) and the applications used to access the cloud via a user interface such as a web browser. The back end of the cloud computing architecture is the ‘cloud’ itself, comprising various computers, servers and data storage devices.

History
The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents.

Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture, autonomic and utility computing. Details are abstracted from end-users, who no longer have need for expertise in, or control over, the technology infrastructure "in the cloud" that supports them.

The underlying concept of cloud computing dates back to the 1960s, when John McCarthy opined that "computation may someday be organized as a public utility." Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public, private, government and community forms, were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility.

The actual term "cloud" borrows from telephony in that telecommunications companies, who until the 1990s primarily offered dedicated point-to-point data circuits, began offering Virtual Private Network (VPN) services with comparable quality of service but at a much lower cost. By switching traffic to balance utilization as they saw fit, they were able to utilize their overall network bandwidth more effectively. The cloud symbol was used to denote the demarcation point between that which was the responsibility of the provider from that of the user. Cloud computing extends this boundary to cover servers as well as the network infrastructure. The first scholarly use of the term “cloud computing” was in a 1997 lecture by Ramnath Chellappa.

Amazon played a key role in the development of cloud computing by modernizing its data centers after the dot-com bubble; like most computer networks, they were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture produced significant internal efficiency improvements, whereby small, fast-moving "two-pizza teams" could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.

In 2007, Google, IBM and a number of universities embarked on a large scale cloud computing research project.
In early 2008, Eucalyptus became the first open source AWS API compatible platform for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission funded project, became the first open source software for deploying private and hybrid clouds and for the federation of clouds. In the same year, efforts were focused on providing QoS guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission funded project.

By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "[o]rganisations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to cloud computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."

Key characteristics
  • Agility improves with users' ability to rapidly and inexpensively re-provision technological infrastructure resources.
  • Application Programming Interface (API) accessibility to software that enables machines to interact with cloud software in the same way the user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs; a minimal example of such a call appears after this list.
  • Cost is claimed to be greatly reduced and in a public cloud delivery model capital expenditure is converted to operational expenditure. This ostensibly lowers barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house).
  • Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
  • Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
    • Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
    • Peak-load capacity increases (users need not engineer for highest possible load-levels)
    • Utilization and efficiency improvements for systems that are often only 10–20% utilized.
  • Reliability is improved if multiple redundant sites are used, which makes well designed cloud computing suitable for business continuity and disaster recovery. Nonetheless, many major cloud computing services have suffered outages, and IT and business managers can at times do little when they are affected.
  • Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads. Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
  • Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than under traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. However, the complexity of security is greatly increased when data is distributed over a wider area or greater number of devices and in multi-tenant systems which are being shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
  • Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer. They are easier to support and to improve, as the changes reach the clients instantly.
  • Metering means that cloud computing resources usage should be measurable and should be metered per client and application on a daily, weekly, monthly, and yearly basis.
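
As a minimal illustration of the REST-style API access mentioned in the list above, the sketch below issues an authenticated GET request for a hypothetical cloud resource using only the Python standard library. The URL, the bearer token and the JSON fields are all invented; every real provider defines its own endpoints and authentication scheme.

    import json
    import urllib.request

    # Hypothetical endpoint: cloud providers typically expose management
    # operations as REST resources addressed by URL, e.g. a list of instances.
    url = "https://cloud.example.com/api/v1/instances"

    request = urllib.request.Request(
        url,
        headers={
            "Authorization": "Bearer YOUR_API_TOKEN",  # placeholder credential
            "Accept": "application/json",
        },
    )

    with urllib.request.urlopen(request) as response:
        instances = json.load(response)               # parse the JSON reply

    for instance in instances:
        print(instance["id"], instance["status"])     # fields assumed for the example
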
Deployment models - Cloud computing types

Public cloud
Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who bills on a fine-grained utility computing basis.

Community cloud
A community cloud may be established where several organizations have similar requirements and seek to share infrastructure so as to realize some of the benefits of cloud computing. With the costs spread over fewer users than a public cloud (but more than a single tenant) this option is more expensive but may offer a higher level of privacy, security and/or policy compliance. Examples of community clouds include Google's "Gov Cloud".

Hybrid cloud and hybrid IT delivery
The main responsibility of the IT department is to deliver services to the business. With the proliferation of cloud computing (both private and public) and the fact that IT departments must also deliver services via traditional, in-house methods, the newest catch-phrase has become “hybrid cloud computing.” Hybrid cloud is also called hybrid delivery by the major vendors, including HP, IBM, Oracle and VMware, who offer technology to manage the complexity of the performance, security and privacy concerns that result from the mixed delivery of IT services.
A hybrid storage cloud uses a combination of public and private storage clouds. Hybrid storage clouds are often useful for archiving and backup functions, allowing local data to be replicated to a public cloud.

Another perspective on deploying a web application in the cloud is using Hybrid Web Hosting, where the hosting infrastructure is a mix between cloud hosting and managed dedicated servers – this is most commonly achieved as part of a web cluster in which some of the nodes are running on real physical hardware and some are running on cloud server instances.

Combined cloud
Two clouds that have been joined together are more correctly called a "combined cloud". A combined cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises". By integrating multiple cloud services users may be able to ease the transition to public cloud services while avoiding issues such as PCI compliance.

Private cloud
Douglas Parkhill first described the concept of a "private computer utility" in his 1966 book The Challenge of the Computer Utility. The idea was based upon direct comparison with other industries (e.g. the electricity industry) and the extensive use of hybrid supply models to balance and mitigate risks.

"Private cloud" and "internal cloud" have been described as neologisms, but the concepts themselves pre-date the term cloud by 40 years. Even within modern utility industries, hybrid models still exist despite the formation of reasonably well-functioning markets and the ability to combine multiple providers.

Some vendors have used the terms to describe offerings that emulate cloud computing on private networks. These (typically virtualization automation) products offer the ability to host applications or virtual machines in a company's own set of hosts. These provide the benefits of utility computing – shared hardware costs, the ability to recover from failure, and the ability to scale up or down depending upon demand.

Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from lower up-front capital costs and less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept". Enterprise IT organizations use their own private cloud(s) for mission critical and other operational systems to protect critical infrastructures.

Cloud engineering
Cloud engineering is the application of a systematic, disciplined, quantifiable, and interdisciplinary approach to the ideation, conceptualization, development, operation, and maintenance of cloud computing, as well as the study and applied research of the approach, i.e., the application of engineering to the cloud. It is a maturing and evolving discipline that facilitates the adoption, strategization, operationalization, industrialization, standardization, productization, commoditization, and governance of cloud solutions, leading towards a cloud ecosystem. Cloud engineering is also known as cloud service engineering.

Cloud storage
Cloud storage is a model of networked computer data storage where data is stored on multiple virtual servers, generally hosted by third parties, rather than being hosted on dedicated servers. Hosting companies operate large data centers; and people who require their data to be hosted buy or lease storage capacity from them and use it for their storage needs. The data center operators, in the background, virtualize the resources according to the requirements of the customer and expose them as virtual servers, which the customers can themselves manage. Physically, the resource may span across multiple servers.

The Intercloud
The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet "network of networks" on which it is based. The term was first used in the context of cloud computing in 2007, when Kevin Kelly stated that "eventually we'll have the intercloud, the cloud of clouds. This Intercloud will have the dimensions of one machine comprising all servers and attendant cloudbooks on the planet." It became popular in 2009 and has also been used to describe the datacenter of the future.

The Intercloud scenario is based on the key observation that no single cloud has infinite physical resources. If a cloud saturates the computational and storage resources of its virtualization infrastructure, it would not be able to satisfy further requests for service allocations from its clients. The Intercloud scenario aims to address this situation: in theory, each cloud can use the computational and storage resources of the virtualization infrastructures of other clouds. Such a form of pay-for-use may introduce new business opportunities among cloud providers, if they manage to move beyond the theoretical framework. Nevertheless, the Intercloud raises many more challenges than solutions concerning cloud federation, security, interoperability, quality of service, vendor lock-in, trust, legal issues, monitoring and billing.

The concept of a competitive utility computing market which combined many computer utilities together was originally described by Douglas Parkhill in his 1966 book, The Challenge of the Computer Utility. This concept has subsequently been used many times over the last 40 years and is identical to the Intercloud.

About Shubham..

Jamshedpur, Jharkhand, India
A can't-live-without-technology type of teen... Blogger by hobby...