Saturday 28 March 2015

Microsoft Windows OS

Windows has seen nine major versions since its first release in 1985. Over 29 years later, Windows looks very different but somehow familiar with elements that have survived the test of time, increases in computing power and – most recently – a shift from the keyboard and mouse to the touchscreen.
Here’s a brief look at the history of Windows, from its birth at the hands of Bill Gates with Windows 1 to the latest arrival under new Microsoft chief executive Satya Nadella.

Windows 1

Windows 1
The first version of Windows. Photograph: Wikipedia
This is where it all started for Windows. The original Windows 1 was released in November 1985 and was Microsoft’s first true attempt at a graphical user interface in 16-bit.
Development was spearheaded by Microsoft founder Bill Gates and ran on top of MS-DOS, which relied on command-line input.
It was notable because it relied heavily on use of a mouse before the mouse was a common computer input device. To help users become familiar with this odd input system, Microsoft included a game, Reversi (visible in the screenshot) that relied on mouse control, not the keyboard, to get people used to moving the mouse around and clicking onscreen elements.

Windows 2

Windows 2
Windows 2 with overlapping windows. Photograph: Wikipedia
Two years after the release of Windows 1, Microsoft’s Windows 2 replaced it in December 1987. The big innovation for Windows 2 was that windows could overlap each other, and it also introduced the ability to minimise or maximise windows instead of “iconising” or “zooming”.
The control panel, where various system settings and configuration options were collected together in one place, was introduced in Windows 2 and survives to this day.
Microsoft Word and Excel also made their first appearances running on Windows 2.

Windows 3

Windows 3.0
Windows 3.0 got colourful.
The first Windows that required a hard drive launched in 1990. Windows 3 was the first version to see more widespread success and be considered a challenger to Apple’s Macintosh and the Commodore Amiga graphical user interfaces, coming pre-installed on computers from PC-compatible manufacturers including Zenith Data Systems.
Windows 3 introduced the ability to run MS-DOS programmes in windows, which brought multitasking to legacy programmes, and supported 256 colours bringing a more modern, colourful look to the interface.
More important - at least to the sum total of human time wasted - it introduced the card-moving timesink (and mouse use trainer) Solitaire.

Windows 3.1

Windows 3.1
Windows 3.1 with Minesweeper. Photograph: Wikipedia
Windows 1 and 2 both had point release updates, but Windows 3.1 released in 1992 is notable because it introduced TrueType fonts making Windows a viable publishing platform for the first time.
Minesweeper also made its first appearance. Windows 3.1 required 1MB of RAM to run and allowed supported MS-DOS programs to be controlled with a mouse for the first time. Windows 3.1 was also the first Windows to be distributed on a CD-ROM, although once installed on a hard drive it only took up 10 to 15MB (a CD can typically store up to 700MB).

Windows 95

Windows 95
Windows 95: oh hello Start menu.
As the name implies, Windows 95 arrived in August 1995 and with it brought the first ever Start button and Start menu (launched with a gigantic advertising campaign that used the Rolling Stones’ Start Me Up, and a couple of months later Friends stars Jennifer Aniston and Matthew Perry. Could it be any more up-to-date?)
It also introduced the concept of “plug and play” – connect a peripheral and the operating system finds the appropriate drivers for it and makes it work. That was the idea; it didn’t always work in practice.
Windows 95 also introduced a 32-bit environment, the task bar and focused on multitasking. MS-DOS still played an important role for Windows 95, which required it to run some programmes and elements.
Internet Explorer also made its debut on Windows 95, but it was not installed by default, requiring the Windows 95 Plus! pack. Later revisions of Windows 95 included IE by default, as Netscape Navigator and NCSA Mosaic were popular at the time.

Windows 98

Windows 98
Windows 98, the last great DOS-based Windows. Photograph: Wikipedia
Released in June 1998, Windows 98 built on Windows 95 and brought with it IE 4, Outlook Express, Windows Address Book, Microsoft Chat and NetShow Player, which was replaced by Windows Media Player 6.2 in Windows 98 Second Edition in 1999.
Windows 98 introduced the back and forward navigation buttons and the address bar in Windows Explorer, among other things. One of the biggest changes was the introduction of the Windows Driver Model for computer components and accessories – one driver to support all future versions of Windows.
USB support was much improved in Windows 98 and led to its widespread adoption, including USB hubs and USB mice.

Windows ME

Windows ME
Windows ME was one to skip. Photograph: Wikipedia
Considered a low point in the Windows series by many – at least, until they saw Windows Vista – Windows Millennium Edition was the last Windows to be based on MS-DOS, and the last in the Windows 9x line.
Released in September 2000, it was the consumer-aimed operating system twinned with Windows 2000, which was aimed at the enterprise market. It introduced some important concepts to consumers, including more automated system recovery tools.
IE 5.5, Windows Media Player 7 and Windows Movie Maker all made their appearance for the first time. Autocomplete also appeared in Windows Explorer, but the operating system was notorious for being buggy, failing to install properly and being generally poor.

Windows 2000

Windows 2000
Windows 2000 was ME’s enterprise twin. Photograph: Wikipedia
The enterprise twin of ME, Windows 2000 was released in February 2000 and was based on Microsoft’s business-orientated system Windows NT and later became the basis for Windows XP.
Microsoft’s automatic updating played an important role in Windows 2000, which also became the first Windows to support hibernation.

Windows XP

Windows XP
Windows XP still survives to this day. Photograph: Schrift-Architekt/flickr
Arguably one of the best Windows versions, Windows XP was released in October 2001 and brought Microsoft’s enterprise line and consumer line of operating systems under one roof.
It was based on Windows NT like Windows 2000, but brought the consumer-friendly elements from Windows ME. The Start menu and task bar got a visual overhaul, bringing the familiar green Start button, blue task bar and vista wallpaper, along with various shadow and other visual effects.
ClearType, which was designed to make text easier to read on LCD screens, was introduced, as were built-in CD burning, autoplay from CDs and other media, plus various automated update and recovery tools, that unlike Windows ME actually worked.
Windows XP was the longest running Microsoft operating system, seeing three major updates and support up until April 2014 – 13 years from its original release date. Windows XP was still used on an estimated 430m PCs when it was discontinued.
Its biggest problem was security: though it had a firewall built in, it was turned off by default. Windows XP’s huge popularity turned out to be a boon for hackers and criminals, who exploited its flaws, especially in Internet Explorer, mercilessly - leading Bill Gates to initiate a “Trustworthy Computing” initiative and the subsequent issuance of two Service Pack updates that hardened XP against attack substantially.

Windows Vista

Windows Vista
Windows Vista, arguably worse than Windows ME. Photograph: Microsoft
Windows XP stayed the course for close to six years before being replaced by Windows Vista in January 2007. Vista updated the look and feel of Windows with more focus on transparent elements, search and security. Its development, under the codename “Longhorn”, was troubled, with ambitious elements abandoned in order to get it into production.
It was buggy and burdened the user with hundreds of requests for app permissions under “User Account Control” - the outcome of the Trustworthy Computing initiative, which now meant that users had to approve or disapprove attempts by programs to make various changes. The problem with UAC was that it led to complacency, with people clicking “yes” to almost anything - taking security back to the pre-UAC state. It also ran slowly on older computers despite them being deemed “Vista Ready” - a labelling that saw Microsoft sued, because not all versions of Vista could run on PCs with that label.
PC gamers saw a boost from Vista’s inclusion of Microsoft’s DirectX 10 technology.
Windows Media Player 11 and IE 7 debuted, along with Windows Defender an anti-spyware programme. Vista also included speech recognition, Windows DVD Maker and Photo Gallery, as well as being the first Windows to be distributed on DVD. Later a version of Windows Vista without Windows Media Player was created in response to anti-trust investigations.

Windows 7

Windows 7
Windows 7 was everything Windows Vista should have been. Photograph: Wikipedia
Considered by many as what Windows Vista should have been, Windows 7 was first released in October 2009. It was intended to fix all the problems and criticism faced by Vista, with slight tweaks to its appearance and a concentration on user-friendly features and less “dialogue box overload”.
It was faster, more stable and easier to use, becoming the operating system most users and business would upgrade to from Windows XP, forgoing Vista entirely.
Handwriting recognition debuted in 7, as did the ability to “snap” windows to the tops or sides of the screen, allowing faster more automatic window resizing.
Windows 7 saw Microsoft hit in Europe with antitrust investigations over the pre-installing of IE, which led to a browser ballot screen being shown to new users, allowing them to choose which browser to install on first boot.

Windows 8

Windows 8 on a Surface Pro tablet
Windows 8 focused more on touch than a keyboard and mouse.
Released in October 2012, Windows 8 was Microsoft’s most radical overhaul of the Windows interface, ditching the Start button and Start menu in favour of a more touch-friendly Start screen.
The new tiled interface saw programme icons and live tiles, which displayed at-a-glance information normally associated with “widgets”, replace the lists of programmes and icons. A desktop was still included, which resembled Windows 7.
Windows 8 was faster than previous versions of Windows and included support for the new, much faster USB 3.0 devices. The Windows Store, which offers universal Windows apps that run in a full-screen mode only, was introduced. Programs could still be installed from third-parties like other iterations of Windows, but they could only access the traditional desktop interface of Windows.
The radical overhaul was not welcomed by many. Microsoft attempted to tread a fine line between touchscreen support and desktop users, but ultimately desktop users wanting to control Windows with a traditional mouse and keyboard and not a touchscreen felt Windows 8 was a step back. There were also too few touchscreens in use, or on offer, to make its touch-oriented interface useful or even necessary - despite the parallel rise of tablets such as the iPad, and smartphones, which had begun outselling PCs by the end of 2010.
Windows RT, which runs on ARM-based processors traditionally found in smartphones and non-PC tablets, was introduced at the same time as Windows 8 with the Microsoft Surface tablet. It looked and felt like Windows 8, but could not run traditional Windows applications, instead relying solely on apps from the Windows Store.

History of Computers






This chapter is a brief summary of the history of computers. It is supplemented by two PBS documentary videotapes, "Inventing the Future" and "The Paperback Computer". The chapter highlights some of the advances to look for in the documentaries.
In particular, when viewing the movies you should look for two things:
  • The progression in hardware representation of a bit of data:
    1. Vacuum Tubes (1950s) - one bit the size of a thumb;
    2. Transistors (1950s and 1960s) - one bit the size of a fingernail;
    3. Integrated Circuits (1960s and 70s) - thousands of bits on a device the size of a hand;
    4. Silicon computer chips (1970s and on) - millions of bits on a chip the size of a fingernail.
  • The progression of the ease of use of computers:
    1. Almost impossible to use except by very patient geniuses (1950s);
    2. Programmable by highly trained people only (1960s and 1970s);
    3. Useable by just about anyone (1980s and on).
Together, these show how computers got smaller, cheaper, and easier to use.

First Computers


Eniac:
Eniac Computer
The first substantial computer was the giant ENIAC machine, built by John W. Mauchly and J. Presper Eckert at the University of Pennsylvania. ENIAC (Electrical Numerical Integrator and Calculator) used a word of 10 decimal digits instead of binary ones like previous automated calculators/computers. ENIAC was also the first machine to use more than 2,000 vacuum tubes, using nearly 18,000 vacuum tubes. Housing all those vacuum tubes and the machinery required to keep them cool took up over 167 square meters (1800 square feet) of floor space. Nonetheless, it had punched-card input and output and arithmetically had 1 multiplier, 1 divider-square rooter, and 20 adders employing decimal "ring counters," which served as adders and also as quick-access (0.0002 seconds) read-write register storage.
The executable instructions composing a program were embodied in the separate units of ENIAC, which were plugged together to form a route through the machine for the flow of computations. These connections had to be redone for each different problem, together with presetting function tables and switches. This "wire-your-own" instruction technique was inconvenient, and only with some license could ENIAC be considered programmable; it was, however, efficient in handling the particular programs for which it had been designed. ENIAC is generally acknowledged to be the first successful high-speed electronic digital computer (EDC) and was productively used from 1946 to 1955. A controversy developed in 1971, however, over the patentability of ENIAC's basic digital concepts, the claim being made that another U.S. physicist, John V. Atanasoff, had already used the same ideas in a simpler vacuum-tube device he built in the 1930s while at Iowa State College. In 1973, the court found in favor of the company using the Atanasoff claim, and Atanasoff received the acclaim he rightly deserved.









Progression of Hardware


In the 1950s, two devices were invented that would improve the computer field and set in motion the beginning of the computer revolution. The first of these two devices was the transistor. Invented in 1947 by William Shockley, John Bardeen, and Walter Brattain of Bell Labs, the transistor was fated to oust the vacuum tube from computers, radios, and other electronics.
Vacuum Tubes
The vacuum tube, used up to this time in almost all the computers and calculating machines, had been invented by American physicist Lee De Forest in 1906. The vacuum tube, which is about the size of a human thumb, worked by using large amounts of electricity to heat a filament inside the tube until it was cherry red. One result of heating this filament up was the release of electrons into the tube, which could be controlled by other elements within the tube. De Forest's original device was a triode, which could control the flow of electrons to a positively charged plate inside the tube. A zero could then be represented by the absence of an electron current to the plate; the presence of a small but detectable current to the plate represented a one.
Transistors
Vacuum tubes were highly inefficient, required a great deal of space, and needed to be replaced often. Computers of the 1940s and 50s had 18,000 tubes in them, and housing all these tubes and cooling the rooms from the heat produced by 18,000 tubes was not cheap. The transistor promised to solve all of these problems, and it did so. Transistors, however, had their problems too. The main problem was that transistors, like other electronic components, needed to be soldered together. As a result, the more complex the circuits became, the more complicated and numerous the connections between the individual transistors became, and the greater the likelihood of faulty wiring.
In 1958, this problem too was solved by Jack St. Clair Kilby of Texas Instruments. He manufactured the first integrated circuit or chip. A chip is really a collection of tiny transistors which are connected together when the transistor is manufactured. Thus, the need for soldering together large numbers of transistors was practically nullified; now only connections were needed to other electronic components. In addition to saving space, the speed of the machine was now increased since there was a diminished distance that the electrons had to follow.

Circuit board and silicon chip


Mainframes to PCs


The 1960s saw large mainframe computers become much more common in large industries and with the US military and space program. IBM became the unquestioned market leader in selling these large, expensive, error-prone, and very hard to use machines.
A veritable explosion of personal computers occurred in the late 1970s, starting with Steve Jobs and Steve Wozniak exhibiting the first Apple II at the First West Coast Computer Faire in San Francisco. The Apple II boasted a built-in BASIC programming language, color graphics, and a 4100 character memory for only $1298. Programs and data could be stored on an everyday audio-cassette recorder. Before the end of the fair, Wozniak and Jobs had secured 300 orders for the Apple II, and from there Apple just took off.
Also introduced in 1977 was the TRS-80, a home computer manufactured by Tandy Radio Shack. Its second incarnation, the TRS-80 Model II, came complete with a 64,000 character memory and a disk drive on which to store programs and data. At this time, only Apple and TRS had machines with disk drives. With the introduction of the disk drive, personal computer applications took off, as a floppy disk was a most convenient publishing medium for distribution of software.
IBM, which up to this time had been producing mainframes and minicomputers for medium to large-sized businesses, decided that it had to get into the act and started working on the Acorn, which would later be called the IBM PC. The PC was the first computer designed for the home market to feature a modular design, so that pieces could easily be added to the architecture. Most of the components, surprisingly, came from outside of IBM, since building it with IBM parts would have cost too much for the home computer market. When it was introduced, the PC came with a 16,000 character memory, a keyboard from an IBM electric typewriter, and a connection for a tape cassette player, for $1265.
By 1984, Apple and IBM had come out with new models. Apple released the first generation Macintosh, which was the first computer to come with a graphical user interface (GUI) and a mouse. The GUI made the machine much more attractive to home computer users because it was easy to use. Sales of the Macintosh soared like nothing ever seen before. IBM was hot on Apple's tail and released the 286-AT, which, with applications like Lotus 1-2-3, a spreadsheet, and Microsoft Word, quickly became the favourite of business concerns.
That brings us up to about ten years ago. Now people have their own personal graphics workstations and powerful home computers. The average computer a person might have in their home is more powerful by several orders of magnitude than a machine like ENIAC. The computer revolution has been the fastest growing technology in man's history.


Timeline

If you would like more detail, visit this annotated timeline with pictures and paragraphs on the important advances in computers since the 1940s.

ADSENSE MONEY MAKING TIPS


Hello friends, today we will talk about Adsense again, but this time we will not be talking about how to make money with Adsense or how to get an approved Adsense account. Instead we have picked a twisted but very important topic for bloggers. So, you are a blogger. OK, tell me: how many page views or unique visitors does your blog get, and how much money are you making daily as a Google publisher? I am asking because today we are talking about exactly that: "How much traffic do you need to make money with Adsense". Any expert blogger reading this will think it's a crazy topic, but it's not, because in Google Search there are many searches on similar phrases like "adsense how much traffic to make money" and "how much traffic adsense make money". So I am writing for those searchers, as well as for those publishers who really don't know about this. They suffer and, in the end, get irritated due to lack of money and quit blogging. I don't want them to quit, so here I will explain briefly how much traffic they need to make money with Adsense. Let's start without wasting too much time.
How much Traffic do you need to Make Money with Adsense

How Much Traffic You Need to Make money with Google Adsense

#1 Cost Per Click (CPC)

First of all you need to learn about "cost per click", which is really the key to earning money with Adsense. Many of you should know that Adsense earnings depend on keywords. There are keywords which will pay you $100+ for a single click, but the majority of keywords will only pay $0.10 to $5. So you need to target those high-paying keywords in your articles. Some of those are loans, car insurance, health, forex trading, auto insurance, etc.

#2 Click Through Rate (CTR)

This is the most important thing for Adsense publishers, because if you have tons of traffic and are still not making enough money, then there is something wrong with your ad placement or its colour. So always put emphasis on this. Your CTR needs to be around 5 to 8%, which is perfect. For further explanation, suppose you get 100 page views and 8 of them result in clicks on your ads:
8 clicks / 100 page views = 8% CTR
So you will have an 8% CTR. Now, I think that's enough explanation; it's time to move on and do more traffic calculations.
Suppose your blog receives about 500 visitors per day.
500 Page Views – $2 Cost Per Click – 8 clicks per 100 Page views

Then, Per day Earnings will be: – 500*8*2/100 = $80
It means that with this amount of traffic and this CTR, you will earn $80. But I think that is too much of a dream-type calculation, so we make it a realistic one below.
1,250 Page Views – $ 0.40 Cost per Click – 4 clicks per 100 Page Views
Then, per day earnings will be: 1250*.40*4/100 = $20
Now, this is a realistic calculation. You will need a minimum of 1,250 page views per day to earn $20 from Adsense. Many newbie and medium bloggers find themselves here or below this. For those who are below this landmark, I will only advise you to keep this calculation in mind and work accordingly. Soon you will get there.
To Earn $500 Per day:-
It is not going to be easy, because it needs a lot of page views to hit this earning landmark. But still, we can give you the calculation for this.
If you are getting 20 Dollars for 1250 Page Views, then, to get $500 per day, you will need 31250 Page Views daily with a CPC of .40.
31250*.40*4/100 = $500
So, This Number of Pageviews can help you earn $500 per day. But this is not easy as i said before.
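To make the arithmetic above reusable, here is a minimal sketch of the same simplified model (earnings = page views x CTR/100 x CPC) in Java. The class and method names are just for illustration and have nothing to do with any AdSense API:

public class AdsenseEstimator
{
    //estimated daily earnings = page views * (CTR / 100) * cost per click
    public static double estimateDailyEarnings(int pageViews, double ctrPercent, double cpc)
    {
        double clicks = pageViews * (ctrPercent / 100.0);
        return clicks * cpc;
    }

    public static void main(String[] args)
    {
        //the realistic example above: 1,250 page views, 4% CTR, $0.40 CPC
        System.out.println(estimateDailyEarnings(1250, 4.0, 0.40));   //prints 20.0
        //the $500/day target: 31,250 page views, 4% CTR, $0.40 CPC
        System.out.println(estimateDailyEarnings(31250, 4.0, 0.40));  //prints 500.0
    }
}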
Here I want to tell you one thing more: there are some ads based on CPM (cost per mille), which means you get paid per 1,000 impressions. But in Adsense the majority of ads are CPC based, so you mainly need to increase your CPC and, to get more earnings, also increase your CTR.
But still, if you are not able to earn much from Adsense, then I suggest you try some of the Google Adsense alternatives. Although no one can match Google's advertising network, you can still try.
I will recommend Reading:- How to Increase Cost per Click this Year
So, I hope you Liked this Article “How much Traffic it Needs to Make money with Adsense“. Don’t Forget to Share it on Social Networks. happy Blogging.

Friday 27 March 2015

Face Detection Document


CHAPTER-1
INTRODUCTION
A smart environment is one that is able to identify people, interpret their actions, and react appropriately. Thus, one of the most important building blocks of smart environments is a person identification system. Face recognition devices are ideal for such systems, since they have recently become fast, cheap, unobtrusive, and, when combined with voice-recognition, are very robust against changes in the environment. Moreover, since humans primarily recognize each other by their faces and voices, they feel comfortable interacting with an environment that does the same.
Facial recognition systems are built on computer programs that analyze images of human faces for the purpose of identifying them. The programs take a facial image, measure characteristics such as the distance between the eyes, the length of the nose, and the angle of the jaw, and create a unique file called a "template." Using templates, the software then compares that image with another image and produces a score that measures how similar the images are to each other. Typical sources of images for use in facial recognition include video camera signals and pre-existing photos such as those in driver's license databases.
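As a toy illustration of the template idea (not the algorithm used by any particular commercial system), a template can be modelled as a vector of measured features and the comparison as a similarity score between two such vectors. The class name and feature interpretation below are hypothetical:

public class FaceTemplate
{
    //hypothetical measurements, e.g. eye distance, nose length, jaw angle
    private final double[] features;

    public FaceTemplate(double[] features)
    {
        this.features = features;
    }

    //cosine similarity: closer to 1.0 means the two templates are more alike
    public double similarity(FaceTemplate other)
    {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < features.length; i++)
        {
            dot   += features[i] * other.features[i];
            normA += features[i] * features[i];
            normB += other.features[i] * other.features[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}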
Facial recognition systems are computer-based security systems that are able to automatically detect and identify human faces. These systems depend on a recognition algorithm, such as eigenface or the hidden Markov model. The first step for a facial recognition system is to recognize a human face and extract it for the rest of the scene. Next, the system measures nodal points on the face, such as the distance between the eyes, the shape of the cheekbones and other distinguishable features.
These nodal points are then compared to the nodal points computed from a database of pictures in order to find a match. Obviously, such a system is limited based on the angle of the face captured and the lighting conditions present. New technologies are currently in development to create three-dimensional models of a person's face based on a digital photograph in order to create more nodal points for comparison. However, such technology is inherently susceptible to error given that the computer is extrapolating a three-dimensional model from a two-dimensional photograph.
Principle Component Analysis is an eigenvector method designed to model linear variation in high-dimensional data. PCA performs dimensionality reduction by projecting the original n-dimensional data onto the k << n -dimensional linear subspace spanned by the leading eigenvectors of the data’s covariance matrix. Its goal is to find a set of mutually orthogonal basis functions that capture the directions of maximum variance in the data and for which the coefficients are pair wise decorrelated. For linearly embedded manifolds, PCA is guaranteed to discover the dimensionality of the manifold and produces a compact representation.

Facial Recognition Applications:

Facial recognition is deployed in large-scale citizen identification applications, surveillance applications, law enforcement applications such as booking stations, and kiosks.


1.1 Problem Definition
          Facial recognition systems are computer-based security systems that are able to automatically detect and identify human faces. These systems depend on a recognition algorithm, but most algorithms consider somewhat global data patterns during the recognition process, which does not yield an accurate recognition system. So we propose a face recognition system that is able to recognize faces with as much accuracy as possible.

 1.2 System Environment
The front end is designed and executed with J2SDK 1.4.0, handling the core Java part with Swing user-interface components. Java is a robust, object-oriented, multi-threaded, distributed, secure and platform-independent language. It has a wide variety of packages to implement our requirements, and a number of classes and methods can be utilized for programming purposes. These features make it easier for programmers to implement the required concepts and algorithms in Java.
The features of Java as follows:
          Core Java contains concepts like exception handling, multithreading and streams, which can be well utilized in the project environment.
Exception handling can be done with predefined exceptions, and there is provision for writing custom exceptions for our application.

Garbage collection is done automatically, so memory management is handled safely.
The user interface can be built with the Abstract Window Toolkit (AWT) and also the Swing classes. These provide a variety of classes for components and containers. We can create instances of these classes, and each instance denotes a particular object that can be utilized in our program.
Event handling is performed with the delegation event model. Objects are registered with a listener that observes for events; when an event takes place, the corresponding handler method is called by the listener, which is defined in the form of interfaces.
This application makes use of the ActionListener interface, and click events are handled through it. The actionPerformed() method contains the details of the response to the event.
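For illustration, here is a minimal Swing sketch of the delegation event model described above. The button label, class name and layout choices are illustrative only and are not taken from the project code:

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;

public class ClickDemo implements ActionListener
{
    private final JLabel status = new JLabel("Waiting for a click...");

    //called by the listener mechanism when the button fires its event
    public void actionPerformed(ActionEvent e)
    {
        status.setText("Button clicked");
    }

    public static void main(String[] args)
    {
        ClickDemo demo = new ClickDemo();
        JButton button = new JButton("Train");
        button.addActionListener(demo);          //register the listener with the button

        JFrame frame = new JFrame("Event handling demo");
        frame.getContentPane().add(button, BorderLayout.CENTER);
        frame.getContentPane().add(demo.status, BorderLayout.SOUTH);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }
}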
Java also contains concepts like Remote Method Invocation and networking, which can be useful in a distributed environment.
CHAPTER-2
SYSTEM ANALYSIS


2.1 Existing System:
                    Many face recognition techniques have been developed over the past few decades. One of the most successful and well-studied approaches to face recognition is the appearance-based method. When using appearance-based methods, we usually represent an image of size m x n pixels by a vector in an (m x n)-dimensional space. In practice, however, these spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques.

 Two of the most popular techniques for this purpose are,
2.1.1 Principal Component Analysis (PCA).
2.1.2 Linear Discriminant Analysis (LDA).


2.1.1 Principal Component Analysis (PCA):
The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of feature space (independent variables), which are needed to describe the data economically. This is the case when there is a strong correlation between observed variables. The jobs which PCA can do are prediction, redundancy removal, feature extraction, data compression, etc. Because PCA is a known powerful technique which can do something in the linear domain, applications having linear models are suitable, such as signal processing, image processing, system and control theory, communications, etc.

The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D face image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of face images (vectors).
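As a minimal sketch of the eigenspace projection step described above, assuming the mean face and the leading eigenvectors have already been computed by some other routine (all names here are illustrative):

public class EigenspaceProjector
{
    //each row of eigenvectors is one eigenvector of the covariance matrix (length n)
    public static double[] project(double[] image, double[] meanFace, double[][] eigenvectors)
    {
        int n = image.length;        //number of pixels in the 1-D image vector
        int k = eigenvectors.length; //number of leading eigenvectors kept (k << n)
        double[] centered = new double[n];
        for (int i = 0; i < n; i++)
        {
            centered[i] = image[i] - meanFace[i];   //subtract the mean face
        }
        double[] weights = new double[k];
        for (int j = 0; j < k; j++)
        {
            double w = 0;
            for (int i = 0; i < n; i++)
            {
                w += eigenvectors[j][i] * centered[i];  //dot product with eigenvector j
            }
            weights[j] = w;     //coordinate of the image in the k-dimensional eigenspace
        }
        return weights;
    }
}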


2.1.2 Linear Discriminant Analysis (LDA):

                    LDA is a supervised learning algorithm. LDA searches for the projection axes on which the data points of different classes are far from each other while requiring data points of the same class to be close to each other. Unlike PCA, which encodes information in an orthogonal linear space, LDA encodes discriminating information in a linearly separable space using bases that are not necessarily orthogonal. It is generally believed that algorithms based on LDA are superior to those based on PCA.
But most of these algorithms consider somewhat global data patterns during the recognition process, which does not yield an accurate recognition system.
                                                                                                         
·        Less accurate
·        Does not deal with manifold structure
·        Does not deal with biometric characteristics

2.2 Proposed System:
         
  PCA and LDA aim to preserve the global structure. However, in many real-world applications, the local structure is more important. In this section, we describe Locality Preserving Projection (LPP), a new algorithm for learning a locality preserving subspace.

The objective function of LPP is as follows:

                 min ∑ij (yi − yj)² Sij

where yi is the projection of the image vector xi and S is the weight matrix defined on the nearest-neighbor graph described below.
           The manifold structure is modeled by a nearest-neighbor graph which preserves the local structure of the image space. A face subspace is obtained by Locality Preserving Projections (LPP). Each face image in the image space is mapped to a low-dimensional face subspace, which is characterized by a set of feature images called Laplacianfaces. The face subspace preserves local structure and seems to have more discriminating power than the PCA approach for classification purposes. We also provide a theoretical analysis to show that PCA, LDA, and LPP can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. In our theoretical analysis, we show how PCA, LDA, and LPP arise from the same principle applied to different choices of this graph structure.




It is worthwhile to highlight several aspects of the proposed approach here:

1. While the Eigenfaces method aims to preserve the global structure of the image space, and the Fisherfaces method aims to preserve the discriminating information, our Laplacianfaces method aims to preserve the local structure of the image space, which real-world applications mostly need.

2. An efficient subspace learning algorithm for face recognition should be able to discover the nonlinear manifold structure of the face space. Our proposed Laplacianfaces method explicitly considers the manifold structure, which is modeled by an adjacency graph that reflects the intrinsic face manifold structure.

3. LPP shares some similar properties with LLE. However, LPP is linear, while LLE is nonlinear. Moreover, LPP is defined everywhere, while LLE is defined only on the training data points, and it is unclear how to evaluate the map for new test points. In contrast, LPP may be simply applied to any new data point to locate it in the reduced representation space.

The algorithmic procedure of Laplacianfaces is formally stated below:

1. PCA projection.
We project the image set {xi} into the PCA subspace by throwing away the smallest principal components. In our experiments, we kept 98 percent of the information in the sense of reconstruction error. For the sake of simplicity, we still use xi to denote the images in the PCA subspace in the following steps. We denote by WPCA the transformation matrix of PCA.

2. Constructing the nearest-neighbor graph.
Let G denote a graph with m nodes, one per training image. The ith node corresponds to the face image xi. We put an edge between nodes i and j if xi and xj are “close,” i.e., xj is among the k nearest neighbors of xi, or xi is among the k nearest neighbors of xj. The constructed nearest-neighbor graph is an approximation of the local manifold structure. Note that here we do not use the ε-neighborhood to construct the graph, simply because it is often difficult to choose an optimal ε in real-world applications, while the nearest-neighbor graph can be constructed more stably. The disadvantage is that the nearest-neighbor search increases the computational complexity of our algorithm. When computational complexity is a major concern, one can switch to the ε-neighborhood.

3. Choosing the weights.
If nodes i and j are connected, put

    Sij = exp( −||xi − xj||² / t )

where t is a suitable constant. Otherwise, put Sij = 0. The weight matrix S of graph G models the face manifold structure by preserving local structure. The justification for this choice of weights can be traced back to the Laplacian Eigenmaps literature.

4. Eigenmap.
Compute the eigenvectors and eigenvalues for the generalized eigenvector problem

    XLXT w = λ XDXT w

where D is a diagonal matrix whose entries are column (or row, since S is symmetric) sums of S, Dii = ∑j Sji, and L = D − S is the Laplacian matrix. The ith column of matrix X is xi.
These eigenvalues are equal to or greater than zero because the matrices XLXT and XDXT are both symmetric and positive semi-definite. Let w0, w1, ..., wk−1 be the solutions ordered according to their eigenvalues. Thus, the embedding is as follows:

    x → y = WT x,    W = WPCA WLPP,    WLPP = [w0, w1, ..., wk−1]

where y is a k-dimensional vector and W is the transformation matrix. This linear mapping best preserves the manifold’s estimated intrinsic geometry in a linear sense. The column vectors of W are the so-called Laplacianfaces.
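A minimal sketch, in the same Java used elsewhere in this report, of steps 2 and 3 above: building the heat-kernel weight matrix S and the Laplacian L = D − S from the training vectors. The generalized eigenproblem itself is assumed to be solved by a separate routine, and all class and method names are illustrative:

public class LppGraph
{
    //build the heat-kernel weight matrix S for a symmetric k-nearest-neighbor graph
    public static double[][] buildWeights(double[][] x, int k, double t)
    {
        int m = x.length;
        double[][] s = new double[m][m];
        for (int i = 0; i < m; i++)
        {
            for (int j = 0; j < m; j++)
            {
                if (i != j && (isNeighbor(x, i, j, k) || isNeighbor(x, j, i, k)))
                {
                    s[i][j] = Math.exp(-squaredDistance(x[i], x[j]) / t);
                }
            }
        }
        return s;
    }

    //L = D - S, where D is diagonal with Dii = sum over j of Sji
    public static double[][] laplacian(double[][] s)
    {
        int m = s.length;
        double[][] l = new double[m][m];
        for (int i = 0; i < m; i++)
        {
            double dii = 0;
            for (int j = 0; j < m; j++)
            {
                dii += s[j][i];
            }
            for (int j = 0; j < m; j++)
            {
                l[i][j] = (i == j ? dii : 0) - s[i][j];
            }
        }
        return l;
    }

    //true if x[j] is among the k nearest neighbors of x[i]
    private static boolean isNeighbor(double[][] x, int i, int j, int k)
    {
        double dij = squaredDistance(x[i], x[j]);
        int closer = 0;
        for (int p = 0; p < x.length; p++)
        {
            if (p != i && p != j && squaredDistance(x[i], x[p]) < dij)
            {
                closer++;
            }
        }
        return closer < k;
    }

    private static double squaredDistance(double[] a, double[] b)
    {
        double sum = 0;
        for (int i = 0; i < a.length; i++)
        {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }
}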
          This principle is implemented with an unsupervised learning approach using training and test data.

 The system is required to implement Principal Component Analysis to reduce images to a dimension less than n and to compute the covariance of the data.
The system uses an unsupervised learning algorithm, so it must be trained properly with relevant data sets. Based on this training, input data is tested by the application and the result is displayed to the user.



2.3 System Requirement

Hardware specifications:
           Processor      :  Intel Processor IV
           RAM            :  128 MB
           Hard disk      :  20 GB
           CD drive       :  40x Samsung
           Floppy drive   :  1.44 MB
           Monitor        :  15" Samtron color
           Keyboard       :  108-key Mercury keyboard
           Mouse          :  Logitech mouse
    
 Software Specification:
Operating System – Windows XP/2000 
Language used – J2sdk1.4.0

2.4 System Analysis Methods
System analysis can be defined as a method of determining how best to use resources and machines and how to perform tasks to meet the information needs of an organization. It is also a management technique that helps us in designing a new system or improving an existing one. The four basic elements in system analysis are
·                    Output
·                    Input
·                    Files
·                    Process
The above-mentioned are the four bases of system analysis.

2.5 Feasibility Study
                             Feasibility is the study of whether or not the project is worth doing. The process that follows this determination is called a feasibility study. The study is undertaken within time constraints and normally culminates in a written and oral feasibility report. The feasibility study is categorized into different types. They are
·        Technical Analysis
·        Economical Analysis
·        Performance Analysis
·        Control and Security Analysis
·        Efficiency Analysis
·        Service Analysis

2.5.1 Technical Analysis

This analysis is concerned with specifying the software that will successfully satisfy the user requirements. The technical needs of a system are the ability to produce the outputs in a given time and to meet the response time under certain conditions.

2.5.2 Economic Analysis

Economic analysis is the most frequently used technique for evaluating the effectiveness of a proposed system. This is called cost/benefit analysis. It is used to determine the benefits and savings that are expected from a proposed system and compare them with the costs. If the benefits outweigh the costs, then the decision is taken to move to the design phase and implement the system.

2.5.3 Performance Analysis

The analysis of the performance of a system is also very important. It analyses the performance of the system both before and after the proposed system is introduced. If the analysis proves satisfactory from the company’s side, then the result is moved to the next analysis phase. Performance analysis means looking at program execution to pinpoint where bottlenecks or other performance problems, such as memory leaks, might occur. If a problem is spotted, it can be rectified.



2.5.4 Efficiency Analysis

This analysis mainly deals with the efficiency of the system for this project. The resources required by the program to perform a particular function are analyzed in this phase. It also checks how efficiently the project runs on the system, in spite of any changes to the system. The efficiency should be analyzed in such a way that the user does not feel any difference in the way of working. Besides, it should be taken into consideration that the project should run on the system for a long time.
CHAPTER-3
SYSTEM DESIGN

                   Design is concerned with identifying software components, specifying relationships among components, specifying software structure, and providing a blueprint for the documentation phase.

                     Modularity is one of the desirable properties of large systems. It implies that the system is divided into several parts in such a manner that the interaction between parts is minimal and clearly specified.

                   Design will explain the software components in detail. This will help the implementation of the system. Moreover, it will guide further changes in the system to satisfy future requirements.


3.1  Project modules:

3.1.1 Read/Write Module:

                        Here, the basic operations are provided for loading input images and saving the resultant images produced by the algorithms. The image files are read, processed, and new images are written to the output files.



3.1.2 Resizing Module:

                        Here, the faces are converted to an equal size using a linear resizing algorithm, for calculation and comparison. In this module, larger or smaller images are converted to a standard size.

3.1.3 Image Manipulation:

Here, the face recognition algorithm using Locality Preserving Projections (LPP) is developed for the various faces enrolled into the database.

3.1.4 Testing Module:

Here, the input images are resized and then compared with the intermediate image to find the tested image, which is then compared again with the Laplacianfaces to find the accurate face.

                                            Designing Flow Diagram



3.2 System Development
This system is developed to implement Principal Component Analysis. Image manipulation: this module is designed to view all the faces that are considered in our training case. Principal Component Analysis is an eigenvector method designed to model linear variation in high-dimensional data. PCA performs dimensionality reduction by projecting the original n-dimensional data onto the k << n -dimensional linear subspace spanned by the leading eigenvectors of the data’s covariance matrix. Its goal is to find a set of mutually orthogonal basis functions that capture the directions of maximum variance in the data and for which the coefficients are pairwise decorrelated. For linearly embedded manifolds, PCA is guaranteed to discover the dimensionality of the manifold and produces a compact representation.

1)Training module:
                     Unsupervised learning is learning from observation and discovery. The data mining system is supplied with objects but no classes are defined, so it has to observe the examples and recognize patterns (i.e., class descriptions) by itself. This process requires a training data set. This system provides a training set of 17 faces, each with three different poses. It undergoes an iterative process and stores the required details in a two-dimensional faceTemplate array.

          2) Test module:
After the training process is over, the input face image is processed by the eigenface procedure, and the system is then able to say whether it recognizes the face or not.


CHAPTER-4
IMPLEMENTATION

Implementation includes all those activities that take place to convert from the old system to the new. The new system may be totally new, replacing an existing system, or it may be a major modification to the system currently in use.
This system, “Face Recognition”, is a new system. Implementation as a whole involves all the tasks required to successfully replace the existing software or introduce new software to satisfy the requirements.

The entire work can be described as retrieval of faces from the database, processing them with the eigenface training method, executing the test cases, and finally displaying the result to the user.

The test cases were performed in all aspects and the system gave correct results in all cases.

4.1. Implementation Details:
4.1.1 Form design
A form is a tool with a message; it is the physical carrier of data or information. It can also constitute authority for actions. In the form design, separate files are used for each module. The following is a list of forms used in this project:


1) Main Form
Contains the option for viewing faces from the database. The system retrieves the images stored in the folders called train and test, which are available in the bin folder of the application.

          2) View database Form:
                    This form retrieves the faces available in the train folder. It is just for viewing purposes for the user.

3) Recognition Form :
                    This form provides the option for loading an input image from the test folder. The user then has to click the Train button, which leads the application to train and gain knowledge, as it uses an unsupervised learning algorithm.

Unsupervised learning - This is learning from observation and discovery. The data mining system is supplied with objects but no classes are defined so it has to observe the examples and recognize patterns (i.e. class description) by itself. This system results in a set of class descriptions, one for each class discovered in the environment. Again this is similar to cluster analysis as in statistics.
Then the user can click the Test button to see the match for the face. The matched face will be displayed in the place provided for the matched-face option. In case of any difference, the information will be displayed in the place provided on the form.





4.1.2 Input design
Inaccurate input data is the most common cause of errors in data processing. Errors entered by data entry operators can be controlled by input design. Input design is the process of converting user-originated inputs to a computer-based format. Input data are collected and organized into groups of similar data.

4.1.3 Menu Design
           The menu in this application is organized into an MDI form that organizes the viewing of image files from folders. It also has options for loading an image as input, performing the training method, and testing whether the face is recognized or not.

4.1.4 Data base design:
 A database is a collection of related data. A database has the following properties:
i) The database reflects changes to the information.
ii) A database is a logically coherent collection of data with some inherent meaning.
This application takes the images from the default folders set for this application, the train and test folders. The image files use the .jpeg extension.

4.1.5 Code Design
o   Face Enrollment
     - a new face can be added by the user into the facespace database

o   Face Verification
     - verifies a person's face in the database with reference to his/her identity

o   Face Recognition
     - compares a person's face with all the images in the database and chooses the closest match. Here Principal Component Analysis is performed with the training data set, and the result is produced from the test data set.

o   Face Retrieval
     - displays all the faces and their templates in the database

o   Statistics
     - stores a list of recognition accuracies for analyzing the FRR (False Rejection Rate) and FAR (False Acceptance Rate); a small sketch of these two rates follows this list
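As a small sketch of the two rates tracked by the Statistics module, using the standard definitions of FRR and FAR; the class and counter names are illustrative, not part of the project code:

public class RecognitionStats
{
    private int genuineAttempts;    //attempts by enrolled users
    private int falseRejections;    //enrolled users wrongly rejected
    private int impostorAttempts;   //attempts by non-enrolled users
    private int falseAcceptances;   //non-enrolled users wrongly accepted

    public void recordGenuine(boolean accepted)
    {
        genuineAttempts++;
        if (!accepted) falseRejections++;
    }

    public void recordImpostor(boolean accepted)
    {
        impostorAttempts++;
        if (accepted) falseAcceptances++;
    }

    //False Rejection Rate: share of genuine attempts that were rejected
    public double frr()
    {
        return genuineAttempts == 0 ? 0 : (double) falseRejections / genuineAttempts;
    }

    //False Acceptance Rate: share of impostor attempts that were accepted
    public double far()
    {
        return impostorAttempts == 0 ? 0 : (double) falseAcceptances / impostorAttempts;
    }
}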

4.2 Coding:

import java.lang.*;
import java.io.*;

public class PGM_ImageFilter
{
            //file paths and status flag referenced below (declared here so that
            //this fragment is self-contained; the full class may declare more fields)
            private String inFilePath;
            private String outFilePath;
            private boolean printStatus=true;

            //constructor
            public PGM_ImageFilter()
            {
                        inFilePath="";
                        outFilePath="";
            }

            //get functions
            public String get_inFilePath()
            {
                        return(inFilePath);
            }
           
            public String get_outFilePath()
            {
                        return(outFilePath);
            }
           
            //set functions
            public void set_inFilePath(String tFilePath)
            {
                        inFilePath=tFilePath;
            }
           
            public void set_outFilePath(String tFilePath)
            {
                        outFilePath=tFilePath;
            }

            //methods
            public void resize(int wout,int hout)
            {
                        PGM imgin=new PGM();
                        PGM imgout=new PGM();
           
                        if(printStatus==true)
                        {
                                    System.out.print("\nResizing...");
                        }
                        int r,c,inval,outval;
           
                        //read input image
                        imgin.setFilePath(inFilePath);
                        imgin.readImage();
           
                        //set output-image header
                        imgout.setFilePath(outFilePath);
                        imgout.setType("P5");
                        imgout.setComment("#resized image");
                        imgout.setDimension(wout,hout);
                        imgout.setMaxGray(imgin.getMaxGray());
           
                        //resize algorithm (linear)
                        double win,hin;
                        int xi,ci,yi,ri;
           
                        win=imgin.getCols();
                        hin=imgin.getRows();
           
                        for(r=0;r<hout;r++)
                        {
                                    for(c=0;c<wout;c++)
                                    {
                                                xi=c;
                                                yi=r;
           
                                                ci=(int)(xi*((double)win/(double)wout));
                                                ri=(int)(yi*((double)hin/(double)hout));
                                               
                                                inval=imgin.getPixel(ri,ci);
                                                outval=inval;
           
                                                imgout.setPixel(yi,xi,outval);
                                    }
                        }
           
                        if(printStatus==true)
                        {
                                    System.out.println("done.");
                        }
           
                        //write output image
                        imgout.writeImage();
            }

            //...remaining methods of PGM_ImageFilter omitted in this listing
}
CHAPTER-5
             SYSTEM TESTING

5.1 Software Testing
Software Testing is the process of confirming the functionality and correctness of software by running it. Software testing is usually performed for one of two reasons:

     i) Defect detection
     ii)Reliability estimation.

Software Testing contains two types of testing. They are
          1) White Box Testing
          2) Black Box Testing

1)    White Box Testing

White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty.





2) Black Box Testing

 Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled.

Functional testing is a testing process that is black box in nature. It is aimed at examining the overall functionality of the product. It usually includes testing of all the interfaces and should therefore involve the clients in the process.

The key to software testing is trying to find the myriad failure modes - something that would require exhaustively testing the code on all possible inputs. For most programs, this is computationally infeasible. Techniques that attempt to test as many of the syntactic features of the code as possible (within some set of resource constraints) are called white box software testing techniques. Techniques that do not consider the code’s structure when test cases are selected are called black box techniques.

In order to fully test a software product, both black and white box testing are required. The problem of applying software testing to defect detection is that testing can only suggest the presence of flaws, not their absence (unless the testing is exhaustive). The problem of applying software testing to reliability estimation is that the input distribution used for selecting test cases may be flawed. In both of these cases, the mechanism used to determine whether program output is correct is often impossible to develop. Obviously the benefit of the entire software testing process is highly dependent on many different pieces. If any of these parts is faulty, the entire process is compromised.

Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible.

The final stage of the testing process should be system testing. This type of test involves examination of the whole computer system: all the software components, all the hardware components and any interfaces. The whole computer-based system is checked not only for validity but also to ensure it meets the objectives.

5.2 Efficiency of Laplacian Algorithm

                           Now, consider a simple example of image variability. Imagine that a set of face images is generated while the human face rotates slowly. Thus, we can say that the set of face images is intrinsically one-dimensional.
                         Many recent works show that face images reside on a low-dimensional submanifold of the image space. Therefore, an effective subspace learning algorithm should be able to detect the nonlinear manifold structure. PCA and LDA effectively see only the Euclidean structure; thus, they fail to detect the intrinsic low dimensionality. With its neighborhood-preserving character, the Laplacianfaces capture the intrinsic face manifold structure.
                        Fig. 1 shows an example in which the face images of a person, with various poses and expressions, are mapped into a two-dimensional subspace. The size of each image is 20 x 28 pixels, with 256 gray levels per pixel. Thus, each face image is represented by a point in the 560-dimensional ambient space. However, these images are believed to come from a submanifold with few degrees of freedom.

                       The face images are mapped into a two-dimensional space with continuous changes in pose and expression. Representative face images are shown in different parts of the space. The face images are divided into two parts: the left part includes the face images with open mouths, and the right part includes the face images with closed mouths. This is because the mapping tries to preserve local structure; specifically, it makes points that are neighbors in the image space nearer in the face space. The 10 testing samples can be simply located in the reduced representation space by the Laplacianfaces (column vectors of the matrix W).


FIGURE:3
           As can be seen, these testing samples optimally find their coordinates which reflect their intrinsic properties, i.e., pose and expression. This observation tells us that the Laplacianfaces are capable of capturing the intrinsic face manifold structure.


                              The eigenvalues of LPP and LaplacianEigenmap.

Fig. 3 shows the eigenvalues computed by the two methods. As can be seen, the eigenvalues of LPP are consistently greater than those of the Laplacian Eigenmaps.




5.2. 1 Experimental Results
 A face image can be represented as a point in image space. However, due to the unwanted variations resulting from changes in lighting, facial expression, and pose, the image space might not be an optimal space for visual representation.
We can display the eigenvectors as images. These images may be called Laplacianfaces. Using the Yale face database as the training set, we present the first 10 Laplacianfaces in Fig. 4, together with Eigen faces and Fisher faces. A face image can be mapped into the locality preserving subspace by using the Laplacian faces.
5.2.2 Face Recognition Using Laplacianfaces
In this section, we investigate the performance of our proposed Laplacianfaces method for face recognition. The system performance is compared with the Eigen faces method and the Fisher faces method.
         
           In this study, three face databases were tested. The first one is the PIE (pose, illumination, and expression) database, the second one is the Yale database, and the third one is the MSRA database.
                   
            In short, the recognition process has three steps. First, we calculate the Laplacianfaces from the training set of face images; then the new face image to be identified is projected into the face subspace spanned by the Laplacianfaces; finally, the new face image is identified by a nearest neighbor classifier.
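A minimal sketch of the final, nearest-neighbor step described above, assuming every training image has already been projected into the Laplacianface subspace (class and method names are illustrative):

public class NearestNeighborClassifier
{
    //returns the index of the training sample whose projection is closest
    //to the projected test image (squared Euclidean distance in the subspace)
    public static int classify(double[] testProjection, double[][] trainingProjections)
    {
        int best = -1;
        double bestDistance = Double.MAX_VALUE;
        for (int i = 0; i < trainingProjections.length; i++)
        {
            double sum = 0;
            for (int d = 0; d < testProjection.length; d++)
            {
                double diff = testProjection[d] - trainingProjections[i][d];
                sum += diff * diff;
            }
            if (sum < bestDistance)
            {
                bestDistance = sum;
                best = i;
            }
        }
        return best;    //the caller maps this index back to a person's identity
    }
}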
5.2.3 Yale Database
                      The Yale face database was constructed at the Yale Center for Computational Vision and Control. It contains 165 grayscale images of 15 individuals. The images demonstrate variations in lighting condition (left-light, center-light, right light),facial expression (normal, happy, sad, sleepy, surprised, and wink), and with/without glasses.

           A random subset with six images was taken for the training set. The rest was taken for testing. The testing samples were then projected into the low-dimensional Representation. Recognition was performed using a nearest-neighbor classifier.

                       In general, the performance of the Eigen faces method and the Laplacian faces method varies with the number of dimensions. We show the best results obtained by Fisher faces, Eigen faces, and Laplacian faces. The recognition results are shown in Table 1. It is found that the Laplacian faces method significantly outperforms both Eigen faces and Fisher faces methods.
5.2.4 PIE Database
                        Fig. 7 shows some of the faces with pose, illumination and expression variations from the PIE database. Table 2 shows the recognition results. As can be seen, Fisherfaces performs comparably to our algorithm on this database, while Eigenfaces performs poorly. The error rates for Laplacianfaces, Fisherfaces, and Eigenfaces are also compared; as can be seen, the error rate of our Laplacianfaces method decreases quickly as the dimensionality of the face subspace increases.

5.2.5 MSRA Database

                                This database was collected at Microsoft Research Asia. Sixty-four to eighty face images were collected for each individual in each session. All the faces are frontal. Fig. 9 shows the sample cropped face images from this database. In this test, one session was used for training and the other was used for testing.

TABLE 3 shows the recognition results. The Laplacianfaces method has a lower error rate than those of Eigenfaces and Fisherfaces.

CHAPTER-6
                                                       CONCLUSION

Our system proposes to use Locality Preserving Projections for face recognition, which eliminates the flaws in the existing system. The system reduces the faces to lower dimensions, and the LPP algorithm is performed for recognition. The application has been developed successfully and implemented as described above.

This system seems to be working well. It is able to use the provided training set and test input for recognition. Whether the face matched or not is indicated with a picture image if matched and a text message in case of any difference.
REFERENCES

1. X. He and P. Niyogi, “Locality Preserving Projections,” Proc. Conf. Advances in Neural Information Processing Systems, 2003.

2. A.U. Batur and M.H. Hayes, “Linear Subspace for Illumination Robust Face Recognition,” Dec. 2001.

3. M. Belkin and P. Niyogi, “Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering.”

4. P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE Trans. Pattern Analysis and Machine Intelligence, July 1997.

5. M. Belkin and P. Niyogi, “Using Manifold Structure for Partially Labeled Classification,” 2002.

6. M. Brand, “Charting a Manifold,” Proc. Conf. Advances in Neural Information Processing Systems, 2002.

7. F.R.K. Chung, “Spectral Graph Theory,” Proc. Regional Conf. Series in Math., no. 92, 1997.

8. Y. Chang, C. Hu, and M. Turk, “Manifold of Facial Expression,” Proc. IEEE Int’l Workshop Analysis and Modeling of Faces and Gestures, Oct. 2003.

9. R. Gross, J. Shi, and J. Cohn, “Where to Go with Face Recognition,” Proc. Third Workshop Empirical Evaluation Methods in Computer Vision, Dec. 2001.

10. A.M. Martinez and A.C. Kak, “PCA versus LDA,” IEEE Trans. Pattern Analysis and Machine Intelligence, Feb. 2001.




Folder Structure: Controller:   File location - cakephp/src/controller/HomepageController.php namespace App\Controller; use...