
Monday 30 April 2012

How the Google File System Works

Google is a multi-billion dollar company. It's one of the big power players on the World Wide Web and beyond. The company relies on a distributed computing system to provide users with the infrastructure they need to access, create and alter data. Surely Google buys state-of-the-art computers and servers to keep things running smoothly, right?
Wrong. The machines that power Google's operations aren't cutting-edge power computers with lots of bells and whistles. In fact, they're relatively inexpensive machines running on Linux operating systems. How can one of the most influential companies on the Web rely on cheap hardware? It's due to the Google File System (GFS), which capitalizes on the strengths of off-the-shelf servers while compensating for any hardware weaknesses. It's all in the design.
Google uses the GFS to organize and manipulate huge files and to allow application developers the research and development resources they require. The GFS is unique to Google and isn't for sale. But it could serve as a model for file systems for organizations with similar needs.
Some GFS details remain a mystery to anyone outside of Google. For example, Google doesn't reveal how many computers it uses to operate the GFS. In official Google papers, the company only says that there are "thousands" of computers in the system [source: Google]. But despite this veil of secrecy, Google has made much of the GFS's structure and operation public knowledge.

Google File System Basics

Google developers routinely deal with large files that can be difficult to manipulate using a traditional computer file system. The size of those files drove many of the decisions programmers had to make for the GFS's design. Another big concern was scalability -- the ease of adding capacity to the system. A system is scalable if its capacity can grow easily without performance suffering. Google requires a very large network of computers to handle all of its files, so scalability is a top concern.
Because the network is so huge, monitoring and maintaining it is a challenging task. While developing the GFS, programmers decided to automate as much of the administrative duties required to keep the system running as possible. This is a key principle of autonomic computing, a concept in which computers are able to diagnose problems and solve them in real time without the need for human intervention. The challenge for the GFS team was to not only create an automatic monitoring system, but also to design it so that it could work across a huge network of computers.
The key to the team's designs was the concept of simplification. They came to the conclusion that as systems grow more complex, problems arise more often. A simple approach is easier to control, even when the scale of the system is huge.
Based on that philosophy, the GFS team decided that users would have access to basic file commands: open, create, read, write and close files. The team also included two specialized commands, append and snapshot, created based on Google's needs. Append allows clients to add information to an existing file without overwriting previously written data. Snapshot creates a quick copy of a file or directory tree.
Files on the GFS tend to be very large, usually in the multi-gigabyte (GB) range. Accessing and manipulating files that large would take up a lot of the network's bandwidth. Bandwidth is the capacity of a system to move data from one location to another. The GFS addresses this problem by breaking files up into chunks of 64 megabytes (MB) each. Every chunk receives a unique 64-bit identification number called a chunk handle. While the GFS can process smaller files, its developers didn't optimize the system for those kinds of tasks.
By requiring all the file chunks to be the same size, the GFS simplifies resource allocation. It's easy to see which computers in the system are near capacity and which are underused. It's also easy to port chunks from one resource to another to balance the workload across the system.
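To make the chunking scheme concrete, here's a minimal Python sketch. The 64-MB chunk size and 64-bit chunk handle come from the description above; the function names and the random handle generator are invented for illustration (Google hasn't published its implementation).

```python
# Illustrative sketch only -- not Google's code. It mimics splitting a
# file into fixed-size chunks, each tagged with a unique 64-bit handle.
import secrets

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, the GFS chunk size

def split_into_chunks(data: bytes):
    """Yield (chunk_handle, chunk_bytes) pairs for a file's contents."""
    for offset in range(0, len(data), CHUNK_SIZE):
        handle = secrets.randbits(64)  # stand-in for a unique 64-bit chunk handle
        yield handle, data[offset:offset + CHUNK_SIZE]

# A 150-MB file becomes three chunks: 64 MB, 64 MB and 22 MB.
file_bytes = bytes(150 * 1024 * 1024)
for handle, chunk in split_into_chunks(file_bytes):
    print(f"chunk {handle:016x}: {len(chunk) / (1024 * 1024):.0f} MB")
```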

Google File System Architecture

Google organized the GFS into clusters of computers. A cluster is simply a network of computers. Each cluster might contain hundreds or even thousands of machines. Within GFS clusters there are three kinds of entities: clients, master servers and chunkservers.
In the world of GFS, the term "client" refers to any entity that makes a file request. Requests can range from retrieving and manipulating existing files to creating new files on the system. Clients can be other computers or computer applications. You can think of clients as the customers of the GFS.
The master server acts as the coordinator for the cluster. The master's duties include maintaining an operation log, which keeps track of the activities of the master's cluster. The operation log helps keep service interruptions to a minimum -- if the master server crashes, a replacement server that has monitored the operation log can take its place. The master server also keeps track of metadata, which is the information that describes chunks. The metadata tells the master server to which files the chunks belong and where they fit within the overall file. Upon startup, the master polls all the chunkservers in its cluster. The chunkservers respond by telling the master server the contents of their inventories. From that moment on, the master server keeps track of the location of chunks within the cluster.
There's only one active master server per cluster at any one time (though each cluster has multiple copies of the master server in case of a hardware failure). That might sound like a good recipe for a bottleneck -- after all, if there's only one machine coordinating a cluster of thousands of computers, wouldn't that cause data traffic jams? The GFS gets around this sticky situation by keeping the messages the master server sends and receives very small. The master server doesn't actually handle file data at all. It leaves that up to the chunkservers.
Chunkservers are the workhorses of the GFS. They're responsible for storing the 64-MB file chunks. The chunkservers don't send chunks to the master server. Instead, they send requested chunks directly to the client. The GFS copies every chunk multiple times and stores it on different chunkservers. Each copy is called a replica. By default, the GFS makes three replicas per chunk, but users can change the setting and make more or fewer replicas if desired.
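The bookkeeping described above -- which file a chunk belongs to, where it fits within that file and which chunkservers hold its replicas -- can be modeled with a toy structure like the one below. Every name here is invented for illustration; the real GFS data structures aren't public.

```python
# A toy model of the master's per-chunk metadata.
from dataclasses import dataclass, field

REPLICATION_FACTOR = 3  # the GFS default mentioned above

@dataclass
class ChunkRecord:
    handle: int                                    # 64-bit chunk handle
    file_path: str                                 # file the chunk belongs to
    index: int                                     # position within that file
    replicas: list = field(default_factory=list)   # chunkserver addresses

def place_replicas(chunk: ChunkRecord, chunkservers: list):
    """Assign the default three replicas to distinct chunkservers."""
    chunk.replicas = chunkservers[:REPLICATION_FACTOR]

record = ChunkRecord(handle=0x1A2B, file_path="/logs/web-00", index=0)
place_replicas(record, ["cs-01:7000", "cs-02:7000", "cs-03:7000", "cs-04:7000"])
print(record.replicas)   # ['cs-01:7000', 'cs-02:7000', 'cs-03:7000']
```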

Using the Google File System

File requests follow a standard work flow. A read request is simple -- the client sends a request to the master server to find out where the client can find a particular file on the system. The server responds with the location for the primary replica of the respective chunk. The primary replica holds a lease from the master server for the chunk in question.
If no replica currently holds a lease, the master server designates a chunk as the primary. It does this by comparing the IP address of the client to the addresses of the chunkservers containing the replicas. The master server chooses the chunkserver closest to the client. That chunkserver's chunk becomes the primary. The client then contacts the appropriate chunkserver directly, which sends the replica to the client.
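The shape of that read path -- a tiny metadata exchange with the master, then a bulk transfer straight from a chunkserver -- can be sketched like this. All class and method names are invented for illustration.

```python
CHUNK_SIZE = 64 * 1024 * 1024

class Master:
    def __init__(self):
        # (file path, chunk index) -> (chunk handle, [replica chunkservers])
        self.locations = {("/logs/web-00", 0): (0x1A2B, ["cs-01", "cs-02", "cs-03"])}
    def lookup(self, path, chunk_index):
        return self.locations[(path, chunk_index)]   # small, metadata-only reply

class Chunkserver:
    def __init__(self):
        self.chunks = {0x1A2B: b"...64 MB of chunk data..."}
    def read_chunk(self, handle, offset, length):
        return self.chunks[handle][offset:offset + length]

def read(master, chunkservers, path, offset, length):
    handle, replicas = master.lookup(path, offset // CHUNK_SIZE)
    primary = replicas[0]     # assume the master lists the lease holder first
    return chunkservers[primary].read_chunk(handle, offset % CHUNK_SIZE, length)

servers = {"cs-01": Chunkserver(), "cs-02": Chunkserver(), "cs-03": Chunkserver()}
print(read(Master(), servers, "/logs/web-00", 0, 10))   # bulk data skips the master
```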
Write requests are a little more complicated. The client still sends a request to the master server, which replies with the location of the primary and secondary replicas. The client stores this information in a memory cache. That way, if the client needs to refer to the same replica later on, it can bypass the master server. If the primary replica becomes unavailable or the replica changes, the client will have to consult the master server again before contacting a chunkserver.
The client then sends the write data to all the replicas, starting with the closest replica and ending with the furthest one. It doesn't matter if the closest replica is a primary or secondary. Google compares this data delivery method to a pipeline.
Once the replicas receive the data, the primary replica begins to assign consecutive serial numbers to each change to the file. Changes are called mutations. The serial numbers instruct the replicas on how to order each mutation. The primary then applies the mutations in sequential order to its own data. Then it sends a write request to the secondary replicas, which follow the same application process. If everything works as it should, all the replicas across the cluster incorporate the new data. The secondary replicas report back to the primary once the application process is over.
At that time, the primary replica reports back to the client. If the process was successful, it ends here. If not, the primary replica tells the client what happened. For example, if one secondary replica failed to update with a particular mutation, the primary replica notifies the client and retries the mutation application several more times. If the secondary replica doesn't update correctly, the primary replica tells the secondary replica to start over from the beginning of the write process. If that doesn't work, the master server will identify the affected replica as garbage.
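The heart of that write protocol is the serial numbering: every replica applies the same mutations in the same order. Below is a stripped-down sketch of that idea, with invented class names and none of the real system's network hops or failure handling.

```python
class Replica:
    def __init__(self):
        self.data = bytearray()
        self.applied = []                  # serial numbers seen, in order
    def apply(self, serial, mutation):
        self.applied.append(serial)
        self.data.extend(mutation)

class PrimaryReplica(Replica):
    def __init__(self, secondaries):
        super().__init__()
        self.secondaries = secondaries
        self.next_serial = 0
    def write(self, mutation):
        serial = self.next_serial          # consecutive serial number per mutation
        self.next_serial += 1
        self.apply(serial, mutation)       # primary applies first...
        for s in self.secondaries:         # ...then secondaries, in the same order
            s.apply(serial, mutation)
        return True                        # success reported back to the client

primary = PrimaryReplica([Replica(), Replica()])
primary.write(b"record A\n")
primary.write(b"record B\n")
print(primary.secondaries[0].applied)      # [0, 1] -- same order on every replica
```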

Other Google File System Functions

Apart from the basic services the GFS provides, there are a few special functions that help keep the system running smoothly. While designing the system, the GFS developers knew that certain issues were bound to pop up based upon the system's architecture. They chose to use cheap hardware, which made building a large system a cost-effective process. It also meant that the individual computers in the system wouldn't always be reliable. The cheap price tag went hand-in-hand with computers that have a tendency to fail.
The GFS developers built functions into the system to compensate for the inherent unreliability of individual components. Those functions include master and chunk replication, a streamlined recovery process, rebalancing, stale replica detection, garbage removal and checksumming.
While there's only one active master server per GFS cluster, copies of the master server exist on other machines. Some copies, called shadow masters, provide limited services even when the primary master server is active. Those services are limited to read requests, since those requests don't alter data in any way. The shadow master servers always lag a little behind the primary master server, but it's usually only a matter of fractions of a second. The master server replicas maintain contact with the primary master server, monitoring the operation log and polling chunkservers to keep track of data. If the primary master server fails and cannot restart, a secondary master server can take its place.
The GFS replicates chunks to ensure that data is available even if hardware fails. It stores replicas on different machines across different racks. That way, if an entire rack were to fail, the data would still exist in an accessible format on another machine. The GFS uses the unique chunk identifier to verify that each replica is valid. If one of the replica's handles doesn't match the chunk handle, the master server creates a new replica and assigns it to a chunkserver.
The master server also monitors the cluster as a whole and periodically rebalances the workload by shifting chunks from one chunkserver to another. All chunkservers run at near capacity, but never at full capacity. The master server also monitors chunks and verifies that each replica is current. If a replica doesn't match the chunk's identification number, the master server designates it as a stale replica. The stale replica becomes garbage. After three days, the master server can delete a garbage chunk. This is a safety measure -- users can check on a garbage chunk before it is deleted permanently and prevent unwanted deletions.
To prevent data corruption, the GFS uses a system called checksumming. The system breaks each 64-MB chunk into blocks of 64 kilobytes (KB). Each block within a chunk has its own 32-bit checksum, which is sort of like a fingerprint. Chunkservers verify these checksums whenever data is read. If a block's checksum doesn't match its stored value, the corrupted replica is discarded and the master server creates a new replica from a valid copy to replace it.
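A minimal sketch of block-level checksumming, using CRC-32 as a stand-in (the article says only that the checksum is 32 bits, not which algorithm GFS actually uses):

```python
import zlib

BLOCK_SIZE = 64 * 1024  # 64 KB blocks within each 64 MB chunk

def block_checksums(chunk: bytes):
    """Compute one 32-bit checksum per 64 KB block of a chunk."""
    return [zlib.crc32(chunk[i:i + BLOCK_SIZE])
            for i in range(0, len(chunk), BLOCK_SIZE)]

def verify(chunk: bytes, stored):
    """Recompute and compare; a mismatch flags a corrupted replica."""
    return block_checksums(chunk) == stored

chunk = b"x" * (256 * 1024)              # a small 256 KB stand-in chunk
stored = block_checksums(chunk)
corrupted = b"y" + chunk[1:]             # flip one byte
print(verify(chunk, stored), verify(corrupted, stored))   # True False
```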

Google File System Hardware

Google says little about the hardware it currently uses to run the GFS other than it's a collection of off-the-shelf, cheap Linux servers. But in an official GFS report, Google revealed the specifications of the equipment it used to run some benchmarking tests on GFS performance. While the test equipment might not be a true representation of the current GFS hardware, it gives you an idea of the sort of computers Google uses to handle the massive amounts of data it stores and manipulates.
The test equipment included one master server, two master replicas, 16 clients and 16 chunkservers. All of them used the same hardware with the same specifications, and they all ran on Linux operating systems. Each had dual 1.4 gigahertz Pentium III processors, 2 GB of memory and two 80 GB hard drives. In comparison, several vendors currently offer consumer PCs that are more than twice as powerful as the servers Google used in its tests. Google developers proved that the GFS could work efficiently using modest equipment.
The network connecting the machines consisted of 100-megabit-per-second (Mbps) full-duplex Ethernet connections and two Hewlett-Packard 2524 network switches. The GFS developers connected the 16 client machines to one switch and the other 19 machines to the other. They linked the two switches together with a one-gigabit-per-second (Gbps) connection.
By lagging behind the leading edge of hardware technology, Google can purchase equipment and components at bargain prices. The structure of the GFS is such that it's easy to add more machines at any time. If a cluster begins to approach full capacity, Google can add more cheap hardware to the system and rebalance the workload. If a master server's memory is overtaxed, Google can upgrade the master server with more memory. The system is truly scalable.
How did Google decide to use this system? Some credit Google's hiring policy. Google has a reputation for hiring computer science majors right out of graduate school and giving them the resources and space they need to experiment with systems like the GFS. Others say it comes from a "do what you can with what you have" mentality that many computer system developers (including Google's founders) seem to possess. In the end, Google probably chose the GFS because it's geared to handle the kinds of processes that help the company pursue its stated goal of organizing the world's information.


Saturday 28 April 2012

NFC Technology Could Rock Your World




Smartphones are no longer just fancy mobile devices that let you e-mail and surf the Web. A contemporary smartphone has more computing power than all of the computers that were at NASA's disposal back in 1969 when the United States first landed on the moon [source: PC Mag]. Although you probably won't use your phone to control your own lunar lander anytime soon, it will likely do all sorts of other nifty stuff, like replace your wallet, thanks to NFC (near-field communication) technology.
The beauty and utility of NFC -- a short-range, wireless communication standard -- can be summed up in three primary purposes: sharing, pairing and transactions. NFC can turn your phone into a digital wallet, become a master key for everything from your house to your car, or serve as a government or corporate identification badge. And that's just for starters. Check out a whole swath of other nifty uses at NFC Rumor's sprawling infographic.
The possibilities for NFC tech are limited only by the imaginations of clever engineers and programmers. And because of its vast range of uses, the revolution is starting with your phone.
Armed with these tiny chips, smartphones are about to graduate from smart to downright brainiac status. Right now, only about 34 million phones have NFC, but some experts think that number will blow past 100 million in 2012 [source: USA Today]. Keep reading and you'll see how NFC phones and other gadgets could transform your tech-driven life.

NFC Pays Your Way

Chuck your cash in the trash and snip every last credit card into itty bitty pieces. With NFC, your smartphone becomes an ATM and credit card all in one. Instead of counting cash or swiping a card, you'll just wave your phone at a payment kiosk to complete a transaction and receive an e-mail receipt instead of a paper one.
Of all of the capabilities that NFC may bring to fruition, payment options are perhaps the likeliest to emerge soon. Executives at Google actually expect NFC smartphones to account for about 50 percent of the phone marketplace by 2014 [source: Popular Science], which would likely benefit its Google Wallet application. Google Wallet is a smartphone app that lets you wave your phone at a properly-equipped point-of-sale register to pay for all kinds of goods and services.
Other credit card companies and wireless service providers are working on similar systems to compete with Google. And it's that competition and lack of standardization, along with a lack of NFC-capable checkout systems at your local stores, that may delay the deployment of widespread NFC payment options.
Still, some pundits, including those at Juniper Research, expect that NFC transactions will hit around $50 billion by 2014 [source: Retail Merchandiser]. So be ready -- your days of lugging around multiple plastic cards and a wad of paper money might just be numbered.




4: Data Grabbing Goodness

The chips and tags that an NFC-capable phone can read are so tiny that they could eventually be ubiquitous, embedded in everything from posters in movie theaters and schools to real estate signs, and much more. These so-called infotags or smart tags will offer up all sorts of information to anyone who waves a smartphone at them.
At a movie theater, patrons could touch their phones to a poster for an intriguing film and be instantly directed to an online trailer. Or at school, students could use their phones to grab updated information on schedules and announcements.
Strolling by a home that's for sale? Wiggle your phone at the real estate company's sign and your phone immediately brings up all pertinent sales information on that house, including a video tour of the interior.
The chips work even in places of more permanent residence. A system called Personal Rosetta Stone lets cemetery visitors pull information from chip-laden headstones to read the life stories and obituaries of the deceased [source: Rosetta Stone].
There are thousands of other applications for this technology, and smartphones will help drive the proliferation of NFC. But suffice it to say, your smartphone will only find more and more ways to gather information from your tech-saturated environment, no matter where on Earth you might be.

3: Chips are Good for Your Health

Don't let anyone tell you that chips are bad for you. When it comes to your health care, NFC tags and the smart devices that can read them may help make health care data more accurate, more efficient and safer for patients and their caregivers.
Forget the clunky, inefficient ERs of the past. Now, patients could check into medical facilities using their phones, tap their prescription bottles for the full instructions and side effects of a specific medication, and make payments for services and products.
Medical professionals can use their NFC phones to access secure areas, scan patient tags to ensure that each person is receiving appropriate medicine and care, and automatically receive updates on when to check that patient again.
And thanks to the quick spread of smartphones throughout the developing world, health workers can better identify patients and track specific ailments, both of which help improve patient referral, emergency response, and disease data collection. In an age where health authorities fear pandemics, NFC could put health workers ahead of their bacterial and viral foes.
You may get much better personal care, too. The more data your doctor collects on your environmental exposure and your body's idiosyncrasies, the more likely you'll receive accurate diagnoses. A company named Gentag makes diagnostic skin tags that are affixed directly to the patient. These tags can monitor temperature, glucose levels or ultraviolet light exposure and then send pertinent health information directly to a smartphone.
So chips really are good for you. NFC devices could save many lives, including yours, and improve the quality of life for people all over the globe.

2: A Legendary Digital Locksmith

You already know that your smartphone can replace your wayward billfold. It can also help you do away with your keys and security cards.
You don't really need a key to get into your car. Nor do you need that jagged bit of metal and plastic for engine ignition. All you really need is permission. And your NFC smartphone might soon be able to give you that permission. Just wave your phone to unlock your car; then tap the dash to fire up the engine.
When you arrive at work, you don't need to show your ID badge to a security guard. You don't even need your badge anymore, because your phone tells the NFC access point exactly who you are and unlocks the door for you.
Then, when you arrive home from a long day at the office, you won't need to dig through your purse for your keys. Your phone will unlock your apartment or house door so you can waltz in without even needing to twist a key.
As with all such technologies, there are indeed security concerns galore with NFC. It won't hurt you to do some reading before you recycle your keys (and credit cards) for good.
So although many of the first uses of NFC will likely apply to intangible digital payments, these examples show how NFC can grant access to all sorts of real physical places. You'll have fewer items to carry with you, too -- just don't lose your smartphone in the couch cushions.

1: Your Friendly Network Facilitator

You already know that NFC is good for sharing and transactions. It's also a handy way to quickly pair two devices so that they can exchange information via higher-speed networks, and in this sense, NFC could be heaven-sent, doing away with convoluted encryption schemes and long-winded, clunky passwords.
For example, if you and your co-worker are stranded at an airport and want to play a team racing game on your smartphones, you won't have to deal with a tedious configuration process. Instead, you can just tap your phones, and the NFC connection will authenticate your phones and let you immediately share a faster type of connection, such as Bluetooth or WiFi.
Want to print a photo that's on your phone? Tap your smartphone to an NFC inkjet printer and you can quickly start the print job. Or skip the printer and place your phone right next to your smart HDTV, and watch as your images appear on the screen without the need to set up a connection.
Now you know some of the ways that NFC might just live up to its hype in the next few years. While you're anxiously awaiting these marvelous new technologies, you can stay up to date on the latest NFC news and speculation at NFC Rumors.com, which details the many products and services that will put the power of NFC to use.
You can also jump into the fray and find an NFC-capable phone using this handy list. These phones might be your first taste of a wireless standard that will likely wow you and millions of others with its capabilities for a long time to come.



Top 5 Google Killers -- That Didn't


Whenever a product establishes itself as the dominant force in its particular market, people will be on the lookout for the next product or organization to push it off the top of the heap. It's the classic David versus Goliath story -- even if the Goliath is a product everyone likes. In the technology industry, it's not unusual for journalists and bloggers to refer to the upcoming product as a killer.
The technology blogosphere is filled with discussions about various killers. There are Apple iPhone killers -- the Palm Pre and HTC G1 both made that list. Then there are the various operating systems said to be Windows killers. But there's one Web Goliath that seems to collect more Davids than any other: Google.
Google began as a project headed by Stanford graduate students Larry Page and Sergey Brin. Their goal was to create the most powerful, accurate and comprehensive search engine on the Web. Their hard work paid off -- today, many people refer to the act of performing a Web search as "googling."

As the company grows, so too do the aspirations of the people behind Google. The company's mission is "to organize the world's information and make it universally accessible and useful" [source: Google]. It's telling that the mission doesn't specify online information -- Google's mission extends beyond the boundaries of the Web.
But Google isn't the only search engine game in town. Several companies and developers have created Web search tools. Some have even admitted to setting their sights on Google. Others say they're just trying to create a product that works well. And a few claim that their work isn't meant to compete with Google at all. We'll look at five Web products that journalists have described as Google killers.

5: Wikia Search

The Web 2.0 era has introduced dozens of new terms and phrases into the technology industry. One of the terms that has had a huge impact on the way people use the Web is wiki. A wiki is a site that uses a special kind of software that makes it easy for people to create and edit collaborative Web pages.
The most famous wiki on the Web is Wikipedia, the collaborative encyclopedia. One of the co-founders of Wikipedia is Jimmy Wales. Wales saw the success of collaborative work on the Web -- often called crowdsourcing -- and decided to apply that approach to search. That's how Wikia Search was born.
Wales hoped to create a search engine that harnessed the power of collaboration to produce the best, most relevant search results on the Web. Ideally, the collaborative process would be transparent and it would be hard for companies to game the system. Any registered user would be able to see who had made changes to search results pages and intervene if necessary.
In March 2009, Wales announced that his company was discontinuing the Wikia Search project indefinitely. The economic recession had hit the tech industry hard. As a result, there just wasn't enough money in the budget to support the development of Wikia Search. But we may still see the search engine resurface in the future.

4: Cuil

In the summer of 2008, a new search engine emerged onto the scene and began to make headlines. Headed by Web veterans -- including former Google employees -- this new search engine seemed poised to take on Google in a head-to-head competition. The engine's name was Cuil -- pronounced "cool."
The launch of Cuil wasn't exactly an example of smooth sailing. Rafe Needleman of CNET said that it launched in a "blaze of glory" followed by a collapse in a "ball of flames" [source: CNET]. The problem was that, despite claims that Cuil would search far more sites than Google or Microsoft, results came back incomplete or just plain wrong.
Cuil took a different approach to searching and ranking Web sites. Google's strategy is to search sites for keywords and then rank the sites based upon popularity. The more popular a Web site is, the higher it will rank on a Google results page. The philosophy behind this approach is pretty simple: If a lot of people link to a page, it must be pretty good.
Cuil attempted to rank pages not based upon popularity but by relevance. The search engine crawled through Web pages looking for keywords and searching for context. It looked not just for the phrase or word you searched for but also the rest of the content on the page. Theoretically, you should have received results that are most relevant to your query.
The problem was that Cuil didn't quite live up to user expectations when it launched. In fact, the site closed for business on Sept. 17, 2010 [source: Duan].

3: Wolfram|Alpha

Sometimes tech journalists will call a new service a Google killer even when it's not a search engine. That's the case with Wolfram|Alpha. It's easy to confuse Wolfram|Alpha with a search engine. It has a field into which you type a query and it searches its database for answers. But that's where the similarity ends.
Search engines provide users links to Web sites that presumably hold information the user wants. Wolfram|Alpha consults an enormous database to bring data directly to the user. You won't receive a list of links when you execute a query on Wolfram|Alpha. Instead, you'll be greeted with charts and graphs populated with data related to the keywords you entered.
This makes Wolfram|Alpha a very powerful research tool. Wolfram|Alpha employees vet all the information included in the database. They pull data from established and accepted resources. You can use Wolfram|Alpha to compare two subjects within the same category. Want to see if a Big Mac is healthier than a Whopper? Use Wolfram|Alpha to compare the nutritional information.
Because Wolfram|Alpha pulls back data rather than links, it's not in direct competition with Google. You should use Wolfram|Alpha if you need to know information about a specific concept. You should use Google if you want to read the latest news on the subject, find a product review or just browse.

2: Bing

Out of all of Google's potential rivals, one stands above all others: Microsoft. The software giant has a long history of dominating the computer marketplace. Almost everyone who has ever used a computer is familiar with the Windows operating system. Then there's Microsoft Office, a suite of productivity software that's very popular in the corporate world. As Google tries to edge into Microsoft's territory with products like Google Docs, Microsoft is doing the same thing to Google through search.
Microsoft has offered Web search engines under several names. The latest incarnation is called Bing. Bing has a snazzy interface and a simple navigation menu. You can search for Web site results, images, video, news and more. While Google search offers similar services, Bing's presentation has more style.
Microsoft has included other features within its search engine, too. Need to find a cheap airline fare? You can use Bing to search for ticket prices and the status of flights. Want to find out how many calories you consumed when you wolfed down that hot dog? You can use Bing to find out.
Bing enjoyed a big spike in user activity shortly after it debuted. Journalists remarked on the search result quality, particularly for images and videos. But later reports suggested that Bing's surge in popularity was short-lived. It appears that users just need search to be "good enough" without any of the bells and whistles you find in Bing. Could Bing bounce back and take Google's search throne?

1: Twitter Search

Last on our list is Twitter Search. Twitter is the messaging service that spans cell phones and the Web. Users can send messages of up to 140 characters in length to a network of followers. They can also reply to messages publicly or send direct messages to their correspondents. Twitter messages -- or tweets -- show up in a user's Twitter account chronologically. In general, newer tweets are at the top of the list. But there are dozens of different applications for computers and phones that can arrange tweets in different ways.
One of the more useful Twitter applications is Twitter Search. Type a keyword into Twitter Search right from the Twitter home page and you'll see the most recent public tweets that contain that keyword. You can take the pulse of the Twitter audience instantly. A quick glance at the time stamp on each tweet tells you if the topic you're searching for is generating a lot of interest or is dead in the water.
Twitter users have adapted their behaviors to make Twitter Search more useful. For example, the hashtag is a way to designate a term in your tweet. It consists of a # symbol followed by a keyword. Why use a hashtag? By searching for a term with a hashtag on it, you're more likely to pull up tweets that are relevant to your interests. Otherwise, you'll get a search results page containing every tweet that includes your keyword. If the keyword is a common term, you may have to sort through dozens of irrelevant messages before you find one that applies to your search.
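A toy Python filter shows the difference: matching the literal tag pulls in only the tweets whose authors deliberately labeled the topic.

```python
tweets = [
    "I ate an apple for lunch",
    "New #apple rumor: a tablet?",
    "apple pie recipe, anyone?",
    "#apple event live blog starts now",
]
plain  = [t for t in tweets if "apple" in t.lower()]    # bare keyword: 4 matches
tagged = [t for t in tweets if "#apple" in t.lower()]   # hashtag: 2 matches
print(len(plain), len(tagged))   # the hashtag cuts the noise in half
```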
Is Twitter Search a threat to Google? Well, it gives the user an instant glance at topics of interest. And Twitter Search results update as you plow through them, while Google search results take more time to update. But Twitter limits messages to 140 characters in length. Most of the time, you'll find more helpful information using Google. Exceptions include breaking news or tweets that contain links to sites that Google has yet to index.
There are lots of useful search engine tools on the Internet. Some of them even rival Google -- there might even be a few that are arguably better at returning searches than Google. But it looks like it's going to take more than a good search results page to topple this Goliath.





Cloud Computing Architecture



When talking about a cloud computing system, it's helpful to divide it into two sections: the front end and the back end. They connect to each other through a network, usually the Internet. The front end is the side the computer user, or client, sees. The back end is the "cloud" section of the system.
The front end includes the client's computer (or computer network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients.
On the back end of the system are the various computers, servers and data storage systems that create the "cloud" of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server.
A central server administers the system, monitoring traffic and client demands to ensure everything runs smoothly. It follows a set of rules called protocols and uses a special kind of software called middleware. Middleware allows networked computers to communicate with each other. Most of the time, servers don't run at full capacity. That means there's unused processing power going to waste. It's possible to fool a physical server into thinking it's actually multiple servers, each running with its own independent operating system. The technique is called server virtualization. By maximizing the output of individual servers, server virtualization reduces the need for more physical machines.
If a cloud computing company has a lot of clients, there's likely to be high demand for a lot of storage space. Some companies require hundreds of digital storage devices. A cloud computing system needs at least twice the number of storage devices it would otherwise require, because these devices, like all computers, occasionally break down. A cloud computing system must make a copy of all its clients' information and store it on other devices. The copies enable the central server to access backup machines to retrieve data that otherwise would be unreachable. Making copies of data as a backup is called redundancy.
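A bare-bones sketch of that redundancy idea, with invented names: every write is mirrored onto a second device, so a single failure loses nothing.

```python
class StorageDevice:
    def __init__(self):
        self.files = {}

def redundant_write(name, data, devices):
    """Store a copy of the data on two different devices."""
    for device in devices[:2]:
        device.files[name] = data

primary, backup = StorageDevice(), StorageDevice()
redundant_write("report.doc", b"quarterly numbers", [primary, backup])
primary.files.clear()                  # simulate a failed device
print(backup.files["report.doc"])      # the copy survives elsewhere
```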
Here are some of the applications of cloud computing:
  • Clients would be able to access their applications and data from anywhere at any time. They could access the cloud computing system using any computer linked to the Internet. Data wouldn't be confined to a hard drive on one user's computer or even a corporation's internal network.
  • It could bring hardware costs down. Cloud computing systems would reduce the need for advanced hardware on the client side. You wouldn't need to buy the fastest computer with the most memory, because the cloud system would take care of those needs for you. Instead, you could buy an inexpensive computer terminal. The terminal could include a monitor, input devices like a keyboard and mouse, and just enough processing power to run the middleware necessary to connect to the cloud system. You wouldn't need a large hard drive because you'd store all your information on a remote computer.
  • Corporations that rely on computers have to make sure they have the right software in place to achieve goals. Cloud computing systems give these organizations company-wide access to computer applications. The companies don't have to buy a set of software or software licenses for every employee. Instead, the company could pay a metered fee to a cloud computing company.
  • Servers and digital storage devices take up space. Some companies rent physical space to store servers and databases because they don't have it available on site. Cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end.
  • Corporations might save money on IT support. Streamlined hardware would, in theory, have fewer problems than a network of heterogeneous machines and operating systems.
  • If the cloud computing system's back end is a grid computing system, then the client could take advantage of the entire network's processing power. Often, scientists and researchers work with calculations so complex that it would take years for individual computers to complete them. On a grid computing system, the client could send the calculation to the cloud for processing. The cloud system would tap into the processing power of all available computers on the back end, significantly speeding up the calculation.

How Cloud Computing Works


Let's say you're an executive at a large corporation. Your particular responsibilities include making sure that all of your employees have the right hardware and software they need to do their jobs. Buying computers for everyone isn't enough -- you also have to purchase software or software licenses to give employees the tools they require. Whenever you have a new hire, you have to buy more software or make sure your current software license allows another user. It's so stressful that you find it difficult to go to sleep on your huge pile of money every night.
Soon, there may be an alternative for executives like you. Instead of installing a suite of software for each computer, you'd only have to load one application. That application would allow workers to log into a Web-based service which hosts all the programs the user would need for his or her job. Remote machines owned by another company would run everything from e-mail to word processing to complex data analysis programs. It's called cloud computing, and it could change the entire computer industry.
In a cloud computing system, there's a significant workload shift. Local computers no longer have to do all the heavy lifting when it comes to running applications. The network of computers that make up the cloud handles them instead. Hardware and software demands on the user's side decrease. The only thing the user's computer needs to be able to run is the cloud computing system's interface software, which can be as simple as a Web browser, and the cloud's network takes care of the rest.
There's a good chance you've already used some form of cloud computing. If you have an e-mail account with a Web-based e-mail service like Hotmail, Yahoo! Mail or Gmail, then you've had some experience with cloud computing. Instead of running an e-mail program on your computer, you log in to a Web e-mail account remotely. The software and storage for your account doesn't exist on your computer -- it's on the service's computer cloud.
What makes up a cloud computing system? Find out in the next blog post.

How Bluetooth Technology Works


Bluetooth is a low-power, short-range wireless link technology operating in the microwave band, designed to connect phones, laptops, PDAs and other portable equipment with little or no work by the user. Unlike infrared, Bluetooth does not require line-of-sight positioning of connected units. The technology uses modifications of existing wireless LAN techniques but is most notable for its small size and low cost. The current prototype circuits are contained on a circuit board 0.9 cm square, with a much smaller single-chip version in development. The cost of the device is expected to fall quickly, from $20 initially to $5 in a year or two. It is envisioned that Bluetooth will be included within equipment rather than being an optional extra. When one Bluetooth product comes within range of another (the range can be set to between 10 cm and 100 m), they automatically exchange address and capability details. They can then establish a 1 megabit/s link (up to 2 Mbps in the second generation of the technology) with security and error correction, to use as required. The protocols handle both voice and data, with a very flexible network topology.
This technology achieves its goal by embedding tiny, inexpensive, short-range transceivers into today's electronic devices. The radio operates on the globally available unlicensed 2.45 GHz radio band (meaning there is no hindrance for international travelers using Bluetooth-enabled equipment) and supports data speeds of up to 721 Kbps, as well as three voice channels. Bluetooth modules can be either built into electronic devices or used as an adaptor. In a PC, for instance, they can be built in as a PC card or attached externally via the USB port.
Each device has a unique 48-bit address from the IEEE 802 standard. Connections can be point-to-point or multipoint. The maximum range is 10 meters but can be extended to 100 meters by increasing the power. Bluetooth devices are protected from radio interference by changing their frequencies pseudo-randomly up to a maximum of 1,600 times a second, a technique known as frequency hopping. They also use three different but complementary error-correction schemes, and built-in encryption and verification are provided.
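The hopping idea can be sketched numerically. This is not Bluetooth's actual hop-selection algorithm (which is derived from the master device's address and clock); it's an illustration of why two radios sharing a hop sequence shrug off interference parked on any single channel.

```python
import random

CHANNELS = 79            # 1-MHz channels in the 2.4 GHz band
HOPS_PER_SECOND = 1600   # the hop rate quoted above

def hop_sequence(shared_seed, hops):
    """Two radios seeded alike visit the same channels in the same order."""
    rng = random.Random(shared_seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

# One second of hopping: a jammer sitting on one channel collides with
# only about 1/79th of the packets.
sequence = hop_sequence(shared_seed=0xB1E2, hops=HOPS_PER_SECOND)
print(sequence[:8], "...", f"{HOPS_PER_SECOND} hops across {CHANNELS} channels")
```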
Moreover, Bluetooth devices won't drain precious battery life. The Bluetooth specification targets power consumption from a "hold" mode consuming 30 microamps to an active transmitting range of 8-30 milliamps (less than 1/10th of a watt). The radio chip consumes only 0.3 mA in standby mode, less than 3% of the power used by a standard mobile phone. The chips also have excellent power-saving features: they automatically shift to a low-power mode as soon as traffic volume lessens or stops.
Bluetooth devices are classified into three power classes, as shown in the following table.

Power Class | Maximum Output Power
1           | 100 mW (20 dBm)
2           | 2.5 mW (4 dBm)
3           | 1 mW (0 dBm)
But beyond untethering devices by replacing the cables, Bluetooth radio technology provides a universal bridge to existing data networks, a peripheral interface, and a mechanism to form small private ad hoc groupings of connected devices away from fixed network infrastructures. Designed to operate in a noisy radio frequency environment, the Bluetooth radio uses a fast acknowledgment and frequency-hopping scheme to make the link robust. Bluetooth radio modules avoid interference from other signals by hopping to a new frequency after transmitting or receiving a packet. Compared with other systems operating in the same frequency band, the Bluetooth radio typically hops faster and uses shorter packets. This makes the Bluetooth radio more robust than other systems. Short packets and fast hopping also limit the impact of domestic and professional microwave ovens. Use of Forward Error Correction (FEC) limits the impact of random noise on long-distance links. The encoding is optimized for an uncoordinated environment.
Bluetooth guarantees security at the bit level. Authentication is controlled by the user via a 128-bit key. Radio signals can be coded with 8 bits or anything up to 128 bits. Bluetooth radio transmissions conform to the safety standards required by the countries where the technology is used with respect to the effects of radio transmissions on the human body. Emissions from Bluetooth-enabled devices are no greater than emissions from industry-standard cordless phones, and the Bluetooth module will not interfere with or cause harm to public or private telecommunications networks.
The Bluetooth baseband protocol is a combination of circuit and packet switching. Slots can be reserved for synchronous packets. Each packet is transmitted on a different hop frequency. A packet nominally covers a single slot but can be extended to cover up to five slots. Bluetooth can support an asynchronous data channel, up to three simultaneous synchronous voice channels, or a channel that simultaneously supports asynchronous data and synchronous voice. It is thus possible to transfer data asynchronously while talking synchronously at the same time. Each voice channel supports a 64 kb/s synchronous (voice) link. The asynchronous channel can support an asymmetric link of up to 721 kb/s in either direction while permitting 57.6 kb/s in the return direction, or a 432.6 kb/s symmetric link.
Modes of operation
An interesting aspect of the technology is the instant formation of networks once Bluetooth devices come within range of each other. A piconet is a collection of devices connected via Bluetooth technology in an ad hoc fashion; it can be a simple connection between two devices or among more than two. Multiple independent and non-synchronized piconets can form a scatternet. Any device in a piconet can also be a member of another by means of time multiplexing -- that is, a device can be part of more than one piconet by suitably sharing its time.

The Bluetooth system supports both point-to-point and point-to-multipoint connections. When a device is connected to one other device, it is a point-to-point connection; if it is connected to more than one (up to seven), it is a point-to-multipoint connection. Several piconets can be established and linked together ad hoc, where each piconet is identified by a different frequency-hopping sequence. All users participating in the same piconet are synchronized to this hopping sequence, and a device connected to more than one piconet communicates in each using a different hopping sequence.

A piconet starts with two connected devices, such as a portable PC and a cellular phone, and may grow to eight connected devices. All Bluetooth devices are peer units and have identical implementations. However, when establishing a piconet, one unit acts as the master and the other(s) as slave(s) for the duration of the piconet connection. The master's clock and hopping sequence are used to synchronize all other devices in the piconet; all other devices are slave units. A 3-bit MAC address distinguishes the units participating in the piconet. Devices synchronized to a piconet can enter power-saving modes, called sniff and hold modes, in which device activity is lowered. There can also be parked units, which are synchronized but do not have a MAC address; these parked units have an 8-bit address, so there can be a maximum of 255 parked devices.
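The address arithmetic above maps neatly onto a small data model: a 3-bit MAC address allows seven active slaves (address 0 is reserved for broadcast), and an 8-bit parked-member address allows up to 255 parked units. The class below is an invented illustration, not spec code.

```python
class Piconet:
    MAX_ACTIVE = 7      # 3-bit MAC addresses 1-7; 0 is reserved for broadcast
    MAX_PARKED = 255    # 8-bit parked-member addresses

    def __init__(self, master):
        self.master = master   # its clock and hop sequence synchronize everyone
        self.active = {}       # 3-bit address -> device
        self.parked = {}       # 8-bit address -> device

    def join(self, device):
        if len(self.active) < self.MAX_ACTIVE:
            self.active[len(self.active) + 1] = device   # next free 3-bit address
        elif len(self.parked) < self.MAX_PARKED:
            self.parked[len(self.parked) + 1] = device   # synchronized but inactive
        else:
            raise RuntimeError("piconet full")

net = Piconet(master="phone")
for device in ["laptop", "headset", "printer"]:
    net.join(device)
print(net.active)   # {1: 'laptop', 2: 'headset', 3: 'printer'}
```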

Voice channels use either a 64 kbps log PCM or the Continuous Variable Slope Delta Modulation (CVSD) voice coding scheme, and never retransmit voice packets. The voice quality on the line interface should be better than or equal to the 64 kbps log PCM. The CVSD method was chosen for its robustness in handling dropped and damaged voice samples. Rising interference levels are experienced as increased background noise: even at bit error rates of up to 4 percent, the CVSD-coded voice is still quite audible.

Mobile Phones as a Medical Diagnostic Platform


 Many people die every day due to lack of access to basic medical measurements, such as blood pressure, and corresponding diagnoses. In order to combat this, a medical diagnostic platform is being designed which will use low-cost sensors and utilize the proliferation of mobile phones in emerging regions for computational power.


Broadly speaking, the Mobile Phones as a Medical Diagnostic Platform project can be divided into an electronics phase and a software phase. The electronics phase involves selecting a suitable pressure sensor, amplifying its output, and modulating the signal for transmission to the phone. The software phase involves demodulating the signal, calculating the blood pressure, creating a GUI for the phone targeted for the regions in which it will be deployed, and creating a database with basic diagnostic information correlated to the blood pressure calculated.
A chief design difficulty in this project has been the implementation of amplitude modulation (necessary to transmit DC information to the mobile phone) on the 3.2V provided by the phone battery. An analysis of the standard modulation IC, ON Semiconductor’s MC1496 balanced modulator, is presented, along with modifications and design decisions that demonstrate optimized operation for low-power, DC input, and minimal harmonics.
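To see why modulation is needed at all, consider that a phone's audio input is AC-coupled: a steady (DC) pressure reading would simply vanish. Riding that reading on an audio-band carrier preserves it as the carrier's amplitude. The sketch below is illustrative only; the carrier frequency, sample rate and function names are assumptions, not the project's actual design values.

```python
import math

CARRIER_HZ = 1000     # an audio-band carrier the phone's ADC can digitize
SAMPLE_RATE = 8000

def am_modulate(sensor_samples):
    """Return carrier samples whose amplitude tracks the sensor signal."""
    return [level * math.sin(2 * math.pi * CARRIER_HZ * n / SAMPLE_RATE)
            for n, level in enumerate(sensor_samples)]

# A constant 0.5 input (pure DC) becomes a steady-amplitude 1 kHz tone
# that survives the AC-coupled audio path into the phone.
modulated = am_modulate([0.5] * 16)
print([round(s, 2) for s in modulated[:4]])   # [0.0, 0.35, 0.5, 0.35]
```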

What's the Difference: 2D vs 3D vs 4D Technology



Having a baby can be fun, exciting and a little scary all at once. Are you going to have a baby soon? If you're an expecting parent and would like a chance to see your new baby up close and create a special moment that can help draw the whole family closer together, then keep reading and see why SEE BABY may be for you.

[Image: 2D image of a fetus]

Discovering what your baby looks like can be a very fun and rewarding experience. If you are at all curious what your baby looks like, then an ultrasound is probably right for you.

Expectant mothers worldwide are seeking 3D/4D ultrasounds because they feel the experience will have a positive impact during their pregnancy and enhance bonding, for themselves as well as for the family.

Recent studies have shown that viewing an ultrasound can lead to marked improvements in maternal health habits and family dynamics.

Images created by traditional two-dimensional ultrasound technology cannot compare as a first portrait. Traditional 2D ultrasound produces black-and-white swirls and streaks for images, making it very difficult in some settings to identify the different parts of the baby's anatomy. Still, during the examination, with guidance from the sonographer, the images can appear clear, and the experience is rewarding.

Images can vary depending on the position of the baby, the amount of fluid present, the baby's gestational age and the mother's condition.


Do you want to see your baby up close in a 3D/4D ultrasound image?

It is now possible to do just that. With the ultrasound technology of today, you can create memories that will last a lifetime. You can see every feature of your baby in a 3D/4D image and capture images from all angles with this exciting new technology.

So what is 3D/4D Baby Ultrasound?

This is a medical technique normally used by doctors and nurses during pregnancy to display 3D images of the baby in the womb. It is referred to as a 4D ultrasound when the baby is moving while viewed in 3D. Ultrasound has been used in pregnancy for many years, but the advent of 3D/4D technology makes the experience more exciting and memorable.

3D ultrasound has been around since 1987, when it was developed by Olaf von Ramm and Stephen Smith at Duke University. Naturally, with advancing technology, ultrasounds have changed dramatically; with the move from 2D to 3D/4D technology, you can now see your baby in utero from every angle. This is because the new technology scans the whole area of the fetus and can build dimensional images from the sectional scans.

Thanks to our 3D/4D technology, now you don't have to wait until baby is born to feel the joy of seeing him or her for the first time.

Imagine seeing his or her little hands, a stretch, a yawn, a kick -- a little mouth moving as the body changes position. If you have never experienced this, it is truly a sight that will help form an extra-special bond between you and your baby. Imagine being able to see what your special bundle of joy is really doing inside, even how big he or she is getting and what he or she looks like.

In fairness, images can vary depending on the position of the baby, the amount of fluid present, the baby's gestational age and the mother's body habitus.

The 3D/4D ultrasound technology we have at SEE BABY is some of the best equipment in the business, and the environment in which you'll experience your baby is uncompromising. In essence, you deserve to "Experience Excellence" during the care and imaging of your baby. Please see our image gallery and review the (soon to be posted) patient experience anecdotes to get a better sense of what SEE BABY can bring to your pregnancy.


4D Ultrasound


The differences between 3D and 4D ultrasound 
A 3D ultrasound probe collects a series of images of the fetus and processes them to produce 3D images. These images have depth (volume), producing what are called life-like pictures of the fetus; they resemble photos of a newborn baby. For 4D ultrasound, the dimension of time is added, capturing the movement of the fetus. This newest ultrasound technology continuously scans 25 3D images per second.




Advantages of 4D Ultrasound
4D ultrasound lets us see movement of the fetus that appears natural. The 3D ultrasound technique usually provides only still images of parts of the baby, such as the face, arms or legs. 4D ultrasound lets us see the baby as it yawns, opens its eyes and so on. With 4D ultrasound, all of these movements can be clearly seen, as if filmed by a video camera. Compared with the 2D technique, 4D gives more natural images of the baby and helps increase the bonding between the mother and her baby.
Apart from this, 4D ultrasound is also used to detect abnormalities of the fetus's internal organs, such as the spinal cord and heart; to guide needle insertion when drawing fluid from a fetal organ, since the ultrasound can indicate a precise and accurate insertion point; and to measure the volume of organs such as the amniotic sac, which enables the doctor to calculate the period of pregnancy more precisely, especially when the fetus is not yet noticeable.