Wednesday, May 15, 2013

Applied Materials designs tools to leverage big data and build better chips

In semiconductor manufacturing, metrology — the science of measuring things — is a vital part of the production process. Much of this analysis is handled by CD-SEM (critical dimension scanning electron microscopy) equipment. As process nodes shrink and manufacturing difficulty increases, the amount of data collected per wafer has grown accordingly. Foundries now collect more data per wafer than ever before, and they need to analyze that information quickly and compare it against readouts from other pieces of equipment. Applied Materials has launched a new web backend, TechEdge Prizm, designed to give foundries better data on their day-to-day production, and to do so far more efficiently than the tools currently available.
CD-SEM
This image is drawn from an IBM 2013 SPIE paper from a study by Eric Solecky et al: SPIE 8681, Metrology, Inspection, and Process Control for Microlithography XXVII, 86810D (April 10, 2013); doi:10.1117/12.2010007.
With data volumes skyrocketing from 50TB per fab per year at 45nm to 80TB at 28nm, and an estimated 141TB at 14nm, better tools are needed for visualizing and examining system output closer to real time. In the past, data was gathered by individual tools, stored locally, and painful to parse; there was no unified system for collecting information or comparing results between tools or across longer periods of time. With Prizm, Applied Materials hopes to change that. Instead of parsing data sets tool by tool, Prizm gathers data from multiple tools and presents it through a unified interface. Results are searchable and can be analyzed much more quickly. Total time savings, according to Applied Materials, are shown below.
Prizm comparison
The green bar is analysis time with Prizm, the brown bar is current time.
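The shift Prizm represents, from scattered per-tool logs to one searchable store, can be sketched in a few lines of Python. Everything below — the record format, tool IDs, and metric names — is invented for illustration; it is not Applied Materials' actual schema.

```python
from collections import defaultdict

# Hypothetical CD-SEM readings: (tool_id, date, wafer_id, metric, value).
# The Prizm-style idea: pool every tool's output into one searchable index
# instead of parsing each tool's local store separately.
readings = [
    ("cdsem-01", "2013-05-01", "W100", "line_width_nm", 27.8),
    ("cdsem-02", "2013-05-01", "W100", "line_width_nm", 28.3),
    ("cdsem-01", "2013-05-02", "W101", "line_width_nm", 28.1),
]

index = defaultdict(list)
for tool, date, wafer, metric, value in readings:
    index[(wafer, metric)].append((date, tool, value))

def query(wafer, metric):
    """All measurements of one metric on one wafer, across every tool."""
    return sorted(index[(wafer, metric)])

# A single query now spans both tools' measurements of the same wafer.
print(query("W100", "line_width_nm"))
```

With real data, the same kind of index supports the cross-tool and over-time comparisons described above; the point is simply that the data lives in one queryable place.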
Prizm allows engineers to see various metrics on individual sections of a wafer map rather than simply as a chart of total data. Prizm is capable of showing how specific metrics have changed over time, or comparing specific metrics from one set of wafers against a later set. According to Applied Materials, Prizm can improve workflow efficiency by 10x in certain cases and spares engineers hours of tedious work manually gathering data. The online backend also stores data far longer — typical tools preserve data sets for a month; Applied Materials is guaranteeing seven years of storage for particular tools.
Prizm Metrology
 
We spoke to Applied Materials about Prizm, and the company offered us a remote demo of how the service works. In the screenshot above, the engineer is able to drill down to examine metrics at each specific point on the wafer. Clicking on a section brings up an image of that area and gives more information on the selected metrics. The entire system is designed for flexibility — the engineers can examine and sort by tool type, process node, or a specific quality measure.

When Big Data matters

I’m skeptical of “big data” for the same reasons I’m skeptical of “cloud computing,” but the dramatic overuse and subsequent dilution of the latter phrase doesn’t mean there aren’t cases where cloud computing has offered something genuinely new compared to the services we used to have. In this case, the term “big data” also seems to fit. Not only do these tools produce a staggering amount of information, but the ability to sift and sort that information is essential to progress.
TechEdge Prizm
We’ve previously discussed the mind-boggling levels of accuracy the modern semiconductor industry requires as a matter of course, and the ability to measure those levels accurately is a necessity if products are to continue pushing below 20nm. Improving data collection and analysis doesn’t directly solve the problems facing the semiconductor industry, but it does ensure that the researchers working at companies like Intel, TSMC, and GlobalFoundries have access to the data they need to investigate defects more quickly.

Chinese physicists create first single-photon quantum memory, leading to quantum internet


Quantum entanglement (blue)
A lab in China is reporting that it has constructed the first memory device that uses single photons to store quantum data. This is a significant breakthrough that takes us further down the path towards a quantum internet, and potentially quantum computing as well.
As it currently stands, we already make extensive use of photons — the bulk of the internet and telecommunications backbone consists of photons traveling down fiber optic cables. Rather than single photons, though, these signals consist of carrier light waves of millions of photons, with the wave being modulated by binary data. These pulses are never stored, either; when they reach a router, they’re converted into electrical signals, and then stored in RAM before being converted back into light.
A diagram showing the generation of a single photon (a), and the storage of a single photon with OAM (b)
Now, however, Dong-Sheng Ding and fellow researchers at the University of Science and Technology of China have announced that they have generated a single photon, stored it in a “cigar-shaped atomic cloud of rubidium atoms” for 400 nanoseconds, and then released the photon. The single photon is created using a process called spontaneous four-wave mixing, and the rubidium cloud stores the photon due to electromagnetically induced transparency (EIT). EIT causes a phenomenon called “slow light,” which is used here to “store” the photon for 400ns (more than long enough to count as computer memory).
The retrieved photon signal vs. the storage time (the photon, being stored)
The generation and storage would be a big achievement in itself, but there’s more: the rubidium trap also preserves the orbital angular momentum (OAM) of the photon. As we’ve covered before, electromagnetic waves (including photons) can have both spin and orbital angular momentum. Spin angular momentum (SAM), which is analogous to the Earth spinning on its own axis, produces polarization; OAM is analogous to the Earth orbiting the Sun. In wireless and wired communications, signals generally use only SAM and are therefore flat, but by introducing OAM, a signal becomes a 3D helix. By modulating both SAM and OAM, you can encode far more data into a carrier wave – in theory, an unbounded amount. By preserving the OAM of the single photon, the Chinese researchers could be onto something very big indeed.
Moving forward, a photonic quantum memory is absolutely vital if we ever want to build a quantum internet out of quantum routers. And even setting aside lofty quantum applications, introducing OAM to the world’s fiber optic networks would make the internet a whole lot faster.
Research paper: arXiv:1305.2675 - “Single-Photon-Level Quantum Image Memory Based on Cold Atomic Ensembles”

Sunday, May 12, 2013

Captain Your Own Computer Technology Needs

Today, you too can become an efficient personal computer user. Seeking outside tech help for every computer-related need is no longer the smart thing to do; today, you should take the initiative yourself. You can manage your personal computer according to your own requirements by learning practical tips from computer technology experts. Online resources such as computer forums offer useful interaction with experts who give tech help advice to PC users like you.

According to tech help experts, one simple thing every personal computer user should do is keep their PC clean. To take charge, you can carry out this task yourself; it is easy and hassle-free. Take a side panel off the case and, using a can of compressed air or an air compressor, blow the dust out. Stop the fans from spinning before you blow air at them, and pay special attention to the CPU heat sink, the video card, the front case fan, and the power supply. This will improve the reliability of your PC and spare it many hardware-related problems. This tip, offered by a Microsoft Certified tech help expert in a computer forum, is not hard even for a first-time PC user to follow. Computer technology experts recommend doing this once every three months, which is hardly an inconvenience.

You can sharpen your PC skills alongside other visitors and tech experts in online information technology forums, as well as through your own online computer support provider. Make full use of free online learning tools such as computer forums. These forums are well suited to personal computer users seeking practical tips on making better use of their machines, and they serve as a useful refresher while you work. You will also learn about the latest trends in computer technology through such free online resources. When faced with multiple options and you must choose one, consider the personalized services of your own computer technology expert.

Today, you no longer need to visit a computer repair shop or arrange an appointment with a tech expert. Instead, you can consult an online technical support provider, whose experts will advise you on what is best for you as a personal computer user. These services are now outsourced online and are inexpensive. As a result, you can build your skills without the risk that bad advice from an unreliable source causes you or your business a loss. If you want to become expert in meeting your practical needs, you need to go about it in a practical way.

Practical-minded computer users now visit online computer forums for serious, practical computer-related ideas and captain their own computer use. Whenever they face a computer issue, they consult their online tech help experts.

New Programming Language Makes Coding Social Apps Easier

The language, Dog, is designed to reduce the complexity of existing programming languages.



While it takes just a few keystrokes and mouse clicks to post a tweet on Twitter or “friend” someone on Facebook, it may require thousands of lines of code to accomplish the task.
Dog, a new programming language, could make it easier and more intuitive to write all sorts of social applications—anything from peer-to-peer question-and-answer sites to online dating. And because Dog incorporates natural language, this may make it easier for newbies to learn to code, too.
MIT Media Lab professor Sep Kamvar, who developed Dog with the help of some graduate students, hopes to release the language in a private beta version in the next few months, and offer a public release of it in the spring.
Dog emerged from Kamvar’s frustration with existing programming languages, such as Java, which he felt were needlessly difficult to use to write code governing social interactions. Things that were easy to describe in English—such as a command to notify a person of something—had to be thought of in terms of data storage and communication protocols when he sat down to write it in code.
“I had to write code at a lower level of abstraction than I had to think about the interactions,” he says. “And so I thought it would be interesting to start writing a programming language that allowed me to write at the same level of abstraction that I think.”
Kamvar started working on Dog by defining specific challenges he has with traditional programming languages when building social applications, which include identifying people and talking and listening to them. He came up with some ideas for solving these problems with a new programming language—for example, to make it easier to identify people, he made people a basic data type that the language could recognize, just as other languages recognize strings of text or integers.
Then he created a simple syntax around these ideas that uses natural language (since the language deals with coordinating and communicating with people) and focused on a small set of very clear commands: ask, listen, notify, and compute. A sample line of code in a simple social news feed application reads, “LISTEN TO PEOPLE FROM mit VIA http FOR posts,” which would have the application monitor the Web for updates from a group of MIT-affiliated people.
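Dog itself isn’t publicly available, but the two ideas described above — people as a basic data type, plus verbs like listen and notify — can be approximated in ordinary Python. This is an illustrative analogue, not Dog’s actual implementation; every name below is invented.

```python
# People as a first-class data type, in the spirit of Dog's design.
class Person:
    def __init__(self, name, group):
        self.name = name
        self.group = group
        self.inbox = []

def notify(person, message):
    """Dog's NOTIFY verb: deliver a message to a person."""
    person.inbox.append(message)

def listen_to(people, group, kind, feed):
    """Roughly 'LISTEN TO PEOPLE FROM <group> ... FOR <kind>':
    monitor a feed for items of a given kind posted by group members."""
    members = {p.name for p in people if p.group == group}
    return [item for item in feed
            if item["author"] in members and item["kind"] == kind]

alice = Person("alice", "mit")
bob = Person("bob", "stanford")
feed = [{"author": "alice", "kind": "posts", "text": "hello"},
        {"author": "bob", "kind": "posts", "text": "hi"}]

updates = listen_to([alice, bob], "mit", "posts", feed)
notify(bob, updates[0]["text"])  # forward the MIT update to bob
```

The point of Dog is that the `listen_to` plumbing above collapses into a single readable statement, so the program reads at the level of the social interaction itself.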
While all these things can be done in other programming languages, Kamvar contends it’s not generally very easy. And users can import functions from other programming languages, Kamvar says, so interaction design and social processes can be written in Dog while other functions can be written in another language.
Over the past year, Kamvar and his students have been developing the Dog compiler — the software that translates Dog code into instructions a computer can execute — and writing demo programs in the language to test it out, such as a Twitter-like news feed. One is a peer-to-peer teaching-and-learning platform called Karma that works within a user’s extended social network; it is expected to be publicly available by next summer.
Dog will be free and open source, so users will be able to add to it and modify it as they wish. And while Dog is a server-side language, which means it relies on sending data to a server in order to execute tasks, the group is also building a client-side version.
Kamvar is likely to face some Dog skeptics, such as Robert Harper, a computer science professor at Carnegie Mellon University who studies programming language theory. While Harper says it makes sense to create languages that are easier for non-coders to understand, he doesn’t see programming for social computing as a niche that needs to be filled. And though a language such as Dog may start out as being geared toward a special type of coding, “you invariably get involved in more complex issues, and if you’re using a language that’s coded to stereotypical scenarios, it quickly breaks down,” he says.
While Kamvar emphasizes that he doesn’t see Dog as natural language programming in the vein of, for example, Wolfram Alpha or Inform 7, the inclusion of natural language phrasing should make Dog more easily understood by non-programmers, such as interaction designers or product managers at startups, who often come up with ideas about what needs to be done but then must wait for a software engineer to make those changes to the company’s code.
More generally, Dog could make it simpler for anyone to program or at least understand what’s going on behind the scenes of a website. Despite the attention paid to online code-learning startups like Codecademy, not much attention has been focused on the fact that programming may just be harder than it has to be, Kamvar says.
“Maybe that attention should go toward designing programming languages that are inherently more learnable, but still industrial strength,” he says.


Tuesday, May 7, 2013

More Than a Good Eye: Robot Uses Arms, Location and More to Discover Objects

A robot can struggle to discover objects in its surroundings when it relies on computer vision alone. But by taking advantage of all of the information available to it -- an object's location, size, shape and even whether it can be lifted -- a robot can continually discover and refine its understanding of objects, say researchers at Carnegie Mellon University's Robotics Institute.
The Lifelong Robotic Object Discovery (LROD) process developed by the research team enabled a two-armed, mobile robot to use color video, a Kinect depth camera and non-visual information to discover more than 100 objects in a home-like laboratory, including items such as computer monitors, plants and food items.
Normally, the CMU researchers build digital models and images of objects and load them into the memory of HERB -- the Home-Exploring Robot Butler -- so the robot can recognize objects that it needs to manipulate. Virtually all roboticists do something similar to help their robots recognize objects. With the team's implementation of LROD, called HerbDisc, the robot now can discover these objects on its own.
With more time and experience, HerbDisc gradually refines its models of the objects and begins to focus its attention on those that are most relevant to its goal -- helping people accomplish tasks of daily living.
Findings from the research study will be presented May 8 at the IEEE International Conference on Robotics and Automation in Karlsruhe, Germany.
The robot's ability to discover objects on its own sometimes takes even the researchers by surprise, said Siddhartha Srinivasa, associate professor of robotics and head of the Personal Robotics Lab, where HERB is being developed. In one case, some students left the remains of lunch -- a pineapple and a bag of bagels -- in the lab when they went home for the evening. The next morning, they returned to find that HERB had built digital models of both the pineapple and the bag and had figured out how it could pick up each one.
"We didn't even know that these objects existed, but HERB did," said Srinivasa, who jointly supervised the research with Martial Hebert, professor of robotics. "That was pretty fascinating."
Discovering and understanding objects in places filled with hundreds or thousands of things will be a crucial capability once robots begin working in the home and expanding their role in the workplace. Manually loading digital models of every object of possible relevance simply isn't feasible, Srinivasa said. "You can't expect Grandma to do all this," he added.
Object recognition has long been a challenging area of inquiry for computer vision researchers. Recognizing objects based on vision alone quickly becomes an intractable computational problem in a cluttered environment, Srinivasa said. But humans don't rely on sight alone to understand objects; babies will squeeze a rubber ducky, beat it against the tub, dunk it -- even stick it in their mouth. Robots, too, have a lot of "domain knowledge" about their environment that they can use to discover objects.
Taking advantage of all of HERB's senses required a research team with complementary expertise -- Srinivasa's insights on robotic manipulation and Hebert's in-depth knowledge of computer vision. Alvaro Collet, a robotics Ph.D. student they co-advised, led the development of HerbDisc. Collet is now a scientist at Microsoft.
Depth measurements from HERB's Kinect sensors proved to be particularly important, Hebert said, providing three-dimensional shape data that is highly discriminative for household items.
Other domain knowledge available to HERB includes location -- whether something is on a table, on the floor or in a cupboard. The robot can see whether a potential object moves on its own, or is moveable at all. It can note whether something is in a particular place at a particular time. And it can use its arms to see if it can lift the object -- the ultimate test of its "objectness."
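The cues listed above can be thought of as votes toward an “objectness” score. The sketch below is a toy illustration of that idea, not CMU’s actual LROD algorithm; the cue names, weights, and threshold are all invented.

```python
# Toy objectness scoring from domain-knowledge cues, in the spirit of LROD.
# Liftability gets the largest weight, since it is "the ultimate test."
def objectness(candidate):
    score = 0.0
    if candidate.get("on_support_surface"):    # table, floor, cupboard
        score += 1.0
    if candidate.get("moved_between_visits"):  # scene changed over time
        score += 1.0
    if candidate.get("compact_3d_shape"):      # depth-camera segment looks object-sized
        score += 1.0
    if candidate.get("liftable"):              # the robot's arms can pick it up
        score += 2.0
    return score

mug = {"on_support_surface": True, "moved_between_visits": True,
       "compact_3d_shape": True, "liftable": True}
wall_patch = {"on_support_surface": False}  # visual clutter, fails every cue

candidates = [("mug", mug), ("wall_patch", wall_patch)]
objects = [name for name, c in candidates if objectness(c) >= 3.0]
print(objects)  # only the mug passes the threshold
```

The effect mirrors the article’s description: vision alone “lights up” everything, but each extra cue filters the candidate list further.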
"The first time HERB looks at the video, everything 'lights up' as a possible object," Srinivasa said. But as the robot uses its domain knowledge, it becomes clearer what is and isn't an object. The team found that adding domain knowledge to the video input almost tripled the number of objects HERB could discover and reduced computer processing time by a factor of 190. A HERB's-eye view of objects is available on YouTube.
HERB's definition of an object -- something it can lift -- is oriented toward its function as an assistive device for people, doing things such as fetching items or microwaving meals. "It's a very natural, robot-driven process," Srinivasa said. "As capabilities and situations change, different things become important." For instance, HERB can't yet pick up a sheet of paper, so it ignores paper. But once HERB has hands capable of manipulating paper, it will learn to recognize sheets of paper as objects.
Though not yet implemented, HERB and other robots could use the Internet to create an even richer understanding of objects. Earlier work by Srinivasa showed that robots can use crowdsourcing via Amazon Mechanical Turk to help understand objects. Likewise, a robot might access image sites, such as RoboEarth, ImageNet or 3D Warehouse, to find the name of an object, or to get images of parts of the object it can't see.
Bo Xiong, a student at Connecticut College, and Corina Gurau, a student at Jacobs University in Bremen, Germany, also contributed to this study.



The above story is reprinted from materials provided by Carnegie Mellon University.

Monday, May 6, 2013

Computer Algorithms Help Find Cancer Connections


Powerful data-sifting algorithms developed by computer scientists at Brown University are helping to untangle the profoundly complex genetics of cancer.


In a study reported today in the New England Journal of Medicine, researchers from Washington University in St. Louis used two algorithms developed at Brown to assemble the most complete genetic profile yet of acute myeloid leukemia (AML), an aggressive form of blood cancer. The researchers hope the work will lead to new AML treatments based on the genetics of each patient's disease.
The algorithms, developed by Ben Raphael, Eli Upfal, and Fabio Vandin from the Department of Computer Science and the Center for Computational Molecular Biology (CCMB), played a key role in making sense of the giant datasets required for the study. The work was part of The Cancer Genome Atlas project, which aims to catalog the genetic mutations that cause cells to become cancerous. Doing that requires sequencing the entire genome of cancer cells and comparing it to the genome of healthy cells. Without computational tools like the ones the Brown team has developed, analyzing those data would be impossible.
The AML study used two algorithms developed by the Brown team: HotNet and Dendrix. Both aim to find networks of genes that are important in creating cancerous cells. To understand how they work and why they are important, it helps to know a little about the genetics of cancer.
“Genes don’t usually act on their own, but instead act together in pathways or networks,” said Raphael, associate professor of computer science. “Cancer-causing mutations often target these networks and pathways.” This presents a problem for researchers trying to find important mutations, because the mutations are often spread across the network and hidden in the genetic data.
Imagine a cellular pathway containing five genes. If any one of those genes acquires a mutation, the pathway fails and the cell becomes cancerous. That means five patients with the same cancer can have any one of five different mutations. That makes life difficult for researchers trying to find the mutations that cancer cells have in common. The algorithms developed by Raphael and his team are designed to connect those dots and identify the important pathways, rather than looking only at individual genes.
The HotNet algorithm works by plotting mutation data from patients onto a map of known gene interactions and looking for connected networks that are mutated more often than would be expected by chance. The program represents frequently mutated genes as heat sources. By looking at the way heat is distributed and clustered across the map, the program finds the "hot" networks involved in cancer.
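A toy version of that heat-diffusion idea can be sketched as follows. The interaction graph, heat values, and threshold are invented for illustration, and the real HotNet pairs diffusion with a statistical significance test that this sketch omits.

```python
# Toy HotNet-style diffusion: seed "heat" on frequently mutated genes,
# spread it along a gene-interaction graph, and keep what stays hot.
neighbors = {
    "A": ["B"], "B": ["A", "C"], "C": ["B"],   # one connected pathway A-B-C
    "X": ["Y"], "Y": ["X"],                    # unrelated, barely mutated pair
}
# Initial heat = mutation frequency. Note B itself is rarely mutated.
heat = {"A": 0.9, "B": 0.1, "C": 0.8, "X": 0.05, "Y": 0.0}

def diffuse(heat, steps=10, retain=0.5):
    """Each step, every gene keeps `retain` of its heat and receives
    an equal share of each neighbor's diffused heat."""
    h = dict(heat)
    for _ in range(steps):
        h = {g: retain * h[g]
                + (1 - retain) * sum(h[n] / len(neighbors[n])
                                     for n in neighbors[g])
             for g in h}
    return h

h = diffuse(heat)
hot = {g for g, v in h.items() if v > 0.2}
print(sorted(hot))
```

Gene B is rarely mutated on its own, but because it sits between two hot genes it ends up hot after diffusion — exactly the kind of hidden pathway member that per-gene frequency counts would miss.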
HotNet picked out several networks that seem to be active in the AML genome. In a study published in 2011, HotNet identified networks important to ovarian cancer as well.
Dendrix, the newest algorithm developed at Brown, takes the power of HotNet one step further. HotNet works by looking for mutations in networks that are already known to researchers. However, there are countless gene networks that researchers have not yet identified. Dendrix is designed to look for mutations in those previously unknown networks.
To find new networks, Dendrix takes advantage of the fact that cancer-causing mutations are relatively rare. A patient with a mutation in one gene in a network is unlikely to have a concurrent mutation in another gene in that network. Dendrix looks for combinations of mutations that happen frequently across patients but rarely happen together in a single patient. Put another way: imagine that a substantial number of patients with a given cancer have a mutation in gene X. Another large group of patients has a mutation in gene Y. But very few patients have mutations in both X and Y at the same time. Dendrix looks for these patterns of exclusivity and predicts that groups of genes with high exclusivity are probably working together.
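That exclusivity pattern can be turned into a simple score: reward gene sets that cover many patients, and penalize patients mutated in more than one gene of the set. This is a simplified sketch in the spirit of Dendrix’s weight function, not the published algorithm, and the mutation data is invented.

```python
# Each patient maps to the set of genes mutated in their tumor.
patients = {
    "p1": {"X"}, "p2": {"X"}, "p3": {"Y"}, "p4": {"Y"},
    "p5": {"X", "Y"},          # rare co-occurrence of X and Y
    "p6": {"Z"}, "p7": {"X", "Z"},
}

def score(gene_set):
    """Coverage minus overlap: high when the genes' mutations are
    spread across many patients but rarely co-occur in one patient."""
    covered = sum(1 for muts in patients.values() if muts & gene_set)
    overlap = sum(len(muts & gene_set) - 1
                  for muts in patients.values() if muts & gene_set)
    return covered - overlap

print(score({"X", "Y"}), score({"X", "Z"}))
```

Here {X, Y} outscores {X, Z} because it covers more patients with less co-occurrence, so the algorithm would flag X and Y as a candidate pathway.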
"Where we see those patterns of exclusivity," Raphael said, "it suggests a possible pathway." The group has tested Dendrix on cancers in which the pathways were already known, just to see if the program would find them. Indeed, the pathways "just fall right out of the data," Raphael said.
For the AML paper, Raphael's group developed an improved algorithm -- Dendrix++ -- which better handles extremely rare mutations. Dendrix++ picked out three potential new pathways in AML for doctors to investigate.
Raphael and Vandin, along with computational biology graduate students Max Leiserson and Hsin-Ta Wu, are continuing to improve their algorithms and to apply them to new datasets. The group recently started putting the algorithms to work on what's called the Pan-Cancer project, which looks for commonalities in mutations across cancer types.
"For us as computational people, it's fun to push these algorithms and apply them to new datasets," Raphael said. "At the same time, in analyzing cancer data we hope that the algorithms produce actionable information that is clinically important."



Story Source:
The above story is reprinted from materials provided by Brown University.

Top 20 Programming Languages of Hacker News Readers January 2013


Just like anyone, programmers are meticulous about their work, as a single bad line can ruin months of effort. And reaching the ultimate goal may take longer if they are not using a language they are accustomed to.
In April of last year, Hacker News conducted a poll on the top languages programmers use to write code, and Python came out on top with 3,044 votes. Come September, RedMonk released the results of its own poll, and JavaScript came in first while Python placed fourth.
RedMonk once again released the results for January 2013 and it’s almost identical to the list from last year.
  1. JavaScript
  2. Java
  3. PHP
  4. Python
  5. Ruby
  6. C#
  7. C++
  8. C
  9. Objective-C
  10. Perl
  11. Shell
  12. Scala
  13. ASP
  14. Haskell
  15. Assembly
  16. ActionScript
  17. R
  18. CoffeeScript
  19. Visual Basic
  20. MATLAB

The first nine languages stayed the same compared to the September list, but Perl overtook Shell, ASP overtook Haskell, CoffeeScript overtook Visual Basic, and Groovy dropped out of the Top 20, replaced by MATLAB.
Interestingly, CodeEval released its own statistics on the “Most Popular Programming Languages” of 2013, with a rather different ordering:
  1. Python — 29.8 percent
  2. Java — 25.8 percent
  3. C++ — 12.6 percent
  4. Ruby — 9.6 percent
  5. PHP — 7.3 percent
  6. C — 4.9 percent
  7. JavaScript — 3.9 percent
  8. C# — 2.5 percent
  9. Perl — 2 percent
  10. Clojure — 0.8 percent
  11. Scala — 0.6 percent
  12. Objective-C — 0.1 percent
  13. TCL — 0.02 percent
Note: these statistics are based on a sample of more than 100,000 challenges run by employers on CodeEval in 2012.
The disparity between the lists may come down to differences in the demographics of the populations sampled. There is also no way to verify the legitimacy of these polls: we cannot be sure that respondents are actually programmers, or that voting is screened so each person can vote only once. Some polls may also reflect the biases of whoever runs them. Still, it is useful to have an idea of what other programmers are using to write code.

Author : Mellisa Tolentino