Rapid cellphone charging getting closer to reality

The ability to charge cellphones in seconds is one step closer after researchers at the University of Waterloo used nanotechnology to significantly improve energy-storage devices known as supercapacitors.

Their novel design roughly doubles the amount of electrical energy the rapid-charging devices can hold, helping pave the way for eventual use in everything from smartphones and laptop computers, to electric vehicles and high-powered lasers.

“We’re showing record numbers for the energy-storage capacity of supercapacitors,” said Michael Pope, a professor of chemical engineering who led the Waterloo research. “And the more energy-dense we can make them, the more batteries we can start displacing.”

Supercapacitors are a promising, green alternative to traditional batteries, with benefits including improved safety, reliability and much faster charging, but their relatively low storage capacity has so far limited their applications.

Existing commercial supercapacitors, for example, store only enough energy to power cellphones and laptops for about 10 per cent as long as rechargeable batteries do.

To boost that capacity, Pope and his collaborators developed a method to coat atomically thin layers of a conductor called graphene with an oily liquid salt in supercapacitor electrodes.

The liquid salt serves as a spacer to separate the thin graphene sheets, preventing them from stacking like pieces of paper. That dramatically increases their exposed surface area, a key to maximizing energy-storage capacity.
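
As a rough guide, the simple double-layer picture of a supercapacitor (a general textbook relation, not a figure specific to the Waterloo device) shows why exposed surface area matters so much: the stored energy grows with capacitance, and the capacitance grows with the electrode area the electrolyte can actually reach.

```latex
\[
  E \;=\; \tfrac{1}{2}\,C\,V^{2},
  \qquad
  C \;\approx\; \frac{\varepsilon\,A}{d}
\]
% E: stored energy, V: operating voltage, C: capacitance,
% A: accessible electrode surface area, d: effective charge-separation (double-layer) thickness,
% \varepsilon: permittivity of the electrolyte at the interface.
```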

At the same time, the liquid salt does double duty as the electrolyte needed to actually store electrical charge, minimizing the size and weight of the supercapacitor.

“That is the really cool part of this,” Pope said. “It’s a clever, elegant design.”

The innovation also uses a detergent to reduce the size of the droplets of oily salt – which is combined with water in an emulsion similar to salad dressing – to just a few billionths of a metre, improving their coating action. The detergent also functions like chemical Velcro to make the droplets stick to the graphene.

Increasing the storage capacity of supercapacitors means they can be made small and light enough to replace batteries for more applications, particularly those requiring quick-charge, quick-discharge capabilities.

In the short term, Pope said better supercapacitors could displace lead-acid batteries in traditional vehicles, and be used to capture energy otherwise lost by buses and high-speed trains when they brake.

Further out, although they are unlikely to ever attain the full storage capacity of batteries, supercapacitors have the potential to conveniently and reliably power consumer electronic devices, electric vehicles and systems in remote locations like space.

“If they’re marketed in the correct ways for the right applications, we’ll start seeing more and more of them in our everyday lives,” Pope said.

Making big data a little smaller

When we think about digital information, we often think about size. A daily email newsletter, for example, may be 75 to 100 kilobytes in size. But data also has dimensions, based on the number of variables in a piece of data. An email, for example, can be viewed as a high-dimensional vector with one coordinate for each word in the dictionary, where the value of that coordinate is the number of times the word appears in the email. So a 75-kilobyte email that is 1,000 words long would be represented by a vector with millions of coordinates, almost all of them zero.
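
As a concrete illustration, here is a minimal sketch of that bag-of-words representation in Python; the six-word `dictionary` below is a stand-in for a real dictionary with hundreds of thousands of entries, which is what pushes the vector into such high dimension.

```python
import re
from collections import Counter

# A stand-in dictionary; a real one has hundreds of thousands of entries,
# so the resulting vector would have that many coordinates, almost all of them zero.
dictionary = ["meeting", "budget", "free", "winner", "report", "lottery"]

def email_to_vector(text: str) -> list[int]:
    """Represent an email as one word count per dictionary entry (a bag-of-words vector)."""
    counts = Counter(re.findall(r"[a-z']+", text.lower()))
    return [counts[word] for word in dictionary]

print(email_to_vector("Free lottery winner! Claim your free prize today."))
# -> [0, 0, 2, 1, 0, 1]
```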

This geometric view of data is useful in some applications, such as training spam classifiers, but the more dimensions the data has, the longer an algorithm can take to run and the more memory it uses.

As data processing became more and more complex in the mid-to-late 1990s, computer scientists turned to pure mathematics to help speed up the algorithmic processing of data. In particular, researchers found a solution in a theorem proved in the 1980s by mathematicians William B. Johnson and Joram Lindenstrauss, working in the area of functional analysis.

Known as the Johnson-Lindenstrauss lemma (JL lemma), the theorem has been used by computer scientists to reduce the dimensionality of data and help speed up all types of algorithms across many different fields, from streaming and search algorithms to fast approximation algorithms for statistical and linear algebra problems, and even algorithms for computational biology.

But as data has grown even larger and more complex, many computer scientists have asked: Is the JL lemma really the best approach to pre-process large data into a manageably low dimension for algorithmic processing?

Now, Jelani Nelson, the John L. Loeb Associate Professor of Engineering and Applied Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences, has put that debate to rest. In a paper presented this week at the annual IEEE Symposium on Foundations of Computer Science in Berkeley, California, Nelson and co-author Kasper Green Larsen, of Aarhus University in Denmark, found that the JL lemma really is the best way to reduce the dimensionality of data.

“We have proven that there are ‘hard’ data sets for which dimensionality reduction beyond what’s provided by the JL lemma is impossible,” said Nelson.

Essentially, the JL lemma showed that for any finite collection of points in high dimension, there is a collection of points in a much lower dimension which preserves all distances between the points, up to a small amount of distortion. Years after its original impact in functional analysis, computer scientists found that the JL lemma can act as a preprocessing step, allowing the dimensionality of data to be significantly reduced before algorithms are run.
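
Stated a little more precisely, in one standard formulation of the lemma:

```latex
For any $0 < \varepsilon < 1$ and any $n$ points $x_1,\dots,x_n \in \mathbb{R}^d$,
there exists a map $f:\mathbb{R}^d \to \mathbb{R}^k$ with
\[
  k = O\!\left(\varepsilon^{-2}\log n\right)
\]
such that, for every pair $i,j$,
\[
  (1-\varepsilon)\,\lVert x_i - x_j \rVert^2
  \;\le\;
  \lVert f(x_i) - f(x_j) \rVert^2
  \;\le\;
  (1+\varepsilon)\,\lVert x_i - x_j \rVert^2 .
\]
```

The Larsen-Nelson result described above shows that this target dimension cannot be improved in general: for some "hard" point sets, no map into fewer dimensions can give the same guarantee.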

Rather than working through each and every dimension, like the millions of coordinates in an email vector, the JL lemma maps the data into a much smaller number of dimensions, typically via a random projection. In the compressed representation the individual coordinates no longer matter; what matters is that the relationships between data points survive: distances and angles between points are approximately preserved, just in far fewer dimensions.
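
Here is a minimal sketch of that preprocessing step, using the Gaussian random projection that underlies many proofs and implementations of the lemma (NumPy assumed; the sizes are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n, d, k = 50, 10_000, 300          # 50 points living in 10,000 dimensions, reduced to 300
points = rng.normal(size=(n, d))   # stand-in data; real inputs would be e.g. email vectors

# A random Gaussian matrix, scaled so squared distances are preserved in expectation.
projection = rng.normal(size=(d, k)) / np.sqrt(k)
reduced = points @ projection      # shape (n, k): same points, far fewer coordinates

# Spot-check a few pairwise distances before and after the projection.
for i, j in [(0, 1), (2, 3), (4, 5)]:
    before = np.linalg.norm(points[i] - points[j])
    after = np.linalg.norm(reduced[i] - reduced[j])
    print(f"pair ({i},{j}): original distance {before:.1f}, reduced distance {after:.1f}")
```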

Of course, the JL lemma has a wide range of applications that go far beyond spam filters. It is used in compressed sensing, to reconstruct sparse signals from a few linear measurements; in clustering high-dimensional data; and in DNA motif finding in computational biology.

“We still have a long way to go to understand the best dimension reduction possible for specific data sets as opposed to comparing to the worst case,” said Nelson. “I think that’s a very interesting direction for future work. There are also some interesting open questions related to how quickly we can perform the dimensionality reduction, especially when faced with high-dimensional vectors that are sparse, i.e. have many coordinates equal to zero. This sparse case is very relevant in many practical applications. For example, vectors arising from e-mails are extremely sparse, since a typical email does not contain every word in the dictionary.”

“The Johnson-Lindenstrauss Lemma is a fundamental result in high dimensional geometry but an annoying logarithmic gap remained between the upper and lower bounds for the minimum possible dimension required as a function of the number of points and the distortion allowed,” said Noga Alon, professor of Mathematics at Tel Aviv University, who had proven the previous best lower bound for the problem. “The recent work of Jelani Nelson and Kasper Green Larsen settled the problem. It is a refreshing demonstration of the power of a clever combination of combinatorial reasoning with geometric tools in the solution of a classical problem.”

New type of supercomputer could be based on ‘magic dust’ combination of light and matter

A team of researchers from the UK and Russia have successfully demonstrated that a type of ‘magic dust’ which combines light and matter can be used to solve complex problems and could eventually surpass the capabilities of even the most powerful supercomputers.

The researchers, from Cambridge, Southampton and Cardiff Universities in the UK and the Skolkovo Institute of Science and Technology in Russia, have used quantum particles known as polaritons – which are half light and half matter – to act as a type of ‘beacon’ showing the way to the simplest solution to complex problems. This entirely new design could form the basis of a new type of computer that can solve problems that are currently unsolvable, in diverse fields such as biology, finance or space travel. The results are reported in the journal Nature Materials.

Our technological progress — from modelling protein folding and behaviour of financial markets to devising new materials and sending fully automated missions into deep space — depends on our ability to find the optimal solution of a mathematical formulation of a problem: the absolute minimum number of steps that it takes to solve that problem.

The search for an optimal solution is analogous to looking for the lowest point in a mountainous terrain with many valleys, trenches, and drops. A hiker may go downhill and think that they have reached the lowest point of the entire landscape, but there may be a deeper drop just behind the next mountain. Such a search may seem daunting in natural terrain, but imagine its complexity in high-dimensional space. “This is exactly the problem to tackle when the objective function to minimise represents a real-life problem with many unknowns, parameters, and constraints,” said Professor Natalia Berloff of Cambridge’s Department of Applied Mathematics and Theoretical Physics and the Skolkovo Institute of Science and Technology, and the paper’s first author.

Modern supercomputers can only deal with a small subset of such problems: when the dimension of the function to be minimised is small, or when the underlying structure of the problem allows them to find the optimal solution quickly even for a function of large dimensionality. Even a hypothetical quantum computer, if realised, would offer at best a quadratic speed-up for a "brute-force" search for the global minimum.

Berloff and her colleagues approached the problem from an unexpected angle: What if, instead of moving along the mountainous terrain in search of the lowest point, one fills the landscape with a magical dust that only shines at the deepest level, becoming an easily detectable marker of the solution?

“A few years ago our purely theoretical proposal on how to do this was rejected by three scientific journals,” said Berloff. “One referee said, ‘Who would be crazy enough to try to implement this?!’ So we had to do it ourselves, and now we’ve proved our proposal with experimental data.”

Their ‘magic dust’ polaritons are created by shining a laser at stacked layers of selected atoms such as gallium, arsenic, indium, and aluminium. The electrons in these layers absorb and emit light of a specific colour. Polaritons are ten thousand times lighter than electrons and may achieve sufficient densities to form a new state of matter known as a Bose-Einstein condensate, where the quantum phases of polaritons synchronise and create a single macroscopic quantum object that can be detected through photoluminescence measurements.

The next question the researchers had to address was how to create a potential landscape that corresponds to the function to be minimised and force the polaritons to condense at its lowest point. To do this, the group focused on a particular type of optimisation problem, but one general enough that any other hard problem can be related to it: minimisation of the XY model, one of the most fundamental models of statistical mechanics. The authors showed that they can create polaritons at the vertices of an arbitrary graph: as the polaritons condense, their quantum phases arrange themselves in a configuration that corresponds to the absolute minimum of the objective function.
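
For reference, the classical XY objective assigns an angle (phase) to every vertex of the graph and asks for the lowest-energy configuration. Written in standard textbook notation, which may differ from the paper's own, it reads:

```latex
\[
  \min_{\theta_1,\dots,\theta_N}\;
  H(\theta) \;=\; -\sum_{i<j} J_{ij}\,\cos\!\left(\theta_i - \theta_j\right)
\]
% \theta_i: phase at vertex i -- realised in the experiment by the quantum phase of the
% polariton condensate at that vertex; J_{ij}: coupling strength on the edge between i and j.
```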

“We are just at the beginning of exploring the potential of polariton graphs for solving complex problems,” said co-author Professor Pavlos Lagoudakis, Head of the Hybrid Photonics Lab at the University of Southampton and the Skolkovo Institute of Science and Technology, where the experiments were performed. “We are currently scaling up our device to hundreds of nodes, while testing its fundamental computational power. The ultimate goal is a microchip quantum simulator operating at ambient conditions.”