About Arun Kumar N

Arun has been associated with India International Times since 2018 and has been a key reporter covering science- and space-related stories. He can be reached at arunKnn@indiainternationaltimes.com.

35-year solar power contract enables lower prices

The world’s first 35-year day-or-night solar contract (ACWA Power’s with DEWA in Dubai) also set a record-low price for solar with storage: just 7.3 cents per kWh.

Energy developers always look to find ways to structure deals to reduce their costs. A key task in developing utility-scale renewable energy projects is finding every possible way to reduce the price at which you must sell power to make a project pencil out financially.

The advantage of renewable energy sources like solar and wind is that, with no fuel to purchase, there is no uncertain future expense. Guaranteeing a set price over as long a period as possible would therefore seem to leverage that advantage.

Normally solar contracts are only for 20 to 25 years. But in 2017, ACWA Power, a developer that is no stranger to innovative deal structures, applied out-of-the-box thinking on contract design to bid a record low price for solar with storage of just 7.3 cents per kilowatt hour for DEWA, in Dubai.

This ACWA Power PPA marked the first-ever 35-year contract for Concentrated Solar Power (CSP), the thermal form of solar that can operate a power block from its energy storage.

With a longer contract, there are more years of revenue over which to amortize the upfront development, permitting and construction costs. But how much did the longer term actually reduce the price?

ETH Zürich Professor of Renewable Energy Policy Johan Lilliestam has calculated, in a paper online at Renewable Energy Focus, that the longer contract knocked as much as 2 cents per kWh off ACWA Power’s DEWA bid in Dubai.

In “Concentrating solar power for less than USD 0.07 per kWh: finally the breakthrough?”, Lilliestam and co-author Robert Pitz-Paal, co-director of the Institute of Solar Research at the German Aerospace Centre (DLR), attribute the cost reduction in part to the unusually long 35-year contract.

The paper states: “…with a more standard 20-year PPA, the LCOE would be USD 0.106 per kWh, which is about the same as declared by many Chinese stations under construction [7]. The long PPA duration thus directly reduces the LCOE by some 2 cents per kWh; in addition, it could help de-risking the investment by giving a very long-term perspective for investors, thus reducing the cost of capital.”
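Lilliestam’s point can be illustrated with a toy levelized-cost calculation. All the input figures below are hypothetical assumptions chosen only to show the amortization effect of a longer PPA, not the actual DEWA project economics.

```python
# Toy LCOE calculation illustrating the amortization effect of a longer PPA.
# All input figures are hypothetical assumptions, not the DEWA project's economics.

def lcoe_usd_per_kwh(capex, annual_om, annual_mwh, years, discount_rate):
    """Levelized cost of electricity, annualizing capex with a capital recovery factor."""
    r = discount_rate
    crf = r * (1 + r) ** years / ((1 + r) ** years - 1)
    return (capex * crf + annual_om) / (annual_mwh * 1000.0)  # USD per kWh

CAPEX = 2.2e9       # assumed upfront cost, USD
ANNUAL_OM = 60e6    # assumed yearly O&M, USD
ANNUAL_MWH = 2.4e6  # assumed yearly generation, MWh
RATE = 0.05         # assumed cost of capital

for years in (20, 35):
    print(f"{years}-year PPA: {lcoe_usd_per_kwh(CAPEX, ANNUAL_OM, ANNUAL_MWH, years, RATE):.3f} USD/kWh")
# With these assumed inputs the 35-year contract comes out roughly 1.8 cents/kWh
# cheaper than the 20-year one – the same order as the paper's ~2-cent estimate.
```

Spreading the same capital cost over 15 extra revenue-earning years is what pulls the per-kWh figure down.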

But what additional costs might be incurred over a longer operating life?

In all energy-generating technologies, engineers must design components for a specific lifespan and have to prove that components will not fail within that time. Insurers guarantee components for a set time. The agreed 20-year design lifetime means engineers can design to meet one consistent requirement, ensuring that new components can be guaranteed to work reliably – and be insured – for that period.

Would the cost of replacing components outweigh the benefit of a 35-year contract? SENER knows what is involved in designing a project for greater longevity, as the engineering and construction firm for ACWA Power’s 510 MW NOOR I, II and III CSP projects in Morocco.

SENER has been technology provider and contractor for 29 CSP projects; in three of those it provided roughly all the technology and half the EPC (engineering, procurement and construction).

SENER’s Gemasolar CSP project in Spain, the world’s first commercial solar tower, has successfully delivered day-and-night solar power since being grid-connected in 2011.

“I wouldn’t say there is a major problem for designing a plant for 35 years,” SENER Performance Guarantee Manager Sergio Relloso said. “In our plants we designed the components to last for 25 years and it is completely possible to last 35 years without a problem.”

Most of the expenses would fall under normal O&M costs, such as maintaining the thermal energy storage system that enables CSP to generate solar power at night. However, Relloso cautioned that higher O&M costs should be expected towards the end of a 35-year life, for example in major equipment like the steam generators in the power block.

“The HTF, for example; we normally replace a small quantity year by year in a trough project, just because with HTF there is some degradation,” he pointed out. “This is not the case with the salts in a tower project, because there you don’t have such a high temperature near the degradation limit: the salts top out at 565°C, while their limit is 600°C.”

ACWA Power’s 35-year DEWA project will combine both trough (600 MW) and tower (100 MW) technologies. In overall durability, mirrors, or heliostats – in both technologies – would see negligible degradation, Relloso said.

“We are not seeing any measurable degradation in our plants in mirrors; they have operated very well and normally the mirrors last a long time,” Relloso said, referencing SEGS.

“Mirrors have had a really good track record at SEGS. You would replace, year by year, the small number of mirrors that are broken, maybe in a high-wind event or during maintenance tasks. But the breakage rate is in the range of 0.1% to 0.3% of mirrors in a year – replacing mirrors is a very normal operation in a CSP plant.”
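As a rough check on what those breakage rates mean over a 35-year contract, the cumulative replacement works out to only a few percent of the solar field. This is a simple compounding sketch based on the quoted range, not SENER’s own figures.

```python
# Back-of-the-envelope check of cumulative mirror replacement over a
# 35-year contract, using the 0.1%-0.3% annual breakage range quoted above.

YEARS = 35
for annual_breakage in (0.001, 0.003):  # 0.1% and 0.3% per year
    replaced = 1 - (1 - annual_breakage) ** YEARS  # fraction of originals ever replaced
    print(f"{annual_breakage:.1%}/yr -> about {replaced:.1%} of original mirrors replaced in {YEARS} years")
# Roughly 3.4% at the low end and 10% at the high end of the quoted range.
```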

In a trough project, the receiver tubes that run along the length of the parabolic mirrors would have a higher replacement rate, he said, because “the receiver tubes in a trough plant are not as simple as the mirrors. They could be subjected to more degradation.”

But in both tower and trough technologies, Relloso said that all the metal components themselves would last – from the heliostat structures in the solar field to the pipe racks in the power block, as everything is adequately protected and designed for 35 years.

With the longer period at a known price, ACWA Power’s interesting contract design leverages the advantage of solar power generation; that its costs are more predictable over the long term than fossil energy, as the fuel is free.

With its ability to dispatch its power whenever needed, solar thermal energy competes directly with natural gas which is also a dispatchable form of thermal generation. Since CSP seems well suited to a 35-year lifespan, if the benefits outweigh the costs, longer contracts could enable lower costs going forward.

Stone tools in China suggest humans left Africa earlier than previously thought

Ancient tools and bones discovered in Shangchen in the southern Chinese Loess Plateau by archaeologists suggest early humans left Africa and arrived in Asia earlier than previously thought.

The artefacts show that our earliest human ancestors colonised East Asia over two million years ago. They were found by a Chinese team that was led by Professor Zhaoyu Zhu of the Chinese Academy of Sciences, and included Professor Robin Dennell of Exeter University.

The tools were discovered at a locality called Shangchen in the southern Chinese Loess Plateau. The oldest are about 2.12 million years old – some 270,000 years older than the 1.85-million-year-old skeletal remains and stone tools from Dmanisi, Georgia, which were previously the earliest evidence of humanity outside Africa.

The artefacts include a notch, scrapers, a cobble, hammerstones and pointed pieces. All show signs of use: the stone had been intentionally flaked. Most were made of quartzite and quartz that probably came from the foothills of the Qinling Mountains, 5 to 10 km south of the site, and from the streams flowing out of them. Fragments of animal bones 2.12 million years old were also found.

The Chinese Loess Plateau covers about 270,000 square kilometres, and during the past 2.6 million years between 100 and 300 metres of wind-blown dust – known as loess – has been deposited in the area. “Our discovery means it is necessary now to reconsider the timing of when early humans left Africa,” said Professor Dennell.

The 80 stone artefacts were found in 11 different layers of fossil soils which developed in a warm and wet climate. A further 16 items were found in six layers of loess that developed under colder and drier conditions. These 17 layers of loess and fossil soils were formed over a period spanning almost a million years, showing that early types of humans occupied the Chinese Loess Plateau under different climatic conditions between 1.2 and 2.12 million years ago.

The layers containing these stone tools were dated by matching their magnetic properties to known, dated changes in the Earth’s magnetic field.

 

Growing a dinosaur’s dinner, the way it was 150 million years ago

Scientists have measured the nutritional value of herbivore dinosaurs’ diet by growing their food in atmospheric conditions similar to those found roughly 150 million years ago.

Previously, many scientists believed that plants grown in an atmosphere with high carbon dioxide levels had low nutritional value. But a new experimental approach led by Dr Fiona Gill at the University of Leeds has shown this is not necessarily true.

The team grew dinosaur food plants, such as horsetail and ginkgo, under high levels of carbon dioxide mimicking atmospheric conditions similar to when sauropod dinosaurs, the largest animals ever to roam Earth, would have been widespread.

An artificial fermentation system was used to simulate digestion of the plant leaves in the sauropods’ stomachs, allowing the researchers to determine the leaves’ nutritional value. The findings, published in Palaeontology, showed many of the plants had significantly higher energy and nutrient levels than previously believed.

This suggests that the megaherbivores would have needed to eat much less per day and the ecosystem could potentially have supported a significantly higher dinosaur population density, possibly as much as 20% greater than previously estimated.

Dr Gill, a palaeontologist and geochemist from the School of Earth and Environment at Leeds, said: “The climate was very different in the Mesozoic era – when the huge brachiosaurus and diplodocus lived – with possibly much higher carbon dioxide levels. There has been the assumption that as plants grow faster and/or bigger under higher CO2 levels, their nutritional value decreases. Our results show this isn’t the case for all plant species.

“The large body size of sauropods at that time would suggest they needed huge quantities of energy to sustain them. When the available food source has higher nutrient and energy levels it means less food needs to be consumed to provide sufficient energy, which in turn can affect population size and density.

“Our research doesn’t give the whole picture of dinosaur diet or cover the breadth of the plants that existed at this time, but a clearer understanding of how the dinosaurs ate can help scientists understand how they lived.”

“The exciting thing about our approach to growing plants in prehistoric atmospheric conditions is that it can be used to simulate other ecosystems and diets of other ancient megaherbivores, such as Miocene mammals – the ancestors of many modern mammals.”

Deaths from heart-related disease rising in India, study finds

Death due to heart-related disease is on the rise in India, causing more than one-fourth of all deaths in the country in 2015 and affecting rural populations and young adults the most, a study suggests.

The work, led by Dr. Prabhat Jha, director of the Centre for Global Health Research at St. Michael’s Hospital in Toronto, Canada, is the first nationally representative study to measure cardiovascular mortality in India. It found that rates of dying from ischaemic heart disease – cardiac issues caused by a narrowing of the heart’s arteries – among people aged 30 to 69 increased rapidly in rural areas of India and surpassed those in urban areas between 2000 and 2015.

In contrast, the probability of dying from stroke decreased overall, but increased in India’s northeastern states, where a third of premature stroke deaths occurred although only one-sixth of the population lives there, the study said. In these states, deaths due to stroke were about three times higher than the country’s average.

“The finding that cardiac disease rose nationally in India and that stroke rose in some states was surprising,” said Dr. Jha, who is also a professor at the University of Toronto. “This study also unearthed an important fact for prevention of death due to cardiovascular disease. Most deaths were among people with previously known cardiac disease, and at least half were not taking any regular medications.”

Dr. Jha and his team also showed that younger adults, born after 1970, have the highest rate of death from narrowing of the heart’s arteries, which leads to ischaemic heart disease and stroke. Until now, most of the evidence on cardiovascular mortality in India has come from local studies, and the new study contains more detailed information that “we couldn’t have predicted based on earlier studies,” said Dr. Jha.

This research is part of the Million Death Study, one of the largest studies of premature deaths in the world. In India, most deaths occur at home and without medical attention. Hundreds of specially trained census staff in India knocked on doors to interview household members about deaths. Two physicians independently examined these “verbal autopsies” to establish the most likely cause of death.

“Making progress in fighting the leading cause of death in India is necessary for making progress at the global level,” Dr. Jha said. “We demonstrated the unexpected patterns of heart attack and stroke deaths. Both conditions need research and action if the world is going to achieve the United Nations Sustainable Development Goal of reducing cardiovascular mortality by 2030.”

The study was published in The Lancet Global Health.

84 endangered Amur leopards found in China, Russia

In mixed news for big cat conservationists, scientists estimate that 84 highly endangered Amur leopards are roaming in the wild across their current range along the southernmost border of Primorskii Province in Russia and Jilin Province of China.

This new estimate of the Amur leopard population was recently reported in the scientific journal, Conservation Letters by scientists from China, Russia, and the United States. The scientists combined forces to collate information from camera traps on both sides of the border of China and Russia to derive the estimate. Because there are no records of leopards in other parts of its former range, this estimate represents the total global population of this subspecies in the wild.

Although numbers are small, previous estimates in Russia were even lower, ranging from 25 to 50 individuals. However, those surveys, based on tracks left in the snow, were extremely difficult to interpret because of the unclear relationship between the number of tracks and the number of individuals. With camera traps, each individual can be identified by its unique spot pattern, providing a much more precise estimate.

Combining data from both countries increased both the precision and the accuracy of the estimate. Surprisingly, about one-third of the leopards were photographed on both sides of the Sino-Russian border.

Anya Vitkalova, a biologist at Land of the Leopard National Park in Russia, and one of the two lead authors of the publication said: “We knew that leopards moved across the border, but only by combining data were we able to understand how much movement there really is.”

Despite the movement, there were differences in population dynamics in Russia versus China. Leopards are currently recolonizing habitat in China by dispersing from the Russian side, where leopard numbers appear to be close to the maximum that can be supported.

Because of these transboundary movements of leopards, simply adding results from both sides would have greatly exaggerated the estimate.
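The correction is simple inclusion-exclusion arithmetic. The side counts below are hypothetical (the study does not report this exact split); they only illustrate why naive addition overcounts when roughly a third of the 84 leopards appear in both countries’ camera traps.

```python
# Hypothetical counts illustrating the double-counting problem.
russia = 60  # assumed individuals identified on the Russian side
china = 52   # assumed individuals identified on the Chinese side
both = 28    # assumed individuals photographed on BOTH sides (~one-third of 84)

naive_total = russia + china       # counts every cross-border cat twice
corrected = russia + china - both  # inclusion-exclusion removes the overlap
print(naive_total, corrected)      # the naive sum overstates the population by `both`
```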

Dale Miquelle, a co-author and Tiger Program Coordinator for the Wildlife Conservation Society noted: “This first rigorous estimate of the global population of the Amur leopard represents an excellent example of the value of international collaboration. The trust and goodwill generated by this joint effort lays the foundation for future transboundary conservation actions.”

JEE, NEET go GMAT way, to hold entrance test twice a year

The Joint Entrance Examination (Mains) and the National Eligibility cum Entrance Test (NEET) will be conducted twice a year instead of just once, and the best score will be taken into account for admissions.

Union Minister of Human Resource Development Prakash Javadekar announced on Saturday that the National Testing Agency (NTA), and not the Central Board of Secondary Education (CBSE), would conduct these exams. In addition, the NTA will also conduct the UGC NET, CMAT and GPAT examinations from this year itself.

A senior official said the tests will be made foolproof: question papers will be downloaded at the test centres just before the exam and distributed to candidates’ computers through a local server, with the whole system encrypted.

“There would be no examiners and the answers would be fed into the system, so a candidate would know her raw score immediately. The result would come out after some days, to allow any complaints to be addressed,” the official told The Hindu.

Those who do not have a computer at home will be able to practise at authorised centres, which will start functioning from August this year.

Out of 10.5 lakh students who took the JEE this year, two lakh took it online, he noted. “If schools consider holding some Class-11 exams mandatorily online, students will catch up faster,” former CBSE Chairman Ashok Ganguly told The Hindu.

Where’s world’s largest mobile manufacturing unit? It’s here in NOIDA, India

Samsung India will open its new manufacturing unit – the largest mobile manufacturing unit in the world – on Monday, July 9, 2018. With it, the company aims to cut into China’s lead in mobile phone manufacturing.

Built on a 35-acre complex at Sector 81 in Noida, Uttar Pradesh, adjoining Delhi, the new factory will be inaugurated by visiting South Korean President Moon Jae-in on Monday.

Samsung’s first manufacturing facility in India was set up back in 1997 and was upgraded to produce mobile phones in 2005; with the addition of the new complex, it will become the largest mobile manufacturing unit in the world.

In June 2017, the South Korean company said it would invest Rs 4,915 crore to expand the Noida plant and double its production, likely by 2019. From the current 67 million smartphone units manufactured in India each year, the company expects to produce 120 million mobile phones by the end of next year.

Samsung is already a household name in India, with many home appliances like refrigerators and flat-panel televisions being made here. The Noida unit is close to the capital and the entire northern market, giving the company leeway in terms of distribution.

In the south, Samsung has a facility in Sriperumbudur, Tamil Nadu, which also hosts a key R&D centre. Samsung India employs over 70,000 people. Once the new unit in Noida comes up, India alone will account for 50% of the company’s worldwide mobile phone production.

With annual sales of over Rs 50,000 crore, Samsung India is likely to triple its sales in India by the end of 2020.

The odds of living to 110 or beyond level out at 105, says study

Want to be a supercentenarian? The chances of reaching the ripe old age of 110 are within reach – if you survive the perilous 90s and make it to 105 when death rates level out, according to a study of extremely old Italians led by the University of California, Berkeley, and Sapienza University of Rome.

Researchers tracked the death trajectories of nearly 4,000 residents of Italy who were aged 105 and older between 2009 and 2015. They found that the chances of survival for these longevity warriors plateaued once they made it past 105.

The findings, to be published in the June 29 issue of the journal Science, challenge previous research that claims the human lifespan has a final cut-off point. To date, the oldest human on record, Jeanne Calment of France, died in 1997 at age 122.

“Our data tell us that there is no fixed limit to the human lifespan yet in sight,” said study senior author Kenneth Wachter, a UC Berkeley professor emeritus of demography and statistics. “Not only do we see mortality rates that stop getting worse with age, we see them getting slightly better over time.”

Specifically, the results show that people between the ages of 105 and 109, known as semi-supercentenarians, had a 50/50 chance of dying within the year and an expected further lifespan of 1.5 years. That life expectancy was projected to be the same for 110-year-olds, or supercentenarians – hence the plateau.

The trajectory for nonagenarians is less forgiving. For example, the study found that Italian women born in 1904 who reached age 90 had a 15 percent chance of dying within the next year, and six years, on average, to live. If they made it to 95, their odds of dying within a year increased to 24 percent and their life expectancy from that point on dropped to 3.7 years.
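The plateau figures are internally consistent: a constant 50/50 annual death probability implies an expected further lifespan of about 1.5 years. A minimal sketch, assuming a flat annual hazard past 105 and deaths falling mid-year on average (a simplification, not the study’s statistical model):

```python
q = 0.5  # assumed constant annual probability of dying once past age 105

# Expected further lifespan: survive k full years with prob (1-q)^k, die in
# the following year with prob q, credited as k + 0.5 years lived on average.
expected_years = sum((1 - q) ** k * q * (k + 0.5) for k in range(200))
print(f"Expected further lifespan: {expected_years:.2f} years")  # ~1.50
```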

Overall, Wachter and fellow researchers tracked the mortality of 3,836 Italians – supercentenarians and semi-supercentenarians – born between 1896 and 1910, using the latest data from the Italian National Institute of Statistics.

They credit the institute for reliably tracking extreme ages due to a national validation system that measures age at time of death to the nearest day: “These are the best data for extreme-age longevity yet assembled,” Wachter said.

As humans live into their 80s and 90s, mortality rates surge due to frailty and a higher risk of such ailments as heart disease, dementia, stroke, cancer and pneumonia.

Evolutionary demographers like Wachter and study co-author James Vaupel theorize that those who survive do so because of demographic selection and/or natural selection. Frail people tend to die earlier while robust people, or those who are genetically blessed, can live to extreme ages, they say.

Wachter notes that similar lifecycle patterns have been found in other species, such as flies and worms.

“What do we have in common with flies and worms?” he asked. “One thing at least: We are all products of evolution.”

‘Breakthrough’ algorithm exponentially faster than any previous one

What if a large class of algorithms used today — from the algorithms that help us avoid traffic to the algorithms that identify new drug molecules — worked exponentially faster?

Computer scientists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a completely new kind of algorithm, one that exponentially speeds up computation by dramatically reducing the number of parallel steps required to reach a solution.

The researchers will present their novel approach at two upcoming conferences: the ACM Symposium on Theory of Computing (STOC), June 25-29 and International Conference on Machine Learning (ICML), July 10 -15.

A lot of so-called optimization problems – problems that find the best solution from all possible solutions, such as mapping the fastest route from point A to point B – rely on sequential algorithms that haven’t changed since they were first described in the 1970s. These algorithms solve a problem step by step, with the number of steps proportional to the size of the data. This has created a computational bottleneck, leaving lines of questioning and areas of research that are just too computationally expensive to explore.

“These optimization problems have a diminishing returns property,” said Yaron Singer, Assistant Professor of Computer Science at SEAS and senior author of the research. “As an algorithm progresses, its relative gain from each step becomes smaller and smaller.”

Singer and his colleague asked: what if, instead of taking hundreds or thousands of small steps to reach a solution, an algorithm could take just a few leaps?

“This algorithm and general approach allows us to dramatically speed up computation for an enormously large class of problems across many different fields, including computer vision, information retrieval, network analysis, computational biology, auction design, and many others,” said Singer. “We can now perform computations in just a few seconds that would have previously taken weeks or months.”

“This new algorithmic work, and the corresponding analysis, opens the doors to new large-scale parallelization strategies that have much larger speedups than what has ever been possible before,” said Jeff Bilmes, Professor in the Department of Electrical Engineering at the University of Washington, who was not involved in the research. “These abilities will, for example, enable real-world summarization processes to be developed at unprecedented scale.”

Traditionally, algorithms for optimization problems narrow down the search space for the best solution one step at a time. In contrast, this new algorithm samples a variety of directions in parallel. Based on that sample, the algorithm discards low-value directions from its search space and chooses the most valuable directions to progress towards a solution.

Take this toy example:

You’re in the mood to watch a movie similar to The Avengers. A traditional recommendation algorithm would sequentially add a single movie in every step which has similar attributes to those of The Avengers. In contrast, the new algorithm samples a group of movies at random, discarding those that are too dissimilar to The Avengers. What’s left is a batch of movies that are diverse (after all, you don’t want ten Batman movies) but similar to The Avengers. The algorithm continues to add batches in every step until it has enough movies to recommend.
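The movie example can be sketched in a few lines of Python. This is a hypothetical toy illustration of the adaptive-sampling idea, not the authors’ implementation; the similarity scores, threshold and batch size are all assumed.

```python
import random

def similarity(movie):
    """Hypothetical similarity score (0..1) to the target movie, e.g. The Avengers."""
    return movie["sim"]

def adaptive_recommend(catalog, wanted=5, batch_size=8, threshold=0.6, seed=0):
    rng = random.Random(seed)
    pool = list(catalog)  # the search space
    picked = []
    while pool and len(picked) < wanted:
        # Sample a whole batch at once; these evaluations could run in parallel.
        batch = rng.sample(pool, min(batch_size, len(pool)))
        for movie in batch:     # every sampled movie leaves the search space:
            pool.remove(movie)  # similar ones get picked, dissimilar ones are pruned
        picked.extend(m for m in batch if similarity(m) >= threshold)
    return picked[:wanted]

catalog = [{"title": f"movie{i}", "sim": i / 49} for i in range(50)]
print([m["title"] for m in adaptive_recommend(catalog)])
```

Because each round removes an entire batch from consideration, the number of sequential rounds grows far more slowly than when adding one movie per step.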

This process of adaptive sampling is key to the algorithm’s ability to make the right decision at each step.

“Traditional algorithms for this class of problem greedily add data to the solution while considering the entire dataset at every step,” said Eric Balkanski, graduate student at SEAS and co-author of the research. “The strength of our algorithm is that in addition to adding data, it also selectively prunes data that will be ignored in future steps.”

In experiments, Singer and Balkanski demonstrated that their algorithm could sift through a data set which contained 1 million ratings from 6,000 users on 4,000 movies and recommend a personalized and diverse collection of movies for an individual user 20 times faster than the state-of-the-art.

The researchers also tested the algorithm on a taxi dispatch problem, where there are a certain number of taxis and the goal is to pick the best locations to cover the maximum number of potential customers. Using a data set of two million taxi trips from the New York City taxi and limousine commission, the adaptive-sampling algorithm found solutions 6 times faster.

“This gap would increase even more significantly on larger scale applications, such as clustering biological data, sponsored search auctions, or social media analytics,” said Balkanski.

Of course, the algorithm’s potential extends far beyond movie recommendations and taxi dispatch optimizations. It could be applied to:

  • designing clinical trials for drugs to treat Alzheimer’s, multiple sclerosis, obesity, diabetes, hepatitis C, HIV and more
  • evolutionary biology to find good representative subsets of different collections of genes from large datasets of genes from different species
  • designing sensor arrays for medical imaging
  • identifying drug-drug interaction detection from online health forums

“This research is a real breakthrough for large-scale discrete optimization,” said Andreas Krause, professor of Computer Science at ETH Zurich, who was not involved in the research. “One of the biggest challenges in machine learning is finding good, representative subsets of data from large collections of images or videos to train machine learning models. This research could identify those subsets quickly and have substantial practical impact on these large-scale data summarization problems.”

Singer-Balkanski model and variants of the algorithm developed in the paper could also be used to more quickly assess the accuracy of a machine learning model, said Vahab Mirrokni, a principal scientist at Google Research, who was not involved in the research.

“In some cases, we have a black-box access to the model accuracy function which is time-consuming to compute,” said Mirrokni. “At the same time, computing model accuracy for many feature settings can be done in parallel. This adaptive optimization framework is a great model for these important settings and the insights from the algorithmic techniques developed in this framework can have deep impact in this important area of machine learning research.”

Singer and Balkanski are continuing to work with practitioners on implementing the algorithm.

New spider species found deep in southern Indiana cave

IMAGE: This is a female specimen of the newly described rare spider species Islandiana lewisi.

Credit: Dr. Marc Milne

Spiders are ubiquitous within our forests, fields, and backyards. Although you may be used to seeing the beautiful yellow and black spiders of the genus Argiope in your garden, large ground-scurrying wolf spiders in your yard, or spindly cellar spiders in your basement, this new sheet-web-building spider is probably one you haven’t seen before. The reason is that it’s known from a single cave in the world, Stygeon River Cave, in southern Indiana.

University of Indianapolis assistant professor Dr. Marc Milne described the rare species in the open-access journal Subterranean Biology, with the help of University of Indianapolis alumna Elizabeth Wells, who illustrated the spider for the manuscript.

Sheet weavers, also known as dwarf spiders or money spiders, are minute creatures growing no larger than a few millimetres in length, which makes them particularly elusive. Their peculiar webs are flat and sheet-like, hence their common English name.

The new spider, Islandiana lewisi, is an homage. Milne was shown the spider by a fellow scientist, Dr. Julian Lewis, who noticed the critter on one of his many cave expeditions. In appreciation for his help, Milne and Wells named the spider after Lewis.

This is the fifteenth species in its genus (Islandiana) and the fifth known to live exclusively in caves. It has been over 30 years since a species was last added to this group.

At about 2 mm in size, Islandiana lewisi is thought to feed on even smaller arthropods, such as springtails living in the debris on the cave floor. It is unknown when it reproduces or if it exists anywhere else. The spider is likely harmless to humans.

The collectors of the spider, Milne and Lewis, described the hostile conditions within the cave, which the new species calls home: “because the cave floods from time to time, the insides were wet, muddy, slippery, and dangerous to walk on without the proper equipment.”

Milne and Lewis found the spider in small, horizontal webs between large, mud-caked boulders in the largest room in the cave. It was collected in October 2016 with the permission of the landowner.

Milne suspected that he had collected something special, stating, “I didn’t know what the spider was at first, I just thought it was odd that so many were living within this dark cave with no other spider species around.”

After returning to the lab and inspecting the spider under a microscope, Milne initially misidentified the species. However, when he re-examined it months later, he realized that the species was indeed new to science.

NASA infrared data reveals Tropical Storm Emilia is strengthening

IMAGE: On June 28 at 4:59 p.m. EDT (2059 UTC) the AIRS instrument aboard NASA’s Aqua satellite showed powerful storms with very cold cloud top temperatures (purple). Credits: NASA JPL, Heidar Thrastarson

Infrared NASA satellite imagery provided cloud top temperatures of thunderstorms that make up Tropical Storm Emilia. Comparing those NASA temperature readings with another satellite’s data obtained the following day, forecasters determined that Emilia had strengthened.

At NASA’s Jet Propulsion Laboratory in Pasadena, California, infrared data taken of Emilia by the Atmospheric Infrared Sounder or AIRS instrument that flies aboard NASA’s Aqua satellite was made into a false-colored infrared image. That data from June 28 at 4:59 p.m. EDT (2059 UTC) revealed powerful storms with very cold cloud top temperatures colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius) around the center.

By Friday, June 29, 2018 the National Hurricane Center noted that those cloud tops had cooled, indicating the uplift in the storm was stronger, and the cloud tops were higher. That means the storm was intensifying. NHC said “Shortwave infrared imagery and an earlier [4:55 a.m. EDT] 0855 UTC polar orbiter (satellite) pass show deep convective bursts, with associated minus 78 degree Celsius [minus 108.4 degrees Fahrenheit] cloud tops, developing near the surface center.”

Emilia is far enough away from land so that there are no coastal watches or warnings in effect.

At 11 a.m. EDT (1500 UTC) on June 29, the center of Tropical Storm Emilia was located near latitude 16.2 degrees north and longitude 116.3 degrees west. That’s about 620 miles (1,000 km) southwest of the southern tip of Baja California, Mexico.

The National Hurricane Center (NHC) said that Emilia is moving toward the west-northwest near 12 mph (19 kph), and this general motion is expected to continue for the next few days. Maximum sustained winds have increased to near 60 mph (95 kph) with higher gusts. Tropical-storm-force winds extend outward up to 80 miles (130 km) from the center. The estimated minimum central pressure is 997 millibars.

NHC said “Some additional strengthening is possible during the next 24 hours before Emilia moves over cool waters and begins to weaken over the weekend.”

How do female blackbucks use scent to size up males?

At Tal Chhapar, a wildlife sanctuary in the heart of the Thar desert, a strange drama is staged twice every year. In the blistering heat of summer from March to April and the post-monsoon months of September and October, up to a hundred blackbuck males stake out territories on the flat land to entice females to mate with them in a unique assemblage called a lek.

Female blackbuck who visit the lek generally spend large amounts of time evaluating males before choosing one as a mate. A large part of this evaluation seems to be based on sniffing: even when being courted, females are so intent on inspecting odors from the dung piles that they are often oblivious to the males’ antics.

What are these females nosing around for?

To answer this question, Jyothi Nair, a student from Uma Ramakrishnan’s group at the National Centre for Biological Sciences (NCBS), Bangalore, collaborated with Shannon Olsson’s team, also from NCBS, to develop a pipeline for investigating odors in a quick, efficient way. In a publication in the journal Ecology and Evolution, the researchers document their evaluation of different odor collection, identification, and analysis techniques, and describe a protocol optimized for large-scale sampling of odors. Using this protocol, the team has also found that dung piles of males with high mating success seem to be much richer in the chemical meta-cresol than those of less successful males.

“Collecting odor samples from a remote area like Tal Chhapar is an extremely difficult task,” says Nair. This is because most collection methods require many hours to obtain sufficient amounts of odor for successful analysis. Furthermore, depending on collection methods, samples are often unstable and decompose very quickly even when stored at low temperatures. This is often impossible in remote field sites where refrigeration facilities are non-existent.

Through trial and error, however, the research team from NCBS found a solution: solid phase extraction. In this technique, odors from fecal samples were adsorbed onto tiny tubes made of a silicone polymer called polydimethylsiloxane (PDMS). The odor samples in these PDMS tubes were found to be stable enough that they could then be transported safely without refrigeration to NCBS, Bangalore for chemical analyses.

In the laboratory, standard procedures such as thermal desorption (where the odor-laden PDMS tubes are heated to release trapped volatiles), gas chromatography, and mass spectrometry were used to separate and analyze the compounds making up each odor sample.

“During analysis, we faced a lot of problems in identifying compounds,” says V.S. Pragadeesh, who helped Nair with this work. “Compared to plant volatiles, there are comparatively few studies on mammalian volatiles, so we had very little information to help us recognize chemicals in these odors.”

Routine analysis for such data usually involves manual identification and documentation of the detected compounds to create a chemical profile of each odor sample. Conventionally, the process would have taken more than 8 months for all the data in this study. However, the analysis time was reduced to just two weeks through a collaboration with computational biologist, Snehal Karpe, who helped the team develop a semi-automated process that could quickly and efficiently analyze components of each sample with fairly low error rates.

To make sense of all this information, Nair then used a statistical technique called ‘Random Forests’ to compare the chemical profiles of dung piles from different locations within the lek. What emerged was a strong spatial pattern: dung piles of males at the centre of the lek, where mating success was highest, had much higher levels of the chemical meta-cresol than those of males towards the periphery.
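The idea behind the Random Forests step can be illustrated with a small sketch. This is not the authors’ actual pipeline: the chemical profiles below are simulated, and the compound list is invented for illustration, but it shows how a random forest’s feature importances can point to the compound (here meta-cresol) that best separates central from peripheral dung piles.

```python
# Hypothetical sketch: classifying simulated dung-pile odor profiles by lek
# position with a random forest, then asking which compound drives the split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
compounds = ["meta-cresol", "compound-B", "compound-C", "compound-D"]

# Simulate 40 dung piles: central males are made much richer in meta-cresol.
center = rng.normal(loc=[5.0, 1.0, 1.0, 1.0], scale=0.5, size=(20, 4))
periphery = rng.normal(loc=[1.0, 1.0, 1.0, 1.0], scale=0.5, size=(20, 4))
X = np.vstack([center, periphery])
y = ["center"] * 20 + ["periphery"] * 20

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Feature importances reveal which compound separates the two groups.
top = compounds[int(np.argmax(clf.feature_importances_))]
print(top)
```

With such a large built-in difference, the forest’s top-ranked feature is the simulated meta-cresol column; on real field data the importances would have to be interpreted alongside the spatial pattern, as the study did.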

Meta-cresol is a well-known chemical used for communication in many insects and a few mammals such as elephants and horses. The team is now busy testing different chemicals identified in this study, including meta-cresol, on the behavior of captive blackbuck at Mysore zoo.

“This research has been exciting on so many fronts. There are relatively few population studies on chemical communication, and this is the first field-based chemical ecology study for this amazing Indian mammal,” says Dr. Shannon Olsson, who heads the NICE (Naturalist-Inspired Chemical Ecology) laboratory at NCBS, and has been a close collaborator in this study.

“It’s amazing to think that we can map ‘smells’ on the blackbuck lek! This is the first step to better understand whether smells vary across the lek, and potentially, how successful males smell. All thanks to our collaboration with the NICE lab,” says Dr. Uma Ramakrishnan, who is Nair’s mentor at NCBS.

“We hope that our pipeline will inspire more large-scale studies in chemical ecology that can be used to understand the remarkable biodiversity of this country and the world,” adds Olsson.

Self-monitoring diabetes reduces future costs by half: Study

Self-monitoring of type 2 diabetes used in combination with an electronic feedback system results in considerable savings on health care costs especially in sparsely populated areas, a new study from the University of Eastern Finland shows.

Self-monitoring delivers considerable savings on the overall costs of type 2 diabetes care, as well as on patients’ travel costs. Glycated hemoglobin testing is an important part of managing diabetes, and also a considerable cost item.

By replacing half of the required follow-up visits with self-measurements and electronic feedback, the annual total costs of glycated hemoglobin monitoring were reduced by nearly 60 per cent, bringing the per-patient cost down from 280 EUR (300 USD) to 120 EUR (130 USD). With fewer follow-up visits required, the average annual travel costs of patients were reduced by over 60 per cent, from 45 EUR (48 USD) to 17 EUR (18 USD) per patient. The study was published in the International Journal of Medical Informatics.
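The reported reductions follow from simple arithmetic; a quick sketch, using the per-patient figures quoted above:

```python
# Check the savings reported in the study (EUR per patient per year).
def pct_reduction(before, after):
    """Percentage saved when an annual cost falls from `before` to `after`."""
    return 100 * (before - after) / before

monitoring = pct_reduction(280, 120)  # total glycated hemoglobin monitoring cost
travel = pct_reduction(45, 17)        # average patient travel cost

print(f"monitoring: {monitoring:.0f}%, travel: {travel:.0f}%")
# → monitoring: 57%, travel: 62%
```

That is roughly "nearly 60 per cent" and "over 60 per cent", consistent with the study's figures.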

Carried out in the region of North Karelia in Finland, the study applied geographic information systems (GIS)-based geospatial analysis combined with patient registers. This was the first time the costs of type 2 diabetes follow-up were systematically calculated over a health care district in Finland. The study analysed 9,070 patients diagnosed with type 2 diabetes. Combined travel and time costs amount to 21 per cent of the total costs of glycated hemoglobin monitoring for patients with type 2 diabetes.

“The societal cost-efficiency of type 2 diabetes care could be improved by taking into consideration not only the direct costs of glycated hemoglobin monitoring, but also the indirect costs, such as patients’ travel costs,” Researcher Aapeli Leminen from the University of Eastern Finland says.

The study used a georeferenced cost model to analyse health care accessibility and different costs associated with the follow-up of type 2 diabetes. Patients’ travel and time costs were analysed by looking at how well health care services could be reached on foot or by bike, or by using a private car, a bus, or a taxi. According to Leminen, the combination of patient registers and GIS opens up new opportunities for research within the health care sector.

“This cost model we’ve now tested in the eastern part of Finland can easily be used in other places as well to calculate the costs of different diseases, such as cancer and cardiovascular diseases.”

Forests may lose ability to protect against extremes of climate change: Study

Forests, one of the most dominant ecosystems on Earth, harbor significant biodiversity. Scientists have become increasingly interested in how this diversity is enhanced by the sheltering microclimates produced by trees.

A recent University of Montana study suggests that a warming climate in the Pacific Northwest would lessen the capacity of many forest microclimates to moderate climate extremes in the future.

The study was published in Ecography: A Journal of Space and Time in Ecology. It is online at http://bit.ly/2KcO1iC.

“Forest canopies produce microclimates that are less variable and more stable than similar settings without forest cover,” said Kimberley Davis, a UM postdoctoral research associate and the lead author of the study. “Our work shows that the ability of forests to buffer climate extremes is dependent on canopy cover and local moisture availability – both of which are expected to change as the Earth warms.”

She said many plants and animals that live in the understory of forests rely on the stable climate conditions found there. The study suggests some forests will lose their capacity to buffer climate extremes as water becomes limited at many sites.

“Changes in water balance, combined with accelerating canopy losses due to increases in the frequency and severity of disturbance, will create many changes in the microclimate conditions of western U.S. forests,” Davis said.

What makes dogs man’s best friend?

From pugs to labradoodles to huskies, dogs are our faithful companions. They live with us, play with us and even sleep with us. But how did a once nocturnal, fearsome wolf-like animal evolve over tens of thousands of years to become a beloved member of our families? And what can dogs tell us about human health? Through the power of genomics, scientists have been comparing dog and wolf DNA to try to identify the genes involved in domestication.

Amanda Pendleton, a postdoctoral research fellow in the Michigan Medicine Department of Human Genetics, has been reviewing current domestication research and noticed something peculiar about the DNA of modern dogs: at some places it didn’t appear to match DNA from ancient dogs. Pendleton and her colleagues in assistant professor Jeffrey Kidd, Ph.D.’s laboratory are working to understand the dog genome to answer questions in genome biology, evolution and disease.

Breed dogs, which mostly arose around 300 years ago, are not fully reflective of the genetic diversity in dogs around the world, she explains. Three-quarters of the world’s dogs are so-called village dogs, who roam, scavenge for food near human populations and are able to mate freely. In order to get a fuller picture of the genetic changes at play in dog evolution, the team looked at 43 village dogs from places such as India, Portugal and Vietnam.

Armed with DNA from village dogs, ancient dogs found at burial sites from around 5,000 years ago, and wolves, they used statistical methods to tease out genetic changes that resulted from humans’ first efforts at domestication from those associated with the development of specific breeds. This new genetic review revealed 246 candidate domestication sites, most of them identified for the first time by their lab.

Now that they’d identified the candidate genes the question remained: What do those genes do?

‘A good entry point’

Upon closer inspection, the researchers noticed that these genes influenced brain function, development and behavior. Moreover, the genes they found appeared to support what is known as the neural crest hypothesis of domestication. “The neural crest hypothesis posits that the phenotypes we see in domesticated animals over and over again — floppy ears, changes to the jaw, coloration, tame behavior — can be explained by genetic changes that act in a certain type of cell during development called neural crest cells, which are incredibly important and contribute to all kinds of adult tissues,” explains Pendleton. Many of the genetic sites they identified contained genes that are active in the development and migration of neural crest cells.

One gene in particular stuck out: RAI1, the study’s highest ranked gene. In a different lab within the Department of Human Genetics, Michigan Medicine assistant professor of human genetics Shigeki Iwase has been studying this gene’s function and role in neurodevelopmental disorders. He notes that in humans, changes to the RAI1 gene result in one of two syndromes — Smith-Magenis syndrome if RAI1 is missing or Potocki-Lupski syndrome if RAI1 is duplicated.

Kidd said that they are using these changes that were selected for by humans for thousands of years as a way to understand the natural function and gene regulatory environment of the neural crest in all vertebrates.

Cyclone Prapiroon: NASA’s GPM satellite finds heavy rainfall on southwestern side

When the Global Precipitation Measurement mission or GPM core satellite passed over the Northwestern Pacific Ocean, it saw very heavy rainfall occurring in one part of Tropical Storm Prapiroon.

Tropical Depression 09W was located east of the Philippines when it was upgraded early today, June 29, to Tropical Storm Prapiroon. The tropical storm is in a favorable environment for intensification. Vertical wind shear is low above the tropical cyclone and sea surface temperatures are warm below.

NASA’s GPM core observatory satellite had a good view of Tropical Storm Prapiroon on June 29, 2018 at 0246 UTC (June 28 at 10:46 p.m. EDT). GPM is a joint mission between NASA and the Japan Aerospace Exploration Agency, JAXA.

At the time GPM passed overhead, Prapiroon was barely a tropical storm with maximum sustained wind speeds estimated at about 35 knots (40.3 mph). GPM’s Microwave Imager (GMI) and Dual-Frequency Precipitation Radar (DPR) instruments measured precipitation around Prapiroon. GPM showed that the intensifying storm was fairly large, with its most intense rainfall located in the southern part of the storm. GPM’s radar (DPR Ku Band) scanned convective storms in a feeder band on the southwestern side of the tropical storm, where it found that some very intense storms there were dropping rain at a rate of over 192 mm (7.6 inches) per hour.

A 3-D view of Tropical Storm Prapiroon’s precipitation, looking toward the southwest, was created at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, using data captured by GPM’s radar (DPR Ku Band). GPM’s DPR probes provided excellent information about the powerful storms in the large rain band wrapping around Prapiroon’s western side. Storm top heights in that part of the storm were measured by GPM’s radar reaching heights above 12.5 km (7.8 miles).

On June 29 at 11 a.m. EDT (1500 UTC), Prapiroon was centered near 20.0 degrees north latitude and 129.7 degrees east longitude. That’s about 404 nautical miles south-southeast of Kadena Air Base, Okinawa, Japan. The storm is moving to the northwest at 5 knots (5.7 mph/9.2 kph). Maximum sustained winds were near 45 knots (52 mph/83 kph).
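The paired wind speeds quoted in these advisories follow from the standard conversion factors (1 knot = 1.15078 mph = 1.852 km/h); a small helper for illustration:

```python
# Convert advisory wind speeds from knots to mph and km/h.
def knots_to_mph_kph(knots):
    return knots * 1.15078, knots * 1.852

# Prapiroon's maximum sustained winds at this advisory were 45 knots.
mph, kph = knots_to_mph_kph(45)
print(round(mph), round(kph))  # → 52 83
```

Rounded, this reproduces the 52 mph / 83 kph figures given above.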

The Joint Typhoon Warning Center (JTWC) predicts that Prapiroon will move toward the north-northwest and intensify into a typhoon on June 30. Prapiroon is expected to continue intensifying and have peak wind speeds of about 75 knots (86 mph/139 kph) as it passes over the East China Sea in a few days.

Prapiroon is predicted by the JTWC to be a minimal typhoon with winds of 65 knots (75 mph/120 kph) as it approaches South Korea on July 2, 2018.

Facing the music from a volcano eruption? Better listen to it beforehand, says study

A volcano in Ecuador with a deep cylindrical crater might be the largest musical instrument on Earth, producing unique sounds scientists could use to predict its eruption, said a new study.

Cotopaxi volcano in central Ecuador erupted in 2015, and its crater changed shape. New infrasound recordings show that the deep, narrow crater now makes air reverberate against the crater walls when the volcano rumbles, producing sound waves like those of a pipe organ.

“It’s the largest organ pipe you’ve ever come across,” said Jeff Johnson, a volcanologist at Boise State University in Idaho and lead author of a new study detailing the findings in Geophysical Research Letters, a journal of the American Geophysical Union.

The new findings show that each volcano’s unique “voiceprint” can help scientists better monitor these natural hazards and alert them to what is going on inside the volcano ahead of an impending eruption, said the study authors.

“Understanding how each volcano speaks is vital to understanding what’s going on,” Johnson said. “Once you realize how a volcano sounds, if there are changes to that sound, that leads us to think there are changes going on in the crater, and that causes us to pay attention.”

The ongoing eruption of Kilauea in Hawaii could be a proving ground for studying how changes to a crater’s shape influence the sounds it makes, according to Johnson.

The lava lake at Kilauea’s summit drained as the magma supplying it flowed downward, which should change the tones of the infrasounds emitted by the crater.

Listening to Kilauea’s infrasound could help scientists monitor the magma depth from afar and forecast its potential eruptive hazards, according to David Fee, a volcanologist at the University of Alaska Fairbanks who was not connected to the new study.

When magma levels at Kilauea’s summit drop, the magma can heat groundwater and cause explosive eruptions, which is believed to have happened at Kilauea over the past several weeks. This can change the infrasound emitted by the volcano.

“It’s really important for scientists to know how deep the crater is, if the magma level is at the same depth and if it’s interacting with the water table, which can create a significant hazard,” Fee said.

Teleconnection? To forecast winter rainfall in LA, take cue from New Zealand summer

Variability in El Niño cycles was long considered a reliable tool for predicting winter precipitation in the Southwest United States, but its forecasting power has diminished considerably over the years. Scientists at the University of California, Irvine (UCI) have found a new link to predict wet or dry conditions for the winter far ahead.

“Influences between the hemispheres promise earlier and more accurate prediction of winter precipitation in California and the Southwest U.S.,” said study co-author Efi Foufoula-Georgiou of UCI. “Knowing how much rain to expect in the coming winter is crucial for the economy, water security, and ecosystem management of the region.”

The researchers call the new ‘teleconnection’ the New Zealand Index (NZI) because the sea surface temperature anomaly that triggers it begins in July and August in the southwestern Pacific Ocean near New Zealand.

The heating or cooling of the sea surface temperature there causes a change in the southern Hadley cell, a zone of atmospheric circulation from the equator to the 30-degree south parallel.

In turn, the Hadley cell change causes an anomaly east of the Philippine Islands, resulting in a strengthening or weakening of the jet stream in the Northern Hemisphere. That directly influences the amount of rain that falls on California from November through March.

In conducting the research, the team performed an analysis of sea surface temperatures and atmospheric pressures from 1950 to 2015 in 1- and 2-degree cells around the globe.

“With the NZI, we can predict the likelihood of above- or below-normal winter precipitation in the Southwest U.S.,” said lead author Antonios Mamalakis of UCI. “Our research also shows an amplification of this newly discovered “teleconnection” over the past four decades.”

Mamalakis said that the most unexpected result is the discovery of persistent sea surface temperature and atmospheric pressure patterns in the southwestern Pacific that show a strong connection with precipitation in Southern California, Nevada, Arizona and Utah.

In recent years, however, El Niño conditions did not bring heavy rains to California as they have in the past, prompting researchers to look for a link across the interhemispheric bridge.

The study was published in Nature Communications. 

Weather forecasts to become more accurate with new evaporation-tracking technique

Rainfall forecasts have long been prone to error, but researchers from the University of Missouri have developed a system that improves the precision of forecasts by accounting for evaporation in rainfall estimates, particularly for locations 30 miles or more from the nearest National Weather Service radar.

“Right now, forecasts are generally not accounting for what happens to a raindrop after it is picked up by radar,” said Neil Fox of the School of Natural Resources at MU. “Evaporation has a substantial impact on the amount of rainfall that actually reaches the ground. By measuring that impact, we can produce more accurate forecasts that give farmers, agriculture specialists and the public the information they need.”

Fox and doctoral student Quinn Pallardy used dual-polarization radar, which sends out two radar beams polarized horizontally and vertically, to differentiate between the sizes of raindrops. Since a raindrop’s size affects both its evaporation rate and its motion, with smaller raindrops evaporating more quickly but encountering less air resistance, combining these measurements helped them make predictions more accurate.

By combining this information with a model that assessed the humidity of the atmosphere, the researchers were able to develop a tracing method that followed raindrops from the point when they were observed by the radar to when they hit the ground, precisely determining how much evaporation would occur for any given raindrop.

Researchers found that this method significantly improved the accuracy of rainfall estimates, especially in locations at least 30 miles from the nearest National Weather Service radar. Radar beams rise higher into the atmosphere as they travel, and as a result, radar that does not account for evaporation becomes less accurate at greater distances because it observes raindrops that have not yet evaporated.

“Many of the areas that are further from the radar have a lot of agriculture,” Fox said. “Farmers depend on rainfall estimates to help them manage their crops, so the more accurate we can make forecasts, the more those forecasts can benefit the people who rely on them.”

Fox said more accurate rainfall estimates also contribute to better weather forecasts in general, as rainfall can affect storm behavior, air quality and a variety of other weather factors.

NASA satellite Aqua captures formation of Tropical Storm Maliksi near the Philippines

Aqua captured an image of Tropical Storm Maliksi on June 8 that showed the circulation center over open waters of the Philippine Sea. Bands of thunderstorms circling the center extended over the northern and central Philippines bringing rainfall and gusty winds.
Credits: NASA

Tropical Storm Maliksi formed in the Philippine Sea, off the northeastern coast of the Philippines as NASA’s Aqua satellite passed overhead.

The Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard Aqua captured an image of the storm on June 8 that showed the circulation center over open waters of the Philippine Sea. Bands of thunderstorms circling the center extended over the northern and central Philippines bringing rainfall and gusty winds.

On June 8 at 5 a.m. EDT (0900 UTC), Tropical Storm Maliksi was located near 19.5 degrees north latitude and 127.2 degrees east longitude. That’s about 443 nautical miles northeast of Manila, Philippines. Maliksi was moving north at 12 knots (14 mph/22 kph). Maximum sustained winds were near 40 knots (46 mph/74 kph).

Maliksi is forecast to move to the northeast and parallel the coast of Japan while remaining several hundred miles off shore.