Citation: How a Solar-Hydrogen Economy Could Supply the World’s Energy Needs (2009, August 24) retrieved 18 August 2019 from https://phys.org/news/2009-08-solar-hydrogen-economy-world-energy.html

Unlike many other current hydrogen-powered vehicles, the BMW Hydrogen 7 directly ignites the hydrogen in its internal combustion engine. Image credit: Wikimedia Commons; work by User:Mattes.

(PhysOrg.com) — As the world’s oil supply dries up a little more every day, the question of what will replace oil and other fossil fuels is becoming increasingly urgent. According to the World Coal Institute, at the present rate of consumption coal will run out in 130 years, natural gas in 60 years, and oil in 42 years. Around the world, researchers are investigating alternative energy technologies with encouraging progress, but the question remains: which source or sources will prove most efficient and sustainable 30, 50, or 100 years from now?

Wind: Abbott explains that wind is ultimately a diluted form of solar power, since the sun heats the ground and creates massive convection currents. He shows, however, that wind power is economically uncompetitive with solar power in all locations except cold regions with poor sunlight. Further, a typical 1.5-MW wind turbine requires 20 gallons of lubricating oil every 5 years, which would become unsustainable within a few decades.

Despite its advantages, hydrogen fuel technology still faces challenges. For instance, the electrodes used in water electrolysis are currently coated with platinum, which is not a sustainable resource, and researchers are investigating other materials. Transporting hydrogen is another issue: a recent study has shown that it is more economical to deliver hydrogen to refueling stations by truck than to perform electrolysis on site.
Another hurdle is storage. In terms of sustainability, Abbott suggests that the most straightforward approach is to liquefy the hydrogen. Although liquefying hydrogen carries an additional energy cost, Abbott argues that the scenario should not be mistaken for a zero-sum game, as is the case with fossil fuels. Since the sun supplies a virtually unlimited amount of energy, the solution is to factor in the non-recurring cost of extra solar collectors to provide the energy for liquefaction. His calculations show that the cost of a solar collector farm used to produce hydrogen is still lower than that of a nuclear station of equivalent power.

Overall, Abbott’s message is that there exists a single technology that can supply the world’s energy needs in a clean, sustainable way: solar-hydrogen. The difference in his approach compared to other analyses, he explains, is his long-term perspective. While nuclear power is often cited as the economically favorable technology in the short term, Abbott argues that the long-term return on nuclear power is virtually zero due to its limited lifetime, while solar-hydrogen power can theoretically last us the next one billion years.

“The biggest challenge is escaping from the economic effects of vendor lock-in, where large investments in nuclear and traditional energy sources keep us ‘locked-in’ to feeding monsters that will bring us down an economic black hole,” Abbott said. “It’s rather like the play The Little Shop of Horrors, where a man-eating plant is initially fed small amounts, but then its voracious appetite sends it into a downward spiral, swallowing up anyone that gets in its way.”

Of course, Abbott’s analysis is just one approach in the ongoing debate on the advantages and disadvantages of hydrogen.
Among several reviews published in a special issue of the Proceedings of the IEEE in October 2006 is an analysis by Ulf Bossel, which concludes that a hydrogen economy is uncompetitive due to the energy costs of storage, transportation, and so on. Abbott agrees that hydrogen is not an efficient energy storage method, but he points out that energy from the sun is virtually unlimited, and that more solar collectors could make up for the inefficiency of hydrogen technology.

“The Bossel paper did not consider the case of using the sun to generate the hydrogen,” Abbott said. “So, of course, all the inefficiencies added up and hydrogen looked bad compared to fossil fuels. But the point about solar energy is that there is so much of it that you only have to tap 5% of it at an efficiency as tiny as 1% and you already have energy over 5 times the whole world’s present consumption.

“This demonstrates that efficiency is not the issue when you go solar. There is so much solar that all you have to do is invest in the non-recurring cost of more dishes to drive a solar-hydrogen economy at whatever efficiency it happens to sit at. I show in my paper that if you do this you come out cheaper than nuclear and you take up less than 8% of the world’s desert area. … So let’s begin now. What are we waiting for?”

More information: Derek Abbott. “Keeping the energy debate clean: How do we supply the world’s energy needs?” Proceedings of the IEEE. To be published.

© 2009 PhysOrg.com

Abbott calculates that, in order to supply the world’s energy needs, the footprint of such a system under pessimistic assumptions would be equivalent to a plot of land of about 1250 km by 1250 km, about 8% of the land area of the world’s hot deserts. With less pessimistic assumptions, the land area could be reduced to 500 km by 500 km, corresponding to 1.7 billion solar dishes, each 10 meters wide.
At massive volumes, if these Stirling engine dishes could be produced at a cost of $1,000 each, the total world cost would be $1.7 trillion, “which is less than the going rate of a war these days,” Abbott noted. He also believes that further cost savings could come from 30-meter-diameter dishes driving much larger Rankine engines, reducing overhead and maintenance costs.

Ideally, Abbott says, solar farms should be distributed widely throughout the world to avoid geopolitical stresses and minimize transportation costs. Solar farms of one or two square kilometers could be built in deserts in many regions: the Americas, Africa, Australasia, Asia, and the Middle East.

Hydrogen: After connecting these solar farms to the local electricity grid, the electricity could be used to electrolyze water and produce liquid hydrogen to run our vehicles. Abbott suggests that the next step would be to power public transport, such as buses, with liquid hydrogen. Consumers could then buy liquid-hydrogen cars and refuel at public transport depots for a transition period, until existing gasoline stations begin providing liquid hydrogen refueling.

“Governments should begin by setting up sizable solar farms that supplement existing grid electricity and provide enough hydrogen to power buses,” Abbott said. “Enthusiasts will then buy hydrogen cars, retrofit existing cars, and refuel at bus depots. Then things will grow from there. You gotta start somewhere.”

According to Abbott, running vehicles on hydrogen rather than electricity is superior in terms of sustainability. The batteries in electric vehicles consume chemicals and finite resources such as lithium, and release high levels of toxic waste. Vehicles that burn hydrogen, on the other hand, emit only clean water vapor and do not require the unsustainable use of chemicals.
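Abbott’s headline figures are easy to sanity-check. Here is a quick back-of-envelope sketch: the dish count, unit cost, and land area are the numbers quoted above, while the per-dish packing comparison is added arithmetic, not a figure from the paper.

```python
import math

# Sanity check of the dish-farm figures quoted above. Inputs are the
# article's numbers; the packing comparison at the end is added arithmetic.
n_dishes = 1.7e9            # 10-m dishes in the less pessimistic scenario
cost_per_dish = 1_000.0     # assumed mass-production cost, in USD

total_cost = n_dishes * cost_per_dish
print(f"total cost: ${total_cost / 1e12:.1f} trillion")   # $1.7 trillion

land_area_m2 = (500e3) ** 2                 # a 500 km x 500 km plot
plot_per_dish = land_area_m2 / n_dishes     # ground allotted to each dish
dish_aperture = math.pi * (10.0 / 2) ** 2   # face area of one 10-m dish

# About 147 m^2 of ground per ~79 m^2 dish, leaving room for spacing,
# access roads, and maintenance, so the quoted figures are self-consistent.
print(f"{plot_per_dish:.0f} m^2 of land per {dish_aperture:.0f} m^2 dish")
```

The numbers line up: 1.7 billion dishes at $1,000 each is exactly the $1.7 trillion quoted, and the 500 km square allots each dish roughly twice its own aperture in ground area.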
Other advantages of hydrogen vehicles are that today’s gasoline combustion engines can be retrofitted to run on hydrogen, and that the car manufacturing industry already has infrastructure tailored to combustion technology.

“With solar-hydrogen, questions of safe handling are not the issue,” Abbott said. “Industry already uses 50 million tonnes of hydrogen annually, and so storage and handling are well-trodden areas. The BMW company has demonstrated the hydrogen combustion engine in a family-sized car [the BMW Hydrogen 7]. Also, 20% of buses in Berlin use hydrogen combustion.”

At the Stirling Energy Systems SunCatcher dish farm being developed in California, 38-foot-diameter dishes track the sun, and each powers a 25-kW Stirling cycle generator. Image credit: Stirling Energy Systems.

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

For Derek Abbott, Professor of Electrical Engineering at the University of Adelaide in Australia, the answer is clear. In an invited opinion piece to be published in the Proceedings of the IEEE, Abbott argues that a solar-hydrogen economy is more sustainable and offers a vastly higher total power output potential than any other alternative. While he agrees with the current approach of promoting a mix of energy sources during the transition toward a sustainable energy technology, he shows that solar-hydrogen should be the final goal of current energy policy. Eventually, he suggests, this single dominant solution might supply 70% of the world’s energy, with the remaining 30% supplied by a mix of other sources.

“My starting point is as an academic who always thought nuclear was the answer, but who then looked at the figures and came to an inescapable conclusion that solar-hydrogen is the long-term future,” Abbott told PhysOrg.com.
“I did not come at this as a green evangelist. I am a reluctant convert. I deliberately don’t even mention the word CO2 once in my paper, in order to demonstrate that one can justify solar-hydrogen simply on grounds of economic resource viability, without any green agenda.”

In his paper, Abbott begins by providing an overview of the major non-renewable and renewable energy sources. To briefly summarize:

Nuclear fission: While nuclear fission power plants may at first seem to have the economic advantage, they have “hidden costs,” the biggest being the roughly $6 billion cost of decommissioning a plant after its 30- to 40-year lifetime. In addition, nuclear fission isn’t sustainable: if fission hypothetically supplied all of the world’s energy needs, there would be only five years’ supply of uranium; and thorium, a suggested substitute, has a recoverable supply of only half the world’s uranium reserves.

Nuclear fusion: Abbott argues that nuclear fusion, which usually involves the fusion of deuterium and tritium, is not actually clean or sustainable. In addition to suffering from the same hidden costs as fission, tritium is considered dangerous enough to require weekly cleaning (as in the case of the International Thermonuclear Experimental Reactor). Moreover, tritium is bred by reacting neutrons with lithium; Abbott estimates that the world’s lithium reserves would last about 100 years if lithium were to supply the world’s energy alongside its continuing use in industrial applications such as batteries, glass, ceramics, and lubricants.

On the left is a vehicle with a hydrogen tank, and on the right a vehicle with a standard gasoline tank. Both tanks have been deliberately punctured and ignited. The top panel shows the two vehicles 3 seconds after ignition: due to the buoyancy of hydrogen, the flame shoots up vertically, whereas gasoline is heavy and spreads beneath the vehicle. The bottom panel shows the two vehicles 60 seconds after ignition.
The hydrogen supply has burned off and the flame has diminished, whereas the gasoline fire has accelerated and totally engulfed the vehicle on the right. Note that hydrogen flames are not intrinsically visible; salt and particles in the ambient air burn off, giving color to the flame seen above. Image credit: University of Miami.

On a related note, Abbott emphasizes that we need to preserve at least some of our remaining oil for uses other than energy, such as lubricating the world’s engines and making dyes, plastics, and synthetic rubber. Likewise, natural gas has industrial applications in making ammonia, glass, and plastics, and coal in making soap, aspirin, tires, and other materials.

Hydroelectric: Hydroelectricity currently provides 20% of the world’s electricity, with room for further growth. However, hydroelectricity could not supply the whole world’s power due to the limited availability of waterways. Dams also often have negative effects on aquatic ecosystems, as well as on tourism, fisheries, and transport. Abbott notes that, like wind, hydroelectric power is ultimately powered by the sun (via rain), a reminder that tapping the sun directly can offer large amounts of power.

Geothermal: Geothermal power, produced by pumping water below the Earth’s surface to create steam that drives electric generators, has been shown to be cost-effective and sustainable, thanks to the large amounts of heat contained in the Earth. The downside, Abbott says, is that much of this energy is diffuse and unrecoverable, so geothermal power could ultimately supply only a fraction of the world’s energy needs. In some cases, geothermal extraction is also known to trigger unwanted seismic activity, and it can bring toxic chemicals, such as hydrogen sulphide, arsenic, and mercury, to the Earth’s surface.

Solar: For Abbott, the unambiguous leader among alternative energy sources is solar power, especially low-tech solar thermal collectors rather than high-tech silicon solar cells.
The world’s energy consumption is currently 15 terawatts (TW, or 15 x 10^12 watts). The total solar power that strikes the Earth is 166 petawatts (PW, or 166 x 10^15 watts). Even with 50% of this energy reflected back into space or absorbed by clouds, the remaining 83 PW is more than 5,000 times our present global energy consumption. In contrast, the renewable sources above (wind, hydroelectric, and geothermal) can supply less than 1% of solar power’s potential. The challenge, of course, is how to harness this large source of renewable, sustainable energy.

“The fact that there simply is 5,000 times more sun power than our consumption needs makes me very optimistic,” Abbott said. “It’s a fantastic resource. We have the ingenuity to send man to the moon, so we definitely have the ingenuity to tap the sun’s resources.”

Despite the improvements in silicon solar cells, Abbott argues that they suffer from low efficiencies and high environmental impact compared with solar thermal collectors. Solar cell manufacturing requires large amounts of water and arsenic; Abbott calculates that manufacturing enough solar cells to power the world would require 6 million tonnes of arsenic, while the world’s supply is estimated at about 1 million tonnes. Even the overall solar cell design is fundamentally flawed, he says: semiconductor reliability drops as temperature increases, yet large temperature differences are required to increase thermodynamic efficiency. For this reason, semiconductor technology is much better suited to low powers and temperatures, as in pocket calculators.

Solar thermal collectors, on the other hand, are specifically designed to operate at high temperatures. The idea is to use a curved mirror to focus sunlight, boiling water to create steam, which is then used to power, for example, a Stirling heat engine that produces electricity.
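The power figures above, including Abbott’s “5% at 1% efficiency” claim, can be verified with a few lines of arithmetic. This sketch uses only the numbers quoted in the article:

```python
# Checking the solar arithmetic quoted above, using the article's numbers.
world_consumption_w = 15e12   # 15 TW: present global energy consumption
solar_incident_w = 166e15     # 166 PW of sunlight striking the Earth

# Half is lost to reflection back into space and absorption by clouds.
usable_w = 0.5 * solar_incident_w
print(usable_w / world_consumption_w)   # ~5533: over 5,000x consumption

# Abbott's claim: tap just 5% of the total at a mere 1% efficiency.
tapped_w = solar_incident_w * 0.05 * 0.01
print(tapped_w / world_consumption_w)   # ~5.5x present consumption
```

Both quoted claims check out: 83 PW is over 5,000 times 15 TW, and 5% of the incident power converted at 1% efficiency is still about 5.5 times present consumption.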
The approach has already been demonstrated in California’s Mojave Desert, where a solar thermal plant that heats oil in a closed cycle (instead of water) has been operating for the past 20 years.
At first, Smith ran into friction problems at the small scale when trying to miniaturize the geared mechanism he had used for larger models. After trial and error, he discovered that he could attach an oval tube to the rotating shaft of a miniature geared motor. To make the train cars, he simply cut “teeth” into the edge of the tube that would poke just above the surface of the layout, then colored the cars with a Sharpie. He made the layout itself entirely out of extremely thin styrene, and covered the mountain with a thin, lumpy layer of Squadron putty to suggest a forest. He made the buildings out of bits of 0.01 x 0.02-inch strips of styrene and colored everything with Sharpies. Smith thought about illuminating some of the buildings, but found that even the tiniest LED or fiber optics would be much too bulky; he still plans to hang a single spotlight overhead to make everything easier to see.

Because the train is so small, even shooting the video was a major challenge. Although Smith used a macro lens, the camera would not focus close enough to get a good view; he had to blow the footage up 400% in post-production to produce a decent-sized (yet grainy) image.

Smith makes no promises that the tiny train layout is precisely to scale. “This new layout was entirely eyeballed as well, with no real intent to precisely represent an N scale layout in Z scale; it’s only N scale by virtue of its overall dimensions,” he explained. “So if anyone measures the passenger cars running on it and finds they’re exactly the right size, I’ll most certainly faint dead away.”

More information: http://jamesriverbranch.net

via: Telegraph

© 2009 PhysOrg.com

David Smith holds the tiny train he created, which has a scale of 1:35,200. Image credit: David Smith.

Smith, a business web developer from New Jersey, has been working on the train model since 2007, spending about $11 on the project.
As Smith explains, the train is a model within a model: it appears in the window of one of the stores in his Z scale (1:220) model railroad town, the fictitious James River Branch on the Reading Railroad. The tiny train is a Z scale model of a 2 x 4-foot N scale (1:160) layout.

(PhysOrg.com) — David Smith, who has been building model railroads since 1965, has always preferred the smaller-scale train models. His most recent project is a five-car train that runs through a scene of mountains, a tunnel, trees, buildings, and a cloud-studded sky, the whole thing measuring just 0.125 x 0.2 inches (0.3 x 0.5 cm). The train’s modeling scale is 1:35,200.

Citation: Tiny Train Model May be World’s Smallest (w/ Video) (2009, October 26) retrieved 18 August 2019 from https://phys.org/news/2009-10-tiny-world-smallest-video.html
The “Joking Computer” was developed by scientists at Aberdeen University for the Science Centre, to show children and young people what computers can do, and to help them explore language and engage with the underlying science. The software was originally written for children with disabilities such as cerebral palsy, to help them develop language skills and have original jokes to tell their family and friends. Dr Judith Masthoff of the Department of Computing Science at Aberdeen University said the software was developed jointly by the universities of Aberdeen, Edinburgh and Dundee.

Dr Masthoff said the Joking Computer is intended to be a fun way of showing children that computers can have a positive impact on people’s lives. If young people engage with the computer, the hope is that some may consider computing science as a career or academic pursuit later.

Chief Executive of the Glasgow Science Centre, Kirk Ramsay, said the exhibit is a good example of how computing power and sophistication can be used for all kinds of applications. The Joking Computer is perfect for achieving the aim of the Centre, which is to use fun and thought-provoking exhibits to promote science and technology.

The Joking Computer project was funded by a £105,000 award from the EPSRC (Engineering and Physical Sciences Research Council) as part of its Partnerships for Public Engagement award scheme. It will be exhibited next year in science workshops and festivals in the UK.

The Joking Computer can generate millions of cracker-style jokes, all based on puns. A few examples:

* Q: What kind of temperature is a son? A: A boy-ling point.
* Q: What do you call a shout with a window? A: A computer scream.
* Q: What do you call a washing machine with a september?
* A: An autumn-atic washer.

(PhysOrg.com) — The Glasgow Science Centre in Scotland is exhibiting a computer that makes up jokes using a database of simple language rules and a large vocabulary.

© 2009 PhysOrg.com

Citation: Glasgow’s joking computer (2009, December 11) retrieved 18 August 2019 from https://phys.org/news/2009-12-glasgow.html
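The substitution trick behind jokes like these can be sketched in a few lines. This is an illustrative toy, not the actual Aberdeen software: the rule list and riddle template below are invented for the example.

```python
import random

# Toy pun generator in the spirit of the Joking Computer described above.
# Each rule pairs two nouns with a punchline whose key word merely sounds
# like part of a longer word (scream ~ screen, autumn ~ autom-atic).
RULES = [
    ("shout", "window", "a computer scream"),
    ("washing machine", "september", "an autumn-atic washer"),
]

def tell_joke(rng: random.Random) -> tuple:
    """Pick a rule and wrap it in the cracker-style riddle template."""
    thing_a, thing_b, punchline = rng.choice(RULES)
    question = f"Q: What do you call a {thing_a} with a {thing_b}?"
    answer = f"A: {punchline[0].upper()}{punchline[1:]}"
    return question, answer

q, a = tell_joke(random.Random(0))
print(q)
print(a)
```

The real system works from a large vocabulary and phonetic-similarity rules rather than a hand-written list, which is how it reaches millions of distinct jokes.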
Male silverback gorilla at the San Francisco Zoo. Image: Wikipedia.

(Phys.org) — In trying to determine when humans and apes diverged, researchers have had to rely on fossil evidence and on the mutation rates at which both groups propagated their species. The problem is that, until now, most of that data came from the analysis of human genetic evidence, which was then applied to both humans and apes; this could have led to errors, since it assumes that mutation rates in apes are the same as in humans. To get around that problem, a team of researchers has gathered genetic data from both chimpanzees and gorillas and has found, as they describe in their paper published in the Proceedings of the National Academy of Sciences, that the two lineages appear to have diverged somewhat earlier than previously thought.

More information: Generation times in wild chimpanzees and gorillas suggest earlier divergence times in great ape and human evolution, PNAS, published online before print August 13, 2012, doi: 10.1073/pnas.1211740109

Abstract: Fossils and molecular data are two independent sources of information that should in principle provide consistent inferences of when evolutionary lineages diverged. Here we use an alternative approach to genetic inference of species split times in recent human and ape evolution that is independent of the fossil record. We first use genetic parentage information on a large number of wild chimpanzees and mountain gorillas to directly infer their average generation times. We then compare these generation time estimates with those of humans and apply recent estimates of the human mutation rate per generation to derive estimates of split times of great apes and humans that are independent of fossil calibration. We date the human–chimpanzee split to at least 7–8 million years and the population split between Neanderthals and modern humans to 400,000–800,000 years ago.
This suggests that molecular divergence dates may not be in conflict with the attribution of 6- to 7-million-year-old fossils to the human lineage and 400,000-year-old fossils to the Neanderthal lineage.

To estimate when two species diverged, researchers look at mutation rates together with the average age at which members of a species give birth (the generation time). The older that average age, the more time it takes for mutations to produce changes: insects that produce offspring within months, for example, can adapt much more quickly to environmental changes than large animals that reproduce many years after they themselves are born. To obtain such data for both chimps and gorillas, the research team worked with many groups in Africa whose studies together covered 105 gorillas and 226 chimps; they also examined fossilized excrement containing DNA. In doing so, they found that the average age of giving birth for female chimps was 25 years. They then divided the number of mutations found by the average age at birth to get the mutation rate per year, and found it to be slower than in humans, which meant that divergence-time estimates based on the human rate were likely off by as much as a million years.

The end result of the team’s research indicates that humans and chimps likely diverged some seven to eight million years ago, while gorillas diverged from the lineage that led to both humans and chimps approximately eight to nineteen million years ago. To put the numbers in perspective, humans and Neanderthals split just half to three quarters of a million years ago. The team suggests their model could also be used to find the divergence points of other species, so long as a genetic record can be obtained.

© 2012 Phys.org

Journal information: Proceedings of the National Academy of Sciences

Citation: New genetic data shows humans and great apes diverged earlier than thought (2012, August 15) retrieved 18 August 2019 from https://phys.org/news/2012-08-genetic-humans-great-apes-diverged.html
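The split-time logic the article describes reduces to one formula. In this sketch, the 25-year chimp generation time is the value the team measured; the divergence and per-generation mutation-rate inputs are generic illustrative placeholders, not the paper’s estimates.

```python
# Illustrative split-time calculation. Neutral differences accumulate at
# mu per site per generation along EACH of the two lineages, so the total
# pairwise divergence is d = 2 * mu * g after g generations.

def split_time_years(divergence_per_site: float,
                     mut_rate_per_gen: float,
                     gen_time_years: float) -> float:
    """Years since two lineages split, from sequence divergence,
    per-generation mutation rate, and generation time."""
    generations_since_split = divergence_per_site / (2.0 * mut_rate_per_gen)
    return generations_since_split * gen_time_years

# Placeholder inputs: 0.8% neutral divergence, 1.2e-8 mutations/site/gen,
# and the 25-year average chimp generation time measured by the team.
t = split_time_years(0.008, 1.2e-8, 25.0)
print(f"{t / 1e6:.1f} million years")   # -> 8.3 million years
```

Note the direction of the correction: a slower ape mutation rate in the denominator pushes the inferred split further back in time, which is exactly the shift toward seven to eight million years reported above.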
A system of interdependent networks is characterized by the structure (dimension) of the individual networks as well as by the coupling between them. In random networks with no spatial restrictions, such as Erdős–Rényi and random regular (RR) networks, the connectivity links (blue lines) have no defined length. In contrast, in spatially embedded networks, nodes connect only to nodes in their geometric neighbourhood, creating a 2D network, modelled here as a square lattice. The red arrows represent directed dependency relations between nodes in different networks, which can be of different types. a, Coupled lattices. b, A coupled lattice–random network. c, Coupled random networks. d, A real-world spatial network coupled with a random network. Models b and d belong to the same universality class. Credit: Nature Physics (2013) doi:10.1038/nphys2727

(Phys.org) — A team of physicists from Israel and the U.S. has found that mathematical modeling suggests modern electrical networks may be more vulnerable to cascading collapse than previously thought. In their paper published in the journal Nature Physics, the researchers show that previous models portraying such networks as robust assumed more randomness than is typically found in real-world systems such as the Internet and electrical grids.

Modern electrical networks such as those used in the United States have demonstrated a vulnerability to cascading collapse, most famously in the widespread outage a decade ago that cut power to millions in the northeastern and midwestern parts of the country (and parts of Canada). That outage was traced to a series of mistakes that occurred after a software bug caused problems in an alarm system (following some trees falling on power lines) in a single control room. After one small part of the network went down, other parts soon followed, resulting in the largest blackout in U.S. history.
Utility representatives insist that technological advances and infrastructure improvements have made today’s network very unlikely to experience such an outage again. In this new effort, the physicists disagree.

The problem with the mathematical models used to predict whether a network will keep working when failures occur, the team writes, is that they describe networks whose nodes are randomly spaced. The Internet and electrical systems, they note, are not randomly spaced, because of population density differences, geography, and other constraints. Removing the randomness yields orderly lattices with more critical nodes, which in turn makes the networks less stable.

A mathematical model cannot fully describe a real-world electrical grid, of course, and it is entirely possible that components already in place counteract the lack of randomness in the current system. On the other hand, such measures may have vulnerabilities of their own. The researchers suggest that utilities add long lines connecting critical nodes, bringing the system as a whole back to a more random state and making it more robust.

Citation: Physicists suggest electrical networks more at risk of cascading failure than thought (2013, August 26) retrieved 18 August 2019 from https://phys.org/news/2013-08-physicists-electrical-networks-cascading-failure.html

More information: The extreme vulnerability of interdependent spatially embedded networks, Nature Physics (2013) DOI: 10.1038/nphys2727

Abstract: Recent studies show that in interdependent networks a very small failure in one network may lead to catastrophic consequences. Above a critical fraction of interdependent nodes, even a single node failure can invoke cascading failures that may abruptly fragment the system, whereas below this critical dependency a failure of a few nodes leads only to a small amount of damage to the system.
So far, research has focused on interdependent random networks without space limitations. However, many real systems, such as power grids and the Internet, are not random but are spatially embedded. Here we analytically and numerically study the stability of interdependent spatially embedded networks modelled as lattice networks. Surprisingly, we find that in lattice systems, in contrast to non-embedded systems, there is no critical dependency and any small fraction of interdependent nodes leads to an abrupt collapse. We show that this extreme vulnerability of very weakly coupled lattices is a consequence of the critical exponent describing the percolation transition of a single lattice.

Journal information: Nature Physics

© 2013 Phys.org
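The “percolation transition of a single lattice” in the abstract refers to ordinary site percolation. The following is a minimal single-lattice experiment only, a sketch for intuition; the paper’s actual analysis concerns coupled interdependent networks, which this does not model.

```python
import random
from collections import deque

# Minimal site-percolation experiment on a 2D square lattice, the spatial
# building block of the coupled models studied in the paper.

def largest_cluster_fraction(n, occupy_prob, rng):
    """Occupy each site of an n x n lattice with probability occupy_prob
    and return (size of largest connected cluster) / n^2."""
    occupied = [[rng.random() < occupy_prob for _ in range(n)]
                for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    best = 0
    for i in range(n):
        for j in range(n):
            if occupied[i][j] and not seen[i][j]:
                # BFS over the 4-neighbour cluster containing (i, j).
                size, queue = 0, deque([(i, j)])
                seen[i][j] = True
                while queue:
                    x, y = queue.popleft()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if (0 <= u < n and 0 <= v < n
                                and occupied[u][v] and not seen[u][v]):
                            seen[u][v] = True
                            queue.append((u, v))
                best = max(best, size)
    return best / (n * n)

rng = random.Random(0)
for p in (0.4, 0.55, 0.7):   # the 2D site-percolation threshold is ~0.593
    print(p, round(largest_cluster_fraction(100, p, rng), 3))
```

Below the threshold only small clusters survive; above it a single giant cluster spans the lattice. The paper’s result is that coupling two such lattices through dependency links makes this transition abrupt for any nonzero coupling.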
Credit: AGU

More information: fallmeeting.agu.org/2013/press … blic-safety-threats/

Southern California, due to its unique geography, is subject both to earthquakes and to fast-moving storms that can cause flash floods. Predicting when either may strike is critical to saving lives. To study earthquake behavior, various research organizations have installed monitors at locations throughout the southern part of the state. GPS is used because it allows ground movement to be tracked. Now, in this new effort, researchers from the three groups have been using the GPS data for a different purpose, and have also been installing accelerometers to measure ground movement more precisely in near real time.

GPS signals, the researchers told the audience, carry an additional data element beyond location information: humidity. The amount of water in the air affects the time it takes a GPS signal to travel to and from a satellite, so high levels, or a sudden increase in moisture, may indicate that a storm is developing. Monitoring these levels in real time can help predict such storms, and thus flash floods. That information, the team hopes, could then be relayed to media outlets, emergency workers, and even ordinary people via their cell phones, giving local residents an opportunity to take proactive measures to ensure their safety. Currently, weather balloons serve this purpose, but they cannot provide data in real time.
The addition of GPS data has already proven its worth: it has predicted several flash floods in the San Diego area. Meanwhile, adding accelerometers allows very slight ground movement (and P-waves) to be monitored, which under the right circumstances can mean an earthquake is likely. Combined with existing GPS data, this makes for a more reliable system, and the resulting information could be disseminated to give people a small amount of warning before a quake strikes, potentially saving many lives. If the system proves as accurate as the researchers believe it could be, it could also become part of the western United States’ tsunami early warning system. To date, seventeen stations in southern California have been upgraded; more will be modified as money becomes available and, presumably, as the upgraded monitors prove their worth.

(Phys.org) — Teams of researchers with the Scripps Institution of Oceanography, NASA’s Jet Propulsion Laboratory (JPL), and NOAA are working together, representatives from each reported at this year’s meeting of the American Geophysical Union, to upgrade monitoring stations in parts of southern California. The aim is to use existing monitoring stations as an early warning system for earthquakes and flash floods.

Citation: Researchers using GPS and accelerometers in base stations to create early warning system in southern California (2014, January 6) retrieved 18 August 2019 from https://phys.org/news/2014-01-gps-accelerometers-base-stations-early.html

© 2014 Phys.org
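The moisture signal described above is usually quantified by converting the GPS “zenith wet delay” into precipitable water vapour. The sketch below uses a typical textbook conversion factor of about 0.15; this is an illustration of the standard relationship, not the teams’ actual processing chain.

```python
# Convert GPS zenith wet delay (ZWD) to precipitable water vapour (PWV).
# PWV ~= Pi * ZWD, where Pi is a dimensionless factor near 0.15 that
# varies slightly with the weighted mean temperature of the atmosphere.

def precipitable_water_mm(zenith_wet_delay_mm: float,
                          pi_factor: float = 0.15) -> float:
    """Estimate column water vapour (mm) from GPS zenith wet delay (mm)."""
    return pi_factor * zenith_wet_delay_mm

# Example: a jump in wet delay from 100 mm to 250 mm maps to the column
# water vapour rising from 15 mm to 37.5 mm, the kind of rapid moistening
# that can precede a flash flood.
print(precipitable_water_mm(100.0), precipitable_water_mm(250.0))
```

Because the wet delay falls out of routine GPS positioning at each station, this moisture estimate comes continuously and essentially for free, which is the advantage over weather balloons noted above.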
A team of researchers has uncovered what they describe as a skull rack, a wall of skulls the length of a basketball court with poles passed through them, in Mexico City. Lizzie Wade, writing for Science, outlines the work being done by a team from Mexico's National Institute of Anthropology and History.

Three years ago, the researchers uncovered what has been described as a skull tower: a circular tower built of skulls held together with mortar. The tower was found to be part of a trophy-rack area that, the researchers more recently found, also includes a skull rack. The rack was approximately 35 meters long and approximately five meters high. It once consisted of wooden posts at either end, with smaller posts spaced every few meters between them and wooden poles stretched horizontally between the posts. It would have looked like a high wooden fence, but it was used instead to hold human skulls: each had holes bored on either side so it could be slid onto a pole, like beads on an abacus. The wood has decayed, of course, but evidence found at the site allowed the team to piece together the original structure, along with the skulls.

The researchers note that such a rack was believed to exist because of writings by Spanish explorers, who called it the tzompantli. The researchers believe both the tower and the rack were part of human sacrifice rituals, carried out to preserve the Aztec way of life. The dig site is located at Tenochtitlan, the center of an Aztec civilization, in a part of what is now modern Mexico City. The Mexica people lived there from approximately the 14th to the 16th centuries. When Spanish explorers arrived, they found the native people and their practices barbaric; they knocked down many of their structures and covered over others.

Credit: 1587 Aztec Manuscript, The Codex Tovar/Wikimedia Commons

As the excavations have continued, the team has been finding clues regarding the makeup of the entire area, which is believed to have been a temple. They now believe that the skull tower has a twin nearby, but have yet to find it. They plan to continue excavating and to further study the skulls and other artifacts to learn more about the culture of the people who lived there, including those who were sacrificed.

More information: Lizzie Wade. Feeding the gods: Hundreds of skulls reveal massive scale of human sacrifice in Aztec capital, Science (2018). DOI: 10.1126/science.aau5404

Journal information: Science

Citation: Skull tower and skull rack offer evidence of Aztec human sacrifice in early Mexico City (2018, June 28) retrieved 18 August 2019 from https://phys.org/news/2018-06-skull-rack-evidence-aztec.html

© 2018 Phys.org
Chef Vipul Gupta, sous chef at the Sheraton, says that winter is the best time to eat, because people don't mind heavy dishes and spices. While summer is all about eating light, winter is just the opposite, and Punjab seems to embody the concept of eating it all up, with its desi ghee, butter and tadka, this season. To celebrate the mood, the Sheraton has organised its Sardiyan Da Swad festival at Baywatch. A meal is priced at Rs 1,750 per person, excluding taxes and alcohol.

On offer is an excellent buffet that will fill you up with some great food. For starters there are Tandoori Paneer Tikka, Bhatti de Aloo and Cholian di Tikki for vegetarians, while others can dig into Paind ha Kukkad, Macchi Amritsar Wali, Lahori Sheekh Kabab and Tandoori Pasliyan. Take your pick from a great spread of main courses: Chilmil Gosht ke Sitare, Hare Saag wali Macchi, Rara Gosht, Amritsari Nalli, Teekha Masala Paneer, Sohare wali Daal, Batale di Masala Arbi, Shalgam Palak and much more. The dishes are rotated over the week, so guests have a wider range to pick from. Choose between a fragrant Punjabi-style biryani, the chole pulao, or a selection of Indian breads to go with the delicious gravies.

Save room for desserts like Gud Wali Kheer, Peeran di Niyaz and, of course, Gajar ka Halwa, and we promise you will not regret tucking in so much. Give weight-watching a rest this season and book yourself a table!
Here's an intriguing exhibition for art lovers in the Capital! Stupid Eye, created by artist Vipul Amar and psychologist Harsheen K Arora, brings together two very different disciplines: psychology and photography. Stupid Eye depicts the process of photographing one's true self as a technique to delve deeper into the human psyche and bring out the real you. The Vipul Amar Studio of Photographic Arts and psychologist Harsheen K Arora initiated this project, in which the concept of self-actualization is explored by focusing on one's real versus ideal self.

"Stupid" here traces back to its Latin origin, stupere, meaning to be amazed or stunned: stunned by what your inner eye can make you see. As Jung (1978) stated, 'It requires no art to become stupid; the whole art lies in extracting wisdom from stupidity. Stupidity is the mother of the wise, but cleverness never.' The project is based on the premise that with a better understanding of one's true self, one is better equipped to realize one's actual potential. Stupid Eye combines photography with psychological therapeutic techniques to deepen an individual's insight into their self.

Originating in the humanistic school of thought, the real self stands for the way an individual actually is, whereas the ideal self is the person one wants to be. The idea behind Stupid Eye is to bridge the gap between the two and move toward self-actualization: 'the desire to become more and more what one is, to become everything that one is capable of becoming' (Maslow, 2006). Through photography and psychology, the aim was to help each person look underneath and connect with their true self; to capture that true self, as seen by them and as it emerges from their therapy sessions, in a photograph; and to give them that picture of themselves, their own little mirror for life.
The project began in June 2012 with the first meet-up of a group of people in Mumbai, a brilliant first meeting that was meant to last an hour but stretched to six, and that led to the project becoming what it is today. Each member who registered became a part of the Stupid Eye family: it was the group at work, with every member a wheel in the journey of every other member. In that first meeting, the discussion led to and revolved around every member's real versus ideal self. The photo-shoots in Mumbai and Delhi followed, with the members' stories captured beautifully in photographs and in videos of the entire process. All the stories that were shared, shot and captured in time are now ready to be revealed to the world.

WHERE: Triveni Kala Sangam
WHEN: 29 April to 7 May, 11 am to 7 pm
The Capital is going to witness an evening of melodious renditions by santoor maestro and music composer Abhay Rustam Sopori. The India Habitat Centre, in collaboration with Siet and Sakhsi, presents this analytical santoor concert. The concert is part of Sound of Music, an introduction to the musical instruments of India: a workshop on the recognition, cognition and appreciation of instrumental music with eminent, world-acclaimed musicians. The event will also feature a discussion about Indian classical music and the current musical scene with noted musicologists Shubhra Mazumdar, Manjari Sinha, Ravindra Mishra and Pt. Vijay Shankar Mishra.

The show aims to create awareness and bring back the charisma of Indian classical music among educated, urban middle-class youth, who often know very little about the depths and intricacies of Indian classical music or the instruments used in it. It also seeks to give the artistes broader visibility and an opportunity to be acclaimed by the many who have not had an easy chance to enjoy such arts, to make the public aware of the eminence and skill of these artistes, and to ensure that no art lover misses the opportunity to learn the nuances of music from the best in the field.

When: 30 June
Where: Stein Auditorium, India Habitat Centre
Timing: 6.30 pm – 8.30 pm