
(14 September 2020 – KAVLI IPMU) More than a decade ago, the Fermi Gamma-ray Space Telescope detected an excess of high-energy radiation in the center of the Milky Way, convincing some physicists that they were seeing evidence of the annihilation of dark matter particles. A team led by a researcher at the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) has now ruled out that interpretation.

In a paper published recently in the journal Physical Review D, Kavli IPMU Project Researcher Oscar Macias and colleagues at other institutions report that, through an analysis of the Fermi data and an exhaustive series of modelling exercises, the observed gamma rays could not have been produced by what are called weakly interacting massive particles (WIMPs), most popularly theorized as the stuff of dark matter.

An artist’s interpretation of the Milky Way shows the “boxy” distribution of stars in the Galactic Center. The newly published study finds that this shape leaves very little room for excess radiation from the annihilation of dark matter particles. (courtesy: Oscar Macias)


This representation of data gathered by the Fermi Gamma-ray Space Telescope since its 2008 launch shows an excess of high-energy radiation in the Milky Way’s Galactic Center. Many physicists attributed this to the annihilation of weakly interacting dark matter particles, but the new study excludes this possibility across a range of particle masses. (courtesy: Oscar Macias)

“The crucial point of our recent paper is that our approach covers the wide range of astrophysical background models that have been used to infer the existence of the Galactic Center excess, and goes beyond them. So, using any of our state-of-the-art background models, we find no need for a dark matter component to be included in our model for this sky region. This allows us to impose very stringent constraints on particle dark matter models,” said Macias.

By eliminating these particles, whose annihilation could generate energies of up to 300 giga-electron volts (GeV), the paper’s authors say, they have put the strongest constraints yet on dark matter properties.

“For 40 years or so, the leading candidate for dark matter among particle physicists was a thermal, weakly interacting and weak-scale particle, and this result for the first time rules out that candidate up to very high-mass particles,” said co-author Kevork Abazajian, professor of physics and astronomy at the University of California, Irvine (UCI).

“In many models, this particle ranges from 10 to 1,000 times the mass of a proton, with more massive particles being less attractive theoretically as a dark matter particle,” added co-author Manoj Kaplinghat, also a UCI professor of physics and astronomy. “In this paper, we’re eliminating dark matter candidates over the favored range, which is a huge improvement in the constraints we put on the possibilities that these are representative of dark matter.”
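
For readers outside the field, “thermal” carries a quantitative meaning. In the standard freeze-out picture (a textbook relation, not a result stated in the paper or this release), the measured dark matter abundance essentially fixes the annihilation rate a thermal relic must have:

$$\Omega_\chi h^2 \;\simeq\; \frac{3\times10^{-27}\,\mathrm{cm^3\,s^{-1}}}{\langle\sigma v\rangle} \quad\Longrightarrow\quad \langle\sigma v\rangle \;\simeq\; 3\times10^{-26}\,\mathrm{cm^3\,s^{-1}} \;\;\text{for}\;\; \Omega_\chi h^2 \approx 0.12.$$

It is annihilation at roughly this benchmark rate, over the mass range Kaplinghat describes, that the new analysis excludes.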

Abazajian said that dark matter signals could be crowded out by other astrophysical sources of the excess gamma rays detected by the Fermi space telescope in the Galactic Center, such as star formation, cosmic ray deflection off molecular gas and, most notably, neutron stars and millisecond pulsars.

“We looked at all of the different modelling that goes on in the Galactic Center, including molecular gas, stellar emissions and high-energy electrons that scatter low-energy photons,” said Kavli IPMU’s Macias. “We took over three years to pull all of these new, better models together and examine the emissions, finding that there is little room left for dark matter.”

Macias, who is also a postdoctoral researcher with the GRAPPA Centre at the University of Amsterdam, added that this result would not have been possible without data and software provided by the Fermi Large Area Telescope collaboration.

The group tested all classes of models used in the Galactic Center region for excess emission analyses, and its conclusions remained unchanged. “One would have to craft a diffuse emission model that leaves a big ‘hole’ in them to relax our constraints, and science doesn’t work that way,” Macias said.

Kaplinghat noted that physicists had predicted radiation from dark matter annihilation would appear as a neat spherical or elliptical shape emanating from the Galactic Center, but the gamma ray excess detected by the Fermi space telescope after its June 2008 deployment shows up as a triaxial, bar-like structure.

“If you peer at the Galactic Center, you see that the stars are distributed in a boxy way,” he said. “There’s a disk of stars, and right in the center, there’s a bulge that’s about 10 degrees on the sky, and it’s actually a very specific shape—sort of an asymmetric box—and this shape leaves very little room for additional dark matter.”

Does this research rule out the existence of dark matter in the galaxy? “No,” Kaplinghat said. “Our study constrains the kind of particle that dark matter could be. The multiple lines of evidence for dark matter in the galaxy are robust and unaffected by our work.”

Far from considering the team’s findings to be discouraging, Abazajian said they should encourage physicists to focus on concepts other than the most popular ones.

“There are a lot of alternative dark matter candidates out there,” he said. “The search is going to be more like a fishing expedition where you don’t already know where the fish are.”

This project was made possible via funding from the World Premier International Research Center Initiative (WPI), an initiative of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) to create world-leading research centers in Japan; the National Science Foundation; and the U.S. Department of Energy Office of Science.

Publication

Title: Strong constraints on thermal relic dark matter from Fermi-LAT observations of the Galactic Center
Authors: Kevork N. Abazajian, Shunsaku Horiuchi, Manoj Kaplinghat, Ryan E. Keeley, and Oscar Macias
Journal: Physical Review D
DOI: doi.org/10.1103/PhysRevD.102.043012 (Published August 20, 2020)


Hints of fresh ice in northern hemisphere

(18 September 2020 – JPL) New composite images made from NASA’s Cassini spacecraft are the most detailed global infrared views ever produced of Saturn’s moon Enceladus. And data used to build those images provides strong evidence that the northern hemisphere of the moon has been resurfaced with ice from its interior.

Cassini’s Visible and Infrared Mapping Spectrometer (VIMS) collected light reflected off Saturn, its rings and its ten major icy moons – light that is visible to humans as well as infrared light. VIMS then separated the light into its various wavelengths, information that tells scientists more about the makeup of the material reflecting it.

The VIMS data, combined with detailed images captured by Cassini’s Imaging Science Subsystem, were used to make the new global spectral map of Enceladus.

In these detailed infrared images of Saturn’s icy moon Enceladus, reddish areas indicate fresh ice that has been deposited on the surface. (courtesy: NASA/JPL-Caltech/University of Arizona/LPG/CNRS/University of Nantes/Space Science Institute)

Cassini scientists discovered in 2005 that Enceladus – which looks like a highly reflective, bright white snowball to the naked eye – shoots out enormous plumes of ice grains and vapor from an ocean that lies under the icy crust. The new spectral map shows that infrared signals clearly correlate with that geologic activity, which is easily seen at the south pole. That’s where the so-called “tiger stripe” gashes blast ice and vapor from the interior ocean.

But some of the same infrared features also appear in the northern hemisphere. That tells scientists not only that the northern area is covered with fresh ice but that the same kind of geologic activity – a resurfacing of the landscape – has occurred in both hemispheres. The resurfacing in the north may be due either to icy jets or to a more gradual movement of ice through fractures in the crust, from the subsurface ocean to the surface.

“The infrared shows us that the surface of the south pole is young, which is not a surprise because we knew about the jets that blast icy material there,” said Gabriel Tobie, VIMS scientist with the University of Nantes in France and co-author of the new research published in Icarus.

“Now, thanks to these infrared eyes, you can go back in time and say that one large region in the northern hemisphere appears also young and was probably active not that long ago, in geologic timelines.”

Managed by NASA’s Jet Propulsion Laboratory in Southern California, Cassini was an orbiter that observed Saturn for more than 13 years before exhausting its fuel supply. The mission plunged the spacecraft into the planet’s atmosphere in September 2017, in part to protect Enceladus, whose ocean, likely heated and churned by hydrothermal vents like those on Earth’s ocean floors, may hold conditions suitable for life.

The Cassini-Huygens mission is a cooperative project of NASA, ESA (the European Space Agency) and the Italian Space Agency. JPL, a division of Caltech in Pasadena, manages the mission for NASA’s Science Mission Directorate in Washington. JPL designed, developed and assembled the Cassini orbiter.


Rocket Lab completes final dress rehearsal at Launch Complex 2 ahead of first Electron mission from U.S. soil

(17 September 2020 – Rocket Lab) Rocket Lab has successfully completed a wet dress rehearsal of the Electron vehicle at Rocket Lab Launch Complex 2 (LC-2) at the Mid-Atlantic Regional Spaceport in Wallops Island, Virginia.

With this major milestone complete, the Electron launch vehicle, launch team, and the LC-2 pad systems are now ready for Rocket Lab’s first launch from U.S. soil. The mission is a dedicated launch for the United States Space Force in partnership with the Department of Defense’s Space Test Program and the Space and Missile Systems Center’s Small Launch and Targets Division.

(courtesy: Rocket Lab)

The wet dress rehearsal is a crucial final exercise conducted by the launch team to ensure all systems and procedures are working perfectly ahead of launch day. The Electron launch vehicle was rolled out to the pad, raised vertical and filled with high-grade kerosene and liquid oxygen to verify fueling procedures. The launch team then flowed through the integrated countdown to T-0 to carry out the same operations they will undertake on launch day. Before a launch window can be set, NASA is conducting the final development and certification of its Autonomous Flight Termination System (AFTS) software for the mission. This flight will be the first time an AFTS has been flown from the Mid-Atlantic Regional Spaceport and represents a valuable new capability for the spaceport.

Launch Complex 2 supplements Rocket Lab’s existing site, Launch Complex 1 in New Zealand, from which 14 Electron missions have already launched. The two launch complexes combined can support more than 130 launch opportunities every year to deliver unmatched flexibility for rapid, responsive launch to support a resilient space architecture. Operating two launch complexes in diverse geographic locations provides an unrivalled level of redundancy and assures access to space regardless of disruption to any one launch site.

“Responsive launch is the key to resilience in space and this is what Launch Complex 2 enables,” said Peter Beck, Rocket Lab founder and Chief Executive. “All satellites are vulnerable, be it from accidental or deliberate actions. By operating a proven launch vehicle from two launch sites on opposite sides of the world, Rocket Lab delivers unmatched flexibility and responsiveness for the defense and national security community to quickly replace any disabled satellite. We’re immensely proud to be delivering reliable and flexible launch capability to the U.S. Space Force and the wider defense community as space becomes an increasingly contested domain.”

While the launch team carried out this week’s wet dress rehearsal, construction is nearing completion on the Rocket Lab Integration and Control Facility (ICF) within the Wallops Research Park, adjacent to NASA Wallops Flight Facility Main Base. The ICF houses a launch control center, state-of-the-art payload integration facilities, and a vehicle integration department that enables the processing of multiple Electron vehicles to support multiple launches in rapid succession. The build has been carried out in just a few short months thanks to the tireless support of Virginia Space, Governor Northam, Virginia Secretary of Transportation Shannon Valentine, and Accomack County.


NASA technology enables precision landing without a pilot

(17 September 2020 – NASA) Some of the most interesting places to study in our solar system are found in the most inhospitable environments – but landing on any planetary body is already a risky proposition.

With NASA planning robotic and crewed missions to new locations on the Moon and Mars, avoiding landing on the steep slope of a crater or in a boulder field is critical to helping ensure a safe touchdown for surface exploration of other worlds. To improve landing safety, NASA is developing and testing a suite of precise landing and hazard-avoidance technologies.

A new suite of lunar landing technologies, called Safe and Precise Landing – Integrated Capabilities Evolution (SPLICE), will enable safer and more accurate lunar landings than ever before. Future Moon missions could use NASA’s advanced SPLICE algorithms and sensors to target landing sites that weren’t possible during the Apollo missions, such as regions with hazardous boulders and nearby shadowed craters. SPLICE technologies could also help land humans on Mars. (courtesy: NASA)

A combination of laser sensors, a camera, a high-speed computer, and sophisticated algorithms will give spacecraft the artificial eyes and analytical capability to find a designated landing area, identify potential hazards, and adjust course to the safest touchdown site. The technologies developed under the Safe and Precise Landing – Integrated Capabilities Evolution (SPLICE) project within the Space Technology Mission Directorate’s Game Changing Development program will eventually make it possible for spacecraft to avoid boulders, craters, and other hazards within landing areas half the size of a football field that have already been targeted as relatively safe.

Three of SPLICE’s four main subsystems will have their first integrated test flight on a Blue Origin New Shepard rocket during an upcoming mission. As the rocket’s booster returns to the ground after reaching the boundary between Earth’s atmosphere and space, SPLICE’s terrain relative navigation, navigation Doppler lidar, and descent and landing computer will run onboard the booster. Each will operate in the same way it will when approaching the surface of the Moon.

The fourth major SPLICE component, a hazard detection lidar, will be tested in the future via ground and flight tests.

The New Shepard (NS) booster lands after this vehicle’s fifth flight, on the NS-11 mission of May 2, 2019. (courtesy: Blue Origin)

Following Breadcrumbs

When a site is chosen for exploration, part of the consideration is to ensure enough room for a spacecraft to land. The size of the area, called the landing ellipse, reveals the inexact nature of legacy landing technology. The targeted landing area for Apollo 11 in 1969 was approximately 11 miles by 3 miles, and astronauts piloted the lander. Subsequent robotic missions to Mars were designed for autonomous landings. Viking arrived on the Red Planet in 1976 with a target ellipse of 174 miles by 62 miles.


The Apollo 11 landing ellipse, shown here, was 11 miles by 3 miles. Precision landing technology will reduce landing area drastically, allowing for multiple missions to land in the same region. (courtesy: NASA)

Technology has improved, and subsequent autonomous landing zones decreased in size. In 2012, the Curiosity rover landing ellipse was down to 12 miles by 4 miles.
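
For a sense of how dramatic that shrinkage is, the ellipse areas follow from the usual formula, area = π × semi-major axis × semi-minor axis. A quick back-of-the-envelope comparison using only the dimensions quoted above:

```python
# Back-of-the-envelope landing-ellipse areas from the dimensions in the text.
import math

def ellipse_area_sq_mi(major_mi: float, minor_mi: float) -> float:
    """Area of an ellipse given its full major and minor axes, in miles."""
    return math.pi * (major_mi / 2) * (minor_mi / 2)

viking = ellipse_area_sq_mi(174, 62)     # Viking, 1976
curiosity = ellipse_area_sq_mi(12, 4)    # Curiosity, 2012
print(f"Viking: {viking:,.0f} sq mi")           # ~8,473 sq mi
print(f"Curiosity: {curiosity:,.0f} sq mi")     # ~38 sq mi
print(f"Reduction: {viking / curiosity:.0f}x")  # ~225x
```

By this measure, Curiosity’s targeted area was roughly 225 times smaller than Viking’s.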

Being able to pinpoint a landing site will help future missions target areas for new scientific explorations in locations previously deemed too hazardous for an unpiloted landing. It will also enable advanced supply missions to send cargo and supplies to a single location, rather than spread out over miles.

Each planetary body has its own unique conditions. That’s why “SPLICE is designed to integrate with any spacecraft landing on a planet or moon,” said project manager Ron Sostaric. Based at NASA’s Johnson Space Center in Houston, Sostaric explained the project spans multiple centers across the agency.

“What we’re building is a complete descent and landing system that will work for future Artemis missions to the Moon and can be adapted for Mars,” he said. “Our job is to put the individual components together and make sure that it works as a functioning system.”

Atmospheric conditions might vary, but the process of descent and landing is the same. The SPLICE computer is programmed to activate terrain relative navigation several miles above the ground. The onboard camera photographs the surface, taking up to 10 pictures every second. Those are continuously fed into the computer, which is preloaded with satellite images of the landing field and a database of known landmarks.

Algorithms search the real-time imagery for the known features to determine the spacecraft location and navigate the craft safely to its expected landing point. It’s similar to navigating via landmarks, like buildings, rather than street names.

In the same way, terrain relative navigation identifies where the spacecraft is and sends that information to the guidance and control computer, which is responsible for executing the flight path to the surface. The computer will know approximately when the spacecraft should be nearing its target, almost like laying breadcrumbs and then following them to the final destination.

This process continues until approximately four miles above the surface.
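
As a rough illustration of the landmark-matching step described above, the short Python sketch below locates a downward-looking camera frame inside a preloaded reference map by cross-correlation. This is a toy model, not SPLICE flight code: it assumes the two images already share the same scale, orientation and lighting, and every name and number in it is illustrative.

```python
# Toy terrain-relative-navigation matching step: find where a camera frame
# sits inside a preloaded reference map via zero-mean cross-correlation.
import numpy as np
from scipy.signal import fftconvolve

def locate_frame_in_map(ref_map: np.ndarray, frame: np.ndarray) -> tuple:
    """Return the (row, col) offset in ref_map where frame matches best."""
    f = frame - frame.mean()
    # Convolving with the flipped, zero-mean frame computes cross-correlation.
    score = fftconvolve(ref_map - ref_map.mean(), f[::-1, ::-1], mode="valid")
    return np.unravel_index(np.argmax(score), score.shape)

# Usage: embed a synthetic "photo" in a synthetic map and recover its offset.
rng = np.random.default_rng(0)
terrain = rng.normal(size=(512, 512))        # stand-in for the satellite map
camera = terrain[200:264, 310:374].copy()    # 64x64 frame taken at (200, 310)
print(locate_frame_in_map(terrain, camera))  # -> (200, 310)
```

A flight system repeats a far more robust version of this match for each of the up-to-ten images per second and fuses every fix with the guidance computer’s trajectory estimate.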


NASA’s navigation Doppler lidar instrument consists of a chassis, containing electro-optic and electronic components, and an optical head with three telescopes. (courtesy: NASA)

Laser Navigation

Knowing the exact position of a spacecraft is essential for the calculations needed to plan and execute a powered descent to a precise landing. Midway through the descent, the computer turns on the navigation Doppler lidar, which supplies velocity and range measurements that further refine the precise navigation information coming from terrain relative navigation. Lidar (light detection and ranging) works in much the same way as radar but uses light waves instead of radio waves. Three laser beams, each as narrow as a pencil, are pointed toward the ground. The light from these beams bounces off the surface, reflecting back toward the spacecraft.

The travel time and wavelength of that reflected light are used to calculate how far the craft is from the ground, what direction it’s heading, and how fast it’s moving. These calculations are made 20 times per second for all three laser beams and fed into the guidance computer.
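
Those relations are compact enough to sketch. The Python below is a toy illustration of the textbook math rather than the instrument’s actual processing: range follows from the round-trip travel time of the light, and the full three-dimensional velocity is recovered by solving a small linear system built from the three beam directions and their measured Doppler shifts. The beam geometry and wavelength are assumptions made for the example.

```python
# Toy navigation-Doppler-lidar math: range from round-trip time, and the 3D
# velocity vector from three line-of-sight Doppler shifts.
import numpy as np

C = 299_792_458.0        # speed of light, m/s
WAVELENGTH = 1.55e-6     # assumed near-infrared laser wavelength, m

def beam_range(round_trip_time_s: float) -> float:
    """One beam's distance to the ground (the light travels out and back)."""
    return C * round_trip_time_s / 2.0

def velocity_from_doppler(beam_dirs: np.ndarray, shifts_hz: np.ndarray) -> np.ndarray:
    """Solve for velocity v given shift_i = 2 * dot(v, d_i) / wavelength."""
    los_speeds = shifts_hz * WAVELENGTH / 2.0   # speed along each beam, m/s
    return np.linalg.solve(beam_dirs, los_speeds)

# Usage: three beams canted off straight down, descending at 30 m/s with drift.
dirs = np.array([[ 0.20,  0.00, -0.98],
                 [-0.10,  0.17, -0.98],
                 [-0.10, -0.17, -0.98]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit beam vectors
v_true = np.array([1.0, -0.5, -30.0])                 # m/s (east, north, up)
shifts = 2.0 * dirs @ v_true / WAVELENGTH             # simulated measurements
print(velocity_from_doppler(dirs, shifts))            # ~ [1.0, -0.5, -30.0]
```

Run as written, this recovers the simulated velocity; in flight, each beam’s measurement is refreshed 20 times per second and fed into the guidance computer, as described above.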

Doppler lidar works successfully on Earth. However, Farzin Amzajerdian, the technology’s co-inventor and principal investigator from NASA’s Langley Research Center in Hampton, Virginia, is responsible for addressing the challenges for use in space.


Langley engineer John Savage inspects a section of the navigation Doppler lidar unit after its manufacture from a block of metal. (courtesy: NASA/David C. Bowman)

“There are still some unknowns about how much signal will come from the surface of the Moon and Mars,” he said. If material on the ground is not very reflective, the signal back to the sensors will be weaker. But Amzajerdian is confident the lidar will outperform radar technology because the laser frequency is orders of magnitude greater than that of radio waves, which enables far greater precision and more efficient sensing.

The workhorse responsible for managing all of this data is the descent and landing computer. Navigation data from the sensor systems is fed to onboard algorithms, which calculate new pathways for a precise landing.


SPLICE hardware undergoing preparations for a vacuum chamber test. Three of SPLICE’s four main subsystems will have their first integrated test flight on a Blue Origin New Shepard rocket. (courtesy: NASA)

Computer Powerhouse

The descent and landing computer synchronizes the functions and data management of individual SPLICE components. It must also integrate seamlessly with the other systems on any spacecraft. So, this small computing powerhouse keeps the precision landing technologies from overloading the primary flight computer.

The computational needs identified early on made it clear that existing computers were inadequate. NASA’s high-performance spaceflight computing processor would meet the demand but is still several years from completion. An interim solution was needed to get SPLICE ready for its first suborbital rocket flight test with Blue Origin on its New Shepard rocket. Data from the new computer’s performance will help shape its eventual replacement.

John Carson, the technical integration manager for precision landing, explained that “the surrogate computer has very similar processing technology, which is informing both the future high-speed computer design, as well as future descent and landing computer integration efforts.”

Looking forward, test missions like these will help shape safe landing systems for missions by NASA and commercial providers on the surface of the Moon and other solar system bodies.

“Safely and precisely landing on another world still has many challenges,” said Carson. “There’s no commercial technology yet that you can go out and buy for this. Every future surface mission could use this precision landing capability, so NASA’s meeting that need now. And we’re fostering the transfer and use with our industry partners.”
