Sunday, July 12, 2020

Scientists tap the world's most powerful computers in the race to understand and stop the coronavirus

It takes a tremendous amount of computing power to simulate all the components and behaviors of viruses and cells. Thomas Splettstoesser, CC BY-ND
Jeremy Smith, University of Tennessee
In “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams, the haughty supercomputer Deep Thought is asked whether he can find the answer to the ultimate question concerning life, the universe and everything. He replies that, yes, he can do it, but it’s tricky and he’ll have to think about it. When asked how long it will take him, he replies, “Seven-and-a-half million years. I told you I’d have to think about it.”
Real-life supercomputers are being asked somewhat less expansive questions but tricky ones nonetheless: how to tackle the COVID-19 pandemic. They’re being used in many facets of responding to the disease, including to predict the spread of the virus, to optimize contact tracing, to allocate resources and guide physicians’ decisions, to design vaccines and rapid testing tools and to understand sneezes. And the answers are needed in a rather shorter time frame than Deep Thought was proposing.
The largest number of COVID-19 supercomputing projects involves designing drugs. It’s likely to take several effective drugs to treat the disease. Supercomputers allow researchers to take a rational approach and aim to selectively muzzle proteins that SARS-CoV-2, the virus that causes COVID-19, needs for its life cycle.
The viral genome encodes proteins needed by the virus to infect humans and to replicate. Among these are the infamous spike protein that sniffs out and penetrates its human cellular target, but there are also enzymes and molecular machines that the virus forces its human subjects to produce for it. Finding drugs that can bind to these proteins and stop them from working is a logical way to go.

The Summit supercomputer at Oak Ridge National Laboratory has a peak performance of 200,000 trillion calculations per second – equivalent to about a million laptops. Oak Ridge National Laboratory, U.S. Dept. of Energy, CC BY

I am a molecular biophysicist. My lab, at the Center for Molecular Biophysics at the University of Tennessee and Oak Ridge National Laboratory, uses a supercomputer to discover drugs. We build three-dimensional virtual models of biological molecules like the proteins used by cells and viruses, and simulate how various chemical compounds interact with those proteins. We test thousands of compounds to find the ones that “dock” with a target protein. Those compounds that fit, lock-and-key style, with the protein are potential therapies.
The top-ranked candidates are then tested experimentally to see if they indeed do bind to their targets and, in the case of COVID-19, stop the virus from infecting human cells. The compounds are first tested in cells, then animals, and finally humans. Computational drug discovery with high-performance computing has been important in finding antiviral drugs in the past, such as the anti-HIV drugs that revolutionized AIDS treatment in the 1990s.

World’s most powerful computer

Since the 1990s the power of supercomputers has increased by a factor of a million or so. Summit at Oak Ridge National Laboratory is presently the world’s most powerful supercomputer, and has the combined power of roughly a million laptops. A laptop today has roughly the same power as a supercomputer had 20-30 years ago.
However, in order to gin up speed, supercomputer architectures have become more complicated. They used to consist of single, very powerful chips on which programs would simply run faster. Now they consist of thousands of processors performing massively parallel processing in which many calculations, such as testing the potential of drugs to dock with a pathogen or cell’s proteins, are performed at the same time. Persuading those processors to work together harmoniously is a pain in the neck but means we can quickly try out a lot of chemicals virtually.
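The massively parallel screening idea can be sketched in a few lines of Python. This is a toy illustration only: `dock_score` is a stand-in for a real physics-based docking calculation, and the compound names are invented.

```python
from multiprocessing import Pool

def dock_score(compound):
    # Stand-in for a real docking calculation, which would simulate how
    # well this compound fits into the target protein's binding site.
    # Here: an arbitrary deterministic number so the example runs.
    return sum(ord(c) for c in compound) % 97

def screen(compounds, workers=4, top_n=5):
    # Score every compound in parallel (the "many calculations performed
    # at the same time"), then rank the library by score.
    with Pool(workers) as pool:
        scores = pool.map(dock_score, compounds)
    ranked = sorted(zip(compounds, scores), key=lambda pair: pair[1])
    return [compound for compound, _ in ranked[:top_n]]

if __name__ == "__main__":
    library = [f"compound-{i}" for i in range(1000)]
    print(screen(library))  # the five best-scoring candidates
```

A real screen swaps `dock_score` for a docking engine and the toy library for millions of compounds; the score-then-rank structure stays the same.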
Further, researchers use supercomputers to figure out by simulation the different shapes formed by the target binding sites and then virtually dock compounds to each shape. In my lab, that procedure has produced experimentally validated hits – chemicals that work – for each of 16 protein targets that physician-scientists and biochemists have discovered over the past few years. These targets were selected because finding compounds that dock with them could result in drugs for treating different diseases, including chronic kidney disease, prostate cancer, osteoporosis, diabetes, thrombosis and bacterial infections.

Scientists are using supercomputers to find ways to disable the various proteins – including the infamous spike protein (green protrusions) – produced by SARS-CoV-2, the virus responsible for COVID-19. Thomas Splettstoesser, CC BY-ND

Billions of possibilities

So which chemicals are being tested for COVID-19? A first approach is trying out drugs that already exist for other indications and that we have a pretty good idea are reasonably safe. That’s called “repurposing,” and if it works, regulatory approval will be quick.
But repurposing isn’t necessarily being done in the most rational way. One idea researchers are considering is that drugs that work against protein targets of some other virus, such as the flu, hepatitis or Ebola, will automatically work against COVID-19, even when the SARS-CoV-2 protein targets don’t have the same shape.

ACE2 acts as the docking receptor for the SARS-CoV-2 virus’s spike protein and allows the virus to infect the cell. The Conversation, CC BY-SA

The best approach is to check if repurposed compounds will actually bind to their intended target. To that end, my lab published a preliminary report of a supercomputer-driven docking study of a repurposing compound database in mid-February. The study ranked 8,000 compounds in order of how well they bind to the viral spike protein. This paper triggered the establishment of a high-performance computing consortium against our viral enemy, announced by President Trump in March. Several of our top-ranked compounds are now in clinical trials.
Our own work has now expanded to about 10 targets on SARS-CoV-2, and we’re also looking at human protein targets for disrupting the virus’s attack on human cells. Top-ranked compounds from our calculations are being tested experimentally for activity against the live virus. Several of these have already been found to be active.
Also, we and others are venturing out into the wild world of new drug discovery for COVID-19 – looking for compounds that have never been tried as drugs before. Databases of billions of these compounds exist, all of which could probably be synthesized in principle but most of which have never been made. Billion-compound docking is a tailor-made task for massively parallel supercomputing.

Dawn of the exascale era

Work will be helped by the arrival of the next big machine at Oak Ridge, called Frontier, planned for next year. Frontier should be about 10 times more powerful than Summit. Frontier will herald the “exascale” supercomputing era, meaning machines capable of 1,000,000,000,000,000,000 calculations per second.
Although some fear supercomputers will take over the world, for the time being, at least, they are humanity’s servants, which means that they do what we tell them to. Different scientists have different ideas about how to calculate which drugs work best – some prefer artificial intelligence, for example – so there’s quite a lot of arguing going on.
Hopefully, scientists armed with the most powerful computers in the world will, sooner rather than later, find the drugs needed to tackle COVID-19. If they do, then their answers will be of more immediate benefit, if less philosophically tantalizing, than the answer to the ultimate question provided by Deep Thought, which was, maddeningly, simply 42.
Jeremy Smith, Governor's Chair, Biophysics, University of Tennessee
This article is republished from The Conversation under a Creative Commons license. Read the original article.

Wednesday, July 8, 2020

Scientific fieldwork 'caught in the middle' of US-Mexico border tensions

The political border cuts in two a region rich in biological and cultural diversity. John Moore/Getty Images News via Getty Images
Taylor Edwards, University of Arizona

Imagine you’re a scientist, setting out camera traps to snap pictures of wildlife in a remote area of southern Arizona. You set out with your gear early in the morning, but it took longer than expected to find all the locations with your GPS. Now, on your hike back, it’s really starting to heat up.

You try to stick to the shaded, dry washes, and as you round a bend, you’re surprised to see several people huddled under a scraggly mesquite tree against the side of the steep ravine: Mexican immigrants crossing the border. They look dirty and afraid, but so do you.

“¿Tienes agua?” they timidly ask, and you see their empty plastic water containers.

This fictionalized scenario reflects a composite of real incidents experienced by U.S. and Mexican researchers, including me, on both sides of the border in the course of their fieldwork. While giving aid may be the moral thing to do, there can be consequences. Humanitarian aid workers in Arizona have been arrested for leaving food and water for migrants in similar situations, and such arrests have risen since 2017.

In the course of their fieldwork, researchers can encounter migrants, Border Patrol agents and drug traffickers. Loren Elliott/AFP via Getty Images

The U.S.-Mexico border is a region of significant biological and cultural diversity that draws researchers from a wide variety of disciplines, including geology, biology, environmental sciences, archaeology, hydrology, and cultural and social sciences. It is also an area of humanitarian crisis and contentious politics.

Migrants have always been a part of this area, but dangerous drug cartels and increasing militarization have added additional challenges for those who live and work here. U.S. and Mexican researchers are faced with ethical and logistical challenges in navigating this political landscape. To better understand these complex dynamics, my colleagues and I conducted an anonymous survey among researchers who work in the border region to learn how border politics affect collaboration and researchers’ ability to perform their jobs.

Camera traps meant to take photos of wildlife also capture images of the people traversing this landscape. Myles Traphagen, CC BY-ND

Border fieldwork comes with complications

Our binational, multidisciplinary group of concerned scientists distributed an anonymous, online survey to 807 members of the Next-Generation Sonoran Desert Researchers Network. From this group of academic professionals, college students and employees of nonprofit organizations and federal and state agencies who work in the U.S.-Mexico border region, we received 59 responses. While not yet published in a peer-reviewed journal, a summary of our results can be found on the N-Gen website, and the original data is available online.

Researchers in our pre-pandemic study reported feeling safe for the most part while working in the U.S.-Mexico border region. However, this may reflect the fact that they adjust their work to stay away from risky places.

Respondents noted the importance of knowing individuals and communities where they work. For instance, one U.S.-based researcher told us, “I feel safe in Mexico where I know landowners and they know me. I don’t feel safe in U.S. public lands due to Border Patrol’s extensive presence, their racial profiling ways and guns pulled on me.”

Many respondents reported having encountered situations during fieldwork when they felt their security was threatened; these incidents occurred roughly equally on both sides of the border. Participants did not express safety concerns about migrants themselves, but instead pointed to the militarization and criminal activity associated with the region.

Safety concerns on the Mexico side were primarily due to drug cartels and other criminal activity. Concerns in the U.S. centered on direct intimidation or “uneasy” or threatening encounters with U.S. Border Patrol, private landowners or militias.

As a result of safety concerns, many researchers from both countries reported their organization or employer had placed restrictions on working in the border areas of Mexico. In most cases, this meant limiting access to specific areas or requiring additional paperwork or approval through their institution.

Respondents reported logistical issues “altered or disrupted” their ability to perform fieldwork. These problems ranged from trouble crossing the border to difficulty obtaining necessary paperwork and permissions.

One researcher reported that permit delays for shipping scientific equipment across the border had stalled their research for over a year. More than half of respondents said these issues had increased in frequency or caused greater disruption to their work within the last three years.

Caught in the middle

Unsurprisingly, most researchers surveyed (69%) said they’ve encountered undocumented migrants while conducting fieldwork in the border region, although infrequently.

In situations of contact, migrants asked for assistance, such as food, water or a ride, a little over half of the time. Researchers drew a clear distinction between their willingness to offer food or water versus providing transportation.

Despite concerns about recent prosecutions of humanitarian aid workers in the border region, the threat was not sufficient to stop most respondents from taking action they viewed as moral or ethical.

“I would have pause given legal ramifications,” one person told us, “but I do not think this would change how I would act.” Survey respondents commented that they felt “caught in the middle” of an “impossible situation,” where the fear of prosecution conflicts with their moral imperative to help people in need.

A volunteer collects data as part of an ongoing Borderlands Sister Parks project in Rancho San Bernardino, Sonora, Mexico. Sky Island Alliance, CC BY-ND

Overall our results suggest that research is affected by border policies in myriad ways: Restricted access to areas reduces scientists’ ability to collect the comprehensive data needed for work such as biodiversity inventories.

Restrictions directly affecting the ability of researchers to collaborate over international boundaries can limit creativity and discovery. That can have long-term impacts, such as further eroding countries’ ability to understand each other and to foster meaningful partnerships catalyzed by science, including industrial innovation or ecological sustainability.

Societies have the right to enjoy the benefits of science. This requires that scientists be able to collaborate internationally and to fulfill their functions without discrimination or fear of repression or prosecution.


Taylor Edwards, Associate Staff Scientist, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, June 23, 2020

Self-driving taxis could be a setback for those with different needs – unless companies embrace accessible design now

Wheelchair advocates and taxi drivers protest lack of accessibility and surge pricing in New York City on Tuesday, January 19, 2016. Richard Levine/Corbis via Getty Images
John Lunsford, Cornell University

Autonomous vehicles (AVs), like self-driving taxis, continue to garner media attention as industry and political stakeholders claim that they will improve safety and access to transportation for everyone. But for people who have different mobility needs and rely on human drivers for work beyond the task of driving, the prospect of driverless taxis may not sound like progress. Unless accommodations are built in to autonomous vehicle designs, companies risk undermining transportation access for the very communities this technology is promising to include.

The promise

A January 2020 joint report issued by the National Science and Technology Council and U.S. Department of Transportation paints a bright picture of an autonomous-enabled future. It predicts autonomous vehicles will provide “improved quality of life, access and mobility for all citizens.” Replacing the driver with an autonomous system will create safer transportation by removing the “possibility of human error.”

In addition, synchronizing vehicle movement with distance and traffic patterns would not only result in more efficient service, but safer roadway navigation. These advances should mean fewer cars, less traffic, more economical fuel use and increased vehicle availability.

More than driving

If done right, autonomous vehicles could improve access to transportation for everyone. But by not accounting for the many other kinds of labor a driver performs, current AVs may present problems for people with different needs.

Drivers perform work beyond driving. Justice Ender/Flickr

For older people, those with disabilities and even individuals in emergency situations, the driver bridges the gap between personal capability and vehicle accessibility.

Drivers help people to and from vehicles, as well as into and out of them. Drivers move and store luggage and mobility equipment like wheelchairs and walkers, and navigate emergency situations like cardiac arrest, allergic reaction or drug overdose.

Yet right now asking an AV interface for assistance would be like asking Siri to help you up if you’ve fallen down.

Two unequal systems

In the 1970s and years thereafter, Congress determined that redesigning transportation for accessibility was too costly. Instead they fitted assistive devices to old transportation networks and expected private sector taxi drivers to help. Some did, many didn’t.

Problems of discrimination led to the landmark Americans with Disabilities Act of 1990. The ADA made discrimination based on ability illegal – but access to transportation was still dependent on the driver.

Taxi access is already problematic due to a two-tiered system. mokee81/iStock via Getty Images Plus

Today, cities and companies are still struggling with accessibility. People with different needs remain vulnerable to the whims and prejudices of the driver. Too often people with different needs are denied assistance or transportation altogether.

It was only in 2016, for instance, that Boston’s taxis, Uber and later Lyft began integrating a small number of Wheelchair Accessible Vehicles into their fleets, and other companies, like SilverRide, have emerged to offer specialty services for older people.

But even with these additions, taxi, Uber and Lyft riders still experience cancellations and longer wait times in cities like Washington, D.C., Boston, Chicago, San Francisco and New York.

A 2019 study compared wait times for Wheelchair Accessible Vehicles (WAVs) and non-accessible vehicles in New York City: the wait for an Uber WAV was more than twice as long, and for a Lyft WAV more than five times as long. New York Lawyers for the Public Interest, Still Left Behind whitepaper, CC BY

While specialized vehicles are a valuable step toward accessible transportation, they also mean more cars on the road. A 2017 study found Uber and Lyft are increasing traffic congestion in cities leading to increases in safety risks, transit times and pollution. To add to the traffic problem, the International Transportation Forum predicts that traffic will likely increase even more as autonomous cars occupy the road alongside traditional ones.

The future

AV developers struggle with what accessibility should look like. Some leading AV companies focus on accessibility inside the car. Waymo and Lyft are working to communicate information to passengers with disabilities. Nissan’s Virtual Reality avatars may provide company, comfort and assistance to passengers in need.

Other AV companies approach accessibility by redesigning access. Startup May Mobility’s low speed shuttle can deploy a wheelchair ramp. Tesla’s falcon-wing doors open vertically for easier access, and its Smart Summon feature allows drivers to call their car to fetch them.

In my opinion, vehicle specialization should not be the path forward. A wheelchair ramp in one car and Braille in another will increase cars on the road, decrease availability and increase consumer cost. For AVs to fulfill the promise of accessibility and be environmentally efficient, all cars need to be similarly accessible – even if the mechanisms of accessibility are not always in use. This way AVs can more closely mirror the variety of tasks human drivers currently perform and do it reliably, without discrimination. Standard features could include push button or voice activated motorized doors with sliding ramps, an entry space instead of front seats and interior handrails.

A good place to start is for stakeholders to agree on what accessibility needs must be met and to treat AV developments as pieces of an accessibility solution rather than as separate niche markets racing toward minimum accommodations. The Greenlining Institute, a nonprofit research and community equity organization, suggests that accessibility should account not only for capability but also for financial, cultural, technological, logistical, racial, gender, age, class and geographic considerations. If autonomous vehicles are developed to handle the messiness and complexity taxi drivers currently deal with, society will be one step closer to real accessibility.


John Lunsford, PhD Candidate in Media, Technology and Society, Cornell University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, June 2, 2020

To safely explore the solar system and beyond, spaceships need to go faster – nuclear-powered rockets may be the answer

Over the last 50 years, a lot has changed in rocketry. The fuel that powers spaceflight might finally be changing too. CSA-Printstock/DIgital Vision Vectors via Getty Images
Iain Boyd, University of Colorado Boulder

With dreams of Mars on the minds of both NASA and Elon Musk, long-distance crewed missions through space are coming. But you might be surprised to learn that modern rockets don’t go all that much faster than the rockets of the past.

There are a lot of reasons that a faster spaceship is a better one, and nuclear-powered rockets are a way to do this. They offer many benefits over traditional fuel-burning rockets or modern solar-powered electric rockets, but there have been no U.S. space launches carrying nuclear reactors in the last 40 years.

However, last year the laws regulating nuclear space flights changed and work has already begun on this next generation of rockets.

Why the need for speed?

The first step of a space journey involves the use of launch rockets to get a ship into orbit. These are the large fuel-burning engines people imagine when they think of rocket launches and are not likely to go away in the foreseeable future due to the constraints of gravity.

It is once a ship reaches space that things get interesting. To escape Earth’s gravity and reach deep space destinations, ships need additional acceleration. This is where nuclear systems come into play. If astronauts want to explore anything farther than the Moon and perhaps Mars, they are going to need to be going very, very fast. Space is massive, and everything is far away.

There are two reasons faster rockets are better for long-distance space travel: safety and time.

Astronauts on a trip to Mars would be exposed to very high levels of radiation which can cause serious long-term health problems such as cancer and sterility. Radiation shielding can help, but it is extremely heavy, and the longer the mission, the more shielding is needed. A better way to reduce radiation exposure is to simply get where you are going quicker.

But human safety isn’t the only benefit. As space agencies probe farther out into space, it is important to get data from unmanned missions as soon as possible. It took Voyager-2 12 years just to reach Neptune, where it snapped some incredible photos as it flew by. If Voyager-2 had a faster propulsion system, astronomers could have had those photos and the information they contained years earlier.

Speed is good. But why are nuclear systems faster?

The Saturn V rocket was 363 feet tall and mostly just a gas tank. Mike Jetzer, CC BY-NC-ND

Systems of today

Once a ship has escaped Earth’s gravity, there are three important aspects to consider when comparing any propulsion system:

  • Thrust – how fast a system can accelerate a ship
  • Mass efficiency – how much thrust a system can produce for a given amount of fuel
  • Energy density – how much energy a given amount of fuel can produce

Today, the most common propulsion systems in use are chemical propulsion – that is, regular fuel-burning rockets – and solar-powered electric propulsion systems.

Chemical propulsion systems provide a lot of thrust, but chemical rockets aren’t particularly efficient, and rocket fuel isn’t that energy-dense. The Saturn V rocket that took astronauts to the Moon produced 35 million Newtons of force at liftoff and carried 950,000 gallons of fuel. While most of the fuel was used in getting the rocket into orbit, the limitations are apparent: It takes a lot of heavy fuel to get anywhere.

Electric propulsion systems generate thrust using electricity produced from solar panels. The most common way to do this is to use an electrical field to accelerate ions, such as in the Hall thruster. These devices are commonly used to power satellites and can have more than five times higher mass efficiency than chemical systems. But they produce much less thrust – about three Newtons, or only enough to accelerate a car from 0-60 mph in about two and a half hours. The energy source – the Sun – is essentially infinite but becomes less useful the farther away from the Sun the ship gets.

One of the reasons nuclear-powered rockets are promising is because they offer incredible energy density. The uranium fuel used in nuclear reactors has an energy density that is 4 million times higher than hydrazine, a typical chemical rocket propellant. It is much easier to get a small amount of uranium to space than hundreds of thousands of gallons of fuel.
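That “4 million times” ratio is easy to sanity-check with ballpark energy densities. The two figures below are rough literature values assumed for illustration, not numbers taken from the article:

```python
# Rough check of the "4 million times" energy-density comparison.
uranium_fission = 8.0e13  # J/kg from complete fission of uranium-235 (ballpark)
hydrazine = 2.0e7         # J/kg from hydrazine-class chemical propellant (ballpark)

ratio = uranium_fission / hydrazine
print(f"{ratio:.0e}")  # → 4e+06
```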

So what about thrust and mass efficiency?

The first nuclear thermal rocket was built in 1967 and is seen in the background. In the foreground is the protective casing that would hold the reactor. NASA/Wikipedia

Two options for nuclear

Engineers have designed two main types of nuclear systems for space travel.

The first is called nuclear thermal propulsion. These systems are very powerful and moderately efficient. They use a small nuclear fission reactor – similar to those found in nuclear submarines – to heat a gas, such as hydrogen, and that gas is then accelerated through a rocket nozzle to provide thrust. Engineers from NASA estimate that a mission to Mars powered by nuclear thermal propulsion would be 20%-25% shorter than a trip on a chemical-powered rocket.

Nuclear thermal propulsion systems are more than twice as efficient as chemical propulsion systems – meaning they generate twice as much thrust using the same amount of propellant mass – and can deliver 100,000 Newtons of thrust. That’s enough force to get a car from 0-60 mph in about a quarter of a second.
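Both car analogies – the Hall thruster taking hours and the nuclear thermal engine taking a fraction of a second – follow from the same one-line estimate, t = m·v / F. The ~1,000 kg car mass is our assumption, chosen to match the article’s figures:

```python
m = 1000.0            # assumed car mass, kg
v = 60 * 0.44704      # 60 mph in m/s (~26.8 m/s)

t_hall = m * v / 3.0        # Hall thruster: ~3 newtons of thrust
t_ntp = m * v / 100_000.0   # nuclear thermal: ~100,000 newtons

print(round(t_hall / 3600, 2))  # ≈ 2.48 hours
print(round(t_ntp, 2))          # ≈ 0.27 seconds, about a quarter second
```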

The second nuclear-based rocket system is called nuclear electric propulsion. No nuclear electric systems have been built yet, but the idea is to use a high-power fission reactor to generate electricity that would then power an electrical propulsion system like a Hall thruster. This would be very efficient, about three times better than a nuclear thermal propulsion system. Since the nuclear reactor could create a lot of power, many individual electric thrusters could be operated simultaneously to generate a good amount of thrust.

Nuclear electric systems would be the best choice for extremely long-range missions because they don’t require solar energy, have very high efficiency and can give relatively high thrust. But while nuclear electric rockets are extremely promising, there are still a lot of technical problems to solve before they are put into use.

An artist’s impression of what a nuclear thermal ship built to take humans to Mars could look like. John Frassanito & Associates/Wikipedia

Why aren’t there nuclear powered rockets yet?

Nuclear thermal propulsion systems have been studied since the 1960s but have not yet flown in space.

Regulations first imposed in the U.S. in the 1970s essentially required case-by-case examination and approval of any nuclear space project from multiple government agencies and explicit approval from the president. Along with a lack of funding for nuclear rocket system research, this environment prevented further improvement of nuclear reactors for use in space.

That all changed when the Trump administration issued a presidential memorandum in August 2019. While upholding the need to keep nuclear launches as safe as possible, the new directive allows for nuclear missions with lower amounts of nuclear material to skip the multi-agency approval process. Only the sponsoring agency, like NASA, for example, needs to certify that the mission meets safety recommendations. Larger nuclear missions would go through the same process as before.

Along with this revision of regulations, NASA received US$100 million in the 2019 budget to develop nuclear thermal propulsion. DARPA is also developing a space nuclear thermal propulsion system to enable national security operations beyond Earth orbit.

After 60 years of stagnation, it’s possible a nuclear-powered rocket will be heading to space within a decade. This exciting achievement will usher in a new era of space exploration. People will go to Mars and science experiments will make new discoveries all across our solar system and beyond.


Iain Boyd, Professor of Aerospace Engineering Sciences, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Thursday, May 28, 2020

SpaceX astronaut launch: here's the rocket science it must get right

Gareth Dorrian, University of Birmingham and Ian Whittaker, Nottingham Trent University

Two NASA astronauts, Robert Behnken and Douglas Hurley, will make history by travelling to the International Space Station in a privately funded spacecraft, SpaceX’s Falcon 9 rocket and Crew Dragon capsule. But the launch, which was due to take place on May 27, has been aborted due to bad weather, and will instead take place on May 30 at 3:22 pm EDT.

The astronauts will take off lying on their backs in the seats, and facing in the direction of travel to reduce the stress of high acceleration on their bodies. Once launched from Kennedy Space Centre, the spacecraft will travel out over the Atlantic, turning to travel in a direction that matches the ISS orbit.

The first rocket stage will separate just over two minutes after launch, and the Dragon capsule is then likely to separate from the second stage roughly an hour later and continue on its journey. All being well, the Dragon spacecraft will rendezvous with the ISS about 24 hours after launch.

Read more: SpaceX reaches for milestone in spaceflight – a private company launches astronauts into orbit

Launches and landings are the most critical parts of any space mission. However, SpaceX has conducted many tests, including 27 drops of the parachute landing system. It has also demonstrated an emergency separation of the Dragon capsule from its rocket. In the event of a failed launch, eight engines would lift the capsule containing the astronauts up and away from the rocket, with parachutes eventually helping it land. The Falcon 9 rocket has made 83 successful launches.

Docking and return

The space station has an orbital velocity of 7.7km per second. The Earth’s rotation carries launch sites under the orbital path of the ISS, with each pass providing a “launch window”.
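The quoted 7.7km per second can be checked with a quick back-of-the-envelope calculation. This sketch assumes a circular orbit and an ISS altitude of roughly 420km (a typical value, not stated in the article):

```python
import math

# Standard gravitational parameter of Earth (m^3/s^2) and mean radius (m)
GM_EARTH = 3.986004418e14
R_EARTH = 6.371e6

def circular_orbital_velocity(altitude_m: float) -> float:
    """Speed of a circular orbit at the given altitude above Earth's surface."""
    return math.sqrt(GM_EARTH / (R_EARTH + altitude_m))

v = circular_orbital_velocity(420e3)  # assumed ISS altitude of ~420 km
print(f"{v / 1000:.2f} km/s")  # ~7.66 km/s, matching the quoted 7.7 km/s
```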

ISS orbit. Author provided

To intercept the ISS, the capsule must match the station’s speed, altitude and inclination, and it must do so at the correct time, so that the two spacecraft arrive in close proximity to each other. The difference in velocity between the ISS and the Dragon capsule must then be close to zero at the point where the orbits of the two spacecraft intersect.
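The timing requirement can be illustrated with a toy phasing calculation: a chaser in a lower orbit circles the Earth faster, so it slowly catches up on the station. The altitudes and the 30-degree starting gap below are illustrative assumptions, not mission figures:

```python
import math

GM_EARTH = 3.986004418e14  # Earth's gravitational parameter (m^3/s^2)
R_EARTH = 6.371e6          # Earth's mean radius (m)

def angular_rate(altitude_m: float) -> float:
    """Angular velocity (rad/s) of a circular orbit at the given altitude."""
    r = R_EARTH + altitude_m
    return math.sqrt(GM_EARTH / r**3)

# Hypothetical scenario: chaser in a 300 km phasing orbit, station at 420 km
w_chaser = angular_rate(300e3)
w_station = angular_rate(420e3)

phase_gap = math.radians(30)  # chaser assumed to start 30 degrees behind
catch_up_time = phase_gap / (w_chaser - w_station)
print(f"Time to close a 30 degree gap: {catch_up_time / 3600:.1f} hours")
```

The lower orbit’s higher angular rate closes the gap in a few hours, which is why real rendezvous profiles are built around carefully timed phasing orbits.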

Once these conditions are met, the Dragon capsule must manoeuvre to the ISS docking port, using a series of small control thrusters arranged around the spacecraft. This is due to be done automatically by a computer, but the astronauts can take manual control of the manoeuvre if needed.

As you can see in the figure below, manoeuvring involves “translation control” as indicated by green arrows – moving left/right, up/down, forward/back. The yellow arrows show “attitude control” – rolling clockwise/anti-clockwise, pitching up/down, and yawing left/right.

How to manoeuver a spacecraft. Author provided

This is complicated by Newton’s first law of motion: any object at rest or in motion will continue in that state unless acted upon by an external force. That means any manoeuvre, such as a roll to the right, will continue indefinitely, since there is no air resistance in space to provide an external force, until it is counteracted by firing thrusters in the opposite direction.
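This persistence is easy to see in a minimal simulation. The sketch below is an illustrative toy model, not a flight controller: a single thruster pulse sets a roll rate that stays constant until an equal and opposite pulse cancels it.

```python
def simulate_roll(pulses, dt=1.0, steps=10):
    """Toy roll model: pulses maps a time step to an angular acceleration
    (deg/s^2) applied for that one step. Returns the roll rate history."""
    rate = 0.0
    history = []
    for t in range(steps):
        rate += pulses.get(t, 0.0) * dt  # a thruster pulse changes the rate
        history.append(rate)             # no drag: the rate persists as-is
    return history

# Fire +0.5 deg/s^2 at t=0, then the opposite pulse at t=5
rates = simulate_roll({0: 0.5, 5: -0.5})
print(rates)  # the roll continues at 0.5 deg/s until counteracted, then stops
```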

So now that you have a grasp of orbital manoeuvring, why not have a go yourself? This simulator, provided by SpaceX, allows you to try to pilot the Dragon capsule to the ISS docking port.

The astronauts will return to Earth when a new crew is ready to take their place, or at NASA’s discretion. NASA are already planning the first fully operational flight of Crew Dragon, with four astronauts, although a launch date for that has not yet been announced and will undoubtedly depend on the outcome of this demonstration flight.

New era for spaceflight

The launch puts SpaceX firmly ahead of the other commercial ventures looking to provide crewed space launches. This includes both Boeing’s Starliner, which first launched last year but was uncrewed, and Sierra Nevada’s Dream Chaser, which is planned to be tested with cargo on a trip to the ISS next year.

The ability of the commercial sector to send astronauts to the ISS is an important step toward further human exploration, including establishing a human presence at the Moon, and ultimately, Mars.

Read more: To the moon and beyond 4: What's the point of going back to the moon?

With companies competing, however, it remains an open question whether safety could at some point be compromised to gain a commercial edge. There is no suggestion this has happened so far, but any crewed mission that failed due to a fault stemming from economic pressures would have serious legal ramifications.

In a similar way to modern aircraft legislation, a set of space safety standards and regulations will need to be put in place sooner rather than later. For commercial lunar and beyond missions, we also have to ensure that spacecraft do not contaminate the locations they visit with germs from Earth.

With more nations and companies developing plans for lunar missions, there are obvious advantages in international cooperation and in finding cost-efficient launch methods, not least because commercial spaceflight is less dependent on the whims of elected governments, whose priorities can change completely from one administration to the next.

So for us scientists looking to expand our knowledge of space, it is a very exciting moment.

Gareth Dorrian, Post Doctoral Research Fellow in Space Science, University of Birmingham and Ian Whittaker, Lecturer in Physics, Nottingham Trent University

This article is republished from The Conversation under a Creative Commons license. Read the original article.