Rendering of NASA's Plankton, Aerosol, Clouds, ocean Ecosystem (PACE) satellite on orbit.

NASA boosts the future of space data networking through tri-band antennas

NASA’s Near Space Network enables spacecraft exploring the solar system and Earth to send back essential science data for researchers and scientists to investigate and make profound discoveries.

Now, the network has integrated four new global antennas to further support science and exploration missions. In December 2022, antennas in Fairbanks, Alaska; Wallops Island, Virginia; Punta Arenas, Chile; and Svalbard, Norway, went online to provide present and future missions with S-, X-, and Ka-band communications capabilities.

KSAT antenna in Svalbard, Norway, with radome fully installed. Credits: KSAT

These new antennas were created to support missions capturing immense amounts of data. Just as scientists increase their instrument capabilities, NASA also advances its communications systems to enable missions near Earth and in deep space.

This upgrade is bringing unprecedented flexibility to the Near Space Network and will enhance direct-to-Earth communications – the process by which a satellite takes a picture and then sends the image over radio waves to an antenna on Earth. This data is then processed and sent to scientists. The Near Space Network is managed by NASA’s Space Communications and Navigation (SCaN) program office, which oversees the development and enhancement of NASA’s two primary communications networks: the Near Space and Deep Space networks.

The Near Space Network provides missions with communications services through a blend of government-owned and commercial assets. To develop these new antennas, the team worked with commercial partner Kongsberg Satellite Services (KSAT), which built the Chile and Norway antennas, while NASA developed the other two in Virginia and Alaska.

Now operational, the four antennas have been integrated into the network’s service catalog and are available for scheduling by current and upcoming missions.

An example is the upcoming Plankton, Aerosol, Clouds, ocean Ecosystem (PACE) mission, which will help researchers better understand ocean ecosystems and carbon cycling and reveal how aerosols might fuel phytoplankton growth on the ocean’s surface. 

“Missions like the PACE satellite incorporate high-resolution science instruments,” said Damaris Guevara, project lead for the networking upgrade. “These instruments require advanced space communications capabilities, like Ka-band, to get the entirety of their data back to Earth.”

The new antennas also bring new networking capabilities.

All four ground stations are incorporating Delay/Disruption Tolerant Networking (DTN). DTN will empower missions with unparalleled connectivity by storing and forwarding data at points along the network to ensure critical information reaches its destination. DTN is an advanced communications capability being developed and tested by NASA’s SCaN and Space Technology Mission Directorate.
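In outline, DTN's store-and-forward behavior can be sketched in a few lines of Python. This is a toy illustration of the pattern the article describes, not NASA's implementation; the node names and bundle format here are invented:

```python
from collections import deque

class DTNNode:
    """Toy Delay/Disruption Tolerant Networking node: data bundles are
    stored locally until a link (a "contact") to the next hop opens,
    then forwarded -- the store-and-forward pattern described above."""
    def __init__(self, name):
        self.name = name
        self.buffer = deque()  # bundles parked while no link is available

    def receive(self, bundle):
        self.buffer.append(bundle)

    def contact(self, next_hop):
        """Forward every stored bundle while the link to next_hop is up."""
        while self.buffer:
            next_hop.receive(self.buffer.popleft())

# Hypothetical three-hop path: satellite -> relay -> ground station.
sat, relay, ground = DTNNode("sat"), DTNNode("relay"), DTNNode("ground")
sat.receive("science-bundle-1")
sat.contact(relay)     # only the relay is reachable right now; data is parked, not lost
relay.contact(ground)  # a later contact completes delivery
```

The key property is that no contact ever needs an end-to-end path: each hop holds the data until the next link becomes available.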

Additionally, to enhance mission teams’ access to data, the network incorporates cloud-based data storage services. Satellites like PACE will downlink their data to an antenna, and that data will go through the ground station’s high-rate data processors to a cloud-based storage and data access service that will allow mission teams to acquire their data faster and from almost anywhere. This reduces hardware needs and lowers overall storage costs.

Multiple missions will benefit from this new infrastructure and advanced capabilities, including the NASA-Indian Space Research Organization Synthetic Aperture Radar (NISAR) satellite. Launching in 2024, NISAR will measure Earth’s changing ecosystems, dynamic surfaces, ice masses, and more.

With four new antennas around the globe, the Near Space Network is advancing its capabilities to support science and exploration missions that use enhanced instrumentation. Now, missions using the network will be able to send back terabytes of data for processing and discovery.

Spanish researchers' new method enables the determination of the dimensionality of complex networks through hyperbolic geometry

In the study, the similarity and popularity variables are combined to give rise to the hyperbolic geometry of the model.

Reducing redundant information to find simplifying patterns in data sets and complex networks is a scientific challenge in many knowledge fields. Moreover, detecting the dimensionality of the data is still a hard-to-solve problem. A new study presents a method to infer the dimensionality of complex networks through the application of hyperbolic geometry, which captures the complexity of the relational structures of the real world in many diverse domains.

Among the authors of the study are the researchers M. Ángeles Serrano and Marián Boguñá, from the Faculty of Physics and the Institute of Complex Systems of the Universitat de Barcelona (UBICS), and Pedro Almargo, from the Higher Technical School of Engineering of the University of Sevilla. The research study provides a multidimensional hyperbolic model of complex networks that reproduces their connectivity, with an ultra-low and customizable dimensionality for each specific network. This enables better characterization of their structure —e.g. at a community scale— and the improvement of their predictive capability.

The study reveals unexpected regularities, such as the extremely low dimensions of molecular networks associated with biological tissues; the slightly higher dimensionality required by social networks and the Internet; and the discovery that brain connectomes are close to three dimensions in their automatic organization.

Hyperbolic versus Euclidean geometry

The intrinsic geometry of data sets or complex networks is not obvious, which becomes an obstacle in determining the dimensionality of real networks. Another challenge is that the definition of distance has to be established according to their relational and connectivity structure, and this also requires sophisticated models.

Now, the new approach is based on the geometry of complex networks, and more specifically, on the configurational geometric model, or SD model. "This model, which we developed in previous work, describes the structure of complex networks based on fundamental principles", says M. Ángeles Serrano, ICREA researcher at the Department of Condensed Matter Physics of the UB.

“More specifically —she continues—, the model postulates a law of interconnection of the network elements (or nodes) that is gravitational, so nodes that are closer in a similarity space —of spherical geometry in D dimensions— and with more popularity —an extra dimension corresponding to the importance of the node— are more likely to establish connections.”

In the study, the similarity and popularity variables are combined to give rise to the hyperbolic geometry of the model, which emerges as the natural geometry representing the hierarchical architecture of complex networks.
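The gravity-like connection law of the S1/SD family of geometric models is commonly written as p = 1 / (1 + χ^β), where χ grows with the similarity distance and shrinks with the popularity (hidden degree) of the two nodes. The sketch below uses that standard form with made-up parameter values, not ones fitted to any real network:

```python
import math

def connection_prob(d_ij, kappa_i, kappa_j, beta=1.5, mu=0.1):
    """Gravity-like connection law of the S1/SD geometric model:
    nodes that are close in the similarity space (small d_ij) and
    popular (large hidden degrees kappa) connect with high probability.
    beta controls clustering, mu the average degree; values here are
    illustrative assumptions only."""
    chi = d_ij / (mu * kappa_i * kappa_j)
    return 1.0 / (1.0 + chi ** beta)

# Closer pairs and more popular nodes are more likely to link:
near = connection_prob(d_ij=0.5, kappa_i=10, kappa_j=10)
far  = connection_prob(d_ij=5.0, kappa_i=10, kappa_j=10)
hubs = connection_prob(d_ij=5.0, kappa_i=50, kappa_j=50)
assert near > far      # proximity in similarity space favors connection
assert hubs > far      # popularity compensates for distance
```

Similarity and popularity combine into a single effective variable χ, which is exactly what lets the model be recast in hyperbolic coordinates.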

In previous studies, the team had applied the simplest version of the one-dimensional SD model —the S1 model— to explain many typical features of real-world networks: the small-world property (the six degrees of separation), the heterogeneous distributions of the number of neighbors per node, and the high levels of transitive relationships (triangle connections that can be illustrated with the expression my friend's friend is also my friend).

"In addition, the application of statistical inference techniques allows us to obtain real network maps in the hyperbolic plan that are congruent with the established model", she says. "Beyond visualization, these representations have been used in a multitude of tasks, including efficient navigation methods, the detection of self-similarity patterns, the detection of strongly interacting communities of nodes, and the implementation of a network renormalization procedure that reveals hidden symmetries in the multi-scale organization of complex networks and allows the production of network replicas at reduced or enlarged scales”.

Now, the team infers the dimensionality of the hyperbolic space underlying the real networks from properties that relate to the dimension of their geometry. In particular, the work measures the statistics of higher-order cycles (triangles, squares, pentagons) associated with the connections.
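Counting those short cycles is easy to sketch. The brute-force Python below is illustrative only (nothing like the estimator the study would need at scale), counting triangles and 4-cycles in a small toy graph:

```python
from itertools import combinations

def count_triangles(adj):
    """Triangles in an undirected graph given as {node: set(neighbors)}.
    Each triangle is seen once from each of its 3 vertices."""
    t = 0
    for u in adj:
        for v, w in combinations(adj[u], 2):
            if w in adj[v]:
                t += 1
    return t // 3

def count_4cycles(adj):
    """4-cycles ("squares") by brute force over 4-node subsets:
    each subset admits 3 distinct cyclic orderings to check."""
    c = 0
    for a, b, x, y in combinations(adj, 4):
        for p, q, r, s in ((a, b, x, y), (a, x, b, y), (a, b, y, x)):
            if q in adj[p] and r in adj[q] and s in adj[r] and p in adj[s]:
                c += 1
    return c

# Toy graph: a square 0-1-2-3 with the diagonal 0-2 added.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
assert count_triangles(adj) == 2  # triangles 0-1-2 and 0-2-3
assert count_4cycles(adj) == 1    # the square 0-1-2-3
```

The intuition is that the relative abundance of triangles, squares, and pentagons changes with the dimension of the underlying similarity space, which is what makes these statistics usable for inferring dimensionality.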

A methodology applicable to all complex networks

In computer science, the techniques applied typically define a similarity distance between data elements, an approach that involves the construction of graphs that are mapped onto a latent space of Euclidean features.

"Our estimates of the dimensionality of complex networks are well below our estimates based on Euclidean space since hyperbolic space is better suited to represent the hierarchical structure of real complex networks. For example, the Internet only requires D = 7 dimensions to be mapped into the hyperbolic space of our model, whereas this name is multiplied by six and scales to D = 47 in one of the most recent techniques using Euclidean space", says Professor Marián Boguñá.

In addition, techniques for mapping complex data usually assume a latent space with a predetermined number of dimensions, or implement heuristic techniques to find a suitable value. In contrast, the new method is based on a model that does not need the spatial mapping of the network to determine the dimension of its geometry.

In the field of network science, many methodologies use the shortest distances to study the connectivity structure of the network (shortest paths) as a metric space. However, these distances are strongly affected by the small-world property and do not provide a wide range of distance values.

"Our model uses a completely different definition of distance based on an underlying hyperbolic space, and we do not need to map the network. Our methodology applies to any real network or data series with complex structure and with a size that is typically thousands or tens of thousands of nodes but can reach hundreds of thousands in a reasonable computational time", says M. Ángeles Serrano.

What is the real dimensionality of social networks and the Internet?

The dimensionality of social networks and the Internet is higher (between 6 and 9) than that of networks in other domains, according to the study's findings. However, it is still very low —6 to 7 times lower— compared to the values obtained by other methods. This reflects the fact that interactions in these systems are more complex and determined by a greater variety of factors.

On the other hand, friendship-based social networks are at the top of the dimensionality ranking. "This is an unexpected result, since one might think that friendship is a freer type of affective relationship, but our results reflect the fact that homophily in human interactions is determined by a multitude of sociological factors such as age, gender, social class, beliefs, attitudes or interests", says M. Ángeles Serrano.

In the case of the Internet, even though it is a technological network, its greater dimensionality reflects the fact that for an autonomous system, connecting does not mean only accessing the system, as one might think at first. On the contrary, many different factors influence the formation of these connections, and as a consequence, a variety of other relationships may be present (e.g., supplier-client, peer-to-peer, exchange-based peering, etc.).

"What is really surprising, both for social networks and the internet, is that our theoretical framework —which does not use any annotations about connections beyond their existence— can capture this multidimensional reality that is not explicit in our data", concludes the team, which is currently working on constructing hyperbolic multidimensional maps of complex networks that are congruent with the theoretical framework established by the SD model.

Chinese researchers demonstrate that LNS improves the user experience of network interactive services while maintaining high resource utilization

An abstracted massive-client scenario where a single server provides services for millions of clients. These clients send two classes of requests, namely delay-sensitive and delay-insensitive requests. Bursty traffic forms on the server side.

Network interaction has become ubiquitous in the information age and has penetrated every field of our lives, such as cloud gaming, web searching, and autonomous driving, promoting human progress and providing convenience to society. However, the growing number of clients has also caused problems that affect the user experience. Online services may not respond to some users within the expected timeframe, a problem known as high tail latency. In addition, bursty traffic at the server exacerbates this issue.

To solve this problem and improve computer performance, researchers must constantly optimize network stacks. At the same time, the low-entropy cloud (i.e., low interference among workloads and low system jitter) is becoming a new trend, and the server based on the Labeled Network Stack (LNS) is a good example, gaining orders-of-magnitude performance improvements over servers based on traditional network stacks. Thus, it is essential to conduct a quantitative analysis of LNS to reveal its benefits and potential improvements.

Wenli Zhang, a researcher at the State Key Laboratory of Processors, Institute of Computing Technology, and co-authors of this study said: "Although prior experiments have demonstrated that LNS can support millions of clients with low tail latency, compared with mTCP, a typical user-space network stack from academia, and the Linux network stack, the mainstream network stack in industry, a thorough quantitative study is lacking to answer the following two questions:

(i) Where do the low tail latency and the low entropy of LNS mainly come from, compared with mTCP and Linux network stack?

(ii) How much can LNS be further optimized?"

To answer the above questions, the authors propose an analytical method based on queueing theory that simplifies the quantitative study of the tail latency of cloud servers. In the massive-client scenario, Zhang and co-authors establish models characterizing the change of processing speed in different stages for an LNS-based server, an mTCP-based server, and a Linux-based server, with bursty traffic as an example. In addition, the authors derive formulas for the tail latency of the three servers.
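As a much simpler stand-in for the paper's tandem-queue models, the closed-form tail latency of a single M/M/1 queue already shows the flavor of such an analytical method: tail latency falls out of a formula rather than a measurement. The parameter values below are invented for illustration:

```python
import math

def mm1_tail_latency(lam, mu, t):
    """P(response time > t) for an M/M/1 queue with arrival rate lam and
    service rate mu: the sojourn time is exponential with rate (mu - lam).
    This is a textbook single-queue result, not the paper's tandem model."""
    assert lam < mu, "queue must be stable"
    return math.exp(-(mu - lam) * t)

def mm1_percentile(lam, mu, p=0.99):
    """Latency t such that a fraction p of requests finish within t."""
    return -math.log(1.0 - p) / (mu - lam)

# Doubling the service-rate headroom (mu - lam) halves the p99 latency:
slow = mm1_percentile(lam=80.0, mu=100.0)  # headroom 20 req/s
fast = mm1_percentile(lam=80.0, mu=120.0)  # headroom 40 req/s
assert abs(slow / fast - 2.0) < 1e-9
```

The paper's models chain several such stages (with stage-dependent, bursty service behavior) and derive the overall tail-latency formulas analytically, which is what lets the authors attribute the improvement to specific LNS technologies.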

"Our models 1) reveal that two technologies in LNS, including the fulldatapath prioritized processing and the full-path zero-copy, are primary factors for high performance, with orders of magnitude improvement of tail latency as the latency entropy reduces maximally 5.5 × over the mTCP-based server, and 2) suggest the optimal number of worker threads querying a database, improving the concurrency of the LNSbased server 2.1 × –3.5 × ." Zhang said, "The analytical method can also apply to the modeling of other servers characterized as tandem stage queueing networks."

This work is supported in part by the National Key Research and Development Program of China (2016YFB1000200), and the Key Program of the National Natural Science Foundation of China (61532016).

Article Reference: Hongrui Guo, Wenli Zhang, Zishu Yu, Mingyu Chen, "Queueing-Theoretic Performance Analysis of a Low-Entropy Labeled Network Stack", Intelligent Computing, vol. 2022, Article ID 9863054, 16 pages, 2022. https://doi.org/10.34133/2022/9863054