A, Collection of the DNA sequences obtained from high-throughput synthesis. The sequences were classified into easy-to-synthesize (blue) or difficult-to-synthesize (red). B, Graphical representations of DNA sequences: repeat, GC content, information entropy and other types of features. Key features were identified from these sequence features by machine learning methods. C, The XGBoost algorithm utilized to build the classification model and calculate the S-index. D, Methods used to interpret the model. The feature contributions were quantified according to the global importance scores and local SHAP explanations. E, Application of the S-index on a specific chromosome. The heatmap indicates the synthesis difficulties for the different fragments, which range from difficult (red) to easy (blue). The white sequences indicate the unanalyzed chromosome sequence.

Chinese researchers develop an XGBoost model that predicts the synthesis difficulty of fragments for designer chromosomes

Artificial genome synthesis has broad prospects in medical research and industrial strain engineering. From the synthesis of the artificial life form JCVI-syn1.0 by Craig Venter's team in 2010, to the rewriting and synthesis of the prokaryotic E. coli genome, to the Sc2.0 project's synthesis of the yeast genome, researchers are constantly advancing the depth and breadth of genome design and synthesis. However, certain gene segments remain difficult to synthesize, which can ultimately prevent the completion of artificial chromosomes and limits the application and adoption of artificial genome synthesis technology. To address this issue, the team of Professor Yingjin Yuan at Tianjin University in China has developed an interpretable machine learning framework (Figure 1) that can predict and quantify the difficulty of chromosome synthesis, guiding the optimization of chromosome design and synthesis processes.

The research team designed an efficient feature selection method by analyzing data from many known chromosome fragments and identified six key sequence features that cover energy and structural information involved in DNA chemical synthesis and assembly. Based on these results, the team developed an eXtreme Gradient Boosting (XGBoost) model that can effectively predict the synthesis difficulty of chromosome fragments. The model achieved an AUC (area under the receiver operating characteristic curve) of 0.895 in cross-validation and an AUC of 0.885 on an independent test set obtained in collaboration with a DNA synthesis company, demonstrating high accuracy and predictive ability.
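As a rough illustration of this kind of pipeline, the sketch below trains a gradient-boosted classifier on a placeholder feature matrix and reports a cross-validated AUC. The data, hyperparameters, and six-feature layout are assumptions for demonstration, not the paper's actual dataset or settings.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per chromosome fragment, one column per
# sequence feature (GC content, repeat score, entropy, etc.). Placeholder data.
rng = np.random.default_rng(0)
X = rng.random((500, 6))          # 500 fragments, 6 key sequence features
y = rng.integers(0, 2, 500)       # 1 = difficult-to-synthesize, 0 = easy

# Gradient-boosted classifier; hyperparameters are illustrative, not the paper's
clf = xgb.XGBClassifier(n_estimators=300, max_depth=4,
                        learning_rate=0.1, eval_metric="auc")

# 5-fold cross-validated AUC, analogous to the evaluation described above
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```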

The research team proposed a Synthesis difficulty Index (S-index) based on the SHAP algorithm to evaluate and interpret the synthesis difficulty of chromosomes. The study found significant differences in the synthesis difficulty of different chromosomes, and the S-index could quantitatively explain the causes of synthesis difficulty for some gene fragments (Figure 2), providing a basis for chromosome sequence design and synthesis and improving the efficiency and success rate of designer chromosome synthesis. This achievement provides a practical tool for researchers in chromosome engineering and genome rewriting and is expected to provide more comprehensive guidance and support for chromosome design and synthesis.

A, The distribution of DNA sequences with different S-index values for the natural and synthetic chromosomes and genomes. The heatmap shows the S-index for the different sequences, and the colors have the same meaning in B and C. B, The difficulty of synthesizing DNA sequences at different locations within the chromosomes. The black boxes mark the centromeric satellite of Homo sapiens chromosome 22 and the telomeres of synV and synX. C, The S-index for the 45,100-45,200-kb region of M. musculus chr19. D, Force plot for the 45,138-45,140-kb sequence of M. musculus chr19. Features with a positive effect value are highlighted in red, and features with a negative effect value are highlighted in blue. Photo credit: Yan Zheng.
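The definition of the S-index is the paper's own; purely to illustrate the SHAP mechanics it builds on, this hypothetical continuation of the sketch above derives global importances (mean absolute SHAP value per feature) and a local force-plot explanation for one fragment. The feature names are invented placeholders.

```python
import shap

# Hypothetical feature names; the paper's actual six features may differ.
feature_names = ["GC_content", "repeat", "entropy",
                 "melting_temp", "hairpin", "homopolymer"]

clf.fit(X, y)                              # model and data from the sketch above
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)     # per-fragment feature contributions

# Global importance: mean absolute SHAP value of each feature across fragments
for name, imp in sorted(zip(feature_names, abs(shap_values).mean(axis=0)),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.4f}")

# Local explanation for a single fragment, analogous to the force plot in
# Figure 2D: which features push the prediction toward difficult vs. easy
shap.force_plot(explainer.expected_value, shap_values[0], X[0],
                feature_names=feature_names, matplotlib=True)
```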


Photo by Nate Edwards/BYU Photo. (Photo Illustration)

ChatGPT can't ace accounting exams, but experts think it soon will… What does it mean for teaching?

“We’re just seeing the tip of the AI iceberg. What’s coming is going to be exciting or terrifying, depending on your perspective.”

Last month, OpenAI launched its newest AI chatbot product, GPT-4. According to OpenAI, the bot, which uses machine learning to generate natural language text, passed the bar exam with a score in the 90th percentile, passed 13 of 15 AP exams, and got a nearly perfect score on the GRE Verbal test.

Inquiring minds at BYU and 186 other universities wanted to know how OpenAI’s tech would fare on accounting exams. So, they put the original version, ChatGPT, to the test. The researchers say that while it still has work to do in the realm of accounting, it’s a game changer that will reshape how everyone teaches and learns, for the better.

“When this technology first came out, everyone was worried that students could now use it to cheat,” said lead study author David Wood, a BYU professor of accounting. “But opportunities to cheat have always existed. So for us, we’re trying to focus on what we can do with this technology now that we couldn’t do before to improve the teaching process for faculty and the learning process for students. Testing it out was eye-opening.”

Since its debut in November 2022, ChatGPT has become the fastest-growing technology platform ever, reaching 100 million users in under two months. In response to intense debate about how models like ChatGPT should factor into education, Wood decided to recruit as many professors as possible to see how the AI fared against actual university accounting students.

His co-author recruiting pitch on social media exploded: 327 co-authors from 186 educational institutions in 14 countries participated in the research, contributing 25,181 classroom accounting exam questions. They also recruited undergrad BYU students (including Wood’s daughter, Jessica) to feed another 2,268 textbook test bank questions to ChatGPT. The questions covered accounting information systems (AIS), auditing, financial accounting, managerial accounting, and tax, and varied in difficulty and type (true/false, multiple choice, short answer, etc.).

Although ChatGPT’s performance was impressive, the students performed better. Students scored an overall average of 76.7%, compared to ChatGPT’s score of 47.4%. On 11.3% of questions, ChatGPT scored higher than the student average, doing particularly well on AIS and auditing. But the AI bot did worse on tax, financial, and managerial assessments, possibly because ChatGPT struggled with the mathematical processes those subjects require.

When it came to question type, ChatGPT did better on true/false questions (68.7% correct) and multiple-choice questions (59.5%), but struggled with short-answer questions (between 28.7% and 39.1%). In general, higher-order questions were harder for ChatGPT to answer. In fact, sometimes ChatGPT would provide authoritative written descriptions for incorrect answers, or answer the same question differently.

“It’s not perfect; you’re not going to be using it for everything,” said Jessica Wood, currently a freshman at BYU. “Trying to learn solely by using ChatGPT is a fool’s errand.”

The researchers also uncovered some other fascinating trends through the study, including:

  • ChatGPT doesn’t always recognize when it is doing math and makes nonsensical errors such as adding two numbers in a subtraction problem or dividing numbers incorrectly.
  • ChatGPT often provides explanations for its answers, even if they are incorrect. Other times, ChatGPT’s descriptions are accurate, but it will then proceed to select the wrong multiple-choice answer.
  • ChatGPT sometimes makes up facts. For example, when providing a reference, it generates a real-looking reference that is completely fabricated. The work and sometimes the authors do not even exist.

That said, the authors fully expect GPT-4 to improve exponentially on the accounting questions posed in their study and to resolve the issues mentioned above. What they find most promising is how the chatbot can help improve teaching and learning, including the ability to design and test assignments, or perhaps to draft portions of a project.

“It’s an opportunity to reflect on whether we are teaching value-added information or not,” said study coauthor and fellow BYU accounting professor Melissa Larson. “This is a disruption, and we need to assess where we go from here. Of course, I’m still going to have TAs, but this is going to force us to use them in different ways.”

From left to right: researchers Hugues de Riedmatten (group leader), Jelena Rakonjac, Dario Lago and Samuele Grandi in the lab at ICFO. Image credit: ICFO/ D. Lago

Spanish researchers demo long-distance quantum teleportation enabled by multiplexed quantum memories

Quantum teleportation is a technique allowing the transfer of quantum information between two distant quantum objects, a sender and a receiver, using a phenomenon called quantum entanglement as a resource. The unique feature of this process is that the actual information is not transferred by sending quantum bits (qubits) through a communication channel connecting the two parties; instead, the information is destroyed at one location and appears at the other one without physically traveling between the two. This surprising property is enabled by quantum entanglement, accompanied by the transmission of classical bits. 

There is deep interest in quantum teleportation today within the field of quantum communications and quantum networks because it would allow the transfer of quantum bits between network nodes over very long distances, using previously shared entanglement. This would help integrate quantum technologies into current telecommunication networks and extend the ultra-secure communications enabled by these systems to very long distances. In addition, quantum teleportation permits the transfer of quantum information between different kinds of quantum systems, e.g. between light and matter or between different kinds of quantum nodes.

Quantum teleportation was theoretically proposed in the early 1990s, and experimental demonstrations have since been carried out by several groups around the world. While the scientific community has gained extensive experience in performing these experiments, how to teleport information practically, allowing reliable and fast quantum communication over an extended network, remains an open question. It seems clear that such an infrastructure should be compatible with the current telecommunications network. In addition, the quantum teleportation protocol requires a final operation to be applied to the teleported qubit, conditioned on the result of the teleportation measurement (transmitted by classical bits), to transfer the information faithfully and at a higher rate, a feature called active feed-forward. This means that the receiver requires a device known as a quantum memory that can store the qubit without degrading it until the final operation can be implemented. Finally, this quantum memory should be able to operate in a multiplexed fashion to maximize the speed of teleporting information when the sender and the receiver are far apart. To date, no implementation had incorporated these three requirements in the same demonstration.

ICFO researchers Dario Lago-Rivera, Jelena V. Rakonjac, and Samuele Grandi, based in Castelldefels, Barcelona, Spain, and led by ICREA Professor at ICFO Hugues de Riedmatten, have reported achieving long-distance teleportation of quantum information from a photon to a solid-state qubit: a photon stored in a multiplexed quantum memory. The technique involved an active feed-forward scheme which, together with the multimodality of the memory, allowed the teleportation rate to be maximized. The proposed architecture is compatible with telecommunications channels, enabling future integration and scalability for long-distance quantum communication.

How to achieve quantum teleportation 

The team built two experimental setups that, in the jargon of the community, are usually called Alice and Bob. The two setups were connected by 1 km of optical fiber wound on a spool, to emulate a physical distance between the parties.

Three photons were involved in the experiment. In the first setup, Alice, the team used a special crystal to create two entangled photons: the first, at 606 nm, called the signal photon, and the second, called the idler photon, at a wavelength compatible with the telecommunications infrastructure. Once created, “we kept the first 606 nm photon at Alice and stored it in a multiplexed solid-state quantum memory, holding it in the memory for future processing. At the same time, we took the telecom photon created at Alice and sent it through the 1 km of optical fiber to reach the second experimental setup, called Bob,” Dario Lago recalls.

In this second setup, Bob, the scientists had another crystal where they created a third photon, in which they had encoded the quantum bit they wanted to teleport. Once the third photon was created and the second photon had arrived at Bob from Alice, the core of the teleportation experiment could take place.

Teleporting information over 1 km

The second and third photons interfered with each other through what is known as a Bell-state measurement (BSM). The effect of this measurement was to mix the states of the second and third photons. Because the first and second photons were entangled to begin with, i.e. their joint state was highly correlated, the result of the BSM was to transfer the information encoded in the third photon to the first one, stored by Alice in the quantum memory 1 km away. As Dario Lago and Jelena Rakonjac put it, “We are capable of transferring information between two photons that were never in contact before but connected through a third photon that was indeed entangled with the first. The uniqueness of this experiment lies in the fact that we employed a multiplexed quantum memory capable of storing the first photon for long enough such that by the time Alice found out that the interaction had happened, we were still able to process the teleported information as the protocol requires.”

This processing that Dario and Jelena mentioned was the active feed-forward technique mentioned earlier. Depending on the outcome of the BSM, a phase shift was applied to the first photon after storage in the memory. In this way, the same state would always be encoded in the first photon. Without this, half of the teleportation events would have to be discarded. Moreover, the multimodality of the quantum memory allowed them to increase the teleportation rate beyond the limits imposed by the 1 km separation between them without degrading the quality of the teleported qubit. Overall, this resulted in a teleportation rate three times higher than for a single-mode quantum memory, only limited by the speed of the classical hardware.
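To see why the feed-forward step is outcome-dependent, the following minimal statevector sketch (plain numpy, fully idealized: no loss, no memory or fiber physics, and not the ICFO team's code) runs through all four Bell-state measurement outcomes and shows that each one needs its own Pauli correction for the teleported qubit to match the original.

```python
import numpy as np

# Photons 1 and 2 share |Phi+>; photon 3 carries the qubit to teleport.
alpha, beta = 0.6, 0.8j                          # |alpha|^2 + |beta|^2 = 1
psi3 = np.array([alpha, beta])                   # qubit encoded on photon 3
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # entangled photons 1 and 2
state = np.kron(phi_plus, psi3)                  # joint state, bit order (1,2,3)

bell = {  # Bell basis for the BSM on photons 2 and 3
    "Phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "Phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "Psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "Psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
# Feed-forward: the Pauli correction to apply to photon 1 for each BSM outcome
feed_forward = {"Phi+": I, "Phi-": Z, "Psi+": X, "Psi-": Z @ X}

for outcome, b in bell.items():
    # Photon 1's (unnormalized) state conditioned on this BSM outcome
    psi1 = state.reshape(2, 4) @ b.conj()
    psi1 = feed_forward[outcome] @ (psi1 / np.linalg.norm(psi1))
    fidelity = abs(np.vdot(psi3, psi1)) ** 2     # overlap with the input qubit
    print(f"BSM -> {outcome}: fidelity after feed-forward = {fidelity:.3f}")
```

Every outcome prints fidelity 1.000; without the correction, only the Phi+ branch would, which is why half of the teleportation events would otherwise be discarded.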

Scalability and Integration

The precursor of this experiment was one the group carried out in 2021, in which they achieved, for the first time, entanglement of two multimode quantum memories separated by 10 meters, heralded by a photon at the telecommunication wavelength.

As Hugues de Riedmatten emphasizes, “Quantum teleportation will be crucial for enabling high-quality long-distance communication for the future quantum internet. Our goal is to implement quantum teleportation in more and more complex networks, with previously distributed entanglement. The solid-state and multiplexed nature of our quantum nodes as well as their compatibility with the telecom network make them a promising approach to deploying the technology over a long distance in the installed fiber network.”

Further improvements are already being planned. On the one hand, the team is focused on developing and improving the technology to extend the setup to much longer distances while maintaining efficiency and rates. On the other hand, they also aim at studying and using this technique in the transfer of information between different types of quantum nodes, for a future quantum Internet that will be able to distribute and process quantum information between remote parties.

The nonlocal probability increases as the number of particles grows, which differs from previous studies. ©Tohoku University

Tohoku University professor Le proposes a theoretical framework for attaining a higher nonlocal probability

The 2022 Nobel Prize in Physics was awarded to Alain Aspect, John Clauser, and Anton Zeilinger for their work on "quantum nonlocality" in quantum mechanics. Quantum nonlocality is a phenomenon in which connected particles can affect each other instantly, regardless of the distance separating them.

Imagine you owned a pair of gloves. These gloves are a pair and therefore correlated in some way, no matter how far apart they are. One day, you place one of the gloves into your backpack and hop on a flight to travel to another country, while the other glove remains at home. According to quantum nonlocality, if you changed the color of the glove you brought with you, the color of the glove back home would instantaneously change too, despite being separated by a large distance.

Nonlocality violates many of the concepts predicted by classical physics, where particles' properties are predetermined and change occurs only through direct physical interaction or fields propagated at a finite speed. Nonlocality has a wide array of implications for understanding the future of reality, quantum mechanics, and the development of quantum technologies. 

There exist several ways to define and interpret nonlocality. For instance, nonlocality can be demonstrated through the violation of a set of mathematical expressions called the Bell and CHSH inequalities. Meanwhile, Lucien Hardy proposed an alternative interpretation of quantum nonlocality in 1992 when he developed the Hardy paradox.

Suppose there are three quantities A, B, and C, where A is greater than B and B is greater than C. Intuitively, and according to a fundamental mathematical property known as the transitive property (or local hidden variable theories in physics), this would render A greater than C.

However, Hardy noted that there is still room for a situation where C is greater than A. This violates the transitive property, and such violations are possible in the quantum world when particles are entangled with each other. In other words, this is nonlocality.

We can use "rock-paper-scissors" to imagine this. Rock beats scissors and scissors beat paper, so by transitive reasoning rock should beat paper; instead, paper beats rock. Paper beating rock does not align with the transitive property, hence the paradox.

The Hardy nonlocality can be interpreted as a rock-paper-scissors game: while rock beats scissors and scissors beat paper, it is impossible for the rock to beat the paper; instead, the paper beats the rock, which causes a paradox, i.e., nonlocality. ©Tohoku University
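For readers who want to see the paradox in the mathematics, the short sketch below checks Hardy's logic on a standard two-qubit example (an illustrative textbook construction, not the multi-particle scheme of the paper, which reaches higher probabilities). Three joint outcomes are impossible, and local reasoning would then forbid a fourth; quantum mechanics nevertheless assigns that fourth outcome a probability of 1/12, about 8.3%, close to the known two-qubit maximum of roughly 9%.

```python
import numpy as np

# Shared state (|00> + |01> + |10>)/sqrt(3); each party measures either
# Z (computational basis) or X (|+>/|-> basis). Basis order: |00>,|01>,|10>,|11>.
psi = np.array([1, 1, 1, 0]) / np.sqrt(3)
e0, e1 = np.array([1, 0]), np.array([0, 1])                   # Z outcomes
minus = np.array([1, -1]) / np.sqrt(2)                        # X outcome "-"

def prob(a, b):
    """Joint probability that A's qubit is found in state a and B's in b."""
    return abs(np.kron(a, b) @ psi) ** 2

print(prob(e1, e1))        # P(Z_A=1, Z_B=1) = 0: never both 1 in Z
print(prob(minus, e0))     # P(X_A=-, Z_B=0) = 0: X_A=- implies Z_B=1
print(prob(e0, minus))     # P(Z_A=0, X_B=-) = 0: X_B=- implies Z_A=1
# Local reasoning: X_A=- and X_B=- together would force Z_A=Z_B=1,
# which never happens. Yet the "impossible" event has probability 1/12:
print(prob(minus, minus))  # ~0.083, Hardy's nonlocal probability
```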

A recent study, published in the journal Physical Review A, has made interesting revelations about Hardy nonlocality. The study was co-authored by Dr. Le Bin Ho of Tohoku University's Frontier Research Institute for Interdisciplinary Sciences (FRIS) in Japan.

"The Hardy nonlocality has significant implications for understanding fundamental quantum mechanics, and it is vital for strengthening the probability of nonlocality," said Le. "We used quantum computers and methods to investigate the measurement of Hardy nonlocality to improve its probability."

Le and his colleagues did this by proposing a theoretical framework for attaining a higher nonlocal probability. They verified this by using a theoretical model and a quantum simulation.

Despite previous studies showing the opposite, they discovered that nonlocal probability increases as the number of particles grows. This suggests that quantum effects persist even at larger scales, further challenging classical theories of physics.

Le says these findings have important ramifications for understanding quantum mechanics and its potential applications in communications. "Understanding quantum nonlocality can lead to groundbreaking technological advancements, such as the secure transmission of information through quantum communication via nonlocality resources."

An illustration showing how some of Earth’s signature features, such as its abundance of water and its overall oxidized state, could potentially be attributable to interactions between the molecular hydrogen atmospheres and magma oceans on the planetary embryos that comprised Earth’s formative years. Illustration by Edward Young/UCLA and Katherine Cain/Carnegie Institution for Science.

Carnegie, UCLA researchers build a model for Earth’s formation that could inform new methods for detecting signs of life on other worlds

Our planet’s water could have originated from interactions between the hydrogen-rich atmospheres and magma oceans of the planetary embryos that comprised Earth’s formative years, according to new work from Carnegie Science’s Anat Shahar and UCLA’s Edward Young and Hilke Schlichting. Their findings could explain the origins of Earth’s signature features.

For decades, what researchers knew about planet formation was based primarily on our own Solar System. Although there are some active debates about the formation of gas giants like Jupiter and Saturn, it is widely agreed upon that Earth and the other rocky planets accreted from the disk of dust and gas that surrounded our Sun in its youth.

As increasingly larger objects crashed into each other, the baby planetesimals that eventually formed Earth grew both larger and hotter, melting into a vast magma ocean due to the heat of collisions and radioactive elements. Over time, as the planet cooled, the densest material sank inward, separating Earth into three distinct layers—the metallic core, and the rocky, silicate mantle and crust.

However, the explosion of exoplanet research over the past decade informed a new approach to modeling the Earth’s embryonic state.

“Exoplanet discoveries have given us a much greater appreciation of how common it is for just-formed planets to be surrounded by atmospheres that are rich in molecular hydrogen, H2, during their first several million years of growth,” Shahar explained. “Eventually these hydrogen envelopes dissipate, but they leave their fingerprints on the young planet’s composition.”

Using this information, the researchers developed new models for Earth’s formation and evolution to see if our home planet’s distinct chemical traits could be replicated.

Using a newly developed model, the Carnegie and UCLA researchers were able to demonstrate that early in Earth’s existence, interactions between the magma ocean and a molecular hydrogen proto-atmosphere could have given rise to some of Earth’s signature features, such as its abundance of water and its overall oxidized state.  

The researchers used mathematical modeling to explore the exchange of materials between molecular hydrogen atmospheres and magma oceans by looking at 25 different compounds and 18 different types of reactions—complex enough to yield valuable data about Earth’s possible formative history, but simple enough to interpret fully.
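To give a flavor of what such a calculation involves, here is a deliberately tiny, hypothetical version with a single redox reaction and made-up inventories and equilibrium constant; the study's actual network of 25 compounds and 18 reactions, with real thermodynamic data, is far richer.

```python
from scipy.optimize import brentq

# Toy single-reaction stand-in for atmosphere-magma exchange (illustrative
# numbers only, not the paper's model):
#   H2 (atmosphere) + FeO (magma) <-> Fe (metal) + H2O (magma)
K = 5.0                        # assumed equilibrium constant
n_H2, n_FeO = 1.0, 2.0         # assumed initial moles of reactants

def residual(x):
    """Equilibrium condition for x moles of H2 converted to H2O."""
    a_H2, a_FeO = n_H2 - x, n_FeO - x     # remaining reactants
    a_Fe = a_H2O = x                      # products form in a 1:1 ratio
    return a_Fe * a_H2O - K * a_H2 * a_FeO

# Solve for the reaction extent where the residual vanishes
x_eq = brentq(residual, 1e-9, min(n_H2, n_FeO) - 1e-9)
print(f"water produced at equilibrium: {x_eq:.3f} mol (toy numbers)")
```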

Interactions between the magma ocean and the atmosphere in their simulated baby Earth resulted in the movement of large masses of hydrogen into the metallic core, the oxidation of the mantle, and the production of large quantities of water.

Even if all of the rocky material that collided to form the growing planet was completely dry, these interactions between the molecular hydrogen atmosphere and the magma ocean would generate copious amounts of water, the researchers revealed. Other water sources are possible, they say, but not necessary to explain Earth’s current state.

“This is just one possible explanation for our planet’s evolution, but one that would establish an important link between Earth’s formation history and the most common exoplanets that have been discovered orbiting distant stars, which are called Super-Earths and sub-Neptunes,” Shahar concluded.

This work was part of the interdisciplinary, multi-institution AEThER project, initiated and led by Shahar, which seeks to reveal the chemical makeup of the Milky Way galaxy’s most common planets—Super-Earths and sub-Neptunes—and to develop a framework for detecting signatures of life on distant worlds. Funded by the Alfred P. Sloan Foundation, this effort was developed to understand how the formation and evolution of these planets shape their atmospheres. This could—in turn—enable scientists to differentiate true biosignatures, which could only be produced by the presence of life, from atmospheric molecules of non-biological origin.

“Increasingly powerful telescopes are enabling astronomers to understand the compositions of exoplanet atmospheres in never-before-seen detail,” Shahar said. “AEThER’s work will inform their observations with experimental and modeling data that, we hope, will lead to a foolproof method for detecting signs of life on other worlds.”