Seeing through a new LENS allows brain-like navigation in robots
Peer-Reviewed Publication
Last Updated: 10-Nov-2025 06:11 ET (10-Nov-2025 11:11 GMT/UTC)
QUT robotics researchers have developed a new robot navigation system that mimics the neural processes of the human brain while using less than 10 per cent of the energy required by traditional systems.
Excitons, bound pairs of electrons and holes created by light, are key to the optoelectronic behavior of carbon nanotubes (CNTs). However, because excitons are confined to extremely small regions and exist only fleetingly, directly observing their behavior with conventional measurement techniques has been highly challenging.
In this study, we overcame that challenge by using an ultrafast infrared near-field optical microscope that focuses femtosecond infrared laser pulses down to the nanoscale. This advanced approach allowed us to visualize where excitons are generated and decay inside CNTs in real space and real time.
Our observations revealed that nanoscale variations in the local environment, such as subtle lattice distortions within individual CNTs or interactions with neighboring CNTs, can significantly affect exciton generation and relaxation dynamics.
These insights into local exciton dynamics pave the way for precise control of light-matter interactions at the nanoscale, offering new opportunities for the development of advanced optoelectronic devices and quantum technologies based on carbon nanotube platforms.
Researchers generated a strong immune response to HIV with a single vaccine dose by adding two powerful adjuvants to the vaccine. This strategy could lead to single-dose vaccines for infectious diseases such as HIV and SARS-CoV-2.
Researchers at the University of Massachusetts Amherst have pushed forward the development of computer vision with new, silicon-based hardware that can both capture and process visual data in the analog domain. Their work, described in the journal Nature Communications, could ultimately benefit large-scale, data-intensive and latency-sensitive computer vision tasks.
Regulator-approved AI models used in eye care vary widely in the evidence they provide for clinical performance and lack transparency about their training data, including details of gender, age and ethnicity, according to a new review led by researchers at UCL (University College London) and Moorfields Eye Hospital.