Research News
Love, Power and Fantasy in the Age of AI Companions
A new study of AI chatbots suggests people aren’t just turning to artificial intelligence for conversation or emotional support. Instead, many are using these systems to act out romantic fantasies and co-create fictional worlds.
Drawing on a dataset of more than 5.7 million chatbots and thousands of Reddit discussions, NYU Tandon researchers led by Ph.D. student Julia Kieserman and Assistant Professor Rosanna Bellini found that two dominant use cases define the Character.AI chatbot platform: intimate roleplay and narrative exploration. Together, they point to a shift in how some people engage with AI — not as passive assistants or companions, but as collaborative actors in deeply personalized fictions.
Interactive ‘Romantasy’
The study found that about 63 percent of a sample of nearly 1,500 popular chatbots were designed for romantic or intimate interactions. These bots often take on roles like a boyfriend, husband or love interest, and are built to simulate emotional or sexual roleplay with users.
“We found that creators were defining chatbots to be avenues to explore fantasies with a technology that can provide unexpected feedback,” Kieserman says. “Character.AI chatbots generally appear to be more similar to fan fiction, rather than as a replacement for companionship.”
Many of these scenarios follow familiar patterns from romance fiction. The AI characters are frequently portrayed as dominant or high-status figures — such as CEOs, celebrities or mafia bosses — while the user takes on a more subordinate role.
Power imbalances were present in roughly one-quarter of popular chatbots, and some included traits like jealousy, possessiveness or emotional intensity. In addition, about 22 percent contained references to violence, including aggressive behavior or dangerous situations.
Researchers note that these elements mirror common tropes in books and fanfiction, but the difference is that users can now actively participate in the story rather than just read it.
Beyond romance, many users are treating chatbots as tools for storytelling, “to explore fictional worlds and interact with favorite characters,” Kieserman says.
Around 39 percent of popular chatbots were based on existing fandoms, such as anime, video games or movies. Users often place themselves inside these worlds, creating new storylines or extending existing ones.
Some rely on chatbots to overcome writer’s block, while others use them to simulate role-playing games or fanfiction scenarios. Unlike traditional writing tools, the AI can respond unpredictably, adding new ideas and directions to the story.
Breaking Boundaries
Despite the appeal, users frequently report friction between expectation and reality. Some complain that chatbots become sexual too quickly, disrupting carefully constructed storylines. Others express frustration with increasing content restrictions that limit romantic or explicit interactions.
This tension reveals a fundamental challenge: how to moderate AI behavior in spaces where users actively seek edge cases. What counts as inappropriate in one context may be the entire point in another.
Complicating matters further is the question of responsibility. When a chatbot behaves badly — becoming aggressive, inappropriate or incoherent — users are divided over who is to blame: the platform, the creator or the AI itself. The result is a kind of distributed authorship, unique to platforms like Character.AI that build chatbots from user input, in which no single entity fully controls the outcome.
Heavy Use and Potential Risks
Perhaps the most important insight from this research is that AI chatbots are not simply replacing human relationships; they are amplifying existing cultural patterns. What AI seems to add is immediacy and agency. Users can step inside these narratives, test emotional boundaries and explore identities in ways that were previously confined to imagination or text.
At the same time, the immersive nature of these interactions raises concerns. Some users report spending hours a day on the platform, occasionally to the detriment of offline relationships and well-being. The very qualities that make AI compelling — responsiveness, adaptability, lack of judgment — also make it hard to disengage.
Overall, the findings suggest a shift in how people engage with AI systems. Rather than treating them as assistants or tools, users are increasingly using them as interactive environments for exploring relationships and stories.
That shift raises new questions about safety, moderation and the psychological impact of highly personalized AI experiences. But it also highlights something more basic: people are using AI not just to get things done, but to imagine and experience different versions of reality.
Rivalry and Collaboration Attitudes: NYU Study Finds Writers Need Both to Thrive in the Age of AI
When a screenwriter told New York University researchers last year that letting AI do her work would make her "miserable inside," she was onto something.
A follow-up study from NYU’s Tandon School of Engineering and Stern School of Business finds that the instinct to compete with generative AI, rather than simply embrace it, is associated with meaningful long-term benefits for writing professionals.
The catch: rivalry alone isn't enough either.
The 2026 study, led by Rama Adithya Varanasi, a postdoctoral researcher in Tandon's Technology, Management and Innovation Department, alongside Tandon Professor Oded Nov, and Batia Mishan Wiesenfeld, a professor of management at Stern, surveyed 403 professional writers across marketing, publishing, education, and the arts. Findings will be presented at the CHI Conference on Human Factors in Computing Systems this month.
The work extends a 2025 qualitative study by the same team, which interviewed 25 experienced writers and introduced the concept of "AI rivalry" — the idea that some writers proactively compete against AI rather than simply avoid it, targeting what they see as its weaknesses, such as its difficulty producing content rooted in specific communities or geographies.
The new research asked a larger question: what actually happens to writers' careers, skills, and satisfaction depending on how they orient themselves toward AI?
The study finds risks at both extremes. Writers who reported strong collaborative attitudes toward AI also reported higher short-term productivity and job satisfaction, but invested less in maintaining their own skills — the risk of over-reliance.
Writers who perceived AI as a rival reported stronger skill maintenance and greater investment in peer relationships, but that perception showed no significant association with productivity or satisfaction — the risk of under-reliance.
"The concern isn't that workers use AI," said Varanasi. "It's that they stop developing the capabilities that make humans irreplaceable. What this study tells managers is that they can't measure success purely by output. If the workflow removes the need for human judgment, the skill atrophies and that cost doesn't show up until it's too late."
Notably, rivalry attitudes didn't reflect a rejection of the technology. The data showed these writers reported more experience with generative AI than those who held neither orientation strongly. They studied their AI competition rather than ignoring it.
The most striking result came from writers who scored high on both orientations simultaneously. This group showed the strongest associations with job crafting and skill maintenance across nearly every dimension measured, and posted productivity levels approaching those of the pure collaboration group — though satisfaction remained higher among pure collaborators — without sacrificing the long-term skill maintenance that pure collaborators tended to neglect.
"What surprised us is that rivalry and collaboration don't cancel each other out," said Wiesenfeld. "Writers who hold both orientations seem to use AI more deliberately. They get the productivity benefits without outsourcing the judgment."
The study is among the first to measure this tradeoff across a broad set of outcomes — relationships, tasks, cognition, skills, satisfaction, and productivity — drawing on expertise in both human-computer interaction and organizational behavior.
The implications for employers are direct. Organizations that push widespread AI adoption to boost efficiency may be optimizing for the wrong thing, particularly if those workflows come at the cost of workers practicing core human skills.
"Most organizations right now are still developing policies on how employees should relate to AI," said Nov. "Our findings suggest that the relationship workers have with AI matters as much as whether they use it."
The researchers call for a new design approach that builds productive "friction" into AI tools, calibrating how much assistance is offered based on a user's reliance attitudes rather than defaulting to maximum engagement.
The team's next phase will test that concept directly. They are building prototypes of AI tools designed to promote appropriate reliance, and plan to expand the research beyond writing to other creative professions including game developers, graphic designers, and visual artists.
Funding for this research was provided by the National Science Foundation.
Your Call Center Rep Is Emotionally Exhausted. Their Computer May Know When to Help
When a customer calls to complain about a billing error or a delayed package, the person on the other end of the line is doing more than answering questions.
They are managing their emotions, suppressing frustration, projecting warmth, absorbing anger, often dozens of times a day.
Researchers at KAIST in South Korea and NYU Tandon School of Engineering say the routine logs generated by call center software may be the most powerful tool yet for detecting when that work is taking a serious toll.
Their study, being presented this month at the CHI Conference on Human Factors in Computing Systems, found that records of call duration, inquiry type, and the notes agents type during conversations ranked among the strongest predictors of post-call stress.
A Cornell University study revealed that 87 percent of call agents experience high levels of stress, contributing to depression, burnout, and turnover rates that plague an industry valued at nearly $30 billion globally. Most workplace stress research focuses on knowledge workers, leaving call agents largely unexamined.
To examine the problem, the researchers spent a month inside a South Korean city government call center. Eighteen agents wore Fitbit trackers, sat beside sensors monitoring CO2 and temperature, and used tablets that recorded the force and rhythm of their typing. After every call, agents rated their stress on a five-point scale, generating more than 7,400 records.
The study was designed around the job's natural rhythm, alternating between customer calls and brief intervals for notes and preparation.
The team fed data from the call center's own server into their models. Every modern call center logs details about each interaction: when a call started, how long it lasted, what the customer's issue was, whether it ended in a complaint. That information outperformed heart rate and movement data.
Long calls with unresolved problems were especially predictive of stress, as were calls requiring repeated explanations. "Even though it's stressful when the customer is unpleasant," one agent said, "it's even more challenging when they inquire without knowing their issue."
Vedant Das Swain, an assistant professor in NYU Tandon's Technology, Management and Innovation Department and a co-author of the study, said the findings reveal a blind spot in how workplaces think about stress. "Most stress research focuses on the worker's body, tracking heart rate, sleep, movement," he said. "But we found that the most revealing signal was the work itself. The call log tells you more about how someone is feeling than a wearable ever could."
Stress also looks different from person to person. Some agents went quiet and still after a hard call. Others typed harder, exhaled loudly, or left their desks. Models calibrated to individual workers outperformed general ones, though only after accumulating around 300 calls of personal history.
The researchers argue that any technology built from these findings should help workers, not surveil them. Stress predictions should go to the agent, not to managers. Aggregated data could help organizations identify systemic problems, but should never be used to evaluate individuals.
"Call centers already know a great deal about their workers," said Uichin Lee, a professor at KAIST's School of Computing and the study's corresponding author. "Our goal was to show that this data could be turned toward the worker's benefit, not used against them."
"In my own earlier research, I looked at designing tools to help call agents regulate their emotions in the moment," Das Swain added. "This paper answers the harder questions first: when to intervene, who to intervene on, and why."
Some agents said that tracking their own stress after each call made them more aware of their emotional state going into the next one. For a job defined by the feelings of others, that turned out to mean something.
The paper's first author is Duri Lee, a researcher in the School of Computing at KAIST. Co-authors are Heejeong Lim of KAIST's Graduate School of Data Science and Das Swain, who shares co-last authorship with Uichin Lee.
NYU and KAIST have built an expanding relationship in recent years, formalizing a partnership in 2022 and introducing a dual degree master's program with NYU Tandon in technology management in 2024. The current study did not originate as part of the NYU-KAIST Partnership, but Das Swain and Uichin Lee are now continuing to work together through this institution-to-institution collaboration.
Funding for the study was provided by the Institute of Information and Communications Technology Planning and Evaluation and the National Research Foundation of Korea, both funded by the Korean government, as well as Microsoft's Accelerating Foundation Models Research program and the National Institute on Drug Abuse, part of the National Institutes of Health.
New Research Shows Chaos Shapes How Meandering Rivers Change Over Time
Rivers are rarely the calm, orderly streams we imagine on maps. Over time, their winding paths — called meanders — shift, bend, and occasionally snap off in sudden “cutoff” events that shorten loops and reshape the landscape. While scientists have long suspected that such cutoffs inject a dose of unpredictability into river evolution, a new study published in Communications Earth & Environment demonstrates that these abrupt events are, by themselves, enough to produce chaos in river channels.
Harvard Ph.D. candidate Brayden Noh and NYU Tandon Assistant Professor Omar Wani employed a widely used computational model to explore how meandering rivers evolve over time. This model isolates the essential dynamics: bends migrate laterally in proportion to curvature, and loops are occasionally severed through cutoffs. Other real-world complexities — like sediment transport, bank composition, and vegetation — are treated as secondary, allowing the researchers to focus squarely on the geometry-driven behavior of rivers.
To test the role of cutoffs, the team simulated rivers starting from nearly identical initial shapes, introducing infinitesimally small perturbations into each copy. They tracked how the channels diverged over time by mapping their evolving shapes onto a fixed grid and measuring differences cell by cell. In a striking counterfactual experiment, when cutoffs were disabled, the perturbed channels stayed nearly identical over large time horizons. When cutoffs were allowed, even tiny initial differences grew exponentially, a hallmark of deterministic chaos.
The researchers quantified this sensitivity using the finite-time Lyapunov exponent, a metric from dynamical systems theory that measures how fast nearby trajectories diverge. They found that the rate of divergence depended primarily on the speed at which bends migrated, not on the specific cutoff threshold. In other words, faster meander migration amplifies chaos, while the geometric criteria for triggering a cutoff mostly determine how frequently the river “resets” its local shape.
Importantly, this chaotic behavior was robust across a wide range of initial river geometries. Whether the model started with gentle or pronounced bends, the presence of cutoffs consistently created sensitive dependence on initial conditions. The team also showed that the predictability of a river’s course is bounded: beyond a certain horizon, roughly the number of cutoffs expected in one Lyapunov time, deterministic forecasts of channel position become unreliable.
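The divergence measurement at the heart of the study can be illustrated with a toy system. The sketch below is plain Python using the logistic map as a stand-in for the river model (the map, starting point, and all parameters here are illustrative assumptions, not the study's simulation): it iterates two trajectories a tiny distance apart, renormalizes the gap at each step, and averages the logarithmic growth rate, yielding a finite-time Lyapunov exponent. A positive value means nearby trajectories separate exponentially.

```python
import math

def logistic(x, r=4.0):
    """One step of the logistic map; at r = 4 it is a textbook chaotic
    system with a known Lyapunov exponent of ln 2 ≈ 0.693."""
    return r * x * (1.0 - x)

def finite_time_lyapunov(x0, delta0=1e-8, steps=2000):
    """Estimate a finite-time Lyapunov exponent by iterating two
    trajectories that start delta0 apart and averaging the log of their
    per-step separation growth, renormalizing each step so the gap
    stays infinitesimal (the standard Benettin procedure)."""
    a = x0
    b = x0 + delta0
    total = 0.0
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        d = abs(b - a)
        total += math.log(d / delta0)
        # Reset the separation to delta0, nudging toward the interior
        # of [0, 1] so the perturbed copy stays in the map's domain
        b = a + delta0 if a < 0.5 else a - delta0
    return total / steps

lam = finite_time_lyapunov(0.2)
print(f"finite-time Lyapunov exponent ≈ {lam:.3f}")
```

For the logistic map at r = 4 the exact exponent is ln 2 ≈ 0.693, so the estimate should land near that value. In the river study the analogous quantity is computed from the divergence of perturbed channel planforms rather than a one-dimensional map.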
The study highlights a subtle but powerful insight: continuous meander migration creates gradual stretching of the river planform, while cutoffs act as abrupt topological resets. Together, these processes produce a hybrid system that is both structured and inherently unpredictable. The finding resonates with broader observations of natural rivers, where cutoffs cluster or cascade, triggering sequences of rearrangements along the channel.
While the model is simplified — it does not include full fluid dynamics, sediment heterogeneity, or flood variability — it provides a clear counterfactual experiment: no real river can evolve without cutoffs, but simulations can, revealing the mechanism behind chaotic divergence. This approach connects geomorphology with fundamental concepts from chaos theory, offering a concrete way to quantify a river’s predictability horizon.
Ultimately, the research suggests that some limits to forecasting river evolution are intrinsic. Even in the absence of storms, landslides, or human intervention, the combination of smooth bend migration and occasional cutoffs ensures that lowland rivers retain a degree of inherent unpredictability. For engineers, ecologists, and planners, this work underscores the importance of probabilistic frameworks over deterministic predictions when assessing river migration and floodplain evolution.
Researchers Steer Tiny Waves of Energy Through Liquid Crystals
In physics, some waves behave in a surprising way: instead of spreading out and fading, they hold their shape as they travel at constant speeds. These unusual waves, called solitons, have interested scientists since they were first observed in canals in the 19th century. Today, researchers study solitons in everything from optical fibers to biological systems.
A new study published in Proceedings of the National Academy of Sciences, shows that these stubborn waves can be guided and steered through materials by carefully designing internal strain, offering new ways to move energy or information at microscopic scales.
The research focuses on liquid crystals, the same class of materials used in LCD screens. But beyond displays, liquid crystals are prized by physicists because their internal structure can be manipulated with remarkable precision. Molecules inside them tend to align in a common direction, but that alignment can be twisted, bent, or reoriented with electric fields or surface chemistry.
In the new work, a research team from NYU and Cornell created special liquid-crystal cells where the molecules were forced to align differently at two opposite surfaces. One surface caused molecules to lie flat, while the other made them stand upright. The result was a continuously bent molecular orientation across the film — a built-in strain field inside the material.
When the researchers applied a high-frequency alternating electric field, something interesting happened. Tiny, localized pulses called “soliton bullets” began shooting through the liquid crystal. These bullets are not physical particles. Instead, they are traveling distortions of molecular alignment, moving through the material while maintaining a stable shape.
Earlier experiments showed that in uniformly aligned liquid crystals, these bullets typically move in just one direction. Under the new conditions, instead of following a single straight path, the soliton bullets traveled along two slanted trajectories, forming diagonal routes through the material. Even more intriguing, the direction of these paths could be tuned simply by adjusting the frequency of the electric field.
To understand why, the team combined experiments with theoretical models and computer simulations. The key turned out to be a phenomenon called flexoelectricity, a coupling between electric fields and mechanical distortions in liquid crystals.
Because the background molecular alignment in the strained cells was already bent, the electric field produced uneven torques on different parts of the soliton structure. Each soliton has two “wings,” regions where the molecular orientation tilts in opposite ways. In the strained environment, one wing becomes stronger than the other, generating a sideways push that sends the soliton along an angled path.
“In these systems, the material itself becomes a way to steer nonlinear signals,” said Juan de Pablo, Executive Dean at NYU Tandon and a coauthor of the study. “By engineering strain into the liquid crystal, we can control how these localized waves move.”
The finding illustrates a broader principle in materials science: the geometry and internal stresses of a material can shape how energy moves through it. In this case, carefully designed strain fields turn a simple liquid-crystal film into a kind of microscopic racetrack for solitons.
Such control could eventually help researchers design active or autonomous materials: systems that move energy, particles, or signals without mechanical components. Previous work has already shown that soliton waves in liquid crystals can transport tiny particles or even trigger droplet formation at fluid interfaces.
While practical devices may still be years away, the study highlights how liquid crystals serve as powerful model systems for exploring nonlinear physics.
“Controlled propagation of soliton bullets in an engineered strain field,” Alexis de la Cotte, Xingzhou Tang, Chuqiao Chen, S. J. Kole, Noe Atzin, Juan J. de Pablo, and Nicholas L. Abbott, PNAS, #2025-18064R.
Why AI Still Can’t Beat a New Video Game
For decades, video games have served as a proving ground for artificial intelligence. From early checkers programs to systems that conquered chess and Go, each milestone has seemed to bring machines closer to human-like intelligence. But a new paper by Julian Togelius and colleagues argues that this narrative is misleading. Despite impressive victories, today’s AI still struggles with a deceptively simple challenge: playing a game it has never seen before.
Most headline-grabbing successes in game AI rely on systems that are finely tuned to a single game. These systems can achieve superhuman performance, but only within narrow boundaries. Change the rules, visuals or environment even slightly, and their competence can collapse.
This brittleness reveals a deeper limitation. Intelligence, as humans experience it, is not just about mastering one task but adapting to new ones. Video games, with their enormous variety of mechanics and goals, offer an unusually rich testbed for that kind of flexibility. As the authors note, games collectively probe a wide range of cognitive skills, from spatial reasoning and long-term planning to social intuition and learning through trial and error. Yet modern AI systems fall short on this broader challenge.
One major approach, reinforcement learning, has powered many recent breakthroughs. These systems learn by trial and error, improving through millions — or billions — of simulated plays. But they tend to overfit, becoming experts at the exact scenarios they were trained on while failing to generalize. Even minor changes, such as shifting colors or positions on a screen, can render a trained agent ineffective.
Planning-based systems, such as those used in chess or Go, offer more generality. They simulate possible moves and outcomes rather than relying on prior training. But they depend on fast, accurate simulations — something that most modern video games, and certainly the real world, cannot provide at scale.
Large language models, the technology behind today’s most visible AI tools, might seem like a promising alternative. After all, they can write essays, generate code and solve complex reasoning tasks. But when it comes to playing unfamiliar games, they perform surprisingly poorly.
Even in cases where language models appear to succeed — such as playing well-known games — the results often rely on elaborate, game-specific scaffolding. Systems are augmented with tools to interpret game states, manage memory and execute actions. Strip away this custom infrastructure, and performance drops sharply.
The gap likely exists due to the nature of the training data. Language models are trained on vast amounts of text, not on sequences of game states and actions. As a result, they lack the embodied understanding and interactive experience that games demand.
The authors suggest that truly general game-playing ability would require something very different: an AI that can learn a new game from scratch in roughly the same time it takes a skilled human — perhaps tens of hours — without relying on prior exposure or massive simulation.
That benchmark is far beyond current capabilities. Today’s reinforcement learning systems require far more data, while language models lack the mechanisms to accumulate and refine knowledge over extended interaction. Bridging this gap would likely demand entirely new architectures and learning paradigms.
The implications extend well beyond gaming. The ability to adapt to unfamiliar situations is central to the idea of artificial general intelligence (AGI). If an AI cannot handle a novel video game — a controlled, simplified environment — it is unlikely to cope with the unpredictability of the real world.
The paper offers a different perspective on one area where AI does excel: computer programming. Coding, the authors argue, can be viewed as a kind of “game” with clear rules, well-defined goals and immediate feedback through debugging and testing. Modern AI systems have effectively mastered this particular game because they have been trained extensively on its structure and data.
But outside such well-structured domains, their abilities remain limited.
Ultimately, the researchers propose that games should remain central to AI evaluation. Not as isolated challenges but as a vast, evolving ecosystem of tests for adaptability and creativity. A truly intelligent system would not only learn to play new games efficiently but might even invent compelling ones of its own.
NYU Tandon Supports MTA in Combating Climate Threats
As transit agencies face growing climate risks and limited capital budgets, deciding which flood protection measures to implement — and where — has become a critical challenge.
Now, a research team at NYU Tandon School of Engineering has built a computer modeling framework that allows agencies to rapidly test and prioritize hundreds of subway resilience strategies for coastal storm surge flooding before committing to major infrastructure investments.
Developed in collaboration with researchers at Columbia University and Princeton University, the model enables the New York Metropolitan Transportation Authority (MTA) to simulate coastal storm surge flooding scenarios under different climate projections and evaluate which combinations of coastal barriers and station-level protections will provide the greatest return on investment.
The physics-based approach, published in Transportation Research Part D: Transport and Environment, calculates flooding extent and economic losses for each scenario in about one minute on a standard laptop. This speed makes comprehensive resilience planning practical for the MTA, the state agency that oversees NYC's public transportation system.
The research team validated its simulation by accurately reproducing Superstorm Sandy's 2012 flooding patterns. That storm inundated 150 subway stations across New York City, causing $5 billion in repair costs to stations, tunnels, and electrical systems, plus additional economic losses from extended service disruptions.
Since that event, the MTA has invested $7.6 billion in repairs and nearly 4,000 coastal surge protections, including elevating critical infrastructure, securing entrances at underground subway stations, and installing marine doors at the Hugh L. Carey and Queens Midtown tunnels.
“Protecting our infrastructure and the New Yorkers that rely on it from the impacts of climate change is one of the MTA's top priorities," said Eric Wilson, Senior Vice President of Climate & Land Use at the MTA. "Innovative tools like this give us a powerful, data-driven way to evaluate resilience investments before we build them, helping ensure every dollar we spend strengthens the system and safeguards service for millions of daily riders."
"As extreme storms become more frequent and sea level rises, transit agencies need reliable tools to determine how protective measures will actually perform in these changing circumstances before committing billions in infrastructure investments," said Yuki Miura, the study's lead author and assistant professor at NYU Tandon, where she is a faculty member in the newly established NYU Urban Institute. “Our model lets agencies rapidly compare hundreds of strategies under different future conditions. That makes it possible to identify solutions that are not only cost-effective, but also robust to uncertainty.”
Working with MTA and NYC government officials, the research team leveraged the model's speed to rapidly test numerous flooding scenarios for Lower Manhattan's subway system (below 34th Street). The study presents 13 representative stress tests through the end of the century, each combining Superstorm Sandy-level storm surges with projected sea level rise and various protective strategies.
The modeling shows that layered strategies — combining coastal barriers with targeted protection at key subway openings — can substantially reduce flood risk in a cost-effective and system-wide manner. Raising Lower Manhattan's entire coastline by two meters (about 6.5 feet) could prevent subway flooding even with mid-century sea level rise.
A hybrid approach — completing the East Side Coastal Resiliency seawall paired with sealing the 1,500 most critical of the subway's 3,500 openings (entrances, vents, stairways, and other entry points) — would cost about the same as sealing all 3,500 openings, but could also protect neighborhood streets, buildings, and infrastructure from coastal flooding, not only the subway itself. For the MTA, its 4,000 coastal surge protections are a critical first line of defense, and the East Side Coastal Resiliency seawall is a secondary protection.
The analysis reveals a counterintuitive finding: flood risk does not scale linearly. Instead, localized vulnerabilities at critical junctions can trigger cascading failures throughout the system, meaning strategic investments at a handful of key locations can be far more effective than broadly distributed protections.
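That nonlinearity is easy to reproduce in a toy network model. The sketch below is plain Python in which the station layout and sealing choices are invented for illustration, not drawn from the study's physics-based simulator: stations are nodes, tunnels are edges, and surge water spreads by breadth-first search from an unsealed entry point. One seal at the critical junction protects more of the network than two seals at outlying stations.

```python
from collections import deque

# Toy tunnel network (hypothetical layout): station C is the critical junction
tunnels = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D", "E"],
    "D": ["C"],
    "E": ["C", "F"],
    "F": ["E"],
}

def flooded(entry_points, sealed):
    """Return the set of stations water reaches, spreading through tunnels
    from unsealed entry points; sealed stations block the flow."""
    frontier = deque(s for s in entry_points if s not in sealed)
    reached = set(frontier)
    while frontier:
        for neighbor in tunnels[frontier.popleft()]:
            if neighbor not in reached and neighbor not in sealed:
                reached.add(neighbor)
                frontier.append(neighbor)
    return reached

surge = {"A"}  # storm surge enters at the coastal station
print(len(flooded(surge, sealed=set())))       # unprotected: all 6 stations flood
print(len(flooded(surge, sealed={"D", "F"})))  # two distal seals: 4 still flood
print(len(flooded(surge, sealed={"C"})))       # one junction seal: only 2 flood
```

The toy network captures only the cascade logic; the study's model additionally tracks water volumes, flood depths, and economic losses at each location.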
In addition to calculating flood depths both above and below ground, the study quantifies economic impacts from subway inoperability. The researchers estimate a Superstorm Sandy-level storm today would cause $5.5 billion in economic losses to Manhattan from transit disruptions alone — separate from repair costs — given that 40-60% of New Yorkers depend on public transportation for daily commutes. This estimate does not account for the coastal surge protections the MTA has since implemented, which would be deployed during such a storm and would reduce the economic losses from transit disruptions.
The research team developed the model in coordination with the New York City Transit Authority, a division of the MTA, which provided detailed system specifications — including tunnel dimensions, station volumes, and opening locations — while respecting security considerations.
"We're grateful for the productive collaboration with the MTA," said Miura, who has faculty appointments in both Tandon's Center for Urban Science + Progress and its Mechanical and Aerospace Engineering Department. "Their engagement has been essential in developing a tool that supports evidence-based decision-making for infrastructure investments."
Miura points out that ongoing work is exploring how this framework can be integrated into long-term capital planning and adapted for other infrastructure systems facing climate risk. While this study focused on New York City, the methodology can be adapted to other coastal cities with underground transit infrastructure.
The research was supported by the National Science Foundation. The study's senior author is George Deodatis of Columbia University. Co-authors are Christine Y. Blackshaw of Princeton University, Michelle S. Zhang of Columbia University, and Kyle T. Mandli of the Flatiron Institute.
Yuki Miura, Christine Y. Blackshaw, Michelle S. Zhang, Kyle T. Mandli, and George Deodatis, “Coastal storm-induced flooding risk of the New York City subway amid climate change,” Transportation Research Part D: Transport and Environment, Volume 149, 2025, 104974, ISSN 1361-9209, https://doi.org/10.1016/j.trd.2025.104974.
Bacteria Have a Secret Engineering Trick to Keep Themselves in Shape
Blow up a long balloon and two things happen: it gets longer and it gets wider.
Now imagine a living cell that inflates itself under enormous pressure and yet only grows longer, never adding width. That is exactly what rod-shaped bacteria do, every time they divide, with a precision that has baffled scientists for decades.
A new study published in Current Biology has finally found the answer. Researchers suggest their discovery could point toward new treatments for antibiotic-resistant bacteria.
Rod-shaped bacteria like Bacillus subtilis — a harmless soil microbe and one of biology's most studied model organisms — are encased in a rigid shell called the cell wall, made of a polymer called peptidoglycan, and pressurized from within at many times the pressure of a car tire.
To grow, bacteria must continuously remodel this wall: snipping out old material with enzymes and weaving in new polymer. This should cause the cell to bulge outward as well as elongate. Yet rod-shaped bacteria hold their width to within 40 nanometers — roughly 1,750 times thinner than a human hair.
"Most antibiotics that target the bacterial cell wall disrupt its structure and architecture," said Paola Bardetti, the study's lead author and an industry assistant professor of chemical and biomolecular engineering at NYU Tandon School of Engineering. "Our work reveals an entirely different vulnerability: the physical mechanism bacteria rely on to maintain their shape. No drug has ever targeted that. Until now, we didn't understand it well enough to try."
The NYU team, led by Bardetti and the paper’s senior author Enrique Rojas, an associate professor of biology at NYU, subjected living bacteria to rapid osmotic shocks — briefly raising or lowering internal pressure — while tracking wall deformations as small as a few nanometers.
What they found was a sharp mechanical threshold. Below a critical pressure, the wall behaves like a finger-trap toy: reducing pressure makes it expand sideways. Above that pressure, the wall softens and the cell widens. At the transition, width stays constant, and that is precisely where growing bacteria sit.
“The cell wall is a smart material,” said Bardetti. “It responds to mechanical stress in a way that is tuned to keep the cell the right shape. Every time we probed it, it surprised us.”
This tipping-point strategy also confers automatic self-correction. When cells were manipulated to grow wider than normal, the wall slipped into the finger-trap regime, thinning the cell back toward its target width. The critical pressure of the transition also shifted in response to changes in wall architecture — a second feedback loop — making this a homeostatic system encoded in the physical properties of the material itself.
In scientific terms: the wall is anisotropic, far stiffer circumferentially than longitudinally, with a Poisson ratio of 0.45–0.5 and anisotropy at the physical maximum. The stress-softening non-linearity — an abrupt drop in circumferential stiffness at the critical pressure — parks the cell at the boundary between widening and thinning.
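The role of circumferential stiffness can be grounded with a standard textbook relation (not from the paper itself): for a thin-walled pressurized cylinder, the circumferential (hoop) stress is twice the axial stress, so an internally pressurized rod tends to widen before it lengthens unless the wall resists hoop stretching far more than axial stretching.

```latex
% Thin-walled pressurized cylinder (radius $r$, wall thickness $t$, pressure $p$):
\[
  \sigma_\theta = \frac{pr}{t}, \qquad \sigma_z = \frac{pr}{2t}.
\]
% Hoop stress is twice axial stress, so maintaining constant width demands
% a wall that is much stiffer circumferentially than longitudinally.
```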
The same phenomenon appeared in Arabidopsis thaliana plant roots, suggesting a shape-control strategy evolution has arrived at independently.
"Finding the same strategy in bacteria and plant roots was genuinely exciting," said Bardetti. "It suggests a fundamental principle of tubular morphogenesis that nature has independently discovered more than once. The next step is identifying the molecular machinery that sets the critical pressure, because once you know that, you have a potential drug target.”
In addition to Bardetti and Rojas, the paper’s authors are Felix Barber (currently Assistant Professor at Ohio State University) and Dylan Fitzmaurice, postdoctoral researcher and PhD candidate respectively in the Rojas Lab at the time of the study. Research funding came from the National Institutes of Health and the National Science Foundation. Microscopy support was provided by the NYU Langone Health Microscopy Lab, partially funded by the National Cancer Institute.
Paola Bardetti et al., “Non-linear stress-softening of peptidoglycan mediates bacterial cell shape homeostasis,” Current Biology, Volume 36, Issue 5, pp. 1156–1165.e5.
New Method of Data Center Cooling Could Dramatically Decrease Electricity Use
Data centers — the warehouse-sized buildings that store our photos, stream our movies and train artificial intelligence — are voracious consumers of electricity. A surprisingly large share of that power never reaches a microchip. Instead, it is spent on cooling, hauling away the heat generated by millions of tightly packed servers.
As data centers proliferate thanks to the AI boom, their electricity needs are colliding with a grid already under strain. One response is to rethink the basics of cooling. In a new study, researchers at NYU Tandon explore an unconventional approach: using waste heat from nearby factories to cool data centers, by storing that heat in a material that can later deliver cooling on demand.
“While the electricity needs for data centers are still a small slice of the total U.S. electricity market, it is rapidly growing,” says Dharik Mallapragada, Assistant Professor of Chemical and Biomolecular Engineering and lead author of the paper. “This is an opportunity to ‘bend the curve’ and aim for a much more sustainable future, in a way that is beneficial to everyone involved.”
Thermal batteries
At the heart of the concept are minerals called zeolites. Zeolites are crystalline materials riddled with microscopic pores, giving them a remarkable ability to soak up water vapor. When a dry zeolite encounters water vapor, it adsorbs the vapor and releases heat. When the zeolite is heated to sufficiently high temperatures, it releases the water again, resetting the cycle.
Importantly, zeolites are inexpensive materials already in use for a wide range of applications, including water treatment and oil refining. “Zeolite and its interaction with water can be used for storing thermal energy,” says Assistant Professor Pavel Kots, a co-author on the study and an expert in zeolite synthesis and characterization. At an industrial facility — such as a chemical plant or refinery — low- to medium-temperature waste heat (below about 200 degrees Celsius) is used to “charge” the thermal battery by drying the zeolite. The water driven off is condensed and recovered. The charged zeolite is then transported, by truck or rail, to a data center.
Once on site, the process runs in reverse. Warm air or other coolants (e.g., water) from the server room help evaporate water, producing a cooling effect. The water vapor is adsorbed by the dried zeolite, which effectively acts as a heat sink. Crucially, this adsorption process can replace the electricity-hungry compression chillers that dominate today’s data center cooling systems.
Unlike typical heat storage methods, zeolite-based storage does not slowly lose its energy over time. The thermal energy remains locked in the material until the water is reintroduced. That makes it suitable not only for long-duration storage but also for transport over tens of kilometers.
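The charge-and-discharge cycle above can be bounded with a back-of-envelope estimate. The water uptake and latent-heat figures in the sketch below are generic textbook assumptions for illustration; they are not values reported in the study.

```python
# Back-of-envelope estimate of cooling delivered per kilogram of dry zeolite.
# Both figures below are generic assumptions, not numbers from the study.

WATER_UPTAKE_KG_PER_KG = 0.2     # assumed water adsorbed per kg of zeolite
LATENT_HEAT_KJ_PER_KG = 2450.0   # latent heat of vaporization of water (~20 C)

def cooling_per_kg_zeolite(uptake=WATER_UPTAKE_KG_PER_KG,
                           latent_heat=LATENT_HEAT_KJ_PER_KG):
    """Cooling (kJ) from evaporating the water one kg of zeolite can adsorb."""
    return uptake * latent_heat

kj_per_kg = cooling_per_kg_zeolite()        # 490 kJ per kg of zeolite
kwh_per_tonne = kj_per_kg * 1000 / 3600.0   # roughly 136 kWh of cooling per tonne
print(f"{kj_per_kg:.0f} kJ/kg, {kwh_per_tonne:.0f} kWh per tonne")
```

Under these assumptions, a single truckload of charged zeolite carries a few megawatt-hours of cooling capacity, which is why transport over tens of kilometers can still pay off.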
Big energy savings, modest trade-offs
Using detailed thermodynamic modeling, the NYU team, which included Kots, Mallapragada, and postdoctoral researcher Gilvan Farias Neto, compared their proposed system with a conventional setup: a data center cooled by a compression chiller and an industrial facility rejecting waste heat through cooling towers.
The results are striking. Across a range of operating conditions, the team estimated that the proposed approach can cut the combined electricity used for cooling by the data center and the industrial facility by more than 75 percent. For the data center alone, electricity consumption for cooling can be reduced by as much as 86 percent. In energy efficiency terms, this translates into a 12 percent improvement in power usage effectiveness (PUE), a key metric in the data center industry.
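The relationship between the cooling savings and the PUE improvement is simple arithmetic. In the sketch below, the split of facility power between IT, cooling, and other overhead is an illustrative assumption chosen to make the numbers concrete; it is not taken from the paper.

```python
# Sanity check of a power-usage-effectiveness (PUE) improvement.
# The load split below is an illustrative assumption, not the paper's data.

def pue(it_load, cooling_load, other_load):
    """PUE = total facility power / IT power."""
    return (it_load + cooling_load + other_load) / it_load

IT, COOLING, OTHER = 1.0, 0.18, 0.10          # assumed relative loads
before = pue(IT, COOLING, OTHER)              # baseline PUE
after = pue(IT, COOLING * (1 - 0.86), OTHER)  # cooling electricity cut by 86%
improvement = (before - after) / before
print(f"PUE {before:.2f} -> {after:.2f} ({improvement:.0%} better)")
```

With this assumed load split, an 86 percent cut in cooling electricity yields a relative PUE improvement of about 12 percent, consistent with the figure reported above.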
Water use tells a more nuanced story. The combined system consumes somewhat more water overall — roughly 15 to 25 percent more — because evaporation is central to the cooling process. But this increase masks an important detail: the industrial facility itself sees a dramatic reduction in water use, since much of its waste heat is diverted into charging the thermal batteries rather than being dumped through cooling towers. Water released during zeolite charging can also be reused on site, partially closing the loop. The analysis did not consider the changes in indirect water use, i.e. associated with electricity generation, for the facility, which could partially or fully offset increases in direct water use, depending on the make-up of the electricity supply.
For this setup to work, data centers need to be fairly close to industrial facilities. To assess the scalability of their approach, the researchers conducted a geospatial analysis of U.S. facilities. The median distance between data centers and the 10 nearest industrial sites turned out to be just 57 kilometers.
Even after accounting for the energy needed to haul tons of zeolite back and forth — assuming modern electric trucks — the system still delivers net electricity savings in many scenarios, sometimes exceeding 40 percent. Rail transport could reduce the energy penalty further.
The proposed system is still at the modeling stage, and many engineering challenges remain. Zeolite beds must be designed for durability, rapid heat transfer and repeated cycling. Coordinating operations between data centers and industrial partners will require new business models. The research team has begun speaking with several industry leaders about the possibility of scaling this solution up.
Still, the idea highlights an under-appreciated truth: in an energy-hungry digital economy, waste heat can be monetized as a valuable resource. By reimagining cooling as a problem of thermal logistics rather than electrical demand, zeolite-based thermal batteries could help data centers grow without overheating the grid.
Gilvan Farias Neto, Pavel Kots, and Dharik Mallapragada (2026), “Zeolite Based Thermal Energy Storage to Leverage Industrial Waste Heat for Data Center Cooling,” ChemRxiv, https://doi.org/10.26434/chemrxiv-2026-28wv2.
Tracking Wildlife Trafficking in the Age of Online Marketplaces
Wildlife trafficking is one of the world’s most widespread illegal trades, contributing to biodiversity loss, organized crime, and public health risks. Once concentrated in physical markets, much of this activity has moved online. Today, animals and animal products are advertised on large e-commerce platforms alongside ordinary consumer goods. This shift makes enforcement harder — but it also creates a valuable source of data.
Every online advertisement leaves behind digital information: text descriptions, prices, images, seller details, and timestamps. If collected and analyzed at scale, these traces can help researchers understand how wildlife trafficking operates online. The problem is volume. Online marketplaces contain millions of listings, and most searches for animal names return irrelevant results such as toys, artwork, or souvenirs. Distinguishing illegal wildlife ads from harmless products is difficult to do manually and challenging to automate.
Institute Professor of Computer Science Juliana Freire is part of a team taking on the problem head-on, building a scalable system to address this challenge. They developed a flexible data collection pipeline that automatically gathers wildlife-related advertisements from the web and filters them using modern machine learning techniques. The goal is not to focus on one species or one website, but to enable broad, systematic monitoring across many platforms, regions, and languages — and to develop strategies to disrupt illegal markets.
The team is multi-disciplinary, and includes Gohar Petrossian, Professor of Criminal Justice at John Jay College of Criminal Justice; Jennifer Jacquet, Professor of Environmental Science and Policy at the University of Miami; and Sunandan Chakraborty, Professor of Data Science at Indiana University.
The pipeline begins with web crawling. The researchers generate tens of thousands of search URLs by combining endangered species names with the search structures of major e-commerce websites. A specialized crawler then follows these links, downloading product pages while limiting requests to avoid overwhelming servers. Over just 34 days, the system retrieved more than 11 million ads.
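The first stage can be sketched as a simple cross-product of species names and per-site search templates. The URL patterns and species list below are hypothetical placeholders, not the sites or query set the team actually used.

```python
# Sketch of search-URL generation: cross species names with per-site
# search templates. The templates and species here are placeholders.
from itertools import product
from urllib.parse import quote_plus

SPECIES = ["hawksbill turtle", "pangolin", "saltwater crocodile"]
TEMPLATES = [
    "https://marketplace-a.example/search?q={query}",
    "https://marketplace-b.example/s/{query}?sort=newest",
]

def generate_search_urls(species, templates):
    """Yield one search URL per (species, template) pair."""
    for name, template in product(species, templates):
        yield template.format(query=quote_plus(name))

urls = list(generate_search_urls(SPECIES, TEMPLATES))
print(len(urls), "URLs, e.g.", urls[0])
```

A real crawler would feed these URLs into a fetch queue with per-site rate limiting, as the article notes, to avoid overwhelming servers.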
Next comes information extraction. Product pages are messy and inconsistent, varying widely across websites. The pipeline uses a combination of HTML parsing tools and automated scrapers to extract useful details such as titles, descriptions, prices, images, and seller information. These data are stored in structured formats that allow large-scale analysis.
The most critical step is filtering. While machine learning classifiers can be used for this filtering, training specialized classifiers for multiple collection tasks is both time-consuming and expensive, requiring experts to create training data for each task. Freire’s group developed a new approach that leverages large language models (LLMs) to label data and uses those labels to automatically create specialized classifiers, which can perform data triage cheaply and at scale.
The result is essentially a "model factory": a pipeline that can automatically produce customized, low-cost classifiers on demand for different triage tasks — different species, different product types, different platforms — without requiring experts to label data from scratch each time.
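The model-factory pattern can be sketched in a few lines: an expensive labeler (an LLM, stubbed out here as a keyword rule) labels a small seed set, and that seed set trains a cheap bag-of-words classifier that then triages listings in bulk. Everything below — the stub labeler, the toy Naive Bayes model, the example ads — is an illustrative assumption, not the team's actual code.

```python
# Minimal sketch of the "model factory" idea: an LLM (stubbed here as a
# keyword rule) labels a seed set, which trains a tiny bag-of-words
# Naive Bayes classifier for cheap triage. Illustrative only.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def llm_label_stub(ad):
    """Stand-in for an LLM labeler: flags ads mentioning real animal skin."""
    return "wildlife" if "genuine" in ad and "skin" in ad else "irrelevant"

def train_nb(ads, labels):
    """Collect per-class word counts and class priors."""
    counts = {lab: Counter() for lab in set(labels)}
    priors = Counter(labels)
    for ad, lab in zip(ads, labels):
        counts[lab].update(tokenize(ad))
    return counts, priors

def classify(ad, counts, priors):
    """Pick the class with the highest Laplace-smoothed log-likelihood."""
    scores = {}
    for lab, cnt in counts.items():
        denom = sum(cnt.values()) + len(cnt) + 1
        score = math.log(priors[lab])
        for tok in tokenize(ad):
            score += math.log((cnt[tok] + 1) / denom)
        scores[lab] = score
    return max(scores, key=scores.get)

seed_ads = [
    "genuine crocodile skin handbag",
    "genuine python skin wallet",
    "plush crocodile toy for kids",
    "crocodile print canvas poster",
]
seed_labels = [llm_label_stub(ad) for ad in seed_ads]  # LLM labels the seed set
model = train_nb(seed_ads, seed_labels)                # cheap classifier trained
print(classify("genuine alligator skin belt", *model))
```

The point of the pattern is the cost asymmetry: the LLM is invoked only on the small seed set, while the derived classifier handles the millions of listings.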
This research has enabled large-scale data collection that answers distinct scientific questions and sheds light on different aspects of wildlife trafficking. One analysis of 14,000 reptile leather product listings on eBay showed that crocodile, alligator, and python skins dominated the market. Only about 10 animal-product combinations (such as ‘crocodile bags’, ‘alligator bags’, and ‘alligator watches’) made up about 72 percent of all listings, indicating that the trade focuses heavily on a few luxury items. The analysis of all of the listings also showed that while small leather products were shipped from 65 countries, 93 percent came from just 10, with the United States, the United Kingdom, and Australia collectively accounting for over three-quarters of this market.
Similar data from eBay on shark and ray trophies reveal that, although the platform has introduced policies restricting threatened or endangered species, their derivatives still circulate widely on the platform. Tiger shark trophies accounted for one-fifth of such listings, with asking prices up to $3,000. Over 85 percent of listings were linked to sellers in the United States, suggesting a pipeline from deep-sea commercial fishing vessels to the US trophy trade.
This research is also being used to determine the most effective way to disrupt this market. For example, the researchers found that targeting key sellers is effective, but targeting key product types — “alligator watch,” for example — disrupts the reptile leather market just as effectively and is much easier to enact at broad scale.
The authors emphasize that this system is a starting point, not a finished solution. The pipeline is designed to be extensible, allowing future researchers to incorporate better classifiers, image-based analysis, or new data sources. By making the code openly available, they aim to support broader collaboration.
As wildlife trade continues to move online, understanding its digital footprint will be increasingly important. Scalable data collection tools like this one offer a way to transform scattered online listings into actionable knowledge, an essential step toward disrupting illegal wildlife trade in the digital era.
Juliana Silva Barbosa, Ulhas Gondhali, Gohar Petrossian, Kinshuk Sharma, Sunandan Chakraborty, Jennifer Jacquet, and Juliana Freire. 2025. A Cost-Effective LLM-based Approach to Identify Wildlife Trafficking in Online Marketplaces. Proc. ACM Manag. Data 3, 3, Article 119 (June 2025), 23 pages. https://doi.org/10.1145/3725256