Image credit: FluidicSwarm: Embodiment of Swarm Robots Using Fluid Behavior Imitation
A groundbreaking SIGGRAPH 2025 Emerging Technologies project, “FluidicSwarm”, turns complex robot swarms into fluid, body-like extensions of their operator. In this Q&A, the creator dives into the tech and ideas behind this intuitive, gesture-based control system — where movement flows, shapes shift, and collaboration evolves.
SIGGRAPH: What was the main motivation behind creating “FluidicSwarm”, and what sets its approach to swarm control apart from others?
Michikuni Eguchi (ME): Our primary goal was to enable people to control large swarm robots efficiently in diverse environments. Swarm robots can perform complex tasks because they are made up of many robots, but that large number of robots also makes them difficult to control efficiently, especially in cluttered spaces.
To address this, we conceptualize the swarm as a fluid-like extension of the operator’s body. This embodiment lets the operator intuitively modulate the swarm’s rigidity and shape through simple gestures — for example, stiffening the swarm to carry an object or dispersing it to flow through obstacles. Ultimately, this allows an operator to control the swarm efficiently, as if it were their own body, one that can transform at will.
SIGGRAPH: How did you develop “FluidicSwarm”? What approach did you take to create this technology?
ME: To realize fluid-like behavior, we employ Smoothed Particle Hydrodynamics (SPH), a particle-based fluid simulation method. SPH represents a fluid as an ensemble of mutually interacting particles, and our controller treats each robot as one such particle. By having the robots follow SPH’s physical rules, the swarm imitates the smooth, flexible movements of a fluid.
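To make the idea concrete, here is a minimal sketch of an SPH-style update in which each robot is treated as one particle and its velocity command comes from pressure and viscosity forces. The project itself uses SPH, but the kernel forms, constants, and 2D setup below are illustrative assumptions rather than details taken from the work.

```python
# Minimal SPH sketch (not the authors' implementation): each robot is one
# particle; its velocity command follows pressure + viscosity forces.
import numpy as np

H = 0.5            # smoothing radius (assumed)
REST_DENSITY = 10.0
STIFFNESS = 2.0    # pressure constant (assumed)
VISCOSITY = 1.0    # operator-adjustable: high = rigid swarm, low = soft swarm
DT = 0.05

def poly6(r, h=H):
    """Poly6 smoothing kernel (2D normalization), a common SPH choice."""
    w = np.zeros_like(r)
    mask = r < h
    w[mask] = (4.0 / (np.pi * h**8)) * (h**2 - r[mask]**2) ** 3
    return w

def sph_step(pos, vel):
    """One SPH update: returns new velocity commands for every robot (unit mass assumed)."""
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]          # pairwise displacements
    dist = np.linalg.norm(diff, axis=-1) + 1e-9

    # 1) density at each particle from the smoothing kernel
    density = poly6(dist).sum(axis=1)
    # 2) pressure from a simple equation of state
    pressure = STIFFNESS * (density - REST_DENSITY)

    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j or dist[i, j] >= H:
                continue
            dir_ij = diff[i, j] / dist[i, j]          # points from j toward i
            # spiky-kernel gradient magnitude (2D); pushes crowded particles apart
            grad = -(30.0 / (np.pi * H**5)) * (H - dist[i, j]) ** 2
            acc[i] -= (pressure[i] + pressure[j]) / (2.0 * density[j]) * grad * dir_ij
            # viscosity pulls neighbouring velocities together, so a high
            # VISCOSITY makes the swarm move more like a single rigid body
            lap = (20.0 / (np.pi * H**5)) * (H - dist[i, j])
            acc[i] += VISCOSITY * (vel[j] - vel[i]) / density[j] * lap
    return vel + DT * acc
```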
SIGGRAPH: How does “FluidicSwarm” allow users to intuitively control and shape the movement of the robot swarm?
ME: Users can generate a variety of swarm behaviors with simple hand movements tracked by a camera, defining two key elements: the properties and shape of the fluid the swarm imitates. For example, just as we close our hand when applying force and open it to relax, users control the fluid’s viscosity based on how open or closed their hand is. This allows the swarm to naturally switch its flexibility between rigid and soft states.
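As a rough illustration of that mapping, the sketch below converts a tracked hand-openness value into a viscosity parameter for the imitated fluid. The specific curve and value range are assumptions; the project only states that how open or closed the hand is drives the fluid’s viscosity.

```python
def openness_to_viscosity(openness, v_soft=0.1, v_rigid=5.0):
    """Map hand openness in [0, 1] (0 = closed fist, 1 = fully open) to viscosity.

    A closed hand (as when gripping to apply force) yields a high viscosity,
    making the swarm behave rigidly; an open, relaxed hand yields a low
    viscosity, letting the swarm flow softly. The linear mapping and the
    v_soft / v_rigid values are illustrative assumptions.
    """
    openness = min(max(openness, 0.0), 1.0)
    return v_rigid + (v_soft - v_rigid) * openness

# A tracked fist gives the rigid value, an open palm the soft one.
assert openness_to_viscosity(0.0) == 5.0
assert abs(openness_to_viscosity(1.0) - 0.1) < 1e-9
```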
Also, operators can create various swarm formations suited for different tasks with the shape of their hands. Our controller automatically controls the robots to evenly fill the given area, allowing users to create flexible formations without worrying about individual robot placement.
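The project does not spell out how the controller spreads robots evenly over the hand-drawn region, so the sketch below uses Lloyd-style relaxation, one standard way to achieve uniform coverage, as a stand-in. The binary-mask input format and the relaxation step are assumptions for illustration only.

```python
import numpy as np

def fill_shape(mask, n_robots, iters=20, rng=np.random.default_rng(0)):
    """Return n_robots target positions spread evenly over the True cells of mask."""
    cells = np.argwhere(mask)                          # (row, col) of filled cells
    pts = cells[rng.choice(len(cells), n_robots)].astype(float)
    for _ in range(iters):
        # assign every filled cell to its nearest target point...
        d = np.linalg.norm(cells[:, None, :] - pts[None, :, :], axis=-1)
        owner = d.argmin(axis=1)
        # ...and move each target to the centroid of its cells (Lloyd step)
        for k in range(n_robots):
            mine = cells[owner == k]
            if len(mine):
                pts[k] = mine.mean(axis=0)
    return pts

# Example: a crude "C" shape drawn on a 40x40 grid, filled by 12 robots.
mask = np.zeros((40, 40), bool)
mask[5:35, 5:15] = True
mask[5:12, 5:30] = True
mask[28:35, 5:30] = True
targets = fill_shape(mask, 12)
```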
SIGGRAPH: In what ways does “FluidicSwarm” improve performance in tasks like obstacle avoidance and object transportation compared to traditional swarm robot control systems?
ME: “FluidicSwarm’s” primary advantage lies in its unified control framework, which can elicit a wide range of collective behaviors without the need for task-specific algorithms. Operators simply adjust the swarm’s fluid-like properties through hand gestures: designating it as a soft fluid enables the robots to flow naturally around obstacles, whereas treating it as a rigid fluid allows them to operate as a cohesive body to transport objects. Moreover, drawing inspiration from the dynamics of real fluids promises to unlock an even broader repertoire of behaviors. Consequently, adapting the swarm to new tasks becomes intuitive, greatly improving operational efficiency.
SIGGRAPH: What are some potential future applications for “FluidicSwarm”?
ME: “FluidicSwarm” broadens the scope of human-robot collaboration, offering extensive practical applications. In teleoperation and other forms of remote work, operators can safely execute physical tasks in hazardous settings, such as disaster sites, by manipulating the swarm from afar. The system also supports sharing the swarm among multiple people, so even more complex tasks could be handled by having each person collaborate with a swarm that has different fluid characteristics.
Experience innovations like “FluidicSwarm”, and meet the minds behind them, live at SIGGRAPH 2025. Don’t miss your chance to explore the future of human-robot interaction and beyond at SIGGRAPH 2025 in Vancouver.

Michikuni Eguchi received the B.E. and M.I. degrees from the University of Tsukuba, Japan in 2023 and 2025, respectively. Currently, he is a Ph.D. student at the University of Tsukuba, Japan. His research interests include robotics, motion planning, and haptics.