FrontierNews.ai

Figure AI's Robots Learn to Dress Up: Why Robot Fashion Is Becoming a Serious Business

Figure AI's humanoid robots are shedding their mechanical appearance and putting on clothes, marking a turning point in how robots will integrate into human spaces. The company recently demonstrated its F.03 robots performing complex household tasks like making beds and hanging clothes, but the real story isn't just what the robots can do; it's how they look while doing it. Brett Adcock, Figure's founder, has prominently showcased different clothing combinations for the robots on social media, signaling that appearance is now part of the product strategy.

For years, exposed joints, cables, and metal skeletons were badges of honor in robotics, representing precision and technological prowess. But as robots move from laboratories and factory floors into shopping malls, healthcare facilities, homes, and public spaces, the calculus has shifted entirely. When people encounter a robot for the first time, their reaction depends less on its technical specs and more on whether it feels approachable, safe, and like something that belongs in their environment.

Why Are Robots Starting to Wear Clothes?

The transition from bare metal to dressed-up robots reflects a fundamental change in how these machines are being positioned in society. When a product moves from being a technological novelty to a consumer good, it almost always develops what experts call a "shell economy." Think of smartphone cases, car wraps, or customizable toy accessories: once a device becomes part of daily life, the accessories market explodes.

For robots entering human-centered environments, clothing serves multiple purposes beyond aesthetics. Safety is paramount. When users come into close contact with a robot, whether they touch a hard shell or encounter soft, protective coverings makes a real difference in the human-robot experience. Companies like 1X and Fourier have begun emphasizing soft covering materials and gentle surfaces as essential features for home scenarios, treating the outer layer not as decoration but as necessary preparation for daily environments.

Figure's robots represent the most typical category: machines designed for households, healthcare facilities, reception areas, and shopping malls. These scenarios share one critical characteristic: people form a perceptual relationship with the robot before it starts working. Whether the robot feels cold and industrial or soft and approachable determines whether subsequent interactions will be positive.

How Are Companies Using Robot Fashion as a Business Strategy?

  • Consumer and Service Scenarios: Robots designed for homes and healthcare facilities use clothing to address psychological acceptance and visual expression, making them feel less intrusive and more trustworthy in intimate spaces.
  • Performance and Rental Markets: Robots used for brand events, shopping mall activities, and exhibitions become moving advertising platforms where appearance directly impacts commercial value and audience engagement.
  • Industrial and Special Operations: In factories and hazardous environments, outer coverings evolve into functional protective systems offering high-temperature resistance, corrosion protection, dust prevention, and potential future tactile sensing capabilities.

The performance and rental market reveals how quickly robot fashion is becoming a standalone business. According to product operation staff at Qingtianzu, a leading robot rental manufacturer, customers consistently prefer robots with clothes and character. When a fortune-god robot wears corresponding clothing, its sense of character and interactivity strengthens, making it more likely to attract audience attention and affection. The company reported receiving more than one hundred orders per month for robot "skin peripherals," with expectations for continued growth as more urban partners enter the market.

This shift reflects a deeper change in customer expectations. Rather than renting a generic "technological device," customers now seek "characters" with personality and visual customization. In the future, robots may become media spectacles where the machine characteristics fade into the background and the character takes center stage.

Figure's design team includes automotive and fashion designers, underscoring how seriously the company is taking the appearance question. This isn't a cosmetic afterthought but a core part of the product definition. When robots learn to wear clothes, they're simultaneously learning how to integrate into crowds, spaces, and society in ways that feel natural rather than intrusive.

What Technical Capabilities Are Figure's Robots Demonstrating?

Beyond the fashion angle, Figure AI recently published a video and detailed blog post showing two F.03 humanoid robots performing a fully autonomous bedroom reset, including making a bed, in under two minutes. The robots run an onboard Helix-02 Vision-Language-Action policy, which integrates visual input and motor control to produce coordinated locomotion and dexterous manipulation. Importantly, the robots coordinate purely through visual observation, with no shared planner, no message passing, and no central coordinator between them.

The demonstration included several complex capabilities:

  • Whole-Body Manipulation: The robots opened doors, pushed furniture using foot placement and posture, and managed balance while performing multiple tasks simultaneously.
  • Dexterous Hand Control: The robots hung clothing on narrow fixtures, reoriented objects in their hands, and handled books with bimanual coordination.
  • Collaborative Coordination: Two robots worked together to lift and smooth a comforter, inferring each other's intent visually rather than through explicit communication.
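The decentralized setup behind that last capability, where each robot acts only on what its own cameras see, with no message passing or shared planner, can be sketched as a simple control loop. This is purely illustrative: the `Robot`, `Observation`, and `toy_policy` names are hypothetical stand-ins, since Figure has not published the Helix-02 interface.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical stand-ins for the real types (not Figure's actual API).
Observation = List[float]   # e.g. flattened camera features
Action = List[float]        # e.g. joint position targets

@dataclass
class Robot:
    name: str
    policy: Callable[[Observation], Action]  # each robot runs its OWN policy

    def step(self, obs: Observation) -> Action:
        # No message passing, no central coordinator: the only "coordination
        # channel" is what this robot sees through its own cameras, which
        # happens to include the other robot's body and the shared comforter.
        return self.policy(obs)

def toy_policy(obs: Observation) -> Action:
    # Placeholder: a real vision-language-action policy would be a neural
    # network mapping pixels (plus a task prompt) to motor commands.
    return [x * 0.5 for x in obs]

robots = [Robot("left", toy_policy), Robot("right", toy_policy)]
for robot in robots:
    # Each robot perceives the scene independently and infers intent visually.
    action = robot.step([1.0, -2.0])
    print(robot.name, action)
```

The design point the sketch makes concrete: because neither `Robot` holds a reference to the other, coordination must emerge entirely from each policy's visual input.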

CEO Brett Adcock posted the video on social media with the claim that the robots are "better at it than most humans" when it comes to making beds. The behavior runs fully autonomously with no remote operation and executes entirely onboard the robots without external computing support.

However, industry observers note important caveats. The demonstration is a curated video without published benchmarks or robustness metrics. Real-world reliability, generalization across different room layouts, safety guarantees, and operational costs remain undocumented. Practitioners tracking the sector should watch for quantitative metrics including task success rates across randomized scenes, failure-mode breakdowns, latency and compute budgets for onboard inference, and sample-efficiency numbers.
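The metrics practitioners would want to see could be reported from a trial log along these lines. This is a minimal sketch with made-up records; the field names and numbers are assumptions for illustration, not published Figure data.

```python
from collections import Counter

# Hypothetical trial log: (scene_id, succeeded, failure_mode or None).
trials = [
    ("scene_a", True,  None),
    ("scene_a", False, "grasp_slip"),
    ("scene_b", True,  None),
    ("scene_b", True,  None),
    ("scene_c", False, "collision"),
    ("scene_c", False, "grasp_slip"),
]

# Task success rate across randomized scenes.
success_rate = sum(ok for _, ok, _ in trials) / len(trials)

# Failure-mode breakdown: which error types dominate.
failure_modes = Counter(mode for _, ok, mode in trials if not ok)

print(f"success rate: {success_rate:.0%}")   # prints "success rate: 50%"
print(f"failure modes: {dict(failure_modes)}")
```

Numbers in this shape, across many randomized room layouts, are what would turn a curated video into an auditable reliability claim.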

The integration of locomotion, whole-body balance, and dexterous manipulation into a single learned policy represents a significant engineering challenge. This approach increases the state and action dimensionality that the model must manage, typically demanding substantial training data, careful simulation-to-real transfer, or extensive real-world data collection.

As humanoid robotics moves from single-skill demonstrations toward longer, multi-step tasks in human environments, both the visual-motor integration and multi-agent coordination demonstrated by Figure illustrate two critical vectors of progress. The company's emphasis on robot fashion, combined with its technical demonstrations, suggests that the path to practical humanoid robots requires solving both the engineering problem and the human acceptance problem simultaneously.

" }